\section{Introduction} Suppose $E$ is a commutative $S$-algebra, in the sense of \cite{EKMM}, and $A$ is a commutative $E$-algebra. We want to capture the properties and underlying structure of the homotopy groups $\pi_* A = A_*$ of $A$ by studying operations associated to the cohomology theory that $E$ represents. An important family of cohomology operations, called {\em power operations}, is constructed via the extended powers. Specifically, consider the {\em $m$'th extended power} functor \[ {\mb P}_E^m (-) \coloneqq (-)^{\wedge_E m} / \Sigma_m \colon\thinspace {\rm Mod}_E \to {\rm Mod}_E \] on the category of $E$-modules, which sends an $E$-module to its $m$-fold smash product over $E$ modulo the action by the symmetric group on $m$ letters. The ${\mb P}_E^m (-)$'s assemble to give the {\em free commutative $E$-algebra} functor \[ {\mb P}_E (-) \coloneqq \bigvee_{m \geq 0} {\mb P}_E^m (-) \colon\thinspace {\rm Mod}_E \to {\rm Alg}_E \] from the category of $E$-modules to the category of commutative $E$-algebras. These functors descend to homotopy categories. In particular, each $\alpha \in \pi_{d+i}~{\mb P}_E^m (\Sigma^d E)$ gives rise to a power operation \[ Q_\alpha \colon\thinspace A_d \to A_{d+i} \] (cf.~\cite[Sections I.2 and IX.1]{H_infty} and \cite[Section 3]{cong}). Under the action of power operations, $A_*$ is an algebra over some operad in $E_*$-modules involving the structure of $E_* B\Sigma_m$ for all $m$. This operad is traditionally called a {\em Dyer-Lashof~ algebra}, or, more precisely, a Dyer-Lashof~ {\em theory}: the {\em algebraic theory} of power operations acting on the homotopy groups of commutative $E$-algebras (cf.~\cite[Chapters III, VIII, and IX]{H_infty} and \cite[Section 9]{lpo}). A specific case is when $E$ represents a Morava $E$-theory of height $n$ and $A$ is $K(n)$-local. Morava $E$-theory spectra play a crucial role in modern stable homotopy theory, particularly in the work of Ando, Hopkins, and Strickland on the topological approach to elliptic genera (see \cite{cube}). As recalled in \cite[1.5]{cong}, the $K(n)$-local $E$-Dyer-Lashof~ theory is largely understood based on work of those authors. In \cite{cong}, Rezk maps out the foundations of this theory. He gives a congruence criterion for an algebra over the Dyer-Lashof~ theory (\cite[Theorem A]{cong}). This enables one to study the Dyer-Lashof~ {\em theory}, which models all the algebraic structure naturally adhering to $A_*$, by working with a certain associative ring $\Gamma$ as the Dyer-Lashof~ {\em algebra}. Moreover, Rezk provides a geometric description of this congruence criterion, in terms of sheaves on the moduli problem of deformations of formal groups and Frobenius isogenies (see \cite[Theorem B]{cong}). This connects the structure of $\Gamma$ to the geometry underlying $E$, moving one step forward from the workable object $\Gamma$ to quantities that are explicitly computable. In a companion paper \cite{h2p2}, Rezk gives explicit calculations of the Dyer-Lashof~ theory for a specific Morava $E$-theory of height $n = 2$ at the prime 2. The purpose of this paper is to make available calculations analogous to some of the results in \cite{h2p2}, at the prime 3, together with calculations of the corresponding power operations on the $K(1)$-localization of the Morava $E$-theory spectrum. 
\subsection{Outline of the paper} As in \cite{h2p2}, the computation of power operations in this paper follows the approach of \cite{steenrod}: one first defines a total power operation, and then uses the computation of the cohomology of the classifying space $B\Sigma_m$ for the symmetric group to obtain individual power operations. These two steps are carried out in Sections \ref{sec:total} and \ref{sec:individual} respectively. In Section \ref{sec:total}, by doing calculations with elliptic curves associated to our Morava $E$-theory $E$, we give formulas for the total power operation $\psi^3$ on $E_0$ and the ring $S_3$ which represents the corresponding moduli problem. In Section \ref{sec:individual}, based on calculations of $E^* B\Sigma_m$ in \cite{Str98} as reflected in the formula for $S_3$, we define individual power operations, and derive the relations they satisfy. In view of the general structures studied in \cite{cong}, we then get an explicit description of the Dyer-Lashof~ algebra $\Gamma$ for $K(2)$-local commutative $E$-algebras. In Section \ref{sec:K(1)}, we describe the relationship between the total power operation $\psi^3$, at height 2, and the corresponding $K(1)$-local power operations. We then derive formulas for the latter from the calculations in Section \ref{sec:total}. \begin{rmk} \label{rmk:grading} In Section \ref{sec:total}, we do calculations with a universal elliptic curve over {\em all} of the moduli stack which is an affine open subscheme of a weighted projective space (cf.~Proposition \ref{prop:C}). At the prime 3, the supersingular locus consists of a single closed point, and the corresponding Morava $E$-theory arises {\em locally} in an affine coordinate chart of this weighted projective space containing the supersingular locus. In this paper we choose a particular affine coordinate chart for computing the homotopy groups of the $E$-theory spectrum and the power operations; we hope that the generality of the calculations in Section \ref{sec:total} makes it easier to work with other coordinate charts as well. \end{rmk} \begin{rmk} \label{rmk:parameter} The ring $S_3$ turns out to be an algebra with one generator over the base ring where our elliptic curve is defined (cf.~\isog{i} and \eqref{S_3}). This generator appears as a parameter in the formulas for the total power operation $\psi^3$, and is responsible for how the individual power operations are defined and how their formulas look. Different choices of this parameter result in different bases of the Dyer-Lashof~ algebra $\Gamma$. The parameter in this paper comes from the relative cotangent space of the elliptic curve at the identity (see \isog{iv}, Corollary \ref{cor:K'}, and Remark \ref{rmk:K'}). This choice is convenient for deriving Adem relations in \q{iv}, and it fits into the treatment of gradings in \cite[Section 2]{cong} (see \go{ii} and Theorem \ref{thm:gamma}). We should point out that our choice is by no means canonical. We do not know yet, as part of the structure of the Dyer-Lashof~ algebra, if there is a canonical basis which is both geometrically interesting and computationally convenient. Somewhat surprisingly, although it appears to come from different considerations, our choice has an analog at the prime 2 which coincides with the parameter used in \cite{h2p2} (see Remarks \ref{rmk:K} and \ref{rmk:KK'}). 
The calculations follow a recipe that we hope generalizes to other Morava $E$-theories of height 2; we hope to address these matters and recognize more of the general patterns as further computational evidence becomes available. \end{rmk} \subsection{Acknowledgements} I thank Charles Rezk for his encouragement on this work, and for his observation in a correspondence which led to Proposition \ref{prop:frob^2} and Corollary \ref{cor:K'}. I thank Kyle Ormsby for helpful discussions on Section \ref{sec:total}, and for directing me to places in the literature. I thank Tyler Lawson for the sustained support I received from him as a student. \subsection{Conventions} Let $p$ be a prime, $q$ a power of $p$, and $n$ a positive integer. We use the symbols \[ {\mb F}_q\text{,}~~{\mb Z}_q\text{,}~~{\rm and}~~{\mb Z}/n \] to denote the field with $q$ elements, the ring of $p$-typical Witt vectors over ${\mb F}_q$, and the additive group of integers modulo $n$, respectively. If $R$ is a ring, then $R\llbracket x \rrbracket$ and $R (\!(x)\!)$ denote the rings of formal power series and formal Laurent series over $R$ in the variable $x$ respectively. If $I \subset R$ is an ideal, then $R_I^\wedge$ denotes the completion of $R$ with respect to $I$. If $E$ is an elliptic curve and $m$ is an integer, then $[m]$ denotes the multiplication-by-$m$ map on $E$, and $E[m]$ denotes the $m$-torsion subgroup scheme of $E$. All formal groups mentioned in this paper will be commutative and one-dimensional. The terminology for the structure of the Dyer-Lashof~ theory will follow \cite{cong} and \cite{h2p2}; some of the notions there are taken in turn from \cite{BW} and \cite{V}. \section{Total power operations} \label{sec:total} \subsection{A universal elliptic curve and a Morava $E$-theory spectrum} \label{subsec:ec} A Morava $E$-theory of height 2 at the prime 3 has as its formal group the universal deformation of a height-2 formal group over a perfect field of characteristic 3. Given a supersingular elliptic curve over such a field, its formal completion at the identity produces a formal group of height 2. To study power operations for the corresponding $E$-theory, we do calculations with the universal deformation of that supersingular elliptic curve, realized as a family of elliptic curves with a $\Gamma_1(N)$-structure (see \cite[Section 3.2]{KM}), where $N$ is prime to 3. Here is a specific model (cf.~\cite[4(4.6a)]{husemoller}). \begin{prop} \label{prop:C} Over ${\mb Z} [1/4]$, the moduli problem of nonsingular elliptic curves with a choice of a point of exact order 4 and a nowhere-vanishing invariant one-form is represented by \begin{equation} \label{Cxy} C \colon\thinspace y^2 + a x y + a b y = x^3 + b x^2 \end{equation} with chosen point $(0,0)$ and one-form $dx / (2 y + a x + a b) = dy / (3 x^2 + 2 b x - a y)$ over the graded ring \[ S^\bullet \coloneqq {\mb Z} [1/4] [a, b, \Delta^{-1}] \] where $|a| = 1$, $|b| = 2$, and $\Delta = a^2 b^4 (a^2 - 16 b)$. \end{prop} \begin{proof} Let $P$ be the chosen point of exact order 4. Since $2P$ is 2-torsion, the tangent line of the elliptic curve at $P$ passes through $2P$, and the tangent line at $2P$ passes through the identity at infinity. With this observation, the rest of the proof is analogous to that of \cite[Proposition 3.2]{tmf3}. \end{proof} Over a finite field of characteristic 3, $C$ is supersingular precisely when the quantity \begin{equation} \label{H} H \coloneqq a^2 + b \end{equation} vanishes (cf.~\cite[V.4.1a]{AEC}). 
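As a sanity check, both the displayed discriminant $\Delta$ and the criterion \eqref{H} can be recovered mechanically from the standard Weierstrass quantities (cf.~\cite[Section III.1]{AEC}); for $p = 3$, the Hasse invariant is, up to a unit, the coefficient of $x^{p-1} = x^2$ in the cubic obtained from \eqref{Cxy} by completing the square. A minimal Python/\texttt{sympy} sketch of this verification:
\begin{verbatim}
import sympy as sp

a, b, x = sp.symbols('a b x')
# Weierstrass coefficients of C : y^2 + a1*x*y + a3*y = x^3 + a2*x^2
a1, a2, a3, a4, a6 = a, b, a*b, sp.Integer(0), sp.Integer(0)

# standard quantities and the discriminant, cf. [AEC, III.1]
b2 = a1**2 + 4*a2
b4 = 2*a4 + a1*a3
b6 = a3**2 + 4*a6
b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
Delta = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
print(sp.factor(Delta))    # a**2*b**4*(a**2 - 16*b)

# completing the square: (2y + a1*x + a3)^2 = 4*(x^3 + a2*x^2) + (a1*x + a3)^2,
# so over F_3 the Hasse invariant is (up to the unit 4) the coefficient of x^2
cubic = sp.expand(4*(x**3 + a2*x**2) + (a1*x + a3)**2)
print(cubic.coeff(x, 2))   # a**2 + 4*b, which is a^2 + b mod 3
\end{verbatim}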
As $(3,H)$ is a homogeneous maximal ideal of $S^\bullet$ corresponding to the closed subscheme ${\rm Spec\thinspace} {\mb F}_3$, the supersingular locus consists of a single closed point, and $C$ restricts to ${\mb F}_3$ as \[ C_0 \colon\thinspace y^2 + x y - y = x^3 - x^2. \] From the above universal deformation $C$ of $C_0$, we next produce a Morava $E$-theory spectrum which is 2-periodic. We follow the convention that elements in algebraic degree $n$ lie in topological degree $2n$, and work in an affine \'etale coordinate chart of the weighted projective space ${\rm Proj\thinspace} {\mb Z} [1/4] [a, b]$ (see Remark \ref{rmk:grading}). Define elements $u$ and $c$ such that \[ a = u c \qquad {\rm and} \qquad b = u^2. \] Consider the graded ring \[ S^\bullet [u^{-1}] \cong {\mb Z} [1/4] [a, \Delta^{-1}] [u^{\pm1}] \] where $|u| = 1$, and denote by $S$ its subring of elements in degree 0 so that \begin{equation} \label{S} S \cong {\mb Z} [1/4] [c, \delta^{-1}] \end{equation} where $\delta = u^{-12} \Delta = c^2 (c^2 - 16)$. Write \[ \widehat{S} = {\mb Z}_9 \llbracket h \rrbracket \] where \begin{equation} \label{h} h \coloneqq u^{-2} H = c^2 + 1. \end{equation} Let $i$ be an element generating ${\mb Z}_9$ over ${\mb Z}_3$ with $i^2 = -1$. We may choose \[ c \equiv i ~~{\rm mod}~ (3,h) \] and we have \[ \delta \equiv -1 ~~{\rm mod}~ (3,h) \] where $(3,h)$ is the maximal ideal of the complete local ring $\widehat{S}$. Then by Hensel's lemma, both $c$ and $\delta$ lie in $\widehat{S}$, and both are invertible. Thus \[ \widehat{S} \cong S_{(3,h)}^\wedge. \] Now $C$ restricts to $S$ as \begin{equation} \label{Cc} y^2 + c x y + c y = x^3 + x^2. \end{equation} Let $\widehat{C}$ be the formal completion of $C$ over $S$ at the identity. It is a formal group over $\widehat{S}$, and its reduction to ${\mb F}_9 = \widehat{S} / (3,h)$ is a formal group ${\mb G}$ of height 2 in view of \eqref{h} and \eqref{H}. By the Serre-Tate theorem (see \cite[2.9.1]{KM}), 3-adically the deformation theory of an elliptic curve is equivalent to the deformation theory of its 3-divisible group, and thus $\widehat{C}$ is the universal deformation of ${\mb G}$ in view of Proposition \ref{prop:C}. Let $E$ be the $E_\infty$-ring spectrum which represents the Morava $E$-theory associated to ${\mb G}$ (see \cite[Corollary 7.6]{GH}). Then \[ E_* \cong {\mb Z}_9 \llbracket h \rrbracket [u^{\pm 1}] \] where $u$ is in topological degree 2, and it corresponds to a local uniformizer at the identity of $C$. \subsection{Points of exact order 3} To study $C$ in a formal neighborhood of the identity, it is convenient to make a change of variables. Let \[ u = \frac{x}{y} \quad {\rm and} \quad v = \frac{1}{y}, \qquad {\rm so} \qquad x = \frac{u}{v} \quad {\rm and} \quad y = \frac{1}{v}. \] The identity of $C$ is then $(u,v) = (0,0)$, with $u$ a local uniformizer. The equation \eqref{Cxy} of $C$ becomes \begin{equation} \label{Cuv} v + a u v + a b v^2 = u^3 + b u^2 v. 
\end{equation} \begin{prop} \label{prop:tors} On the elliptic curve $C$ over $S^\bullet$, the $uv$-coordinates $(d,e)$ of any nonzero 3-torsion point satisfy the identities \begin{equation} \label{f} f(d) = 0 \end{equation} and \begin{equation} \label{g} e = g(d) \end{equation} where $f, g \in S^\bullet [u]$ are given by \begin{equation*} \begin{split} f(u) = & ~ b^4 u^8 + 3 a b^3 u^7 + 3 a^2 b^2 u^6 + (a^3 b + 7 a b^2) u^5 + (6 a^2 b - 6 b^2) u^4 + 9 a b u^3 \\ & + (-a^2 + 8 b) u^2 - 3 a u - 3, \\ g(u) = & -\frac{1}{a (a^2 - 16 b)} \big( a b^3 u^7 + (3 a^2 b^2 - 2 b^3) u^6 + (3 a^3 b -6 a b^2) u^5 + (a^4 + a^2 b \\ & + 2 b^2) u^4 + (4 a^3 - 15 a b) u^3 + 18 b u^2 - 12 a u - 18 \big). \end{split} \end{equation*} \end{prop} \begin{proof} \footnote{See Appendix \ref{apx:tors} for explicit formulas for the polynomials $\widetilde{f}$, $Q_1$, $R_1$, $Q_2$, $R_2$, $K$, $L$, $M$, and $N$ that appear in the proof. } Given the elliptic curve $C$ with equation \eqref{Cxy}, a nonzero point $Q$ is 3-torsion if and only if the polynomial \[ \psi_3 (x) \coloneqq 3 x^4 + (a^2 + 4 b) x^3 + 3 a^2 b x^2 + 3 a^2 b^2 x + a^2 b^3 \] vanishes at $Q$ (cf.~\cite[Exercise 3.7f]{AEC}). Substituting $x = u / v$ and clearing the denominators, we get a polynomial \[ \widetilde{\psi}_3(u,v) \coloneqq 3 u^4 + (a^2 + 4 b) u^3 v + 3 a^2 b u^2 v^2 + 3 a^2 b^2 u v^3 + a^2 b^3 v^4. \] As $Q = (d,e)$ in $uv$-coordinates, we then have \begin{equation} \label{Tp} \widetilde{\psi}_3(d,e) = 0. \end{equation} To get the polynomial $f$, we take $v$ as the variable and rewrite \eqref{Cuv} as a quadratic equation \begin{equation} \label{quadratic} a b v^2 + (-b u^2 + a u + 1) v - u^3 = 0, \end{equation} where the leading coefficient $a b$ is invertible in $S^\bullet = {\mb Z} [1/4] [a, b, \Delta^{-1}]$ as $\Delta = a^2 b^4 (a^2 - 16 b)$. Define \begin{equation} \label{Tfdef} \widetilde{f}(u) \coloneqq \widetilde{\psi}_3(u,v) \widetilde{\psi}_3(u,\bar{v}) \end{equation} where $v$ and $\bar{v}$ are formally the conjugate roots of \eqref{quadratic}, so that we compute $\widetilde{f}$ in terms of $u$ by substituting \[ v + \bar{v} = \frac{b u^2 - a u - 1}{a b} \qquad {\rm and} \qquad v \bar{v} = -\frac{u^3}{a b}. \] We then factor $\widetilde{f}$ over $S^\bullet$ as \begin{equation} \label{Tffactor} \widetilde{f}(u) = -\frac{u^4 f(u)}{a^2 b} \end{equation} with $f$ the stated polynomial of degree 8. We check that $f$ is irreducible by applying Eisenstein's criterion to the homogeneous prime ideal $(3,H)$ of $S^\bullet$. We have $\widetilde{f}(d) = 0$ by \eqref{Tfdef} and \eqref{Tp}. To see $f(d) = 0$, consider the closed subscheme $D \subset C[3]$ of points of exact order 3. By \cite[2.3.1]{KM} it is finite locally free of rank 8 over $S^\bullet$. By the Cayley-Hamilton theorem, as a global section of $D$, $u$ locally satisfies a homogeneous monic equation of degree 8, and this equation locally defines the rank-8 scheme $D$. Since $D$ is affine, it is then globally defined by such an equation. In view of $\widetilde{f}(d) = 0$ and \eqref{Tffactor}, we determine this equation, and (up to a unit in $S^\bullet$) get the first stated identity \eqref{f}. To get the polynomial $g$, we note that both the quartic polynomial \[ A(v) \coloneqq \widetilde{\psi}_3(d,v) \] and the quadratic polynomial \[ B(v) \coloneqq a b v^2 + (-b d^2 + a d + 1) v - d^3 \] vanish at $e$, and thus so does their greatest common divisor (gcd). 
Applying the Euclidean algorithm (see Appendix \ref{apx:tors} for explicit expressions), we have \begin{equation*} \begin{split} A(v) = & ~ Q_1(v) B(v) + R_1(v), \\ B(v) = & ~ Q_2(v) R_1(v) + R_2, \end{split} \end{equation*} where \[ R_1(v) = K(d) v + L(d) \] for some polynomials $K$ and $L$, and $R_2 = 0$ in view of \eqref{f}. Thus $R_1(v)$ is the gcd of $A(v)$ and $B(v)$, and hence \[ K(d) e + L(d) = R_1(e) = 0. \] To write $e$ in terms of $d$ from the above identity, we apply the Euclidean algorithm to $f$ and $K$. Their gcd turns out to be 1, and thus there are polynomials $M$ and $N$ with \[ M(u) f(u) + N(u) K(u) = 1. \] By \eqref{f} we then have $N(d) K(d) = 1$, and thus \[ e = -N(d) L(d) = g(d) \] where $g$ is as stated. \end{proof} \begin{rmk} \label{rmk:dmod3} The formula for $f$ in Proposition \ref{prop:tors} satisfies a congruence \[ f(u) \equiv u^2 (b^4 u^6 + a b H u^3 - H) ~~{\rm mod}~ 3. \] The two roots (counted with multiplicity) of $f(u)$ which reduce to zero modulo 3 correspond to the two nonzero points in the unique order-3 subgroup of $C$ in a formal neighborhood of the identity. \end{rmk} \subsection{A universal isogeny and a total power operation} \begin{prop} \label{prop:isog} \mbox{} \begin{enumerate}[(i)] \item \label{isog(i)} The universal degree-3 isogeny $\psi$ with source $C$ is defined over the graded ring \[ S^\bullet_3 \coloneqq S^\bullet [\kappa] \big/ \big( W(\kappa) \big) \] where $|\kappa| = -2$ and \begin{equation} \label{W} W(\kappa) = \kappa^4 - \frac{6}{b^2} ~ \kappa^2 + \frac{a^2 - 8 b}{b^4} ~ \kappa - \frac{3}{b^4}, \end{equation} and has target the elliptic curve \[ C' \colon\thinspace v + a' u v + a' b' v^2 = u^3 + b' u^2 v \] where \begin{equation*} \begin{split} a' = & ~ \frac{1}{a} \big( (a^2 b^4 - 4 b^5) \kappa^3 + 4 b^4 \kappa^2 + (-6 a^2 b^2 + 20 b^3) \kappa + a^4 - 12 a^2 b + 12 b^2 \big), \\ b' = & ~ b^3. \end{split} \end{equation*} \item \label{isog(ii)} The kernel of $\psi$ is generated by a point $Q$ of exact order 3 with coordinates $(d,e)$ satisfying \begin{equation} \label{K} \begin{split} \kappa = & -\frac{1}{a^2 - 16 b} \big( a b^3 d^7 + (3 a^2 b^2 - 2 b^3) d^6 + (3 a^3 b - 6 a b^2) d^5 + (a^4 \\ & + a^2 b + 2 b^2) d^4 + (4 a^3 - 15 a b) d^3 + (a^2 + 2 b) d^2 - 12 a d - 18 \big) \\ = & ~ a e - d^2. \end{split} \end{equation} \item \label{isog(iii)} The restriction of $\psi$ to the supersingular locus at the prime 3 is the 3-power Frobenius endomorphism. \item \label{isog(iv)} The induced map $\psi^*$ on the relative cotangent space of $C'$ at the identity sends $du$ to $\kappa du$. \end{enumerate} \end{prop} \begin{proof} \footnote{See Appendix \ref{apx:isog} for the power series expansion of $v$ and details of the calculations involving the group law on $C$ that appear in the proof. } Let $P = (u,v)$ be a point on $C$, and $Q = (d,e)$ be a nonzero 3-torsion point. Rewriting \eqref{Cuv} as \[ v = u^3 + b u^2 v - a u v - a b v^2, \] we express $v$ as a power series in $u$ by substituting this equation into itself recursively. For the purpose of our calculations, we take this power series up to $u^{12}$ as an expression for $v$, and write $e = g(d)$ as in \eqref{g}. Define functions $u'$ and $v'$ by \begin{equation} \label{u'v'} \begin{split} u' \coloneqq & ~ u(P) \cdot u(P-Q) \cdot u(P+Q), \\ v' \coloneqq & ~ v(P) \cdot v(P-Q) \cdot v(P+Q), \end{split} \end{equation} where $u(-)$ and $v(-)$ denote the $u$-coordinate and $v$-coordinate of a point respectively. 
By computing the group law on $C$, we express $u'$ and $v'$ as power series in $u$: \begin{equation} \label{KL} \begin{split} u' = & ~ \kappa u + (\text{higher-order terms}), \\ v' = & ~ \lambda u^3 + (\text{higher-order terms}), \end{split} \end{equation} where the coefficients ($\kappa$, $\lambda$, etc.)~involve $a$, $b$, and $d$. In particular, in view of \eqref{f}, we compute that $\kappa$ satisfies $W(\kappa) = 0$ with $|\kappa| = -2$ as stated in \eqref{isog(i)}. Now define the isogeny $\psi \colon\thinspace C \to C'$ by \begin{equation} \label{psi} u\big( \psi(P) \big) \coloneqq u' \qquad {\rm and} \qquad v\big( \psi(P) \big) \coloneqq \frac{\kappa^3}{\lambda} \cdot v', \end{equation} where we introduce the factor $\kappa^3 / \lambda$ so that the equation of $C'$ will be in the Weierstrass form. Using \eqref{KL} (see Appendix \ref{apx:isog} for explicit expressions), we then determine the coefficients in a Weierstrass equation and get the stated equation of $C'$. We next check the statement of \eqref{isog(ii)}. In view of \eqref{psi} and \eqref{u'v'}, the kernel of $\psi$ is the order-3 subgroup generated by $Q$. In \eqref{K}, the first identity is computed in \eqref{KL}; we then compare it with the formula for $g$ in Proposition \ref{prop:tors} and get the second identity. For \eqref{isog(iii)}, recall from Section \ref{subsec:ec} that the supersingular locus at the prime 3 is ${\rm Spec\thinspace} {\mb F}_3$. Over ${\mb F}_3$, since $C[3] = 0$ by \cite[V.3.1a]{AEC}, $Q$ coincides with the identity, and thus \[ u\big( \psi(P) \big) = u(P) \cdot u(P-Q) \cdot u(P+Q) = \big( u(P) \big)^3. \] As the $u$-coordinate is a local uniformizer at the identity, $\psi$ then restricts to ${\mb F}_3$ as the 3-power Frobenius endomorphism. The statement of \eqref{isog(iv)} follows by definition of $\kappa$ in \eqref{KL}. \end{proof} \begin{rmk} In view of \isog{iii}, the formal completion of $\psi \colon\thinspace C \to C'$ at the identity of $C$ is a {\em deformation of Frobenius} in the sense of \cite[11.3]{cong}. When it is clear from the context, we will simply call $\psi$ itself a deformation of Frobenius. \end{rmk} \begin{rmk} \label{rmk:K} From \eqref{u'v'} and \eqref{KL} we have \begin{equation} \label{norm} u(P-Q) \cdot u(P+Q) = \kappa + u \cdot (\text{higher-order terms}). \end{equation} In particular $u(-Q) \cdot u(Q) = \kappa$ (cf.~\cite[Proposition 7.5.2 and Section 7.7]{KM}). The analog of $\kappa$ at the prime 2 coincides with $d$ as studied in \cite[Section 3]{h2p2}. \end{rmk} Recall from Section \ref{subsec:ec} that \[ E^0 \cong {\mb Z}_9 \llbracket h \rrbracket = \widehat{S} \cong S_{(3,h)}^\wedge \] in which $c$ and $i$ are elements with $c^2 + 1 = h$ and $i^2 = -1$. Given the graded ring $S^\bullet_3$ in \isog{i}, define \begin{equation} \label{S_3} S_3 \coloneqq S [\alpha] / \big( w(\alpha) \big) \end{equation} where \begin{equation} \label{w} w(\alpha) = \alpha^4 - 6 \alpha^2 + (c^2 - 8) \alpha - 3 \end{equation} (cf.~the definition of $S$ from $S^\bullet$ in \eqref{S}). By \cite[Theorem 1.1]{Str98} we have \[ E^0 B\Sigma_3 / I \cong \big( S_3 \big)_{(3,h)}^\wedge \] where \begin{equation} \label{transfer} I \coloneqq \bigoplus_{0<i<3} {\rm image} \big( E^0 B(\Sigma_i \times \Sigma_{3-i}) \xrightarrow{\rm transfer} E^0 B\Sigma_3 \big) \end{equation} is the {\em transfer ideal}. In view of this and the construction of {\em total power operations} for Morava $E$-theories in \cite[3.23]{cong}, we have the following corollary. 
\begin{cor} \label{cor:psi3} The total power operation \[ \psi^3 \colon\thinspace E^0 \to E^0 B\Sigma_3 / I \cong E^0 [\alpha] \big/ \big( w(\alpha) \big) \] is given by \begin{equation*} \begin{split} \psi^3(h) = & ~ h^3 + (\alpha^3 - 6 \alpha - 27) h^2 + 3 (-6 \alpha^3 + \alpha^2 + 36 \alpha + 67) h \\ & + 57 \alpha^3 - 27 \alpha^2 - 334 \alpha - 342, \\ \psi^3(c) = & ~ c^3 + (\alpha^3 - 6 \alpha - 12) c - 4 (\alpha + 1)^2 (\alpha - 3) c^{-1}, \\ \psi^3(i) \thinspace = & -i, \end{split} \end{equation*} where \begin{equation} \label{Amod3} \alpha \equiv 0 ~~{\rm mod}~ 3. \end{equation} \end{cor} \begin{proof} By \isog{i}, in $xy$-coordinates, $C'$ restricts to $S_3$ as \[ y^2 + c' x y + c' y = x^3 + x^2 \] where \[ c' = \frac{1}{c} \big( (c^2 - 4) \alpha^3 + 4 \alpha^2 + (-6 c^2 + 20) \alpha + c^4 - 12 c^2 + 12 \big). \] By \cite[Theorem B]{cong}, since the above equation is in the form of \eqref{Cc}, there is a correspondence between the restriction to $S_3$ of the universal isogeny $\psi$, which is a deformation of Frobenius, and the total power operation $\psi^3$. In particular $\psi^3(c)$ is given by $c'$. As $\psi^3$ is a ring homomorphism, we then get the formula for $\psi^3(h) = \psi^3(c^2 + 1)$. We also have \[ \big( \psi^3(i) \big)^2 = \psi^3(-1) = -1, \] and thus $\psi^3(i) = i$ or $-i$. We exclude the former possibility in view of the congruence \[ \psi^3(i) \equiv i^3 ~~{\rm mod}~ 3 \] by \cite[Propositions 3.25 and 10.5]{cong}. The congruence \eqref{Amod3} follows from Remark \ref{rmk:dmod3} and \eqref{K}. \end{proof} \section{Individual power operations} \label{sec:individual} \subsection{A composite of deformations of Frobenius} Recall from Proposition \ref{prop:isog} that over $S^\bullet_3$ we have the universal degree-3 isogeny $\psi \colon\thinspace C \to C' = C/G$ where $G$ is an order-3 subgroup of $C$; in particular, $\psi$ is a deformation of the 3-power Frobenius endomorphism over the supersingular locus. We want to construct a similar isogeny $\psi'$ with source $C'$ so that the composite $\psi' \circ \psi$ will correspond to a composite of total power operations via \cite[Theorem B]{cong}. Let $G' \coloneqq C[3]/G$ which is an order-3 subgroup of $C'$. Recall from Section \ref{subsec:ec} that $C$ is the universal deformation of a supersingular elliptic curve $C_0$. Since the 3-divisible group of $C_0$ is formal, $C_0[3]$ is connected. Thus over a formal neighborhood of the supersingular locus, if $G$ is the unique connected order-3 subgroup of $C$, $G'$ is then the unique connected order-3 subgroup of $C'$. As in the proof of Proposition \ref{prop:isog}, we define $\psi' \colon\thinspace C' \to C'/G'$ using a nonzero point in $G'$ (see \eqref{u'v'} and \eqref{psi}), and $\psi'$ is then a deformation of Frobenius. Over the supersingular locus, the pair $(\psi, \psi')$ is {\em cyclic in standard order} in the sense of \cite[6.7.7]{KM}. We describe it more precisely as below. 
\begin{prop} \label{prop:frob^2} The following diagram of elliptic curves over $S^\bullet_3$ commutes: \begin{equation} \label{frob^2} \begin{tikzpicture}[baseline=(current bounding box.center)] \node (LT) at (0, 2) {$C$}; \node (MT) at (3.8, 2) {$C/G = $}; \node (RT) at (4.65, 2.04) {$C'$}; \node (LB) at (1.9, 0) {$C/C[3]$}; \node (MB) at (3.5, 0) {$\cong \frac{C/G}{C[3]/G} = $}; \node (RB) at (4.65, 0.025) {$\frac{C'}{G'}$}; \node at (4.95, -0.15) {.}; \draw [->] (LT) -- node [above] {$\scriptstyle \psi$} (MT); \draw [->] (LT) -- node [left] {$\scriptstyle [-3]$} (LB); \draw [->] (RT) -- node [right] {$\scriptstyle \psi'$} (RB); \end{tikzpicture} \end{equation} \end{prop} \begin{proof} By \cite[2.4.2]{KM}, since ${\rm Proj\thinspace} S^\bullet_3$ is connected, we need only show that the locus over which $\psi' \circ \psi = [-3]$ is not empty, where by abuse of notation $[-3]$ denotes the map $[-3]$ on $C$ composed with the canonical isomorphism from $C/C[3]$ to $C'/G'$. Recall from Section \ref{subsec:ec} that $C$ restricts to the supersingular locus ${\mb F}_3$ as \[ C_0 \colon\thinspace y^2 + x y - y = x^3 - x^2. \] By \isog{iii} both $\psi$ and $\psi'$ restrict as the 3-power Frobenius endomorphism $\psi_0$. By \cite[2.6.3]{KM}, in the endomorphism ring of $C_0$, $\psi_0$ is a root of the polynomial \begin{equation} \label{charpoly} X^2 - {\rm trace}(\psi_0) \cdot X + 3 \end{equation} with ${\rm trace}(\psi_0)$ an integer satisfying \[ \big( {\rm trace}(\psi_0) \big)^2 \leq 12. \] Moreover by \cite[Exercise 5.10a]{AEC}, since $C_0$ is supersingular, we have \[ {\rm trace}(\psi_0) \equiv 0 ~~{\rm mod}~ 3. \] Thus ${\rm trace}(\psi_0) = 0$, 3, or $-3$. We exclude the latter two possibilities by checking the action of $\psi_0$ at the 2-torsion point $(1,0)$. It then follows from \eqref{charpoly} that $\psi_0 \circ \psi_0$ agrees with $[-3]$ on $C_0$ over ${\mb F}_3$. \end{proof} Analogous to \isog{iv}, let $\kappa'$ be the element in $S^\bullet_3$ such that $(\psi')^*$ sends $du$ to $\kappa' du$. Note that $|\kappa'| = -6$. \begin{cor} \label{cor:K'} The following relations hold in $S^\bullet_3$: \[ b^4 \kappa \kappa' + 3 = 0 \] and \[ \kappa' = -\kappa^3 + \frac{6}{b^2} ~ \kappa - \frac{a^2 - 8 b}{b^4}. \] \end{cor} \begin{proof} The isogenies in \eqref{frob^2} induce maps on relative cotangent spaces at the identity. By \isog{iv} we then have a commutative diagram \begin{center} \begin{tikzpicture} \node (LT) at (0, 2) {$\kappa \kappa' du$}; \node (RT) at (4.65, 2) {$\kappa' du$}; \node (LB) at (1.9, 0) {$du$}; \node (RB) at (4.65, 0) {$du$}; \node at (4.95, -0.125) {.}; \draw [|->] (RT) -- node [above] {$\scriptstyle \psi^*$} (LT); \draw [|->] (LB) -- node [left] {$\scriptstyle [-3]^*$} (LT); \draw [|->] (RB) -- node [right] {$\scriptstyle (\psi')^*$} (RT); \draw [double distance=1.3pt] (LB) -- (RB); \end{tikzpicture} \end{center} Thus for the first stated relation we need only show that $[3]^*$ sends $du$ to $3 du / b^4$. For $i = 1$, 2, 3, and 4, let $Q_i$ be a generator for each of the four order-3 subgroups of $C$. Each $Q_i$ can be chosen as $Q$ in \eqref{u'v'}, and we denote the corresponding quantity $\kappa$ in \eqref{KL} by $\kappa_i$. Define an isogeny $\Psi$ with source $C$ by \begin{equation*} \begin{split} u\big( \Psi(P) \big) \coloneqq & ~ u(P) \prod_{i=1}^4 \big( u(P-Q_i) \cdot u(P+Q_i) \big), \\ v\big( \Psi(P) \big) \coloneqq & ~ v(P) \prod_{i=1}^4 \big( v(P-Q_i) \cdot v(P+Q_i) \big). 
\end{split} \end{equation*} In view of \eqref{norm}, since $[3]$ has the same kernel as $\Psi$, we have \begin{equation} \label{s} [3]^* (du) = s \cdot \kappa_1 \kappa_2 \kappa_3 \kappa_4 \cdot du \end{equation} where $s$ is a degree-0 unit in $S^\bullet$ coming from an automorphism of $C$ over $S^\bullet$. In view of \eqref{W} we have \[ \kappa_1 \kappa_2 \kappa_3 \kappa_4 = -\frac{3}{b^4}. \] We then compute that $s = -1$ by comparing the restrictions of the two sides of \eqref{s} to $S$ (see \eqref{S} for the definition of $S$): $[3]^*$ becomes the multiplication-by-3 map, and $-3 / b^4$ becomes $-3$ (cf.~the constant term in \eqref{w}). Thus $[3]^*$ sends $du$ to $3 du / b^4$. The second stated relation follows by a computation from the first relation and the relation $W(\kappa) = 0$ as in \isog{i}. \end{proof} \begin{rmk} \label{rmk:KK'} As noted in Remark \ref{rmk:K}, the (local) analog of $\kappa$ at the prime 2 coincides with the parameter $d$ in \cite[Section 3]{h2p2}. In particular, with the notations there and the equation in \cite[Proposition 3.2]{tmf3}, $d$ and $d'$ satisfy an analogous relation $A_3 d d' + 2 = 0$ which locally reduces to $d d' + 2 = 0$ (the analog of the factor $s$ in the proof of Corollary \ref{cor:K'} equals 1; cf.~\cite[Theorem 2.5.7]{andoduke}). These arise as examples of \cite[Lemma 3.21]{poonen}. \end{rmk} \begin{rmk} \label{rmk:K'} In view of \eqref{frob^2}, $-\psi'$ (composed with the canonical isomorphism on the target) turns out to be the dual isogeny of $\psi$ (cf.~the proof of \cite[2.9.4]{KM}). If $G$ is the unique order-3 subgroup of $C$ in a formal neighborhood of the identity, then \[ \kappa \equiv 0 ~~{\rm mod}~ 3 \] by Remark \ref{rmk:dmod3} and \eqref{K}. Thus in view of Corollary \ref{cor:K'} and \eqref{H} we have \[ -\kappa' = \kappa^3 - \frac{6}{b^2} ~ \kappa + \frac{a^2 - 8 b}{b^4} \equiv \frac{H}{b^4} ~~{\rm mod}~ 3. \] This congruence agrees with the interpretation of $H$ as defined by the tangent map of the Verschiebung isogeny over ${\mb F}_3$ (see \cite[12.4.1]{KM}). \end{rmk} \subsection{Individual power operations} Let $A$ be a $K(2)$-local commutative $E$-algebra. By \cite[3.23]{cong} and Corollary \ref{cor:psi3}, we have a total power operation \[ \psi^3 \colon\thinspace A_0 \to A_0 \otimes_{E_0} (E^0 B\Sigma_3 / I) \cong A_0 [\alpha] \big/ \big( w(\alpha) \big). \] We also have a composite of total power operations \begin{equation} \label{psi3^2} \begin{split} A_0 \stackrel{\psi^3}{\longrightarrow} A_0 \otimes_{E_0} (E^0 B\Sigma_3 / I) \stackrel{\psi^3}{\longrightarrow} & ~ \big( A_0 \otimes_{E_0} (E^0 B\Sigma_3 / I) \big) \tensor[^\psi^3]{\otimes}{_{E_0 [\alpha]}} (E^0 B\Sigma_3 / I) \\ \cong \thinspace \thinspace & ~ \Big( A_0 [\alpha] \big/ \big( w(\alpha) \big) \Big) \tensor[^\psi^3]{\otimes}{_{E_0 [\alpha]}} \Big( E^0 [\alpha] \big/ \big( w(\alpha) \big) \Big), \end{split} \end{equation} where the elements in the target $M \tensor[^\psi^3]{\otimes}{_R} N$ are subject to the equivalence relation (as well as other ones in a usual tensor product) \[ m \otimes (r \cdot n) \sim \big( m \cdot \psi^3(r) \big) \otimes n \] for $m \in M$, $n \in N$, and $r \in R$ with \[ \psi^3(\alpha) = -\alpha^3 + 6 \alpha - h + 9 \] in view of Corollary \ref{cor:K'}. \begin{defn} Define the {\em individual power operations} \[ Q_k \colon\thinspace A_0 \to A_0 \] for $k = 0$, 1, 2, and 3 by \begin{equation} \label{Q_k} \psi^3 (x) = Q_0(x) + Q_1(x) \alpha + Q_2(x) \alpha^2 + Q_3(x) \alpha^3. 
\end{equation} \end{defn} \begin{prop} \label{prop:Q} The following relations hold among the individual power operations $Q_0$, $Q_1$, $Q_2$, and $Q_3$: \begin{enumerate}[(i)] \item \label{Q(i)} $Q_0(1) = 1, \quad Q_1(1) = Q_2(1) = Q_3(1) = 0;$ \item \label{Q(ii)} $Q_k(x+y) = Q_k(x) + Q_k(y) \text{~for all~} k;$ \item \label{Q(iii)} {\em Commutation relations } \begin{equation*} \begin{split} Q_0(h x) = & ~ (h^3 - 27 h^2 + 201 h - 342) Q_0(x) + (3 h^2 - 54 h + 171) Q_1(x) \qquad \qquad \\ & + (9 h - 81) Q_2(x) + 24 Q_3(x), \\ Q_1(h x) = & ~ (-6 h^2 + 108 h - 334) Q_0(x) + (-18 h + 171) Q_1(x) + (-72) Q_2(x) \\ & + (h - 9) Q_3(x), \\ Q_2(h x) = & ~ (3 h - 27) Q_0(x) + 8 Q_1(x) + 9 Q_2(x) + (-24) Q_3(x), \\ Q_3(h x) = & ~ (h^2 - 18 h + 57) Q_0(x) + (3 h - 27) Q_1(x) + 8 Q_2(x) + 9 Q_3(x), \\ Q_0(c x) = & ~ (c^3 - 12 c + 12 c^{-1}) Q_0(x) + (3 c - 12 c^{-1}) Q_1(x) + (12 c^{-1}) Q_2(x) \\ & + (-12 c^{-1}) Q_3(x), \\ Q_1(c x) = & ~ (-6 c + 20 c^{-1}) Q_0(x) + (-20 c^{-1}) Q_1(x) + (- c + 20 c^{-1}) Q_2(x) \\ & + (4 c - 20 c^{-1}) Q_3(x), \\ Q_2(c x) = & ~ (4 c^{-1}) Q_0(x) + (-4 c^{-1}) Q_1(x) + (4 c^{-1}) Q_2(x) + (- c - 4 c^{-1}) Q_3(x), \\ Q_3(c x) = & ~ (c - 4 c^{-1}) Q_0(x) + (4 c^{-1}) Q_1(x) + (-4 c^{-1}) Q_2(x) + (4 c^{-1}) Q_3(x), \\ Q_k(i x) = & ~ (-i) Q_k(x) \text{~for all~} k; \\ \end{split} \end{equation*} \item \label{Q(iv)} {\em Adem relations } \begin{equation*} \begin{split} Q_1Q_0(x) = & ~ (-6) Q_0Q_1(x) + 3 Q_2Q_1(x) + (6 h - 54) Q_0Q_2(x) + 18 Q_1Q_2(x) \\ & + (-9) Q_3Q_2(x) + (-6 h^2 + 108 h - 369) Q_0Q_3(x) \\ & + (-18 h + 162) Q_1Q_3(x) + (-54) Q_2Q_3(x), \\ Q_2Q_0(x) = & ~ 3 Q_3Q_1(x) + (-3) Q_0Q_2(x) + (3 h - 27) Q_0Q_3(x) + 9 Q_1Q_3(x), \qquad \qquad \\ Q_3Q_0(x) = & ~ Q_0Q_1(x) + (-h + 9) Q_0Q_2(x) + (-3) Q_1Q_2(x) \\ & + (h^2 - 18 h + 63) Q_0Q_3(x) + (3 h - 27) Q_1Q_3(x) + 9 Q_2Q_3(x); \end{split} \end{equation*} \item \label{Q(v)} {\em Cartan formulas } \begin{equation*} \begin{split} Q_0(xy) = & ~ Q_0(x) Q_0(y) + 3 \big( Q_3(x) Q_1(y) + Q_2(x) Q_2(y) + Q_1(x) Q_3(y) \big) \\ & + 18 Q_3(x) Q_3(y), \\ Q_1(xy) = & ~ \big( Q_1(x) Q_0(y) + Q_0(x) Q_1(y) \big) \\ & + (-h + 9) \big( Q_3(x) Q_1(y) + Q_2(x) Q_2(y) + Q_1(x) Q_3(y) \big) \\ & + 3 \big( Q_3(x) Q_2(y) + Q_2(x) Q_3(y) \big) + (-6 h + 54) Q_3(x) Q_3(y), \qquad \qquad \qquad \end{split} \end{equation*} \begin{equation*} \begin{split} Q_2(xy) = & ~ \big( Q_2(x) Q_0(y) + Q_1(x) Q_1(y) + Q_0(x) Q_2(y) \big) \\ & + 6 \big( Q_3(x) Q_1(y) + Q_2(x) Q_2(y) + Q_1(x) Q_3(y) \big) \\ & + (-h + 9) \big( Q_3(x) Q_2(y) + Q_2(x) Q_3(y) \big) + 39 Q_3(x) Q_3(y), \\ Q_3(xy) = & ~ \big( Q_3(x) Q_0(y) + Q_2(x) Q_1(y) + Q_1(x) Q_2(y) + Q_0(x) Q_3(y) \big) \qquad \qquad \qquad \\ & + 6 \big( Q_3(x) Q_2(y) + Q_2(x) Q_3(y) \big) + (-h + 9) Q_3(x) Q_3(y); \end{split} \end{equation*} \item \label{Q(vi)} {\em The Frobenius congruence } \begin{equation*} Q_0(x) \equiv x^3 ~~{\rm mod}~ 3. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \end{equation*} \end{enumerate} \end{prop} \begin{proof} The relations in \eqref{Q(i)}, \eqref{Q(ii)}, \eqref{Q(iii)}, and \eqref{Q(v)} follow computationally from the formulas in Corollary \ref{cor:psi3} together with the fact that $\psi^3$ is a ring homomorphism. For \eqref{Q(iv)}, there is a canonical isomorphism $C/C[3] \cong C$ of elliptic curves. Given the correspondence between deformations of Frobenius and power operations in \cite[Theorem B]{cong}, the commutativity of \eqref{frob^2} then implies that the composite \eqref{psi3^2} lands in $A_0$. 
In terms of formulas, we have \begin{equation*} \begin{split} \psi^3 \big( \psi^3(x) \big) = & ~ \psi^3 \big( Q_0(x) + Q_1(x) \alpha + Q_2(x) \alpha^2 + Q_3(x) \alpha^3 \big) \\ = & ~ \sum_{k = 0}^3 \psi^3 \big( Q_k(x) \big) \big( \psi^3(\alpha) \big)^k \\ = & ~ \sum_{k = 0}^3 \sum_{j = 0}^3 Q_jQ_k(x) \alpha^j (-\alpha^3 + 6 \alpha - h + 9)^k \\ \equiv & ~ \Psi_0(x) + \Psi_1(x) \alpha + \Psi_2(x) \alpha^2 + \Psi_3(x) \alpha^3 ~~{\rm mod}~ \big( w(\alpha) \big) \end{split} \end{equation*} where each $\Psi_i$ is an $E_0$-linear combination of the $Q_jQ_k$'s. The vanishing of $\Psi_1(x)$, $\Psi_2(x)$, and $\Psi_3(x)$ gives the three relations in \eqref{Q(iv)}. For \eqref{Q(vi)}, by \cite[Propositions 3.25 and 10.5]{cong} we have \[ \psi^3(x) \equiv x^3 ~~{\rm mod}~ 3. \] In view of \eqref{Amod3}, the congruence in \eqref{Q(vi)} then follows from \eqref{Q_k}. \end{proof} \begin{ex} \label{ex} We have $E^0 S^2 \cong {\mb Z}_9 \llbracket h \rrbracket [u] / (u^2)$. By definition of $\kappa$ in \eqref{KL}, the $Q_k$'s act canonically on $u \in E^0 S^2$: \[ Q_k(u) = \left\{ \begin{array}{ll} u, & \quad {\rm if}~k = 1, \\ 0, & \quad {\rm if}~k \neq 1. \\ \end{array} \right. \] We then get the values of the $Q_k$'s on elements in $E^0 S^2$ from \q{i}-\eqref{Q(iii)}. \end{ex} \subsection{The Dyer-Lashof~ algebra} \begin{defn} \label{def:go} \mbox{} \begin{enumerate}[(i)] \item \label{go(i)} Let $i$ be an element generating ${\mb Z}_9$ over ${\mb Z}_3$ with $i^2 = -1$. Define $\gamma$ to be the associative ring generated over ${\mb Z}_9 \llbracket h \rrbracket$ by elements $q_0$, $q_1$, $q_2$, and $q_3$ subject to the following relations: the $q_k$'s commute with elements in ${\mb Z}_3 \subset {\mb Z}_9 \llbracket h \rrbracket$, and satisfy {\em commutation relations} \begin{equation*} \begin{split} q_0 h = & ~ (h^3 - 27 h^2 + 201 h - 342) q_0 + (3 h^2 - 54 h + 171) q_1 + (9 h - 81) q_2 \\ & + 24 q_3, \\ q_1 h = & ~ (-6 h^2 + 108 h - 334) q_0 + (-18 h + 171) q_1 + (-72) q_2 + (h - 9) q_3, \\ q_2 h = & ~ (3 h - 27) q_0 + 8 q_1 + 9 q_2 + (-24) q_3, \\ q_3 h = & ~ (h^2 - 18 h + 57) q_0 + (3 h - 27) q_1 + 8 q_2 + 9 q_3, \\ q_k i ~ = & ~ (-i) q_k \text{~for all~} k, \end{split} \end{equation*} and {\em Adem relations} \begin{equation*} \begin{split} q_1q_0 = & ~ (-6) q_0q_1 + 3 q_2q_1 + (6 h - 54) q_0q_2 + 18 q_1q_2 + (-9) q_3q_2 \\ & + (-6 h^2 + 108 h - 369) q_0q_3 + (-18 h + 162) q_1q_3 + (-54) q_2q_3, \quad~~ \\ q_2q_0 = & ~ 3 q_3q_1 + (-3) q_0q_2 + (3 h - 27) q_0q_3 + 9 q_1q_3, \\ q_3q_0 = & ~ q_0q_1 + (-h + 9) q_0q_2 + (-3) q_1q_2 + (h^2 - 18 h + 63) q_0q_3 \\ & + (3 h - 27) q_1q_3 + 9 q_2q_3. \end{split} \end{equation*} \item \label{go(ii)} Write $\omega \coloneqq \pi_2 E$ which is the kernel of $E^0 S^2 \to E^0$ with $E^0 S^2 \cong {\mb Z}_9 \llbracket h \rrbracket [u] / (u^2)$. Define $\omega$ as a $\gamma$-module in the sense of \cite[2.2]{h2p2} with one generator $u$ by \[ q_k \cdot u = \left\{ \begin{array}{ll} u, & \quad {\rm if}~k = 1, \\ 0, & \quad {\rm if}~k \neq 1. \\ \end{array} \right. \] \end{enumerate} \end{defn} \begin{rmk} \label{rmk:rank} In \go{i}, an element $r \in {\mb Z}_9 \llbracket h \rrbracket \cong E_0$ corresponds to the multiplication-by-$r$ operation (see \cite[Proposition 6.4]{cong}), and each $q_k$ corresponds to the individual power operation $Q_k$ (also cf.~\go{ii} and Example \ref{ex}). 
Under this correspondence, the relations in \q{ii}-\eqref{Q(v)} describe explicitly the structure of $\gamma$ as that of a {\em graded twisted bialgebra over $E_0$} in the sense of \cite[Section 5]{cong}. The grading of $\gamma$ comes from the number of the $q_k$'s in a monomial: for example, commutation relations are in degree 1, and Adem relations are in degree 2. Under these relations, $\gamma$ has an {\em admissible basis}: it is free as a left $E_0$-module on the elements of the form \[ q_0^m q_{k_1} \cdots q_{k_n} \] where $m, n \geq 0$ ($n = 0$ gives $q_0^m$), and $k_i = 1$, 2, or 3. If we write $\gamma[d]$ for the degree-$d$ part of $\gamma$, then $\gamma[d]$ is of rank $1 + 3 + \cdots + 3^d$. \end{rmk} We now identify $\gamma$ with the Dyer-Lashof~ algebra of power operations on $K(2)$-local commutative $E$-algebras. \begin{thm} \label{thm:gamma} Let $A$ be a $K(2)$-local commutative $E$-algebra. Let $\gamma$ be the graded twisted bialgebra over $E_0$ as defined in \go{i}, and $\omega$ be the $\gamma$-module in \go{ii}. Then $A_*$ is an {\em $\omega$-twisted ${\mb Z}/2$-graded amplified $\gamma$-ring} in the sense of \cite[Section 2]{cong} and \cite[2.5 and 2.6]{h2p2}. In particular, \[ \pi_* L_{K(2)} {\mb P}_E (\Sigma^d E) \cong \big( F_d \big)_{(3,h)}^\wedge, \] where $F_d$ is the free $\omega$-twisted ${\mb Z}/2$-graded amplified $\gamma$-ring with one generator in degree $d$. \end{thm} Formulas for $\gamma$ aside, this result is due to Rezk \cite{cong, h2p2}. \begin{proof} Let $\Gamma$ be the graded twisted bialgebra of power operations on $E_0$ in \cite[Section 6]{cong}. We need only identify $\Gamma$ with $\gamma$. There is a direct sum decomposition $\Gamma = \bigoplus_{d \geq 0} \Gamma[d]$ where the summands come from the completed $E$-homology of $B\Sigma_{3^d}$ (see \cite[6.2]{cong}). As in Remark \ref{rmk:rank}, we have a degree-preserving ring homomorphism \[ \phi \colon\thinspace \gamma \to \Gamma, \qquad q_k \mapsto Q_k \] which is an isomorphism in degrees 0 and 1. We need to show that $\phi$ is both surjective and injective in all degrees. For the surjectivity of $\phi$, we use a transfer argument. We have \[ \nu_3(|\Sigma_3^{\wr d}|) = \nu_3(|\Sigma_{3^d}|) = (3^d - 1) / 2 \] where $\nu_3(-)$ is the 3-adic valuation, and $(-)^{\wr d}$ is the $d$-fold wreath product. Thus following the proof of \cite[Proposition 3.17]{cong}, we see that $\Gamma$ is generated in degree 1, and hence $\phi$ is surjective. By Remark \ref{rmk:rank} and (the $E_0$-linear dual of) \cite[Theorem 1.1]{Str98}, $\gamma[d]$ and $\Gamma[d]$ are of the same rank $1 + 3 + \cdots + 3^d$ as free modules over $E_0$. Hence $\phi$ is also injective. \end{proof} \section{$K(1)$-local power operations} \label{sec:K(1)} Let $F \coloneqq L_{K(1)} E$ be the $K(1)$-localization of $E$. The following diagram describes the relationship between $K(1)$-local power operations on $F^0$ (cf.~\cite[Section 3]{hopkins} and \cite[Section IX.3]{H_infty}) and the power operation on $E^0$ in Corollary \ref{cor:psi3}: \begin{center} \begin{tikzpicture} \node (LT) at (0, 2) {$E^0$}; \node (RT) at (3, 2) {$E^0 B\Sigma_3 / I$}; \node (LB) at (0, 0) {$F^0$}; \node (MB) at (3, 0) {$F^0 B\Sigma_3 / J$}; \node (RB) at (4.3, 0) {$\cong F^0. 
$}; \draw [->] (LT) -- node [above] {$\scriptstyle \psi^3$} (RT); \draw [->] (LT) -- (LB); \draw [->] (RT) -- (MB); \draw [->] (LB) -- node [above] {$\scriptstyle \psi_F^3$} (MB); \end{tikzpicture} \end{center} Here $\psi_F^3$ is the $K(1)$-local power operation induced by $\psi^3$, and $J \cong F^0 \otimes_{E^0} I$ is the transfer ideal (cf.~\eqref{transfer}). Recall from \isog{i}, \eqref{S_3}, and Corollary \ref{cor:psi3} that $\psi^3$ arises from the universal degree-3 isogeny which is parametrized by the ring $S^\bullet_3$ with \[ \big( S_3 \big)_{(3,h)}^\wedge \cong E^0 B\Sigma_3 / I. \] The vertical maps are induced by the $K(1)$-localization $E \to F$. In terms of homotopy groups, this is obtained by inverting the generator $h$ and completing at the prime 3 (see \cite[Corollary 1.5.5]{hovey}): \[ E_* = {\mb Z}_9 \llbracket h \rrbracket [u^{\pm1}] \qquad {\rm and} \qquad F_* = {\mb Z}_9 \llbracket h \rrbracket [h^{-1}]_3^\wedge [u^{\pm1}] \] with \[ F_0 = {\mb Z}_9 (\!(h)\!)_3^\wedge = \left.\left\{\sum_{n = -\infty}^{\infty} k_n h^n~\right|~k_n \in {\mb Z}_9, \lim_{n \to -\infty} k_n = 0\right\}. \] The formal group $\widehat{C}$ over $E^0$ has a unique order-3 subgroup after being pulled back to $F^0$ (cf.~Remark \ref{rmk:dmod3}), and the map \[ E^0 B\Sigma_3 / I \to F^0 B\Sigma_3 / J \cong F^0 \] classifies this subgroup. Along the base change \[ E^0 B\Sigma_3 / I \to F^0 \otimes_{E^0} (E^0 B\Sigma_3 / I) \cong (F^0 \otimes_{E^0} E^0 B\Sigma_3) / J \cong F^0 B\Sigma_3 / J, \] the special fiber of the 3-divisible group of $\widehat{C}$ which consists solely of a formal component may split into formal and \'etale components. We want to take the formal component so as to keep track of the unique order-3 subgroup of the formal group over $F^0$. This subgroup gives rise to the $K(1)$-local power operation $\psi_F^3$. Recall from \eqref{S_3} that $S_3 = S[\alpha] \big/ \big( w(\alpha) \big)$. Since \[ w(\alpha) = \alpha^4 - 6 \alpha^2 + (h - 9) \alpha - 3 \equiv \alpha (\alpha^3 + h) ~~{\rm mod}~ 3, \] the equation $w(\alpha) = 0$ has a unique root $\alpha = 0$ in ${\mb F}_9 (\!(h)\!)$ (cf.~\eqref{Amod3}). By Hensel's lemma this unique root lifts to a root in ${\mb Z}_9 (\!(h)\!)_3^\wedge$; it corresponds to the unique order-3 subgroup of $\widehat{C}$ over $F^0$. Plugging this specific value of $\alpha$ into the formulas for $\psi^3$ in Corollary \ref{cor:psi3}, we then get an endomorphism of the ring $F^0$. This endomorphism is the $K(1)$-local power operation $\psi_F^3$. Explicitly, with $h$ invertible in $F^0$, we solve for $\alpha$ from $w(\alpha) = 0$ by first writing \[ \alpha = (3 + 6 \alpha^2 - \alpha^4) / (h - 9) = (3 + 6 \alpha^2 - \alpha^4) \sum_{n = 1}^\infty 9^{n-1} h^{-n} \] and then substituting this equation into itself recursively. We plug the power series expansion for $\alpha$ into $\psi^3(h)$ and get \[ \psi_F^3(h) = h^3 - 27 h^2 + 183 h - 180 + 186 h^{-1} + 1674 h^{-2} + (\text{lower-order terms}). ~~~ \] Similarly, writing $h$ as $c^2 + 1$ in $w(\alpha) = 0$, we solve for $\alpha$ in terms of $c$ and get \[ \psi_F^3(c) = c^3 - 12 c - 6 c^{-1} - 84 c^{-3} - 933 c^{-5} - 10956 c^{-7} + (\text{lower-order terms}). \]
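The recursive substitution just described is easy to carry out by machine. The following Python/\texttt{sympy} sketch, which assumes nothing beyond the formulas of Corollary \ref{cor:psi3} and works in powers of $t = h^{-1}$, recovers the displayed expansion of $\psi_F^3(h)$; the analogous computation after writing $h = c^2 + 1$ gives $\psi_F^3(c)$.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')    # t plays the role of h^{-1}
h = 1/t
N = 8                  # working precision in powers of t

# fixed-point iteration alpha <- (3 + 6*alpha^2 - alpha^4)/(h - 9)
# for the unique root of w(alpha) with alpha = 0 mod 3
alpha = sp.Integer(0)
for _ in range(5):
    alpha = sp.series((3 + 6*alpha**2 - alpha**4)/(h - 9),
                      t, 0, N).removeO()

# psi^3(h) from the corollary above, evaluated at this root
psi3_h = (h**3 + (alpha**3 - 6*alpha - 27)*h**2
          + 3*(-6*alpha**3 + alpha**2 + 36*alpha + 67)*h
          + 57*alpha**3 - 27*alpha**2 - 334*alpha - 342)
print(sp.series(sp.expand(psi3_h), t, 0, 3))
# matches h^3 - 27*h^2 + 183*h - 180 + 186/h + 1674/h^2 + ... above
\end{verbatim}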
{ "timestamp": "2012-10-16T02:02:08", "yymm": "1210", "arxiv_id": "1210.3730", "language": "en", "url": "https://arxiv.org/abs/1210.3730" }
\subsection{Clustered Erd\H{o}s-R\'{e}nyi graphs} \label{sub:structure} Recall that for an integer $n \ge 1$ and $0 \le p \le 1$, the Erd\H{o}s-R\'{e}nyi graph $G(n, p)$ is the random graph obtained by starting with vertex set $V = \{1,2,\ldots,n\}$ and connecting each pair of vertices $u, v \in V$ independently with probability $p$. Let $\Pi$ denote a partition $(V_1, V_2, \ldots, V_k)$ of $V$, let $\pi$ denote the real number sequence $(p_1, p_2, \ldots, p_k)$, where $0 \le p_i \le 1$ for all $i$, and let $0 \le p' < \min_i \{p_i\}$. The \textit{clustered Erd\H{o}s-R\'{e}nyi} graph $G(\Pi, \pi, p')$ has vertex set $V$ and edges obtained by independently connecting each pair of vertices $u, v \in V$ with probability $p_i$ if $u, v \in V_i$ for some $i$ and with probability $p'$ otherwise (see Figure~\ref{fig:erdos}). Thus each induced subgraph $G[V_i]$ is the standard Erd\H{o}s-R\'{e}nyi graph $G(n_i, p_i)$, where $n_i = |V_i|$. \begin{figure}[t] \centering \fbox{ \begin{tikzpicture}[scale=1] \draw (2,0) ellipse (1 and 2); \node[blank] at (2,-2.5) {$V_1$}; \node[cir] at (2.2,1) (u1) {$u_1$}; \node[cir] at (1.8,-0.3) (u2) {$u_2$}; \node[cir] at (2.2,-1.2) (u3) {$u_{n_1}$}; \node[blank] at (2.4,0) {$\vdots$}; \draw (u1) -- (u2) node[above,left,midway]{$p_1$}; \draw (u2) -- (u3) node[above,left,midway]{$p_1$}; \draw (6,0) ellipse (1 and 2); \node[blank] at (6,-2.5) {$V_2$}; \node[cir] at (5.6,1) (v1) {$v_1$}; \node[cir] at (6,0) (v2) {$v_2$}; \node[cir] at (5.8,-1) (v3) {$v_{n_2}$}; \node[blank] at (6.5,0.5) {$\vdots$}; \draw (v1) -- (v2) node[above,left,midway]{$p_2$}; \draw (v2) -- (v3) node[above,left,midway]{$p_2$}; \node[blank] at (8.5,0) {$\ldots$}; \node[blank] at (8.5,-2.5) {$\ldots$}; \draw (11,0) ellipse (1 and 2); \node[blank] at (11,-2.5) {$V_k$}; \node[cir] at (11.2,1) (w1) {$w_1$}; \node[cir] at (10.7,-0.4) (w2) {$w_2$}; \node[cir] at (11.2,-1.2) (w3) {$w_{n_k}$}; \node[blank] at (10.4,0.3) {$\vdots$}; \draw (w1) -- (w3) node[above,right,midway]{$p_k$}; \draw (w2) -- (w3) node[above,left,midway]{$p_k$}; \draw (u1) -- (v2) node[below=5pt,left,midway]{$p'$}; \draw (v3) -- (w2) node[below=5pt,right,midway]{$p'$}; \end{tikzpicture} } \caption{The clustered Erd\H{o}s-R\'{e}nyi graph. We connect two nodes in the $i$-th ellipse (i.e., $V_i$) with probability $p_i$ and nodes from different ellipses are connected with probability $p'< \min_i\{p_i\}$. \label{fig:erdos}} \end{figure} Given that $p' < p_i$ for all $i$, one might view $G(\Pi, \pi, p')$ as having a natural community structure given by the vertex partition $\Pi$. Specifically, when $p'$ is much smaller than $\min_i\{p_i\}$, the inter-community edge density is much less than the intra-community edge density and it may be easier to detect the community structure $\Pi$. On the other hand, as the intra-community probabilities $p_i$ get closer to $p'$, it may be hard for an algorithm such as \textsc{Max-LPA} to identify $\Pi$ as the community structure. Similarly, if an intra-community probability $p_i$ becomes very small, then the subgraph $G[V_i]$ can itself be quite sparse and, independently of how small $p'$ is relative to $p_i$, any community detection algorithm may end up viewing $V_i$ as being composed of several communities. In the rest of the section, we explore values of the $p_i$'s and $p'$ for which \textsc{Max-LPA} ``correctly'' and quickly identifies $\Pi$ as the community structure of $G(\Pi, \pi, p')$. 
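For experimentation alongside the analysis that follows, both the model $G(\Pi, \pi, p')$ and \textsc{Max-LPA} itself (in the form of the update rule \eqref{eqn:label} of Section~\ref{sub:pre}) are straightforward to simulate. The sketch below is a minimal Python implementation of ours; the function names are ad hoc, and the parameters in the final two lines are chosen for quick illustration rather than to satisfy the hypotheses of Theorem~\ref{theorem:ER} literally.
\begin{verbatim}
import random
from collections import Counter

def sample_clustered_er(sizes, probs, p_out, seed=0):
    """Sample G(Pi, pi, p'): vertices within part i are joined with
    probability probs[i], vertices in different parts with p_out."""
    rng = random.Random(seed)
    block = [i for i, n_i in enumerate(sizes) for _ in range(n_i)]
    adj = [[] for _ in block]
    for u in range(len(block)):
        for v in range(u + 1, len(block)):
            p = probs[block[u]] if block[u] == block[v] else p_out
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def max_lpa(adj, rounds, seed=0):
    """Synchronous Max-LPA: labels start uniform in [0, 1]; each round,
    every node adopts the most frequent label in its closed neighborhood,
    breaking ties in favor of the larger label."""
    rng = random.Random(seed)
    labels = [rng.random() for _ in adj]
    for _ in range(rounds):
        new_labels = []
        for v, nbrs in enumerate(adj):
            counts = Counter(labels[u] for u in nbrs)
            counts[labels[v]] += 1  # closed neighborhood N'(v)
            new_labels.append(max(counts, key=lambda l: (counts[l], l)))
        labels = new_labels
    return labels

# two planted communities of 150 nodes each
adj = sample_clustered_er([150, 150], [0.5, 0.5], 0.01, seed=1)
print(len(set(max_lpa(adj, rounds=2, seed=1))))  # typically prints 2
\end{verbatim}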
\subsection{Analysis} \label{subsection:analysis} In the following theorem we establish fairly general conditions on the probabilities $\{p_i\}$ and $p'$ and on the node subset sizes $\{n_i\}$ and $n$ under which \textsc{Max-LPA} converges correctly, i.e., to the node partition $\Pi$, w.h.p. Furthermore, we show that under these circumstances just 2 rounds suffice for \textsc{Max-LPA} to reach convergence! \begin{lemma} \label{lemma:probBound} Let $G(\Pi, \pi, p')$ be a clustered Erd\H{o}s-R\'{e}nyi graph such that $p' < \min_i\{\frac{n_i}{n}\}$. Let $\ell_i$ be the maximum label of a node in $V_i$. Then for any node $v \in V_i$ the probability that $v$ is not adjacent to a node outside $V_i$ with label higher than $\ell_i$ is at least $1/(2e)$. \end{lemma} \begin{proof} Let $v'$ be a node in $V \setminus V_i$. Given that $|V_i| = n_i$ and $\ell_i$ is the maximum label among these $n_i$ nodes, the probability that the label assigned uniformly at random to $v'$ is larger than $\ell_i$ is $1/(n_i + 1)$. The probability that $v$ has an edge to $v'$ \textit{and} $v'$ has a higher label than $\ell_i$ is therefore $p'/(n_i + 1)$. Hence the probability that $v$ has no edge to a node outside $V_i$ with label larger than $\ell_i$ is $$\left(1 - \frac{p'}{n_i + 1}\right)^{n - n_i}.$$ We bound this expression below as follows: $$\left(1 - \frac{p'}{n_i + 1}\right)^{n - n_i} > \left(1 - \frac{p'}{n_i}\right)^{n} > \left(1 - \frac{1}{n}\right)^{n} > \frac{1}{2e}.$$ \qed \end{proof} \begin{theorem} \label{theorem:ER} Let $G(\Pi, \pi, p')$ be a clustered Erd\H{o}s-R\'{e}nyi graph. Suppose that the probabilities $\{p_i\}$ and $p'$ and the node subset sizes $\{n_i\}$ and $n$ satisfy the inequalities: $$\mbox{(i) } n_i p_i^2 > 8n p' \qquad\mbox{and}\qquad \mbox{(ii) } n_i p_i^4 > 1800 c \log n,$$ for some constant $c$. Then, given input $G(\Pi, \pi, p')$, \textsc{Max-LPA} converges correctly to the node partition $\Pi$ in two rounds w.h.p. (Note that condition (ii) implies that $p_i > \frac{\log n_i}{n_i}$ for each $i$, and hence each $G[V_i]$ is connected w.h.p.) \end{theorem} \begin{proof} Let $V_i = \{u_1, u_2, \ldots, u_{n_i}\}$ and without loss of generality assume that $\ell_{u_1} > \ell_{u_2} > \cdots > \ell_{u_{n_i}}$. Since all initial node labels are assumed to be distinct, every label occurs exactly once in the closed neighborhood of any node, so in the first round of \textsc{Max-LPA} each node acquires a label purely by tie-breaking. Since ties are broken in favor of larger labels, every node acquires the largest label in its closed neighborhood; in particular, all neighbors of $u_1$ in $V_i$ that have no neighbor outside $V_i$ with a label larger than $\ell_{u_1}$ will acquire the label $\ell_{u_1}$. Consider a node $v \in V_i$. Let $\beta$ denote the probability that $v$ has no neighbor outside $V_i$ with label larger than $\ell_{u_1}$. Note that inequality (i) in the theorem statement implies the hypothesis of Lemma \ref{lemma:probBound} and therefore $\beta > 1/(2e)$. The probability that $v$ is a neighbor of $u_1$ and has no neighbor outside $V_i$ with a label larger than $\ell_{u_1}$ is $\beta \cdot p_i$. Hence, after the first round of \textsc{Max-LPA}, in expectation $n_i \cdot \beta \cdot p_i$ nodes in $V_i$ will have acquired the label $\ell_{u_1}$. In the rest of the proof we will use $$X := n_i \cdot \beta \cdot p_i.$$ Now consider node $u_j$ for $j > 1$. For a node $v \in V_i$ to acquire the label $\ell_{u_j}$ it must be the case that $v$ is adjacent to $u_j$, not adjacent to any node in $\{u_1, u_2, \ldots, u_{j-1}\}$, and not adjacent to any node outside $V_i$ with a label higher than $\ell_{u_j}$. 
Since $\ell_{u_j}$ is smaller than $\ell_{u_1}$, the probability that $v$ is not adjacent to a node outside $V_i$ with label higher than $\ell_{u_j}$ is at most $\beta$. Thus the probability that a node in $V_i$ acquires the label $\ell_{u_j}$ is at most $p_i (1 - p_i)^{j-1} \cdot \beta < p_i (1 - p_i) \cdot \beta$. Furthermore, the probability that a node outside $V_i$ will acquire the label $\ell_{u_j}$ at the end of the first round is at most $p'$. Therefore the expected number of nodes in $V$ that acquire the label $\ell_{u_j}$ by the end of the first round is at most $n_i \cdot p_i (1 - p_i) \cdot \beta + (n - n_i)p'$. We now use inequality (i) and the fact that $2\beta e > 1$ to upper bound this expression as follows: $$n_i \cdot p_i(1 - p_i) \cdot \beta + (n - n_i)p' < n_i \cdot p_i(1 - p_i) \cdot \beta + \frac{2 \beta e \cdot n_i p_i^2}{8} < n_i \cdot p_i\left(1 - \frac{3p_i}{4}\right) \cdot \beta.$$ In other words, the expected number of nodes in $V$ that acquire the label $\ell_{u_j}$ by the end of the first round is at most $$Y := n_i \cdot p_i\left(1 - \frac{3p_i}{4}\right) \cdot \beta.$$ It is worth mentioning at this point that $X - Y = 3 n_i p_i^2 \beta/4$. Note that all the random variables we have utilized thus far, e.g., the number of nodes adjacent to $u_1$ and not adjacent to any node outside $V_i$ with label higher than $\ell_{u_1}$, can be expressed as sums of independent, identically distributed indicator random variables. Hence we can bound the deviation of such random variables using the tail bound in (\ref{eqn:chernoff2}). In particular, let $Y'$ denote $Y + \sqrt{3c Y \log n}$ and $X'$ denote $X - \sqrt{3c X \log n}$. With high probability, at the end of the first round of \textsc{Max-LPA}, the number of nodes in $V_i$ that acquire the label $\ell_{u_1}$ is at least $X'$ and the number of nodes in $V$ that acquire the label $\ell_{u_j}$, $j > 1$, is at most $Y'$. Next we bound the ``gap'' between $X'$ and $Y'$ as follows: \begin{align*} X' - Y' & = X - Y - \sqrt{3c X \log n} - \sqrt{3c Y \log n}\\ & > \frac{3n_i p_i^2 \beta}{4} - 2 \sqrt{3c X \log n}\\ & > \frac{3n_i p_i^2 \beta}{4} - 2 \sqrt{3c n_i p_i \beta \log n}\\ & > \frac{3n_i p_i^2 \beta}{4} - \frac{3n_i p_i^2 \beta}{5}\\ & = \frac{3 n_i p_i^2 \beta}{20} \end{align*} The second inequality follows from $X - Y = 3n_ip_i^2\beta/4$ and $Y < X$, the third from the fact that $X = n_i p_i \beta$, and the fourth by using inequality (ii) from the theorem statement. We now condition the execution of the second round of \textsc{Max-LPA} on the occurrence of the two high probability events: (i) the number of nodes in $V_i$ that acquire the label $\ell_{u_1}$ is at least $X'$, and (ii) the number of nodes in $V$ that acquire the label $\ell_{u_j}$, $j > 1$, is at most $Y'$. Consider a node $v \in V_i$ just before the execution of the second round of \textsc{Max-LPA}. Node $v$ has in expectation at least $p_i X'$ neighbors labeled $\ell_{u_1}$ in $V_i$. Also, node $v$ has in expectation at most $p_i Y'$ neighbors labeled $\ell_{u_j}$, for each $j > 1$, in $V$. Let us now use $X''$ to denote the quantity $p_i X' - \sqrt{3c p_i X' \log n}$ and $Y''$ to denote the quantity $p_i Y' + \sqrt{3c p_i Y' \log n}$. By using (\ref{eqn:chernoff2}) again, we know that w.h.p. $v$ has at least $X''$ neighbors with label $\ell_{u_1}$ and at most $Y''$ neighbors with a label $\ell_{u_j}$, $j > 1$. 
We will now show that $X'' > Y''$ and this will guarantee that in the second round of \textsc{Max-LPA} $v$ will acquire the label $\ell_{u_1}$, with high probability. Since $v$ is an arbitrary node in $V_i$, this implies that all nodes in $V_i$ will acquire the label $\ell_{u_1}$ in the second round of \textsc{Max-LPA} w.h.p. \begin{align*} X'' - Y'' & = p_i(X' - Y') - \sqrt{3c p_i X' \log n} - \sqrt{3c p_i Y' \log n}\\ & > \frac{3 n_i p_i^3 \beta}{20} - 2 \sqrt{3c p_i X' \log n}\\ & > \frac{3 n_i p_i^3 \beta}{20} - 2 \sqrt{3c n_i p_i^2 \beta \log n}\\ & > \frac{3 n_i p_i^3 \beta}{20} - \frac{n_i p_i^3 \beta}{10}\\ & = \frac{n_i p_i^3 \beta}{20}\\ & > 0 \end{align*} The second inequality follows from the bound on $X' - Y'$ derived earlier and $Y' < X'$, the third from the fact that $X' < n_i p_i \beta$, and the fourth by using inequality (ii) from the theorem statement. Thus at the end of the second round of \textsc{Max-LPA}, w.h.p., every node in $V_i$ has label $\ell_{u_1}$. This is of course true, w.h.p., for all of the $V_i$'s. Now note that every node $v \in V_i$ has, in expectation, $n_i p_i$ neighbors in $V_i$ and fewer than $n p'$ neighbors outside $V_i$. Inequality (i) implies that $n p' < n_i p_i/8$ and inequality (ii) implies that $n_i p_i = \Omega(\log n)$. Pick a constant $\epsilon > 0$ such that $n_i p_i (1 + \epsilon)/8 < n_i p_i (1 - \epsilon)$. By applying tail bounds (\ref{eqn:chernoff1}) and (\ref{eqn:chernoff3}), we see that w.h.p. $v$ has more than $n_i p_i (1 - \epsilon)$ neighbors in $V_i$ and fewer than $n_i p_i (1 + \epsilon)/8$ neighbors outside $V_i$. Hence, w.h.p. $v$ has no reason to change its label. Since $v$ is an arbitrary node in an arbitrary $V_i$, w.h.p. there are no further changes to the labels assigned by \textsc{Max-LPA}. \qed \end{proof} To understand the implications of Theorem \ref{theorem:ER} consider the following example. Suppose that the clustered Erd\"{o}s-R\'{e}nyi graph has $O(1)$ clusters and each cluster has size $\Theta(n)$. In such a setting, inequality (ii) from the theorem simplifies to requiring that each $p_i = \Omega((\log n/n)^{1/4})$ and inequality (i) simplifies to $p' < p_i^2/c$ for all $i$. This tells us, for instance, that \textsc{Max-LPA} converges in just two rounds on a clustered Erd\"{o}s-R\'{e}nyi graph in which each cluster has $\Theta(n)$ vertices, the intra-community probability is $\Theta(1/n^{1/5})$, and the inter-community probability is $\Theta(1/n^{2/5})$. This example raises several questions. If we were willing to allow more time for \textsc{Max-LPA} to converge, say $O(\log n)$ rounds, could we significantly weaken the requirements on the $p_i$'s and $p'$? Specifically, could we permit an intra-community probability $p_i$ to become as small as $c \log n/n$ for some constant $c > 1$? Similarly, could we permit the inter-community probability $p'$ to come much closer to the smallest $p_i$, say within a constant factor? We believe that it may be possible to obtain such results, but only via substantively different analysis techniques. \subsection{Preliminaries} \label{sub:pre} We use $G = (V,E)$ to denote an undirected connected graph (network) of size $n=|V|$. For $v \in V$, we denote by $N(v) = \{u : u \in V, (u,v) \in E\}$ the neighborhood of $v$ in graph $G$, by $deg(v) = |N(v)|$ the degree of $v$, and by $\Delta(G)=\max_{v\in V} deg(v)$ the maximum degree over all the vertices in $G$. A \textit{$k$-hop neighborhood} ($k \geqslant 1$) of $v$ is defined as $N_k(v) = \{w : \mbox{dist}_G(w, v) \le k\} \setminus \{v\}$.
We denote the \textit{closed neighborhood} (respectively, \textit{closed $k$-hop neighborhood}) of $v$ as $N'(v) = N(v) \cup \{v\}$ (respectively, $N'_k(v) = N_k(v) \cup \{v\}$). Denote by $\ell_u(t)$ the label of node $u$ just before round $t$. When the round number is clear from the context, we use $\ell_u$ to denote the current label of $u$. Since the number of possible label assignments is finite and the updates are deterministic, LPA will eventually behave periodically: there exist a round $t^*$ and a period $p \ge 1$ such that for all $0 \le i < p$, all $j = 0, 1, 2, \ldots$, and all $u \in V$, $$\ell_u(t^*+ i)=\ell_u(t^* + i + j \cdot p).$$ Then we say that \textsc{Max-LPA} has \textit{converged} in $t^*$ rounds. We now describe \textsc{Max-LPA} precisely (see \textbf{Algorithm 1}). Every node $v \in V$ is assigned a unique label uniformly and independently at random. For concreteness, we assume that these labels come from the range $[0, 1]$. At the start of a round, each node sends its label to all neighboring nodes. After receiving labels from all neighbors, a node $v$ updates its label as: \begin{equation} \label{eqn:label} \ell_v \leftarrow \max\left\{\ell \mid \sum_{u \in N'(v)}[\ell_u== \ell] \ge \sum_{u \in N'(v)}[\ell_u== \ell']\mbox{ for all $\ell'$}\right\}, \end{equation} where $[\ell_u==\ell]$ evaluates to 1 if $\ell_u=\ell$ and to 0 otherwise. Note that there is no randomness in the algorithm after the initial assignment of labels. \begin{algorithm}[t] \caption{\textsc{Max-LPA} on a node $v$} \label{algo:lpa} \begin{algorithmic} \STATE $i=0$ \STATE $l_v[i] \leftarrow$ random(0,1) \WHILE{true}\label{algo:lpa:while} \STATE $i++$; \STATE send $l_v[i-1]$ to all $u \in N(v)$ \STATE receive $l_u[i-1]$ from all $u \in N(v)$ \STATE $l_v[i] \leftarrow \max\left\{\ell \mid \sum_{u \in N'(v)}[\ell_u[i-1]== \ell] \ge \sum_{u \in N'(v)}[\ell_u[i-1]== \ell']\mbox{ for all $\ell'$}\right\}$ \ENDWHILE \end{algorithmic} \end{algorithm} By ``w.h.p.'' (with high probability) we mean with probability at least $1-\frac{1}{n^c}$ for some constant $c\geqslant1$. In this paper we repeatedly use the following versions of a tail bound on the probability distribution of a random variable, due to Chernoff and Hoeffding \cite{chernoff1952measure, hoeffding1963probability}. Let $X_1, X_2, \ldots, X_m$ be independent and identically distributed binary random variables. Let $X = \sum_{i = 1}^m X_i$. Then, for any $0 \le \epsilon \le 1$ and $c \geqslant 1$, \begin{align} \label{eqn:chernoff1} \Pr\left[X > (1 + \epsilon) \cdot E[X] \right] & \le \exp\left(-\frac{\epsilon^2 E[X]}{3}\right)\\ \label{eqn:chernoff3} \Pr\left[X < (1 - \epsilon)\cdot E[X]\right] & \le \exp \left(-\frac{\epsilon^2 E[X]}{2}\right) \\ \label{eqn:chernoff2} \Pr\left[|X - E[X]| > \sqrt{ 3c \cdot E[X] \cdot \log n}\right] & \le \frac{1}{n^c} \end{align} \subsection{Results} \label{sub:result} As mentioned earlier, the purpose of this paper is to counterbalance the predominantly empirical line of research on LPA and initiate a systematic analysis of \textsc{Max-LPA}. Our main results can be summarized as follows: \begin{itemize} \item As a ``warm-up'' we prove (Section~\ref{sec:path}) that when executed on an $n$-node path \textsc{Max-LPA} converges to a cycle of period one in $\Theta(\log n)$ rounds w.h.p. Moreover, we show that w.h.p. the state that \textsc{Max-LPA} converges to has $\Omega(n)$ communities. \item In our main result (Section~\ref{sec:er}), we define a class of random graphs that we call \textit{clustered Erd\"{o}s-R\'{e}nyi graphs}.
A clustered Erd\"{o}s-R\'{e}nyi graph $G = (V, E)$ comes with a node partition $\Pi = (V_1, V_2, \ldots, V_k)$ and pairs of nodes in each $V_i$ are connected with probability $p_i$ and pairs of nodes in distinct parts in $\Pi$ are connected with probability $p' < \min_i \{p_i\}$. Since $p'$ is small relative to any of the $p_i$'s, one might view a clustered Erd\"{o}s-R\'{e}nyi graph as having a natural community structure given by $\Pi$. We prove that even with fairly general restrictions on the $p_i$'s and $p'$ and on the sizes of the $V_i$'s, \textsc{Max-LPA} converges to a period-1 cycle in just 2 rounds, w.h.p. \textit{and} ``correctly'' identifies $\Pi$ as the community structure of $G$. \item Roughly speaking, the above result requires each $p_i$ to be $\Omega\left(\left(\frac{\log n}{n}\right)^{1/4}\right)$. We believe that \textsc{Max-LPA} would correctly and quickly identify $\Pi$ as the community structure of a given clustered Erd\"{o}s-R\'{e}nyi graph even when the $p_i$'s are much smaller, e.g. even when $p_i = \frac{c \log n}{n}$ for $c > 1$. However, at this point our analysis techniques do not seem adequate for situations with smaller $p_i$ values and so we provide empirical evidence (Section~\ref{sec:erp}) for our conjecture that \textsc{Max-LPA} correctly converges to $\Pi$ in $O(\mbox{polylog}(n))$ rounds even when $p_i = \frac{c \log n}{n}$ for some $c > 1$ and $p'$ is just a logarithmic factor smaller than $p_i$. \end{itemize} \subsection{Related Work}\label{sub:relwork} There are several variants of LPA presented in the literature~\cite{cordasco2010community, gregory2009finding, subelj2011unfolding, liu2009bipartite}. Most of these are concerned about ``quality'' of the output and present empirical studies of output produced by LPA. Raghavan et al.~\cite{raghavan2007near}, based on the experiments, claimed that at least 95\% of the nodes are classified correctly by the end of 5 rounds of label updates. But the experiments that they carried out were on the small networks. Cordasco and Gargano~\cite{cordasco2010community} proposed a semi-synchronous approach which is guaranteed to converge without oscillations and can be parallelized. They provided a formal proof of convergence but did bound the running time of the algorithm. Lui and Murata~\cite{liu2009bipartite} presented a variation of LPA for bipartite networks which converges but no formal proof is provided, neither for the convergence nor for the running time. Leung et al.~\cite{leung2008towards} presented empirical analysis of quality of output produced by LPA on larger data sets. From experimental results on a special structured network they claimed that running time of LPA is $O(\log n)$. \section{Introduction} \input{introduction} \section{Analysis of \textsc{Max-LPA} on a Path} \label{sec:path} \input{path} \section{Analysis of \textsc{Max-LPA} on Clustered Erd\"{o}s-R\'{e}nyi Graphs} \label{sec:er} \input{clustered_er} \section{Empirical Results on Sparse Erd\"{o}s-R\'{e}nyi Graphs} \label{sec:erp} \input{sparse_er_arxiv} \section{Future Work} \label{section:futureWork} We believe that with some refinements, the analysis technique used to show $O(\log n)$-rounds convergence of \textsc{Max-LPA} on paths, can be used to show poly-logarithmic convergence on sparse graphs in general, e.g., those with degree bounded by a constant. This is one direction we would like to take our work in. 
At this point the techniques used in Section~\ref{sec:er} do not seem applicable to sparser clustered Erd\"{o}s-R\'{e}nyi graphs. But if we were willing to allow more time for \textsc{Max-LPA} to converge, say $O(\log n)$ rounds, could we significantly weaken the requirements on the $p_i$'s and $p'$? Specifically, could we permit an intra-community probability $p_i$ to become as small as $c \log n/n$ for some constant $c > 1$? Similarly, could we permit the inter-community probability $p'$ to come much closer to the smallest $p_i$, say within a constant factor? This is another direction for our research. \subsubsection*{Acknowledgments.} We would like to thank James Hegeman for helpful discussions and for some insightful comments. \subsection{Simulation Setup} We implemented \textsc{Max-LPA} in a C program and executed it on a Linux machine (with a 2.4 GHz Intel(R) Core(TM)2 processor). We examined the number of rounds \textsc{Max-LPA} takes and also the number of communities it declares at the end of the execution. We executed \textsc{Max-LPA} on $G(n,p)$ and on $G(\Pi, \pi, p')$ with $\Pi = (V_1, V_2)$, $|V_1| = |V_2| = n/2$, $\pi = (p, p)$, $p'=0.6/n$ for various values of $n$ and $p$. For each $n$, $p$ combination we ran \textsc{Max-LPA} 50 times. We used $p$ values of the form $\frac{c\cdot \log n}{n}$ for various values of $c\geq 1$. \subsection{Results} We executed \textsc{Max-LPA} using the setup discussed above. Table~\ref{tab:multi} shows, for each pair of $n$ and $c$ values, the number of simulations (out of 50) in which \textsc{Max-LPA} ended with a single community. If the input graph is disconnected then obviously there will be multiple communities. Therefore, we also noted the number of simulations in which the graph was connected; this number is shown in brackets. \begin{table}[!ht] \centering \caption{This table shows simulations on Erd\"{o}s-R\'{e}nyi graphs $G(n, p)$ where $p = \frac{c \log n}{n}$. Each entry shows the number of simulations (out of 50 per pair of $n$ and $c$ values) in which a single community is declared by \textsc{Max-LPA}; the number of simulations in which the graph $G(n,p)$ was connected is shown in brackets.\label{tab:multi}} \begin{tabularx}{\textwidth}{|X|X|X|X|X|} \hline $n$ & $c=1$ & $c=1.2$ & $c=1.5$ & $c=1.7$\\ \hline 1000 & 44 (50) & 47 (47) & 50 (50) & 50 (50)\\ \hline 2000 & 42 (46) & 47 (50) & 47 (50) & 50 (50)\\ \hline 4000 & 45 (47) & 49 (50) & 50 (50) & 50 (50) \\ \hline 8000 & 47 (48) & 50 (50) & 50 (50) & 50 (50) \\ \hline 16000 & 49 (50) & 50 (50) & 50 (50) & 50 (50) \\ \hline 32000 & 49 (50) & 50 (50) & 50 (50) & 50 (50) \\ \hline 64000 & 50 (50) & 50 (50) & 50 (50) & 50 (50)\\ \hline 128000 & 50 (50) & 50 (50) & 50 (50) & 50 (50)\\ \hline \end{tabularx} \end{table} \begin{figure}[!ht] \centering \input{fig} \caption{Number of rounds for \textsc{Max-LPA} when executed on sparse Erd\"{o}s-R\'{e}nyi graphs (averaged over the simulations, out of 50 per $n$ and $p$, that ended with a single community). \label{fig:rounds}} \end{figure} It is well known that $p=\frac{\log n}{n}$ is a threshold for connectivity in Erd\"{o}s-R\'{e}nyi graphs, and this is why we get a few runs for $c=1$ in which the input graph was disconnected. From Table~\ref{tab:multi}, we can say that \textsc{Max-LPA}, when executed on Erd\"{o}s-R\'{e}nyi graphs with $p=\frac{c\log n}{n}$ and $c>1$, terminates with one community with high probability. It also seems to be the case that as $c$ increases, we get more single-community runs.
This is because as $c$ increases, the graph becomes denser. Figure~\ref{fig:rounds} shows a plot of the number of rounds \textsc{Max-LPA} takes to converge on $G(n, p)$ as $n$ increases, averaged over all simulations which resulted in a single community at the end of the execution. The running time seems to grow linearly with the logarithm of the graph size. Also, as $c$ increases the running time decreases, which implies that as the graph becomes denser \textsc{Max-LPA} converges more quickly to a single community. Our results lead us to conjecture that when \textsc{Max-LPA} is executed on Erd\"{o}s-R\'{e}nyi graphs $G(n,p)$ with $p=\Theta(\frac{\log n}{n})$ it will, with high probability, terminate with a single community in $O(\log n)$ rounds. Table~\ref{tab:clustered} shows, for each pair of $n$ and $c$ values, the number of simulations (out of 50) in which \textsc{Max-LPA} correctly identified the partition $\Pi$ when executed on $G(\Pi,\pi,p')$ with $p'=\frac{0.6}{n}$. From the results in Table~\ref{tab:multi}, for $c=1.5$ \textsc{Max-LPA} declared a single community w.h.p.\ when executed on $G(n,p)$. Therefore in these experiments we started with $c=1.5$. But for $c=1.5$, the influence from nodes in the other part is significant. As $c$ increases, this influence becomes insignificant compared to the influence from nodes within the same part. \begin{table}[!ht] \centering \caption{This table shows simulations of \textsc{Max-LPA} on $G(\Pi, \pi, p')$ with $\Pi = (V_1, V_2)$, $|V_1| = |V_2| = n/2$, $\pi = (p, p)$, where $p = \frac{c \log n}{n}$ and $p' = \frac{0.6}{n}$. Each entry shows, for particular $n$ and $c$ values, the number of simulations out of 50 in which \textsc{Max-LPA} identified the two communities $V_1$ and $V_2$. The number of simulations in which the graph was connected is shown in brackets.\label{tab:clustered}} \begin{tabularx}{\textwidth}{|X|X|X|X|} \hline $n$ & $c=1.5$ & $c=2$ & $c=4$ \\ \hline 1000 & 22 (45) & 39 (50) & 50 (50) \\ \hline 2000 & 21 (39) & 40 (50) & 50 (50) \\ \hline 4000 & 22 (36) & 47 (50) & 50 (50) \\ \hline 8000 & 14 (38) & 47 (50) & 50 (50) \\ \hline 16000 & 26 (35) & 49 (49) & 50 (50) \\ \hline 32000 & 17 (33) & 49 (49) & 50 (50) \\ \hline 64000 & 26 (34) & 46 (50) & 50 (50) \\ \hline 128000 & 5 (35) & 47 (47) & 50 (50) \\ \hline \end{tabularx} \end{table}
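For completeness, the following minimal Python sketch reimplements the synchronous update rule (\ref{eqn:label}) and the clustered Erd\"{o}s-R\'{e}nyi model used above. It is our own illustrative reimplementation, not the C program with which the experiments were run; the function names and the parameter choices in the driver are ours.
\begin{verbatim}
import numpy as np

def clustered_er(sizes, ps, p_out, rng):
    # Sample G(Pi, pi, p') as a boolean adjacency matrix: node pairs inside
    # part i are joined with probability ps[i], pairs across parts with p_out.
    n = sum(sizes)
    block = np.repeat(np.arange(len(sizes)), sizes)
    prob = np.where(block[:, None] == block[None, :],
                    np.asarray(ps, dtype=float)[block][:, None], p_out)
    upper = np.triu(rng.random((n, n)) < prob, k=1)  # independent edges
    return upper | upper.T

def max_lpa(adj, max_rounds=50, rng=None):
    # Synchronous Max-LPA: every node adopts the most frequent label in its
    # closed neighborhood, breaking ties in favor of the largest label.
    rng = np.random.default_rng() if rng is None else rng
    labels = rng.random(adj.shape[0])      # distinct labels, almost surely
    for t in range(1, max_rounds + 1):
        new = np.empty_like(labels)
        for v in range(adj.shape[0]):
            nbhd = np.append(np.flatnonzero(adj[v]), v)      # N'(v)
            vals, counts = np.unique(labels[nbhd], return_counts=True)
            new[v] = vals[counts == counts.max()].max()      # update rule
        if np.array_equal(new, labels):    # detects period-1 convergence only
            return labels, t
        labels = new
    return labels, max_rounds

rng = np.random.default_rng(1)
adj = clustered_er([500, 500], [0.3, 0.3], 0.002, rng)
labels, rounds = max_lpa(adj, rng=rng)
print(len(np.unique(labels)), rounds)   # typically 2 communities, few rounds
\end{verbatim}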
{ "timestamp": "2012-10-16T02:02:14", "yymm": "1210", "arxiv_id": "1210.3735", "language": "en", "url": "https://arxiv.org/abs/1210.3735" }
\section{Introduction} Young dense star clusters observed in the Milky Way and the Large Magellanic Cloud (LMC), e.g., R136 \citep{1998ApJ...493..180M,2010MNRAS.408..731C}, NGC 3603 \citep{2006AJ....132..253S,2008ApJ...675.1319H}, Westerlund 1 \citep{2005A&A...434..949C,2008A&A...478..137B,2011MNRAS.412.2469G} and 2 \citep{2007A&A...466..137A,2007A&A...463..981R}, are good samples for understanding the formation mechanism of dense star clusters. They are massive ($\sim 10^5M_{\odot}$) and dense ($>10^4M_{\odot}{\rm pc}^{-3}$), and seem to be approaching (or might have experienced) core collapse although they are young ($<4$ Myr) \citep{2003MNRAS.338...85M}. For example, for R136 in the LMC, its high core density ($>5\times10^4 M_{\odot}{\rm pc}^{-3}$) \citep{2003MNRAS.338...85M} and the existence of high-velocity stars (runaway stars) escaping from the cluster \citep{2007ASPC..367..629B,2010ApJ...715L..74E, 2011A&A...530L..14B,2011MNRAS.410..304G} suggest that it experienced core collapse \citep{2011Sci...334.1380F}. If such a young massive cluster experiences core collapse, repeated collisions (so-called runaway collisions) of stars, and as a consequence the formation of very massive stars ($>100M_{\odot}$), are expected \citep{1999A&A...348..117P,2002ApJ...576..899P}. Very massive stars formed through multiple stellar collisions could in turn result in the formation of intermediate-mass black holes (IMBHs) \citep{2001ApJ...562L..19E}. The formation of IMBHs in dense star clusters via multiple collisions has been studied using $N$-body simulations \citep{1999A&A...348..117P, 2004Natur.428..724P,2004ApJ...604..632G,2006MNRAS.368..141F}, and the results suggest that IMBHs with $10^2-10^3M_{\odot}$ could be formed in such dense clusters. When stellar evolution is included, however, the high mass-loss rate due to the stellar winds of massive stars prevents the growth of the massive stars \citep{2007ApJ...659.1576B,2009A&A...497..255G}. A very high collision rate is required for such very massive stars to overcome the copious mass loss and nevertheless lead to the formation of an IMBH \citep{2008ApJ...686.1082F}. There are several mechanisms that can enhance the growth rate of the very massive stars, but the most important factor is the moment of core collapse, $t_{\rm cc}$. This short but high-density phase is necessary for the cluster to become collisionally dominated, which is critical for the collision rate of stars in the cluster. An earlier collapse assists efficient mass accumulation because stars can start multiple collisions before the cluster starts to lose massive stars via stellar evolution. The core-collapse time is determined by the relaxation time of the cluster if the cluster is initially virialized, and it is roughly 20\% of the half-mass relaxation time, $t_{\rm rh}$, for a Salpeter-type power-law mass function \citep{2002ApJ...576..899P,2003gmbp.book.....H}. For the most massive clusters, therefore, it is difficult to reach core collapse before the end of the main-sequence lifetime of the most massive stars, which is $\sim 3$ Myr for stars with $> 40M_{\odot}$. One way to achieve an early core collapse is to adopt kinematically cool initial conditions. A sub-virial cluster can reach core collapse faster than an initially virialized cluster \citep{2009ApJ...700L..99A}, and that enables efficient mass growth via multiple collisions of stars. Another way to enhance the stellar-mass growth is by adopting mass-segregated initial conditions.
A mass function causes massive stars to sink to the cluster center, and as a result massive stars pile up in the cluster core. The mass growth due to stellar collisions can be quite efficient if massive stars concentrate in the core. Initial mass segregation enhances the growth rate of the colliding stars \citep{2008ApJ...682.1195A,2012ApJ...752...43G}, and cool initial conditions result in a high degree of mass segregation in a short time \citep{2009ApJ...700L..99A}. Another way for star clusters to reduce their core-collapse time is by the assemblage of sub-clusters \citep{1972A&A....21..255A, 2007ApJ...655L..45M,2009MNRAS.400..657M,2011ApJ...732...16Y, 2011MNRAS.416..383S,2012ApJ...753...85F}. The short relaxation time of sub-clusters compared to initially more massive clusters causes early mass segregation and core collapse. Since the memory of such early dynamical evolution is conserved in the merger remnant \citep[][hereafter Paper 1]{2007ApJ...655L..45M,2012ApJ...753...85F}, the formation of star clusters by assembling sub-clusters is also an effective route to efficient multiple collisions of stars in young star clusters. In Paper 1, we found that the formation scenario of young dense star clusters via mergers of ensemble clusters can successfully explain the mature characteristics of young massive star clusters such as R136 in the 30 Dor region. The age of R136 is only 2--3 Myr, but it shows dynamically mature characteristics, such as mass segregation, a high core density, and a wealth of high-velocity escaping stars \citep{2003MNRAS.338...85M,2007ASPC..367..629B, 2010ApJ...715L..74E,2011A&A...530L..14B,2011MNRAS.410..304G}. However, the relaxation time of R136 obtained from its current mass and radius is $\sim 100$ Myr \citep{2003MNRAS.338...85M}, which is too long for the cluster to have reached core collapse at its current age. In Paper 1, we performed a series of $N$-body simulations of ensemble clusters and demonstrated that ``ensemble''-cluster models can reproduce observations such as the core density, the fraction of high-velocity escapers, and the distribution of massive stars which experienced collisions, whereas ``solo''-cluster models, which are initially spherical and virialized, fail to reproduce these observations. Furthermore, these characteristics of the ensemble models are also consistent with the characteristics of other massive young clusters, like R136 in the LMC and NGC 3603 in the Milky Way \citep{2010MNRAS.408..731C}. If young dense clusters formed via assembling sub-clusters and have experienced core collapse, it is expected that repeated collisions can lead to the formation of very massive stars and possibly even IMBHs. In the observed young dense clusters, however, there is no evidence of IMBHs, but some very massive stars with initial masses of 100--300 $M_{\odot}$ are observed \citep{2006AJ....132..253S, 2011A&A...530L..14B, 2009Ap&SS.324..321M, 2011MNRAS.412.2469G,2011MNRAS.416..501R}. In this paper, we perform a series of $N$-body simulations of solo and ensemble star clusters and demonstrate that the growth of very massive stars through multiple collisions is mediated by star cluster complexes. Our simulations show that the quick dynamical evolution of ensemble clusters does not always result in the formation of extremely high-mass stars.
When the assembly of clusters proceeds after each sub-cluster has experienced core collapse (the ``late-assembling'' case), the multiple-collision stars that form in each sub-cluster fail to coalesce into a single extremely massive star; instead, several very massive stars form. Some of these very massive stars can escape from the cluster as high-velocity stars due to three-body or binary-binary encounters. When the sub-clusters assemble before they experience core collapse (the ``early-assembling'' case), the collision rate is enhanced and the assembled cluster forms an extremely massive star of $\sim 1000 M_{\odot}$. \section{Method and Initial Conditions} We performed a series of $N$-body simulations of solo clusters and of ensemble clusters, whose sub-clusters merge into a single cluster with a mass equal to that of the corresponding solo cluster. For the sub-clusters of the ensembles, we adopted two models: (A) a King model \citep{1966AJ.....71...64K} with a dimensionless concentration parameter $W_0=2$ and total mass $M_{\rm cl}=6300M_{\odot}$, and (B) a King model with $W_0=5$ and $M_{\rm cl}=2.5\times 10^4 M_{\odot}$. The half-mass radii, $r_{\rm h}$, of these models are 0.092 and 0.22 pc, and the numbers of particles, $N$, are 2048 (2k) and 8192 (8k), respectively. The core density is the same for both models ($\rho_{\rm c} \simeq 2 \times 10^6 M_{\odot}\,\mathrm{pc}^{-3}$). We assumed a Salpeter initial mass function (IMF) \citep{1955ApJ...121..161S} between 1 and 100 $M_{\odot}$. We call these models 2kw2 and 8kw5. We distributed 4 or 8 of these sub-clusters in two different initial configurations: spherical or filamentary. The former model stems from clumpy star formation in giant molecular clouds, and the latter is motivated by star formation in a filamentary gas distribution or in a shocked region of colliding gas in the spiral arms of a galactic disk. The clumpy configuration is motivated by observations of Westerlund 1 \citep{2011MNRAS.412.2469G} and R136 \citep{2012ApJ...754L..37S} and by simulations \citep{2011MNRAS.410.2339B,2011IAUS..270..483S}. For the spherical models, we adopted 4 or 8 copies of model 2kw2 as sub-clusters, and distributed them randomly in a volume with a radius of $r_{\rm max}$ and with zero velocity. We varied $r_{\rm max}$ between 1 and 6 pc. For the filamentary models, we initialized 8 individual sub-clusters of model 8kw5 with two different initial mean separations (models e8k8f1 and e8k8f2), again with zero velocity. The initial positions of the sub-clusters for these models are illustrated in Figure \ref{fig:init_pos}. All runs are summarized in Table \ref{tb:model}. For the solo models, we adopted two more initial conditions with $M_{\rm cl}$ of $5.1 \times 10^4 M_{\odot}$ and $2.0 \times 10^5 M_{\odot}$. With the same mass function, these models have 16384 (16k) and 65536 (64k) stars and are initialized using King models with $W_0=6$ and 8, respectively. In order to obtain the same core density as that of the sub-clusters, their half-mass radii are 0.32 and 1.0 pc. We call these models 16kw6 and 64kw8. In Table \ref{tb:model_cl} we summarize the initial conditions, and we present the initial density profiles in Figure \ref{fig:density_init}. For model 16kw6 we also performed simulations with sub-virial (cold) initial conditions, in which we reduced the velocity of each particle to two-thirds and to 10\% of the virialized value; we call these models s16k-cool and s16k-cold, respectively (see Table \ref{tb:model}).
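Drawing the stellar masses from the truncated Salpeter IMF is a standard inverse-transform computation; a minimal sketch is given below (our own illustration, not the code used for the runs; the function name and defaults are ours).
\begin{verbatim}
import numpy as np

def sample_salpeter(n, m_min=1.0, m_max=100.0, alpha=2.35, rng=None):
    # Draw n masses (in Msun) from dN/dm ~ m^(-alpha) on [m_min, m_max]
    # by inverting the cumulative distribution function.
    rng = np.random.default_rng() if rng is None else rng
    a = 1.0 - alpha              # exponent of the integrated power law
    u = rng.random(n)
    return (m_min**a + u * (m_max**a - m_min**a))**(1.0 / a)

masses = sample_salpeter(2048)
print(masses.mean())             # close to 3 Msun for these limits
\end{verbatim}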
The $N$-body simulations are performed using a sixth-order Hermite scheme with individual timesteps and an accuracy parameter $\eta=0.15$--$0.3$ \citep{2008NewA...13..498N}. We chose the accuracy parameter to balance speed and accuracy; the energy error was $< 0.1$\% for all runs. Our code does not include special treatment for binaries, but the sixth-order Hermite scheme can handle the hard binaries formed in our simulations (see Section 2 in Paper 1). We treated collisions of stars with a sticky-sphere approach and included mass loss due to the stellar wind for stars with $>100M_{\odot}$ at a rate of $5.0\times 10^{-7} M_{\odot}$yr$^{-1}$ \citep{2009ApJ...695.1421F}. We neglected the mass loss from stars with $<100M_{\odot}$ because it does not affect the results on the short timescale of our simulations ($<5$ Myr). The stellar radii are taken from the zero-age main sequence for solar metallicity \citep{2000MNRAS.315..543H}. \begin{table*} \begin{center} \caption{Models of single clusters\label{tb:model_cl}} \begin{tabular}{cccccccccc}\hline \hline Model & $N$ & $M_{\rm cl}$ & $W_0$ & $r_{\rm h}$ & $\rho_{\rm c}$& $\sigma$ & $t_{\rm rh}$ & $t_{\rm rc}$ &$M_{\rm core}/M_{\rm cl}$ \\ & & ($M_{\odot}$) & & (pc) & ($M_{\odot}$pc$^{-3}$) & (km/s) & (Myr) & (Myr) & \\\hline 2kw2 & 2048 & $6.3\times 10^3$ & 2 & 0.097 & $1.7\times 10^6$ & 11 & 0.30 & 0.58 & 0.28 \\ 8kw5 & 8192 & $2.5\times 10^4$ & 5 & 0.22 & $1.7\times 10^6$ & 15 & 1.9 & 0.92 & 0.15 \\ 16kw6 & 16384 & $5.1\times 10^4$ & 6 & 0.32 & $1.7\times 10^6$ & 17 & 4.4 & 1.1 & 0.12\\ 64kw8 & 65536 & $2.0\times 10^5$ & 8 & 1.0 & $1.6\times 10^6$ & 19 & 44 & 1.8 & 0.053 \\ \hline \end{tabular} \medskip \\ $\sigma$ is the velocity dispersion. \end{center} \end{table*} \begin{figure} \begin{center} \includegraphics[width=80mm]{f1.eps} \caption{Initial density profiles of single clusters\label{fig:density_init}} \end{center} \end{figure} \begin{table*} \begin{center} \caption{Runs\label{tb:model}} \begin{tabular}{cccccc}\hline Model & $N_{\rm cl}$ & geometry & $\langle d_{\rm min}\rangle$ (pc) & (sub-)cluster & $N_{\rm run}$ \\ \hline\hline e2k4r3 & 4 & spherical & 2.5 & 2kw2 & 3\\ e2k4r6 & 4 & spherical & 5.1 & 2kw2 & 1\\ \hline e2k8r1 & 8 & spherical & 0.51 & 2kw2 & 2\\ e2k8r3 & 8 & spherical & 1.3 & 2kw2 & 1\\ e2k8r5 & 8 & spherical & 2.8 & 2kw2 & 2\\ e2k8r6 & 8 & spherical & 3.3 & 2kw2 & 2\\ \hline e8k8f1 & 8 & filamentary & 2.8 & 8kw5 & 1\\ e8k8f2 & 8 & filamentary & 4.2 & 8kw5 & 1\\ \hline \hline s2k & 1 & - & - & 2kw2 & 7 \\ s8k & 1 & - & - & 8kw5 & 6\\ s16k & 1 & - & - & 16kw6 & 6\\ s64k & 1 & - & - & 64kw8 & 2\\ \hline s16k-cool & 1 & - & - & 16kw6 & 2\\ s16k-cold & 1 & - & - & 16kw6 & 1\\ \hline \end{tabular} \medskip \\ The models are named according to the following rules: ``e'' and ``s'' indicate ensemble and solo models, respectively. For ensemble models, the following numbers indicate the number of particles of the sub-clusters and the number of sub-clusters. The last part indicates the initial configuration of the sub-clusters: ``r'' followed by a number indicates a spherical configuration with the given maximum radius, $r_{\rm max}$, and ``f'' indicates filamentary initial configurations (see Figure \ref{fig:init_pos} for the initial positions of the sub-clusters in these models). For solo models, the number indicates the number of particles. $\langle d_{\rm min}\rangle$ is the average distance to the nearest-neighbour sub-clusters, and $N_{\rm run}$ is the number of runs.
s16k-cool and s16k-cold are the same model as s16k, but with velocities reduced to 67\% and 10\% of those of s16k, respectively. \end{center} \end{table*} \begin{figure*} \begin{center} \includegraphics[width=70mm]{f2a.eps} \includegraphics[width=70mm]{f2b.eps} \caption{Initial positions of the sub-clusters for the ensemble models e8k8f1 (right) and e8k8f2 (left), mimicking filamentary star-forming regions. \label{fig:init_pos}} \end{center} \end{figure*} \section{Solo-cluster models} \subsection{Virialized solo-cluster models} We first describe the results of the initially virialized solo-cluster models, which we will refer to as the ``standard'' models. In Figure \ref{fig:cd} we present the time evolution of the core density for models s2k, s8k, s16k, and s64k. The core densities are calculated using the method of \citet{1985ApJ...298...80C}. We identify the moment when the cluster reaches the highest core density as the core-collapse time. The core-collapse time measured from the simulations is $t_{\rm cc} = 0.29\pm 0.07$, $0.71\pm 0.11$, $1.2\pm 0.13$, and $1.8\pm 0.0$ Myr for models s2k, s8k, s16k, and s64k, respectively (see also Table \ref{tb:results}). The core-collapse time is consistent with those obtained in previous simulations \citep{2004ApJ...604..632G}, if we take into account the differences in the mass range of the mass function. \citet{2004ApJ...604..632G} showed that the core-collapse time scales with the central relaxation time \citep{2003gmbp.book.....H}: \begin{eqnarray} t_{\rm rc} = \frac{0.065 \sigma_{\rm c, 3D} ^3}{G^2 \langle m\rangle \rho_{\rm c} \ln \Lambda}. \label{eq:tcrlx} \end{eqnarray} Here $G$, $\langle m \rangle$, $\sigma_{\rm c, 3D}$, and $\rho_{\rm c}$ are the gravitational constant, the mean mass of the stars, and the central velocity dispersion and density, respectively. Here $\ln \Lambda$ is the Coulomb logarithm, where $\Lambda$ is written as a function of the number of particles as $\gamma N$, with $\gamma \simeq 0.1$ for $t_{\rm rh}$ of star clusters \citep{1994MNRAS.268..257G}. We adopted the number of particles in the core as $N$. We find that the core-collapse time scales better using our definition than when we use $0.01N$, which is adopted by \citet{2004ApJ...604..632G}. In our simulations, $t_{\rm cc}/t_{\rm rc}\simeq 1$ for models s8k, s16k, and s64k, but $t_{\rm cc}/t_{\rm rc}\simeq 0.5$ for model s2k. For model s2k, however, $t_{\rm rh}$ is shorter than $t_{\rm rc}$ because the core radius exceeds the half-mass radius. If we adopt the shorter relaxation time, then $t_{\rm cc}/t_{\rm rh}\sim 1$ for all the models. \begin{figure} \begin{center} \includegraphics[width=84mm]{f3.eps} \caption{Time evolution of the core densities for the solo clusters. The results are averaged in order to reduce the run-to-run variations.\label{fig:cd}} \end{center} \end{figure} The core collapse of the cluster initiates a collision runaway in the cluster core \citep{1999A&A...348..117P}. In Figure \ref{fig:m_his_single} we present the merger histories of the multiple-collision stars in the solo-cluster simulations s2k, s8k, s16k, and s64k. In each model, one primary collision product (PCP) per cluster grows through repeated collisions of stars. In model s2k the mass loss due to the stellar wind exceeds the mass gain by the collisions, and therefore the PCP has lost all gained mass by the end of the simulation (5 Myr). PCPs grow up to a maximum mass $m_{\rm max}\sim 400 M_{\odot}$ via repeated collisions, but by the time it explodes such a star has $\sim 100M_{\odot}$.
Here we define $m_{\rm max}$ as the maximum mass that a star reaches during its lifetime as a result of collisions. PCPs are not the only stars that experience collisions. In models s8k, s16k, and s64k, we find secondary collision products (SCPs). In most cases SCPs experience only one collision (sometimes a few collisions), but they never grow as massive as PCPs, although SCPs sometimes exceed our adopted upper limit of the IMF ($100M_{\odot}$). The SCPs end up merging with PCPs (see bottom right panel in Figure \ref{fig:m_his_single}) or simply lose their mass by stellar evolution (see top right panel in Figure \ref{fig:m_his_single}). This result agrees with previous numerical simulations \citep{2006MNRAS.368..141F}. We also find that the time when the PCPs reach their maximum mass $m_{\rm max}$, $t_{\rm max}$, scales with $t_{\rm rc}$: $t_{\rm max}/t_{\rm rc}=$ 2.3, 2.2, 2.2, and 2.6 for models s2k, s8k, s16k, and s64k, respectively. \begin{figure*} \begin{center} \includegraphics[width=70mm]{f4a.eps} \includegraphics[width=70mm]{f4b.eps} \includegraphics[width=70mm]{f4c.eps} \includegraphics[width=70mm]{f4d.eps} \caption{Mass evolution of PCPs (solid curves) and SCPs (dashed curves) for models s2k, s8k, s16k, and s64k. The dotted lines indicate the core-collapse time. Crosses indicate the times when stars merged with more massive ones.\label{fig:m_his_single}} \end{center} \end{figure*} \subsection{Cold solo-cluster models} Sub-virial (cold) initial conditions lead to core collapse considerably earlier than virialized ones. Cold models have therefore been suggested to explain the dynamically advanced appearance of observed young star clusters \citep{2009ApJ...700L..99A}. In Figure \ref{fig:cd_cold} we present the core-density evolution of models s16k-cool and s16k-cold, whose initial velocities are 67\% and 10\% of the virialized values. These models reach core collapse much earlier than the virialized models, and as a consequence multiple collisions start earlier and proceed at a higher collision rate. In Figure \ref{fig:m_his_cold} we present the mass evolution of the PCPs for models s16k-cold and s16k-cool. Colder initial conditions result in a higher $m_{\rm max}$ of the PCPs. The high $m_{\rm max}$ is a result of the high collision rate, which is caused by the high density in the core (see Figure \ref{fig:cd_cold}). By the time the PCPs leave the main sequence (at $\sim 3$ Myr), their masses have been reduced considerably by stellar mass loss, which competes with the mass gain by collisions. In model s16k-cold, the PCP grows quickly in the beginning of the simulation, but after 0.5 Myr the mass-loss rate due to the stellar wind becomes higher than the mass-growth rate by stellar collisions. In model s16k-cool, the PCP stops growing at $\sim$0.5 Myr because the mass-growth rate balances the mass-loss rate, and the PCP then maintains its mass until the end of the simulation (3.5 Myr). Although the final masses of the PCPs are comparable in the s16k-cool and s16k-cold models, $m_{\rm max}$ of model s16k-cold is twice that of model s16k-cool.
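As a rough numerical cross-check of equation (\ref{eq:tcrlx}), the sketch below evaluates $t_{\rm rc}$ for model 16kw6. It is ours and only illustrative: the input values are read off Table \ref{tb:model_cl}, and we substitute the tabulated (global) velocity dispersion for the central one and estimate the core particle number from $M_{\rm core}$ and $\langle m \rangle$.
\begin{verbatim}
import numpy as np

G = 4.301e-3             # gravitational constant in pc (km/s)^2 / Msun
MYR_PER_PC_KMS = 0.978   # 1 pc/(km/s) expressed in Myr

def t_rc(sigma_c, rho_c, m_mean, n_core, gamma=0.1):
    # Central relaxation time: sigma_c in km/s, rho_c in Msun/pc^3,
    # m_mean in Msun; returns Myr.
    ln_lambda = np.log(gamma * n_core)
    return (0.065 * sigma_c**3 / (G**2 * m_mean * rho_c * ln_lambda)
            * MYR_PER_PC_KMS)

# Model 16kw6: sigma ~ 17 km/s, rho_c ~ 1.7e6 Msun/pc^3, <m> ~ 3 Msun,
# N_core ~ 0.12 * M_cl / <m>.
print(t_rc(17.0, 1.7e6, 3.0, n_core=0.12 * 5.1e4 / 3.0))
# ~0.6 Myr, within a factor of ~2 of the tabulated t_rc = 1.1 Myr,
# as expected given the approximations above.
\end{verbatim}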
\begin{figure} \begin{center} \includegraphics[width=84mm]{f5.eps} \caption{Time evolution of the core density, $\rho_{\rm c}$, for models s16k, s16k-cool, and s16k-cold.\label{fig:cd_cold}} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=84mm]{f6.eps} \caption{Merger history of the PCPs for models s16k, s16k-cool, and s16k-cold.\label{fig:m_his_cold}} \end{center} \end{figure} In Figure \ref{fig:m_max_single} we show $m_{\rm max}$ of the PCPs for all solo models. The maximum mass of the PCPs in models s8k, s16k, and s64k is quite similar ($\sim 400 M_{\odot}$) irrespective of $M_{\rm cl}$. One might expect that more massive clusters contain a larger number of massive stars and that therefore a more massive cluster can form a more massive PCP. In our simulations, however, the number of stars which merged into the PCPs and the mean mass of the merged stars are quite similar among these models (see Table \ref{tb:results}). Comparing models s8k, s16k, and s64k, their $\langle m_{\rm col} \rangle$ and $N_{\rm col}$ are quite similar even though their total cluster masses are different. If the collisions selectively occurred among the most massive stars and the numbers of collisions were the same, larger clusters should have a larger mean collision mass $\langle m_{\rm col} \rangle$, because larger clusters contain more massive stars. However, the number of massive stars available for collisions does not simply follow this expectation. In Figure \ref{fig:nr_m50} we plot the cumulative number distribution of massive stars with $m>50M_{\odot}$ at the moment at which the mass of the PCP reaches $m_{\rm max}$. The number of stars with $>50M_{\odot}$ within $\sim$0.05 pc is similar ($\sim 20$) among models s8k, s16k, and s64k and slightly smaller for model s2k. In particular, for models s16k and s64k, the distribution of massive stars preserves the initial distribution in the outer part of the cluster because the half-mass relaxation time exceeds $t_{\rm max}$. The dynamical evolution in these models is driven on a timescale of $t_{\rm rc}$, and they have similar core properties: $t_{\rm rc}$, $M_{\rm core}$, and $\rho _{\rm c}$ (see Table \ref{tb:model_cl} and Figure \ref{fig:cd}). In model s2k, on the other hand, $t_{\rm rc}\sim t_{\rm rh}$ and as a consequence the dynamical evolution proceeds throughout the entire cluster. Model s8k shows an evolution similar to that of model s2k: $m_{\rm max}/M_{\rm cl}$ for model s8k is as high as that of model s2k. For these models $t_{\rm rh}\sim 2$ Myr, which is sufficiently short for massive stars in the outer part of the cluster to join the collisions in the core. Similar to model s2k, models s16k-cool and s16k-cold can also gather massive stars from the entire cluster to the cluster center irrespective of their initial positions. In addition, these sub-virial models achieve a very high density (see Figure \ref{fig:cd_cold}), which enhances the collision rate. The massive stars in model s16k-cold are more concentrated towards the cluster center than in model s16k (see Figure \ref{fig:nr_m50}). Even though for model s2k the PCP can accumulate stars from the entire cluster population of massive stars, their total number and mass still cannot compete with the population of massive stars in the more massive clusters. In these latter models, the maximum mass of the PCP is limited by the reservoir of massive stars that manages to segregate into the core by the moment of core collapse.
A larger cluster mass therefore does not automatically lead to a more massive PCP. As seen in Figures \ref{fig:m_his_single} and \ref{fig:m_his_cold}, the mass evolution of the PCPs in models s2k, s8k, and s16k-cold shows a clear peak in the middle of the simulation. In the later phase, when the collision rate decays, the mass-loss rate exceeds the mass-growth rate by stellar collisions. In models s16k and s64k, on the other hand, the clusters have not exhausted their reservoir of massive stars, because their half-mass relaxation time is not shorter than the simulation time and therefore some of the massive stars still remain in the outer part of the clusters. We empirically obtained the relation $m_{\rm max} = 0.02M_{\rm cl}$ (dotted line in Figure \ref{fig:m_max_single}) for the low cluster-mass models ($M_{\rm cl} < 2\times 10^4M_{\odot}$) and the cold model s16k-cold. For massive clusters, however, $m_{\rm max}$ is smaller than this relation predicts. For the most massive cluster ($M_{\rm cl} = 2\times 10^5M_{\odot}$), $m_{\rm max}$ is consistent with the result presented by \citet{2002ApJ...576..899P}, $m_{\rm max} = 0.002M_{\rm cl}$. \begin{figure} \begin{center} \includegraphics[width=84mm]{f7.eps} \caption{The maximum mass of the PCPs for the solo models. Filled circles with error bars indicate models s2k, s8k, s16k, and s64k, from left to right. The cross and plus indicate s16k-cool and s16k-cold, respectively. Since the error bar for model s16k-cool is smaller than the marker size, we do not plot it. Dotted and dashed lines indicate $m_{\rm max} = 0.02M_{\rm cl}$ and $m_{\rm max} = 0.002M_{\rm cl}$, respectively. \label{fig:m_max_single}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=84mm]{f8.eps} \caption{Cumulative number distribution of stars with $m>50M_{\odot}$ at $t_{\rm max}$ for models s2k, s8k, s16k, s64k, and s16k-cold.
\label{fig:nr_m50}} \end{center} \end{figure} \begin{table*} \begin{center} \caption{Summary of the results.\label{tb:results}} \begin{tabular}{cccccccc}\hline Model & $m_{\rm max} (M_{\odot})$ & $t_{\rm max}$ (Myr)& $t_{\rm merge}$ (Myr)& $t_{\rm cc}$ (Myr) & $m_{\rm SCPs} (M_{\odot})$ & $\langle m_{\rm col}\rangle (M_{\odot})$ & $N_{\rm col}$ \\ \hline\hline e2k4r3-1 & 287 & 0.45 & 0.2--0.87 & $0.29 \pm 0.07$ & 375 & 78.3 & 8 \\ e2k4r3-2 & 260 & 0.78 & 0.03--1.2 & & 454 & 60.6 & 15 \\ e2k4r3-3 & 268 & 0.61 & 0.6--1.3 & & 139 & 56.7 & 12 \\ e2k4r6 & 238 & 0.57 & 2.2--2.7 & & 743 & 46.8 & 16 \\ \hline e2k8r1-1 & 998 & 0.80 & 0.03--0.38 & $0.29 \pm 0.07$ & 0 & 44.2 & 45 \\ e2k8r1-2 & 667 & 1.35 & 0.03--0.32 & & 160 & 69.4 & 22 \\ e2k8r3 & 530 & 0.86 & 1.0--0.75 & & 147 & 80.2 & 14 \\ e2k8r5-1 & 334 & 1.11 & 0.47--1.9 & & 1192 & 57.0 & 25 \\ e2k8r5-2 & 486 & 0.78 & 0.03--2.0 & & 651 & 61.2 & 19 \\ e2k8r6-1 & 245 & 0.59 & 0.77--2.4 & & 1367 & 51.0 & 24 \\ e2k8r6-2 & 274 & 0.42 & 0.03--$>3$ & & 970 & 45.5 & 21 \\ \hline e8k8f1 & 1310 & 1.40 & 0.4--1.2 & $0.71\pm 0.11$ & 268 & 73.0 & 42 \\ e8k8f2 & 659 & 2.28 & 0.8--2.0 & & 995 & 88.4 & 33\\ \hline\hline s2k & $182 \pm 21$ & $1.3 \pm 0.6$ & - & $0.29 \pm 0.07$ & $16 \pm 40$ & 53.3 & 4.6 \\ \hline s8k & $399\pm 60$ & $2.2\pm 0.2$ & - & $0.71\pm 0.11$ & $149 \pm 115$ & 63.2 & 11.3 \\ \hline s16k & $431\pm 54$ & $2.6\pm 0.9$ & - & $1.2\pm 0.13$ & $54 \pm 77$ & 65.8 & 13.2\\ \hline s64k & $488\pm 57$ & $4.4\pm 0.2$ & - & $1.8 \pm 0.0$ & 0 & 66.0 & 15.5 \\ \hline s16k-cold & 1064 & 0.59 & - & $<0.02$ & 0 & 46.6 & 40\\ s16k-cool & $707\pm 36$ & $2.55\pm 0.75$ & - & $0.325\pm0.075$ & 0 & 51.1 & 28.5 \\ \hline \end{tabular} \end{center} \end{table*} \section{Ensemble-cluster models} In Section 3 we demonstrated that the results obtained from our solo-cluster models are consistent with previous numerical studies. In this section we present the results of the ensemble-cluster models, in which sub-clusters assemble to finally form one single cluster. In ensemble-cluster models, the sub-clusters collapse on a timescale shorter than that of a solo cluster with the same total mass. Their further evolution is dominated by the dynamical evolution of the sub-clusters before they merge; the conservation of the dynamical state through the mergers \citep{2009Ap&SS.324..277V} then drives the evolution of the merger products. As a result, ensemble clusters tend to experience core collapse considerably faster than solo clusters whose initial properties are similar to those of the merger remnant of the ensemble clusters. In Paper 1 we showed that the quicker dynamical evolution of ensemble clusters can explain the mature characteristics of young dense clusters such as R136 and NGC 3603. Here we use that enhanced dynamical evolution to study the PCPs. The early dynamical evolution of ensemble clusters is similar to that of cold solo clusters. One might therefore expect that ensemble clusters also result in the formation of massive PCPs, but we will show that the early evolution of ensemble clusters is somewhat more complicated. In Figure \ref{fig:merger} we schematically illustrate two typical evolutionary paths of ensemble clusters. We find that the most important parameter for the evolution of ensemble clusters is the moment of assembling, $t_{\rm ens}$, compared to $t_{\rm cc}$ of the sub-clusters.
If $t_{\rm cc}>t_{\rm ens}$ (``early assembling''), the PCPs in the remnant cluster grow efficiently by stellar collisions, because the short relaxation time of the sub-clusters drives mass segregation and core collapse faster than in solo clusters. This evolution is similar to that of cold solo clusters. If $t_{\rm cc}<t_{\rm ens}$ (``late assembling''), each sub-cluster experiences core collapse before the assembly and forms one PCP per individual sub-cluster. The mass of each PCP is limited by the sub-cluster mass, as we described in Section 3. After the assembling of two or more sub-clusters, the PCPs formed in the sub-clusters sink to the center of the remnant cluster and interact with each other. Most of them, however, are scattered and ejected from the cluster because they tend to reside in hard binaries with a massive companion; the PCPs typically end up in the hardest binaries with the most massive stars, formed while they were still in the sub-clusters. In each binary-binary encounter following a sub-cluster merger, two PCPs may collide, although they may also be ejected without experiencing a collision. Therefore, the majority of the PCP binaries are scattered or ionized, and only one PCP binary survives in the remnant cluster by the time the assembly is completed. The surviving PCP cannot continue to grow in mass, because by that time the central density of the assembled cluster has been depleted due to the early dynamical evolution. \begin{figure*} \begin{center} \includegraphics[width=140mm]{f9.eps} \caption{Schematic picture of the two typical assembling processes. Early assembling ($t_{\rm cc}>t_{\rm ens}$): sub-clusters assemble before they experience core collapse. The merger remnant is more mass-segregated than solo clusters which initially have properties similar to those of the merger remnant, because the sub-clusters have a shorter relaxation time than the solo cluster. After their assembly, the remnant cluster collapses and a massive PCP forms. Late assembling ($t_{\rm cc}<t_{\rm ens}$): sub-clusters experience core collapse and form small PCPs before they assemble. After their assembly, however, the PCPs do not grow efficiently because most of them are scattered from the remnant cluster by binary-binary encounters. \label{fig:merger}} \end{center} \end{figure*} \subsection{Stellar collisions in ensemble clusters\label{sc_ensamble}} In Figures \ref{fig:m_his_2k_8} and \ref{fig:m_his_8k_8}, we present the mass evolution of the PCPs and SCPs in ensemble clusters. The left and right panels show early and late assembling models, respectively. In early assembling models, one massive PCP per remnant cluster grows after the assembly of the sub-clusters. Even though some of the sub-clusters start forming PCPs before assembling, these PCPs merge after their host sub-clusters have merged. In late assembling models, on the other hand, each sub-cluster grows its own PCP, but most of them do not collide with each other even after the assembly of their host sub-clusters. \begin{figure*} \begin{center} \includegraphics[width=70mm]{f10a.eps} \includegraphics[width=70mm]{f10b.eps} \includegraphics[width=70mm]{f10c.eps} \includegraphics[width=70mm]{f10d.eps} \caption{Top: Time evolution of the separation between sub-clusters projected onto the $x$-axis (full curves) and the collisions of PCPs (black dots) for models e2k8r1 (left) and e2k8r6-1 (right). The positions of the dots show the collision time and the sub-cluster to which the star initially belonged. Bottom: Mass evolution of PCPs and SCPs for models e2k8r1 (left) and e2k8r6-1 (right).
Crosses indicate the times when the SCPs merged with PCPs. Arrows indicate the times when sub-clusters merged. In all panels, the shaded region indicates the core-collapse time, with its error, obtained from the simulations of isolated sub-clusters. \label{fig:m_his_2k_8}} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=70mm]{f11a.eps} \includegraphics[width=70mm]{f11b.eps} \includegraphics[width=70mm]{f11c.eps} \includegraphics[width=70mm]{f11d.eps} \caption{Same as Figure \ref{fig:m_his_2k_8} but for models e8k8f1 (left) and e8k8f2 (right). \label{fig:m_his_8k_8}} \end{center} \end{figure*} We find the reason for the difference between the early and late assembling cases in the density evolution of these clusters. In Figure \ref{fig:n_dens_merger} we show the time evolution of the maximum number densities for ensemble and solo clusters. Here we plot the maximum value of the local density, which is calculated using the six nearest neighbours. (Note that the maximum local density does not trace the density of one individual sub-cluster.) In the early assembling cases, the density increases on the core-collapse timescale of the solo sub-cluster (model s2k), but the maximum density is higher than that of model s2k and rather comparable to those of the cold models (s16k-cold and s16k-cool). The evolution after the core collapse is similar to that of the cold models: the density gradually decreases and eventually becomes comparable to that of the virialized solo clusters (model s16k). The density in the late assembling cases also grows on the core-collapse timescale of the sub-clusters until a peak is reached at $\sim 0.5$ Myr. The density then decreases as quickly as that of the solo sub-clusters (model s2k), which is different from the early assembling cases. By the end of the simulations, the number density of the late assembling cases is an order of magnitude lower than in the early assembling cases. This relatively low density prevents the growth of PCPs in the late-assembled clusters. The effect of the difference in density can be seen in the number of stellar collisions, $N_{\rm col}$, in Table \ref{tb:results}. In the early assembling models (e2k8r1-1 and e8k8f1) and the cold solo model (s16k-cold), $N_{\rm col}= 42\pm 2$ and $m_{\rm max}= 1100\pm 130\,M_{\odot}$, but in the late assembling models (e2k8r5, e2k8r6, and e8k8f2) $N_{\rm col}=24 \pm 5$ and $m_{\rm max}= 400\pm 150\,M_{\odot}$. \begin{figure} \begin{center} \includegraphics[width=84mm]{f12.eps} \caption{Time evolution of the maximum local number density (local density estimated from the six nearest neighbours) for models s16k, s2k, s16k-cool, s16k-cold, e2k8r1 (early assembling), and e2k8r6 (late assembling). \label{fig:n_dens_merger}} \end{center} \end{figure} In the late assembling models (e2k8r5 and e2k8r6), the maximum mass of the PCPs is 200--400 $M_{\odot}$, and the masses of the PCPs are similar to those of the multiple SCPs, which were PCPs in the sub-clusters. This feature is consistent with young dense clusters such as R136 in the LMC, which contains five stars with $>100M_{\odot}$ \citep{2010MNRAS.408..731C,2011A&A...530L..14B}, although there is no evidence of any extremely massive star with $\sim 1000 M_{\odot}$. \subsection{Maximum mass of PCPs in ensemble clusters} As we showed in Section \ref{sc_ensamble}, early assembling of sub-clusters results in the formation of a single massive PCP, while late assembling forms a less massive PCP and multiple SCPs that are as massive as the PCP.
In Figure \ref{fig:m_max_t} we present the relation between $m_{\rm max}/M_{\rm cl}$ and $t_{\rm ens}/t_{\rm cc}$ for the ensemble models, where $t_{\rm cc}$ is the core-collapse time of the sub-clusters. Irrespective of the number of sub-clusters, the maximum mass of the PCPs decreases as the assembling time is delayed. \begin{figure} \begin{center} \includegraphics[width=84mm]{f13.eps} \caption{The maximum mass of the PCPs scaled by the total mass of the ensemble clusters as a function of the assembling time scaled by the core-collapse time of the sub-clusters for all e2k8 and e2k4 models. Each horizontal line corresponds to one model. Vertical lines and crosses indicate the individual merger times of sub-clusters for the four and eight sub-cluster models, respectively. The dotted line indicates $t_{\rm ens}/t_{\rm cc} = 1$. \label{fig:m_max_t}} \end{center} \end{figure} In the left panel of Figure \ref{fig:m_max}, we show the relation between $m_{\rm max}$ of the PCPs and $M_{\rm cl}$ for both solo and ensemble clusters. (Note that for the solo clusters, the data are the same as those shown in Figure \ref{fig:m_max_single}.) The PCP mass of the early assembling models is higher than that of solo clusters with the same cluster mass and is as high as that of the cold model. In the late assembling models, the PCPs are almost as massive as those of the solo clusters with the same mass. The difference in the maximum mass of the PCPs is understood if we take into account all the PCPs and SCPs in the cluster. In the right panel of Figure \ref{fig:m_max}, we present the total mass of all the PCPs and SCPs in the cluster. The total masses roughly follow the relation $m_{\rm max} = 0.02 M_{\rm cl}$. This result suggests that the potential maximum mass of the PCPs is 2\% of the cluster mass, although the value depends on the initial mass function and the mass-loss rate due to the stellar wind. The total mass of the SCPs is summarized in Table \ref{tb:results} as $m_{\rm SCPs}$. These SCPs fail to merge with the most massive PCP, and their mass will be lost from the cluster by escape or stellar evolution. \begin{figure*} \begin{center} \includegraphics[width=70mm]{f14a.eps} \includegraphics[width=70mm]{f14b.eps} \caption{The maximum mass of the PCP, $m_{\rm max}$, in the cluster (left) and the sum of $m_{\rm max}$ and the total maximum mass of the SCPs, $m_{\rm SCPs}$ (right). \label{fig:m_max}} \end{center} \end{figure*} In Figure \ref{fig:massive_star}, we plot the radial distribution of the PCPs and SCPs which grew to $>100 M_{\odot}$. We combine the results from several runs, separating them into the early and late assembling cases. While all the PCPs are located in the cluster core in the early assembling case, $\sim40$\% of the PCPs are ejected from the clusters or located in the outskirts of the cluster ($>10$ pc) in the late assembling case. The numbers of PCPs per cluster are on average 1.75 and 5.8 for the early and late assembling cases, respectively. In Figure \ref{fig:massive_star} we also present the cumulative number distribution of stars with $>100M_{\odot}$ in the R136 region \citep{2010MNRAS.408..731C,2011A&A...530L..14B}. The number of such massive stars and their distribution imply that R136 experienced some late assembling, and indeed a sub-cluster has been observed around R136 \citep{2012ApJ...754L..37S}. \begin{figure} \begin{center} \includegraphics[width=84mm]{f15.eps} \caption{Cumulative distribution of PCPs and SCPs as a function of the distance from the cluster center.
Dashed and solid curves indicate the early and late assembling models, respectively. For the early assembling models, we combined the data from e2k8r1-1, e2k8r1-2, e2k8r3, and e8k8f1 (4 runs), and the average number of PCPs per run is 1.75. For the late assembling models, we combined the data from e2k8r5-1, e2k8r5-2, e2k8r6-1, e2k8r6-2, and e8k8f2 (5 runs), and the average number of PCPs is 5.8. Squares indicate the distribution of massive ($>100M_{\odot}$) stars observed in the R136 region \citep{2010MNRAS.408..731C,2011A&A...530L..14B}. Since the observed distances are projected, we multiplied them by $\sqrt{3}$. For both the simulations and the observations, we treat stars within 0.1 pc as being at 0.1 pc, because at smaller distances the result is affected by the definition of the cluster center. \label{fig:massive_star}} \end{center} \end{figure} \section{Summary and Discussion} We performed a series of $N$-body simulations of solo and ensemble star clusters and found that ensemble clusters typically evolve along one of two paths, depending on their assembling time compared to the core-collapse time of the sub-clusters. In the early assembling case ($t_{\rm cc}>t_{\rm ens}$), the remnant clusters have dynamically mature characteristics (mass segregation and core collapse) compared to solo clusters. The evolution of early assembling clusters is similar to that of sub-virial solo clusters. The early assembling clusters experience mass segregation and core collapse on the timescale of the sub-clusters, which is shorter than that of initially larger solo clusters, and the short relaxation time of the sub-clusters is conserved in the remnant clusters. This early dynamical evolution results in efficient multiple collisions of stars and helps the formation of extremely massive PCPs with $\sim 1000 M_{\odot}$. In the late assembling case ($t_{\rm cc}<t_{\rm ens}$), the dynamically mature characteristics suppress the growth of massive stars via stellar collisions. In this case, the sub-clusters experience core collapse individually and form their own PCPs, but the maximum mass of the PCPs in the sub-clusters is limited by the total mass of the sub-clusters. Even after the sub-clusters assemble, the PCPs stop growing, because the central density of the remnant cluster is already depleted due to the quick dynamical evolution of the sub-clusters. Since the PCPs in the sub-clusters form massive binaries, they interact with each other in the remnant clusters. Some of them (SCPs) collide, but the others are scattered from the cluster by three-body or binary-binary encounters. In our simulations, 40\% of the SCPs are ejected from the cluster or scattered to the outskirts of the remnant clusters. The SCPs sometimes escape with a high velocity ($>30$ km/s) and reach $\sim 100$ pc from the cluster within their lifetime ($\sim 3$ Myr). Observed massive high-velocity stars such as VFTS 682 might have formed in this way (see also Paper 1). We also investigated the maximum mass of the PCPs and found that in ensemble clusters the maximum mass depends on the assembling time of the sub-clusters. In the early assembling models, the maximum mass of the PCPs is comparable to that of sub-virial solo clusters. In the late assembling models, however, the maximum mass is similar to that of the solo sub-clusters. The difference between them is mainly caused by the number of collisions: in the late assembling models, a larger number of SCPs are ejected from the cluster and fail to merge with the PCP than in the early assembling case.
When the collisions of stars proceed most successfully (in early assembling and cold solo models), we find that the maximum masses of the PCPs reach $\sim$2\% of the total mass of the clusters, even when we take into account the high mass-loss rate due to the stellar wind. Assuming an R136-like cluster of $\sim 5\times 10^4 M_{\odot}$, the expected maximum mass is $\sim 1000 M_{\odot}$. Such an efficient mass growth might result in the formation of IMBHs. For lower metallicity, the massive stars are predicted to collapse directly to IMBHs \citep{2003ApJ...591..288H}. In late assembling cases, however, multiple smaller PCPs ($100$--$400 M_{\odot}$) are expected to exist inside or around the remnant clusters. These stars are in the mass range of Type Ib/c supernovae (SNe) assuming solar metallicity \citep{2003ApJ...591..288H}. In recent observations of dense molecular clouds in the central molecular zone in the Galactic center, several expanding shells were found, and their estimated total kinetic energy is $\sim 10^{52}$ erg \citep{2007PASJ...59..323T, 2012ApJS..201...14O}. In particular, three major shells have a kinetic energy of $\sim 10^{51}$ erg, which corresponds to a hypernova explosion. A young dense massive cluster similar to our late-merger models might be embedded in this dense molecular cloud. Furthermore, escaping PCPs will explode up to $\sim 100$ pc from the host cluster. Indeed, Type Ib/c SNe are associated with star-forming regions \citep{2010MNRAS.407.2660A,2011A&A...530A..95L,2012MNRAS.424.1372A, 2012arXiv1210.1126C}; for example, the Type Ic SN 2007gr is located at $\sim 7$ pc from a young cluster \citep{2008ApJ...672L..99C}. \section*{Acknowledgments} The authors thank Jeroen B\'{e}dorf for the Sapporo2 library, Alex Rimoldi for careful reading of the manuscript, and Masaomi Tanaka for fruitful discussion. This work was supported by the Japan Society for the Promotion of Science (JSPS) Research Fellowship for Research Abroad, the Netherlands Research Council NWO (grants \#643.200.503, \#639.073.803 and \#614.061.608), and the Netherlands Research School for Astronomy (NOVA). Numerical computations were carried out on the Cray XT4 at the Center for Computational Astrophysics (CfCA) of the National Astronomical Observatory of Japan and the Little Green Machine at Leiden University. \bibliographystyle{mn}
{ "timestamp": "2012-10-16T02:02:10", "yymm": "1210", "arxiv_id": "1210.3732", "language": "en", "url": "https://arxiv.org/abs/1210.3732" }
\section*{} \begin{Large} \begin{center} Abstract \end{center} \end{Large} \begin{small} \noindent The current paper aims to honor the first centennial of the award of the Nobel Prize in Physics to Johannes Diderik van der Waals (VDW). The VDW theory of ordinary fluids is reviewed in the first part of the paper, where special effort is devoted to the equation of state and the law of corresponding states. In addition, a few mathematical features involving properties of cubic equations are discussed, for appreciating the intrinsic beauty of the VDW theory. A theory of astrophysical fluids is briefly reviewed in the second part of the paper, grounded on the tensor virial theorem for two-component systems, and an equation of state is formulated with a convenient choice of reduced variables. Additional effort is devoted to particular choices of density profiles, namely a simple guidance case and two cases of astrophysical interest. The related macroisothermal curves are found to be qualitatively similar to VDW isothermal curves below the critical threshold and, for sufficiently steep density profiles, a critical macroisothermal curve exists, with a single horizontal inflexion point. Under the working hypothesis of a phase transition (assumed to be gas-stars) for astrophysical fluids, similar to the vapour-liquid phase transition in ordinary fluids, the location of gas clouds, stellar systems, galaxies, clusters of galaxies, on the plane scanned by reduced variables, is tentatively assigned. A brief discussion shows how van der Waals' two great discoveries, namely a gas equation of state where tidal interactions between molecules are taken into account, and the law of corresponding states, related to microcosmos, find a counterpart with regard to macrocosmos. In conclusion, a century after the awarding of the Nobel Prize in Physics, van der Waals' ideas are still valid and helpful today for a full understanding of the universe. \noindent {\it keywords - galaxies: evolution - dark matter: haloes.} \end{small} \section{Introduction} \label{intro} One century ago (1910), the Nobel Prize in Physics was awarded to Johannes Diderik van der Waals (hereafter quoted as VDW). In his doctoral thesis (1873), the ideal gas equation of state was generalized to embrace both the gaseous and the liquid state, where these two states of aggregation not only merge into each other in a continuous manner, but are in fact of the same nature. With respect to ideal gases, the volume of the molecules and the intermolecular tidal forces were taken into account. The VDW equation was later reformulated in terms of reduced (dimensionless) variables (1880), which allows the description of all substances in terms of a single equation. In other words, the state of any substance, defined by the values of reduced volume, reduced pressure, and reduced temperature, is independent of the nature of the substance. This result is known as the law of corresponding states. The VDW equation of state, in dimensional and reduced form, served as a guide during the experiments which ultimately led to hydrogen (1898) and helium (1908) liquefaction. The Cryogenic Laboratory at Leiden developed under the influence of VDW's theories. For further details on VDW's biography refer to specialized textbooks (e.g., Nobel Lectures 1967). The current paper has been written in honor of the first centennial of the award of the Nobel Prize in Physics to VDW.
The ideal and VDW equations of state, both in dimensional and reduced form, are reviewed, and a number of features are analysed in detail, in Section \ref{vande}. Counterparts to the ideal and VDW equations of state for astrophysical fluids, or macrogases, are briefly summarized and compared with the classical formulation in Section \ref{macro}. The discussion and the conclusion are drawn in Section \ref{disc}. \section{Equation of state of ordinary fluids}\label{vande} Let ordinary fluids be conceived as fluids which can be investigated in the laboratory. The simplest description is provided by the theory of ideal gases, where the following restrictive assumptions are made: (i) particles are identical spheres; (ii) the number of particles is extremely large; (iii) the motion of particles is random; (iv) collisions between particles or with the wall of the box are perfectly elastic; (v) interactions between particles or with the wall of the box are null. The equation of state of ideal gases may be written under the form (e.g., Landau and Lifchitz, 1967, Chap.\,IV, \S42, hereafter quoted as LL67): \begin{equation} \label{eq:gid} pV=kNT~~; \end{equation} where $p$ is the pressure, $V$ the volume, $T$ the temperature, $N$ the particle number, and $k$ the Boltzmann constant. To get a better description of ordinary fluids, the above assumption (v) is relaxed and tidal interactions between particles are taken into consideration. The VDW generalization of the equation of state of ideal gases, Eq.\,(\ref{eq:gid}), reads (van der Waals, 1873): \begin{equation} \label{eq:VdW} \left(p+A\frac{N^2}{V^2}\right)(V-NB)=kNT~~; \end{equation} where $A$ and $B$ are constants which depend on the nature of the particles. More specifically, the presence of an attractive interaction between particles reduces both the force and the frequency of particle-wall collisions: the net effect is a reduction of the pressure, proportional to the squared number density, expressed as $A(N/V)^2$. On the other hand, the whole volume of the box, $V$, is not accessible to particles, in that they are conceived as identical spheres: the free volume within the box is $V-NB$, where $B$ is the volume of a single sphere. For further details refer to specific textbooks (e.g., LL67, Chap.\,VII, \S74). The isothermal ($T=$ const) curves for ideal gases are hyperbolas with axes $p=\mp V$, in conformity with Eq.\,(\ref{eq:gid}). In the VDW theory of real gases, the isothermal curves exhibit two extremum points below a threshold, which reduce to a single horizontal inflexion point when a critical temperature is attained, as shown in Fig.\,\ref{f:viso}. \begin{figure*}[t] \begin{center} \includegraphics[scale=0.8]{viso100.eps} \caption{Isothermal curves related to ideal (left panel) and VDW (right panel) gases, respectively. Isothermal curves (from bottom to top) correspond to $T/T_{\rm c}=$ 20/23, 20/22, 20/21, 20/20, 20/19, 20/18. No extremum point exists above the critical isothermal curve, $T/T_{\rm c}=1$. } \label{f:viso} \end{center} \end{figure*} Well above the critical isothermal curve, $T\gg T_{\rm c}$, the trends exhibited by ideal and VDW gases look very similar. Below the critical isothermal curve, $T<T_{\rm c}$, the behaviour of VDW gases is different with respect to ideal gases and, in addition, the related isothermal curves provide an incorrect description within a specific region where saturated vapour and liquid phases coexist. Further details are shown in Fig.\,\ref{f:vris}.
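Anticipating the critical temperature derived below, $T_{\rm c}=8A/(27Bk)$, the change of regime at the critical isothermal curve is easy to check numerically. The following minimal Python sketch (with illustrative constants $A=B=N=k=1$, not tied to any real substance) counts the extremum points of an isothermal curve obtained from Eq.\,(\ref{eq:VdW}):
\begin{verbatim}
import numpy as np

# illustrative constants, A = B = N = k = 1 (arbitrary units)
A = B = N = k = 1.0
V = np.linspace(1.05 * N * B, 12.0 * N * B, 4000)

def p(V, T):
    # VDW equation of state solved for the pressure
    return k * N * T / (V - N * B) - A * N**2 / V**2

# T_c = 8A/(27Bk) = 8/27 here; one isothermal curve below, one above
for T in (0.25, 0.35):
    dp = np.diff(p(V, T))
    n_ext = np.sum(np.diff(np.sign(dp)) != 0)
    print(T, n_ext)   # two extremum points below T_c, none above
\end{verbatim}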
\begin{figure*}[t] \begin{center} \includegraphics[scale=0.8]{vris100.eps} \caption{Same as in Fig.\,\ref{f:viso} (right panel), where the occurrence (within the bell-shaped area bounded by the dashed curve) of saturated vapour is considered. Above the critical isothermal curve $(T=T_{\rm c})$ the trend is similar with respect to ideal gases. Below the critical isothermal curve and on the right of the dashed curve, the gas still behaves as an ideal gas. Below the critical isothermal curve and on the left of the dashed curve, the liquid shows little change in volume as the pressure rises. Within the bell-shaped area bounded by the dashed curve, the liquid phase is in equilibrium with the saturated vapour phase. A diminished volume implies a smaller saturated vapour fraction and a larger liquid fraction at constant pressure, and vice versa. The VDW equation of state is no longer valid in this region. The dashed curve (including the central branch) is the locus of intersection between VDW and real isothermal curves, the latter being related to constant pressure where liquid and vapour phases coexist. The dotted curve is the locus of VDW isothermal extremum points.} \label{f:vris} \end{center} \end{figure*} The main features of the diagram are summarized in the caption of Fig.\,\ref{f:vris}; in particular, below the critical isothermal curve the supersaturated vapour on the right of the dashed curve still behaves as an ideal gas, while within the bell-shaped area, where the liquid and the saturated vapour phase coexist, the VDW equation of state is no longer valid. A specific $(T/T_{\rm c}=20/23)$ VDW and corresponding real isothermal curve are represented in Fig.\,\ref{f:vrar}. \begin{figure*}[t] \begin{center} \includegraphics[scale=0.8]{vrar100.eps} \caption{A specific $(T/T_{\rm c}=20/23)$ VDW and corresponding real isothermal curve. The above-mentioned curves coincide within the range, $V\le V_{\rm A}$ and $V\ge V_{\rm E}$. The VDW isothermal curve exhibits two extremum points: a minimum, ${\sf B}$, and a maximum, ${\sf D}$, while the real isothermal curve is flat within the range, $V_{\rm A}\le V\le V_{\rm E}$. Configurations related to the VDW isothermal curve within the range, $V_{\rm A}\le V\le V_{\rm B}$ (due to tension forces acting on the particles yielding superheated liquid), and $V_{\rm D}\le V\le V_{\rm E}$ (due to the occurrence of undercooled vapour), may be obtained under special conditions, while configurations within the range, $V_{\rm B}\le V\le V_{\rm D}$, are always unstable. The volumes, $V_{\rm A}$ and $V_{\rm E}$, correspond to the maximum value in presence of the sole liquid phase and the minimum value in presence of the sole vapour phase, respectively. The regions, {\sf ABC} and {\sf CDE}, have equal area.
For further details refer to the text.} \label{f:vrar} \end{center} \end{figure*} As anticipated in the caption of Fig.\,\ref{f:vrar}, the regions, {\sf ABC} and {\sf CDE}, have equal area, as first inferred by Maxwell (e.g., Rostagni, 1957, Chap.\,XII, \S19). The VDW and real isothermal curves represented in Fig.\,\ref{f:vrar} being related to the same temperature, $T$, the cycle, {\sf ABCDECA}, is completely both isothermal and reversible, and the work, $W$, performed therein cannot be positive, to avoid violation of the second law of thermodynamics. The cycles, {\sf ABCA} and {\sf CDEC}, occurring in counterclockwise and clockwise sense, respectively, are also completely both isothermal and reversible. Accordingly, $W_{\sf ABCDECA}=W_{\sf ABCA}-W_{\sf CDEC}\le0$. A similar procedure, related to the reversed cycle, {\sf ACEDCBA}, yields $W_{\sf ACEDCBA}=W_{\sf CEDC}-W_{\sf CBAC} \le0$. Since the cycle, {\sf ACEDCBA}, is the reverse of {\sf ABCDECA}, $W_{\sf ACEDCBA}=-W_{\sf ABCDECA}$, and the two inequalities can hold together only if $W_{\sf ABCDECA}=W_{\sf ACEDCBA}=0$, which implies $W_{\sf ABCA}=W_{\sf CDEC}= W_{\sf CEDC}=W_{\sf CBAC}$ and, in turn, the equality between the related surfaces. For further details refer to specific textbooks (e.g., LL67, Chap.\,VIII, \S85). In order to simplify both notation and calculations, it is convenient to deal with (dimensionless) reduced variables (e.g., Rostagni, 1957, Chap.\,XII, \S16; LL67, Chap.\,VIII, \S85). To this aim, the first step is to determine the parameters related to the critical point, $V_{\rm c}, p_{\rm c}, T_{\rm c}$. Using the VDW equation of state, Eq.\,(\ref{eq:VdW}), the pressure and its first and second partial derivatives, with respect to the volume, read: \begin{lefteqnarray} \label{eq:pW} && p=\frac{kNT}{V-NB}-A\frac{N^2}{V^2}~~;\qquad N={\rm const}~~; \\ \label{eq:p1W} && \left(\frac{\partial p}{\partial V}\right)_{V,T}=-\frac{kNT}{(V-NB)^2}+ 2A\frac{N^2}{V^3}~~; \\ \label{eq:p2W} && \left(\frac{\partial^2 p}{\partial V^2}\right)_{V,T}=\frac{2kNT} {(V-NB)^3}-6A\frac{N^2}{V^4}~~; \end{lefteqnarray} where the domain is $V>NB$, $V=NB$ is a vertical asymptote, and $p=0$ is a horizontal asymptote. The critical isothermal curve corresponds to the highest temperature allowing a liquid phase, which occurs therein only at the critical point. The critical isothermal curve exhibits neither a minimum nor a maximum, which are replaced by a horizontal inflexion point coinciding with the critical point. Accordingly, $(\partial p/\partial V)_{V_{\rm c},T_{\rm c}}=0$, $(\partial^2p/\partial V^2)_{V_{\rm c},T_{\rm c}}=0$, and $p_{\rm c}=kNT_{\rm c}/(V_{\rm c}-NB)-AN^2/V_{\rm c}^2$.
The solution of the related system is: \begin{lefteqnarray} \label{eq:Vc} && V_{\rm c}=3NB~~; \\ \label{eq:Tc} && T_{\rm c}=\frac8{27}\frac AB\frac1k~~; \\ \label{eq:pc} && p_{\rm c}=\frac1{27}\frac A{B^2}~~; \\ \label{eq:Zc} && Z_c=\frac{p_{\rm c}V_{\rm c}}{kNT_{\rm c}}=\frac38~~; \end{lefteqnarray} where, in general, the compressibility factor, $Z=pV/(kNT)$, defines the degree of departure from the behaviour of ideal gases, for which $Z=1$, according to Eq.\,(\ref{eq:gid}). For further details refer to specific textbooks (e.g., Rostagni, 1957, Chap.\,XII, \S20; LL67, Chap.\,VIII, \S85). With regard to the reduced variables: \begin{equation} \label{eq:rv} \sV=\frac V{V_{\rm c}}~~;\qquad\sP=\frac p{p_{\rm c}}~~; \qquad\sT=\frac T{T_{\rm c}}~~; \end{equation} the ideal gas equation of state, Eq.\,(\ref{eq:gid}), and the VDW equation of state, Eq.\,(\ref{eq:VdW}), reduce to: \begin{lefteqnarray} \label{eq:ri} && \sP\sV=\frac83\sT~~; \\ \label{eq:rW1} && \left(\sP+\frac3{\sV^2}\right)\left(\sV-\frac13\right)=\frac83\sT~~;\qquad \sV>\frac13~~; \end{lefteqnarray} and Eqs.\,(\ref{eq:pW}), (\ref{eq:p1W}), and (\ref{eq:p2W}), reduce to: \begin{lefteqnarray} \label{eq:rW2} && \sP=\frac{8\sT}{3\sV-1}-\frac3{\sV^2}~~; \\ \label{eq:rW3} && \left(\frac{\partial\sP}{\partial\sV}\right)_{\sV,\sT}=-\frac{24\sT} {(3\sV-1)^2}+\frac6{\sV^3}~~; \\ \label{eq:rW4} && \left(\frac{\partial^2\sP}{\partial\sV^2}\right)_{\sV,\sT}=\frac {144\sT}{(3\sV-1)^3}-\frac{18}{\sV^4}~~; \end{lefteqnarray} where, for assigned $\sT$, the domain of the function, $\sP(\sV)$, is $\sV>1/3$, $\sV=1/3$ is a vertical asymptote, and $\sP=0$ is a horizontal asymptote. In the special case of the critical point, $\sV=1$, $\sT=1$, $\sP=1$, the partial derivatives are null, as expected. The extremum points, via Eq.\,(\ref{eq:rW3}), are defined by the relation: \begin{equation} \label{eq:ext} f(\sV)=\frac{(3\sV-1)^2}{4\sV^3}=\sT~~; \end{equation} which is satisfied at the critical point, as expected. The function on the left-hand side of Eq.\,(\ref{eq:ext}) has two extremum points: a minimum at $\sV=1/3$ (outside the physical domain) and a maximum at $\sV=1$, where $\sT=1$. Accordingly, Eq.\,(\ref{eq:ext}) is never satisfied for $\sT>1$, which implies no extremum point for the related isothermal curves, as expected. The contrary holds for $\sT<1$, where it can be seen that the third-degree equation associated with Eq.\,(\ref{eq:rW3}) has three real solutions, related to extremum points. One lies outside the physical domain, which implies $\sV\le1/3$. The remaining two are obtained as the intersections between the curve, $f(\sV)$, expressed by Eq.\,(\ref{eq:ext}), and the straight line, $y=\sT$, keeping in mind that $f(1/3)=0$, $f(1)=1$, and $\lim_{\sV\to +\infty}f(\sV)=0$. The third-degree equation associated with Eq.\,(\ref{eq:rW3}) may be ordered as: \begin{leftsubeqnarray} \slabel{eq:3dea} && \sV^3-9a\sV^2+6a\sV-a=0~~; \\ \slabel{eq:3deb} && a=\frac1{4\sT}~~; \label{seq:3de} \end{leftsubeqnarray} with regard to the standard formulation (e.g., Spiegel, 1968, Chap.\,9): \begin{equation} \label{eq:3dx} x^3+a_1x^2+a_2x+a_3=0~~; \end{equation} the discriminants of Eq.\,(\ref{eq:3dea}) are: \begin{lefteqnarray} \label{eq:Q} && Q=\frac{3a_2-a_1^2}9=a(2-9a)~~; \\ \label{eq:R} && R=\frac{9a_1a_2-27a_3-2a_1^3}{54}=\frac{a(1-18a+54a^2)}2~~; \\ \label{eq:D} && D=Q^3+R^2=\frac{a^2(1-4a)}4~~; \end{lefteqnarray} where $D=0$ in the special case of the critical isothermal curve $(\sT=1, a=1/4)$, $D<0$ for $\sT<1$, and $D>0$ for $\sT>1$.
Accordingly, three real solutions, at least two of which coincide, exist if $D=0$; three distinct real solutions exist if $D<0$; and one real solution (outside the physical domain) together with two complex conjugate solutions exists if $D>0$. The three real solutions $(D\le0)$ may be expressed as (e.g., Spiegel, 1968, Chap.\,9): \begin{leftsubeqnarray} \slabel{eq:rsola} && \sV_1=2\sqrt{-Q}\cos\left(\pi+\frac\theta3\right)-\frac13a_1~~; \\ \slabel{eq:rsolb} && \sV_2=2\sqrt{-Q}\cos\left(\pi+\frac\theta3+\frac{2\pi}3\right)- \frac13a_1~~; \\ \slabel{eq:rsolc} && \sV_3=2\sqrt{-Q}\cos\left(\pi+\frac\theta3+\frac{4\pi}3\right)- \frac13a_1~~; \\ \slabel{eq:rsold} && \theta=\arctan\frac{\sqrt{-D}}R~~; \label{seq:rsol} \end{leftsubeqnarray} where $a_1=-9a$ and, in the special case of the critical isothermal curve, $a=1/4$, $Q=-1/16$, $D=0$, which implies $\sV_0=\min(\sV_1,\sV_2, \sV_3)$, $\sV_{\rm A}=\sV_{\rm B}=\sV_{\rm C}=\sV_{\rm D}=\sV_{\rm E}=\max (\sV_1,\sV_2,\sV_3)$. In the special case, $\sT\to0$, Eq.\,(\ref{eq:3dea}) reduces to a second-degree equation whose solutions are $\sV_{01}=\sV_{02}=1/3$, while the remaining solution diverges as $a\to+\infty$. In general, the extremum points of VDW isothermal curves $(\sT\le1)$ occur at $\sV=\sV_{\rm B}$ (minimum) and $\sV=\sV_{\rm D}$ (maximum), $\sV_{\rm B}\le\sV_{\rm D}$. As $\sT\to0$, $\sV_{\rm B}\to1/3$, $\sV_{\rm D}\to+\infty$, where, in all cases, $1/3<\sV_{\rm B}\le1\le\sV_{\rm D}$. The two areas defined by the intersection of a generic VDW isothermal curve $(\sT\le1)$ and the related real isothermal curve (see Fig.\,\ref{f:vrar}), are expressed as: \begin{leftsubeqnarray} \slabel{eq:S1a} && W_1=\int_{V_{\rm A}}^{V_{\rm C}}p_{\rm C}\diff V-\int_{V_{\rm A}}^{V_{\rm C}}p\diff V=p_{\rm c}V_{\rm c} \left[\sP_C(\sV_{\rm C}-\sV_{\rm A})-\int_{\sV_{\rm A}}^{\sV_{\rm C}}\sP \diff\sV\right];\qquad \\ \slabel{eq:S1b} && W_2=\int_{V_{\rm C}}^{V_{\rm E}}p\diff V-\int_{V_{\rm C}}^{V_{\rm E}}p_{\rm C}\diff V=p_{\rm c}V_{\rm c} \left[\int_{\sV_{\rm C}}^{\sV_{\rm E}}\sP\diff\sV-\sP_C(\sV_{\rm E}-\sV_{\rm C})\right];\qquad \label{seq:S1} \end{leftsubeqnarray} and the substitution of Eq.\,(\ref{eq:rW2}) into (\ref{seq:S1}) allows explicit expressions for the integrals. The result is: \begin{leftsubeqnarray} \slabel{eq:S2a} && \frac{W_1}{p_{\rm c}V_{\rm c}}=\sP_{\rm C}(\sV_{\rm C}-\sV_{\rm A})-\frac 83\sT\ln\frac{3\sV_{\rm C}-1}{3\sV_{\rm A}-1}+ \frac{3(\sV_{\rm C}-\sV_{\rm A})}{\sV_{\rm A}\sV_{\rm C}}~~; \\ \slabel{eq:S2b} && \frac{W_2}{p_{\rm c}V_{\rm c}}=\frac83\sT\ln\frac{3\sV_{\rm E}-1}{3\sV_{\rm C}-1}- \frac{3(\sV_{\rm E}-\sV_{\rm C})}{\sV_{\rm C}\sV_{\rm E}}-\sP_C(\sV_{\rm E}- \sV_{\rm C})~~; \label{seq:S2} \end{leftsubeqnarray} and the condition, $W_1=W_2$, after some algebra reads (Caimmi 2010, hereafter quoted as C10): \begin{equation} \label{eq:S12} \sP_C=\frac83\frac{\sT}{\sV_{\rm E}-\sV_{\rm A}}\ln\frac{3\sV_{\rm E}-1}{3\sV_{\rm A}-1}-\frac3 {\sV_{\rm A}\sV_{\rm E}}~~; \end{equation} where, for a selected isothermal curve, the unknowns are $\sP_C=\sP_A=\sP_E$, $\sV_{\rm A}$, and $\sV_{\rm E}$. The reduced volumes, $\sV_{\rm A}$, $\sV_{\rm C}$, $\sV_{\rm E}$ (see Fig.\,\ref{f:vrar}), may be considered as intersections between a VDW isothermal curve $(\sT<1)$ and a horizontal straight line, $\sP=\sP_C$, in the $({\sf O}\sV\sP)$ plane.
In other words, $\sV_{\rm A}$, $\sV_{\rm C}$, $\sV_{\rm E}$, are the real solutions of the third-degree equation: \begin{equation} \label{eq:3Wrr} \sV^3-\left(\frac13+\frac83\frac{\sT}{\sP_C}\right)\sV^2+\frac3{\sP_C}\sV- \frac1{\sP_C}=0~~; \end{equation} which has been deduced from Eq.\,(\ref{eq:rW2}), particularized to $\sP=\sP_C$. The related solutions may be calculated using Eqs.\,(\ref{seq:rsol}). The last unknown, $\sP_C$, is determined from Eq.\,(\ref{eq:S12}). An inspection of Fig.\,\ref{f:vrar} shows that the points, {\sf A} and {\sf E}, are located on the left of the minimum, {\sf B}, and on the right of the maximum, {\sf D}, respectively. Keeping in mind the above results, the following inequality holds: $\sV_{\rm A}\le\sV_{\rm B}\le1\le\sV_{\rm D}\le \sV_{\rm E}$, which motivates further investigation of the special case, $\sV_{\rm C}=1$. The particularization of the VDW equation of state, Eq.\,(\ref{eq:rW2}), to the point, ${\sf C}={\sf C_1}$, assuming $\sV_{C_1}=1$, yields: \begin{equation} \label{eq:TVC1} \sT=\frac{\sP_{C_1}+3}4~~; \end{equation} and Eq.\,(\ref{eq:3Wrr}) reduces to: \begin{leftsubeqnarray} \slabel{eq:3dba} && \sV^3-(1+2b)\sV^2+3b\sV-b=0~~; \\ \slabel{eq:3dbb} && b=\frac1{\sP_{C_1}}~~; \label{seq:3db} \end{leftsubeqnarray} with regard to the generic third-degree equation, Eq.\,(\ref{eq:3dx}), the three solutions, $x_1$, $x_2$, $x_3$, satisfy the relations (e.g., Spiegel, 1968, Chap.\,9): \begin{leftsubeqnarray} \slabel{eq:x123a} && x_1+x_2+x_3=-a_1~~; \\ \slabel{eq:x123b} && x_1x_2+x_2x_3+x_3x_1=a_2~~; \\ \slabel{eq:x123c} && x_1x_2x_3=-a_3~~; \label{seq:x123} \end{leftsubeqnarray} where, in the case under discussion: \begin{leftsubeqnarray} \slabel{eq:b123a} && a_1=-1-2b~~;\qquad a_2=3b~~;\qquad a_3=-b~~; \\ \slabel{eq:b123b} && x_1=\sV_{\rm A}~~;\qquad x_2=\sV_{C_1}=1~~;\qquad x_3=\sV_{\rm E}~~; \label{seq:b123} \end{leftsubeqnarray} and the substitution of Eqs.\,(\ref{seq:b123}) into two among (\ref{seq:x123}) yields: \begin{leftsubeqnarray} \slabel{eq:VAEa} && \sV_{\rm A}=b-\sqrt{b^2-b}~~; \\ \slabel{eq:VAEb} && \sV_{\rm E}=b+\sqrt{b^2-b}~~; \label{seq:VAE} \end{leftsubeqnarray} and the combination of Eqs.\,(\ref{eq:TVC1}), (\ref{eq:3dbb}), and (\ref{seq:VAE}) produces: \begin{leftsubeqnarray} \slabel{eq:VAETa} && \sV_{\rm A}=\frac{1-2\sqrt{1-\sT}}{4\sT-3}~~;\qquad\sT\le1~~; \\ \slabel{eq:VAETb} && \sV_{\rm E}=\frac{1+2\sqrt{1-\sT}}{4\sT-3}~~;\qquad\sT\le1~~; \label{seq:VAET} \end{leftsubeqnarray} which, together with $\sV_{C_1}=1$, are the abscissae of the intersection points between a selected VDW isothermal curve in the $({\sf O}\sV\sP)$ plane and the straight line, $\sP=\sP_{C_1}$, in the special case under discussion. The substitution of Eqs.\,(\ref{seq:VAET}) into (\ref{eq:S12}), the last being related to the real isothermal curve, yields: \begin{equation} \label{eq:S12C} \frac{\sT}{\sqrt{1-\sT}}\ln\frac{3-2\sT+3\sqrt{1-\sT}}{3-2\sT-3\sqrt{1-\sT}} =6~~; \end{equation} which holds only for the critical isothermal curve, $\sT=1$. Accordingly, the abscissa of the intersection point, {\sf C}, between a selected VDW isothermal curve and related real isothermal curve, see Fig.\,\ref{f:vrar}, cannot occur at $\sV_{\rm C}=1$ unless the critical isothermal curve is considered. Then the third-degree equation, Eq.\,(\ref{eq:3Wrr}), must be solved in the general case by use of Eqs.\,(\ref{seq:rsol}).
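In practice, the general solution is conveniently carried out numerically. A minimal Python sketch (illustrative only) determines the extremum points from Eqs.\,(\ref{seq:3de}) and solves Eq.\,(\ref{eq:S12}) together with $\sP(\sV_{\rm A})=\sP(\sV_{\rm E})=\sP_C$, using Eqs.\,(\ref{seq:VAET}) as starting seeds:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def P(V, T):
    # reduced VDW equation of state
    return 8.0 * T / (3.0 * V - 1.0) - 3.0 / V**2

def extrema(T):
    # real roots of V^3 - 9aV^2 + 6aV - a = 0, with a = 1/(4T)
    a = 1.0 / (4.0 * T)
    r = np.sort(np.roots([1.0, -9.0 * a, 6.0 * a, -a]).real)
    return r[1], r[2]      # the smallest root lies outside V > 1/3

def maxwell(T):
    # find V_A, V_E with P(V_A) = P(V_E) = P_C, where P_C obeys
    # the equal-area condition
    def eqs(x):
        VA, VE = x
        PC = (8.0 * T / 3.0 / (VE - VA)
              * np.log((3.0 * VE - 1.0) / (3.0 * VA - 1.0))
              - 3.0 / (VA * VE))
        return [P(VA, T) - PC, P(VE, T) - PC]
    s = np.sqrt(1.0 - T)   # seeds: special-case values at V_C = 1
    VA, VE = fsolve(eqs, [(1.0 - 2.0 * s) / (4.0 * T - 3.0),
                          (1.0 + 2.0 * s) / (4.0 * T - 3.0)])
    return VA, VE, P(VA, T)

print(extrema(0.85))   # ~ (0.6717, 1.7209): V_B, V_D
print(maxwell(0.85))   # ~ (0.5534, 3.1276, 0.5045): V_A, V_E, P_C
\end{verbatim}
The printed values reproduce the first row of Tab.\,\ref{t:vispo}.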
The results are shown in Tab.\,\ref{t:vispo}, where the following parameters (in reduced variables) are listed for each VDW isothermal curve, see Fig.\,\ref{f:vrar}: the temperature, $\sT$; the lower volume limit, $\sV_{\rm A}$, for which the liquid and vapour phase coexist; the extremum point (minimum) volume, $\sV_{\rm B}$; the intermediate volume, $\sV_{\rm C}$, for which the pressure equals its counterpart related to the corresponding lower and upper volume limit, for which the liquid and vapour phase coexist; the extremum point (maximum) volume, $\sV_{\rm D}$; the upper volume limit, $\sV_{\rm E}$, for which the liquid and vapour phase coexist; the extremum point (minimum) pressure, $\sP_B$; the pressure, $\sP_A=\sP_C=\sP_E$, related to the horizontal real isothermal curve; the extremum point (maximum) pressure, $\sP_D$. \begin{table} \caption{Values of parameters, $\sT$, $\sV_{\rm A}$, $\sV_{\rm B}$, $\sV_{\rm C}$, $\sV_{\rm D}$, $\sV_{\rm E}$, $\sP_B$, $\sP_C$, $\sP_D$, within the range, $0.85\le\sT\le0.99$, using a step, $\Delta\sT=0.01$. Additional values are computed near the critical point, to increase the resolution. The true value of the reduced temperature on the last row is $\sT=0.9999$ or $10\sT=9.999$. All values equal unity at the critical point. Index captions: A, C, E - intersections between VDW and real isothermal curves; B - extremum point of minimum; D - extremum point of maximum. Extremum points are related to VDW isothermal curves, while their real counterparts are flat in presence of both liquid and vapour phase. In the column headings, a prefix 10 means that the tabulated values equal the related variable multiplied by ten, while a prefix 01 stands for unity and is kept only for alignment.} \label{t:vispo} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline $10\sT$ & $10\sV_{\rm A}$ & $10\sV_{\rm B}$ & $01\sV_{\rm C}$ & $01\sV_{\rm D}$ & $01\sV_{\rm E}$ & $10\sP_B$ & $10\sP_C$ & $10\sP_D$ \\ \hline 8.50 & 5.5336 & 6.7168 & 1.1453 & 1.7209 & 3.1276 & 0.4963 & 5.0449 & 6.2055 \\ 8.60 & 5.6195 & 6.8003 & 1.1337 & 1.6821 & 2.9545 & 1.2750 & 5.3125 & 6.4005 \\ 8.70 & 5.7116 & 6.8883 & 1.1225 & 1.6436 & 2.7909 & 2.0346 & 5.5887 & 6.6011 \\ 8.80 & 5.8106 & 6.9814 & 1.1116 & 1.6052 & 2.6360 & 2.7752 & 5.8736 & 6.8076 \\ 8.90 & 5.9176 & 7.0804 & 1.1009 & 1.5669 & 2.4889 & 3.4965 & 6.1674 & 7.0205 \\ 9.00 & 6.0340 & 7.1860 & 1.0905 & 1.5285 & 2.3488 & 4.1984 & 6.4700 & 7.2401 \\ 9.10 & 6.1615 & 7.2994 & 1.0804 & 1.4900 & 2.2151 & 4.8807 & 6.7816 & 7.4669 \\ 9.20 & 6.3022 & 7.4221 & 1.0706 & 1.4511 & 2.0869 & 5.5430 & 7.1021 & 7.7014 \\ 9.30 & 6.4593 & 7.5561 & 1.0610 & 1.4117 & 1.9634 & 6.1849 & 7.4318 & 7.9443 \\ 9.40 & 6.6369 & 7.7040 & 1.0516 & 1.3715 & 1.8438 & 6.8058 & 7.7707 & 8.1963 \\ 9.50 & 6.8412 & 7.8697 & 1.0425 & 1.3300 & 1.7271 & 7.4049 & 8.1188 & 8.4584 \\ 9.60 & 7.0819 & 8.0593 & 1.0336 & 1.2867 & 1.6118 & 7.9811 & 8.4762 & 8.7319 \\ 9.70 & 7.3756 & 8.2830 & 1.0249 & 1.2404 & 1.4960 & 8.5328 & 8.8429 & 9.0185 \\ 9.80 & 7.7554 & 8.5611 & 1.0164 & 1.1892 & 1.3761 & 9.0576 & 9.2191 & 9.3209 \\ 9.90 & 8.3091 & 8.9461 & 1.0081 & 1.1278 & 1.2430 & 9.5510 & 9.6048 & 9.6437 \\ 9.95 & 8.7471 & 9.2353 & 1.0040 & 1.0876 & 1.1618 & 9.7830 & 9.8012 & 9.8157 \\ 9.98 & 9.1727 & 9.5049 & 1.0016 & 1.0540 & 1.0972 & 9.9158 & 9.9202 & 9.9240 \\ 9.99 & 9.4018 & 9.6456 & 1.0008 & 1.0377 & 1.0670 & 9.9585 & 9.9600 & 9.9614 \\ 9.9$\bar{9}$ & 9.8035 & 9.8856 & 1.0001 & 1.0117 & 1.0204 & 9.9960 & 9.9960 & 9.9960 \\ \hline \end{tabular} \end{center} \end{table} The locus of the intersections between VDW and real isothermal curves is represented in Fig.\,\ref{f:vris} as a trifid curve, where the
left, the right, and the middle branch correspond to $\sV_{\rm A}$, $\sV_{\rm E}$, and $\sV_{\rm C}$, respectively. The common starting point coincides with the critical point. The locus of the VDW isothermal curve extremum points is represented in Fig.\,\ref{f:vris} as a dotted curve starting from the critical point, where the left and the right branch correspond to minimum and maximum points, respectively. A fluid state can be represented in reduced variables as ($\sV$, $\sP$, $\sT$), where one variable may be expressed as a function of the remaining two, by use of the reduced ideal gas equation of state, Eq.\,(\ref{eq:ri}), or the reduced VDW equation of state, Eq.\,(\ref{eq:rW1}). The formulation in terms of reduced variables, Eqs.\,(\ref{eq:rv}), makes the related equation of state universal, i.e., it holds for any fluid. Similarly, the Lane-Emden equation expressed in polytropic (dimensionless) variables describes the whole class of polytropic gas spheres with assigned polytropic index, in hydrostatic equilibrium (e.g., Chandrasekhar 1939, Chap.\,IV, \S4). The states of two fluids with equal ($\sV$, $\sP$, $\sT$) are defined as corresponding states. The mere existence of an equation of state yields the following result. \begin{trivlist} \item[\hspace\labelsep{\bf Law of corresponding states.}] \sl Given two fluids, the equality between two among three reduced variables, $\sV$, $\sP$, $\sT$, implies the equality between the remaining related reduced variables, i.e., the two fluids are in corresponding states. \end{trivlist} The law was first formulated by van der Waals in 1880. For further details refer to specific textbooks (e.g., LL67, Chap.\,VIII, \S85). \section{Equation of state of astrophysical fluids}\label{macro} Let macrogases be defined as two-component fluids which interact only gravitationally. For assigned density profiles, the virial theorem can be formulated for each subsystem, where the potential energy is the sum of the self potential energy of the component under consideration, and the tidal energy induced by the other one. The virial theorem for each subsystem can be expressed as a macrogas equation of state in terms of dimensionless variables, $X_V$, $X_p$, $X_T$, related to axis ratio, mass ratio, virial (i.e. self + tidal) potential energy ratio, respectively. The result is (C10): \begin{leftsubeqnarray} \slabel{eq:Xa} && X_pX_VF_X(X_p,X_V)=X_T~~; \\ \slabel{eq:Xb} && X_p=m^2~~;\qquad X_V=\frac1y~~;\qquad X_T=\phi~~; \label{seq:X} \end{leftsubeqnarray} where the function, $F_X$, depends on the selected density profiles, $m$ is the (outer to inner component) mass ratio, $y$ is the (outer to inner component) axis ratio along a generic direction, $\phi$ is the (outer to inner component) virial energy ratio, and the density profiles are restricted to be homeoidally striated. The variables, $X_V$, $X_p$, $X_T$, play a role similar to that of the volume, the pressure, and the temperature for ordinary fluids. Accordingly, $X_V$, $X_p$, $X_T$, may be defined as macrovolume, macropressure, and macrotemperature, respectively. For further details refer to the parent paper (C10). Macroisothermal curves on the $({\sf O}X_VX_p)$ plane exhibit a trend similar to that of VDW isothermal curves on the $({\sf O}Vp)$ plane, with two main differences. First, no critical point occurs for sufficiently mild density profiles, where all macroisothermal curves are characterized by two extremum points, one maximum and one minimum.
Second, a critical macroisothermal curve appears for sufficiently steep density profiles, above (instead of below) which macroisothermal curves exhibit extremum points. For further details refer to the parent paper (C10) and an earlier attempt (Caimmi and Valentinuzzi 2008). This last inconvenience may be avoided by turning Eq.\,(\ref{seq:X}) into the following: \begin{leftsubeqnarray} \slabel{eq:Ya} && Y_pY_VF_Y(Y_p,Y_V)=Y_T~~; \\ \slabel{eq:Yb} && Y_p=\frac1{X_p}~~;\qquad Y_V=\frac1{X_V}~~;\qquad Y_T=\frac1{X_T}~~; \\ \slabel{eq:Yc} && F_Y(Y_p,Y_V)=F_X(X_p,X_V)~~; \label{seq:Y} \end{leftsubeqnarray} as suggested in the parent paper (C10). The existence of a phase transition moving along a selected macroisothermal curve, where the path is a horizontal line (``real'' macroisothermal curve) instead of a curve including the extremum points (``actual'' macroisothermal curve), must necessarily be assumed as a working hypothesis, due to the analogy between VDW isothermal curves and macroisothermal curves. Unlike the VDW equation of state, Eq.\,(\ref{eq:pW}), the theoretical macrogas equation of state, Eq.\,(\ref{eq:Ya}), is not analytically integrable, which implies that the procedure used for determining a selected real macroisothermal curve must be performed numerically. The main steps are (i) calculate the intersections, $Y_{V_{\rm A}}$, $Y_{V_{\rm C}}$, $Y_{V_{\rm E}}$, $Y_{V_{\rm A}}<Y_{V_{\rm C}}<Y_{V_{\rm E}}$, between the generic horizontal line in the $({\sf O}Y_VY_p)$ plane, $Y_p=$const, and the theoretical macrogas equation of state, within the range, $Y_{p_{\rm B}}<Y_p< Y_{p_{\rm D}}$, where B and D denote the extremum points of minimum and maximum, respectively; (ii) calculate the areas of the regions, ${\sf ABC}$ and ${\sf CDE}$; (iii) find the special value, $Y_p=Y_{p_{\rm C}}$, which makes the two areas equal; (iv) trace the real macroisothermal curve as a horizontal line connecting the points, $(Y_{V_{\rm A}},Y_{p_{\rm A}})$, $(Y_{V_{\rm C}},Y_{p_{\rm C}})$, $(Y_{V_{\rm E}},Y_{p_{\rm E}})$, $Y_{p_{\rm A}}=Y_{p_{\rm C}}= Y_{p_{\rm E}}=Y_{p_c}$. For further details refer to an earlier attempt (C10). The procedure related to point (ii) above is rather cumbersome and should be performed again with the new variables, $Y_{\rm V}$, $Y_{\rm p}$, and $Y_{\rm T}$, with respect to an earlier attempt (C10); a minimal numerical sketch of steps (i)-(iii) is outlined below. For this reason, the current paper shall be restricted to theoretical macroisothermal curves and related extremum points. In order to preserve the analogy with ideal and VDW gases, the tidal potential energy shall be excluded and included, respectively, in the formulation of the virial theorem and related equation of state. The following cases shall be dealt with: UU macrogases, where no critical point occurs; HH macrogases, where the critical point occurs; HN/NH macrogases, where the critical point occurs. In the presence of the critical point, Eq.\,(\ref{seq:Y}) may be translated into reduced variables, as: \begin{leftsubeqnarray} \slabel{eq:sYa} && \sY_p\sY_VF_Y(\sY_p,\sY_V)\frac{Y_{p_c}Y_{V_c}}{Y_{T_c}}=\sY_T~~; \\ \slabel{eq:sYb} && \sY_p=\frac{Y_p}{Y_{p_c}}~~;\qquad\sY_V=\frac{Y_V}{Y_{V_c}}~~; \qquad \sY_T=\frac{Y_T}{Y_{T_c}}~~; \\ \slabel{eq:sYc} && F_Y(\sY_p,\sY_V)=F_Y(\sY_pY_{p_c},\sY_VY_{V_c})~~; \label{seq:sY} \end{leftsubeqnarray} where $Y_{p_c}$, $Y_{V_c}$, $Y_{T_c}$, are the values of the variables related to the critical point.
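Although $F_Y$ depends on the selected density profiles and is not written out here, steps (i)-(iii) above only require the macroisothermal curve as a numerical function. A minimal Python sketch follows, where the callable \texttt{P\_of\_V} is a placeholder for the curve obtained from Eq.\,(\ref{eq:Ya}) at fixed macrotemperature, and the reduced VDW curve is used merely as a stand-in to exercise the routine:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def real_isothermal(P_of_V, guess):
    # steps (i)-(iii): plateau pressure making the areas ABC and
    # CDE equal, for a curve known only numerically
    def eqs(x):
        VA, VE = x
        area, _ = quad(P_of_V, VA, VE)   # numerical quadrature
        Pc = area / (VE - VA)            # equal-area plateau
        return [P_of_V(VA) - Pc, P_of_V(VE) - Pc]
    VA, VE = fsolve(eqs, guess)
    return VA, VE, P_of_V(VA)

# stand-in curve: reduced VDW isothermal curve at T = 0.9
vdw = lambda V: 8.0 * 0.9 / (3.0 * V - 1.0) - 3.0 / V**2
print(real_isothermal(vdw, [0.65, 2.5]))  # ~ (0.6034, 2.3488, 0.6470)
\end{verbatim}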
The counterpart of Eq.\,(\ref{eq:sYa}) for ideal macrogases reads: \begin{lefteqnarray} \label{eq:sYi} && \sY_p\sY_VG_Y(\sY_p,\sY_V)\frac{Y_{p_c}Y_{V_c}}{Y_{T_c}}=\sY_T~~; \end{lefteqnarray} where $G_Y(\sY_p,\sY_V)$ is the expression of $F_Y(\sY_p,\sY_V)$ with the interaction terms omitted. For further details refer to an earlier attempt (C10). Accordingly, the equation of state for ideal macrogases where $G_Y(\sY_p,\sY_V)\,Y_{p_c}Y_{V_c}/Y_{T_c}=3/8$, coincides with its counterpart related to ideal gases, in conformity with Eq.\,(\ref{eq:ri}). Macroisothermal curves related to IUU (tidal potential energy excluded) and AUU (tidal potential energy included) macrogases are plotted in Fig.\,\ref{f:uuso}, left and right panels, respectively, for values of the macrotemperature, $Y_{\rm T}=20/23$, 20/22, 20/21, 20/20, 20/19, 20/18, from bottom to top. The coordinates, $Y_{\rm V}$, $Y_{\rm p}$, $Y_{\rm T}$, may be conceived as normalized to their fictitious critical counterparts, $Y_{V_{\rm c}}=1$, $Y_{p_{\rm c}}=1$, $Y_{T_{\rm c}}=1$ (C10). \begin{figure*}[t] \begin{center} \includegraphics[scale=0.8]{uuso100.eps} \caption{Macroisothermal curves related to IUU (left panel) and AUU (right panel) macrogases, respectively. Macroisothermal curves (from bottom to top) correspond to $Y_{\rm T}=$ 20/23, 20/22, 20/21, 20/20, 20/19, 20/18. No critical macroisothermal curve exists above which the extremum points disappear. The coordinates, $Y_{\rm V}$, $Y_{\rm p}$, $Y_{\rm T}$, may be conceived as normalized to their fictitious critical counterparts, $Y_{V_{\rm c}}=1$, $Y_{p_{\rm c}}=1$, $Y_{T_{\rm c}}=1$.} \label{f:uuso} \end{center} \end{figure*} The comparison with ideal and VDW gases, plotted in Fig.\,\ref{f:viso}, shows a similar trend, except for the absence of a critical macroisothermal curve, above which the extremum points disappear. Macroisothermal curves related to IHH (tidal potential energy excluded) and AHH (tidal potential energy included) macrogases are plotted in Fig.\,\ref{f:hhso}, left and right panels, respectively, for infinitely extended subsystems and values of the reduced macrotemperature, $\sY_{\rm T}=Y_{\rm T}/Y_{T_{\rm c}}=$ 20/23, 20/22, 20/21, 20/20, 20/19, 20/18, from bottom to top. \begin{figure*}[t] \begin{center} \includegraphics[scale=0.8]{hhso100.eps} \caption{Macroisothermal curves ($\sY_{\rm p}=Y_{\rm p}/Y_{p_c}$ vs. $\sY_{\rm V}=Y_{\rm V}/Y_{V_c}$) related to IHH (left panel) and AHH (right panel) macrogases, respectively, for infinitely extended subsystems. Macroisothermal curves (from bottom to top) correspond to $\sY_{\rm T}=Y_{\rm T}/ Y_{T_{\rm c}}=$20/23, 20/22, 20/21, 20/20, 20/19, 20/18. The general case of bounded subsystems makes only minor changes.} \label{f:hhso} \end{center} \end{figure*} The general case of bounded subsystems makes only minor changes. The comparison with ideal and VDW gases, plotted in Fig.\,\ref{f:viso}, shows a similar trend, with macroisothermal curves more extended along the horizontal direction than isothermal curves. Macroisothermal curves related to IHN/NH (tidal potential energy excluded) and AHN/NH (tidal potential energy included) macrogases are plotted in Fig.\,\ref{f:hnso}, left and right panels, respectively, for infinitely extended subsystems and values of the reduced macrotemperature, $\sY_{\rm T}=Y_{\rm T}/Y_{T_{\rm c}}=$ 20/23, 20/22, 20/21, 20/20, 20/19, 20/18, from bottom to top.
\begin{figure*}[t] \begin{center} \includegraphics[scale=0.8]{hnso100.eps} \caption{Macroisothermal curves ($\sY_{\rm p}=Y_{\rm p}/Y_{p_c}$ vs. $\sY_{\rm V}=Y_{\rm V}/Y_{V_c}$) related to IHN/NH (left panel; note the scale difference) and AHN/NH (right panel) macrogases, respectively, for infinitely extended subsystems. Macroisothermal curves (from bottom to top) correspond to $\sY_{\rm T}=Y_{\rm T}/ Y_{T_{\rm c}}=$23/20, 22/20, 21/20, 20/20, 19/20, 18/20. The general case of bounded subsystems makes only minor changes for AHN/NH macrogases, while the scale difference tends to disappear for IHN/NH macrogases.} \label{f:hnso} \end{center} \end{figure*} The general case of bounded subsystems makes only minor changes for AHN/NH macrogases, while the scale difference tends to disappear for IHN/NH macrogases. The comparison with ideal and VDW gases, plotted in Fig.\,\ref{f:viso}, shows a similar trend, with macroisothermal curves more extended along the horizontal direction than isothermal curves, and with the occurrence of a scale difference for ideal macrogases. The latter is due to a mass divergence for infinitely extended N density profiles, which makes tidal effects increase strongly. The comparison between the VDW critical isothermal curve and its counterparts related to HH and HN/NH macrogases is shown in Fig.\,\ref{f:mris}. \begin{figure*}[t] \begin{center} \includegraphics[scale=0.8]{mris100.eps} \caption{Comparison between VDW critical isothermal curve (full), HH critical macroisothermal curve (dotted) and HN/NH critical macroisothermal curve (dot-dashed). With regard to ordinary fluids, the vapour and the liquid phase coexist within the bell-shaped region bounded by the dashed curve and, in addition, $Y_{\rm V}=V$, $Y_{\rm p}=p$. More extended (along the horizontal direction) bell-shaped regions are expected for HH and HN/NH macroisothermal curves. The critical point belongs to all curves. Different letters denote the expected location of different astrophysical systems. Legend: EG - elliptical galaxies; S0 - lenticular galaxies; SG - spiral galaxies including barred; IG - irregular galaxies; DS - dwarf spheroidal galaxies; GC - globular clusters; CG - clusters of galaxies; WC - wholly gaseous clouds i.e. in absence of star formation; WG - (hypothetical) wholly gaseous galaxies i.e. in absence of star formation.} \label{f:mris} \end{center} \end{figure*} The dashed curve is the same as in Fig.\,\ref{f:vris}. Accordingly, the vapour and the liquid phase of ordinary fluids coexist within the bell-shaped region bounded by the dashed curve. Both HH and HN/NH macroisothermal curves are more extended along the horizontal direction with respect to VDW isothermal curves, which implies a more flattened counterpart of the above-mentioned bell-shaped region. The critical point belongs to all curves. \section{Discussion and conclusion} \label{disc} Tidal interactions between neighbouring bodies span across the whole admissible range of lengths in nature: from, say, atoms and molecules to galaxies and clusters of galaxies, i.e., from micro to macrocosmos. Ordinary fluids are collisional, which makes the stress tensor isotropic and the velocity distribution obey Maxwell's law. Tidal interactions (electromagnetic in nature) therein act between colliding particles (e.g., LL67, Chap.\,VII, \S74). Astrophysical fluids are collisionless, which makes the stress tensor anisotropic and the velocity distribution no longer obey Maxwell's law.
Tidal interactions (gravitational in nature) therein act between a single particle and the system as a whole (e.g., C10). In both cases, an equation of state can be formulated in reduced variables: the VDW equation for ordinary fluids and an equation which depends on the density profiles for astrophysical fluids. For sufficiently mild density profiles, macroisothermal curves are characterized by the occurrence of two extremum points, similarly to isothermal curves where a transition from liquid to gaseous phase takes place, or vice versa. For sufficiently steep density profiles, a critical macroisothermal curve exhibits a single horizontal inflexion point, which defines the critical point. Macroisothermal curves below and above the critical one show two extremum points or none, respectively, in complete analogy with VDW isothermal curves. In any case, the existence of an equation of state in reduced variables implies the validity of the law of corresponding states for macrogases with assigned density profiles. For astrophysical fluids, the existence of a phase transition must necessarily be assumed as a working hypothesis by analogy with ordinary fluids. The phase transition has to be conceived between gas and stars, and the (${\sf O} \sY_V\sY_p$) plane may be divided into three parts, namely (i) a region bounded by the critical macroisothermal curve on the left of the critical point, and the locus of onset of phase transition on the right of the critical point, where only gas exists; (ii) a region bounded by the critical macroisothermal curve on the left of the critical point, the locus of onset of phase transition on the left of the critical point, and the vertical axis, where only stars exist; (iii) a region bounded by the locus of onset of phase transition, and the horizontal axis, where gas and stars coexist. The locus of onset of phase transition, not shown in Fig.\,\ref{f:mris} for reasons explained above, is similar to its counterpart related to ordinary fluids, represented by the bell-shaped curve in Fig.\,\ref{f:mris}, but more extended along the horizontal direction. In this view, elliptical and S0 galaxies lie in region (ii) unless hosting hot interstellar gas, and the same holds for globular clusters; spiral, irregular, and dwarf spheroidal galaxies lie in region (iii), and the same holds for clusters of galaxies; gas clouds in absence of star formation lie in region (i), and the same holds for hypothetical galaxies with no stars. In conclusion, van der Waals' two great discoveries, more specifically a gas equation of state where tidal interactions between molecules are taken into account, and the law of corresponding states, related to microcosmos, find a counterpart with regard to macrocosmos. A century after the awarding of the Nobel Prize in Physics, van der Waals' ideas are still valid and helpful today for a full understanding of the universe.
{ "timestamp": "2012-10-16T02:01:28", "yymm": "1210", "arxiv_id": "1210.3688", "language": "en", "url": "https://arxiv.org/abs/1210.3688" }
\section{Introduction} Suppose that one has $n$ copies of a quantum system, each in the same state depending on an unknown parameter $\theta$, and one wishes to estimate $\theta$ by making some measurement on the $n$ systems together. This yields data whose distribution depends on $\theta$ and on the choice of the measurement. Given the measurement, we therefore have a classical parametric statistical model, though not necessarily an i.i.d. model, since we are allowed to bring the $n$ systems together before measuring the resulting joint system as one quantum object. In that case the resulting data need not consist of (a function of) $n$ i.i.d. observations, and a key quantum feature is that we can generally extract more information about $\theta$ using such ``collective'' or ``joint'' measurements than when we measure the systems separately. What is the best we can do as $n\to\infty$, when we are allowed to optimize both over the measurement and over the ensuing data processing? The objective of this paper is to study this question by extending the theory of local asymptotic normality (LAN), which is known to form an important part of the classical asymptotic theory, to quantum statistical models. Let us recall the classical LAN theory first. Given a statistical model $\S=\left\{ p_{\theta}\,;\;\theta\in\Theta\right\}$ on a probability space $(\Omega, \F, \mu)$ indexed by a parameter $\theta$ that ranges over an open subset $\Theta$ of $\R^{d}$, let us introduce a local parameter $h:=\sqrt{n}(\theta-\theta_0)$ around a fixed $\theta_0\in\Theta$. If the parametrization $\theta\mapsto p_\theta$ is sufficiently smooth, it is known that the statistical properties of the model $\left\{ p^{\otimes n}_{\theta_0+h/\sqrt{n}}\,;\, h\in\R^d\right\}$ are similar to those of the Gaussian shift model $\left\{ N(h,J_{\theta_{0}}^{-1})\,;\;h\in\R^{d}\right\}$ for large $n$, where $p^{\otimes n}_\theta$ is the $n$th i.i.d. extension of $p_\theta$, and $J_{\theta_0}$ is the Fisher information matrix of the model $p_\theta$ at $\theta_0$. This property is called the local asymptotic normality of the model $\S$ \cite{Vaart}. More generally, a sequence $\left\{ p_{\theta}^{(n)}\,;\;\theta\in\Theta\subset\R^{d}\right\}$ of statistical models on $(\Omega^{(n)}, \F^{(n)}, \mu^{(n)})$ is called {\em locally asymptotically normal} (LAN) at $\theta_{0}\in\Theta$ if there exist a $d\times d$ positive matrix $J$ and random vectors $\Delta^{(n)}=(\Delta_1^{(n)},\,\dots,\,\Delta_d^{(n)})$ such that $\Delta^{(n)}\convd 0 N(0,J)$ and \[ \log\frac{p_{\theta_{0}+h/\sqrt{n}}^{(n)}}{p_{\theta_{0}}^{(n)}} =h^{i}\Delta_{i}^{(n)}-\frac{1}{2}h^{i}h^{j}J_{ij}+o_{p_{\theta_0}}(1) \] for all $h\in\R^{d}$. Here the arrow $\convd h$ stands for convergence in distribution under $p_{\theta_{0}+h/\sqrt{n}}^{(n)}$, the remainder term $o_{p_{\theta_0}}(1)$ converges in probability to zero under $p_{\theta_0}^{(n)}$, and Einstein's summation convention is used. The above expansion is similar in form to the log-likelihood ratio of the Gaussian shift model: \[ \log\frac{dN(h,J^{-1})}{dN(0,J^{-1})}(X^1,\dots,\,X^d)=h^i(X^j J_{ij}) -\frac{1}{2}h^ih^j J_{ij}. \] This is the underlying mechanism behind the statistical similarities between the models $\left\{ p_{\theta_{0}+h/\sqrt{n}}^{(n)}\,;\;h\in\R^{d}\right\}$ and $\left\{ N(h,J^{-1})\,;\;h\in\R^{d}\right\}$. In order to put the similarities to practical use, one needs some mathematical devices. In general, a statistical theory comprises two parts.
One is to prove the existence of a statistic that possesses a certain desired property (direct part), and the other is to prove the non-existence of a statistic that exceeds that property (converse part). In the problem of asymptotic efficiency, for example, the converse part, the impossibility of doing asymptotically better than the best that can be done in the limit situation, is ensured by the following proposition, which is usually referred to as ``Le Cam's third lemma'' \cite{Vaart}. \begin{prop} \label{prop:clecam3} Suppose $\left\{ p_{\theta}^{(n)}\,;\;\theta\in\Theta\subset\R^{d}\right\} $ is LAN at $\theta_{0}\in\Theta$, with $\Delta^{(n)}$ and $J$ being as above, and let $X^{(n)}=(X_1^{(n)},\dots, X_r^{(n)})$ be a sequence of random vectors. If the joint distribution of $X^{(n)}$ and $\Delta^{(n)}$ converges to a Gaussian distribution, in that \[ \begin{pmatrix} X^{(n)}\\ \Delta^{(n)} \end{pmatrix} \convd{0} N\left(\begin{pmatrix}0\\ 0 \end{pmatrix}, \begin{pmatrix} \Sigma & \tau\\ \trans\tau & J \end{pmatrix}\right), \] then $X^{(n)}\convd hN(\tau h,\Sigma)$ for all $h\in\R^{d}$. Here $\trans\tau$ stands for the transpose of $\tau$. \end{prop} Now, this lemma appears to tell us something about the direct problem as well. \noindent In fact, by putting $X^{(n)j}:=\sum_{k=1}^{d}\left[J^{-1}\right]^{jk}\Delta_{k}^{(n)}$, we have \[ \begin{pmatrix}X^{(n)}\\ \Delta^{(n)} \end{pmatrix}\convd 0N\left(\begin{pmatrix}0\\ 0 \end{pmatrix},\begin{pmatrix}J^{-1} & I\\ I & J \end{pmatrix}\right), \] so that $X^{(n)} \convd hN(h,J^{-1})$ follows from Proposition \ref{prop:clecam3}. This proves the existence of an asymptotically efficient estimator for $h$. In the real world, however, we do not know $\theta_0$ (obviously!). Thus the existence of an asymptotically optimal estimator for $h$ does not translate into the existence of an asymptotically optimal estimator of $\theta$. In fact, the usual way that Le Cam's third lemma is used in the subsequent analysis is to prove the so-called representation theorem \cite[Theorem 7.10]{Vaart}. This theorem can be used to tell us in several precise mathematical senses that no estimator can asymptotically do better than what can be achieved in the limiting Gaussian model. For instance, Van der Vaart's version of the representation theorem leads to the asymptotic minimax theorem, telling us that the worst behaviour of an estimator as $\theta$ varies in a shrinking ($1/\sqrt{n}$) neighbourhood of $\theta_0$ cannot improve on what we expect from the limiting problem. This theorem applies to \emph{all} possible estimators, but only discusses their \emph{worst} behaviour in a neighbourhood of $\theta$. Another option is to use the representation theorem to derive the convolution theorem, which tells us that \emph{regular} estimators (estimators whose asymptotic behaviour in a small neighbourhood of $\theta$ is more or less stable as the parameter varies) have a limiting distribution which in a very strong sense is more dispersed than the optimal limiting distribution which we expect from the limiting statistical problem. This paper addresses a quantum extension of LAN (abbreviated as QLAN). As in classical statistics, one of the important subjects of QLAN is to show the existence of an estimator (direct part) that enjoys certain desired properties.
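As a concrete classical illustration (not part of the original development), the LAN expansion can be checked numerically for the Bernoulli model, for which $J=1/\{\theta_0(1-\theta_0)\}$ and $\Delta^{(n)}=\sum_{i}(X_i-\theta_0)/\{\sqrt{n}\,\theta_0(1-\theta_0)\}$. A minimal Python sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
theta0, h, n = 0.3, 1.0, 100_000
J = 1.0 / (theta0 * (1.0 - theta0))     # Fisher information

x = rng.binomial(1, theta0, size=n)     # sample under p_{theta0}
th = theta0 + h / np.sqrt(n)
loglr = np.sum(x * np.log(th / theta0)
               + (1 - x) * np.log((1 - th) / (1 - theta0)))
Delta = np.sum(x - theta0) / (np.sqrt(n) * theta0 * (1 - theta0))
print(loglr, h * Delta - 0.5 * h**2 * J)   # agree up to o_p(1)
\end{verbatim}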
Earlier works in asymptotic quantum parameter estimation theory revealed the asymptotic achievability of the Holevo bound, a quantum extension of the Cram\'er-Rao type bound (see Sections B.1 and B.2). Using a group representation theoretical method, Hayashi and Matsumoto \cite{HayashiMatsumoto} showed that the Holevo bound for the quantum statistical model $\S(\C^2)=\{\rho_\theta\,;\,\theta\in\Theta\subset\R^3\}$ comprising the totality of density operators on the Hilbert space $\H\simeq\C^2$ is asymptotically achievable at a given single point $\theta_{0}\in\Theta$. Following their work, Gu\c{t}\u{a} and Kahn \cite{GutaQLANfor2,GutaQLANforD} developed a theory of strong QLAN, and proved that the Holevo bound is asymptotically uniformly achievable around a given $\theta_{0}\in\Theta$ for the quantum statistical model $\S(\C^D)=\{\rho_\theta\,;\,\theta\in\Theta\subset\R^{D^2-1}\}$ comprising the totality of density operators on the finite dimensional Hilbert space $\H\simeq\C^D$. They proved that an i.i.d. model $\left\{ \rho_{\theta_0+h/\sqrt{n}}^{\otimes n}\,;\;h\in\R^{D^2-1}\right\}$ and a certain quantum Gaussian shift model can be asymptotically translated into each other by quantum channels. Although their result is powerful, their QLAN has several drawbacks. First of all, their method works only for i.i.d. extensions of the totality $\S(\H)$ of the quantum states on the Hilbert space $\H$, and is not applicable to generic submodels of $\S(\H)$. Moreover, it makes use of a special parametrization $\theta$ of $\S(\H)$, in which changes of eigenvalues and eigenvectors are treated as essential. Furthermore, it does not work if the reference state $\rho_{\theta_0}$ has degenerate eigenvalues. Since these difficulties are inevitable in the representation theoretical approach advocated by Hayashi and Matsumoto \cite{HayashiMatsumoto}, Gu\c{t}\u{a} and Jen\c{c}ov\'a \cite{GutaQLANweak} also tried a different approach to QLAN via the Connes cocycle derivative, which was put forward in the literature as an appropriate quantum analogue of the likelihood ratio. However, they did not formally establish an expansion which would be directly analogous to the classical LAN. In addition, their approach is limited to faithful state models. The purpose of the present paper is to develop a theory of weak QLAN based on a new quantum extension of the log-likelihood ratio. This formulation is applicable to any quantum statistical model satisfying a mild smoothness condition, and is free from artificial setups, such as the use of a special coordinate system or the non-degeneracy of the eigenvalues of the reference state at which QLAN is established. We also prove the asymptotic achievability of the Holevo bound for local shift parameters $h$ belonging to a dense subset of $\R^d$. This paper is organized as follows. The main results are summarized in Section \ref{sec:mainResults}. We first introduce a novel type of quantum log-likelihood ratio, and define a quantum extension of local asymptotic normality in a way quite analogous to the classical LAN. We then explore some basic properties of QLAN, including a sufficient condition for an i.i.d. model to be QLAN, and a quantum extension of Le Cam's third lemma. Section \ref{sec:appQLAN} is devoted to applications of QLAN, including the asymptotic achievability of the Holevo bound and asymptotic estimation theory for some typical qubit models. Proofs of the main results are deferred to Section A.
Furthermore, since we assume some basic knowledge of quantum estimation theory throughout the paper, we provide, for the reader's convenience, a brief exposition of quantum estimation theory in Section B, including quantum logarithmic derivatives, the commutation operator and the Holevo bound (Section B.1), estimation theory for quantum Gaussian shift models (Section B.2), and estimation theory for pure state models (Section B.3). It is also important to notice the limits of this work: many open problems are left for future study. In the classical case, the theory of LAN builds, of course, on the rich theory of convergence in distribution, as studied in probability theory. In the quantum case, there still does not exist a full parallel theory. Some of the most useful lemmas in the classical theory simply are not true when translated into the quantum domain. For instance, in the classical case, we know that if the sequence of random variables $X_n$ converges in distribution to a random variable $X$, and at the same time the sequence $Y_n$ converges in probability to a constant $c$, this implies joint convergence in distribution of $(X_n,Y_n)$ to the pair $(X,c)$. The obvious analogue of this in the quantum domain is simply untrue. In fact, there is not even a general theory of convergence in distribution at all: there is only a theory of convergence in distribution towards quantum Gaussian limits. Unfortunately, even in this special case the natural analogue of the just-mentioned result simply fails to be true. Because of these obstructions we are not at present able to follow the standard route from Le Cam's third lemma to the representation theorem, and from there to asymptotic minimax or convolution theorems. However, we believe that the paper presents some notable steps in this direction. Moreover, just as with the classical Le Cam third lemma, one can use its quantum extension to construct what can be conjectured to be asymptotically optimal measurement and estimation schemes. We make some more remarks on these possibilities later in the paper. \section{Main results\label{sec:mainResults}} \subsection{Quantum log-likelihood ratio} In developing the theory of QLAN, the crucial question is what quantity one should adopt as the quantum counterpart of the likelihood ratio. One may conceive of the Connes cocycle \[ [D\sigma, D\rho]_t:=\sigma^{\sqrt{-1}t} \, \rho^{-\sqrt{-1}t} \] as {\em the} proper counterpart since it plays an essential role in discussing the sufficiency of a subalgebra in quantum information theory \cite{Petz}. Nevertheless, we shall take a different route to the theory of QLAN, paying attention to the fact that a ``quantum exponential family'' \[ \rho_{\theta}= \e^{\frac{1}{2} (\theta L -\psi(\theta)I)} \rho_0 \, \e^{\frac{1}{2} (\theta L -\psi(\theta)I)} \] inherits nice properties of the classical exponential family \cite{AmariNagaoka,FujiwaraNagaoka:1995}. \begin{defn}[Quantum log-likelihood ratio] \label{def:qlikelihoodRatio} We say a pair of density operators $\rho$ and $\sigma$ on a finite dimensional Hilbert space $\H$ are \textit{mutually absolutely continuous}, $\rho\sim\sigma$ in symbols, if there exists a Hermitian operator $\L$ that satisfies \[ \sigma=\e^{\frac{1}{2}\L}\rho\,\e^{\frac{1}{2}\L}. \] We shall call such a Hermitian operator $\L$ a \textit{quantum log-likelihood ratio}.
When the reference states $\rho$ and $\sigma$ need to be specified, $\L$ shall be denoted by $\ratio{\sigma}{\rho}$, so that \[ \sigma=\e^{\frac{1}{2}\ratio{\sigma}{\rho}}\rho\,\e^{\frac{1}{2}\ratio{\sigma}{\rho}}. \] We use the convention that $\ratio{\rho}{\rho}=0$. \end{defn} \begin{example} We say a state on $\H\simeq \C^d$ is {\em faithful} if its density operator is positive definite. Any two faithful states are always mutually absolutely continuous, and the corresponding quantum log-likelihood ratio is unique. In fact, given $\rho>0$ and $\sigma>0$, they are related as $\sigma=\e^{\frac{1}{2}\ratio{\sigma}{\rho}}\rho\e^{\frac{1}{2}\ratio{\sigma}{\rho}}$, where \[ \ratio{\sigma}{\rho}=2\log\left(\sqrt{\rho^{-1}}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\sqrt{\rho^{-1}}\right). \] Note that $\Tr\rho\,\e^{\frac{1}{2}\ratio{\sigma}{\rho}}$ is identical to the fidelity between $\rho$ and $\sigma$, and $\e^{\frac{1}{2}\ratio{\sigma}{\rho}}$ is nothing but the operator geometric mean $\rho^{-1}\#\sigma$, where $A\#B:=A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{1/2}A^{1/2}$ for positive operators $A,B$ \cite{KuboAndo}. Since $A\# B=B\# A$, the quantum log-likelihood ratio can also be written as \[ \ratio{\sigma}{\rho}=2\log\left(\sqrt{\sigma}\left(\sqrt{\sqrt{\sigma}\rho\sqrt{\sigma}}\right)^{-1}\sqrt{\sigma}\right). \] \end{example} \begin{example} Pure states $\rho=\ket{\psi}\bra{\psi}$ and $\sigma=\ket{\xi}\bra{\xi}$ are mutually absolutely continuous if and only if $\braket{\xi}{\psi}\not=0$. In fact, the `only if' part is obvious. For the `if' part, consider $\ratio{\sigma}{\rho}:=2\log R$ where \[ R:=I+\frac{1}{\left|\braket{\xi}{\psi}\right|}\ket{\xi}\bra{\xi}-\ket{\psi}\bra{\psi}. \] Now \[ \e^{\frac{1}{2}\ratio{\sigma}{\rho}}\ket{\psi}=R\ket{\psi}=\frac{\braket{\xi}{\psi}}{\left|\braket{\xi}{\psi}\right|}\ket{\xi}, \] showing that $\rho\sim\sigma$. \end{example} \begin{rem} In general, density operators $\rho$ and $\sigma$ are mutually absolutely continuous if and only if \begin{equation}\label{eq:absCont} \sigma\!\!\downharpoonleft_{\supp\rho} \,>0 \quad\mbox{and}\quad \rank\rho=\rank\sigma, \end{equation} where $\sigma\!\!\downharpoonleft_{\supp\rho}$ denotes the ``excision'' of $\sigma$, the operator on the subspace $\supp\rho:=(\ker\rho)^\perp$ of $\H$ defined by \[ \sigma\!\!\downharpoonleft_{\supp\rho}:=\iota_\rho^*\, \sigma\, \iota_\rho, \] where $\iota_\rho: \supp\rho\hookrightarrow \H$ is the inclusion map. In fact, the `only if' part is immediate. To prove the `if' part, let $\rho$ and $\sigma$ be represented in the form of block matrices \[ \rho= \begin{pmatrix} \rho_0 & 0\\ 0 & 0 \end{pmatrix}, \qquad \sigma= \begin{pmatrix} \sigma_0 & \alpha\\ \alpha^* & \beta \end{pmatrix} \] with $\rho_0>0$. Since the first condition in (\ref{eq:absCont}) is equivalent to $\sigma_0>0$, the matrix $\sigma$ is further decomposed as \[ \sigma= E^* \begin{pmatrix} \sigma_0 & 0\\ 0 & \beta-\alpha^* \sigma_0^{-1} \alpha \end{pmatrix} E, \qquad E:= \begin{pmatrix} I & \sigma_0^{-1} \alpha\\ 0& I \end{pmatrix}, \] and the second condition in (\ref{eq:absCont}) turns out to be equivalent to $\beta-\alpha^* \sigma_0^{-1} \alpha=0$. Now let $\ratio{\sigma}{\rho}:=2\log R$, where \[ R:= E^* \begin{pmatrix} \rho_0^{-1} \# \sigma_0 & 0\\ 0 & \gamma \end{pmatrix} E \] with $\gamma$ being an arbitrary positive matrix. Then a simple calculation shows that $\sigma=R \rho R$. The above argument demonstrates that a quantum log-likelihood ratio, if it exists, is not unique when the reference states are not faithful. 
To be precise, the operator $\e^{\frac{1}{2}\ratio{\sigma}{\rho}}$ is determined only up to an additive Hermitian operator $K$ satisfying $\rho K=0$. This fact also shows that the quantity $\Tr\rho\,\e^{\frac{1}{2}\ratio{\sigma}{\rho}}$ is well-defined regardless of this ambiguity in $\ratio{\sigma}{\rho}$, and is identical to the fidelity. \end{rem} \subsection{Quantum central limit theorem} In quantum mechanics, canonical observables obey the following canonical commutation relations (CCR): \[ [Q_i,P_j]=\sqrt{-1}\,\hbar\delta_{ij} I, \quad [Q_i, Q_j]=0, \quad [P_i,P_j]=0, \] where $\hbar$ is the reduced Planck constant. In what follows we shall treat a slightly generalized form of the CCR: \[ \frac{\i}{2}[X_i,X_j] = S_{ij} I \qquad(1\leq i,j\leq d), \] where $S=[S_{ij}]$ is a $d\times d$ real skew-symmetric matrix. The algebra generated by the observables $(X_{1},\,\dots,\,X_{d})$ is denoted by $\CCR S$, and $X:=(X_{1},\,\dots,\,X_{d})$ are called the basic canonical observables of the algebra $\CCR S$. (See \cite{Holevo,qclt,CCR1,CCR2} for a rigorous definition of the CCR algebra.) A state $\phi$ on the algebra $\CCR S$ is characterized by the {\em characteristic function} \[ \F_{\xi}\{\phi\}:=\phi(\e^{\i\xi^{i}X_{i}}), \] where $\xi=(\xi^{i})_{i=1}^{d}\in\R^{d}$ and Einstein's summation convention is used. A state $\phi$ on $\CCR S$ is called a \textit{quantum Gaussian state}, denoted by $\phi\sim N(h,J)$, if the characteristic function takes the form \[ \F_{\xi}\{\phi\}=\e^{\i\xi^{i}h_{i}-\frac{1}{2}\xi^{i}\xi^{j}V_{ij}}, \] where $h=(h_{i})_{i=1}^{d}\in\R^{d}$ and $V=(V_{ij})$ is a real symmetric matrix such that the Hermitian matrix $J:=V+\sqrt{-1}S$ is positive semidefinite. When the canonical observables $X$ need to be specified, we also use the notation $(X,\phi)\sim N(h,J)$. (See \cite{GillGuta,GutaUsta,Holevo,GutaQLANforD} for more information about quantum Gaussian states.) We will discuss relationships between a quantum Gaussian state $\phi$ on a CCR and a state on another algebra. In such a case, we need to use the {\em quasi-characteristic function} \begin{equation} \phi\left(\prod_{t=1}^{r}\e^{\i\xi_{t}^{i}X_{i}}\right)=\exp\left(\sum_{t=1}^{r}\left(\sqrt{-1}\xi_{t}^{i}h_{i}-\frac{1}{2}\xi_{t}^{i}\xi_{t}^{j}J_{ji}\right)-\sum_{t=1}^{r}\sum_{s=t+1}^{r}\xi_{t}^{i}\xi_{s}^{j}J_{ji}\right),\label{eq:quasiChara} \end{equation} of a quantum Gaussian state, where $(X,\phi)\sim N(h,J)$ and $\{\xi_{t}\}_{t=1}^{r}$ is a finite subset of $\C^{d}$ \cite{qclt}. Given a sequence $\H^{(n)}$, $n\in\N$, of finite dimensional Hilbert spaces, let $X^{(n)}=(X_{1}^{(n)},\,\dots,\,X_{d}^{(n)})$ and $\rho^{(n)}$ be a list of observables and a density operator on each $\H^{(n)}$. We say the sequence $\left(X^{(n)},\rho^{(n)}\right)$ {\em converges in law to a quantum Gaussian state} $N(h,J)$, denoted as $(X^{(n)},\rho^{(n)})\convq qN(h,J)$, if \[ \lim_{n\rightarrow\infty}\Tr\rho^{(n)}\left(\prod_{t=1}^{r}\e^{\sqrt{-1}\xi_{t}^{i}X_{i}^{(n)}}\right)=\phi\left(\prod_{t=1}^{r}\e^{\sqrt{-1}\xi_{t}^{i}X_{i}}\right) \] for any finite subset $\{\xi_{t}\}_{t=1}^{r}$ of $\C^{d}$, where $(X,\phi)\sim N(h,J)$. Here we do not intend to introduce the notion of ``quantum convergence in law'' in general. We use this notion only for quantum Gaussian states in the sense of convergence of quasi-characteristic functions. The following is a version of the quantum central limit theorem (see \cite{qclt}, for example).
\begin{prop}[Quantum central limit theorem] \label{prop:qclt} Let $A_{i}$ $(1\leq i\leq d)$ and $\rho$ be observables and a state on a finite dimensional Hilbert space $\H$ such that $\Tr\rho A_{i}=0$, and let \[ X_{i}^{(n)}:=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}I^{\otimes(k-1)}\otimes A_{i}\otimes I^{\otimes(n-k)}. \] Then $(X^{(n)},\rho^{\otimes n})\convq qN(0,J)$, where $J$ is the Hermitian matrix whose $(i,j)$th entry is given by $J_{ij}=\Tr\rho A_{j}A_{i}$. \end{prop} For later convenience, we introduce the notion of an ``infinitesimal'' object relative to the convergence $(X^{(n)},\rho^{(n)})\convq qN(0,J)$ as follows. Given a list $X^{(n)}=(X_{1}^{(n)},\,\dots,\,X_{d}^{(n)})$ of observables and a state $\rho^{(n)}$ on each $\H^{(n)}$ that satisfy $(X^{(n)},\rho^{(n)})\convq qN(0,J)\sim\left(X,\phi\right)$, we say a sequence $R^{(n)}$ of observables, each being defined on $\H^{(n)}$, is \textit{infinitesimal relative to the convergence} $(X^{(n)},\rho^{(n)})\convq qN(0,J)$ if it satisfies \begin{equation} \lim_{n\rightarrow\infty}\Tr\rho^{(n)}\left(\prod_{t=1}^{r}\e^{\sqrt{-1}\left(\xi_{t}^{i}X_{i}^{(n)}+\eta_{t}R^{(n)}\right)}\right)=\phi\left(\prod_{t=1}^{r}\e^{\sqrt{-1}\xi_{t}^{i}X_{i}}\right)\label{eq:infinitesimal} \end{equation} for any finite subset $\left\{ \xi_{t}\right\} _{t=1}^{r}$ of $\C^{d}$ and any finite subset $\left\{ \eta_{t}\right\} _{t=1}^{r}$ of $\C$. This is equivalent to saying that \[ \left(\begin{pmatrix}X^{(n)}\\ R^{(n)} \end{pmatrix},\rho^{(n)}\right)\convq q N\left(\begin{pmatrix}0\\ 0 \end{pmatrix}, \begin{pmatrix} J & 0\\ 0& 0 \end{pmatrix}\right), \] and is a much stronger requirement than \[ (R^{(n)},\rho^{(n)})\convq qN(0,0). \] An infinitesimal object $R^{(n)}$ relative to $(X^{(n)},\rho^{(n)})\convq qN(0,J)$ will be denoted as $o(X^{(n)},\rho^{(n)})$. The following is in essence a simple extension of Proposition \ref{prop:qclt}, but will turn out to be useful in applications. \begin{lem} \label{lem:oclt} In addition to assumptions of Proposition \ref{prop:qclt}, let $P(n)$, $n\in\mathbb{N}$, be a sequence of observables on $\H$, and let \[ R^{(n)}:=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}I^{\otimes(k-1)}\otimes P(n)\otimes I^{\otimes(n-k)}. \] If $\lim_{n\rightarrow\infty}P(n)=0$ and $\lim_{n\rightarrow\infty}\sqrt{n}\,\Tr\rho P(n)=0$, then $R^{(n)}=o(X^{(n)},\rho^{\otimes n})$. \end{lem} This lemma gives a convenient criterion for a sequence of collective observables to be infinitesimal relative to the quantum central limit convergence. \subsection{Quantum local asymptotic normality} We are now ready to extend the notion of local asymptotic normality to the quantum domain. \begin{defn}[QLAN] \label{def:QLAN}Given a sequence $\H^{(n)}$ of finite dimensional Hilbert spaces, let $\S^{(n)}=\left\{ \rho_{\theta}^{(n)}\;;\; \theta\in\Theta\subset\R^{d}\right\} $ be a quantum statistical model on $\H^{(n)}$, where $\rho_{\theta}^{(n)}$ is a parametric family of density operators and $\Theta$ is an open set.
We say $\S^{(n)}$ is \textit{quantum locally asymptotically normal} (QLAN) at $\theta_{0}\in\Theta$ if the following conditions are satisfied:\end{defn} \begin{enumerate} \item for any $\theta\in\Theta$ and $n\in\N$, $\rho_{\theta}^{(n)}$ is mutually absolutely continuous to $\rho_{\theta_{0}}^{(n)}$, \item there exists a list $\Delta^{(n)}=(\Delta_{1}^{(n)},\,\dots,\,\Delta_{d}^{(n)})$ of observables on each $\H^{(n)}$ that satisfies \[ \left(\Delta^{(n)},\rho_{\theta_{0}}^{(n)}\right)\convq qN(0,J), \] where $J$ is a $d\times d$ Hermitian positive semidefinite matrix with $\Re J>0$, \item the quantum log-likelihood ratio $\L_{h}^{(n)}:=\bigratio{\rho_{\theta_{0}+h/\sqrt{n}}^{(n)}}{\rho_{\theta_{0}}^{(n)}}$ is expanded in $h\in\R^d$ as \begin{equation} \L_{h}^{(n)} =h^{i}\Delta_{i}^{(n)}-\frac{1}{2}(J_{ij}h^{i}h^{j})I^{(n)}+o(\Delta^{(n)},\,\rho_{\theta_{0}}^{(n)}), \label{eq:qlanexpand} \end{equation} where $I^{(n)}$ is the identity operator on $\H^{(n)}$. \end{enumerate} It is also possible to extend Le Cam's third lemma (Proposition \ref{prop:clecam3}) to the quantum domain. To this end, however, we need a device to handle the infinitesimal residual term in (\ref{eq:qlanexpand}) in a more elaborate way. \begin{defn} \label{def:QLANX} Let $\S^{(n)}=\left\{ \rho_{\theta}^{(n)}\,;\;\theta\in\Theta\subset\R^{d}\right\} $ be as in Definition \ref{def:QLAN}, and let $X^{(n)}=(X_{1}^{(n)},\,\dots,\,X_{r}^{(n)})$ be a list of observables on $\H^{(n)}$. We say the pair $(\S^{(n)}, X^{(n)})$ is {\em jointly QLAN} at $\theta_{0}\in\Theta$ if the following conditions are satisfied:\end{defn} \begin{enumerate} \item for any $\theta\in\Theta$ and $n\in\N$, $\rho_{\theta}^{(n)}$ is mutually absolutely continuous to $\rho_{\theta_{0}}^{(n)}$, \item there exists a list $\Delta^{(n)}=(\Delta_{1}^{(n)},\,\dots,\,\Delta_{d}^{(n)})$ of observables on each $\H^{(n)}$ that satisfies \begin{equation} \left(\begin{pmatrix}X^{(n)}\\ \Delta^{(n)} \end{pmatrix},\rho_{\theta_{0}}^{(n)}\right)\convq qN\left(\begin{pmatrix}0\\ 0 \end{pmatrix},\begin{pmatrix}\Sigma & \tau\\ \tau^{*} & J \end{pmatrix}\right), \label{eq:qcovergenceTogether} \end{equation} where $\Sigma$ and $J$ are Hermitian positive semidefinite matrices of size $r\times r$ and $d\times d$, respectively, with $\Re J>0$, and $\tau$ is a complex matrix of size $r\times d$. \item the quantum log-likelihood ratio $\L_{h}^{(n)}:=\bigratio{\rho_{\theta_{0}+h/\sqrt{n}}^{(n)}}{\rho_{\theta_{0}}^{(n)}}$ is expanded in $h\in\R^d$ as \begin{equation} \L_{h}^{(n)} =h^{i}\Delta_{i}^{(n)}-\frac{1}{2}(J_{ij}h^{i}h^{j})I^{(n)}+o\left(\begin{pmatrix}X^{(n)}\\ \Delta^{(n)} \end{pmatrix},\,\rho_{\theta_{0}}^{(n)}\right). \label{eq:qlecam3expand} \end{equation} \end{enumerate} With Definition \ref{def:QLANX}, we can state a quantum extension of Le Cam's third lemma as follows. \begin{thm} \label{thm:qlecam3} Let $\S^{(n)}$ and $X^{(n)}$ be as in Definition \ref{def:QLANX}. If $(\S^{(n)}, X^{(n)})$ is jointly QLAN at $\theta_{0}\in\Theta$, then \[ \left(X^{(n)},\,\rho_{\theta_{0}+h/\sqrt{n}}^{(n)} \right)\convq qN(\left(\Re\tau\right)h,\,\Sigma) \] for any $h\in\R^{d}$. \end{thm} It should be emphasized that assumption \eqref{eq:qlecam3expand}, which was superfluous in the classical theory, is in fact crucial in proving Theorem \ref{thm:qlecam3}. In applications, we often handle i.i.d. extensions. In classical statistics, a sequence of i.i.d. extensions of a model is LAN if the log-likelihood ratio is twice differentiable \cite{Vaart}.
Quite analogously, we can prove, with the help of Lemma \ref{lem:oclt}, that a sequence of i.i.d. extensions of a quantum statistical model is QLAN if the quantum log-likelihood ratio is twice differentiable. \begin{thm} \label{thm:QLANiid} Let $\left\{ \rho_{\theta}\,;\;\theta\in\Theta\subset\R^{d}\right\} $ be a quantum statistical model on a finite dimensional Hilbert space $\H$ satisfying $\rho_{\theta}\sim\rho_{\theta_{0}}$ for all $\theta\in\Theta$, where $\theta_{0}\in\Theta$ is an arbitrarily fixed point. If $\L_{h}:=\ratio{\rho_{\theta_{0}+h}}{\rho_{\theta_{0}}}$ is differentiable around $h=0$ and twice differentiable at $h=0$, then $\left\{ \rho_{\theta}^{\otimes n}\,;\;\theta\in\Theta\subset\R^{d}\right\} $ is QLAN at $\theta_{0}$: that is, $\rho_{\theta}^{\otimes n}\sim\rho_{\theta_{0}}^{\otimes n}$, and \[ \Delta_{i}^{(n)}:=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}I^{\otimes(k-1)}\otimes L_{i}\otimes I^{\otimes(n-k)} \] and $J_{ij}:=\Tr\rho_{\theta_{0}}L_{j}L_{i}$, with $L_{i}$ being the $i$th symmetric logarithmic derivative at $\theta_{0}\in\Theta$, satisfy conditions (ii) and (iii) in Definition \ref{def:QLAN}. \end{thm} By combining Theorem \ref{thm:QLANiid} with Theorem \ref{thm:qlecam3} and Lemma \ref{lem:oclt}, we obtain the following. \begin{cor} \label{cor:qlecam3iid} Let $\left\{ \rho_{\theta}\,;\;\theta\in\Theta\subset\R^{d}\right\}$ be a quantum statistical model on $\H$ satisfying $\rho_{\theta}\sim\rho_{\theta_{0}}$ for all $\theta\in\Theta$, where $\theta_{0}\in\Theta$ is an arbitrarily fixed point. Further, let $\{B_{i}\}_{1\le i\le r}$ be observables on $\H$ satisfying $\Tr\rho_{\theta_{0}}B_{i}=0$ for $i=1,\ldots,r$. If $\L_{h}:=\ratio{\rho_{\theta_{0}+h}}{\rho_{\theta_{0}}}$ is differentiable around $h=0$ and twice differentiable at $h=0$, then the pair $\left(\left\{ \rho_{\theta}^{\otimes n}\right\},\,X^{(n)}\right)$ of i.i.d. extension model $\left\{\rho_{\theta}^{\otimes n}\right\}$ and the list $X^{(n)}=\{X^{(n)}_i\}_{1\le i\le r}$ of observables defined by \[ X_{i}^{(n)}:=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}I^{\otimes(k-1)}\otimes B_{i}\otimes I^{\otimes(n-k)} \] is jointly QLAN at $\theta_{0}$, and \[ \left(X^{(n)},\rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n} \right)\convq qN(\left(\Re\tau\right)h,\Sigma) \] for any $h\in\R^{d}$, where $\Sigma$ is the $r\times r$ positive semidefinite matrix defined by $\Sigma_{ij}=\Tr\rho_{\theta_{0}}B_{j}B_{i}$ and $\tau$ is the $r\times d$ matrix defined by $\tau_{ij}=\Tr\rho_{\theta_0}L_j B_i$ with $L_i$ being the $i$th symmetric logarithmic derivative at $\theta_0$. \end{cor} Corollary \ref{cor:qlecam3iid} is an i.i.d. version of the quantum Le Cam third lemma, and will play a key role in demonstrating the asymptotic achievability of the Holevo bound. \section{Applications to quantum statistics\label{sec:appQLAN}} \subsection{Achievability of the Holevo bound} Corollary \ref{cor:qlecam3iid} prompts us to expect that, for sufficiently large $n$, the estimation problem for the parameter $h$ of $\rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n}$ could be reduced to that for the shift parameter $h$ of the quantum Gaussian shift model $N(\left(\Re\tau\right)h,\,\Sigma)$. The latter problem is by now well understood (see Section B.2). In particular, the best strategy for estimating the shift parameter $h$ is the one that achieves the Holevo bound $C_{h}\left(N(\left(\Re\tau\right)h,\Sigma), \,G\right)$ (see Theorem B.7).
Moreover, it is shown (see Corollary B.6) that the Holevo bound $C_{h}\left(N(\left(\Re\tau\right)h,\Sigma), \,G\right)$ is identical to the Holevo bound $C_{\theta_{0}}\left(\rho_{\theta},\, G\right)$ for the model $\rho_\theta$ at $\theta_0$. These facts suggest the existence of a sequence $M^{(n)}$ of estimators for the parameter $h$ of $\left\{\rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n}\right\}_n$ that asymptotically achieves the Holevo bound $C_{\theta_{0}}\left(\rho_{\theta},\, G\right)$. The following theorem materializes this program. \begin{thm} \label{thm:achieveHolevo} Let $\left\{ \rho_{\theta}\,;\;\theta\in\Theta\subset\R^{d}\right\} $ be a quantum statistical model on a finite dimensional Hilbert space $\H$, and fix a point $\theta_{0}\in\Theta$. Suppose that $\rho_{\theta}\sim\rho_{\theta_{0}}$ for all $\theta\in\Theta$, and that the quantum log-likelihood ratio $\L_{h}:=\ratio{\rho_{\theta_{0}+h}}{\rho_{\theta_{0}}}$ is differentiable in $h$ around $h=0$ and twice differentiable at $h=0$. For any countable dense subset $D$ of $\R^{d}$ and any weight matrix $G$, there exists a sequence $M^{(n)}$ of estimators on the model $\left\{ \rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n}\,;\;h\in\R^{d}\right\} $ that enjoys \[ \lim_{n\rightarrow\infty}E_{h}^{(n)}[M^{(n)}]=h \] and \[ \lim_{n\rightarrow\infty}\Tr GV_{h}^{(n)}[M^{(n)}]=C_{\theta_{0}}\left(\rho_{\theta},G\right) \] for every $h\in D$. Here $C_{\theta_{0}}\left(\rho_{\theta},G\right)$ is the Holevo bound at $\theta_{0}$, and $E_{h}^{(n)}[\,\cdot\,]$ and $V_{h}^{(n)}[\,\cdot\,]$ stand for the expectation and the covariance matrix under the state $ \rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n}$. \end{thm} Theorem \ref{thm:achieveHolevo} asserts that there is a sequence $M^{(n)}$ of estimators on $\left\{\rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n}\right\}_n$ that is asymptotically unbiased and achieves the Holevo bound $C_{\theta_{0}}\left(\rho_{\theta},G\right)$ for all $h$ that belong to a dense subset of $\R^{d}$. Since this result requires only the twice differentiability of the quantum log-likelihood ratio of the base model $\rho_\theta$, it will be useful in a wide range of statistical estimation problems. \subsection{Application to qubit state estimation} In order to demonstrate the applicability of our theory, we explore qubit state estimation problems. \begin{example}[3-dimensional faithful state model]\label{ex:3d} \end{example} The first example is an ordinary one, comprising the totality of faithful qubit states: \[ \S(\C^2)=\left\{ \rho_{\theta}=\frac{1}{2}\left(I+\theta^{1}\sigma_{1}+\theta^{2}\sigma_{2}+\theta^{3}\sigma_{3}\right) \,;\; \theta=(\theta^i)_{1\le i\le 3} \in \Theta \right\} \] where $\sigma_{i}$ ($i=1,2,3$) are the standard Pauli matrices and $\Theta$ is the open unit ball in $\R^3$. Due to the rotational symmetry, we take the reference point to be $\theta_0=(0,0,r)$, with $0\le r<1$. By a direct calculation, we see that the symmetric logarithmic derivatives (SLDs) of the model $\rho_\theta$ at $\theta=\theta_{0}$ are $(L_1,\,L_2,\,L_3)=\left(\sigma_{1},\,\sigma_{2},\, (rI+\sigma_{3})^{-1} \right)$, and the SLD Fisher information matrix $J^{(S)}$ at $\theta_0$ is given by the real part of the matrix \[ J:=\left[\Tr\rho_{\theta_{0}}L_{j}L_{i}\right]_{ij}=\begin{pmatrix}1 & -r\sqrt{-1} & 0\\ r\sqrt{-1} & 1 & 0\\ 0 & 0 & 1/(1-r^{2}) \end{pmatrix}.
\] Given a $3\times3$ real positive definite matrix $G$, the minimal value of the weighted covariances at $\theta=\theta_0$ is given by \[ \min_{\hat{M}}\Tr GV_{\theta_{0}}[\hat{M}]=C_{\theta_{0}}^{(1)}\left(\rho_{\theta},G\right), \] where the minimum is taken over all estimators $\hat M$ that are locally unbiased at $\theta_0$, and \[ C_{\theta_{0}}^{(1)}\left(\rho_{\theta},G\right) =\left( \Tr\sqrt{ \sqrt{G} J^{(S)^{-1}} \sqrt{G} } \right)^{2} \] is the Hayashi-Gill-Massar bound \cite{GillMassar,Hayashi} (see also \cite{YamagataTomo}). On the other hand, the SLD tangent space (i.e., the linear span of the SLDs) is obviously invariant under the action of the commutation operator $\D$, and the Holevo bound is given by \[ C_{\theta_{0}}\left(\rho_{\theta},G\right):=\Tr GJ^{(R)^{-1}}+\Tr\left|\sqrt{G}\,\Im J^{(R)^{-1}}\sqrt{G}\right|, \] where \[ J^{(R)^{-1}}:=(\Re J)^{-1}J(\Re J)^{-1}=\begin{pmatrix}1 & -r\sqrt{-1} & 0\\ r\sqrt{-1} & 1 & 0\\ 0 & 0 & 1-r^{2} \end{pmatrix} \] is the inverse of the right logarithmic derivative (RLD) Fisher information matrix (see Corollary B.2). It can be shown that the Hayashi-Gill-Massar bound is greater than the Holevo bound: \[ C_{\theta_0}^{(1)}\left(\rho_{\theta},G\right)>C_{\theta_{0}}\left(\rho_{\theta},G\right). \] Let us check this fact for the special case when $G=J^{(S)}$. A direct computation shows that \[ C_{\theta_0}^{(1)}\left(\rho_{\theta},J^{(S)} \right)=9,\] and \[ C_{\theta_0}\left(\rho_{\theta},J^{(S)} \right)=3+2r \] (these closed-form values are checked numerically in the sketch below). The left panel of Figure \ref{fig:bounds} shows the behavior of $C_{\theta_0}\left(\rho_{\theta},J^{(S)} \right)$ (solid) and $C_{\theta_0}^{(1)}\left(\rho_{\theta},J^{(S)} \right)$ (dashed) as functions of $r$. We see that the Holevo bound $C_{\theta_0}\left(\rho_{\theta},J^{(S)} \right)$ is much smaller than $C_{\theta_0}^{(1)}\left(\rho_{\theta},J^{(S)} \right)$. Does this fact imply that the Holevo bound is of no use? Quite the contrary, as Theorem \ref{thm:achieveHolevo} asserts. We will demonstrate the asymptotic achievability of the Holevo bound. Let \[ \Delta_{i}^{(n)}:=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}I^{\otimes k-1}\otimes L_{i}\otimes I^{\otimes n-k} \] and let $X_i^{(n)}:=\Delta_i^{(n)}$ for $i=1,2,3$. It follows from the quantum central limit theorem that \[ \left(\begin{pmatrix}X^{(n)}\\ \Delta^{(n)} \end{pmatrix},\,\rho_{\theta_{0}}^{\otimes n}\right)\convq qN\left(0,\begin{pmatrix}J & J\\ J & J \end{pmatrix}\right). \] Since \[ \L(\theta):=\ratio{\rho_{\theta}}{\rho_{\theta_{0}}}=2\log\left(\sqrt{\rho_{\theta_{0}}^{-1}}\sqrt{\sqrt{\rho_{\theta_{0}}}\rho_{\theta}\sqrt{\rho_{\theta_{0}}}}\sqrt{\rho_{\theta_{0}}^{-1}}\right) \] is obviously of class $C^{\infty}$ in $\theta$, Corollary \ref{cor:qlecam3iid} shows that $\left(\left\{ \rho_{\theta}^{\otimes n}\right\},\,X^{(n)}\right)$ is jointly QLAN at $\theta_{0}$, and that \[ \left(X^{(n)},\rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n} \right)\convq qN((\Re J)h,J) \] for all $h\in\R^3$. This implies that a sequence of models $\left\{\rho^{\otimes n}_{\theta_0+h/\sqrt{n}} \,;\, h\in\R^3\right\}$ converges to a quantum Gaussian shift model $\left\{N((\Re J)h,\, J)\,;\,h\in\R^3\right\}$. Note that the imaginary part \[ S= \begin{pmatrix}0 & -r & 0\\ r & 0 & 0\\ 0 & 0 & 0 \end{pmatrix} \] of the matrix $J$ determines the $\CCR{S}$, as well as the corresponding basic canonical observables $X=(X^1,X^2,X^3)$.
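The following sketch is ours and not part of the original analysis; it assumes Python with NumPy and SciPy. It evaluates the Hayashi-Gill-Massar and Holevo bounds directly from the displayed matrix $J$, using the decomposition $J=\Re J+\sqrt{-1}\,S$, and reproduces the values $9$ and $3+2r$.
\begin{verbatim}
# Numerical check of C^(1) = 9 and C = 3 + 2r for the 3-D qubit model,
# with weight matrix G = J^(S).
import numpy as np
from scipy.linalg import sqrtm

def bounds(r):
    J = np.array([[1, -1j*r, 0], [1j*r, 1, 0], [0, 0, 1/(1 - r**2)]])
    JS, S = J.real, J.imag        # SLD Fisher information and CCR part
    JRinv = np.linalg.inv(JS) @ J @ np.linalg.inv(JS)  # inverse RLD info
    G = JS                        # weight matrix G = J^(S)
    sqG = sqrtm(G)
    hgm = np.trace(sqrtm(sqG @ np.linalg.inv(JS) @ sqG)).real**2
    # Tr|A| = sum of the singular values of A
    holevo = np.trace(G @ JRinv).real \
        + np.linalg.svd(sqG @ JRinv.imag @ sqG, compute_uv=False).sum()
    return hgm, holevo

for r in (0.0, 0.3, 0.8):
    hgm, holevo = bounds(r)
    print(r, hgm, holevo, 3 + 2*r)   # hgm = 9 and holevo = 3 + 2r
\end{verbatim}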
When $r\neq 0$, the above $S$ has the following physical interpretation: $X^1$ and $X^2$ form a canonical pair of quantum Gaussian observables, while $X^3$ is a classical Gaussian random variable. In this way, the matrix $J$ automatically tells us the structure of the limiting quantum Gaussian shift model. Now, the best strategy for estimating the shift parameter $h$ of the quantum Gaussian shift model $\left\{N((\Re J)h,\,J)\,;\,h\in\R^3\right\}$ is the one that achieves the Holevo bound $C_{h}\left(N(\left(\Re J\right)h, J), \,G\right)$ (see Theorem B.7). Moreover, this Holevo bound $C_{h}\left(N(\left(\Re J\right)h,J), \,G\right)$ is identical to the Holevo bound $C_{\theta_{0}}\left(\rho_{\theta},\, G\right)$ for the model $\rho_\theta$ at $\theta_0$ (see Corollary B.6; recall that the matrix $J$ is evaluated at the point $\theta_0$ of the model $\rho_\theta$). Theorem \ref{thm:achieveHolevo} combines these facts, and concludes that there exists a sequence $M^{(n)}$ of estimators on the model $\left\{ \rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n}\,;\;h\in\R^{3}\right\} $ that is asymptotically unbiased and achieves the common value of the Holevo bound: \[ \lim_{n \rightarrow \infty} \Tr G V_{h}^{(n)}[M^{(n)}] =C_{h}\left(N((\Re J)h,J),G\right) =C_{\theta_0}\left(\rho_{\theta},\,G\right) \] for all $h$ that belong to a countable dense subset of $\R^3$. It should be emphasized that the matrix $J$ becomes the identity at the origin $\theta_0=(0,0,0)$. This means that the limiting Gaussian shift model $\left\{N(h, J)\,;\;h\in\R^{3}\right\} $ is ``classical.'' Since such a degenerate case cannot be treated in \cite{GutaQLANfor2, HayashiMatsumoto, GutaQLANforD}, our method has a clear advantage in applications. \begin{figure} \begin{centering} \includegraphics{bound1} \includegraphics{bound2} \par \end{centering} \caption{ The left panel displays the Holevo bound $C_{(0,0,r)}\left(\rho_{\theta},J^{(S)} \right)$ (solid) and the Hayashi-Gill-Massar bound $C_{(0,0,r)}^{(1)}\left(\rho_{\theta},J^{(S)} \right)$ (dashed) for the 3-D model $\rho_{\theta}=\frac{1}{2}\left(I+\theta^{1}\sigma_{1}+\theta^{2}\sigma_{2}+\theta^{3}\sigma_{3}\right)$ as functions of $r=\|\theta\|$. The right panel displays the Holevo bound $C_{(0,r)}\left(\rho_{\theta},J^{(S)} \right)$ (solid) and the Nagaoka bound $C_{(0,r)}^{(1)}\left(\rho_{\theta},J^{(S)} \right)$ (dashed) for the 2-D model $\rho_{\theta}=\frac{1}{2}\left( I+\theta^{1}\sigma_{1}+\theta^{2}\sigma_{2}+\frac{1}{4}\sqrt{1-\|\theta\|^2}\, \sigma_3 \right)$. \label{fig:bounds}} \end{figure} \begin{example}[Pure state model] \end{example} The second example demonstrates that our formulation allows us to treat pure state models. Let us consider the model $\S=\{|\psi(\theta)\rangle\langle\psi(\theta)|\,;\, \theta=(\theta^i)_{1\le i\le 2} \in\Theta\}$ defined by \[ \psi(\theta):=\frac{1}{\sqrt{\cosh\left\| \theta\right\|}}\, \e^{\frac{1}{2}\left(\theta^{1}\sigma_{1}+\theta^{2}\sigma_{2}\right)} \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \] where $\Theta$ is an open subset of $\R^2$ containing the origin, and $\|\,\cdot\,\|$ denotes the Euclidean norm. By a direct computation, the SLDs at $\theta_0=(0,0)$ are $(L_1,\, L_2)=(\sigma_{1}, \,\sigma_{2})$, and the SLD Fisher information matrix $J^{(S)}$ is the real part of the matrix \[ J=\left[\Tr\rho_{\theta_{0}}L_{j}L_{i}\right]_{ij}=\begin{pmatrix}1 & -\sqrt{-1}\\ \sqrt{-1} & 1 \end{pmatrix}, \] that is, $J^{(S)}=I$.
Since the SLD tangent space is $\D$ invariant \cite{fujiwaraCoherent}, the Holevo bound for a weight $G>0$ is represented as \[ C_{\theta_0}\left(\rho_{\theta},G\right):=\Tr GJ^{(R)^{-1}}+\Tr\left|\sqrt{G}\,\Im J^{(R)^{-1}}\sqrt{G}\right| \] where \[ J^{(R)^{-1}}:=(\Re J)^{-1}J(\Re J)^{-1}=\begin{pmatrix}1 & -\sqrt{-1}\\ \sqrt{-1} & 1 \end{pmatrix} \] is the inverse RLD Fisher information matrix (see Corollary B.2). Let us demonstrate that our QLAN is applicable also to pure state models. Let \[ \Delta_{i}^{(n)}:=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}I^{\otimes k-1}\otimes L_{i}\otimes I^{\otimes n-k} \] and let $X_{i}^{(n)}:=\Delta_{i}^{(n)}$ for $i=1,2$. It follows from the quantum central limit theorem that \[ \left(\begin{pmatrix}X^{(n)}\\ \Delta^{(n)} \end{pmatrix}, \,\rho_{\theta_{0}}^{\otimes n}\right)\convq qN\left(0,\begin{pmatrix}J & J\\ J & J \end{pmatrix}\right). \] Since \[ \L(\theta):=\ratio{\rho_{\theta}}{\rho_{\theta_{0}}}=\theta^{1}\sigma_{1}+\theta^{2}\sigma_{2}-\log\cosh\left\| \theta\right\| \] is of class $C^{\infty}$ with respect to $\theta$, it follows from Corollary \ref{cor:qlecam3iid} that $\left(\left\{ \rho_{\theta}^{\otimes n}\right\},\,X^{(n)}\right)$ is jointly QLAN at $\theta_{0}$, and that \[ (X^{(n)},\rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n})\convq q N((\Re J)h,J)=N(h,J^{(R)^{-1}}) \] for all $h\in\R^2$. Theorem \ref{thm:achieveHolevo} further asserts that there exists a sequence $M^{(n)}$ of estimators on the model $\left\{ \rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n}\,;\;h\in\R^{2}\right\} $ that is asymptotically unbiased and achieves the Holevo bound: \[ \lim_{n \rightarrow \infty} \Tr G V_{h}^{(n)}[M^{(n)}] =C_{h}\left(N(h,J^{(R)^{-1}}),\, G \right)=C_{(0,0)}\left(\rho_{\theta},\, G \right) \] for all $h$ that belong to a dense subset of $\R^2$. In fact, the sequence $M^{(n)}$ can be taken to be a separable one, making no use of quantum correlations \cite{MatsumotoPure}. (See also Section B.3 for a simple proof.) Note that the matrix $J^{(R)^{-1}}$ is degenerate, and the derived quantum Gaussian shift model $\left\{ N(h,J^{(R)^{-1}})\right\}_h $ is a canonical coherent model \cite{fujiwaraCoherent}. \begin{example}[2-dimensional faithful state model] \end{example} The third example treats the case when the SLD tangent space is not $\D$ invariant. Let us consider the model \[ \S =\left\{ \rho_{\theta} = \frac{1}{2}\left( I+\theta^{1}\sigma_{1}+\theta^{2}\sigma_{2}+ z_0 \sqrt{1-\|\theta \|^2}\,\sigma_{3} \right)\,;\; \theta=(\theta^i)_{1\le i\le 2}\in\Theta \right\}, \] where $0 \leq z_0 <1$, and $\Theta$ is the open unit disk. Due to the rotational symmetry around the $z$-axis, we take the reference point to be $\theta_0=(0,r)$, with $0\le r<1$. By a direct calculation, we see that the SLDs at $\theta_0$ are $(L_{1},\, L_2) =\left( \sigma_{1},\, \frac{1}{1-r^2}(\sigma_2-r I) \right)$. It is important to notice that the SLD tangent space $\span\left\{ L_{i}\right\} _{i=1}^{2}$ is not $\D$ invariant unless $r=0$. In fact, \[ \D \sigma_1=z(r) \sigma_2-r\sigma_3,\qquad \D \sigma_2=-z(r) \sigma_1, \] where $z(r):=\Tr\rho_{\theta_0}\sigma_{3}=z_0\sqrt{1-r^2}$. The minimal $\D$ invariant extension $\T$ of the SLD tangent space has a basis $(D_1,\, D_2,\, D_3):=(L_{1},\,L_{2},\,\sigma_{3}-z(r) I)$.
The matrices $\Sigma$, $J$, and $\tau$ that appeared in Definition \ref{def:QLANX} and Corollary \ref{cor:qlecam3iid} are calculated as \begin{eqnarray*} \Sigma&:=& \left[\Tr\rho_{\theta_{0}}D_{j}D_{i}\right]_{ij} =\begin{pmatrix} 1 & \displaystyle -\i \frac{z_0^2}{z(r)} & r \i -z(r) \\ \displaystyle\i \frac{z_0^2}{z(r)} &\displaystyle \frac{z_0^2}{z(r)^2} & \displaystyle-\left(\frac{r}{z(r)} + \i \right)z_0^2 \\ -r \i -z(r) & \displaystyle -\left(\frac{r}{z(r)} - \i \right)z_0^2 & 1 \end{pmatrix}, \\ \\ J&:=& \left[\Tr\rho_{\theta_{0}}L_{j}L_{i}\right]_{ij} =\begin{pmatrix} 1 & \displaystyle -\i \frac{z_0^2}{z(r)} \\ \displaystyle \i \frac{z_0^2}{z(r)} & \displaystyle \frac{z_0^2}{z(r)^2} \end{pmatrix}, \\ \\ \tau&:=& \left[\Tr\rho_{\theta_{0}}L_{j}D_{i}\right]_{ij}= \begin{pmatrix} 1 & \displaystyle -\i \frac{z_0^2}{z(r)} \\ \displaystyle \i \frac{z_0^2}{z(r)} & \displaystyle \frac{z_0^2}{z(r)^2} \\ -r \i -z(r) & \displaystyle -\left(\frac{r}{z(r)} - \i\right)z_0^2 \end{pmatrix}. \end{eqnarray*} Given a $2\times 2$ real positive definite matrix $G$, the minimal value of the weighted covariances at $\theta=\theta_0$ is given by \[ \min_{\hat{M}}\Tr GV_{\theta_{0}}[\hat{M}]=C_{\theta_{0}}^{(1)}\left(\rho_{\theta},G\right), \] where the minimum is taken over all estimators $\hat M$ that are locally unbiased at $\theta_0$, and \[ C_{\theta_{0}}^{(1)}\left(\rho_{\theta},G\right) =\left( \Tr\sqrt{\sqrt{G} J^{(S)^{-1}} \sqrt{G} } \right)^{2} \] is the Nagaoka bound \cite{Nagaoka} (see also \cite{YamagataTomo}). It can be shown that the Nagaoka bound is greater than the Holevo bound: \[ C_{\theta_0}^{(1)}\left(\rho_{\theta},G\right)>C_{\theta_{0}}\left(\rho_{\theta},G\right). \] Let us check this fact for the special case when $G=J^{(S)}$. A direct computation shows that \[ C_{\theta_0}^{(1)}\left(\rho_{\theta},J^{(S)} \right)=4, \] and \begin{eqnarray*} C_{\theta_0}\left(\rho_{\theta},J^{(S)} \right) & = & \begin{cases} \displaystyle 2(1+z_0)-r^2 (1-z_0^2), & \displaystyle \text{if } \,0 \leq r \leq \sqrt{\frac{z_0}{1-z_0^2}}\\ \\ \displaystyle 2 + \frac{z_0^2}{r^2 (1-z_0^2)}, & \displaystyle \text{if }\, \sqrt{\frac{z_0}{1-z_0^2}}<r. \end{cases} \end{eqnarray*} The right panel of Figure \ref{fig:bounds} shows the behavior of $C_{\theta_0}\left(\rho_{\theta},J^{(S)} \right)$ (solid) and $C_{\theta_0}^{(1)}\left(\rho_{\theta},J^{(S)} \right)$ (dashed), with $z_0=\frac{1}{4}$, as functions of $r$. We see that the Holevo bound $C_{\theta_0}\left(\rho_{\theta},J^{(S)} \right)$ is much smaller than the Nagaoka bound $C_{\theta_0}^{(1)}\left(\rho_{\theta},J^{(S)} \right)$. As in Example \ref{ex:3d}, we demonstrate that the Holevo bound is asymptotically achievable. Let \[ \Delta_{i}^{(n)}:=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}I^{\otimes k-1}\otimes L_{i}\otimes I^{\otimes n-k}, \qquad (i=1,2), \] and let \[ X_{j}^{(n)}:=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}I^{\otimes k-1}\otimes D_{j}\otimes I^{\otimes n-k}, \qquad (j=1,2,3). \] It then follows from the quantum central limit theorem that \[ \left(\begin{pmatrix}X^{(n)}\\ \Delta^{(n)} \end{pmatrix},\,\rho_{\theta_{0}}^{\otimes n}\right)\convq qN\left(0,\begin{pmatrix}\Sigma & \tau\\ \tau^{*} & J \end{pmatrix}\right). \] Therefore, Corollary \ref{cor:qlecam3iid} shows that $\left(\left\{\rho_\theta^{\otimes n}\right\},X^{(n)}\right)$ is jointly QLAN at $\theta_0$, and that \[ (X^{(n)},\rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n})\convq qN((\Re\tau)h,\Sigma) \] for all $h\in\R^{2}$.
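Parts of this example are easily checked by machine. The sketch below is ours (Python with NumPy and SciPy; it is an illustration, not part of the proofs): it verifies by finite differences that the stated operators $L_{1},L_{2}$ satisfy the SLD equation $\partial_{i}\rho_{\theta}=\frac{1}{2}(\rho_{\theta}L_{i}+L_{i}\rho_{\theta})$ at $\theta_{0}=(0,r)$, recomputes $J=[\Tr\rho_{\theta_{0}}L_{j}L_{i}]$, and confirms the value $C_{\theta_{0}}^{(1)}(\rho_{\theta},J^{(S)})=4$. It does not attempt the variational computation behind the piecewise formula for the Holevo bound.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def rho(t1, t2, z0=0.8):
    return 0.5*(I2 + t1*s1 + t2*s2 + z0*np.sqrt(1 - t1**2 - t2**2)*s3)

r, eps = 0.6, 1e-6
rho0 = rho(0.0, r)
L = [s1, (s2 - r*I2)/(1 - r**2)]        # the stated SLDs at theta_0

# the SLDs satisfy d(rho)/d(theta^i) = (rho L_i + L_i rho)/2
d1 = (rho(eps, r) - rho(-eps, r))/(2*eps)
d2 = (rho(0.0, r + eps) - rho(0.0, r - eps))/(2*eps)
for dr, Li in zip((d1, d2), L):
    assert np.allclose(dr, 0.5*(rho0 @ Li + Li @ rho0), atol=1e-5)

# J_ij = Tr rho L_j L_i, and the Nagaoka bound with G = J^(S) equals 4
J = np.array([[np.trace(rho0 @ L[j] @ L[i]) for j in (0, 1)]
              for i in (0, 1)])
JS = J.real
sqG = sqrtm(JS)                         # weight matrix G = J^(S)
print(np.trace(sqrtm(sqG @ np.linalg.inv(JS) @ sqG)).real**2)  # 4.0
\end{verbatim}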
It should be noted that the off-diagonal block $\tau$ of the ``quantum covariance'' matrix is not a square matrix. This means that the derived quantum Gaussian shift model $\left\{ N((\Re\tau)h,\Sigma)\,;\;h\in\R^{2}\right\} $ forms a submanifold of the total quantum Gaussian shift model derived in Example \ref{ex:3d}, corresponding to a 2-dimensional linear subspace in the shift parameter space. Nevertheless, Theorem \ref{thm:achieveHolevo} asserts that there exists a sequence $M^{(n)}$ of estimators on the model $\left\{ \rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n}\,;\;h\in\R^{2}\right\} $ that is asymptotically unbiased and achieves the Holevo bound: \[ \lim_{n \rightarrow \infty} \Tr G V_{h}^{(n)}[M^{(n)}] =C_{h}\left(N((\Re \tau)h,\Sigma),G\right) =C_{\theta_0}\left(\rho_{\theta},\,G\right) \] for all $h$ that belong to a dense subset of $\R^2$. \subsection{Translating estimation of $h$ to estimation of $\theta$} As we have seen in the previous subsections, our theory enables us to construct asymptotically optimal estimators of $h$ in the local models indexed by the parameter $\theta_0 + h/\sqrt{n}$. In practice, of course, $\theta_0$ is unknown and hence estimation of $h$, with $\theta_0$ known, is irrelevant. The actual sequence of measurements which we have constructed depends in all interesting cases on $\theta_0$. However, the results immediately inspire two-step (or adaptive) procedures, in which we first measure a small proportion of the quantum systems, in number $n_1$ say, using some standard measurement scheme, for instance separate-particle quantum tomography. From these measurement outcomes we construct an initial estimate of $\theta$, let us call it $\widetilde \theta$. We can now use our theory to compute the asymptotically optimal measurement scheme which corresponds to the situation $\theta_0=\widetilde \theta$. We proceed to implement this measurement on the remaining quantum systems collectively, estimating $h$ in the model $\theta=\widetilde \theta+ h/\sqrt{n_2}$ where $n_2$ is the number of systems still available for the second stage. What can we say about such a procedure? If $n_1/n\to \alpha >0$ as $n\to \infty$ then we can expect that the initial estimate $\widetilde\theta$ is root-$n$ consistent. In smooth models, one would expect that in this case the final estimate $\widehat\theta = \widetilde\theta + \widehat h/\sqrt{n_2}$ would be asymptotically optimal \emph{up to a factor $1-\alpha$}: its limiting variance will be a factor $(1-\alpha)^{-1}$ too large. If, however, $n_1\to \infty$ but $n_1/n\to \alpha =0$ then one would expect this procedure to break down, unless the rate of growth of $n_1$ is very carefully chosen (and fast enough). On the other hand, instead of a direct two-step procedure, with the final estimate computed as $\widetilde\theta + \widehat h/\sqrt{n_2}$, one could be more careful in how the data obtained from the second-stage measurement is used. Given the second-step measurement, which results in an observed value $\widehat h$, one could write down the likelihood for $h$ based on the given measurement and the initially specified model, and compute, instead of the just-mentioned one-step iterate, the actual maximum likelihood estimator of $\theta$ based on the second-stage data. Such procedures have earlier been studied by Gill and Massar \cite{GillMassar} and others, and shown in special cases to perform very well.
However, in general, the computational problem of even calculating the likelihood given data, measurement, and model is challenging, due to the huge size of the Hilbert space of $n$ copies of a finite dimensional quantum system. \section{Concluding remarks} We have developed a new theory of local asymptotic normality in the quantum domain based on a quantum extension of the log-likelihood ratio. This formulation is applicable to any model satisfying a mild smoothness condition, and is free from artificial setups such as the use of a special coordinate system and/or non-degeneracy of eigenvalues of the reference state. We have also proved asymptotic achievability of the Holevo bound for the local shift parameter on a dense subset of the parameter space. There are of course many open questions left. Among others, it is not clear whether every sequence of statistics on a QLAN model can be realized on the limiting quantum Gaussian shift model. In classical statistics, such a problem has been solved affirmatively as the representation theorem, which asserts that, given a weakly convergent sequence $T^{(n)}$ of statistics on $\left\{ p_{\theta_{0}+h/\sqrt{n}}^{(n)}\,;\;h\in\R^{d}\right\}$, there exists a limit statistic $T$ on $\left\{N(h,J^{-1})\,;\;h\in\R^{d}\right\}$ such that $T^{(n)}\convd h T$. The representation theorem is useful in proving, for example, the non-existence of an asymptotically superefficient estimator (the converse part, as stated in the Introduction). Moreover, the so-called convolution theorem and local asymptotic minimax theorem, which are the standard tools in discussing asymptotic lower bounds for estimation in LAN models, immediately follow \cite{Vaart}. Extending the representation theorem, convolution theorem, and local asymptotic minimax theorem to the quantum domain is an intriguing open problem. However, it surely is possible to make some progress in this direction, as shown for instance by the results of Gill and Gu\c{t}\u{a} \cite{GillGuta}. In that paper, the van Trees inequality was used to derive some results in a ``poor man's'' version of QLAN theory; see also \cite{vanTrees}. It also remains to be seen whether our asymptotically optimal statistical procedures for the local model with local parameter $h$ can be translated into useful statistical procedures for the real-world case in which $\theta_0$ is unknown.
{ "timestamp": "2013-08-30T02:06:24", "yymm": "1210", "arxiv_id": "1210.3749", "language": "en", "url": "https://arxiv.org/abs/1210.3749" }
\section{Introduction} Ricci solitons are precisely those Riemannian metrics that are `nice' enough to be `upgraded' by the Ricci flow, in the sense that they evolve only by scaling and pull-back by diffeomorphisms. A good understanding of them is therefore crucial for studying the singularity behavior of the Ricci flow on any class of manifolds. Trivial examples are provided by Einstein metrics, as well as by direct products of an Einstein manifold with any Euclidean space. Non-K\"ahler examples of Ricci solitons are still very hard to find (see \cite{DncHllWng}). Homogeneity seems to be a very strong condition to impose. Nontrivial homogeneous Ricci solitons must be expanding (i.e. the scaling factor increases with time) and they can be neither compact nor gradient. One could say that they should not exist, but they do. Actually, the nilpotent part of any Einstein solvable Lie group gives an example of a homogeneous Ricci soliton, known as a {\it nilsoliton} in the literature (see \cite{soliton}). One may also extend a nilsoliton to a different solvable Lie group and obtain Ricci solitons which are not Einstein, the so-called {\it solvsolitons} (see \cite{solvsolitons}). In all these examples the following `algebro-geometric' condition holds: \begin{equation}\label{ricder} \Ricci(g)=cI+D, \qquad\mbox{for some}\quad c\in\RR,\quad D\in\Der(\sg), \end{equation} once the left-invariant metric $g$ is identified with an inner product on the Lie algebra $\sg$ of the solvable Lie group $S$. So far, simply connected solvsolitons are the only known examples of nontrivial homogeneous Ricci solitons. A homogeneous space $(G/K,g)$ is said to be a {\it semi-algebraic soliton} if there exists a one-parameter family of equivariant diffeomorphisms $\vp_t\in\Aut(G/K)$ (i.e. automorphisms of $G$ taking $K$ onto $K$) such that $g(t)=c(t)\vp_t^*g$ is a solution to the Ricci flow starting at $g(0)=g$ for some scaling function $c(t)>0$. It is called an {\it algebraic soliton} if in addition, for some reductive decomposition $\ggo=\kg\oplus\pg$, the derivatives $d\vp_t|_o:\pg\longrightarrow\pg$ are all symmetric. The notion of algebraic soliton is precisely the generalization of condition (\ref{ricder}) to any Lie group or homogeneous space. It has recently been proved in \cite{Jbl} that any homogeneous Ricci soliton $(M,g)$ is semi-algebraic with respect to its full isometry group $G=\Iso(M,g)$. We give in Section \ref{hrs} an up-to-date overview of homogeneous Ricci solitons. Next, in Section \ref{ASBF}, we study the evolution of semi-algebraic solitons under the {\it bracket flow}, an ODE for a family $\mu(t)\in\hca_{q,n}\subset\lamg$. Here $\hca_{q,n}$ denotes the subset of the variety of Lie brackets on the fixed vector space $\ggo$ parameterizing the space of all homogeneous spaces of dimension $n$ with $q$-dimensional isotropy (see \cite{spacehm}). This dynamical system has been proved in \cite{homRF} to be equivalent in a precise sense to the Ricci flow (preliminaries on this machinery are given in Section \ref{hm}). We first show that algebraic solitons are precisely the fixed points, and hence the possible limits, of any normalized bracket flow. This in particular yields their asymptotic behavior and implies that algebraic solitons are all {\it Ricci flow diagonal}, in the sense that the Ricci flow solution $g(t)$ simultaneously diagonalizes with respect to a fixed orthonormal basis of some tangent space.
Furthermore, given a starting point $\mu_0\in\hca_{q,n}$, we prove that one can obtain at most one nonflat algebraic soliton $\lambda$ as a limit by running all possible normalized bracket flow solutions $\mu(t)$. By using \cite{spacehm}, we can translate this convergence $\mu(t)\to\lambda$ of Lie brackets into more geometric notions of convergence, including convergence in the pointed or Cheeger-Gromov topology. The limit Lie bracket $\lambda$ might be non-isomorphic to $\mu(t)$ and therefore provides an explicit limit $(G_\lambda/K_\lambda,g_\lambda)$ which is often non-diffeomorphic and even non-homeomorphic to the starting homogeneous manifold $(G_{\mu_0}/K_{\mu_0},g_{\mu_0})$. Regarding semi-algebraic solitons, we obtain that under any normalized bracket flow, they simply evolve by $$ \mu(t)=e^{tA}\cdot\mu_0=e^{tA}\mu_0(e^{-tA}\cdot, e^{-tA}\cdot), $$ for some skew-symmetric map $A:\ggo\longrightarrow\ggo$. Furthermore, they are algebraic solitons if and only if $A\in\Der(\mu_0)$ (i.e. fixed points). In particular, any homogeneous Ricci soliton would necessarily be isometric to an algebraic soliton if the bracket flow turned out not to be chaotic, in the sense that the $\omega$-limit set of any solution is a single point; whether this is the case is an open question. Whereas being a Ricci soliton is invariant under isometry, the concept of semi-algebraic soliton is not, as it may depend on the presentation of the homogeneous manifold $(M,g)$ as a homogeneous space $(G/K,g)$. We prove in Section \ref{algdiag} that the property of being Ricci flow diagonal characterizes algebraic solitons. Namely: \begin{quote} A homogeneous Ricci soliton is Ricci flow diagonal if and only if it is isometric to an algebraic soliton. \end{quote} Note that this is a geometric characterization, as the property of being Ricci flow diagonal is also invariant under isometry. The Ricci flow evolution of a homogeneous Ricci soliton which is not isometric to any algebraic soliton, if any such soliton exists, is therefore quite different from that of all known examples (i.e. solvsolitons). \vs \noindent {\it Acknowledgements.} We are very grateful to M. Jablonski for fruitful discussions on the topic of this paper. \section{Preliminaries}\label{hm} Our aim in this section is to briefly describe a framework developed in \cite{spacehm} which allows us to work on the `space of homogeneous manifolds', by parameterizing the set of all simply connected homogeneous spaces of dimension $n$ and isotropy dimension $q$ by a subset $\hca_{q,n}$ of the variety of $(q+n)$-dimensional Lie algebras. According to the results in \cite{homRF}, the Ricci flow is equivalent to an ODE system on $\hca_{q,n}$ called the bracket flow. Given a connected homogeneous Riemannian manifold $(M,g)$, each transitive closed Lie subgroup $G\subset\Iso(M,g)$ gives rise to a presentation of $(M,g)$ as a homogeneous space $(G/K,g)$, where $K$ is the isotropy subgroup of $G$ at some point $o\in M$ and $g$ becomes a $G$-invariant metric. As $K$ is compact, there always exists a {\it reductive} (i.e. $\Ad(K)$-invariant) decomposition $\ggo=\kg\oplus\pg$, where $\ggo$ and $\kg$ are respectively the Lie algebras of $G$ and $K$. Thus $\pg$ can be naturally identified with the tangent space $\pg\equiv T_oM=T_oG/K$, by taking the value at the origin $o=eK$ of the Killing vector fields corresponding to elements of $\pg$ (i.e. $X_o=\ddt|_0\exp{tX}(o)$). Let $g_{\ip}$ denote the $G$-invariant metric on $G/K$ determined by $\ip:=g(o)$, the $\Ad(K)$-invariant inner product on $\pg$ defined by $g$.
In this situation, when a reductive decomposition has already been chosen, the homogeneous space will be denoted by $(G/K,g_{\ip})$. In order to get a presentation $(M,g)=(G/K,g_{\ip})$ of a connected homogeneous manifold as a homogeneous space, there is no need for $G\subset\Iso(M,g)$ to hold (i.e. for the action to be {\it effective}). It is actually enough to have a transitive action of $G$ on $M$ by isometries which is {\it almost-effective} (i.e. the normal subgroup $\{ g\in G:ghK=hK, \;\forall h\in G\}$ of $K$ is discrete), along with a reductive decomposition $\ggo=\kg\oplus\pg$ such that the inner product $\ip$ on $\pg$ defined by $\ip:=g(o)$ is $\Ad(K)$-invariant. Any homogeneous space considered in this paper will be assumed to be almost-effective and connected. In the study of homogeneous Ricci solitons carried out in the present paper, the following special reductive decomposition will often be convenient. \begin{lemma}\label{Bkp0} Let $(G/K,g)$ be a homogeneous space. Then there exists a reductive decomposition $\ggo=\kg\oplus\pg$ such that $B(\kg,\pg)=0$, where $B$ is the Killing form of $\ggo$. \end{lemma} \begin{proof} It follows by taking $\pg$ as the orthogonal complement of $\kg$ in $\ggo$ with respect to $B$. Recall that $B|_{\kg\times\kg}<0$ since it is well known that $\overline{\Ad(K)}$ is compact in $\Gl(\ggo)$ and the isotropy representation $\ad:\kg\longrightarrow\End(\pg)$ is faithful by almost-effectiveness. This implies that $\kg\cap\pg=0$, and since $\dim{\pg}\geq\dim{\ggo}-\dim{\kg}$ we obtain $\ggo=\kg+\pg$, concluding the proof. \end{proof} \subsection{Varying Lie brackets viewpoint}\label{varhs} (See \cite{spacehm} for further information). Let us fix for the rest of the section a $(q+n)$-dimensional real vector space $\ggo$ together with a direct sum decomposition \begin{equation}\label{fixdec} \ggo=\kg\oplus\pg, \qquad \dim{\kg}=q, \qquad \dim{\pg}=n, \end{equation} and an inner product $\ip$ on $\pg$. We consider the space of all skew-symmetric algebras (or brackets) of dimension $q+n$, which is parameterized by the vector space $$ V_{q+n}:=\{\mu:\ggo\times\ggo\longrightarrow\ggo : \mu\; \mbox{bilinear and skew-symmetric}\}, $$ and we set $$ V_{n}:=\{\mu:\pg\times\pg\longrightarrow\pg : \mu\; \mbox{bilinear and skew-symmetric}\}. $$ \begin{definition}\label{hqn} The subset $\hca_{q,n}\subset V_{q+n}$ consists of the brackets $\mu\in V_{q+n}$ such that: \begin{itemize} \item [(h1)] $\mu$ satisfies the Jacobi condition, $\mu(\kg,\kg)\subset\kg$ and $\mu(\kg,\pg)\subset\pg$. \item[(h2)] If $G_\mu$ denotes the simply connected Lie group with Lie algebra $(\ggo,\mu)$ and $K_\mu$ is the connected Lie subgroup of $G_\mu$ with Lie algebra $\kg$, then $K_\mu$ is closed in $G_\mu$. \item[(h3)] $\ip$ is $\ad_{\mu}{\kg}$-invariant (i.e. $(\ad_{\mu}{Z}|_{\pg})^t=-\ad_{\mu}{Z}|_{\pg}$ for all $Z\in\kg$). \item[(h4)] $\{ Z\in\kg:\mu(Z,\pg)=0\}=0$. \end{itemize} \end{definition} Each $\mu\in\hca_{q,n}$ defines a unique simply connected homogeneous space, \begin{equation}\label{hsmu} \mu\in\hca_{q,n}\rightsquigarrow\left(G_{\mu}/K_{\mu},g_\mu\right), \end{equation} with reductive decomposition $\ggo=\kg\oplus\pg$ and $g_\mu(o_\mu)=\ip$, where $o_\mu:=e_\mu K_\mu$ is the origin of $G_\mu/K_\mu$ and $e_\mu\in G_\mu$ is the identity element. It is almost-effective by (h4), and it follows from (h3) that $\ip$ is $\Ad(K_{\mu})$-invariant as $K_{\mu}$ is connected.
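Conditions (h1), (h3) and (h4) above are purely algebraic and can be tested mechanically once a bracket $\mu$ is stored as an array of structure constants; condition (h2) is a global condition on the pair $(G_\mu,K_\mu)$ and is not captured by such a finite check. The following minimal sketch is ours (Python with NumPy; it is only an illustration of the framework of \cite{spacehm}, not part of it). It runs the tests for the bracket of $\mathfrak{su}(2)$ with $\kg=\RR Z$ and $\pg={\rm span}\{X_1,X_2\}$, a homogeneous presentation of the round $2$-sphere, so that $q=1$ and $n=2$.
\begin{verbatim}
# Check conditions (h1), (h3), (h4) for a bracket given by structure
# constants mu[i, j, k] = coefficient of e_k in [e_i, e_j], with the
# first q basis vectors spanning k and the last n spanning p.
import itertools
import numpy as np

q, n = 1, 2
N = q + n
mu = np.zeros((N, N, N))
mu[0, 1, 2], mu[1, 0, 2] = 1, -1     # [Z, X1] =  X2
mu[0, 2, 1], mu[2, 0, 1] = -1, 1     # [Z, X2] = -X1
mu[1, 2, 0], mu[2, 1, 0] = 1, -1     # [X1, X2] = Z

def br(x, y):                        # bilinear extension of mu
    return np.einsum('i,j,ijk->k', x, y, mu)

e = np.eye(N)
# (h1): Jacobi identity, mu(k,k) in k and mu(k,p) in p
for i, j, k in itertools.product(range(N), repeat=3):
    jac = br(e[i], br(e[j], e[k])) + br(e[j], br(e[k], e[i])) \
        + br(e[k], br(e[i], e[j]))
    assert np.allclose(jac, 0)
assert np.allclose(mu[:q, :q, q:], 0)      # mu(k,k) contained in k
assert np.allclose(mu[:q, q:, :q], 0)      # mu(k,p) contained in p
# (h3): ad_mu(Z)|_p is skew-symmetric for every Z in k
for z in range(q):
    adz = mu[z, q:, q:]
    assert np.allclose(adz, -adz.T)
# (h4): no nonzero Z in k with mu(Z, p) = 0
assert np.linalg.matrix_rank(mu[:q, q:, :].reshape(q, -1)) == q
print("mu passes the algebraic tests for H_{1,2}")
\end{verbatim}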
We note that any $n$-dimensional simply connected homogeneous space $(G/K,g)$ which is almost-effective can be identified with some $\mu\in\hca_{q,n}$, where $q=\dim{K}$. Indeed, $G$ can be assumed to be simply connected without losing almost-effectiveness, and we can identify any reductive decomposition with $\ggo=\kg\oplus\pg$. In this way, $\mu$ will be precisely the Lie bracket of $\ggo$. We also fix from now on a basis $\{ Z_1,\dots,Z_q\}$ of $\kg$ and an orthonormal basis $\{ X_1,\dots,X_n\}$ of $\pg$ (see (\ref{fixdec})) and use them to identify the groups $\Gl(\ggo)$, $\Gl(\kg)$, $\Gl(\pg)$ and $\Or(\pg,\ip)$ with $\Gl_{q+n}(\RR)$, $\Gl_q(\RR)$, $\Gl_n(\RR)$ and $\Or(n)$, respectively. There is a natural linear action of $\Gl_{q+n}(\RR)$ on $V_{q+n}$ given by \begin{equation}\label{action} h\cdot\mu(X,Y)=h\mu(h^{-1}X,h^{-1}Y), \qquad X,Y\in\ggo, \quad h\in\Gl_{q+n}(\RR),\quad \mu\in V_{q+n}. \end{equation} If $\mu\in\hca_{q,n}$, then $h\cdot\mu\in\hca_{q,n}$ for any $h\in\Gl_{q+n}(\RR)$ of the form \begin{equation}\label{formh} h:=\left[\begin{smallmatrix} h_q&0\\ 0&h_n \end{smallmatrix}\right]\in\Gl_{q+n}(\RR), \quad h_q\in\Gl_q(\RR), \quad h_n\in\Gl_n(\RR), \end{equation} such that \begin{equation}\label{adkh} [h_n^th_n,\ad_{\mu}{\kg}|_{\pg}]=0. \end{equation} We have that $\left(G_{h\cdot\mu}/K_{h\cdot\mu},g_{h\cdot\mu}\right)$ is equivariantly isometric to $\left(G_{\mu}/K_{\mu},g_{\la h_n\cdot,h_n\cdot\ra}\right)$, and in particular, the subset $$ \left\{ h\cdot\mu:h_q=I,\, h_n\,\mbox{satisfies (\ref{adkh})}\right\}\subset\hca_{q,n}, $$ parameterizes the set of all $G_\mu$-invariant metrics on $G_\mu/K_\mu$. Also, by setting $h_q=I$, $h_n=\frac{1}{c}I$, $c\ne 0$, we get the rescaled $G_\mu$-invariant metric $\frac{1}{c^2}g_{\ip}$ on $G_\mu/K_\mu$, which is isometric to the element of $\hca_{q,n}$ denoted by $c\cdot\mu$ and defined by \begin{equation}\label{scmu} c\cdot\mu|_{\kg\times\kg}=\mu, \qquad c\cdot\mu|_{\kg\times\pg}=\mu, \qquad c\cdot\mu|_{\pg\times\pg}=c^2\mu_{\kg}+c\mu_{\pg}, \end{equation} where the subscripts denote the $\kg$ and $\pg$-components of $\mu|_{\pg\times\pg}$ given by \begin{equation}\label{decmu} \mu(X,Y)=\mu_{\kg}(X,Y)+\mu_{\pg}(X,Y), \qquad \mu_{\kg}(X,Y)\in\kg, \quad \mu_{\pg}(X,Y)\in\pg, \qquad\forall X,Y\in\pg. \end{equation} The $\RR^*$-action on $\hca_{q,n}$, $\mu\mapsto c\cdot\mu$, can therefore be considered as a geometric rescaling of the homogeneous space $(G_\mu/K_\mu,g_\mu)$. \subsection{Homogeneous Ricci flow}\label{hrf} (See \cite[Section 3]{homRF} for a more detailed treatment). Let $(M,g_0)$ be a simply connected homogeneous manifold. Thus $(M,g_0)$ has a presentation as a homogeneous space of the form $\left(G_{\mu_0}/K_{\mu_0},g_{\mu_0}\right)$ for some $\mu_0\in\hca_{q,n}$, with reductive decomposition $\ggo=\kg\oplus\pg$ (see Section \ref{varhs}). Let $g(t)$ be the unique homogeneous solution to a {\it normalized Ricci flow} \begin{equation}\label{RFrn} \dpar g(t)=-2\ricci(g(t))-2r(t)g(t),\qquad g(0)=g_0, \end{equation} for some {\it normalization function} $r(t)$ which may depend on $g(t)$. For any continuous function $r$, $g(t)$ can be obtained by just rescaling and reparameterizing the time variable of the usual {\it unnormalized} (i.e. $r\equiv 0$) Ricci flow solution. It follows that $g(t)$ is $G_{\mu_0}$-invariant for all $t$, and thus $(M,g(t))$ is isometric to the homogeneous space $\left(G_{\mu_0}/K_{\mu_0},g_{\ip_t}\right)$, where $\ip_t:=g(t)(o_{\mu_0})$ is a family of inner products on $\pg$.
The Ricci flow equation (\ref{RFrn}) is therefore equivalent to the ODE system \begin{equation}\label{RFiprn} \ddt\ip_t=-2\ricci(\ip_t)-2r(t)\ip_t, \qquad \ip_0=\ip, \end{equation} where $\ricci(\ip_t):=\ricci(g(t))(o_{\mu_0})$. \subsection{The bracket flow}\label{lbflow} (See \cite[Sections 3.1-3.3]{homRF} for a more gentle presentation). The ODE system for a family $\mu(t)\in V_{q+n}=\lamg$ of bilinear and skew-symmetric maps defined by \begin{equation}\label{BFrn} \ddt\mu=-\pi\left(\left[\begin{smallmatrix} 0&0\\ 0&\Ricci_{\mu}+rI \end{smallmatrix}\right]\right)\mu, \qquad \mu(0)=\mu_0, \end{equation} is called a {\it normalized bracket flow}. Here $\Ricci_{\mu}$ is defined as in \cite[Section 2.3]{homRF} and coincides with the Ricci operator when $\mu\in\hca_{q,n}$, and $\pi:\glg_{q+n}(\RR)\longrightarrow\End(V_{q+n})$ is the natural representation given by \begin{equation}\label{actiong} \pi(A)\mu=A\mu(\cdot,\cdot)-\mu(A\cdot,\cdot)-\mu(\cdot,A\cdot), \qquad A\in\glg_{q+n}(\RR),\quad\mu\in V_{q+n}. \end{equation} We note that $\pi$ is the derivative of the $\Gl_{q+n}(\RR)$-representation defined in (\ref{action}). A homogeneous space $(G_{\mu(t)}/K_{\mu(t)},g_{\mu(t)})$ can indeed be associated to each $\mu(t)$ in a bracket flow solution provided that $\mu_0\in\hca_{q,n}$, since it follows that $\mu(t)\in\hca_{q,n}$ for all $t$. For a given simply connected homogeneous manifold $(M,g_0)=\left(G_{\mu_0}/K_{\mu_0},g_{\mu_0}\right)$, $\mu_0\in\hca_{q,n}$, we can therefore consider the following one-parameter families: \begin{equation}\label{3rmrn} (M,g(t)), \qquad \left(G_{\mu_0}/K_{\mu_0},g_{\ip_t}\right), \qquad \left(G_{\mu(t)}/K_{\mu(t)},g_{\mu(t)}\right), \end{equation} where $g(t)$, $\ip_t$ and $\mu(t)$ are the solutions to the normalized Ricci flows (\ref{RFrn}), (\ref{RFiprn}) and the normalized bracket flow (\ref{BFrn}), respectively. Recall that $\ggo=\kg\oplus\pg$ is a reductive decomposition for any of the homogeneous spaces involved. According to the following result, the Ricci flow and the bracket flow are intimately related. \begin{theorem}\label{eqflrn}\cite[Theorem 3.10]{homRF} There exist diffeomorphisms $\vp(t):M\longrightarrow G_{\mu(t)}/K_{\mu(t)}$ such that $$ g(t)=\vp(t)^*g_{\mu(t)}, \qquad t\in (T_-,T_+), $$ where $-\infty\leq T_-<0<T_+\leq\infty$ and $(T_-,T_+)$ is the maximal interval of time existence for both flows. Moreover, if we identify $M=G_{\mu_0}/K_{\mu_0}$, then $\vp(t):G_{\mu_0}/K_{\mu_0}\longrightarrow G_{\mu(t)}/K_{\mu(t)}$ can be chosen as the equivariant diffeomorphism determined by the Lie group isomorphism between $G_{\mu_0}$ and $G_{\mu(t)}$ with derivative $\tilde{h}:=\left[\begin{smallmatrix} I&0\\ 0&h \end{smallmatrix}\right]:\ggo\longrightarrow\ggo$, where $h(t)=d\vp(t)|_{o_{\mu_0}}:\pg\longrightarrow\pg$ is the solution to any of the following ODE systems: \begin{itemize} \item[(i)] $\ddt h=-h(\Ricci(\ip_t)+r(t)I)$, $\quad h(0)=I$. \item[(ii)] $\ddt h=-(\Ricci_{\mu(t)}+r(t)I)h$, $\quad h(0)=I$. \end{itemize} The following conditions also hold: \begin{itemize} \item[(iii)] $\ip_t=\la h\cdot,h\cdot\ra$. \item[(iv)] $\mu(t)=\tilde{h}\mu_0(\tilde{h}^{-1}\cdot,\tilde{h}^{-1}\cdot)$. 
\end{itemize} \end{theorem} It follows that $\mu(t)|_{\kg\times\ggo}=\mu_0|_{\kg\times\ggo}$ for all $t\in (T_-,T_+)$, that is, only $\mu(t)|_{\pg\times\pg}$ is actually evolving, and so the bracket flow equation (\ref{BFrn}) can be rewritten as the system \begin{equation}\label{BFrnsis} \left\{\begin{array}{ll} \ddt\mu_{\kg}=\mu_{\kg}(\Ricci_{\mu}\cdot,\cdot)+\mu_{\kg}(\cdot,\Ricci_{\mu}\cdot) +2r\mu_{\kg}(\cdot,\cdot), & \\ & \mu_{\kg}(0)+\mu_{\pg}(0)=\mu_0|_{\pg\times\pg},\\ \ddt\mu_{\pg}=-\pi_n(\Ricci_{\mu}+rI)\mu_{\pg} =-\pi_n(\Ricci_{\mu})\mu_{\pg} +r\mu_{\pg}, & \end{array}\right. \end{equation} where $\mu_{\kg}$ and $\mu_{\pg}$ are the components of $\mu|_{\pg\times\pg}$ as in (\ref{decmu}) and $\pi_n:\glg_n(\RR)\longrightarrow\End(V_n)$ is the representation defined in (\ref{actiong}) for $q=0$. Let $\nu(t)$ denote from now on the unnormalized (i.e. $r\equiv 0$) bracket flow solution with $\nu(0)=\mu_0$. Then any normalized bracket flow solution is given by \begin{equation}\label{ctau} \mu(t)=c(t)\cdot\nu(\tau(t)), \qquad t\in (T_-,T_+), \end{equation} for some rescaling $c(t)>0$ (defined by $c'=rc$, $c(0)=1$) and time reparameterization $\tau(t)$ (defined by $\tau'=c^2$, $\tau(0)=0$). If $(T_-^0,T_+^0)$ denotes the maximal interval of time existence for $\nu(t)$, then $\tau:(T_-,T_+)\longrightarrow(T_-^0,T_+^0)$ is a strictly increasing function, though not necessarily surjective. \section{Homogeneous Ricci solitons}\label{hrs} A Riemannian manifold $(M,g)$ is called {\it Einstein} if $\ricci(g)=cg$ for some $c\in\RR$ (see e.g. \cite{Bss}). The Einstein equation for an $n$-dimensional homogeneous space is just a system of $\tfrac{n(n+1)}{2}$ algebraic equations, but unfortunately quite an involved one, and the question of which homogeneous spaces $G/K$ admit a $G$-invariant Einstein metric is still open (see the surveys \cite{Wng, cruzchica} and the references therein). In the noncompact homogeneous case, the only non-flat examples known until now are all simply connected {\it solvmanifolds}, which are defined in this paper to be solvable Lie groups endowed with a left-invariant metric. According to the long-standing {\it Alekseevskii conjecture} (see \cite[7.57]{Bss} and \cite{alek}), asserting that any Einstein connected homogeneous manifold of negative scalar curvature is diffeomorphic to a Euclidean space, Einstein solvmanifolds might exhaust all the possibilities for noncompact homogeneous Einstein manifolds. A nice and important generalization of Einstein metrics is the following notion. A complete Riemannian manifold $(M,g)$ is called a {\it Ricci soliton} if \begin{equation}\label{rseq} \ricci(g)=cg+\lca_Xg, \qquad\mbox{for some}\; c\in\RR, \quad X\in\chi(M), \end{equation} where $\lca_Xg$ is the usual Lie derivative of $g$ in the direction of the (complete) vector field $X$ (as in the Einstein case, $c$ is often called the {\it cosmological constant} of the Ricci soliton $g$). Ricci solitons correspond to solutions of the Ricci flow that evolve self-similarly, that is, only by scaling and pullback by diffeomorphisms, and often arise as limits of dilations of singularities of the Ricci flow. More precisely, $g$ is a Ricci soliton if and only if the one-parameter family of metrics \begin{equation}\label{rssol} g(t)=(-2ct+1)\vp_t^*g \end{equation} is a solution to the Ricci flow for some one-parameter group $\vp_t$ of diffeomorphisms of $M$ (see e.g. \cite{libro,Cao} and the references therein for further information on Ricci solitons).
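For the reader's convenience, let us record the computation behind one direction of this equivalence. Suppose that $\ricci(g)=cg+\lca_Xg$, and let $\vp_t$ denote the flow of the time-dependent vector field $X_t:=\tfrac{-2}{-2ct+1}\,X$, which is defined as long as $-2ct+1>0$. By using that $\ddt\vp_t^*g=\vp_t^*(\lca_{X_t}g)$, together with the diffeomorphism and scale invariance of the Ricci tensor, we obtain $$ \ddt\,(-2ct+1)\vp_t^*g=\vp_t^*\big(-2cg+(-2ct+1)\lca_{X_t}g\big) =-2\vp_t^*\big(cg+\lca_Xg\big)=-2\vp_t^*\ricci(g)=-2\ricci\big((-2ct+1)\vp_t^*g\big), $$ so $g(t)$ as in (\ref{rssol}) is indeed a Ricci flow solution starting at $g$; the converse follows by running the computation backwards.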
From results due to Ivey, Naber, Perelman and Petersen-Wylie (see \cite[Section 2]{solvsolitons}), it follows that any {\it nontrivial} (i.e. non-Einstein and not the product of an Einstein homogeneous manifold with a Euclidean space) homogeneous Ricci soliton must be noncompact, expanding (i.e. $c<0$) and non-gradient (i.e. $X$ is not the gradient field of any smooth function on $M$). Any known example so far of a nontrivial homogeneous Ricci soliton is isometric to a simply connected {\it solvsoliton}, that is, a solvmanifold $(S,g)$ satisfying \begin{equation}\label{rsD} \Ricci(g)=cI+D, \qquad\mbox{for some}\quad c\in\RR,\quad D\in\Der(\sg), \end{equation} once the metric $g$ is identified with an inner product on the Lie algebra $\sg$ of $S$. When $S$ is nilpotent, these metrics are called {\it nilsolitons} and are precisely the nilpotent parts of Einstein solvmanifolds (see the survey \cite{cruzchica} for further information). It is proved in \cite{solvsolitons} that, up to isometry, any solvsoliton can be obtained via a very simple construction from a nilsoliton $(N,g_1)$ together with any abelian Lie algebra of symmetric derivations of the metric Lie algebra of $(N,g_1)$. Furthermore, a given solvable Lie group can admit at most one solvsoliton left invariant metric up to isometry and scaling, and another consequence of \cite{solvsolitons} is that any Ricci soliton obtained via (\ref{rsD}) is necessarily simply connected (see also \cite{Lfn,Wll} for examples and classification results on solvsolitons). The following recently proved result completes the picture for Ricci soliton left-invariant metrics on solvable Lie groups. \begin{theorem} \cite[Theorem 1.1]{Jbl} Any (nonflat) Ricci soliton admitting a transitive solvable Lie group of isometries is isometric to a simply connected solvsoliton. \end{theorem} The concept of solvsoliton can be easily generalized to the class of all homogeneous spaces as follows. \begin{definition}\label{as} A homogeneous space $(G/K,g_{\ip})$ with reductive decomposition $\ggo=\kg\oplus\pg$ is said to be an {\it algebraic soliton} if there exist $c\in\RR$ and $D\in\Der(\ggo)$ such that $D\kg\subset\kg$ and $$ \Ricci(g_{\ip})=cI+D_{\pg}, $$ where $D_{\pg}:=\proy\circ D|_{\pg}$ and $\proy:\ggo=\kg\oplus\pg\longrightarrow\pg$ is the linear projection. \end{definition} We note that Einstein homogeneous manifolds are algebraic solitons with respect to any presentation as a homogeneous space and any reductive decomposition, by just taking $D=0$. The following result supports in a way the above definition. \begin{proposition}\label{asrs} Any simply connected algebraic soliton $(G/K,g_{\ip})$ is a Ricci soliton. \end{proposition} \begin{remark} The hypothesis of $G/K$ being simply connected is in general necessary in the above proposition, as the case of solvsolitons shows (see \cite[Remark 4.12]{solvsolitons}). \end{remark} \begin{proof} We can assume that $G$ is simply connected and still have that $G/K$ is almost-effective. Notice that $K$ is therefore connected as $G/K$ is simply connected. Since $D\in\Der(\ggo)$ we have that $e^{tD}\in\Aut(\ggo)$ and thus there exists $\tilde{\vp}_t\in\Aut(G)$ such that $d\tilde{\vp}_t|_e=e^{tD}$ for all $t\in\RR$. By using that $K$ is connected and $D\kg\subset\kg$, it is easy to see that $\tilde{\vp}_t(K)=K$ for all $t$. This implies that $\tilde{\vp}_t$ defines a diffeomorphism $\vp_t$ of $M=G/K$ by $\vp_t(uK)=\tilde{\vp}_t(u)K$ for any $u\in G$, which therefore satisfies at the origin that $d\vp_t|_{o}=e^{tD_{\pg}}$. 
Let $X_D$ denote the vector field of $M$ defined by the one-parameter subgroup $\{\vp_t\}\subset\Diff(M)$, that is, $X_D(p)=\ddt|_0\vp_t(p)$ for any $p\in M$. It follows from the symmetry of $D_{\pg}$ that \begin{equation}\label{Lder} \lca_{X_D}g_{\ip}=\ddt|_0\vp^*_tg_{\ip} =\ddt|_0\la e^{-tD_{\pg}}\cdot, e^{-tD_{\pg}}\cdot\ra = -2\la D_{\pg}\cdot,\cdot\ra, \end{equation} but since $\Ricci=cI+D_{\pg}$, we obtain that $\ricci(g_{\ip})=cg_{\ip}-\unm \lca_{X_D}g_{\ip}$, and so $g_{\ip}$ is a Ricci soliton (see (\ref{rseq})), as was to be shown. \end{proof} \begin{remark}\label{Dkcero} In Definition \ref{as}, $D\kg=0$ necessarily holds. Indeed, we have that $$ \ad{DZ}|_\pg=[D|_\pg,\ad{Z}|_\pg]=[\Ricci(g_{\ip}),\ad{Z}|_\pg]=0, \qquad\forall Z\in\kg, $$ and thus $D\kg=0$ by almost-effectiveness. \end{remark} From the proof of Proposition \ref{asrs}, one may perceive that there is a more general way in which a homogeneous Ricci soliton can be considered `algebraic', in the sense that the algebraic structure of some of its presentations as a homogeneous space is strongly involved. \begin{definition}\label{sas}\cite[Definition 1.4]{Jbl} A homogeneous space $(G/K,g)$ is called a {\it semi-algebraic soliton} if there exists a one-parameter family $\tilde{\vp}_t\in\Aut(G)$ with $\tilde{\vp}_t(K)=K$ such that $$ g(t)=c(t)\vp_t^*g $$ is a solution to the (unnormalized) Ricci flow equation (\ref{RFrn}) starting at $g(0)=g$ for some scaling function $c(t)>0$, where $\vp_t\in\Diff(G/K)$ is the diffeomorphism determined by $\tilde{\vp}_t$. \end{definition} As the following example shows, a homogeneous Ricci soliton is not always semi-algebraic with respect to a given presentation as a homogeneous space. \begin{example}\label{nosemi1} The direct product $S=S_1\times S_2$ of a completely solvable nonflat solvsoliton $S_1$ and a flat nonabelian solvmanifold $S_2$ is a Ricci soliton which is not semi-algebraic when presented as a left-invariant metric on $S$ (see \cite[Example 1.3]{Jbl}). \end{example} The following result confirms the leading role played by the algebraic side of homogeneous manifolds in Ricci soliton theory. \begin{theorem}\cite[Proposition 2.2]{Jbl}\label{semi} Any homogeneous Ricci soliton $(M,g)$ is a semi-algebraic soliton with respect to its full isometry group $G=\Iso(M,g)$. Moreover, $\tilde{\vp}_t$ can be chosen to be a one-parameter subgroup of $\Aut(G)$ such that $\tilde{\vp}_t|_{K_0}=id$, where $K_0$ is the identity component of $K$. \end{theorem} It follows from (\ref{Lder}) that if $\left(G/K,g_{\ip}\right)$ is a semi-algebraic soliton with reductive decomposition $\ggo=\kg\oplus\pg$, then \begin{equation}\label{alg2} \Ricci(g_{\ip})=cI+\unm\left(D_{\pg}+D_{\pg}^t\right), \qquad\mbox{for some} \quad c\in\RR, \quad D\in\Der(\ggo), \quad D\kg=0, \end{equation} where actually $D=\ddt|_0\tilde{\vp}_t$ (see also \cite[Proposition 2.3]{Jbl}). Conversely, if condition (\ref{alg2}) holds for some reductive decomposition and $G/K$ is simply connected, then one can prove in much the same way as Proposition \ref{asrs} that $\left(G/K,g_{\ip}\right)$ is indeed a Ricci soliton with $\ricci(g_{\ip})=cg_{\ip}-\unm \lca_{X_D}g_{\ip}$. \begin{example}\label{nosemi2} The following example was generously provided to us by M. Jablonski and it is given by the $6$-dimensional solvmanifold whose metric Lie algebra has an orthonormal basis $\{X_1,Y_1,Z_1,X_2,Y_2,Z_2\}$ with Lie bracket $$ [X_1,Y_1]=Z_1, \quad [X_1,X_2]=Y_2, \quad [X_1,Y_2]=-X_2, \quad [X_2,Y_2]=Z_2.
$$ It is easy to see that it is not a semi-algebraic soliton. Indeed, the Ricci operator is $\Ricci=\diag(-\unm,-\unm,\unm,-\unm,-\unm,\unm)$ and if (\ref{alg2}) holds, then $c=-\unm$, since $D\ggo$ must be contained in the nilradical $\la Y_1,Z_1,X_2,Y_2,Z_2\ra$ (as $\ggo$ is solvable), which is orthogonal to $X_1$. Now, restricted to the Lie ideal $\la X_2,Y_2,Z_2\ra$, the diagonal part of $D$ has the form $\diag(a,b,a+b)$, and thus $a=b=0$ and $-\unm+a+b=\unm$, a contradiction. However, it is easy to prove that it is isometric to the nilsoliton $H_3\times H_3$, where $H_3$ denotes the $3$-dimensional Heisenberg group. The bracket flow evolution of this solvmanifold will be analyzed in Section \ref{evol-nosemi}. \end{example} The special reductive decomposition given in Lemma \ref{Bkp0} establishes some constraints on the behavior of derivations, and consequently on the structure of semi-algebraic solitons. \begin{lemma}\label{Dkp} Let $(G/K,g_{\ip})$ be a homogeneous space with reductive decomposition $\ggo=\kg\oplus\pg$, and assume in addition that $B(\kg,\pg)=0$, where $B$ is the Killing form of $\ggo$. If $D \in \Der(\ggo)$ satisfies $D \kg \subset \kg$, then $D\pg \subset \pg$. \end{lemma} \begin{proof} For $X\in \pg$, $Z = DX$, we write $Z = Z_\kg + Z_\pg$ according to the decomposition $\ggo=\kg\oplus\pg$. By using that the Killing form is invariant under derivations (i.e. $B(DY,W)+B(Y,DW)=0$ for all $Y,W\in\ggo$), together with $D \kg \subset \kg$ and $B(\kg,\pg)=0$, we get \[ 0 = B(Z, Z_\kg) + B(X, DZ_\kg) = B(Z_\kg, Z_\kg), \] which implies that $Z_\kg =0$ since $B|_{\kg \times \kg}$ is negative definite. \end{proof} \begin{corollary}\label{semiB} Let $(G/K,g_{\ip})$ be a semi-algebraic soliton with reductive decomposition $\ggo=\kg\oplus\pg$, and assume in addition that $B(\kg,\pg)=0$. Then, $$ \Ricci(g_{\ip})=cI+\unm\left(D_{\pg}+D_{\pg}^t\right), \qquad\mbox{with} \qquad D = \left[\begin{smallmatrix} 0&0\\ 0&D_\pg\\\end{smallmatrix}\right]\in\Der(\ggo). $$ \end{corollary} \begin{example}\label{Dpnop} It may happen that $\pg$ is not preserved by the derivation $D$. Consider for instance any nilsoliton $(\ngo,\ip)$, say with $\Ricci_\ngo = cI + D_0$, $D_0 \in \Der(\ngo)$. Assume there exists a nonzero $B\in \Der(\ngo)\cap\sog(\ngo,\ip)$. This gives rise to a presentation of the nilsoliton as a homogeneous space with one-dimensional isotropy and reductive decomposition $\ggo = \kg \oplus \ngo$, where $\kg = \RR B$ and $B$ is acting on $\ngo$ as usual. Now suppose that there is a nonzero $X\in\ngo$ such that $BX=0$, and let $\pg \subset \ggo$ be the subspace spanned by $\alpha$, where $\alpha$ is the set obtained by replacing $X$ with $X+B$ in an orthonormal basis $\beta$ of $(\ngo,\ip)$ containing $X$. We see that $\ggo = \kg \oplus \pg$ is also a reductive decomposition. Under the natural identification of $\pg$ with $\ngo$, the bases $\alpha$ and $\beta$ turn out to be identified, and then the Ricci operator $\Ricci_\pg$ (according to $\ggo = \kg \oplus \pg$) is given by \[ \Ricci_\pg = cI + D_1, \quad D_1\in \End(\pg), \quad [D_1]_\alpha = [D_0]_\beta, \] which implies that it is still an algebraic soliton with respect to the reductive decomposition $\ggo=\kg\oplus\pg$. Now assume that $D_1 = \unm(D_\pg + D_\pg^t)$, for some $D:= \left[\begin{smallmatrix}0 & 0 \\ 0 & D_\pg \end{smallmatrix}\right] \in \Der(\ggo)$. Then, by using that $D \pg \subset \pg \cap \ngo$ (in fact, $D\ggo \subset \ngo$ since $\ggo$ is solvable and $\ngo$ is its nilradical) and $\pg \cap \ngo \perp (X+B)$, we obtain that \[ \la D_0 X, X \ra = \la D_1 (X+B), (X+B)\ra = \la D (X+B), (X+B)\ra = 0, \] which contradicts the fact that $D_0$ is positive definite (see e.g.
\cite[Section 2]{cruzchica}). Thus this reductive decomposition does not allow us to present this nilsoliton as a semi-algebraic soliton with a derivation leaving $\pg$ invariant. The bracket flow evolution of this example in the case $\ngo$ is the 3-dimensional Heisenberg Lie algebra will be studied in Section \ref{evol-Dpnop}. \end{example} In \cite{alek}, some structural results on Lie theoretical aspects of semi-algebraic solitons are given. In the case of nilmanifolds, or more precisely, when $G$ is nilpotent and simply connected and $K$ is trivial, condition (\ref{alg2}) is equivalent to $(G,\ip)$ being a nilsoliton, since $D^t$ turns out to be a derivation in this case. Indeed, if $\Ricci(\ip)=cI+D+D^t$, then by using \cite[(19)]{homRF} one obtains that $$ 0=\tr{\Ricci(\ip)[D,\Ricci(\ip)]}=\tr{\Ricci(\ip)[D,D^t]}=\unc\la\pi(D^t)\lb,\pi(D^t)\lb\ra, $$ and hence $D^t\in\Der(\ggo)$. This fixes a gap in the proof of \cite[Proposition 1.1]{soliton}, which was kindly pointed out to the second author by M. Jablonski. \begin{figure}\label{HRSfig} \includegraphics[width=\textwidth]{homRS-cuadro} \caption{Homogeneous Ricci solitons} \end{figure} It is proved in \cite[Theorem 1.6]{Jbl} that any homogeneous Ricci soliton admitting a transitive semi-simple group of isometries must be Einstein. As far as we know, beyond solvmanifolds, the current status of knowledge on simply connected (nontrivial) homogeneous Ricci solitons can be summarized as in Figure \ref{HRSfig}. Recall that the only known examples for now are all isometric to solvsolitons, i.e. left-invariant algebraic solitons on solvable Lie groups. \section{Algebraic solitons and the bracket flow}\label{ASBF} We study in this section how semi-algebraic solitons evolve according to the bracket flow. Algebraic solitons are proved to be the only possible limits, backward and forward, for any bracket flow solution. This fact and the equivalence between the bracket and Ricci flows (see Theorem \ref{eqflrn}) suggest that algebraic solitons might exhaust the class of all homogeneous Ricci solitons (up to isometry). \begin{proposition}\label{limrs} Let $\mu(t)$ be a solution to any normalized bracket flow (see {\rm (\ref{BFrn})} or {\rm (\ref{BFrnsis})}). \begin{itemize} \item[(i)] If $\mu_0$ is a fixed point (i.e. $\mu(t)\equiv\mu_0$), then $\left(G_{\mu_0}/K_{\mu_0},g_{\mu_0}\right)$ is an algebraic soliton with $\Ricci_{\mu_0}=cI+D_{\pg}$, for some $c\in\RR$ and such that $\left[\begin{smallmatrix} 0&0\\ 0&D_{\pg} \end{smallmatrix}\right]\in\Der(\ggo,\mu_0)$. \item[(ii)] If $\mu(t)\to\lambda\in\hca_{q,n}$, as $t\to T_{\pm}$, then $T_{\pm}=\pm\infty$ and $\left(G_\lambda/K_\lambda,g_\lambda\right)$ is an algebraic soliton as in part (i). \item[(iii)] Assume that $\mu(t)\to\lambda\in\hca_{q,n}$, as $t\to\pm\infty$, with $\lambda|_{\pg\times\pg}\ne 0$. Then the limit $\tilde{\lambda}$ of any other $\tilde{r}$-normalized bracket flow solution necessarily satisfies $\tilde{\lambda}=c\cdot\lambda$ for some $c\geq 0$. In particular, $\tilde{\lambda}$ is either flat ($c=0$) or homothetic to $\lambda$ ($c>0$). \end{itemize} \end{proposition} \begin{proof} Let $\lambda$ be a fixed point for some normalized bracket flow of the form (\ref{BFrn}) (i.e. $\mu(t)\equiv\mu_0=\lambda$). 
Thus $$ -\pi\left(\left[\begin{smallmatrix} 0&0\\ 0&\Ricci_{\lambda}+r(0)I \end{smallmatrix}\right]\right)\lambda=0, $$ from which we deduce that $\left(G_\lambda/K_\lambda,g_\lambda\right)$ is an algebraic soliton with $c=-r(0)$ and $D=\left[\begin{smallmatrix} 0&0\\ 0&\Ricci_{\lambda}+r(0)I\end{smallmatrix}\right]$ (see {\rm Definition \ref{as}}). This proves parts (i) and (ii). Let us now prove (iii). It follows from \cite[Lemma 3.9]{homRF} that both time reparametrizations $\tau(t)$ and $\tilde{\tau}(t)$ converge to $T^0_{\pm}$, as $t\to\pm\infty$ (see (\ref{ctau})). We project the curves determined by the two bracket flow solutions onto the quotient $V_{q+n}/\RR_{>0}$, where the $\RR_{>0}$-action is given by the rescaling (\ref{scmu}), and denote by $[v]$ the equivalence class of a vector $v\in V_{q+n}$. By (\ref{ctau}), we obtain that $[\nu(s)]$ converges to both $[\lambda]$ and $[\tilde{\lambda}]$, as $s\to T_{\pm}^0$, relative to the quotient topology. As the pairs of points which cannot be separated by disjoint open sets in $V_{q+n}/\RR_{>0}$ are all of the form $[v]$, $[0\cdot v]$ for some $v\in V_{q+n}$, either $\tilde{\lambda}=0\cdot\lambda$ or $[\tilde{\lambda}]=[\lambda]$, as was to be shown. \end{proof} As an application of Theorem \ref{eqflrn}, unnormalized Ricci and bracket flow solutions are proved in what follows to have a very simple form for semi-algebraic and algebraic solitons. \begin{proposition}\label{saBF} Let $\left(G_{\mu_0}/K_{\mu_0},g_{\mu_0}\right)$, $\mu_0 \in \hca_{q,n}$, be a homogeneous space that is a semi-algebraic soliton, say with $$ \Ricci_{\mu_0}=cI+\unm(D_\pg+D_\pg^t), \qquad c\in\RR, \qquad D:=\left[\begin{smallmatrix} 0&\ast\\ 0&D_\pg \end{smallmatrix}\right] \in\Der(\ggo,\mu_0). $$ Then the unnormalized bracket flow solution to \eqref{BFrn} (i.e. with $r\equiv 0$) is given by \begin{equation}\label{saevol} \nu(t) = (-2ct+1)^{-1/2} \cdot \left[\begin{smallmatrix} I&0\\ 0& e^{s(t)A} e^{-s(t)D_\pg} \end{smallmatrix}\right] \cdot \mu_0, \qquad t\in\left\{\begin{array}{lcl} (\tfrac{1}{2c},\infty), && c<0, \\ (-\infty,\tfrac{1}{2c}), && c>0, \\ (-\infty,\infty), && c=0, \end{array}\right. \end{equation} where $ A = \unm(D_\pg - D_\pg^t)$ and $s(t) = -\tfrac{1}{2c} \ln (-2ct+1) $ (for $c=0$, $s(t)=t$). Conversely, if the unnormalized bracket flow $\nu(t)$ evolves as in \eqref{saevol}, then $\left(G_{\mu_0}/K_{\mu_0},g_{\mu_0}\right)$ is a semi-algebraic soliton. The corresponding derivation $\tilde{D}$ satisfies that $A = \unm(\tilde{D}_\pg - \tilde{D}_\pg^t)$, although possibly $\tilde{D}_{\pg}\ne D_\pg$. \end{proposition} \begin{proof} By using the equality $\Ricci_{\mu_0}=cI+\unm(D_\pg+D_\pg^t)$, it is easy to check that \[ \ip_t = (-2ct+1) \la e^{-s(t)D_\pg} \cdot , e^{-s(t)D_\pg} \cdot \ra \] is a solution to the unnormalized Ricci flow \eqref{RFiprn}. The Ricci operator of $\ip_t$ is therefore given by \[ \Ricci(\ip_t) = (-2ct+1)^{-1} e^{s(t) D_\pg} \Ricci(\ip) e^{-s(t) D_\pg} = (-2ct+1)^{-1} (cI + D_\pg - e^{s(t) D_\pg} A e^{-s(t) D_\pg} ) \] (recall that $\Ricci(\ip) = \Ricci_{\mu_0} = cI + D_\pg - A$). Now we solve the differential equation given in part (i) of Theorem \ref{eqflrn}, getting $h(t) = (-2ct+1)^{1/2} e^{s(t) A} e^{-s(t) D_\pg} $, and by using part (iv) of the same theorem we obtain the desired formula for $\nu(t)$. The converse follows by computing $\ddt \nu(t)\big|_0$, which equals $-\pi\left(\left[\begin{smallmatrix} 0&0\\ 0&\Ricci_{\mu_0}\end{smallmatrix}\right]\right)\mu_0$.
\end{proof} \begin{remark}\label{saevol2} If $D$ has the special form $D = \left[\begin{smallmatrix} 0&0\\ 0&D_\pg \end{smallmatrix}\right]$ (i.e. $D\pg \subseteq \pg$, or $\ast=0$), which holds for instance if the reductive decomposition satisfies $B_{\mu_0}(\kg,\pg)=0$ for the Killing form $B_{\mu_0}$ of $\mu_0$ (see Corollary \ref{semiB}), then $\left[\begin{smallmatrix} I&0\\ 0& e^{-s(t)D_\pg} \end{smallmatrix}\right] \in \Aut(\ggo,\mu_0)$, and hence the formula for the unnormalized bracket flow is given by \begin{equation}\label{saBF2} \nu(t) = (-2ct+1)^{-1/2} \cdot \left(\left[\begin{smallmatrix} I&0\\ 0& e^{s(t)A} \end{smallmatrix}\right] \cdot \mu_0\right). \end{equation} \end{remark} Algebraic solitons with $D\pg\subset\pg$ are characterized in terms of their bracket flow evolution as follows. \begin{proposition}\label{rsequiv} For a homogeneous space $(G_{\mu_0}/K_{\mu_0},g_{\mu_0})$, $\mu_0\in\hca_{q,n}$, the following conditions are equivalent: \begin{itemize} \item[(i)] $(G_{\mu_0}/K_{\mu_0},g_{\mu_0})$ is an algebraic soliton with $$ \Ricci_{\mu_0}=cI+D_\pg, \qquad c\in\RR, \qquad D=\left[\begin{smallmatrix} 0&0\\ 0&D_\pg \end{smallmatrix}\right] \in\Der(\ggo,\mu_0). $$ \item[(ii)] The unnormalized bracket flow solution is given by $$ \nu(t)=(-2ct+1)^{-1/2}\cdot\mu_0, $$ or equivalently, $$ \nu_{\kg}(t)=(-2ct+1)^{-1}\mu_{\kg}(0), \quad\nu_{\pg}(t)=(-2ct+1)^{-1/2}\mu_{\pg}(0), \quad t\in\left\{\begin{array}{lcl} (\tfrac{1}{2c},\infty), && c<0, \\ (-\infty,\tfrac{1}{2c}), && c>0, \\ (-\infty,\infty), && c=0. \end{array}\right. $$ \item[(iii)] The solutions to the unnormalized Ricci flow equations {\rm \eqref{RFrn}} and {\rm \eqref{RFiprn}} are given by $$ g_{ij}(t)=\la X_i,X_j\ra_t= (-2ct+1)^{r_i/c}\delta_{ij}, $$ where $\{ X_1,\dots,X_n\}$ is an orthonormal basis of $(\pg,\ip)$ of eigenvectors (or Killing vector fields) for $\Ricci_{\mu_0}$ with eigenvalues $\{ r_1,\dots,r_n\}$. \end{itemize} \end{proposition} \begin{proof} The equivalence between parts (i) and (ii) follows from Proposition \ref{saBF}, by using Remark \ref{saevol2} and that $A=0$ in this case. To prove that parts (ii) and (iii) are equivalent, we can use Theorem \ref{eqflrn}, as in both cases one obtains $$ h(t)=e^{a(t)\Ricci_{\mu_0}}, \qquad a(t)=\tfrac{1}{2c}\log(-2ct+1), $$ concluding the proof of the proposition. \end{proof} Part (iii) of the above proposition generalizes results on the asymptotic behavior of some nilsolitons obtained in \cite{Pyn,Wllm} and algebraic solitons on Lie groups in \cite{nicebasis}. Recall that $c<0$ for any nontrivial homogeneous Ricci soliton. It follows from Proposition \ref{rsequiv}, (ii) that if $\mu_{\kg}=0$ (e.g. if $q=0$) or $\mu_{\pg}=0$, then the trace of the algebraic soliton $\nu(t)$ is contained in the straight line segment joining $\mu_0$ with the flat metric $\lambda$ given by $\lambda|_{\kg\times\ggo}=\mu_0|_{\kg\times\ggo}$, $\lambda|_{\pg\times\pg}=0$ (see e.g. the algebraic solitons $S^2\times\RR$, $H^2\times\RR$ and $Nil$ in \cite[Figure 1]{homRF}, and those denoted by $G_{bi}$, $E2$, $H\times\RR^m$ and $N$ in \cite[Figure 5]{homRF}). Otherwise, if $\mu_{\kg},\mu_{\pg}\ne 0$, then $\nu(t)$ stays in the half parabola $\{ s^2\mu_{\kg}+s\mu_{\pg}:s>0\}$ joining $\mu_0$ with the flat $\lambda$ (e.g. the round metrics on $S^3$ in \cite[Figure 1]{homRF}). In any case, the forward direction is determined by the sign of $c$. \vs We now study the evolution of semi-algebraic solitons under a normalized bracket flow. 
Let $F:\hca_{q,n}\longrightarrow\RR$ be a function which is invariant under isometry (i.e. $F(\mu)=F(\lambda)$ for any pair $\mu,\lambda\in\hca_{q,n}$ of isometric homogeneous spaces). In particular, $F$ is $\left[\begin{smallmatrix} \Gl_q(\RR)&0\\ 0&\Or(n) \end{smallmatrix}\right]$-invariant relative to the action defined in (\ref{action})-(\ref{adkh}). Also assume that $F$ is {\it scaling invariant}, in the sense that there exists $d\ne 0$ such that $F(c\cdot\mu)=c^dF(\mu)$ for any $c\in\RR$, $\mu\in\hca_{q,n}$. Some examples of isometry and scaling invariant functions are given by the scalar curvature $R(\mu)$, more generally the other traced powers $\tr{\Ricci_\mu^k}$ of the Ricci operator, and $\|\nabla_\mu^k\Riem_\mu\|$, where $\nabla_\mu$ denotes the Levi-Civita connection and $\Riem_\mu$ the Riemann curvature tensor. Consider the normalized bracket flow $\mu(t)$ as in (\ref{BFrn}) such that $F(\mu(t))\equiv F(\mu_0)$, and express it in terms of the unnormalized bracket flow by $\mu(t)=c(t)\cdot\nu(\tau(t))$ (see (\ref{ctau})). If $\mu_0\in\hca_{q,n}$ is a Ricci soliton with cosmological constant $c_0$ (see \eqref{rseq}), then it follows from Theorem \ref{eqflrn} and (\ref{rssol}) that $$ F(\mu_0)=c(t)^d(-2c_0\tau(t)+1)^{-d/2}F(\mu_0), \qquad\forall t. $$ By assuming that $F(\mu_0)\ne 0$, we obtain that $c(t)=(-2c_0\tau(t)+1)^{1/2}$ and so $c'(t)=-c_0c(t)$, from which it follows that $r(t)\equiv -c_0$ (recall from (\ref{ctau}) that $c'=rc$). The evolution equation of the normalized bracket flow yielding $F$ constant and starting at a Ricci soliton $\mu_0$ is therefore given by \begin{equation}\label{F-eq} \ddt\mu=-\pi\left(\left[\begin{smallmatrix} 0&0\\ 0&\Ricci_{\mu} \end{smallmatrix}\right]\right)\mu -c_0\mu, \qquad \mu(0)=\mu_0. \end{equation} Moreover, it follows that $c(t)=e^{-c_0t}$ and $\tau(t)=\tfrac{1-e^{-2c_0t}}{2c_0}$ (and $\tau(t)=t$ if $c_0=0$), which together with Proposition \ref{saBF} yield the following result. \begin{proposition}\label{F-const} Let $(G_{\mu_0}/K_{\mu_0},g_{\mu_0})$, $\mu_0 \in \hca_{q,n}$, be a homogeneous space that is a semi-algebraic soliton, say with $$ \Ricci_{\mu_0}=cI+\unm(D_\pg+D_\pg^t), \qquad c\in\RR, \qquad D:=\left[\begin{smallmatrix} 0&\ast\\ 0&D_\pg \end{smallmatrix}\right]\in\Der(\ggo,\mu_0). $$ Let $F:\hca_{q,n}\longrightarrow\RR$ be any differentiable function invariant under isometry and scaling such that $F(\mu_0)\ne 0$. Then the normalized bracket flow solution such that $F(\mu(t))\equiv F(\mu_0)$ is given by \begin{equation}\label{saevol_sc} \mu(t) = \left[\begin{smallmatrix} I&0\\ 0& e^{tA} e^{-tD_\pg} \end{smallmatrix}\right] \cdot \mu_0, \qquad t\in (-\infty,\infty), \end{equation} where $ A = \unm(D_\pg - D_\pg^t)$. \end{proposition} It follows from Remark \ref{saevol2} that if the derivation $D$ satisfies $D\pg \subseteq \pg$, and $F:\hca_{q,n}\longrightarrow\RR$ is only assumed to be scaling and $\left[\begin{smallmatrix} I&0\\ 0&\Or(n) \end{smallmatrix}\right]$-invariant, then the evolution is simply given by \begin{equation}\label{F-const2} \mu(t) = \left[\begin{smallmatrix} I&0\\ 0& e^{tA} \end{smallmatrix}\right] \cdot \mu_0. \end{equation} An example of a scaling and $\left[\begin{smallmatrix} I&0\\ 0&\Or(n) \end{smallmatrix}\right]$-invariant function which is not isometry invariant is $F(\mu)=\|\mu\|$ (see Section \ref{N-norm} below).
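Note that in \eqref{F-const2} the solution evolves by the action of orthogonal matrices, so the norm introduced in Section \ref{N-norm} below is preserved along the flow. This is elementary to check numerically; the following sketch (the encoding of brackets by structure constants and the helper names are our own, for illustration) verifies the invariance of $\|\mu\|$ under the action of $\left[\begin{smallmatrix} I&0\\ 0&e^{tA}\end{smallmatrix}\right]$ for a skew-symmetric $A$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
q, n = 1, 3
d = q + n

mu = rng.standard_normal((d, d, d))
mu = mu - np.transpose(mu, (1, 0, 2))   # a random skew-symmetric bracket

def act(h, mu):
    # (h.mu)(X,Y) = h mu(h^{-1}X, h^{-1}Y), on structure constants
    hinv = np.linalg.inv(h)
    return np.einsum('ai,bj,abc,kc->ijk', hinv, hinv, mu, h)

A = rng.standard_normal((n, n))
A = A - A.T                              # skew-symmetric, as in (F-const2)
for t in (0.5, 1.0, 2.0):
    h = np.block([[np.eye(q), np.zeros((q, n))],
                  [np.zeros((n, q)), expm(t * A)]])
    print(np.isclose(np.sum(act(h, mu)**2), np.sum(mu**2)))   # True
\end{verbatim}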
As the orbit $\left[\begin{smallmatrix} I&0\\ 0&\Or(n) \end{smallmatrix}\right]\cdot\mu_0$ is compact, the solution $\mu(t)$ in \eqref{F-const2} stays bounded and the limits of subsequences $\mu(t_k)$ are all isomorphic to $\mu_0$ (compare with Section \ref{evol-nosemi}). On the other hand, recall that $A$ is skew-symmetric, so its eigenvalues are purely imaginary. By Kronecker's theorem, there exists a sequence $t_k$, with $t_k \rightarrow \infty$, such that $e^{t_kA} \rightarrow I$. This implies that $\mu(t_0+t_k) \underset{k \rightarrow \infty}\longrightarrow \mu(t_0)$ for any $t_0\in\RR$ and thus the whole solution is contained in its $\omega$-limit. The absence of this kind of chaos for the bracket flow, which is still an open question, would imply that $\mu(t)\equiv\mu_0$, and so $A$ would be a derivation of $\mu_0$, thus yielding that any semi-algebraic soliton is algebraic. \subsection{Normalizing by the bracket norm}\label{N-norm} The choice of any inner product on $\kg$ allows us to consider the inner product on $\lamg$ defined by \begin{equation}\label{innV} \la\mu,\lambda\ra= \sum\la\mu(Y_i,Y_j),\lambda(Y_i,Y_j)\ra, \end{equation} where $\{ Y_i\}$ is the union of orthonormal bases of $\kg$ and $(\pg,\ip)$, respectively. Given any $\mu\in\hca_{q,n}$, one may assume, in order to simplify some computations, that the inner product on $\kg$ is $\Ad(K_\mu)$-invariant (recall that $\overline{\Ad(K_{\mu})}$ is compact in $\Gl(\ggo)$), which will also hold for any other element in $\hca_{q,n}$ coinciding with $\mu$ on $\kg\times\kg$ (e.g. for any normalized bracket flow solution starting at $\mu$). The normalized bracket flow equation as in (\ref{BFrn}) such that the norm $\|\mu(t)\|$ of any solution remains constant in time produces an interesting consequence: there always exists a convergent subsequence $\mu(t_k)\to\lambda$. Recall that $\mu(t)|_{\kg\times\ggo}\equiv\mu_0|_{\kg\times\ggo}$ for any bracket flow solution, so we only need to keep $\|\mu(t)|_{\pg\times\pg}\|^2=\|\mu_{\kg}\|^2+\|\mu_{\pg}\|^2$ constant. By using that $$ \|c\cdot\mu|_{\pg\times\pg}\|^2=c^4\|\mu_{\kg}\|^2+c^2\|\mu_{\pg}\|^2, $$ we obtain that $\|c\cdot\mu|_{\pg\times\pg}\|^2=1$ if and only if \begin{equation}\label{cN-norm1} c^2(t):= \frac{-\|\mu_{\pg}\|^2+\sqrt{\|\mu_{\pg}\|^4+4\|\mu_{\kg}\|^2}}{2\|\mu_{\kg}\|^2}, \qquad \mu_{\kg}\ne 0, \end{equation} and $c(t)=\|\mu_{\pg}\|^{-1}$ for $\mu_{\kg}=0$. It is easy to prove that the corresponding normalizing function $r$ in equation (\ref{BFrn}) must be $$ r=\frac{4\tr{\Ricci\mm}-\la\mu_{\kg},\mu_{\kg}(\Ricci\cdot,\cdot)+\mu_{\kg}(\cdot,\Ricci\cdot)\ra}{2\|\mu_{\kg}\|^2+\|\mu_{\pg}\|^2}. $$ An alternative way to guarantee the existence of a convergent subsequence is by taking \begin{equation}\label{cN-norm2} c(t):=\frac{1}{\|\mu_{\kg}\|^{1/2}+\|\mu_{\pg}\|}, \end{equation} as it follows that $0<\beta\leq\|c\cdot\mu|_{\pg\times\pg}\|^2\leq 2$, where $\beta=(1-\alpha)^2+\alpha^4\approx 0.289$ and $\alpha$ is the real root of $2x^3+x-1$. Once we obtain a convergent subsequence $\mu(t_k)\to\lambda$, we have that $\lambda|_{\pg\times\pg}\ne 0$ as soon as $\mu_0|_{\pg\times\pg} \ne 0$, but we do not know if it is always the case that $\lambda\in\hca_{q,n}$. The only condition in Definition \ref{hqn} which may fail is (h2), that is, $K_\lambda$ might not be closed in $G_\lambda$. This would yield a collapsing with bounded geometry under the Ricci flow to the lower dimensional homogeneous space $G_\lambda/\overline{K_\lambda}$ (see \cite[Section 6.5]{spacehm}).
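Both normalizations above are elementary to sanity-check numerically. In the following sketch (with the norms $\|\mu_{\kg}\|$, $\|\mu_{\pg}\|$ taken as abstract input data), the first constant comes from \eqref{cN-norm1} and yields $\|c\cdot\mu|_{\pg\times\pg}\|^2=1$ exactly, while the second comes from \eqref{cN-norm2} and keeps this quantity between the stated bounds:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def norm2_scaled(c, k, p):
    # ||c.mu|_{pxp}||^2 = c^4 ||mu_k||^2 + c^2 ||mu_p||^2
    return c**4 * k**2 + c**2 * p**2

for _ in range(5):
    k, p = rng.uniform(0.1, 10.0, size=2)   # ||mu_k||, ||mu_p||
    c1 = np.sqrt((-p**2 + np.sqrt(p**4 + 4 * k**2)) / (2 * k**2))
    c2 = 1.0 / (np.sqrt(k) + p)
    print(np.isclose(norm2_scaled(c1, k, p), 1.0),
          0.289 <= norm2_scaled(c2, k, p) <= 2.0)   # True True
\end{verbatim}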
We also note that $\lambda$ may belong to $\hca_{q,n}$ and nevertheless be flat (see the examples in Sections \ref{evol-nosemi} and \ref{evol-Dpnop}). Recall that $F(\mu)=\|\mu\|$ is scaling and $\left[\begin{smallmatrix} I&0\\ 0&\Or(n) \end{smallmatrix}\right]$-invariant, so that formula (\ref{F-const2}) applies for the evolution of semi-algebraic solitons. \subsection{Scalar curvature normalization}\label{R-norm} If we start the Ricci flow at a Ricci soliton $(M,g)$, the solution $g(t)$ is homothetic (i.e. isometric up to scaling) to $g$ for each $t$ where it is defined. So, in the homogeneous case, normalizing by constant-in-time scalar curvature yields a solution $g(t)$ that is isometric to $g$ for each $t$. It is well known that the scalar curvature $R=R(g(t))$ of any Ricci flow solution $g(t)$ evolves by $$ \dpar R=\Delta(R) +2\|\ricci\|^2, $$ where $\Delta$ is the Laplace operator of the Riemannian manifold $(M,g(t))$ (see e.g. \cite[Lemma 6.7]{ChwKnp}). As in the homogeneous case $R$ is constant on $M$, we simply get \begin{equation}\label{evR} \ddt R=2\|\ricci\|^2. \end{equation} We refer to \cite[Proposition 3.8, (vi)]{homRF} for an alternative proof of this fact within the homogeneous setting, as an application of Theorem \ref{eqflrn}. \begin{lemma}\label{trRic2} Let $(M,g)$ be a Ricci soliton with constant scalar curvature (not necessarily homogeneous), say with $\ricci(g) = c g + \lca_Xg$ and Ricci operator $\Ricci(g)$. Then, $$ cR(g)=\tr{\Ricci(g)^2}, $$ where $R(g)=\tr{\Ricci(g)}$ is the scalar curvature. \end{lemma} \begin{proof} Let $g(t)$ be the unnormalized Ricci flow starting at $g$. It follows from \eqref{rssol} and \eqref{evR} that \begin{align*} 2(-2ct+1)^{-2}\tr{\Ricci(g)^2} &= 2 \tr{\Ricci(g(t))^2} = \ddt R(g(t)) \\ &= \ddt (-2ct+1)^{-1} R(g) = 2c(-2ct+1)^{-2} R(g), \end{align*} and so the lemma follows. \end{proof} By using that a homogeneous manifold is flat if and only if it is Ricci flat (see \cite{AlkKml}), we deduce from the above lemma that a homogeneous Ricci soliton $(M,g)$ is flat as soon as $c=0$ or $R(g)=0$. Furthermore, if $(M,g)$ is nonflat, then $c$ and $R(g)$ are both nonzero and have the same sign. For the normalized bracket flow in this case, we have that the limit $\lambda$ of any subsequence $\mu(t_k)\to\lambda$, as $t_k\to\pm\infty$, is automatically nonflat as $R(\lambda)=R(\mu_0)$. Unfortunately, the solution may diverge to infinity without any convergent subsequence. There are examples of this behavior with $R<0$ in \cite[Sections 3.4 and 4]{homRF}, and to obtain examples with $R>0$ consider any product $E\times S$ of a compact Einstein homogeneous manifold $E$ and a nonabelian flat solvable Lie group $S$. Since $F(\mu)=R(\mu)$ is scaling and isometry invariant, we can apply Proposition \ref{F-const} and (\ref{F-const2}) to study the evolution of semi-algebraic solitons under this normalization. In this case, these results also follow more directly by using \cite[Example 3.13]{homRF} and Lemma \ref{trRic2}. \subsection{Example of a non semi-algebraic soliton evolution}\label{evol-nosemi} Our aim in this section is to study the bracket flow evolution of the homogeneous Ricci soliton given in Example \ref{nosemi2}, which is not semi-algebraic. We therefore fix an orthonormal basis $\{X_1,Y_1,Z_1,X_2,Y_2,Z_2\}$ of $\ggo$ and consider $\nu=\nu_{a,b,c}\in\hca_{0,6}=\lca_6$ defined by $$ \nu(X_1,Y_1)=aZ_1, \quad \nu(X_1,X_2)=bY_2, \quad \nu(X_1,Y_2)=-bX_2, \quad \nu(X_2,Y_2)=cZ_2.
$$ It is easy to see that for any $a,c\ne 0$, the solvmanifold $(G_\nu,\ip)$ is isometric to the nilsoliton $H_3\times H_3$, where $H_3$ denotes the $3$-dimensional Heisenberg group. By a straightforward computation we obtain that the unnormalized bracket flow is equivalent to the ODE $$ a'=-\tfrac{3}{2}a^3, \qquad b'=-\unm a^2b, \qquad c'=-\tfrac{3}{2}c^3, $$ from which it follows that if $a(0)=b(0)=c(0)=1$, then $$ a=b^3, \qquad b(t)=(3t+1)^{-\tfrac{1}{6}}, \qquad c=b^3, \qquad t\in (-\tfrac{1}{3},\infty). $$ Thus $\nu(t)\longrightarrow 0$ (flat), as $t\to\infty$, and $\nu(t)\longrightarrow \infty$, as $t\to-\tfrac{1}{3}$. It follows from $\|\nu\|^2=2(a^2+2b^2+c^2)=4b^2(b^4+1)$ that \begin{eqnarray*} \frac{a}{\|\nu\|}&=&\frac{b^2}{2(b^4+1)^{1/2}}\underset{t\raw\infty}\longrightarrow 0, \qquad \bigg(\underset{{t\to-\tfrac{1}{3}}}\longrightarrow \unm\bigg), \\ \frac{b}{\|\nu\|}&=&\frac{1}{2(b^4+1)^{1/2}}\underset{t\raw\infty}\longrightarrow \unm, \qquad \bigg(\underset{{t\to-\tfrac{1}{3}}}\longrightarrow 0\bigg). \end{eqnarray*} This implies that under the bracket norm normalization, $\frac{\nu}{\|\nu\|}\longrightarrow\nu_{0,\unm,0}$, as $t\to\infty$, a nonabelian flat solvmanifold, and backward, as $t\to -\tfrac{1}{3}$, we have that $\frac{\nu}{\|\nu\|}\longrightarrow\nu_{\unm,0,\unm}$, the nilsoliton $H_3\times H_3$ itself. Concerning scalar curvature normalization, by using that $R=R(\nu)=-\unm(a^2+c^2)=-b^6$, we obtain \[ \frac{a}{|R|^{1/2}}\equiv 1, \qquad \frac{b}{|R|^{1/2}}=\frac{1}{b^2}\underset{t\to\infty}\longrightarrow \infty, \qquad \bigg(\underset{{t\to-\tfrac{1}{3}}}\longrightarrow 0 \bigg), \] and therefore $\frac{\nu}{|R|^{1/2}}\longrightarrow\infty$, as $t\to\infty$. In the backward direction, as $t\to -\tfrac{1}{3}$, one has $\frac{\nu}{|R|^{1/2}}\longrightarrow\nu_{1,0,1}$, the nilsoliton $H_3\times H_3$. We note that all the limits obtained are non-isomorphic to the starting point $\nu_0=\nu_{1,1,1}$, and that $\nu(t)$ even diverges to infinity in one case. This is in clear contrast with the evolution of semi-algebraic solitons described in (\ref{F-const2}). \subsection{Example of an algebraic soliton evolution with $D\pg \nsubseteq \pg$}\label{evol-Dpnop} By Proposition \ref{limrs}, (i), the fixed points of the normalized bracket flow are precisely algebraic solitons with $D\pg \subseteq \pg$. It is then natural to ask how an algebraic soliton with $D\pg \nsubseteq \pg$ evolves, and this is the question we address in this section, by studying the bracket flow evolution of the algebraic soliton given in Example \ref{Dpnop} in the specific case where $\ngo=\hg_3$ is the $3$-dimensional Heisenberg Lie algebra. Fix a basis $\{Z,X_1,X_2,X_3 \}$ of $\ggo$ and consider $\nu = \nu_{a,b,c}\in \hca_{1,3}$ defined by \[ \left\{ \begin{array}{lll} \nu(Z, X_1) = X_2, & \nu(X_1,X_2) = a X_3 + b Z, & \nu(X_3,X_1) = cX_2,\\ \nu(Z, X_2) = -X_1, & & \nu(X_2,X_3) = cX_1, \end{array} \right. \] where $\kg = \RR Z$ and $\{X_1,X_2,X_3\}$ is an orthonormal basis of $(\pg,\ip)$. For $(a,b,c)=(1,-1,1)$ we get the nilsoliton $H_3$ presented with the modified reductive decomposition, as in Example \ref{Dpnop}. The unnormalized bracket flow for $\nu_{a,b,c}$ is equivalent to the ODE \[ a' = (-\tfrac32 a^2 + 2b + 2ac) a, \qquad b' = (-a^2 + 2b + 2ac)b, \qquad c' = \unm a^2 c.
\] It is not difficult to see that if $b\neq 0$ then $\tfrac{ac}{b}$ remains constant, and so starting at $a(0) = c(0) = 1, b(0)=-1$ one easily solves the ODE and gets \[ a = c^{-3}, \quad b = -c^{-2}, \qquad c(t) = (3t+1)^{\tfrac16}, \qquad t\in(-\tfrac13, \infty). \] We obtain $\nu(t) \lraw \infty$ if we let either $t\to \infty$ or $t\to -\tfrac13$. This provides an explicit example of the following unexpected behavior: a bracket flow solution which is immortal but not due to uniform boundedness, as it goes to infinity. Under the bracket norm normalization defined in \eqref{cN-norm2}, we see that \[ \tfrac1{ \|\nu_{\kg}\|^{1/2}+\|\nu_{\pg}\| } \cdot \nu \underset{t\raw\infty}\lraw \nu_{0,0,\unm}, \] a flat metric on the solvable Lie group $E(2)$, and backward, \[ \tfrac1{ \|\nu_{\kg}\|^{1/2}+\|\nu_{\pg}\| } \cdot \nu \underset{t\raw -\tfrac13}\lraw \nu_{\tfrac1{\sqrt{2}},0,0}, \] the nilsoliton $H_3$ itself, though presented with reductive decomposition such that $D\pg \subseteq \pg$. This follows from a straightforward calculation, by using that $\| \nu_\pg \|^2 = 2a^2 + 4c^2$, $\| \nu_\kg \| ^2 = 2 b^2$. Regarding scalar curvature normalization, we have that $R = R(\nu) = -\unm a^2 = -\unm c^{-6}$, and then \[ \frac{a}{|R|^{1/2}} \equiv \sqrt2, \qquad \frac{b}{|R|} = -2c^4, \qquad \frac{c}{|R|^{1/2}} = \sqrt{2} c^4. \] This implies that $\frac1{|R|^{1/2}} \cdot \nu \lraw \infty$ as $t\raw \infty$, and backward, one has that as $t\raw -\frac13$, $\frac1{|R|^{1/2}} \cdot \nu \lraw \nu_{\sqrt{2},0,0}$, the same nilsoliton $H_3$ obtained in the backward limit of the bracket norm normalization. As in the previous example, we obtain non-isomorphic limits and even divergence in one case, in contrast with Proposition \ref{rsequiv} and \eqref{F-const2}, thus showing the advantages of having condition $D\pg \subseteq \pg$. \section{A geometric characterization of algebraic solitons}\label{algdiag} Whereas the concept of Ricci soliton is a Riemannian invariant, that is, invariant under isometry, the concept of semi-algebraic soliton is not, as it may depend on the presentation of the homogeneous manifold $(M,g)$ as a homogeneous space $(G/K,g)$ (see Section \ref{hrs}). Moreover, being an algebraic soliton may a priori not only depend on such presentation, but also on the reductive decomposition $\ggo = \kg \oplus \pg$ one is choosing for the homogeneous space (see Definition \ref{as}). The following property plays a key role in the study of the Ricci flow for homogeneous manifolds (see \cite{nicebasis}). \begin{definition}\label{RFdiag} A homogeneous manifold $(M,g)$ is said to be {\it Ricci flow diagonal} if at some point $p\in M$ there exists an orthonormal basis $\beta$ of $T_pM$ such that the Ricci flow solution $g(t)$ starting at $g$ is diagonal with respect to $\beta$ for any $t\in (T_-,T_+)$ (i.e. $g_{ij}(t)(p)=0$ for all $i\ne j$). \end{definition} We note that the point $p$ plays no role in this definition, as the condition holds either for every point or for none. It is easy to see that the property of being Ricci flow diagonal is invariant under isometry. Already in dimension $4$, there is a left-invariant metric on a nilpotent Lie group which is not Ricci flow diagonal (see \cite[Example 5.7]{nicebasis}). Let $(M,g)$ be a homogeneous Ricci soliton, and consider any presentation $(G/K,g_{\ip})$ of $(M,g)$ with reductive decomposition $\ggo=\kg\oplus\pg$. 
It is easy to check that if the unnormalized Ricci flow solution to equation \eqref{RFiprn} is written as $$ \ip_t=\la P(t)\cdot,\cdot\ra, $$ where $P(t)$ is the corresponding smooth curve of positive definite operators of $(\pg,\ip)$, then the Ricci flow equation is equivalent to the following ODE for $P$: \begin{equation}\label{RFP} \ddt P=-2P\Ricci(\ip_t), \end{equation} where $\Ricci(\ip_t):=\Ricci(g(t))(o):\pg\longrightarrow\pg$ is the Ricci operator at the origin. It follows from the uniqueness of ODE solutions that the following conditions are equivalent: \begin{itemize} \item $(M,g)$ is Ricci flow diagonal. \item There exists an orthonormal basis $\beta$ of $(\pg,\ip)$ such that the matrix $[\Ricci(\ip_t)]_\beta$ is diagonal for all $t\in (T_-,T_+)$. \item The family of symmetric operators $\{ P(t):t\in (T_-,T_+)\}$ is commutative. \end{itemize} By Remark \ref{saevol2} and Proposition \ref{rsequiv}, (iii), any algebraic soliton is Ricci flow diagonal. We now prove that this condition actually characterizes algebraic solitons among homogeneous Ricci solitons. In particular, if there exists a homogeneous Ricci soliton which is not isometric to any algebraic soliton, it must be geometrically different from all known examples. \begin{theorem}\label{diagalg} A homogeneous Ricci soliton is Ricci flow diagonal if and only if it is isometric to an algebraic soliton. \end{theorem} \begin{proof} Let $(M,g_0)=(G/K,g_0)$ be a homogeneous Ricci soliton presented as a semi-algebraic soliton, with Ricci operator $\Ricci(\ip_0) = cI + D_\pg - A$, $A = \unm (D_\pg - D_\pg^t)$, $\ip_0 = g_0(eK)$. Fix a reductive decomposition $\ggo = \kg \oplus \pg$ such that $D:=\left[\begin{smallmatrix} 0&0\\ 0&D_\pg \end{smallmatrix}\right] \in\Der(\ggo,\mu_0)$ (see Lemma \ref{Bkp0} and Corollary \ref{semiB}), and any inner product on $\ggo$ that extends $\ip_0$ and makes $\kg\perp \pg$. Therefore $D$ is normal if and only if $D_\pg$ is so. Assume that $(G/K,g_0)$ is Ricci flow diagonal, hence \[ [\Ricci(\ip_t), \Ricci(\ip_0)] = 0, \qquad \forall t\in(T_-,T_+). \] By using that $\Ricci(\ip_t) = (-2ct+1)^{-1}e^{s(t)D_\pg} \Ricci(\ip_0) e^{-s(t)D_\pg}$ (which follows from the proof of Proposition \ref{saBF}), we can rewrite the previous formula as \[ [e^{sD_\pg}\Ricci(\ip_0)e^{-sD_{\pg}}, \Ricci(\ip_0)] = 0, \qquad \forall s\in(-\epsilon,\epsilon). \] Now this implies that $[[D_\pg,\Ricci(\ip_0)],\Ricci(\ip_0)] = 0$, and so $$ 0=\tr{D_\pg[[D_\pg,\Ricci(\ip_0)],\Ricci(\ip_0)]}= -\tr{[D_\pg,\Ricci(\ip_0)]^2}. $$ It follows that $[D_\pg,\Ricci(\ip_0)]=0$, as it equals $[A,\unm(D_\pg+D_\pg^t)]$ and is therefore symmetric; equivalently, $[D_\pg,A]=0$, and thus $D_\pg$ is normal. Hence $D$ is normal as well, and so $D^t\in \Der(\ggo)$ since it is a well-known fact that the transpose of a normal derivation of a metric Lie algebra is again a derivation (see e.g. the proof of \cite[Lemma 4.7]{solvsolitons} or use that $\Aut(\ggo)$ is an algebraic group). Thus $\Ricci(\ip_0)-cI = \unm(D+D^t)_\pg$, with $\unm(D+D^t) \in \Der(\ggo)$, which shows that the semi-algebraic soliton is actually algebraic, concluding the proof. \end{proof}
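The linear-algebra facts used in the above proof are easy to confirm numerically. The following short script (our own illustration, not part of the argument) verifies, for a random $D_\pg$ and $c=-1$, that $[D_\pg,\Ricci(\ip_0)]=[A,\unm(D_\pg+D_\pg^t)]$ is symmetric and that the trace identity holds:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, c = 5, -1.0

D = rng.standard_normal((n, n))   # a random D_p
S = (D + D.T) / 2                 # symmetric part
A = (D - D.T) / 2                 # skew-symmetric part
R = c * np.eye(n) + S             # Ricc(ip_0) = cI + (D_p + D_p^t)/2

comm = lambda X, Y: X @ Y - Y @ X

print(np.allclose(comm(D, R), comm(A, S)))     # True
print(np.allclose(comm(D, R), comm(D, R).T))   # True (symmetric)
lhs = np.trace(D @ comm(comm(D, R), R))
rhs = -np.trace(comm(D, R) @ comm(D, R))
print(np.isclose(lhs, rhs))                    # True
\end{verbatim}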
{ "timestamp": "2012-10-16T02:00:56", "yymm": "1210", "arxiv_id": "1210.3656", "language": "en", "url": "https://arxiv.org/abs/1210.3656" }
\section{Introduction} We consider spherically symmetric motions of an atmosphere governed by the compressible Euler equations: \begin{eqnarray} &&\frac{\partial\rho}{\partial t} +u\frac{\partial \rho}{\partial r}+\rho\frac{\partial u}{\partial r} +\frac{2}{r}\rho u =0, \nonumber \\ &&\rho\Big(\frac{\partial u}{\partial t}+ u\frac{\partial u}{\partial r}\Big)+\frac{\partial P}{\partial r}=-\frac{g_0\rho}{r^2}\quad (R_0\leq r ) \end{eqnarray} and the boundary value condition \begin{equation} \rho u|_{r=R_0}=0. \end{equation} Here $\rho$ is the density, $u$ the velocity, $P$ the pressure. $R_0\ (>0)$ is the radius of the central solid ball, and $g_0=G_0M_0$, $G_0$ being the gravitational constant, $M_0$ the mass of the central ball. The self-gravity of the atmosphere is neglected. In this study we always assume that \begin{equation} P=A\rho^{\gamma}, \end{equation} where $A$ and $\gamma$ are positive constants, and we assume that $1<\gamma \leq 2$. \\ Equilibria of the problem are given by $$\bar{\rho}(r)=\begin{cases} \displaystyle A_1\Big(\frac{1}{r}-\frac{1}{R}\Big)^{\frac{1}{\gamma-1}} & \quad (R_0\leq r<R) \\ 0 & \quad (R\leq r), \end{cases} $$ where $R$ is an arbitrary number such that $R>R_0$ and $$A_1=\Big(\frac{(\gamma-1)g_0}{\gamma A}\Big)^{\frac{1}{\gamma-1}}.$$ {\bf Remark} The total mass $M$ of the equilibrium is given by $$M=4\pi A_1\int_{R_0}^R\Big(\frac{1}{r}-\frac{1}{R}\Big)^{\frac{1}{\gamma-1}}r^2dr.$$ $M$ is an increasing function of $R$. Of course $M\rightarrow 0$ as $R \rightarrow R_0$. But as $R\rightarrow +\infty$, we see $$M\rightarrow 4\pi A_1\int_{R_0}^{\infty}r^{\frac{2\gamma-3}{\gamma-1}}dr =\begin{cases} +\infty & \mbox{if}\ \gamma\geq 4/3 \\ M^*(<\infty) & \mbox{if}\ \gamma <4/3, \end{cases} $$ where $$M^*=\frac{4\pi A_1(\gamma-1)}{4-3\gamma}R_0^{-\frac{4-3\gamma}{\gamma-1}}.$$ Hence if $\gamma \geq 4/3$ there is an equilibrium for any given total mass, but if $\gamma <4/3$ the possible mass has the upper bound $M^*$. Anyway, given the total mass $M$, a conserved quantity, in $(0,+\infty)$ or $(0,M^*)$, the radius $R$, and hence the configuration of the equilibrium, is uniquely determined.\\ Let us fix one of these equilibria. We are interested in motions around this equilibrium. \\ Let us glance at the history of research on this problem. Of course there have been many works on the Cauchy problem for the compressible Euler equations. But there remain gaps when we consider density distributions which contain vacuum regions. As for local-in-time existence of smooth density with compact support, \cite{M1989} treated the problem under the assumption that the initial density is non-negative and the initial value of $$\omega:=\frac{2\sqrt{A\gamma}}{\gamma-1}\rho^{\frac{\gamma-1}{2}}$$ is smooth, too. In terms of the variables $(\omega,u)$, the equations are symmetrizable continuously, including the region of vacuum. Hence the theory of quasi-linear symmetric hyperbolic systems can be applied. The discovery of the variable $\omega$ goes back to \cite{M1986}, \cite{MUK}. However, since $$\omega\propto \Big(\frac{1}{r}-\frac{1}{R}\Big)^{\frac{1}{2}}\sim \mbox{Const.}(R-r)^{\frac{1}{2}} \quad \mbox{as}\ r\rightarrow R-0$$ for equilibria, $\omega$ is not smooth at the boundary $r=R$ with the vacuum. Hence the class of ``tame'' solutions considered in \cite{M1989} cannot cover equilibria. On the other hand, possibly discontinuous weak solutions with compactly supported density can be constructed.
The article \cite{MT} gave local-in-time existence of bounded weak solutions under the assumption that the initial density is bounded and non-negative. The proof by the compensated compactness method is due to \cite{DCL}. Of course the class of weak solutions can cover equilibria, but the concrete structure of such solutions is not so clear. Therefore we wish to construct solutions whose regularity is weaker than that of solutions with smooth $\omega$ but stronger than that of possibly discontinuous weak solutions. The present result is an answer to this wish. More concretely speaking, the solution $(\rho(t,r),u(t,r))$ constructed in this article should be continuous on $0\leq t\leq T,R_0\leq r <\infty$ and there should exist a continuous curve $r=R_F(t), 0\leq t\leq T,$ such that $|R_F(t)-R|\ll 1, \rho(t,r)>0 $ for $ 0\leq t\leq T, R_0\leq r <R_F(t)$ and $\rho(t,r)=0$ for $0\leq t\leq T, R_F(t)\leq r<\infty$. The curve $r=R_F(t)$ is the free boundary at which the density touches the vacuum. It will be shown that the solution satisfies $$\rho(t,r)=C(t)(R_F(t)-r)^{\frac{1}{\gamma-1}}(1+O(R_F(t)-r))$$ as $r \rightarrow R_F(t)-0$. Here $C(t)$ is positive and smooth in $t$. This situation is the so-called ``physical vacuum boundary'' in the terminology of \cite{JM} and \cite{CS}. This concept can be traced back to \cite{L}, \cite{LY}, \cite{Y}. Of course this singularity is just that of equilibria.\\ The major difficulty of the analysis comes from the free boundary touching the vacuum, which can move in time. So it is convenient to introduce the Lagrangian mass coordinate $$m=4\pi\int_{R_0}^r\rho(t,r')r'^2dr',$$ in order to fix the interval of the independent variable. Taking $m$ as the independent variable instead of $r$, the equations turn out to be \begin{eqnarray*} &&\frac{\partial\rho}{\partial t}+4\pi \rho^2 \frac{\partial}{\partial m}(r^2u)=0, \\ &&\frac{\partial u}{\partial t}+ 4\pi r^2\frac{\partial P}{\partial m}=-\frac{g_0}{r^2} \qquad (0<m<M), \end{eqnarray*} where $$r=\Big(R_0^3+\frac{3}{4\pi}\int_0^m\frac{dm}{\rho}\Big)^{1/3}.$$ We note that $$\frac{\partial r}{\partial t}=u, \qquad \frac{\partial r}{\partial m}=\frac{1}{4\pi r^2\rho}.$$ Let us take $\bar{r}=\bar{r}(m)$ as the independent variable instead of $m$, where $\bar{r}(m)$ is the inverse function $\bar{m}^{-1}(m)$ of the function $$\bar{m}:r \mapsto 4\pi \int_{R_0}^r\bar{\rho}(r')r'^2dr'.$$ Then, since $$\frac{\partial }{\partial m}=\frac{1}{4\pi\bar{r}^2\bar{\rho}}\frac{\partial}{\partial\bar{r}}, \qquad \rho=\Big(4\pi r^2\frac{\partial r}{\partial m}\Big)^{-1}=\bar{\rho} \Big(\frac{r^2}{\bar{r}^2}\frac{\partial r}{\partial\bar{r}}\Big)^{-1}, $$ we have a single second-order equation $$ \frac{\partial^2r}{\partial t^2}+ \frac{1}{\bar{\rho}} \frac{r^2}{\bar{r}^2}\frac{\partial}{\partial\bar{r}} \Big(\bar{P}\Big(\frac{r^2}{\bar{r}^2}\frac{\partial r}{\partial\bar{r}}\Big)^{-\gamma}\Big)+ \frac{g_0}{r^2}=0.
$$ The variable $\bar{r}$ runs on the interval $[R_0, R]$ and the boundary condition is $$r|_{\bar{r}=R_0}=R_0.$$ \\ Without loss of generality, we can and shall assume that $$R_0=1,\quad g_0=\frac{1}{\gamma-1}, \quad A=\frac{1}{\gamma}, \quad A_1=1.$$\\ Keeping in mind that the equilibrium satisfies $$\frac{1}{\bar{\rho}}\frac{\partial \bar{P}}{\partial\bar{r}}+\frac{g_0}{\bar{r}^2}=0, $$ we have $$ \frac{\partial^2r}{\partial t^2}- \frac{1}{\bar{\rho}} \frac{r^2}{\bar{r}^2}\frac{\partial}{\partial\bar{r}} \Big(\bar{P}\Big(1- \Big(\frac{r^2}{\bar{r}^2}\frac{\partial r}{\partial\bar{r}}\Big)^{-\gamma}\Big)\Big)+ \frac{1}{\gamma-1}\Big(\frac{1}{r^2}-\frac{r^2}{\bar{r}^4}\Big)=0.$$ Introducing the unknown variable $y$ for perturbation by \begin{equation} r=\bar{r}(1+y), \end{equation} we can write the equation as \begin{equation} \frac{\partial^2y}{\partial t^2}- \frac{1}{\rho r} (1+y)^2\frac{\partial}{\partial r}\Big(PG\Big(y, r\frac{\partial y}{\partial r}\Big)\Big) -\frac{1}{\gamma-1}\frac{1}{r^3}H(y)=0, \end{equation} where \begin{eqnarray*} G(y,v)&:=&1-(1+y)^{-2\gamma}(1+y+v)^{-\gamma}=\gamma(3y+v)+[y,v]_2, \\ H(y)&:=&(1+y)^2-\frac{1}{(1+y)^2}=4y+[y]_2 \end{eqnarray*} and we have used the abbreviations $r, \rho, P$ for $\bar{r}, \bar{\rho}, \bar{P}$.\\ {\bf Notational Remark} Here and hereafter $[X]_q$ denotes a convergent power series, or an analytic function given by the series, of the form $\sum_{j\geq q}a_jX^j$, and $[X,Y]_q$ stands for a convergent double power series of the form $\sum_{j+k\geq q}a_{jk}X^jY^k$.\\ We are going to study the equation (5) on $1<r<R$ with the boundary condition $$y|_{r=1}=0.$$ Of course $y$ and $\displaystyle r\frac{\partial y}{\partial r}$ will be confined to $$|y|+\Big|r\frac{\partial y}{\partial r}\Big| <1.$$ Here let us propose the main goal of this study roughly. Let us fix an arbitrarily large positive number $T$. Then we have \\ {\bf Main Goal } {\it For sufficiently small $\varepsilon>0$ there is a solution $y=y(t,r;\varepsilon)$ of (5) in $C^{\infty}([0,T]\times[1,R])$ such that $$y(t,r;\varepsilon)=\varepsilon y_1(t,r)+O(\varepsilon^2).$$ The same estimates $O(\varepsilon^2)$ hold between the higher order derivatives of $y$ and $\varepsilon y_1$.}\\ Here $y_1(t,r)$ is a time-periodic function specified in Section 2, which is of the form $$y_1(t,r)=\sin(\sqrt{\lambda}t+\theta_0)\cdot \tilde{\Phi}(r),$$ where $\lambda$ is a positive number, $\theta_0$ a constant, and $\tilde{\Phi}(r)$ is an analytic function of $1\leq r\leq R$.\\ Once the solution $y(t,r;\varepsilon)$ is given, then the corresponding motion of gas particles can be expressed by the Lagrangian coordinate as \begin{eqnarray*} r(t,m)&=&\bar{r}(m)(1+y(t,\bar{r}(m);\varepsilon)) \\ &=&\bar{r}(m)(1+\varepsilon y_1(t,\bar{r}(m))+O(\varepsilon^2)). 
\end{eqnarray*} The curve $r=R_F(t)$ of the free vacuum boundary is given by $$R_F(t)=r(t,M)=R+\varepsilon R\sin(\sqrt{\lambda}t+\theta_0)\tilde{\Phi}(R)+O(\varepsilon^2).$$ {\it The free boundary $R_F(t)$ oscillates around $R$, approximately with time period $2\pi/\sqrt{\lambda}$.} The solution $(\rho,u)$ of the original problem (1)(2) is given by $$\rho=\bar{\rho}(\bar{r})\Big((1+y)^2\Big(1+\bar{r}\frac{\partial y}{\partial\bar{r}}\Big)\Big)^{-1}, \qquad u=\bar{r}\frac{\partial y}{\partial t} $$ implicitly by \begin{eqnarray*} \bar{r}&=&\bar{r}(m), \qquad y=y(t,\bar{r}(m);\varepsilon), \\ \frac{\partial y}{\partial\bar{r}}&=& \partial_ry(t,\bar{r}(m);\varepsilon), \qquad \frac{\partial y}{\partial t}= \partial_ty(t,\bar{r}(m);\varepsilon), \end{eqnarray*} where $m=m(t,r)$ for $1\leq r\leq R_F(t)$. Here $m(t,r)$ is given as the inverse function $(f_{(t)})^{-1}(r)$ of the function $$f_{(t)}: m\mapsto r(t,m)=\bar{r}(m)(1+y(t,\bar{r}(m);\varepsilon)).$$ We note that $$R_F(t)-r(t,m)=R(1+y(t,R))-\bar{r}(m)(1+y(t,\bar{r}(m)))$$ implies $$\frac{1}{\kappa}(R-\bar{r})\leq R_F(t)-r\leq \kappa (R-\bar{r}) $$ with $0<\kappa-1\ll 1$, since $|y|+|\partial_r y|\leq \varepsilon C$. Therefore $$y(t,\bar{r}(m))=y(t,R)+O(R_F(t)-r),$$ and so on. Hence we get the ``physical vacuum boundary''. (See the Remark to Theorem 1.)\\ We shall give a precise statement of the main result in Section 3 and prove it in Sections 4 and 5. We shall apply the Nash-Moser theory, for the following reason. The equation (5) looks as if it were a second-order quasilinear hyperbolic equation, and one might expect that the usual iteration method in suitable Sobolev spaces, e.g., $H^s$ or the like, could be used. But this is not the case. Actually the linear part of the equation is essentially the d'Alembertian operator \begin{eqnarray*} \frac{\partial^2}{\partial t^2}-\triangle&=& \frac{\partial^2}{\partial t^2}-x\frac{\partial^2}{\partial x^2}-\frac{N}{2}\frac{\partial}{\partial x}\\ &=&\frac{\partial^2}{\partial t^2}- \frac{\partial^2}{\partial\xi^2}-\frac{N-1}{\xi}\frac{\partial}{\partial\xi} \end{eqnarray*} in the variables $x, \xi$ such that $$\frac{R-r}{R} \sim x=\frac{\xi^2}{4}, $$ and the nonlinear terms are smooth functions of $y$ and $\partial y/\partial r$. (See (13) and (15).) Here the term $\partial y/\partial r$ apparently looks like a first-order derivative. If that were the case, the usual Picard iteration applied to the wave equation would work, since the inverse of the d'Alembertian recovers regularity up to one order of derivative; roughly speaking, it maps $C^1([0,T],L^2)$ to $C^1([0,T],H^1)$. But in fact the apparently first-order derivative $\partial y/\partial r$ behaves like $$r\frac{\partial y}{\partial r} \sim \frac{\partial y}{\partial x}\propto -\frac{1}{\xi}\frac{\partial y}{\partial \xi} \sim -\frac{\partial^2y}{\partial\xi^2}$$ near $r=R$, or $\xi=0$, that is, like a second-order derivative. So, since the inverse of the d'Alembertian recovers only one order of derivative, the usual iteration for nonlinear wave equations would suffer from a loss of regularity at the free vacuum boundary $r=R$. This is why we apply the Nash-Moser theory to our problem.
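{\bf Remark (numerical check)} The following is a small SymPy sketch, offered only as an illustration and used nowhere in the proofs, which confirms the leading terms of the expansions $G(y,v)=\gamma(3y+v)+[y,v]_2$ and $H(y)=4y+[y]_2$ introduced above.
\begin{verbatim}
# Illustration only: check the leading terms of G(y, v) and H(y).
import sympy as sp

y, v, gamma = sp.symbols('y v gamma', positive=True)

G = 1 - (1 + y)**(-2*gamma) * (1 + y + v)**(-gamma)
H = (1 + y)**2 - (1 + y)**(-2)

# First-order Taylor polynomial of G at (y, v) = (0, 0):
G1 = (G.subs({y: 0, v: 0})
      + sp.diff(G, y).subs({y: 0, v: 0})*y
      + sp.diff(G, v).subs({y: 0, v: 0})*v)
print(sp.simplify(G1 - gamma*(3*y + v)))   # expected: 0
print(sp.series(H, y, 0, 2).removeO())     # expected: 4*y
\end{verbatim}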
\section{Analysis of the linearized problem} The linearization of the equation (5) is clearly \begin{equation} \frac{\partial^2y}{\partial t^2}+\mathcal{L}y=0, \end{equation} where \begin{eqnarray} \mathcal{L}y&:=&-\frac{1}{\rho r}\frac{d}{dr} \Big(P\gamma\Big(3y+r\frac{dy}{dr}\Big)\Big)-\frac{1}{\gamma-1}\frac{1}{r^3}(4y) \nonumber \\ &=&-\Big(\frac{1}{r}-\frac{1}{R}\Big)\frac{d^2y}{dr^2}+ \Big(-\frac{4}{r}\Big(\frac{1}{r}-\frac{1}{R}\Big)+\frac{\gamma}{\gamma-1}\frac{1}{r^2}\Big)\frac{dy}{dr}+ \frac{3\gamma-4}{\gamma-1}\frac{y}{r^3}. \end{eqnarray}\\ In order to analyze the eigenvalue problem $\mathcal{L} y=\lambda y$, we introduce the independent variable $z$ by \begin{equation} z=\frac{R-r}{R} \end{equation} and the parameter $N$ by \begin{equation} \frac{\gamma}{\gamma-1}=\frac{N}{2} \quad \mbox{or} \quad \gamma=1+\frac{2}{N-2}. \end{equation} Then we can write \begin{equation} R^3\mathcal{L}y=-\frac{z}{1-z}\frac{d^2y}{dz^2} -\frac{\frac{N}{2}-4z}{(1-z)^2}\frac{dy}{dz}+\frac{8-N}{2}\frac{1}{(1-z)^3}y. \end{equation} The variable $z$ runs over the interval $[0, 1-1/R]$, the boundary $z=0$ corresponds to the free boundary touching the vacuum, and the boundary condition at $z=1-1/R$ is the Dirichlet condition $y=0$. Although the boundary $z=1-1/R$ is regular, the boundary $z=0$ is singular. In order to analyze the singularity, we transform the equation $\mathcal{L}y=\lambda y$, which can be written as $$ -z\frac{d^2y}{dz^2}-\Big(\frac{N}{2}\frac{1}{1-z}-\frac{4z}{1-z}\Big)\frac{dy}{dz} +\frac{8-N}{2}\frac{y}{(1-z)^2}=\lambda R^3(1-z)y, $$ to an equation of the formally self-adjoint form $$-\frac{d}{dz}p(z)\frac{dy}{dz}+q(z)y=\lambda R^3\mu(z)y.$$ This can be done by putting \begin{eqnarray*} p&=&z^{\frac{N}{2}}(1-z)^{\frac{8-N}{2}}, \\ q&=&\frac{8-N}{2}z^{\frac{N-2}{2}}(1-z)^{\frac{4-N}{2}}, \\ \mu&=&z^{\frac{N-2}{2}}(1-z)^{\frac{10-N}{2}}. \end{eqnarray*} Using the Liouville transformation, we convert the equation $$-\frac{d}{dz}p(z)\frac{dy}{dz}+q(z)y=\lambda R^3\mu(z)y+f$$ to the standard form $$-\frac{d^2\eta}{d\xi^2}+Q\eta =\lambda R^3\eta +\hat{f}.$$ This can be done by putting \begin{eqnarray} \xi&=&\int_0^z\sqrt{\frac{\mu}{p}}dz=\int_0^z\sqrt{\frac{1-\zeta}{\zeta}}d\zeta =\sqrt{z(1-z)}+\tan^{-1}\sqrt{\frac{z}{1-z}}, \\ \eta&=&(\mu p)^{1/4}y=z^{\frac{N-1}{4}}(1-z)^{\frac{9-N}{4}}y, \nonumber \\ \hat{f}&=&p^{1/4}\mu^{-3/4}f=z^{\frac{3-N}{4}}(1-z)^{\frac{N-11}{4}}f, \nonumber \end{eqnarray} and \begin{eqnarray*} Q&=&\frac{p}{\mu}\Big(\frac{q}{p}+ \frac{1}{4}\Big(\frac{p'}{p}+\frac{\mu'}{\mu}\Big)' -\frac{1}{16}\Big(\frac{p'}{p}+\frac{\mu'}{\mu}\Big)^2 +\frac{1}{4}\frac{p'}{p}\Big(\frac{p'}{p}+\frac{\mu'}{\mu}\Big)\Big) \\ &=&\frac{1}{z(1-z)^3}\Big(\frac{(N-1)(N-3)}{16}+\frac{7-2N}{2}z+2z^2\Big). \end{eqnarray*} Putting $$\xi_R :=\int_0^{1-\frac{1}{R}}\sqrt{\frac{1-z}{z}}dz, $$ we see that the variable $\xi$ runs over the interval $[0, \xi_R]$. Since $z\sim \displaystyle \frac{\xi^2}{4}$ as $ \xi \rightarrow 0$, we see $$Q \sim \frac{(N-1)(N-3)}{4}\frac{1}{\xi^2}$$ as $\xi \rightarrow 0$. But $\gamma <2$ implies $N>4$ and $\displaystyle \frac{(N-1)(N-3)}{4}>\frac{3}{4}$. Hence the boundary $\xi=0$ is of the limit point type. See, e.g., \cite{Reed}, p.159, Theorem X.10. The exceptional case $\gamma=2$, or $N=4$, will be considered separately. In any case the potential $Q$ is bounded from below on $0<\xi<\xi_R$ provided that $N>3$.
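{\bf Remark (numerical check)} The closed form of $Q$ obtained above can be verified symbolically. The following SymPy sketch, which is an illustration and no part of the argument, recomputes $Q$ from $p$, $q$, $\mu$ and compares it with the displayed expression.
\begin{verbatim}
# Illustration only: verify the formula for the potential Q.
import sympy as sp

z, N = sp.symbols('z N', positive=True)

p  = z**(N/2) * (1 - z)**((8 - N)/2)
q  = (8 - N)/2 * z**((N - 2)/2) * (1 - z)**((4 - N)/2)
mu = z**((N - 2)/2) * (1 - z)**((10 - N)/2)

w = sp.diff(p, z)/p + sp.diff(mu, z)/mu            # p'/p + mu'/mu
Q = (p/mu)*(q/p + sp.Rational(1, 4)*sp.diff(w, z)
            - sp.Rational(1, 16)*w**2
            + sp.Rational(1, 4)*(sp.diff(p, z)/p)*w)

target = ((N - 1)*(N - 3)/16 + (7 - 2*N)/2*z + 2*z**2)/(z*(1 - z)**3)
print(sp.simplify(Q - target))   # expected: 0 (may take a few seconds)
\end{verbatim}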
Thus we have \begin{Proposition} The operator $T_0, \mathcal{D}(T_0)=C_0^{\infty}(0,\xi_R), T_0\eta= -\eta_{\xi\xi}+Q\eta$, in $L^2(0,\xi_R)$ has the Friedrichs extension $T$, a self-adjoint operator whose spectrum consists of simple eigenvalues $\lambda_1R^3<\lambda_2R^3<\cdots<\lambda_nR^3<\cdots\rightarrow +\infty.$ \end{Proposition} In other words, { \it the operator $S_0, \mathcal{D}(S_0)=C_0^{\infty}(0, 1-\frac{1}{R}), S_0y=\mathcal{L}y$ in $$\mathfrak{X}:=L^2((0,1-\frac{1}{R}), \mu dz(=z^{\frac{N-2}{2}}(1-z)^{\frac{10-N}{2}}dz)),$$ has the Friedrichs extension $S$, a self-adjoint operator with the eigenvalues $(\lambda_n)_n$ }. We note that the domain of $S$ is \begin{eqnarray*} \mathcal{D}(S)&=&\{ y \in \mathfrak{X} \ | \ \exists \phi_n \in C_0^{\infty}(0, 1-\frac{1}{R}) \\ &&\mbox{such that}\ \phi_n\rightarrow y \ \mbox{in} \ \mathfrak{X} \ \mbox{and}\ \mathfrak{Q}[\phi_m-\phi_n]\rightarrow 0 \ \mbox{as}\ m,n\rightarrow \infty, \\ &&\mbox{and}\ \mathcal{L}y\in \mathfrak{X} \ \mbox{in the distribution sense} \}. \end{eqnarray*} Here \begin{eqnarray*} \mathfrak{Q}[\phi]:&=&\int_0^{1-\frac{1}{R}}\Big|\frac{d\phi}{dz}\Big|^2z^{\frac{N}{2}}(1-z)^{\frac{8-N}{2}}dz \\ &=&\int_0^{1-\frac{1}{R}}\frac{z}{1-z}\Big|\frac{d\phi}{dz}\Big|^2\mu(z)dz. \end{eqnarray*}\\ Moreover we have \begin{Proposition} If $N\leq 8$ (or $\gamma\geq 4/3$), the least eigenvalue $\lambda_1$ is positive. \end{Proposition} {\bf Proof} Suppose $N\leq 8$. Clearly $y \equiv 1$ satisfies $$-\frac{d}{dz}p\frac{dy}{dz}+qy=q=\frac{8-N}{2}z^{\frac{N-2}{2}}(1-z)^{\frac{4-N}{2}}.$$ Therefore the corresponding $\eta_1(\xi)$ given by $$\eta_1=z^{\frac{N-1}{4}}(1-z)^{\frac{9-N}{4}}$$ satisfies $$-\frac{d^2\eta_1}{d\xi^2}+Q\eta_1=\hat{q}=\frac{8-N}{2}z^{\frac{N-1}{4}}(1-z)^{-\frac{N+3}{4}}.$$ It is easy to see that $\eta_1=\displaystyle \frac{d\eta_1}{d\xi}=0$ at $\xi=0$. Let $\phi_1(\xi)$ be the eigenfunction of $-d^2/d\xi^2 +Q$ associated with the least eigenvalue $\lambda_1$. We can assume that $\phi_1(\xi)>0$ for $0<\xi<\xi_R, \phi_1(\xi_R)=0$, and $\displaystyle \frac{d\phi_1}{d\xi}<0$ at $\xi=\xi_R$. Then integration by parts gives \begin{eqnarray*} \lambda_1\int_0^{\xi_R}\phi_1\eta_1d\xi &=& \int_0^{\xi_R}\Big(-\frac{d^2\phi_1}{d\xi^2}+Q\phi_1\Big)\eta_1d\xi \\ &=&-\frac{d\phi_1}{d\xi}\eta_1\Big|_{\xi=\xi_R}+ \int_0^{\xi_R}\Big(\frac{d\phi_1}{d\xi}\frac{d\eta_1}{d\xi}+Q\phi_1\eta_1\Big)d\xi \\ &>&\int_0^{\xi_R}\Big(\frac{d\phi_1}{d\xi}\frac{d\eta_1}{d\xi}+Q\phi_1\eta_1\Big)d\xi \\ &=&\int_0^{\xi_R}\phi_1\Big(-\frac{d^2\eta_1}{d\xi^2}+Q\eta_1\Big)d\xi \\ &=&\int_0^{\xi_R}\phi_1\hat{q}d\xi \geq 0. \end{eqnarray*} $\blacksquare$ {\bf Remark} When $N=8$, $\eta_1$ satisfies $\displaystyle -\frac{d^2\eta}{d\xi^2}+Q\eta=0$, but does not satisfy the boundary condition $\eta|_{\xi=\xi_R}=0$. Hence it is not an eigenfunction with zero eigenvalue, and $\lambda_1>0$ even if $N=8$.\\ For the convenience of the further analysis, let us rewrite the linear part $\mathcal{L}$ by introducing a new variable $$\tilde{x}=\frac{\xi^2}{4}=\frac{1}{4}\Big(\sqrt{z(1-z)}+\tan^{-1}\sqrt{\frac{z}{1-z}}\Big)^2. $$ Clearly $\tilde{x}=z+[z]_2$, the change of variables $z \mapsto \tilde{x}$ is analytic on $0\leq z <1$, and its inverse $\tilde{x} \mapsto z$ is analytic on $0\leq \tilde{x} <\tilde{x}_{\infty}:=\pi^2/16$.
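{\bf Remark (numerical check)} Propositions 1 and 2 can be illustrated by a rough finite-difference computation of the least eigenvalue. In the following sketch (no part of the proofs; the values $R=2$, $N=6$ are arbitrary sample choices, and the Dirichlet condition imposed at $\xi=0$ is only a crude stand-in for the limit-point behaviour there) the smallest eigenvalue indeed comes out positive.
\begin{verbatim}
# Illustration only: least eigenvalue of -d^2/dxi^2 + Q on (0, xi_R).
import numpy as np
from scipy.linalg import eigh_tridiagonal
from scipy.optimize import brentq

R, N = 2.0, 6.0
zR = 1 - 1/R
xi_R = np.sqrt(zR*(1 - zR)) + np.arctan(np.sqrt(zR/(1 - zR)))

M = 4000
h = xi_R / M
xi = h*np.arange(1, M)          # interior grid points

def z_of_xi(s):                 # invert xi(z) numerically
    return brentq(lambda z: np.sqrt(z*(1 - z))
                  + np.arctan(np.sqrt(z/(1 - z))) - s, 1e-14, 1 - 1e-14)

z = np.array([z_of_xi(s) for s in xi])
Q = ((N - 1)*(N - 3)/16 + (7 - 2*N)/2*z + 2*z**2)/(z*(1 - z)**3)

d = 2/h**2 + Q                  # diagonal of the discretized -d^2/dxi^2 + Q
e = -np.ones(M - 2)/h**2        # off-diagonal entries
lam1R3 = eigh_tridiagonal(d, e, select='i', select_range=(0, 0))[0][0]
print('lambda_1 ~', lam1R3/R**3)  # positive, as Proposition 2 predicts
\end{verbatim}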
Since \begin{eqnarray*} \frac{d}{dz}&=&\sqrt{\frac{\tilde{x}}{z}}\sqrt{1-z}\frac{d}{d\tilde{x}}, \\ \frac{d^2}{dz^2}&=&\frac{1-z}{z}\Big( \tilde{x}\frac{d^2}{d\tilde{x}^2}+ \frac{1}{2}\Big(1-\sqrt{\frac{\tilde{x}}{z}}\frac{1}{(1-z)\sqrt{1-z}}\Big)\frac{d}{d\tilde{x}}\Big), \\ \sqrt{\frac{\tilde{x}}{z}}&=&1+[\tilde{x}]_1, \end{eqnarray*} we can write $$R^3\mathcal{L}y= -\Big(\tilde{x}\frac{d^2y}{d\tilde{x}^2}+\frac{N}{2}\frac{dy}{d\tilde{x}}\Big)+ \ell_1(\tilde{x})\tilde{x}\frac{dy}{d\tilde{x}}+ \ell_0(\tilde{x})y, $$ where $\ell_1(\tilde{x})$ and $\ell_0(\tilde{x})$ are analytic on $0\leq \tilde{x}<\tilde{x}_{\infty}$. Putting \begin{eqnarray} x&=&R^3\tilde{x}=\frac{R^3\xi^2}{4}=\frac{R^3}{4}\Big(\sqrt{z(1-z)}+\tan^{-1}\sqrt{\frac{z}{1-z}}\Big)^2\nonumber \\ &=&\frac{R^3}{4}\Big(\frac{\sqrt{(R-r)r}}{R}+\tan^{-1}\sqrt{\frac{R-r}{r}}\Big)^2, \end{eqnarray} we can write \begin{equation} \mathcal{L}y=-\triangle y+L_1(x)x\frac{dy}{dx}+L_0(x)y, \end{equation} where $$\triangle =x\frac{d^2}{dx^2}+\frac{N}{2}\frac{d}{dx}$$ and $L_1(x)$ and $L_0(x)$ are analytic on $0\leq x < x_{\infty}:=R^3\tilde{x}_{\infty}=\pi^2R^3/16$. While $r$ runs over the interval $[1,R]$, $x$ runs over $[0, x_R]$, where $x_R:=R^3\xi_R^2/4\ (< x_{\infty})$; note that $r=R$ corresponds to $x=0$ and $r=1$ to $x=x_R$. The Dirichlet condition at the regular boundary is $\displaystyle y|_{x=x_R}=0$.\\ {\bf Remark} Since $x=R^3\xi^2/4$, we have $$\triangle=x\frac{d^2}{dx^2}+\frac{N}{2}\frac{d}{dx}= \frac{1}{R^3}\Big(\frac{d^2}{d\xi^2}+\frac{N-1}{\xi}\frac{d}{d\xi}\Big).$$ Thus, up to the constant factor $R^{-3}$, $\triangle$ is the radial part of the Laplacian of the $N$-dimensional Euclidean space $\mathbb{R}^N$, provided that $N$ is an integer. But we do not assume that $N$ is an integer in this study.\\ Let us fix an eigenvalue $\lambda=\lambda_n$ and an associated eigenfunction $\Phi(x)$ of $\mathcal{L}$. Then \begin{equation} y_1(t,x)=\sin(\sqrt{\lambda}t+\theta_0)\Phi(x) \end{equation} is a time-periodic solution of the linearized problem $$\frac{\partial^2y}{\partial t^2}+\mathcal{L}y=0, \qquad y|_{x=x_R}=0. $$ Moreover we claim that $\Phi(x)$ is an analytic function on $|x|<x_0$ for some $x_0>0$. To verify it, we use the following \begin{Lemma} We consider the differential equation $$x\frac{d^2y}{dx^2}+b(x)\frac{dy}{dx}+c(x)y=0,$$ where $$b(x)=\beta+[x]_1, \qquad c(x)=[x]_0, $$ and we assume that $\beta \geq 2$. Then 1) there is a solution $y_1$ of the form $$y_1=1+[x]_1, $$ and 2) there is a solution $y_2$ such that $$y_2=x^{-\beta+1}(1+[x]_1) $$ provided that $\beta \not\in \mathbb{N}$, or $$y_2=x^{-\beta+1}(1+[x]_1)+hy_1\log x$$ provided that $\beta\in\mathbb{N}$. Here $h$ is a constant which can vanish in some cases. \end{Lemma} For a proof, see \cite{Coddington}, Chapter 4. Applying this lemma with $\beta=N/2$ to the equation $$ x\frac{d^2y}{dx^2}+\Big(\frac{N}{2}-L_1(x)x\Big)\frac{dy}{dx} +(\lambda-L_0(x))y=0, $$ we get the assertion, since $y_2\sim \displaystyle x^{-\frac{N-2}{2}}$ cannot belong to $\mathfrak{X}=L^2(x^{\frac{N-2}{2}}dx)$ for $N\geq 4$, even if $N=4$, which was the exceptional case in the preceding discussion of the limit point type.
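{\bf Remark (numerical check)} The identity $\triangle=R^{-3}\big(d^2/d\xi^2+\frac{N-1}{\xi}\,d/d\xi\big)$ stated in the Remark above can be checked symbolically. The following SymPy sketch (illustration only) tests it on the generic power $f=x^a$ with $x=R^3\xi^2/4$; since the operator is linear and such monomials with generic exponent $a$ separate the coefficient functions, this already confirms the identity.
\begin{verbatim}
# Illustration only: x f'' + (N/2) f' = R^{-3}(f_{xixi} + (N-1)/xi f_xi).
import sympy as sp

xi, R, N, a = sp.symbols('xi R N a', positive=True)
x = R**3*xi**2/4

f = x**a                                          # f as a function of xi
lhs = a*(a - 1)*x**(a - 1) + (N/2)*a*x**(a - 1)   # x f'' + (N/2) f' at x(xi)
rhs = (sp.diff(f, xi, 2) + (N - 1)/xi*sp.diff(f, xi))/R**3
print(sp.simplify(lhs - rhs))                     # expected: 0
\end{verbatim}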
\section{Statement of the main result} We rewrite the equation (5) by using the linearized part $\mathcal{L}$ defined by (7) as \begin{equation} \frac{\partial^2y}{\partial t^2}+ \Big(1+G_I\Big(y,r\frac{\partial y}{\partial r}\Big)\Big)\mathcal{L}y+ G_{II}\Big(r,y,r\frac{\partial y}{\partial r}\Big)=0, \end{equation} where \begin{eqnarray*} G_I(y,v)&=&(1+y)^2\Big(1+\frac{1}{\gamma}\partial_vG_2(y,v)\Big)-1, \\ G_{II}(r,y,v)&=&\frac{P}{\rho r^2}G_{II0}(y,v)+ \frac{1}{\gamma-1}\frac{1}{r^3}G_{II1}(y,v), \\ G_{II0}(y,v)&=&(1+y)^2(3\partial_vG_2-\partial_yG_2)v, \\ G_{II1}(y,v)&=&(1+y)^2\Big( \frac{1}{\gamma}(\partial_vG_2)((4-3\gamma)y-\gamma v)+G_2\Big) -H+4y(1+y)^2. \end{eqnarray*} Here \begin{eqnarray*} G_2&:=&G-\gamma(3y+v)=[y,v]_2, \\ \partial_vG_2&:=&\frac{\partial}{\partial v}G_2=\frac{\partial G}{\partial v}-\gamma=[y,v]_1,\\ \partial_yG_2&:=&\frac{\partial}{\partial y}G_2=\frac{\partial G}{\partial y}-3\gamma=[y,v]_1. \end{eqnarray*}\\ We have fixed a solution $y_1$ of the linearized equation $y_{tt}+\mathcal{L}y=0$ (see (14)), and we seek a solution $y$ of (5) or (15) of the form $$y=\varepsilon y_1+\varepsilon w,$$ where $\varepsilon$ is a small positive parameter. \\ {\bf Remark} The following discussion is valid if we take $$y_1=\sum_{k=1}^Kc_k\sin(\sqrt{\lambda_{n_k}}t+\theta_k)\cdot \Phi_k(x), \eqno(14)'$$ where $\Phi_k$ is an eigenfunction of $\mathcal{L}$ associated with the eigenvalue $\lambda_{n_k}$ and $c_k$ and $\theta_k$ are constants for $k=1,\cdots, K$.\\ Then the equation which governs $w$ turns out to be \begin{equation} \frac{\partial^2w}{\partial t^2}+ \Big(1+\varepsilon a(t,r,w,r\frac{\partial w}{\partial r},\varepsilon)\Big) \mathcal{L}w+ \varepsilon b(t,r,w,r\frac{\partial w}{\partial r},\varepsilon)= \varepsilon c(t,r,\varepsilon), \end{equation} where \begin{eqnarray*} a(t,r,w, \Omega,\varepsilon)&=&\varepsilon^{-1}G_I(\varepsilon(y_1+w),\varepsilon(v_1+\Omega)), \\ b(t,r,w,\Omega,\varepsilon)&=&\varepsilon^{-1}G_I(\varepsilon(y_1+w),\varepsilon(v_1+\Omega))\mathcal{L} y_1+\varepsilon^{-2}G_{II} (r,\varepsilon(y_1+w),\varepsilon(v_1+\Omega)) \\ &&-\varepsilon^{-1}G_I(\varepsilon y_1, \varepsilon v_1)\mathcal{L}y_1 -\varepsilon^{-2}G_{II}(r,\varepsilon y_1,\varepsilon v_1), \\ c(t,r,\varepsilon)&=&\varepsilon^{-1}G_I(\varepsilon y_1, \varepsilon v_1)\mathcal{L}y_1 +\varepsilon^{-2}G_{II}(r,\varepsilon y_1,\varepsilon v_1). \end{eqnarray*} Here $v_1$ stands for $r\partial y_1/\partial r$.\\ The main result of this study can be stated as follows: \begin{Theorem} For any $T>0$, there is a sufficiently small positive $\varepsilon_0(T)$ such that, for $0<\varepsilon\leq\varepsilon_0(T)$, there is a solution $w$ of (16) such that $w \in C^{\infty}([0,T]\times [1,R])$ and $$\sup_{j+k\leq n}\Big\| \Big(\frac{\partial}{\partial t}\Big)^j\Big(\frac{\partial}{\partial r}\Big)^k w\Big\|_{L^{\infty}([0,T]\times[1,R])}\leq C_n\varepsilon; $$ in other words, there is a solution $y \in C^{\infty}([0,T]\times[1,R])$ of (5) or (15) of the form $$y(t,r)=\varepsilon y_1(t,r)+O(\varepsilon^2), $$ that is, a motion which can be expressed in the Lagrangian coordinate as $$r(t,m)=\bar{r}(m)(1+\varepsilon y_1(t,\bar{r}(m))+O(\varepsilon^2))$$ for $0\leq t\leq T, 0\leq m\leq M$.
\end{Theorem} {\bf Remark} The corresponding density distribution $\rho=\rho(t,r)$, where $r$ is the original Euler coordinate, satisfies $$\rho(t,r)>0\ \mbox{for}\ 1\leq r<R_F(t), \qquad \rho(t,r)=0\ \mbox{for}\ R_F(t)\leq r, $$ where $$R_F(t):=r(t,M)=R+\varepsilon R\sin(\sqrt{\lambda}t+\theta_0)\Phi(0)+O(\varepsilon^2).$$ Since $y(t,r)$ is smooth on $1\leq r\leq R$, we have $$\rho(t,r)=C(t)(R_F(t)-r)^{\frac{1}{\gamma-1}}(1+O(R_F(t)-r)) $$ as $r\rightarrow R_F(t)-0$. Here $C(t)$ is positive and smooth in $t$.\\ Our task is to find the inverse image $\mathfrak{P}^{-1}(\varepsilon c)$ of the nonlinear mapping $\mathfrak{P}$ defined by \begin{equation} \mathfrak{P}(w)=\frac{\partial^2w}{\partial t^2}+ (1+\varepsilon a)\mathcal{L}w+\varepsilon b. \end{equation} Let us note that $\mathfrak{P}(0)=0$. This task, which will be done by applying the Nash-Moser theorem, will require a certain property of the derivative of $\mathfrak{P}$: \begin{equation} D\mathfrak{P}(w)h= \frac{\partial^2h}{\partial t^2}+(1+\varepsilon a_1) \mathcal{L}h+ \varepsilon a_{21}r\frac{\partial h}{\partial r}+\varepsilon a_{20}h, \end{equation} where \begin{eqnarray*} a_1&=&a(t,r,w,r\frac{\partial w}{\partial r}, \varepsilon), \\ a_{20}&=&\frac{\partial a}{\partial w}\mathcal{L}w +\frac{\partial b}{\partial w}, \\ a_{21}&=&\frac{\partial a}{\partial\Omega}\mathcal{L}w+\frac{\partial b}{\partial \Omega}. \end{eqnarray*} Here $\Omega$ is the dummy variable for $\displaystyle r\frac{\partial w}{\partial r}$. The following observation will play a crucial role in the energy estimates later. \begin{Lemma} We have $$a_{21}=\frac{\gamma P}{\rho} (1+y)^{-2\gamma+2}(1+y+v)^{-\gamma-2}\Big( (\gamma+1)\frac{\partial^2Y}{\partial r^2}+ \frac{4\gamma}{r}\frac{\partial Y}{\partial r}+ \frac{2\varepsilon (\gamma-1)}{1+y}\Big(\frac{\partial Y}{\partial r}\Big)^2\Big), $$ where $$y=\varepsilon(y_1+w),\qquad v=r\frac{\partial y}{\partial r}, \qquad Y=y_1+w. $$ \end{Lemma} {\bf Proof} It is easy to see that \begin{eqnarray*} a_{21}&=&(\partial_vG_I)\mathcal{L}Y+\varepsilon^{-1}\partial_vG_{II} \\ &=&(\partial_vG_I)\Big( -\frac{\gamma P}{\rho r}(3Y+V)'\Big)+\varepsilon^{-1}\frac{P}{\rho r^2}\partial_vG_{II0}+ \frac{1}{\gamma-1}\frac{1}{r^3}[U], \end{eqnarray*} where $$[U]=\gamma(\partial_vG_I)(3Y+V)+ \partial_vG_I(-4Y)+ \varepsilon^{-1}\partial_vG_{II1}. $$ Using $$\partial_vG_I=(1+y)^2\frac{1}{\gamma}\partial_v^2G_2, $$ we can show that $[U]=0$. Then a direct calculation leads us to the conclusion. $\blacksquare$ \section{Proof of the main result} We use the variable $x$ defined by (12) instead of $r$. We note that \begin{eqnarray*} x&=&R^2(R-r)+[R-r]_2, \\ \frac{\partial}{\partial r}&=&-R^2(1+[x]_1)\frac{\partial}{\partial x}. \end{eqnarray*} Therefore a function which is infinitely continuously differentiable in $r$ on $1\leq r\leq R$ is also so as a function of $x$ on $0\leq x\leq x_R$.\\ We are going to apply the Nash-Moser theorem formulated by R. Hamilton (\cite{Hamilton}, p.171, III.1.1.1):\\ {\it Let $\mathfrak{E}_0$ and $\mathfrak{E}$ be tame spaces, $U$ an open subset of $\mathfrak{E}_0$ and $\mathfrak{P}: U\rightarrow \mathfrak{E}$ a smooth tame map. Suppose that the equation for the derivative $D\mathfrak{P}(w)h=g$ has a unique solution $h=V\mathfrak{P}(w)g$ for all $w$ in $U$ and all $g$, and that the family of inverses $V\mathfrak{P}: U\times \mathfrak{E} \rightarrow \mathfrak{E}_0$ is a smooth tame map.
Then $\mathfrak{P}$ is locally invertible.}\\ In order to apply the Nash-Moser theorem, we consider the spaces of functions of $t$ and $x$: \begin{eqnarray*} \mathfrak{E}&:=& \{ y\in C^{\infty}([-2\tau_1,T]\times[0,x_R])\quad | \quad y(t,x)=0 \quad\mbox{for}\quad -2\tau_1\leq t\leq -\tau_1 \}, \\ \mathfrak{E}_0&:=&\{ w \in \mathfrak{E}\ |\quad w|_{x=x_R}=0\}. \end{eqnarray*} Here $\tau_1$ is a positive number. Let $U$ be the set of all functions $w$ in $\mathfrak{E}_0$ such that $|w|+|\partial w/\partial x| <1$ and suppose that $|\varepsilon|\leq \varepsilon_1$, $\varepsilon_1$ being a small positive number. Then we can consider that the nonlinear mapping $\mathfrak{P}$ maps $U(\subset\mathfrak{E}_0)$ into $\mathfrak{E}$, since the coefficients $a,b$ are smooth functions of $t, x, \varepsilon w$ and $\varepsilon\partial w/\partial x$. Let us assume that $\varepsilon c(t,x)=0$ for $-2\tau_1\leq t\leq -\tau_1$ after changing the value of $c$ for $-2\tau_1\leq t <0$. To fix ideas, we replace $c(t,x)$ by $\alpha(t)c(t,x)$ with a cut-off function $\alpha\in C^{\infty}(\mathbb{R})$ such that $\alpha(t)=1$ for $t\geq 0$ and $\alpha(t)=0$ for $t \leq -\tau_1$. Then $\mathfrak{P}^{-1}(\varepsilon c)$ is a solution of (16) on $t\geq 0$.\\ We must show that the Fr\'{e}chet space $\mathfrak{E}$ is tame with respect to some grading of norms. For $y\in \mathfrak{E}$, $n\in\mathbb{N}$, let us define $$ \|y\|_n^{(\infty)}:= \sup_{0\le j+k\leq n}\Big\|\Big(-\frac{\partial^2}{\partial t^2} \Big)^{j}(-\triangle)^{k}y\Big\|_{L^{\infty}([-2\tau_1,T]\times[0,x_R])}. $$ Then we can claim that $\mathfrak{E}$ turns out to be tame with respect to this grading $(\|\cdot\|_{n}^{(\infty)})_n$ (see \cite{Hamilton}, p.136, II.1.3.6 and p.137, II.1.3.7). In fact, even if $N$ is not an integer, we can define the Fourier transformation $Fy(\xi)$ of a function $y(x)$ for $0\leq x<\infty$ by $$Fy(\xi):=\int_0^{\infty}K(\xi x)y(x)x^{\frac{N}{2}-1}dx. $$ Here $K(X)$ is an entire function of $X \in \mathbb{C}$ given by $$K(X)=2(\sqrt{X})^{-\frac{N}{2}+1}J_{\frac{N}{2}-1}(4\sqrt{X}) =2^{\frac{N}{2}}\Phi_{\frac{N}{2}-1}(X), $$ $J_{\nu}$ being the Bessel function. Then we have $$F({-\triangle y})(\xi)=4\xi\cdot Fy(\xi) $$ and the inverse of the transformation $F$ is $F$ itself. See, e.g., \cite{Sneddon}. Then it is easy to see that $\mathfrak{E}$ endowed with the grading $(\|y\|_n^{(\infty)})_n$ is a tame direct summand of the tame space $$\mathfrak{F}:=L_1^{\infty}(\mathbb{R}\times [0,\infty), d\tau\otimes \xi^{\frac{N}{2}-1}d\xi, \log (1+\tau^2+4\xi)) $$ through the Fourier transformation $$\mathcal{F}y(\tau, \xi)= \frac{1}{\sqrt{2\pi}}\int e^{-\sqrt{-1}\tau t}Fy(t,\cdot)(\xi)dt$$ and its inverse, applied to the space $\tilde{\mathfrak{E}}_0:=C_0^{\infty}((-2T-2\tau_1,2T)\times [0, x_R+1))$, into which functions of $\mathfrak{E}$ can be extended (see, e.g., \cite{Adams}, p.88, Theorem 4.28, the existence of a `total extension operator'), and the space $$\dot{\mathfrak{E}}:=\dot{C}^{\infty}(\mathbb{R}\times [0,\infty)) := \{y \ | \ \forall j\,\forall k \ \lim_{L\rightarrow\infty}\sup_{|t|\geq L,x\geq L}|(-\partial_t^2)^j(-\triangle)^ky|=0\},$$ of which functions of $\mathfrak{E}$ are restrictions.
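{\bf Remark (numerical check)} The two properties of $F$ used above, namely $F(-\triangle y)=4\xi\cdot Fy$ and the fact that $F$ is its own inverse, can be tested numerically. The following sketch is an illustration only; the sample choices $N=4$ and $y(x)=e^{-x}$ are ours, and the quadratures are rough.
\begin{verbatim}
# Illustration only: F(-triangle y) = 4 xi F y  and  F(F y) = y.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

N = 4.0
nu = N/2 - 1

def K(X):
    X = np.maximum(X, 1e-300)
    return 2*X**((1 - N/2)/2)*jv(nu, 4*np.sqrt(X))

def F(y, xi):
    return quad(lambda x: K(xi*x)*y(x)*x**(N/2 - 1),
                0, np.inf, limit=400)[0]

y = lambda x: np.exp(-x)
# for y = e^{-x}: -triangle y = -(x y'' + (N/2) y') = (N/2 - x) e^{-x}
mty = lambda x: (N/2 - x)*np.exp(-x)

for xi in (0.3, 1.0):
    print(F(mty, xi), 4*xi*F(y, xi))              # the two numbers agree
    print(F(lambda x: F(y, x), xi), np.exp(-xi))  # F o F reproduces y
\end{verbatim}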
Actually, if we denote by $\mathfrak{e}:\mathfrak{E}\rightarrow \tilde{\mathfrak{E}}_0$ the extension operator, and by $\mathfrak{r}:\dot{\mathfrak{E}}\rightarrow\mathfrak{E}$ the restriction operator, then the operators $\mathcal{F}\circ\mathfrak{e}:\mathfrak{E}\rightarrow \mathfrak{F}$ and $\mathfrak{r}\circ\mathcal{F}:\mathfrak{F}\rightarrow \mathfrak{E}$ are tame and the composition $(\mathfrak{r}\circ\mathcal{F})\circ(\mathcal{F}\circ\mathfrak{e})$ is the identity of $\mathfrak{E}$. For the details, see the proof of \cite{Hamilton}, p.137, Theorem II.1.3.6.\\ On the other hand, let us define $$\|y\|_n^{(2)}:=\Big( \sum_{0\le j+k\leq n} \int_{-\tau_1}^T\|\Big(-\frac{\partial^2}{\partial t^2}\Big)^{j}(-\triangle)^{k}y\|_{\mathfrak{X}}^2dt \Big)^{1/2}. $$ Here $\mathfrak{X}=L^2((0,x_R); x^{\frac{N}{2}-1}dx)$ and $$\|y\|_{\mathfrak{X}}:=\Big(\int_0^{x_R}|y(x)|^2x^{\frac{N}{2}-1}dx\Big)^{1/2}.$$ We have $$\frac{1}{C}\|y\|_{\mathfrak{X}}\leq \|y\|_{L^{\infty}}\leq C \sup_{j\leq\sigma}\|(-\triangle)^jy\|_{\mathfrak{X}}$$ by the Sobolev imbedding theorem (see Appendix A), provided that $2\sigma > N/2$. The derivatives with respect to $t$ can be treated more simply. Then we see that the grading $(\|\cdot\|_n^{(2)})_n$ is tamely equivalent to the grading $(\|\cdot\|_n^{(\infty)})_n$, that is, we have $$\frac{1}{C}\|y\|_n^{(2)}\leq\|y\|_n^{(\infty)}\leq C\|y\|_{n+s}^{(2)}$$ with $2s >1+N/2$. Hence $\mathfrak{E}$ is tame with respect to $(\|\cdot\|_n^{(2)})_n$, too. The grading $(\|\cdot\|_n^{(2)})_n$ will be suitable for estimates of solutions of the associated linear wave equations. Note that $\mathfrak{E}_0$ is a closed subspace of $\mathfrak{E}$ endowed with these gradings. \medskip Now we verify that the nonlinear mapping $\mathfrak{P}$ is tame with respect to the grading $(\|\cdot\|_n^{(\infty)})_n$. To do so, we write $$\mathfrak{P}(w)=F(t, x, w, Dw, D^2w, w_{tt}), $$ where $D=\partial/\partial x$, $F$ is a smooth function of $t, x, w, Dw, D^2w, w_{tt}$, and $F$ is linear in $D^2w, w_{tt}$. According to \cite{Hamilton} (see p.142, II.2.1.6 and p.145, II.2.2.6), it is sufficient to prove that the linear differential operator $w \mapsto Dw=\partial w/\partial x$ is tame. But this is clear because of the following result. \begin{Proposition} For any $m\in\mathbb{N}$ and for any $y\in C^{\infty}[0,1]$ we have the formula $$\triangle^mDy(x)= x^{-\frac{N}{2}-m}\int_0^x \triangle^{m+1}y(x')(x')^{\frac{N}{2}+m-1}dx'. $$ As a corollary it holds that, for any $m,k \in \mathbb{N}$, $$\|(-\triangle)^mD^ky\|_{L^{\infty}}\leq \frac{1}{\prod_{j=0}^{k-1}(\frac{N}{2}+m+j)}\|(-\triangle)^{m+k}y\|_{L^{\infty}}.$$ \end{Proposition} {\bf Proof} This follows by integration by parts and induction on $m$, starting from the formula $$Dy(x)= x^{-\frac{N}{2}}\int_0^x \triangle y(x')(x')^{\frac{N}{2}-1}dx'. $$ $\blacksquare$ \\ In parallel with the results of \cite{Hamilton} (see p.144, Corollary II.2.2.3 and p.145, Theorem II.2.2.5), we shall use the following two propositions. Proofs for these propositions are given in Appendix B. \begin{Proposition} For any positive integer $m$, there is a constant $C$ such that $$|\triangle^m(f\cdot g)|_0\leq C(|\triangle^mf|_0|g|_0+ |f|_0|\triangle^mg|_0), $$ where $|\cdot |_0$ stands for $\|\cdot \|_{L^{\infty}}$. \end{Proposition} \begin{Proposition} Let $F(x,y)$ be a smooth function of $x$ and $y$ and let $M$ be a positive number.
Then for any positive integer $m$, there is a constant $C>0$ such that $$|\triangle^mF(x,y(x))|_0\leq C (1+|y|_m) $$ provided that $|y|_0\leq M$, where we denote $$|y|_m=\sup_{0\leq j\leq m}\|(-\triangle)^jy\|_{L^{\infty}}.$$ \end{Proposition} Summing up, we can claim that $$\|\mathfrak{P}(w)\|_n^{(\infty)}\leq C(1+\|w\|_{n+2}^{(\infty)}),$$ provided that $\|w\|_2^{(\infty)}\leq M$.\\ Therefore the problem reduces to estimating the solution, and its higher derivatives, of the linear equation $$D\mathfrak{P}(w)h=g,$$ when $w$ is fixed in $\mathfrak{E}_0$ and $g$ is given in $\mathfrak{E}$. Let us investigate the structure of the linear operator $D\mathfrak{P}(w)$. First we note that $$\frac{\gamma P}{\rho }=\frac{1}{r}-\frac{1}{R}=\frac{x}{R^3}(1+[x]_1).$$ Therefore it follows from Lemma 2 that there exists a smooth function $\hat{a}(t,x)$ such that $$\varepsilon a_{21}r\frac{\partial}{\partial r}=\varepsilon\hat{a}(t,x)x\frac{\partial}{\partial x}.$$ Let us put \begin{eqnarray*} b_1&:=&(1+\varepsilon a_1)L_1(x)+\varepsilon\hat{a}, \\ b_0&:=&(1+\varepsilon a_1)L_0(x)+\varepsilon a_{20}, \end{eqnarray*} taking into account the observation in Section 2, (13). Then we can write $$D\mathfrak{P}(w)h= \frac{\partial^2h}{\partial t^2}-(1+\varepsilon a_1)\triangle h +b_1(t,x)x\frac{\partial h}{\partial x}+b_0(t,x)h. $$ We note that $b_1, b_0$ depend only on $w, \partial w/\partial x, \partial^2w/\partial x^2$. Then we can claim \begin{Lemma} If a solution of $D\mathfrak{P}(w)h=g$ satisfies $$h|_{x=x_R}=0, \quad h|_{t=0}=\frac{\partial h}{\partial t}\Big|_{t=0}=0,$$ then $h$ enjoys the energy inequality $$\|\partial_th\|_{\mathfrak{X}}+\|\dot{D}h\|_{\mathfrak{X}}+\|h\|_{\mathfrak{X}}\leq C \int_0^T\|g(t')\|_{\mathfrak{X}}dt', $$ where $\dot{D}=\sqrt{x}\partial/\partial x$ and $C$ depends only on $N, R, T$, $A:=\|\varepsilon \partial_ta_1\|_{L^{\infty}}+\sqrt{2}\|\varepsilon \dot{D}a_1+b_1\|_{L^{\infty}}$ and $B:=\|b_0\|_{L^{\infty}}$, provided that $|\varepsilon a_1|\leq 1/2$. \end{Lemma} {\bf Proof} We consider the energy $$E(t):=\int_0^{x_R} ((\partial_th)^2+(1+\varepsilon a_1)(\dot{D}h)^2)x^{\frac{N}{2}-1}dx.$$ Multiplying the equation by $\partial_th$ and integrating by parts under the boundary condition, we get \begin{eqnarray*} \frac{1}{2}\frac{dE}{dt}&=& \int_0^{x_R}\Big(\frac{1}{2}\partial_t(\varepsilon a_1)(\dot{D}h)^2- \dot{D}(\varepsilon a_1)(\dot{D}h)(\partial_th) + \\ &&-\sqrt{x}b_1(\dot{D}h)(\partial_th)- b_0h(\partial_th)+ g(\partial_th)\Big)x^{\frac{N}{2}-1}dx, \end{eqnarray*} which implies $$\frac{1}{2}\frac{dE}{dt}\leq AE+B\Big|\int_0^{x_R}h(\partial_th)x^{\frac{N}{2}-1}dx\Big|+ E^{1/2}\|g(t)\|_{\mathfrak{X}}.$$ On the other hand, using the initial condition, we see that $U(t):=\|h\|_{\mathfrak{X}}^2$ enjoys \begin{eqnarray*} &&\frac{1}{2}\frac{dU}{dt}=\int_0^{x_R} h(\partial_th)x^{\frac{N}{2}-1}dx \leq U^{1/2}E^{1/2}, \\ &&U(0)=0. \end{eqnarray*} Hence $U(t)^{1/2}\leq \int_0^tE^{1/2}$ and $$\Big|\int_0^{x_R} h(\partial_th)x^{\frac{N}{2}-1}dx \Big|\leq E^{1/2}(t)\int_0^tE^{1/2}.$$ Summing up, we have \begin{eqnarray*}&&\frac{1}{2}\frac{dE}{dt}\leq AE+BE(t)^{1/2}\int_0^tE^{1/2}+E^{1/2}\|g(t)\|_{\mathfrak{X}},\\ &&E(0)=0. \end{eqnarray*} By Gronwall's lemma, we can derive the inequality $$E^{1/2}(t)\leq C\int_0^t\|g(t')\|_{\mathfrak{X}}dt'.$$ $\blacksquare$\\ A tame estimate of the inverse $D\mathfrak{P}(w)^{-1}:g\mapsto h$ will be given in the next section. This will complete the proof of the main result.
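{\bf Remark (numerical check)} The integral formula of Proposition 3, which underlies the tameness of $w\mapsto Dw$ above, admits a quick symbolic spot-check. The following SymPy sketch (illustration only; the sample value $N=9/2$ and the sample polynomial are arbitrary) verifies it for $m=0,1$.
\begin{verbatim}
# Illustration only: spot-check of Proposition 3 for m = 0, 1.
import sympy as sp

x, xp = sp.symbols('x xp', positive=True)
N = sp.Rational(9, 2)              # sample value; N need not be an integer
tri = lambda u: x*sp.diff(u, x, 2) + N/2*sp.diff(u, x)

y = 1 + 3*x + 2*x**2 + x**3        # arbitrary sample polynomial

for m in range(2):
    lhs = sp.diff(y, x)            # build triangle^m D y
    for _ in range(m):
        lhs = tri(lhs)
    g = y                          # build triangle^{m+1} y
    for _ in range(m + 1):
        g = tri(g)
    rhs = x**(-N/2 - m)*sp.integrate(g.subs(x, xp)*xp**(N/2 + m - 1),
                                     (xp, 0, x))
    print(m, sp.simplify(lhs - rhs))   # expected: 0, 0
\end{verbatim}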
\section{Tame estimate of solutions of linear wave equations} We consider the wave equation \begin{equation} \frac{\partial^2h}{\partial t^2}+\mathcal{A}h=g(t,x), \qquad (0\leq t\leq T, 0\leq x\leq 1), \end{equation} where \begin{eqnarray*} \mathcal{A}h&=&-b_2\triangle h +b_1\check{D}h+b_0h, \\ \triangle &=&x\frac{d^2}{dx^2}+\frac{N}{2}\frac{d}{dx}, \qquad \check{D}=x\frac{d}{dx}. \end{eqnarray*} We denote $\vec{b}=(b_2,b_1,b_0)$. The given function $\vec{b}(t,x)$ is supposed to be in $C^{\infty}([0,T]\times[0,1])$ and we assume that $|b_2(t,x)-1|\leq 1/2$. The function $g(t,x)$ belongs to $C^{\infty}([0,T]\times[0,1])$ and we suppose that \begin{equation} g(t,x)=0 \qquad \mbox{for}\qquad 0\leq t\leq \tau_1, \end{equation} where $\tau_1$ is a positive number. Let us consider the initial boundary value problem (IBP): \begin{eqnarray*} &&\frac{\partial^2h}{\partial t^2}+\mathcal{A}h=g(t,x), \\ &&h|_{x=1}=0, \\ &&h|_{t=0}=\frac{\partial h}{\partial t}\Big|_{t=0}=0 . \end{eqnarray*} Then (IBP) admits a unique solution $h(t,x)$ thanks to the energy estimate, and $h(t,x)=0$ for $0\leq t\leq \tau_1$ because of the uniqueness. Moreover, since the compatibility conditions are satisfied, the unique solution turns out to be smooth. A proof can be found, e.g., in \cite{Ikawa}, Chapter 2. For completeness, we give a brief sketch of a proof of the existence of smooth solutions in Appendix C. We are going to estimate the higher derivatives of $h$ in terms of those of $g$ and of the coefficients $b_2,b_1,b_0$.\\ \subsection{Notations} Let us introduce the following notations: For $m,n\in\mathbb{N}$ and for functions $y=y(x)$ of $x \in [0,1]$, we put \begin{eqnarray*} (y)_{2m} &:=&\|\triangle^my\|, \qquad \|y\|:=\|y\|_{\mathfrak{X}}:=\Big(\int_0^1|y(x)|^2x^{\frac{N}{2}-1}dx\Big)^{1/2}, \\ (y)_{2m+1} &:=&\|\dot{D}\triangle^my\|, \qquad \dot{D}=\sqrt{x}\frac{d}{dx}, \\ \|y\|_n&:=&\Big(\sum_{0\leq\ell\leq n}(y)_{\ell}^2\Big)^{1/2}, \\ |y|_n&:=&\max_{0\leq\ell\leq n} \|\dot{D}^{\ell}y\|_{L^{\infty}(0,1)}. \end{eqnarray*} For $n\in\mathbb{N}$, a fixed $T>0$, and for functions $y=y(t,x)$ of $(t,x)\in[0,T]\times[0,1]$, we put \begin{eqnarray*} \|y\|_n^T&&:=\Big(\sum_{j+k\leq n} \int_0^T\|\partial_t^jy\|_k^2dt\Big)^{1/2}, \\ |y|_n^T&&:=\max_{j+k\leq n}\|\partial_t^j\dot{D}^ky\|_{L^{\infty}([0,T]\times[0,1])}. \end{eqnarray*} Here $\partial_t=\partial/\partial t$. \\ Let us say that a grading of norms $(p_n)_{n\in\mathbb{N}}$ is {\bf interpolation admissible} if for $\ell\leq m\leq n$ it holds that $$p_m(f)\leq Cp_n(f)^{\frac{m-\ell}{n-\ell}}p_{\ell}(f)^{\frac{n-m}{n-\ell}}. $$ It is well known that $(p_n)_n$ is interpolation admissible if and only if $$p_n(f)^2\leq C p_{n+1}(f)p_{n-1}(f) $$ holds for any $n\geq 1$. If $(p_n)_n$ and $(q_n)_n$ are interpolation admissible, and if $(i,j)$ lies on the line segment joining $(k,\ell)$ and $(m,n)$, then $$p_i(f)q_j(g)\leq C (p_k(f)q_{\ell}(g)+p_m(f)q_n(g)). $$ (For a proof, see \cite{Hamilton}, p.144, Corollary 2.2.2.)\\ It is well known that $(|\cdot|_n)_n$ and $(|\cdot|_n^T)_n$ are interpolation admissible, since $\dot{D}=\partial/\partial\xi$, where $x=\xi^2/4$. Moreover $(\|\cdot\|_n)_n$ and $(\|\cdot\|_n^T)_n$ are interpolation admissible. To verify this, it is sufficient to note that $$y=\sum_{k=1}^{\infty}c_k\phi_k \in C_0^{\infty}[0,1)$$ satisfies $$(y)_{\ell}=\Big(\sum_k\lambda_k^{\ell}|c_k|^2\Big)^{1/2}. $$ Here $(\lambda_k)_k$ are the eigenvalues of $-\triangle$ with the Dirichlet boundary condition at $x=1$ and $(\phi_k)_k$ are the associated eigenfunctions.
We note that $(\dot{D}\phi_n/\sqrt{\lambda_n})_n$ is a complete orthonormal system of $\mathfrak{X}$ and $(\dot{D}y|\dot{D}\phi)_{\mathfrak{X}}= (-\triangle y|\phi)_{\mathfrak{X}}$ if $y \in C^{\infty}[0,1)$. Then it is clear by the Schwarz inequality that $$(y)_n^2\leq (y)_{n+1}(y)_{n-1}$$ for $y \in C_0^{\infty}[0,1)$. Since $(y)_j\leq (y)_{j'}$ for $j\leq j'$ and $y\in C_0^{\infty}[0,1)$, we have $$(y)_{\ell}\leq \|y\|_{\ell}\leq C\cdot (y)_{\ell}$$ and $$\|y\|_n^2\leq C\|y\|_{n-1}\|y\|_{n+1}$$ at least for $y \in C_0^{\infty}[0,1)$. By using a continuous linear extension of functions on $[0,1]$ to functions on $[0,2]$ with supports in $[0,3/2)$, we can claim that this inequality holds for any $y \in C^{\infty}[0,1]$ with a suitable change of the constant $C$. We refer to \cite{Mizohata}, Chapter 3, Section 4, Theorem 3.11. It is sufficient to note the following \begin{Proposition} If $\alpha(x) \in C^{\infty}(\mathbb{R})$ is fixed, then there is a constant $C$ depending on $\alpha$ such that $$\|\alpha y\|_n\leq C \|y\|_n.$$ \end{Proposition} A proof can be found in Appendix B. Hence $(\|\cdot\|_n)_n$ and $(\|\cdot\|_n^T)_n$ are interpolation admissible.\\ \subsection{Goal of this Section} Our goal is: \begin{Lemma} Assume that $|b_2-1|\leq 1/2$, $|\vec{b}|_2^T\leq M$ and $\|g\|_1^T \leq M$. Then there is a constant $C_n=C_n(T,M,N)$ such that if $h$ is the solution of (IBP) then $$\|h\|_{n+2}^T\leq C_n (1+\|g\|_{n+1}^T+|\vec{b}|_{n+3}^T).$$ \end{Lemma} We see that $\|y\|_{2m}^T$ is equivalent to $$\|y\|_m^{(2)}=\Big(\sum_{j+k\leq m} \int_0^T((\partial_t^{2j}y)_{2k})^2dt\Big)^{1/2}$$ for $y \in C^{\infty}([0,T]\times[0,1])$. In fact it is sufficient to note the following \begin{Proposition} For any $y \in C^{\infty}([0,1])$ we have $$\|\dot{D}\triangle^my\|_{\mathfrak{X}}\leq C(\|\triangle^my\|_{\mathfrak{X}}+ \|\triangle^{m+1}y\|_{\mathfrak{X}}).$$ \end{Proposition} A proof can be found in Appendix B. Therefore the conclusion of the Lemma reads: $$\|h\|_{m}^{(2)}\leq C(1+\|g\|_{m}^{(2)}+\|w\|_{m+3+s}^{(2)})$$ with $2s>1+N/2$, provided that $\|g\|_1^{(2)}\leq M$ and $\|w\|_{3+s}^{(2)}\leq M$, since $\vec{b}$ is a smooth function of $w, Dw, D^2w, \partial_t^2w$ in our context, so that $|\vec{b}|_{n+3}^T\leq C(1+|w|_{n+7}^T)$, provided that $|w|_4^T\leq M'$. This says that $(w,g)\mapsto h$ is tame with respect to the grading $(\|\cdot\|_n^{(2)})_n$.\\ Let us sketch a proof of this Lemma.
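{\bf Remark (numerical check)} The criterion $(y)_n^2\leq (y)_{n+1}(y)_{n-1}$ used above is just the Schwarz inequality for the spectral coefficients, and a direct numerical experiment (illustration only; the eigenvalues and coefficients below are random stand-ins) confirms it:
\begin{verbatim}
# Illustration only: (y)_n^2 <= (y)_{n-1} (y)_{n+1} for spectral norms.
import numpy as np

rng = np.random.default_rng(0)
lam = np.sort(rng.uniform(1.0, 50.0, 200))   # stand-ins for eigenvalues
c2 = rng.uniform(0.0, 1.0, 200)**4           # stand-ins for |c_k|^2

p = lambda n: np.sqrt(np.sum(lam**n*c2))     # (y)_n
for n in range(1, 6):
    print(n, p(n)**2 <= p(n - 1)*p(n + 1))   # always True
\end{verbatim}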
\subsection{Elliptic a priori estimates} By tedious calculations we have \begin{eqnarray*} [\triangle^m, \mathcal{A}]y &:=&\triangle^m\mathcal{A}y-\mathcal{A}\triangle^my \\ &=&\sum_{j+k=m}(b_{1k}^{(m)}\check{D}\triangle^jy+b_{0k}^{(m)}\triangle^jy), \end{eqnarray*} where $\check{D}=xd/dx$ and \begin{eqnarray*} b_{10}^{(m)}&=&-2mDb_2, \\ b_{00}^{(m)}&=&-m((2m-1)\triangle +(m-1)(1-N)D)b_2+ m(1+2\check{D})b_1, \end{eqnarray*} where $D=d/dx$ and $b_{1k}^{(m)}, b_{0k}^{(m)}, k\geq 1$ are determined by \begin{eqnarray*}&& b_{11}^{(1)}=2Db_0+(\triangle -(N-2)D)b_1,\\ && b_{01}^{(1)}=\triangle b_0, \end{eqnarray*} and the recurrence formula \begin{eqnarray*} b_{1k}^{(m+1)}&=&b_{1k}^{(m)}+(\triangle-(N-2)D)b_{1,k-1}^{(m)} +2Db_{0,k-1}^{(m)}\quad\mbox{for}\quad k\geq 2, \\ b_{11}^{(m+1)}&=&b_{11}^{(m)}-4m^2(\triangle+\frac{3-N}{2}D)Db_2 + \\ &+&((4m+1)\triangle -(2mN-6m+N-2)D)b_1+2Db_0, \\ b_{0k}^{(m+1)}&=& b_{0k}^{(m)}+(1+2\check{D})b_{1k}^{(m)}+\triangle b_{0,k-1}^{(m)}\quad\mbox{for}\quad k\geq 2, \\ b_{01}^{(m+1)}&=& b_{01}^{(m)}-m\triangle((2m-1)\triangle + (m-1)(1-N)D)b_2+\\ &+&m(3+2\check{D})\triangle b_1+\triangle b_0+(1+2\check{D})b_{11}^{(m)}. \end{eqnarray*} We have used the following calculus formulas: \begin{eqnarray*} D\check{D}&=&\triangle -\Big(\frac{N}{2}-1\Big)D, \qquad \triangle\check{D}-\check{D}\triangle =\triangle, \\ \triangle (Q\check{D}P)&=&Q\check{D}\triangle P+(1+2\check{D})Q\cdot\triangle P+ (\triangle-(N-2)D)Q\cdot\check{D}P, \\ \triangle(QP)&=&Q\triangle P+2(DQ)\check{D}P+(\triangle Q)P. \end{eqnarray*}\\ Then it follows that $$\|b_{0k}^{(m)}\|_{L^{\infty}}\leq C|\vec{b}|_{2k+3}, \qquad \|b_{1k}^{(m)}\|_{L^{\infty}}\leq C|\vec{b}|_{2k+2}, $$ and therefore $$\|[\triangle^m, \mathcal{A}]y\|\leq C\sum_{j+k=m}(|\vec{b}|_{2k+2}\|y\|_{2j+1} +|\vec{b}|_{2k+3}\|y\|_{2j}). $$ Since $$\triangle^m[\triangle,\mathcal{A}]=[\triangle^{m+1}, \mathcal{A}]- [\triangle^m, \mathcal{A}]\triangle, $$ it follows that $$\|\triangle^m[\triangle,\mathcal{A}]y\|\leq C A_m,$$ where $$A_m:=\sum_{j+k=m+1} (|\vec{b}|_{2k+2}\|y\|_{2j+1}+|\vec{b}|_{2k+3}\|y\|_{2j}). $$\\ {\bf Remark}\ This estimate is very rough and may be far from the best possible. But it is enough for our purpose. To derive this estimate, we have used the following observations: Let $\mathfrak{M}_k$ denote the set of all functions of the form $$\sum_{\alpha=2,1,0}\sum_{i+j\leq k}C_{\alpha ij}\triangle^iD^jb_{\alpha},$$ $C_{\alpha ij}$ being constants. Then it can be shown that $\triangle f$, $Df$ and $D\check{D}f(=\triangle f+(-\frac{N}{2}+1)Df)$ belong to $\mathfrak{M}_{k+1}$ if $f$ belongs to $\mathfrak{M}_k$. Using this, we can claim inductively that $b_{1k}^{(m)}\in \mathfrak{M}_{k+1}$ and $b_{0k}^{(m)}\in\mathfrak{M}_{k+1}+\check{D}\mathfrak{M}_{k+1}$ for any $m$, $k\leq m+1$. Note that $\|f\|_{L^{\infty}}\leq C|\vec{b}|_{2k}$ and $\|\check{D}f\|_{L^{\infty}}\leq\|\dot{D}f\|_{L^{\infty}}\leq C|\vec{b}|_{2k+1}$ if $f\in\mathfrak{M}_k$. (See Proposition 3 and Appendix B, (B.7). Also see (B.3), keeping in mind that $\triangle=\dot{D}^2+\frac{N-1}{2}D$.)\\ Differentiating $[\triangle^m, \mathcal{A}]y$, we get $$\dot{D}[\triangle^m, \mathcal{A}]y= \sum_{k+j=m}(\dot{b}_{2k}^{(m)}\triangle^{j+1}y+ \dot{b}_{1k}^{(m)}\dot{D}\triangle^jy+ \dot{b}_{0k}^{(m)}\triangle^jy), $$ where \begin{eqnarray*} \dot{b}_{2k}^{(m)}&=&\sqrt{x}b_{1k}^{(m)}, \\ \dot{b}_{1k}^{(m)}&=&(-\frac{N}{2}+1+\check{D})b_{1k}^{(m)}+b_{0k}^{(m)}, \\ \dot{b}_{0k}^{(m)}&=&\dot{D}b_{0k}^{(m)}.
\end{eqnarray*} Using $$\dot{D}\triangle^m[\triangle, \mathcal{A}]=\dot{D}[\triangle^{m+1}, \mathcal{A}]- \dot{D}[\triangle^m, \mathcal{A}]\triangle, $$ we have $$\|\dot{D}\triangle^m[\triangle, \mathcal{A}]y\|\leq CA_m^{\sharp},$$ where $$A_m^{\sharp}:=\sum_{j+k=m+1}(|\vec{b}|_{2k+2}\|y\|_{2j+2} +|\vec{b}|_{2k+3}\|y\|_{2j+1}+|\vec{b}|_{2k+4}\|y\|_{2j}).$$ Since $A_{m-1}\leq A_{m-1}^{\sharp}\leq 2A_m\leq 2A_{m}^{\sharp}$, we can claim that \begin{eqnarray*} \|[\triangle, \mathcal{A}]y\|_{2m} &\leq& CA_m, \\ \|[\triangle, \mathcal{A}]y\|_{2m+1}&\leq& CA_m^{\sharp}. \end{eqnarray*} Now $$\triangle y=-\frac{1}{b_2}(\mathcal{A}y-b_1\check{D}y-b_0y) $$ implies $$\|\triangle y\|\leq C(\|\mathcal{A}y\|+\|y\|_1), $$ and $$\|y\|_2\leq C(\|\mathcal{A}y\|+\|y\|_1). $$ Moreover \begin{eqnarray*} \dot{D}\triangle y &=&-\frac{1}{b_2}\Big(\dot{D}\mathcal{A}y+(-\dot{D}b_2+\sqrt{x}b_1)\triangle y + \\ &+& \Big(\Big(-\frac{N}{2}+1+\check{D}\Big)b_1+b_0\Big)\dot{D}y+(\dot{D}b_0)y\Big) \end{eqnarray*} implies $$\|\dot{D}\triangle y\|\leq C(\|\dot{D}\mathcal{A}y\|+\|y\|_2),$$ and $$\|y\|_3\leq C(\|\mathcal{A}y\|_1+\|y\|_1). $$ Using the estimates of $[\triangle,\mathcal{A}]$, we can show inductively that, for $n\geq 2$, $$\|y\|_{n+2}\leq C(\|\mathcal{A}y\|_n+\|y\|_1+K(n)), $$ where \begin{eqnarray*} K(n)=\begin{cases} A_m &\mbox{for $n=2m+2$,} \\ A_m^{\sharp} &\mbox{for $n=2m+3$.} \end{cases} \end{eqnarray*} By interpolation we have $$ K(n)\leq C(|\vec{b}|_2\|y\|_{n+1}+|\vec{b}|_{n+3}\|y\|). $$ Therefore we have \begin{Proposition} Suppose $|b_2-1|\leq 1/2$ and $|\vec{b}|_2\leq M$. Then $$\|y\|_{n+2} \leq C(\|\mathcal{A}y\|_n+\|y\|_1+|\vec{b}|_{n+3}\|y\|).$$ \end{Proposition} \subsection{Estimates for evolutions} Hereafter we denote generally by $H$ a solution of the boundary value problem $$\frac{\partial^2H}{\partial t^2}+\mathcal{A}H=G(t,x),\qquad H|_{x=1}=0 $$ such that $H(t,x)=0$ for $0\leq t\leq \tau_1$. Thus $\partial_t^jH|_{t=0}=0$ for any $j\in\mathbb{N}$. The time derivative $H_j=\partial_t^jH$ satisfies $$\frac{\partial^2H_j}{\partial t^2} +\mathcal{A}H_j=G_j,\qquad H_j|_{x=1}=0,$$ where $$G_j:=\partial_t^jG-[\partial_t^j,\mathcal{A}]H.$$ We put $G_0=G$. Hereafter we always assume that $|b_2-1|\leq 1/2$ and $|\vec{b}|_2^T\leq M$.\\ Note that we have the energy estimate $$\|\partial_tH\|+\|H\|_1\leq C\int_0^t\|G(t')\|dt'$$ for $0\leq t\leq T$. (See Lemma 3.)\\ {\bf Remark} In this subsection $H$ and $G$ do not mean the particular functions defined in Section 1.\\ We put $$Z_n(H):=\sum_{j+k=n}\|\partial_t^jH\|_k.$$ First we claim \begin{Proposition} For $n\in\mathbb{N}$, we have $$Z_{n+2}(H) \leq C(Z_{n+1}(\partial_tH)+\|G\|_n+ \|H\|_1+|\vec{b}|_{n+3}\|H\|).$$ \end{Proposition} {\bf Proof} By definition we have $$Z_{n+2}(H)=Z_{n+1}(\partial_tH)+\|H\|_{n+2}.$$ By Proposition 8 we have \begin{eqnarray*} \|H\|_{n+2}&&\leq C(\|\mathcal{A}H\|_n+\|H\|_1+|\vec{b}|_{n+3}\|H\|) \\ &&\leq C(\|\partial_t^2H-G\|_n+\|H\|_1+|\vec{b}|_{n+3}\|H\|) \\ &&\leq C(\|\partial_t^2H\|_n+\|G\|_n+\|H\|_1+|\vec{b}|_{n+3}\|H\|). \end{eqnarray*} Note that $\|\partial_t^2H\|_{n}\leq Z_{n+1}(\partial_tH)$. $\blacksquare$ \\ This implies by induction the following \begin{Proposition} For $n\in\mathbb{N}$ we have $$ Z_{n+2}(H)\leq C \Big(\|\partial_t^{n+1}H\|_1+ \sum_{j+k=n}\|G_j\|_k+\sum_{j+k=n}(\|\partial_t^jH\|_1+|\vec{b}|_{k+3}\|\partial_t^jH\|)\Big).
$$ \end{Proposition} Applying the energy estimate to $\|\partial_t^{n+1}H\|_1$, we get \begin{Proposition} We have $$Z_{n+2}(H)(t)\leq C\Big(\Big(\int_0^tF_n(t')^2dt'\Big)^{1/2}+F_n(t)\Big), $$ where \begin{eqnarray*} F_n(t)&&=\int_0^t\|\partial_t^{n+1}G(t')\|dt'+ |\vec{b}|_{n+1}^T\|H\|_2^T+\sum_{j+k=n}\|G_j\|_k+\\ &&+\sum_{j+k=n}(\|\partial_t^jH\|_1+|\vec{b}|_{k+3}\|\partial_t^jH\|). \end{eqnarray*} \end{Proposition} {\bf Proof} The energy estimate reads $$\|\partial_t^{n+1}H\|_1\leq C \Big(\int_0^t\|\partial_t^{n+1}G\|+\int_0^t \|[\partial_t^{n+1},\mathcal{A}]H\| \Big). $$ But \begin{eqnarray*} \int_0^t\|[\partial_t^{n+1},\mathcal{A}]H\| && \leq C\sum_{\alpha+\beta=n+1,\ \alpha\not=0}|\partial_t^{\alpha}\vec{b}|_0^T \Big(\int_0^t \|\partial_t^{\beta}H\|_2^2\Big)^{1/2} \\ &&\leq C'\Big(|\vec{b}|_1^T \Big(\int_0^tZ_{n+2}(H)^2\Big)^{1/2}+|\vec{b}|_{n+1}^T\|H\|_2^T\Big) \end{eqnarray*} by interpolation. Then Proposition 10 implies $$Z_{n+2}(H)(t)\leq C\Big(\Big(\int_0^t Z_{n+2}(H)^2\Big)^{1/2} +F_n(t)\Big).$$ We can apply Gronwall's lemma to this inequality. $\blacksquare$\\ Integrating the conclusion of Proposition 11, we see \begin{eqnarray*} \|H\|_{n+2}^T&&=\Big(\sum_{j+k\leq n+2}\int_0^T\|\partial_t^jH\|_k^2dt\Big)^{1/2} \\ &&\leq \Big(\int_0^T(\|H\|^2+\|\partial_tH\|^2+\|H\|_1^2+\sum_{0\leq\nu\leq n}Z_{\nu+2}(H)^2)dt\Big)^{1/2} \\ &&\leq \Big(\int_0^T(\|H\|^2+\|\partial_tH\|^2+\|H\|_1^2)dt+ C\sum_{0\leq\nu\leq n}\int_0^TF_{\nu}^2\Big)^{1/2} \\ &&\leq C'\Big(\|G\|_{n+1}^T+|\vec{b}|_{n+1}^T\|H\|_2^T+ \sum_{j+k\leq n}\Big(\int_0^T\|G_j\|_k^2\Big)^{1/2}+ \\ &&+\sum_{0\leq j\leq n}\sup_{0\leq t\leq T}\|\partial_t^jH\|_1+ |\vec{b}|_{2}^T\|H\|_{n+1}^T+ |\vec{b}|_{n+3}^T\|H\|^T\Big) \end{eqnarray*} by interpolation. Hereafter we suppose that $n\geq 1$. Then by interpolation we have $$|\vec{b}|_{n+1}^T\|H\|_2^T\leq C(|\vec{b}|_2^T\|H\|_{n+1}^T+ |\vec{b}|_{n+3}^T\|H\|^T)$$ and therefore we can claim \begin{Proposition} We have \begin{eqnarray} \|H\|_{n+2}^T&&\leq C\Big(\|G\|_{n+1}^T+ \sum_{j+k\leq n}\Big(\int_0^T\|G_j\|_k^2\Big)^{1/2}+ \nonumber \\ &&+\sum_{0\leq j\leq n}\sup_{0\leq t\leq T}\|\partial_t^jH\|_1+\|H\|_{n+1}^T+ |\vec{b}|_{n+3}^T\|H\|^T\Big). \end{eqnarray} \end{Proposition} Let us estimate the second and third terms on the right-hand side of (21).\\ We have \begin{equation} \sum_{j+k=\nu}\Big(\int_0^T\|G_j\|_k^2\Big)^{1/2}\leq C(\|G\|_{\nu}^T +\|H\|_{\nu+1}^T +|\vec{b}|_{\nu+3}^T\|H\|^T) \end{equation} {\bf Proof} It is sufficient to estimate $$\int_0^T\|[\partial_t^j, \mathcal{A}(\vec{b})]H\|_k^2dt.$$ But $$[\partial_t^j, \mathcal{A}(\vec{b})]H= \sum_{\alpha+\beta=j,\alpha\not=0} \binom{j}{\alpha} \mathcal{A}(\partial_t^{\alpha}\vec{b}) \partial_t^{\beta}H, $$ and $$\|\mathcal{A}(\partial_t^{\alpha}\vec{b})\partial_t^{\beta}H\|_k \leq C(\|\partial_t^{\beta}H\|_{k+2}+ |\partial_t^{\alpha}\vec{b}|_{k+3}\|\partial_t^{\beta}H\|),$$ since $$\|\mathcal{A}(\vec{b})y\|_k\leq C (\|y\|_{k+2}+|\vec{b}|_{k+3}\|y\|).$$ (The estimate of $\|\mathcal{A}y\|_n$ can be derived by the discussion of the preceding subsection, keeping in mind that $\triangle^m\mathcal{A}=\mathcal{A}\triangle^m+[\triangle^m,\mathcal{A}]$.) By interpolation, we have, for $\alpha+\beta+k=\nu, \alpha\not=0$, \begin{eqnarray*} \Big(\int_0^T\|\mathcal{A}(\partial_t^{\alpha}\vec{b}) \partial_t^{\beta}H\|_k^2\Big)^{1/2}&&\leq C(\|H\|_{\beta+k+2}^T+|\vec{b}|_{\alpha+k+3}^T\|H\|_{\beta}^T) \\ &&\leq C'(\|H\|_{\nu+1}^T+ |\vec{b}|_2^T\|H\|_{\nu+1}^T+ |\vec{b}|_{\nu+3}^T\|H\|^T).
\end{eqnarray*} $\blacksquare$ Next we have \begin{equation} \sup_{0\leq t\leq T}\|\partial_t^jH\|_1 \leq C(\|G\|_j^T + \|H\|_{j+1}^T +|\vec{b}|_{j+3}^T\|H\|^T). \end{equation} {\bf Proof} By the energy estimate, we have $$\|\partial_t^jH\|_1\leq C\int_0^T\|G_j\|.$$ Here we can use the estimate of $$\int_0^T\|[\partial_t^j, \mathcal{A}]H\|^2$$ given in the proof of the preceding estimate (22), with $k=0$ and $\nu=j$. $\blacksquare$ Substituting (22) and (23) into (21), applied to $H=h$, we have \begin{equation} \|h\|_{n+2}^T\leq C(\|g\|_{n+1}^T+\|h\|_{n+1}^T+|\vec{b}|_{n+3}^T\|h\|^T). \end{equation} Noting that $$\|h\|^T\leq C\|g\|^T\leq CM,$$ we have \begin{equation} \|h\|_{n+2}^T\leq C (\|h\|_{n+1}^T+\|g\|_{n+1}^T+|\vec{b}|_{n+3}^T), \end{equation} which implies inductively that \begin{equation} \|h\|_{n+2}^T\leq C (1+\|g\|_{n+1}^T+|\vec{b}|_{n+3}^T), \end{equation} provided that $\|g\|_1^T\leq M$ and $|\vec{b}|_2^T\leq M$. This completes the proof of Lemma 4.\\ {\bf Acknowledgment} The author would like to express his sincere thanks to the referee for his/her careful reading of the preceding version of the manuscript and for many kind suggestions for rewriting it. If this revised manuscript has turned out to be readable, it is thanks to the referee's advice.\\ \noindent {\bf \Large Appendix }\medskip \noindent{\bf A. The Sobolev imbedding theorem}\medskip For the sake of self-containedness, we prove the Sobolev imbedding theorem in our framework. (The statement is well known if $N$ is an integer.) For $y \in C^{\infty}[0,1]$ and $m \in \mathbb{N}$ we denote $$((y))_m:=\|(-\triangle)^my\|_{\mathfrak{X}}.$$ For $y\in \mathfrak{X}=L^2((0,1), x^{\frac{N}{2}-1}dx)$ we have the expansion $$y(x)=\sum_{n=1}^{\infty}c_n\phi_n(x), $$ where $(\phi_n)_n$ is the orthonormal system of eigenfunctions of the operator $T=-\triangle$ with the Dirichlet boundary condition at $x=1$. Then, for $m\in\mathbb{N}$ and for $y \in C^{\infty}[0,1)$, we have $$(-\triangle)^my(x)=\sum_{n=1}^{\infty}c_n\lambda_n^m\phi_n(x)$$ and $$((y))_m=\Big(\sum_n |c_n|^2\lambda_n^{2m} \Big)^{1/2}. $$\\ \noindent{\bf Lemma A.1.} Let $j_{\nu,n}$ be the $n$-th positive zero of the Bessel function $J_{\nu}$, where $\nu=\frac{N}{2}-1$. Then we have $$\lambda_n=(j_{\nu,n}/2)^2 \sim \frac{\pi^2}{4}n^2 \ \ \mbox{as }n\rightarrow\infty.$$ {\bf Proof} By Hankel's asymptotic expansion (see \cite{Watson}), the zeros of $J_{\nu}$ can be determined by the relation $$\tan\Big(r-\Big(\frac{\nu}{2}+\frac{1}{4}\Big)\pi\Big)=\frac{2}{\nu^2-\frac{1}{4}}r(1+O(r^{-2})).$$ Then we see $$j_{\nu,n}=\Big(n_0+n+\frac{\nu}{2}+\frac{3}{4}\Big)\pi + O\Big(\frac{1}{n}\Big) \ \ \mbox{as }n\rightarrow\infty,$$ for some $n_0\in\mathbb{Z}$. $\blacksquare$ \noindent{\bf Lemma A.2.} There is a constant $C=C(N)$ such that $$|\phi_n(x)|\leq C n^{\frac{N-1}{2}} \ \ \mbox{for }0\leq x\leq 1.$$ {\bf Proof} Note that $\phi_n(x)$ is a normalization of $\Phi_{\nu}(\lambda_n x)$, where $$\Phi_{\nu}\Big(\frac{r^2}{4}\Big)=J_{\nu}(r)\Big(\frac{r}{2}\Big)^{-\nu}.$$ Since $|\Phi_{\nu}(x)|\leq C$ for $0\leq x<\infty$, it is sufficient to estimate $\|\Phi_{\nu}(\lambda_nx)\|_{\mathfrak{X}}$.
Using Hankel's asymptotic expansion in the form \begin{eqnarray*} J_\nu(r)&=&\sqrt{\frac{2}{\pi r}}\Big(\cos\Big(r-\frac{\nu}{2}\pi-\frac{\pi}{4}\Big) \Big(1+O\Big(\frac{1}{r^2}\Big)\Big)\\ &&-\frac{1}{r}\sin\Big(r-\frac{\nu}{2}\pi-\frac{\pi}{4}\Big) \Big(\frac{\nu^2-\frac{1}{4}}{2}+O\Big(\frac{1}{r^2}\Big)\Big)\Big), \end{eqnarray*} we see that \begin{align*} \|\Phi_{\nu}(\lambda_nx)\|_{\mathfrak{X}}^2=& (\lambda_n)^{-\nu-1}\int_0^{j_{\nu,n}}J_{\nu}(r)^2rdr = (\lambda_n)^{-\nu-1}\Big(\frac{1}{\pi}j_{\nu,n}+O(1)\Big) \\ =&(\lambda_n)^{-\nu-1}\cdot \frac{2}{\pi}(\lambda_n^{1/2}+O(1)) \sim \frac{2}{\pi}(\lambda_n)^{-\nu-\frac{1}{2}}. \end{align*} Then Lemma A.1 implies that $$\|\Phi_{\nu}(\lambda_nx)\|_{\mathfrak{X}}^{-1}\sim \mbox{Const.}\,n^{\nu+\frac{1}{2}}.$$ $\blacksquare$ \noindent{\bf Lemma A.3.} If $ y \in C_0^{\infty}[0,1)$ and $0\leq j\leq m$, then $((y))_j\leq((y))_m.$ {\bf Proof} For $y=\sum c_n\phi_n$, we have \begin{eqnarray*} ((y))_j^2&=& \sum |c_n|^2\lambda_n^{2j} =(\lambda_1)^{2j}\sum |c_n|^2(\lambda_n/\lambda_1)^{2j} \\ &\leq& (\lambda_1)^{2j}\sum|c_n|^2(\lambda_n/\lambda_1)^{2m}= \lambda_1^{2j-2m}((y))_m^2. \end{eqnarray*} According to \cite{Watson} (see Section 15.6, p.208), we know that $j_{\nu,1}$ is an increasing function of $\nu>0$ and $j_{\frac{1}{2}, 1}=\pi$. Therefore $\lambda_1\geq (\pi/2)^2 >1$ for $N\geq 3$, which implies $((y))_j\leq((y))_m$. $\blacksquare$ \noindent{\bf Lemma A.4.} If $2s>N/2$, then there is a constant $C =C(s,N)$ such that $$\|y\|_{L^{\infty}}\leq C((y))_s$$ for any $y\in C_0^{\infty}[0,1)$. {\bf Proof} Let $y=\sum c_n\phi_n$. Then Lemmas A.1 and A.2 imply that \begin{align*} |y(x)|\leq& \sum |c_n||\phi_n(x)| \leq C\sum |c_n|n^{\frac{N-1}{2}} \\ \leq& C\sqrt{\sum|c_n|^2\lambda_n^{2s}} \sqrt{\sum n^{N-4s-1}}. \end{align*} Since $N-4s<0$, the last sum in the above inequality is finite. Therefore we get the required estimate. $\blacksquare$ Now, for $R>0$, we denote by $\mathfrak{X}(0,R)$ the Hilbert space of functions $y(x)$ of $0\leq x\leq R$ endowed with the inner product $$(y_1|y_2)_{\mathfrak{X}(0,R)}= \int_0^Ry_1(x)\overline{y_2(x)}x^{\frac{N}{2}-1}dx. $$ Moreover, for $m\in\mathbb{N}$, we denote by $\mathfrak{X}^{2m}(0,R)$ the space of functions $y(x)$ of $0\leq x\leq R$ for which the derivatives $(-\triangle)^jy \in \mathfrak{X}(0,R)$ exist in the sense of distributions for $0\leq j\leq m$, and we use the norm $$\|y\|_{\mathfrak{X}^{2m}(0,R)}:=\Big( \sum_{0\leq j\leq m}\|(-\triangle)^jy\|_{\mathfrak{X}(0,R)}^2 \Big)^{1/2}. $$ Let us denote by $\mathfrak{X}_0^{2m}(0,R)$ the closure of $C_0^{\infty}[0,R)$ in the space $\mathfrak{X}^{2m}(0,R)$. There is a continuous linear extension $\Psi: \mathfrak{X}^{2m}(0,1) \rightarrow \mathfrak{X}_0^{2m}(0,2)$ such that $$\|y\|_{\mathfrak{X}^{2m}(0,1)}\leq \|\Psi y\|_{\mathfrak{X}^{2m}(0,2)} \leq C\|y\|_{\mathfrak{X}^{2m}(0,1)}.$$ See \cite{Mizohata}, p.186, Theorem 3.11, keeping in mind Propositions 6 and 7. Then, by Lemmas A.3 and A.4, the Sobolev imbedding theorem holds for $y \in \mathfrak{X}_0^{2s}(0,2)$. Namely, if $2s >N/2$, there is a constant $C$ such that $$\|y\|_{L^{\infty}}\leq C\|y\|_{\mathfrak{X}^{2s}(0,2)} $$ for $y \in \mathfrak{X}_0^{2s}(0,2)$. Thus the same imbedding theorem holds for $y \in C^{\infty}[0,1] \subset \mathfrak{X}^{2s}(0,1)$ through the above extension. The conclusion is that, if $2s >N/2$, there is a constant $C=C(s,N)$ such that $$\|y\|_{L^{\infty}}\leq C\sup_{0\leq j \leq s}\|(-\triangle)^jy \|_{\mathfrak{X}} $$ for any $y \in C^{\infty}[0,1]$.\\
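{\bf Remark (numerical check)} Lemma A.1 can be illustrated numerically: locating the zeros of $J_\nu$ by bisection and forming $\lambda_n=(j_{\nu,n}/2)^2$, the ratio $\lambda_n/(\frac{\pi^2}{4}n^2)$ tends to $1$. The following sketch is an illustration only; $N=5$ is a sample value.
\begin{verbatim}
# Illustration only: lambda_n = (j_{nu,n}/2)^2 ~ (pi^2/4) n^2.
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

N = 5.0
nu = N/2 - 1

def bessel_zeros(nu, count):
    zeros, r, step = [], max(nu, 1.0), np.pi/2
    while len(zeros) < count:
        if jv(nu, r)*jv(nu, r + step) < 0:     # sign change brackets a zero
            zeros.append(brentq(lambda t: jv(nu, t), r, r + step))
        r += step
    return np.array(zeros)

j = bessel_zeros(nu, 200)
n = np.arange(1, 201)
lam = (j/2)**2
print(lam[[9, 49, 199]]/((np.pi**2/4)*n[[9, 49, 199]]**2))
# the ratios approach 1 as n grows
\end{verbatim}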
\noindent{\bf B. Nirenberg-Moser type inequalities}\medskip Let us prove Propositions 4, 5, 6 and 7.\\ \noindent{\bf Proof of Proposition 4}\medskip First, it is easy to verify the formula $$\dot{D}^kDy(x)=x^{-\frac{N+k}{2}}\int_0^x\dot{D}^k\triangle y(x') (x')^{\frac{N+k}{2}-1}dx',\eqno(B.1) $$ where $k\in \mathbb{N}$, $$\dot{D}:=\sqrt{x}\frac{d}{dx}\quad\mbox{and} \quad D:=\frac{d}{dx}. $$ Since $\triangle =\dot{D}^2+\frac{N-1}{2}D$, (B.1) implies $$|\dot{D}^kDy|_0\leq \frac{2}{N+k}|\dot{D}^{k+2}y|_0+\frac{N-1}{N+k}|\dot{D}^kDy|_0. $$ Here and hereafter $|\cdot|_0$ stands for $\|\cdot\|_{L^{\infty}}$. Thus we have $$|\dot{D}^kDy|_0\leq \frac{2}{k+1}|\dot{D}^{k+2}y|_0.$$ Repeating this estimate, we get $$|\dot{D}^kD^jy|_0\leq \Big(\frac{2}{k+1}\Big)^j|\dot{D}^{k+2j}y|_0. \eqno(B.2)$$ On the other hand, since $\dot{D}^2=\triangle -\frac{N-1}{2}D$ and $D\triangle -\triangle D=D^2$, we have $$\dot{D}^{2\mu}=\sum_{k=0}^{\mu}C_{k\mu}\triangle^{\mu-k}D^k \eqno(B.3)$$ with some constants $C_{k\mu}=C(k,\mu,N)$. Then it follows from (B.3) and Proposition 3 that $$|\dot{D}^{2\mu}D^jy|_0\leq C|\triangle^{\mu+j}y|_0. \eqno(B.4)$$ Since $$\triangle =\dot{D}^2+\frac{N-1}{2}D\quad\mbox{and}\quad D\dot{D}^2-\dot{D}^2D=D^2, $$ it is easy to see that there are constants $C_{km}=C(k,m,N)$ such that $$\triangle^m=\sum_{k=0}^m C_{km}\dot{D}^{2(m-k)}D^k. \eqno(B.5)$$ Applying the Leibniz rule to $D$ and $\dot{D}$, we see $$\triangle^m(f\cdot g)= \sum C_{k\ell jm}(\dot{D}^{2(m-k)-\ell}D^{k-j}f)\cdot(\dot{D}^{\ell}D^jg) \eqno(B.6)$$ with some constants $C_{k\ell jm}$. The summation is taken over $0\leq j\leq k\leq m, 0\leq \ell \leq 2(m-k)$. By estimating each term of the right-hand side of (B.6), we can obtain the assertion of Proposition 4. In fact, we consider the term $$(\dot{D}^{\ell'}D^{j'}f)\cdot(\dot{D}^{\ell}D^jg)$$ provided that $\ell'+\ell +2(j'+j)=2m$. By (B.2) and (B.4) we have \begin{eqnarray*} |\dot{D}^{\ell}D^jg|_0\leq C|\dot{D}^{\ell +2j}g|_0 \leq C'|\dot{D}^{2m}g|_0^{\frac{\ell +2j}{2m}}|g|_0^{1-\frac{\ell+2j}{2m}} \leq C''|\triangle^mg|_0^{\frac{\ell+2j}{2m}}|g|_0^{1-\frac{\ell+2j}{2m}} \end{eqnarray*} for some positive constants $C$, $C'$ and $C''$. Here we have used the Nirenberg interpolation for $\dot{D}=\partial/\partial \xi$, where $x=\xi^2/4$. The same estimate holds for $|\dot{D}^{\ell'}D^{j'}f|_0$. Therefore we have \begin{eqnarray*} |(\dot{D}^{\ell'}D^{j'}f)\cdot(\dot{D}^{\ell}D^jg)|_0 \leq && C|\triangle^mf|_0^{\frac{\ell'+2j'}{2m}}|f|_0^{1-\frac{\ell'+2j'}{2m}}|\triangle^mg|_0^{\frac{\ell+2j}{2m}} |g|_0^{1-\frac{\ell+2j}{2m}} \\ \leq && C(|\triangle^mf|_0|g|_0+|f|_0|\triangle^mg|_0), \end{eqnarray*} since $X^{\theta}Y^{1-\theta}\leq X+Y$. \\ \noindent{\bf Proof of Proposition 5}\medskip Suppose $F(x,y)$ is a smooth function of $x$ and $y$. Let us consider the composed function $U(x):=F(x,y(x))$. We claim that $$|\triangle^mU|_0\leq C(1+|y|_m) $$ provided that $|y|_0\leq M$. In fact, $$\triangle^mU=\sum C_{km}\dot{D}^{2(m-k)}D^k U $$ consists of several terms of the following form: $$\Big(\dot{D}_x^K\Big(\frac{\partial}{\partial y}\Big)^LD_x^k\Big(\frac{\partial}{\partial y}\Big)^{\ell}F\Big)\cdot (\dot{D}^{K_1}y)\cdots(\dot{D}^{K_L}y)\cdot(\dot{D}^{\mu_1}D^{k_1}y)\cdots (\dot{D}^{\mu_{\ell}}D^{k_{\ell}}y), $$ where $$k+k_1+\cdots+k_{\ell}=\kappa,$$ $$ K+K_1+\cdots+K_L+\mu_1+\cdots+\mu_{\ell}=2(m-\kappa).$$ Therefore $$K_1+\cdots+K_L+(\mu_1+2k_1)+\cdots+(\mu_{\ell} +2k_{\ell}) \leq 2m.
$$ Applying the Nirenberg interpolation to $\dot{D}$ and using (B.4), we have $$|\dot{D}^{K_1}y|_0\leq C|y|_m^{\frac{K_1}{2m}}|y|_0^{1-\frac{K_1}{2m}}. $$ Similarly, $$|\dot{D}^{\mu_1} D^{k_1}y|_0\leq C|\dot{D}^{\mu_1+2k_1}y|_0\leq C'|y|_m^{\frac{\mu_1+2k_1}{2m}} |y|_0^{1-\frac{\mu_1+2k_1}{2m}}, $$ and so on. Our claim then follows. \\ \textbullet\ We note that by (B.2), (B.4) and (B.5) we have $$\frac{1}{C}|\dot{D}^{2j}f|_0\leq |\triangle^jf|_0 \leq C|\dot{D}^{2j}f|_0. \eqno(B.7)$$ \\ \noindent{\bf Proof of Proposition 6}\medskip It can be verified that $$\triangle^m(\alpha y)= \sum_{j+k=m}(\alpha_{1k}^{(m)}\check{D}\triangle^jy+ \alpha_{0k}^{(m)}\triangle^jy), $$ where $\alpha_{1k}^{(m)}$ and $\alpha_{0k}^{(m)}$ are determined by the recurrence formula \begin{eqnarray*} \alpha_{1k}^{(m+1)}&=&\alpha_{1k}^{(m)}+ (\triangle -(N-2)D)\alpha_{1,k-1}^{(m)}+2D\alpha_{0,k-1}^{(m)},\\ \alpha_{0k}^{(m+1)}&=&(1+2\check{D})\alpha_{1k}^{(m)}+ \alpha_{0k}^{(m)}+\triangle\alpha_{0,k-1}^{(m)}, \end{eqnarray*} starting from $$\alpha_{10}^{(0)}=0, \qquad \alpha_{00}^{(0)}=\alpha. $$ Here we have used the convention $\alpha_{1k}^{(m)}=\alpha_{0k}^{(m)} =0$ for $k<0$ or $k>m$. Of course $\alpha_{10}^{(m)}=0$ for any $m$. Therefore we see that $\|\triangle^m(\alpha y)\|\leq C \|y\|_{2m}$. Differentiating the formula, we get $$ \dot{D}\triangle^m(\alpha y) = \sum_{j+k=m}(\dot{\alpha}_{2k}^{(m)}\triangle^{j+1}y+\dot{\alpha}_{1k}^{(m)}\dot{D}\triangle^jy + \dot{\alpha}_{0k}^{(m)}\triangle^jy), $$ where \begin{eqnarray*} \dot{\alpha}_{2k}^{(m)}&=&\sqrt{x}\alpha_{1k}^{(m)}, \\ \dot{\alpha}_{1k}^{(m)}&=&\Big(-\frac{N}{2}+1+\check{D}\Big)\alpha_{1k}^{(m)}+ \alpha_{0k}^{(m)}, \\ \dot{\alpha}_{0k}^{(m)}&=&\dot{D}\alpha_{0k}^{(m)}. \end{eqnarray*} It is clear that $\|\dot{D}\triangle^m(\alpha y)\| \leq C\|y\|_{2m+1}$, since $\dot{\alpha}_{20}^{(m)}=0$ for any $m$. $\blacksquare$\\ \noindent{\bf Proof of Proposition 7}\medskip It is sufficient to prove that $$\|\dot{D}y\|\leq C(\|y\|+\|\triangle y\|), $$ where and hereafter we denote $\|\cdot\|=\|\cdot\|_{\mathfrak{X}}$. If $w$ satisfies the Dirichlet boundary condition $w(1)=0$, then $$\|\dot{D}w\|^2=(-\triangle w\ |\ w)\leq \|\triangle w\|\|w\|.$$ Therefore we have $$\|\dot{D}y\|^2\leq \|\triangle y\|(\|y\|+|y(1)|).$$ On the other hand we have $$\sqrt{\frac{2}{N}}|y(1)|\leq \|y\|+\sqrt{\frac{2}{N-2}}\|\dot{D}y\|.$$ In fact, since $$y(1)=y(x)+\int_x^1\frac{1}{\sqrt{x'}}\dot{D}y(x')dx',$$ the Schwarz inequality gives $$|y(1)|\leq |y(x)| +\sqrt{\frac{2}{N-2}}\,\|\dot{D}y\|\,x^{-\frac{N}{4}+\frac{1}{2}}$$ for $x>0$. Taking the $\mathfrak{X}$-norm of both sides, we get the above estimate of $|y(1)|$. Hence we have, for any $\epsilon>0$, \begin{eqnarray*} \|\dot{D}y\|^2&&\leq C\|\triangle y\|(\|y\|+\|\dot{D}y\|) \\ &&\leq C\Big(\frac{1}{2\epsilon}\|\triangle y\|^2+\frac{\epsilon}{2}(\|y\|+\|\dot{D}y\|)^2\Big) \\ &&\leq C\Big(\frac{1}{2\epsilon}\|\triangle y\|^2+\epsilon \|y\|^2+ \epsilon \|\dot{D}y\|^2\Big). \end{eqnarray*} Taking $\epsilon$ small, we get the desired estimate. $\blacksquare$\\ \noindent{\bf C. Existence of the smooth solution to the linear wave equation}\medskip Let us give a proof of the existence of the smooth solution to the initial boundary value problem (IBP): \begin{eqnarray*} &&\frac{\partial^2h}{\partial t^2}+\mathcal{A}h=g(t,x), \qquad h|_{x=1}=0, \\ &&h|_{t=0}=\frac{\partial h}{\partial t}\Big|_{t=0}=0. \end{eqnarray*} We assume that $g(t,x)=0$ for $0\leq t\leq \tau_1$.\\ {\bf Existence.} The existence of the solution can be proved by applying Kato's theory developed in \cite{Kato1970}.
In fact, we consider the closed operator $$\mathfrak{A}(t)=\begin{bmatrix} 0 & -1\\ \mathcal{A}(t) & 0 \end{bmatrix} $$ in $\mathfrak{H}:=\mathfrak{X}_0^1\times\mathfrak{X}$ densely defined on $$\mathcal{D}(\mathfrak{A}(t))=\mathfrak{G}:=\mathfrak{X}_{(0)}^2\times\mathfrak{X}_0^1.$$ Here $\mathfrak{X}=L^2((0,1);x^{\frac{N}{2}-1}dx), \mathfrak{X}^1=\{y\in\mathfrak{X} | \dot{D}y\in\mathfrak{X}\}, \mathfrak{X}_0^1=\{y\in\mathfrak{X}^1 | y|_{x=1}=0\}, \mathfrak{X}^2=\{y\in\mathfrak{X}^1 | \triangle y\in \mathfrak{X}\}$ and $\mathfrak{X}_{(0)}^2=\mathfrak{X}^2\cap\mathfrak{X}_0^1=\{y\in\mathfrak{X}^2 | y|_{x=1}=0\}$. The problem (IBP) is equivalent to $$\frac{du}{dt}+\mathfrak{A}(t)u=\mathfrak{f}(t), \qquad u|_{t=0}=0,$$ with $$ u=\begin{bmatrix} h \\ \frac{\partial h}{\partial t}\end{bmatrix} \quad \mbox{and}\quad \mathfrak{f}(t)=\begin{bmatrix} 0 \\ g(t,\cdot)\end{bmatrix}.$$ We can write $$\mathcal{A}(t)y=-x^{-\frac{N}{2}+1}\frac{d}{dx}ax^{\frac{N}{2}}\frac{dy}{dx}+b\check{D}y +cy,$$ where $$a=b_2,\qquad b=b_1+Db_2,\qquad c=b_0.$$ Then $$(\mathcal{A}(t)y|v)_{\mathfrak{X}}=(a(t)\dot{D}y|\dot{D}v)_{\mathfrak{X}}+((b\check{D}+c)y|v)_{\mathfrak{X}}$$ for $y \in \mathfrak{X}_{(0)}^2 $ and $v\in\mathfrak{X}_0^1$. The inner product $$(y|v)_t=(a(t)\dot{D}y|\dot{D}v)_{\mathfrak{X}}+(y|v)_{\mathfrak{X}}$$ introduces an equivalent norm $\|\cdot\|_t$ in $\mathfrak{X}_0^1$ provided that $|1-a|\leq 1/2$, $\|a\|_{L^{\infty}}, \|b\|_{L^{\infty}}, \|c\|_{L^{\infty}} \leq M_0.$ Then we have $$-(\mathfrak{A}(t)u|u)_{\mathfrak{H}_t}=(u_2|u_1)_{\mathfrak{X}}- ((b\check{D}+c)u_1|u_2)_{\mathfrak{X}} \leq \beta \|u\|_{\mathfrak{H}_t}^2,$$ where \begin{eqnarray*} (u|\phi)_{\mathfrak{H}_t}&=&(u_1|\phi_1)_t+(u_2|\phi_2)_{\mathfrak{X}}\\ &=&(a(t)\dot{D}u_1|\dot{D}\phi_1)_{\mathfrak{X}}+ (u_1|\phi_1)_{\mathfrak{X}}+(u_2|\phi_2)_{\mathfrak{X}} \end{eqnarray*} and $\beta$ depends only upon $M_0$. $\|u\|_{\mathfrak{H}_t}=\sqrt{(u|u)_{\mathfrak{H}_t}}$ is equivalent to $\|u\|_{\mathfrak{H}}$ and depends on $t$ smoothly in the sense of \cite{Kato1970}, Proposition 3.4. From the above estimate it follows that $\mathfrak{A}(t)$ is a quasi-accretive generator in the norm $\|\cdot\|_{\mathfrak{H}_t}$. In fact, the following argument is standard: the equation $$(\lambda+\mathfrak{A}(t))u=f$$ is reduced to an elliptic equation $$(\lambda^2+\mathcal{A}(t))u_1=\lambda f_1+f_2,$$ which admits a solution $u_1\in \mathfrak{X}_{(0)}^2$ for given $f_3:=\lambda f_1+f_2\in\mathfrak{X}$, provided that $\lambda^2>\|b\|_{L^{\infty}}^2+\|c\|_{L^{\infty}}+\frac{1}{4}$; then $$Q[u]:=\lambda^2\|u\|_{\mathfrak{X}}^2+ (a(t)\dot{D}u|\dot{D}u)_{\mathfrak{X}}+((b\check{D}+c)u|u)_{\mathfrak{X}}\geq \frac{1}{4}\|u\|_{\mathfrak{X}^1}^2,$$ and for given $f_3\in\mathfrak{X}$ there is a $u_1\in \mathfrak{X}_0^1$ such that $Q(u_1,v)=(f_3|v)_{\mathfrak{X}}$ for any $v\in \mathfrak{X}_0^1$; thus $(\lambda+\mathfrak{A}(t))^{-1}\in\mathcal{B}(\mathfrak{H})$ and $\|(\lambda+\mathfrak{A}(t))^{-1}\|_{\mathcal{B}(\mathfrak{H}_t)} \leq(\lambda-\beta)^{-1}$. Therefore by \cite{Kato1970}, Proposition 3.4, $(\mathfrak{A}(t))_t$ is a stable family of generators. Hence by \cite{Kato1970}, Theorems 7.1 and 7.2, we can claim that there exists a solution $u\in C^1([0,T]; \mathfrak{H})\cap C([0,T];\mathfrak{G})$, which gives the desired solution $h\in C^2([0,T];\mathfrak{X})\cap C^1([0,T];\mathfrak{X}^1)\cap C([0,T];\mathfrak{X}^2)$, since $g\in C^{\infty}([0,T]\times[0,1])$. {\bf Regularity.} We want to show that $h \in C^{\infty}([0,T]\times[0,1])$.
To do so, we apply Kato's theory developed in \cite{Kato1976}, Section 2. We consider the spaces \begin{eqnarray*} &&\hat{\mathfrak{H}}=\hat{\mathfrak{H}}_0=\mathfrak{X}_0^1\times\mathfrak{X}\times\mathbb{R}, \\ &&\hat{\mathfrak{H}}_j=\mathfrak{X}_{(0)}^{j+1}\times\mathfrak{X}^j\times\mathbb{R}, \\ &&\hat{\mathfrak{G}}=\hat{\mathfrak{G}}_1=\mathfrak{X}_{(0)}^2\times \mathfrak{X}_0^1\times\mathbb{R}, \\ &&\hat{\mathfrak{G}}_j=\hat{\mathfrak{G}}\cap\hat{\mathfrak{H}}_j=\mathfrak{X}_{(0)}^{j+1}\times \mathfrak{X}_0^j\times\mathbb{R}. \end{eqnarray*} Here $\mathfrak{X}^k$ is $\{y | \|y\|_k=(\sum_{0\leq\ell\leq k}(y)_{\ell}^2)^{1/2}<\infty\}$ and so on. Introducing the closed operator $$ \hat{\mathfrak{A}}(t)=\begin{bmatrix} 0 & -1 & 0 \\ \mathcal{A}(t) & 0 & -g(t) \\ 0 & 0 & 0 \end{bmatrix}$$ in $\hat{\mathfrak{H}}$ densely defined on $$\mathcal{D}(\hat{\mathfrak{A}}(t))=\hat{\mathfrak{G}},$$ we can convert (IBP) to $$\frac{du}{dt}+\hat{\mathfrak{A}}(t)u=0,\qquad u|_{t=0}=\phi_0, $$ where $$\phi_0=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}.$$ Since $g \in C^{\infty}$, the stability of $(\hat{\mathfrak{A}}(t))_t$ is reduced to that of $(\mathfrak{A}(t))_t$ by the perturbation theorem (\cite{Kato1976}, Proposition 1.2). Therefore $(\hat{\mathfrak{A}}(t))_t$ is a stable family of generators in $\hat{\mathfrak{H}}$. Since the coefficients of the differential operator $\mathcal{A}$ are in $C^{\infty}$, we see $\mathcal{D}(\hat{\mathfrak{A}}(t))\cap\hat{\mathfrak{H}}_1= \hat{\mathfrak{G}}$ and $$\frac{d^k}{dt^k}\hat{\mathfrak{A}}(t)\in L^{\infty}([0,T]; \mathcal{B}(\hat{\mathfrak{G}}_{j+1}, \hat{\mathfrak{H}}_j))$$ for all $j,k$. Moreover we have `ellipticity', i.e., for each $t$ and $j$, $u\in \mathcal{D}(\hat{\mathfrak{A}}(t))$ and $\hat{\mathfrak{A}}(t)u\in \hat{\mathfrak{H}}_j$ imply $u\in \hat{\mathfrak{H}}_{j+1}$ with $$\|u\|_{\hat{\mathfrak{H}}_{j+1}}\leq C(\|\hat{\mathfrak{A}}(t)u\|_{\hat{\mathfrak{H}}_j}+ \|u\|_{\hat{\mathfrak{H}}}). $$ In fact this condition is reduced to the fact that if $y \in \mathfrak{X}^2$ and $\mathcal{A}(t)y \in \mathfrak{X}^j$ then $y \in \mathfrak{X}^{j+2}$ and $$\|y\|_{j+2}\leq C(\|\mathcal{A}(t)y\|_j+\|y\|_1).$$ See Proposition 8. Thus we can apply \cite{Kato1976}, Theorem 2.13: if $\phi_0\in D_m(0)$, then the solution $u$ satisfies $$u \in \bigcap_{j+k=m}C^k([0,T]; \hat{\mathfrak{H}}_j), $$ which implies $$h \in \bigcap_{j+k=m}C^k([0,T];\mathfrak{X}_{(0)}^{j+1}).$$ Recall $\phi_0=(0,0,1)^T$ and the space of compatibility $D_m(0)$ is characterized by \begin{eqnarray*} D_0(0)&=&\hat{\mathfrak{H}}, \\ S^0(0)&=&I, \\ D_{j+1}(0)&=&\{\phi\in D_j(0) | S^k(0)\phi \in \hat{\mathfrak{G}}_{j+1-k}, 0\leq k\leq j\}, \\ S^{j+1}(0)\phi&=&- \sum_{k=0}^j \binom{j}{k}\Big(\frac{d}{dt}\Big)^{j-k}\hat{\mathfrak{A}}(0)S^k(0)\phi. \end{eqnarray*} See \cite{Kato1976}, (2.40), (2.41). Since $g=0$ for $0\leq t\leq \tau_1$, we have $$\Big(\frac{d}{dt}\Big)^{n}\hat{\mathfrak{A}}(0)= \begin{bmatrix}0&0&0\\ \Big(\frac{d}{dt}\Big)^n\mathcal{A}(0) & 0 & 0\\ 0&0&0 \end{bmatrix}.$$ Thus it is easy to see $\phi_0\in D_j(0)$ and $S^j(0)\phi_0=0$ for $j\geq 1$ by induction on $j$. (Note that $S^0(0)\phi_0=\phi_0$.) Hence for any positive integer $m$ we have $\phi_0\in D_m(0)$ and obtain the desired regularity of the solution $h$.
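\medskip \noindent{\bf Remark.} The elementary operator identities underlying (B.3) and (B.5) in Appendix B, namely $D\triangle-\triangle D=D^2$ and $D\dot{D}^2-\dot{D}^2D=D^2$, are easy to spot-check symbolically. The following minimal sketch is our own illustration, not part of the argument; it assumes Python with the {\tt sympy} package:
\begin{verbatim}
# Symbolic spot-check of D*Lap - Lap*D = D^2 and D*Ddot^2 - Ddot^2*D = D^2,
# where Ddot = sqrt(x) d/dx and Lap = Ddot^2 + (N-1)/2 * D.
import sympy as sp

x, N = sp.symbols('x N', positive=True)
y = sp.Function('y')(x)

D    = lambda f: sp.diff(f, x)
Ddot = lambda f: sp.sqrt(x) * sp.diff(f, x)
Lap  = lambda f: Ddot(Ddot(f)) + (N - 1) / 2 * D(f)

assert sp.simplify(D(Lap(y)) - Lap(D(y)) - D(D(y))) == 0
assert sp.simplify(D(Ddot(Ddot(y))) - Ddot(Ddot(D(y))) - D(D(y))) == 0
print("commutator identities verified")
\end{verbatim}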
{ "timestamp": "2013-07-16T02:07:20", "yymm": "1210", "arxiv_id": "1210.3670", "language": "en", "url": "https://arxiv.org/abs/1210.3670" }
\section{Introduction} \label{intro} Particle-based simulation methods have a long and successful history in plasma physics. The original proposal by Hockney~\cite{hockney:1826}, based on a method developed for fluid simulations \cite{harlow:1964}, used computational macro-particles moving in a self-consistent mean field. Fields were approximated on a spatial grid and interpolated to the particle position to determine the Lorentz force. The essential physics was successfully captured but the simulations suffered from high numerical noise as $\delta$-functions were used to represent the macro-particles. It was later realized~\cite{dawson:1983:403} that $\delta$-function particles used in this way can lead to numerical instability~\cite{Dawson60a}. A significant improvement was achieved by allowing the macro-particles to have a finite spatial extent~\cite{langdon_birdsall:2115}; these improved schemes originated the class of methods now known as Particle-In-Cell (PIC) algorithms. The PIC algorithm was first used to model plasmas with negligible collisions but later Coulomb collisions and atomic physics were included with techniques based on the Monte Carlo method~\cite{Vahedi:1995}. PIC simulations are now widely used to study a wide range of plasma systems, including laser-plasma interactions~\cite{faure:2004,geddes:2008,yin:2009,Mori:2010aa,vay:2011}, Z-pinches \cite{welch:2011}, astrophysical and magnetized plasmas~\cite{daughton_kinetic_2005a,lapenta_brackbill:2006:055904,brackbill_lapenta:2008:433}, plasma discharges and low temperature plasma processing~\cite{Vahedi:1995,Nanbu:2000}, and numerous other applications. PIC methods have subsequently undergone significant development. Research on improvements in reliability and stability led to the discovery of non-physical, purely numerical artifacts. The PIC algorithm does not conserve total energy exactly (even in the absence of temporal discretization), which leads to a surprising phenomenon known as ``grid heating" \cite{Langdon:1970aa,okuda:1972:475}. (See Ref.~\cite{Cormier-Michel:2008bs} for an overview of grid heating.) Grid heating is attributed to a kinetic instability where sub-grid structures are aliased to low frequency modes (due to finite grid resolution). Typically this instability saturates once the plasma has heated to the point where the Debye length is on the order of the grid spacing~\cite{Langdon:1970aa}. More recently, it has been shown that the choice of particle shape (the spline used for current and charge deposition and force interpolation) can lead to unphysical effects, especially when considering threshold phenomena such as self-trapping in a laser-plasma accelerator~\cite{Cormier-Michel:2008bs}. A further limitation of the PIC algorithm is that its overall accuracy is at most second order in both space and time. This is due to the interpolation (typically splines) of quantities between the continuous particle position and the spatial grid. In an attempt to correct for these deficiencies of the standard PIC algorithm, energy-conserving particle algorithms were devised \cite{lewis:1970:136}. While strict energy conservation eliminated the grid heating instability, these algorithms had their own drawbacks. They did not seem to have the same flexibility with respect to a choice of particle shapes as PIC algorithms, which in turn affected the level of numerical noise; \textit{i.e.}, for the same numerical accuracy, PIC algorithms had lower noise levels.
Energy-conserving algorithms also may require mass-matrix inversions, which are avoided by the field-based formulations of PIC. As a result, energy-conserving algorithms did not become as popular as PIC algorithms. One major difference between PIC and energy-conserving algorithms lies in the way each is formulated. In PIC algorithms \cite{Hockney88,Birdsall:1991aa}, relations between the discretized electric (vector) potential, electric (magnetic) field, charge deposition, and current deposition were obtained by discretizing the corresponding continuous relations and equations. In this process, critical terms of the order of the accuracy of discretization are dropped (as is justified in an asymptotic procedure), which leads to the loss of energy conservation (and possibly the violation of other conservation laws). In comparison, energy-conserving algorithms were derived from a variational principle \cite{lewis:1970:136,Eastwood:1991aa}, using the fact that the Vlasov equation could be obtained from Low's Lagrangian~\cite{low:1958:282}. A number of benefits of using a variational principle were pointed out: basic properties of the original system were retained in the reduced system; a natural way of making coordinate transformations was provided; and the use of high-accuracy space and time solvers was possible. The goal of this paper is to generalize previous variational formulations and to offer a new formulation of energy-conserving algorithms based on the Hamiltonian and the non-canonical Poisson bracket proposed by Morrison \cite{morrison:1980:383,Weinstein-Morrison81,Morrison:1982aa}. Our approach is to use particular reductions of the distribution function to a finite collection of terms as well as particular reductions of the continuous fields to a finite number of degrees of freedom, either in the Lagrangian or in the Hamiltonian and Poisson bracket. As a result of our general method, we show how to avoid many of the previous drawbacks and deficiencies of both PIC and the energy-conserving methods. In addition to the energy-conserving property of all algorithms in this work, we: (i)~show that particle shapes in energy-conserving algorithms can be chosen with more freedom instead of being a delta-function in space. In fact, the shape of a spatially extended particle has very few physical constraints; the shape may be symmetric about its centroid or may exploit the spatial symmetry of a particular physical problem, \textit{e.g.}, systems with azimuthal symmetry or symmetries in the gyrokinetic approximation \cite{lee:2003:3196,yulin:2005}; (ii)~relax the method of time integration of the equations of motion (previously most often leapfrog), allowing for higher than second-order accuracy. The choice of a time integrator becomes limited only by numerical stability. In this paper we emphasize the spatial discretization and leave time continuous, which allows for convenient formulations of time-explicit schemes. We prove conservation laws with continuous time and only in the last step do we choose a particular (explicit) time advancing scheme; (iii)~show how to reduce continuous quantities with either grid-based reduction (using finite differences) or truncated bases; previous authors have only used truncated bases, which for the case of finite elements may necessitate mass matrix inversions.
By using grid-based reduction, mass matrices do not appear (for an example, see \ref{HybridModel}), which may be computationally advantageous; (iv)~derive formulations in terms of fields that are based on the Hamiltonian and a non-canonical Poisson bracket. Previously only potential-based energy-conserving algorithms have been derived from a variational principle~\cite{lewis:1970:136,Eastwood:1991aa}. Using field-based formulations eliminates the need for solving Poisson's equation; and (v)~derive a particle algorithm that conserves both total energy {\it and} total momentum. The arguments leading to this algorithm demonstrate the usefulness of the variational approach and exploit the relations between conservation laws and symmetries of the Lagrangian. Previous such models were derived by assuming a delta function for the particle shape \cite{evstatiev:2003,evstatiev:2005}. Throughout, we restrict our discussion to the case of a one-dimensional, nonrelativistic, unmagnetized, electrostatic plasma. This is done purely to streamline the discussion, elucidating the central ideas. The extension of these concepts to the fully relativistic, electromagnetic case, while involving numerous technical details, is largely straightforward and will be the subject of a future publication. It has been emphasized before \cite{lewis:1970:136,Eastwood:1991aa} that variational formulations naturally lead to higher overall accuracy particle algorithms (\textit{i.e.}, both in space and time). Here we show that this remains true with all of the above relaxed conditions. We show that force interpolation, field integrators, and time integrators may be chosen to increase the accuracy of a particle method beyond second order. In the course of all derivations, we point out where a certain property is being relaxed or is being lost. Time-implicit formulations of particle algorithms hold significant attraction for simulating problems where long-time evolution is necessary. Recently, authors have successfully reformulated the PIC algorithm in terms of implicit time integration with the added benefit of energy conservation \cite{chen_chacon:2011,markidis_lapenta:2011}. However, these formulations do not offer a general derivation, and so it is unclear how they can be extended. For example, these formulations use the Crank--Nicolson time integrator and charge-conserving particle shapes, but it is an open question how to extend these methods to use more accurate time integrators. While continuous-time formulations are the focus of this paper, we note that one may discretize the action principle in both space and time. In the context of particle methods for plasma simulations, this was first considered by Eastwood~\cite{Eastwood:1991aa}. While Eastwood's method conserved energy exactly, it did so at the expense of an implicit time-advance. When the time-advance was altered to be fully explicit, exact energy conservation was lost. The notion of performing the temporal discretization in the action has been studied extensively by Marsden and co-workers (see for example Refs~\cite{wendlandt_marsden:1997} and \cite{Marsden:2001aa}). There are a number of attractive features of this approach and it is a natural extension of the methods presented herein; this will be the subject of future work.\label{discrete-mechanics} The paper is organized as follows. Section~\ref{Lagrangian_PIC} is devoted to deriving algorithms based on a Lagrangian formulation of the Vlasov--Poisson system.
This section presents the finite-differencing formulation. In section~\ref{TruncatedBasisDerivation} the particle models are derived from a Lagrangian formulation in terms of truncated bases. The important model which conserves both momentum and energy is derived there. In these derivations the fields are described in terms of the electrostatic potential. Section~\ref{BracketDerivation} presents the Hamiltonian derivation of particle models using a truncated basis and a reduction of the non-canonical Poisson bracket. The equations of motion are formulated in terms of the electric field. Section~\ref{Examples} illustrates properties of the derived particle models with numerical examples. Conclusions are in section~\ref{Conclusions}. \ref{particle-shapes} gives many examples of charge deposition rules, while \ref{HybridModel} presents a hybrid cold fluid-kinetic particle model from a Lagrangian starting point. It demonstrates our general method with a different reduction of the particle distribution function. It also illustrates how the mass matrix and its inverse may be avoided by the use of grid-based reduction of the continuous quantities. \section{Lagrangian formulation} \label{Lagrangian_PIC} A plasma with negligible collisions is well described by a single-particle phase-space distribution function, $f$, whose phase-space evolution is governed by the Vlasov equation~\cite{Krall:1973aa} \begin{equation} \frac{\partial f}{\partial t} + v \frac{\partial f}{\partial x} + \frac{q_s}{m_s}\,E\,\frac{\partial f}{\partial v} = 0,\label{Vlasov-v} \end{equation} where $E=-\nabla\varphi$ is the electric field, $\varphi$ is the electric potential, and $m_s$ and $q_s$ are the species mass and charge. For an initial phase-space distribution $f_0(\tilde x, \tilde v)$, the distribution at any later time is given by \begin{equation} f(x, v, t) = f_0(\tilde x, \tilde v), \label{f-evolution} \end{equation} where $x(t; \tilde x, \tilde v)$ and $v(t; \tilde x, \tilde v) = \partial x(t; \tilde x, \tilde v)/\partial t$ are the particle trajectories with initial conditions $\tilde x$ and $\tilde v$: $x(0;\tilde x,\tilde v) = \tilde x$ and $v(0;\tilde x,\tilde v) = \tilde v$. The particle trajectories correspond to characteristics of the Vlasov equation and~\eqref{f-evolution} is simply the statement that the distribution function is constant on the characteristics. Vlasov dynamics can be obtained from the Lagrangian \cite{low:1958:282,Galloway:1971aa,Ye:1992aa} \begin{multline} \mathcal{L} = \frac{m_s}{2}\intD\dmut f_0(\tilde x, \tilde v)\, \left[\frac{\partial x(t; \tilde x, \tilde v)}{\partial t} \right]^2 \\ - q_s\intD\dmut\, f_0(\tilde x, \tilde v)\,\varphi\left( x(t; \tilde x, \tilde v)^{\!\!\!\phantom{k}},t\right) + \frac{1}{8\pi} \intD{\meas x} \left[ \nabla \varphi(x) \right]^2, \label{Lows_Lagrangian} \end{multline} where $x(t; \tilde x, \tilde v)$ and $\varphi(x)$ are to be varied independently. (Here we consider a single-species plasma but the extension to multiple species is obvious.) In the usual way, demanding the action be stationary with respect to variations of the dynamical variables leads to the equations of motion. Variation with respect to particle positions gives \begin{equation} m_s\,\ddot x = -q_s\nabla\varphi\,, \end{equation} while variation with respect to the potential gives \begin{equation} \nabla^2\varphi = -4\pi\,q_s\intD{\meas v} f(x,v,t) \equiv -4\pi\rho(x).
\label{Poisson} \end{equation} We have used \cite{Galloway:1971aa} \begin{equation} \meas\xt\,\meas\vt f_0(\tilde x,\tilde v) = \meas x\,\meas v f(x, v, t) \label{measure} \end{equation} in~\eqref{Poisson} and we have assumed either periodic boundary conditions or an infinite system to allow surface terms to be dropped. Note that \eqref{measure} is a statement of particle number conservation and is equivalent to Gardner's restacking theorem \cite{Gardner:1963rz}. The basic idea of Lagrangian macro-particle methods lies in the representation of the full distribution function $f({ x}, { v}, t)$ as a sum of moving spatial volumes, $f_\alpha({ x}, {v}, t)$, called macro-particles: \begin{eqnarray} f({ x}, {v}, t) &=& \sum_\alpha f_\alpha({ x}, {v}, t) \nonumber \\ &=& \sum_\alpha w_\alpha\,S[{ x}-\xi_\alpha(t)] \,\delta[{ v}-\xid_\alpha(t)]. \label{Sum_f_i} \end{eqnarray} The choice of a delta function in velocity space is not essential but avoids the necessity to track stretching phase space volumes, which is why we adhere to it. In Eq.~(\ref{Sum_f_i}), $w_\alpha$ are constant weights and the function $S$ describes the fixed spatial extent of the computational particle (hereafter we use the terms particle, computational particle, and macro-particle interchangeably unless otherwise specified) and is normalized as \begin{equation}\label{Shape_norm} \intD{\meas x} S[{ x}-\xi_\alpha(t)] = 1. \end{equation} An additional simplification is made by assuming that all particles have the same shape. We note that the representation (\ref{Sum_f_i}) is general and independent of whether both electric and magnetic fields are present in the system, {\it i.e.}, it is valid for the general Vlasov--Maxwell system. For clarity of the presentation, in this paper we consider only electrostatic, non-relativistic models. Electromagnetic and relativistic models can be derived similarly to this presentation and will be presented in a future publication. We view \eqref{Sum_f_i} as a particular \emph{reduction} of the particle distribution function. \ref{HybridModel} gives another example of such a reduction. Substituting our form of the distribution function, \eqref{Sum_f_i}, into the Lagrangian and again using \eqref{measure}, we obtain a reduced Lagrangian \begin{equation} \begin{aligned} \mathcal{L} &= \frac{m_s}2\sum_{\alpha=1}^{\Np} w_\alpha\,\dot\xi^2_\alpha - q_s \sum_{\alpha=1}^{\Np} w_\alpha \intD{\meas x} S(x-\xi_\alpha)\,\varphi(x) + \frac{1}{8\pi}\intD{\meas x}\left(\nabla\varphi\right)^2\\ &= \LL_{\rm kin} + \LL_{\rm int} + \LL_{\rm field}\,, \end{aligned} \label{L_Cont} \end{equation} where \begin{align} \LL_{\rm kin} &= \frac{m_s}2\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha^2, \label{L_kin} \\ \LL_{\rm int} &= -q_s\sum_{\alpha=1}^{\Np} w_\alpha\intD{\meas x} S(x-\xi_\alpha)\,\varphi(x)\,, \label{coupling} \\ \LL_{\rm field} &= \frac1{8\pi}\intD{\meas x}\left(\nabla\varphi\right)^2.\label{field} \end{align} Although we have replaced a continuum of particles with labels $\tilde x$ and $\tilde v$ by $N_p$ macro-particles, we still have an infinite degree-of-freedom system due to the presence of the continuous field $\varphi$. The equations of motion are obtained from \eqref{L_Cont} by considering variations of the particle position and of the potential.
For the particles, the usual Euler--Lagrange equation \begin{equation} \frac d{dt}\,\frac{\partial\mathcal{L}}{\partial\xid_\alpha} - \frac{\partial\mathcal{L}}{\partial\xi_\alpha} =0, \label{EOM_xi_alpha_temp} \end{equation} gives \begin{equation} \xidd_\alpha = - \frac{q_s}{m_s}\intD{\meas x}\frac{\partial S}{\partial\xi_\alpha} \varphi(x) = -\frac{q_s}{m_s}\intD{\meas x} S[{x}-\xi_\alpha(t)] \nabla \varphi\,. \label{xi-EOM-Cont} \end{equation} Since the potential is a field, the Euler--Lagrange equation for the potential is \begin{equation} \frac{\delta\mathcal{L}}{\delta \varphi} = 0, \end{equation} where $\delta/\delta \varphi$ denotes a functional derivative. Then \begin{equation} \nabla^2 \varphi = - 4\pi\,q_s\sum_{\alpha=1}^{\Np} w_\alpha S[{ x}-\xi_\alpha(t)]\, . \label{Poisson-Cont} \end{equation} Note that the factor $q_s/m_s$ appearing in \eqref{xi-EOM-Cont} is the physical charge-to-mass ratio of the plasma species. It is not necessary to make the ad-hoc assumption that the macro-particles have the same charge-to-mass ratio as the plasma species; rather, this is a consequence of the phase-space decomposition \eqref{Sum_f_i}. Furthermore, the second form of the force in \eqref{xi-EOM-Cont} may clearly be interpreted as the electric field averaged over the particle shape. The substitution of \eqref{Sum_f_i} into \eqref{Lows_Lagrangian} is equivalent to a choice of a trial function for $f$ that depends on a number of parameters, which in our case are the particle positions and velocities. The values of these parameters are obtained by solving the equations resulting from the variation \eqref{EOM_xi_alpha_temp}. Other choices of trial functions may lead to models without particles at all \cite{lewis_barnes:1987,Shadwick:2010aa}. A significant advantage of the variational formulation is the connection between symmetries and conservation laws as embodied in Noether's theorem \cite{Jose:1998aa}. Our introduction of macro-particles through \eqref{Sum_f_i} neither results in explicit time-dependence in the Lagrangian nor breaks translational invariance of the Lagrangian, thus we should expect the equations of motion \eqref{xi-EOM-Cont} and \eqref{Poisson-Cont} to exactly conserve both energy and momentum. The total energy of the system is the sum of macro-particle kinetic energy and field energy, \begin{equation} W = \frac{m_s}2\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha^2 + \frac{1}{8\pi}\intD{\meas x}\left(\nabla\varphi\right)^2. \label{W-cont} \end{equation} Using the equations of motion, it is straightforward to see that $W$ is an invariant: \begin{align} \frac{dW}{dt} & = m_s\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha\,\xidd_\alpha - \frac{1}{4\pi}\intD{\meas x}\varphi\,\frac{\partial}{\partial t}\,\nabla^2\varphi\nonumber\\ & = -q_s\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha\intD{\meas x} S(x - \xi_\alpha)\nabla\varphi + q_s\intD{\meas x}\varphi\,\frac{\partial}{\partial t}\sum_{\alpha=1}^{\Np} w_\alpha S(x-\xi_\alpha)\nonumber\\ & = -q_s\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha\intD{\meas x} S(x - \xi_\alpha)\nabla\varphi - q_s\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha\intD{\meas x}\varphi\,\frac{\partial}{\partial x}S(x-\xi_\alpha)\nonumber\\ & = -q_s\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha\intD{\meas x} S(x - \xi_\alpha)\nabla\varphi + q_s\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha\intD{\meas x} S(x-\xi_\alpha)\,\nabla\varphi\nonumber\\ & = 0\,.
\end{align} The total momentum of the system is simply \begin{equation} P = m_s\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha\,, \label{P_cont} \end{equation} since the electrostatic field carries no momentum (the Poynting vector is zero in the electrostatic approximation). Now \begin{align} \frac{dP}{dt} & = -q_s\sum_{\alpha=1}^{\Np} w_\alpha\intD{\meas x} S(x-\xi_\alpha)\nabla\varphi \nonumber\\ & = \frac1{4\,\pi}\intD{\meas x} \nabla^2\varphi\nabla\varphi \nonumber\\ & = \frac1{8\,\pi}\intD{\meas x} \nabla\left(\nabla\varphi\right)^2 \nonumber\\ & = 0\,, \end{align} where we have used \eqref{Poisson-Cont}. At this point in our reduction, we have a finite number of macro-particles representing the plasma but a continuous field for the potential. Here one must provide some approximate or exact solution of \eqref{Poisson-Cont}, which is then used to integrate the macro-particle equations of motion. One possibility would be to use methods based on evaluating the Green's function, constructing $\varphi$ as the superposition of the potentials due to each macro-particle. Even though the macro-particles interact via the mean field, computation of $\varphi$ using Green's functions scales as $O(N_p^2)$ and is thus limited to relatively small systems. A more computationally advantageous alternative is to introduce a discrete representation for the potential. There are two general approaches: using a spatial grid, approximating the potential by its values at the grid points, or using a truncated set of (local or global) basis functions and representing the potential by its projection onto the basis. The interaction term in the Lagrangian, \eqref{coupling}, provides both the force in \eqref{xi-EOM-Cont} as well as the charge density in \eqref{Poisson-Cont}. Of course, this will continue to be the case when the continuous potential is replaced by a discrete approximation. It will be necessary to approximate $\LL_{\rm int}$ consistently with the choice of the discrete potential but this single approximation ultimately yields both the force term in the $\xi_\alpha$ equation of motion and the charge density in a discrete analogue of Poisson's equation. Thus we are guaranteed that these terms are consistently approximated. \subsection{Discretization using a spatial grid} \label{spatial-grid} We assume a fixed spatial grid $x_i$ with $i\in[1,N_g]$ and grid spacing $h$, with $\varphi_i$ being the numerical approximation to $\varphi(x_i)$. We must now approximate two terms in \eqref{L_Cont}, $\LL_{\rm int}$ and $\LL_{\rm field}$. The interaction term requires knowledge of $\varphi$ between the grid points, so some manner of interpolation is required. Finite elements \cite{Becker:1981aa} offer a consistent way to perform such interpolations to any accuracy. Let $\Psi_i(x)$, $i = 1,\ldots,N_g$, be a finite-element basis of some order. We interpolate $\varphi$ between the grid points by \begin{equation} \varphi(x) = \sumg i \varphi_i\Psi_i(x) \label{fe-interp} \end{equation} and thus the integral in \eqref{coupling} becomes \begin{equation} \intD{\meas x} S(x-\xi_\alpha)\,\varphi(x) = \sumg i\varphi_i\intD{\meas x} S(x - \xi_\alpha)\Psi_i(x) = \sumg i\varphi_i\rho_i(\xi_\alpha)\,, \label{discrete-coupling} \end{equation} where \begin{equation} \rho_i(\xi_\alpha) = \intD{\meas x} S(x - \xi_\alpha)\Psi_i(x) \label{deposition} \end{equation} is the effective (projected) shape of the particle. Note that the expression for $\rho_i$ can be computed analytically since the function $S$ is known.
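As an illustration of how \eqref{deposition} is evaluated in practice, the following sketch (our own; it assumes Python with the {\tt scipy} package) computes the projected shape by quadrature for the cell-wide top-hat $S$ and linear tent elements $\Psi_i$ that are introduced later in this section, reproducing the quadratic deposition rule \eqref{quadratic} and the charge normalization \eqref{charge-norm}:
\begin{verbatim}
# rho_i(xi) = int S(x - xi) Psi_i(x) dx for a cell-wide top-hat S and
# linear tent elements Psi_i; h = 1, particle at xi = 0.2 (nearest
# grid point x_k = 0).
from scipy.integrate import quad

h, xi = 1.0, 0.2

def tent(x, xg):                   # linear finite element centered at xg
    return max(0.0, 1.0 - abs(x - xg) / h)

# S = 1/h on [xi - h/2, xi + h/2] and zero elsewhere, so we integrate
# over that support only.
weights = [quad(lambda x: tent(x, i * h) / h, xi - h / 2, xi + h / 2)[0]
           for i in (-1, 0, 1)]
print(weights)       # [0.045, 0.71, 0.245]: the quadratic rule at Delta = 0.2
print(sum(weights))  # 1.0: deposited charge is conserved
\end{verbatim}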
If the $\Psi_i(x)$ are constructed from Lagrange polynomials, then $\sumg i \Psi_i(x) = 1$ and \begin{equation} \sumg i \rho_i(\xi_\alpha) = \sumg i \intD{\meas x} S(x - \xi_\alpha)\Psi_i(x) = \intD{\meas x} S(x - \xi_\alpha) = 1\,. \label{charge-norm} \end{equation} This property means that the total charge deposited on the grid is constant at every instant of time. It remains to approximate $\LL_{\rm field}$ in terms of $\varphi_i$. This can be approached in two ways which give roughly equivalent results. We can use \eqref{fe-interp} to write the integral in \eqref{field} as \begin{equation} \intD{\meas x}\left(\nabla\varphi\right)^2 = \sumg{i,j}\varphi_i\varphi_j\intD{\meas x}\frac{d\Psi_i(x)}{dx}\,\frac{d\Psi_j(x)}{dx}\,. \end{equation} Defining \begin{equation} -hK_{ij} = \intD{\meas x}\frac{d\Psi_i(x)}{dx}\,\frac{d\Psi_j(x)}{dx}\,, \label{Kij} \end{equation} we have \begin{equation} \intD{\meas x}\left(\nabla\varphi\right)^2 = -h\sumg{i,j}\varphi_i\varphi_jK_{ij}\,. \label{field-fe} \end{equation} Alternatively, after integrating by parts in $\LL_{\rm field}$, we can approximate the integral as \begin{equation} \intD{\meas x}\left(\nabla\varphi\right)^2 = - \intD{\meas x}\varphi(x)\nabla^2\varphi(x) \approx -h\sumg i \left.\varphi_i \frac{d^2\varphi}{dx^2}\right|_{x_i}. \end{equation} While this appears to simply be using the trapezoidal rule to evaluate the integral, with either periodic boundary conditions or an infinite domain, this approximation has spectral accuracy, that is, all modes supported by the grid are integrated exactly. We complete the approximation by choosing a finite-difference representation for the second derivative. Regardless of details of the finite-difference approximation, it can always be expressed as \begin{equation} \left.\frac{d^2\varphi}{dx^2}\right|_{x_i} = \sumg j\widetilde K_{ij}\varphi_j + O(h^a), \label{K_tilde} \end{equation} for some integer $a$. Thus we have \begin{equation} \intD{\meas x}\left(\nabla\varphi\right)^2 \approx -h\sumg{i,j} \varphi_i \widetilde K_{ij}\varphi_j\,, \label{field-fd} \end{equation} which has the same form as \eqref{field-fe}. Notice that while $K_{ij}$ is always symmetric [cf.~\eqref{Kij}], this need not be true for $\widetilde{K}_{ij}$. We now arrive at the finite degree-of-freedom Lagrangian \begin{equation} \mathcal{L} = \frac{m_s}2\sum_{\alpha=1}^{\Np} w_\alpha\,\dot\xi^2_\alpha - q_s \sum_{\alpha=1}^{\Np}\sumg i w_\alpha \rho_i(\xi_\alpha)\,\varphi_i - \frac{h}{8\pi}\sumg{i,j}\varphi_i\mathcal{K}_{ij}\varphi_j , \label{L-grid} \end{equation} where we take either $\mathcal{K}_{ij} = K_{ij}$ or $\mathcal{K}_{ij} = \widetilde K_{ij}$. The dynamical equations are obtained by demanding that the action be stationary with respect to variations in both $\xi_\alpha$ and $\varphi_i$. Taking these variations yields \begin{equation} \xidd_\alpha = - \frac{q_s}{m_s}\sumg i\frac{\partial\rho_i(\xi_\alpha)}{\partial\xi_\alpha}\,\varphi_i\, \label{xi-EOM} \end{equation} and \begin{equation} \sumg j\mathcal{K}_{ij} \varphi_j = - 4\pi\,\frac{q_s}h\sum_{\alpha=1}^{\Np} w_\alpha\,\rho_i(\xi_\alpha)\,. \label{Poisson-d} \end{equation} The reason we choose to integrate by parts in \eqref{field-fd} is now clear: by doing so, we are able to directly specify the difference method for the second derivative that appears in Poisson's equation, \eqref{Poisson-d}.
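In practice \eqref{Poisson-d} is a sparse linear solve. A minimal sketch (ours; Python with {\tt numpy}, using the second-order periodic difference matrix adopted in the example below and a placeholder deposited charge):
\begin{verbatim}
# Solve sum_j K_ij phi_j = -4 pi (q_s/h) rho_i on a periodic grid, with
# K the standard second-order central difference of d^2/dx^2.  K has a
# constant null mode, so the right-hand side is projected to zero mean
# and a least-squares solve is used.
import numpy as np

Ng, h, qs = 64, 0.1, -1.0
I = np.eye(Ng)
K = (np.roll(I, 1, axis=1) - 2 * I + np.roll(I, -1, axis=1)) / h**2

rho = np.cos(2 * np.pi * np.arange(Ng) / Ng)   # placeholder deposited charge
rhs = -4 * np.pi * (qs / h) * rho
rhs -= rhs.mean()                              # solvability on a periodic domain

phi = np.linalg.lstsq(K, rhs, rcond=None)[0]
print(np.abs(K @ phi - rhs).max())             # residual at machine precision
\end{verbatim}
The zero-mean projection of the right-hand side reflects the overall charge neutrality required on a periodic domain.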
Discretizing $\varphi(x)$ in \eqref{W-cont} in the same manner as in the Lagrangian, we have \begin{equation} W_L = \frac{m_s}2\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha^2 - \frac{h}{8\pi}\sumg{i,j}\varphi_i\mathcal{K}_{ij}\varphi_j. \label{W} \end{equation} Using the equations of motion, we find \begin{align} \frac{dW_L}{dt} & = m_s\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha\,\xidd_\alpha - \frac{h}{4\pi}\sumg{i,j}\varphi_i\,\mathcal{K}_{ij}\frac{d\varphi_j}{dt}\nonumber\\[2pt] & = -q_s\sum_{\alpha=1}^{\Np}\sumg i w_\alpha\,\xid_\alpha\,\varphi_i\,\frac{\partial\rho_i}{\partial\xi_\alpha} + q_s\sumg i\varphi_i\sum_{\alpha=1}^{\Np} w_\alpha\frac{d\rho_i(\xi_\alpha)}{dt}\nonumber\\[2pt] & = -q_s\sum_{\alpha=1}^{\Np}\sumg i w_\alpha\,\varphi_i\,\frac{\partial\rho_i}{\partial\xi_\alpha}\,\xid_\alpha + q_s\sumg i\sum_{\alpha=1}^{\Np} w_\alpha\,\varphi_i\,\frac{\partial\rho_i}{\partial\xi_\alpha}\xid_\alpha\nonumber\\ & = 0\,. \end{align} Introducing a spatial grid does not affect energy conservation. This is expected since the spatial discretization of $\varphi$ does not introduce explicit time-dependence into the Lagrangian. An immediate advantage of the variational approach is that models derived in this way are automatically free of grid heating. A side effect of introducing a spatial grid is that it breaks the translation invariance of $\mathcal{L}$ and consequently total momentum is no longer exactly conserved; see section~\ref{TruncatedBasisDerivation} for a more complete discussion. We conclude this section by providing a concrete example of this procedure to derive a model that is second-order accurate in $h$. For simplicity, we consider the case of a charge-neutral electron plasma with an immobile ionic background and a spatially periodic domain. Since this system has no ion dynamics, we can forgo summing over species and simply introduce the ion density into the Lagrangian \begin{equation} \mathcal{L} = \frac{m_e}2\sum_{\alpha=1}^{\Np} w_\alpha\,\dot\xi^2_\alpha - q_e \sum_{\alpha=1}^{\Np}\sumg i w_\alpha \rho_i(\xi_\alpha)\,\varphi_i - \sumg i \rho^{\scriptscriptstyle\textsc{(Ion)}}_i\,\varphi_i - \frac{h}{8\pi}\sumg{i,j}\varphi_i\mathcal{K}_{ij}\varphi_j, \label{L-qn} \end{equation} where \begin{equation} \rho^{\scriptscriptstyle\textsc{(Ion)}}_i = q_{\scriptscriptstyle\textsc{I} }\intD{\meas x} n^{\scriptscriptstyle\textsc{(Ion)}}(x)\Psi_i(x), \end{equation} with $n^{\scriptscriptstyle\textsc{(Ion)}}(x)$ being the given ion density. Linear finite elements yield second-order accurate interpolation \cite{Becker:1981aa} (see Figure \ref{linearFE}): \begin{equation} \Psi^{\scriptscriptstyle(1)}_i(x) = \begin{cases} 1 - \dfrac{|x - x_i|}h & x_{i-1} \le x \le x_{i+1},\\ 0 &\textrm{otherwise.} \end{cases} \end{equation} To determine $\rho_i$ we need to specify $S(x)$. Regardless of the choice of $S$, the accuracy of the interpolation will be second order due to our basis choice. The choice of $S$ affects the quality of our approximation through the extent to which \eqref{Sum_f_i} is a good ansatz but has no influence on the formal order of the model. Arguably the simplest choice for $S$ is a top-hat cell-wide function: \begin{equation} S(x - \xi_\alpha) = \begin{cases}\dfrac1h& |x - \xi_\alpha| \le \frac12\,h,\\[6pt]0 &\textrm{otherwise.}\end{cases} \label{top-hat-S} \end{equation} While we have chosen $S$ to be exactly one grid-cell wide, this is by no means essential.
The choice of support of $S$ is completely independent of the grid spacing; the particular choice in \eqref{top-hat-S} allows us to make connection with the usual PIC particle shapes (see Figure \ref{shapes-fig} and Table \ref{shapes}). \begin{figure}[htb] \centering \includegraphics{linearFE} \caption{Linear finite element basis functions. Each basis function $\Psi_i$ is identified with the grid point at which it takes on the value $1$; e.g., $\Psi_i$ is the tent function with support $[x_{i-1},x_{i+1}]$.} \label{linearFE} \end{figure} We now use \eqref{deposition} to determine the grid charge deposition. With this $S$, for any $\xi_\alpha$, there are only three values of $i$ for which $\rho_i(\xi_\alpha)\ne 0$. Take $x_k$ to be the grid point nearest $\xi_\alpha$ and let $\Delta = (\xi_\alpha - x_k)/h$. Clearly $|\Delta| \le 1/2$. It is straightforward to evaluate \eqref{deposition} to obtain \begin{equation} \begin{aligned} \rho_{k-1} &= \frac12\left(\Delta - \frac12\right)^2,\\ \rho_{k\phantom{-1}} &= \frac34-\Delta^2,\\ \rho_{k+1} &= \frac12\left(\Delta + \frac12\right)^2. \end{aligned} \label{quadratic} \end{equation} This is equivalent to the charge deposition obtained from the usual PIC quadratic particle shape \cite{Hockney88}. It is possible to recover all of the usual smooth particle shapes. For example, taking \begin{equation} S(x - \xi_\alpha) = \frac1h \begin{cases} \displaystyle \frac{3}{4} - \frac{\left(x - \xi_\alpha\right)^2}{h^2} & |x - \xi_\alpha| \le \frac12h,\\[8pt] \displaystyle \frac12\left(\frac32 - \frac{|x - \xi_\alpha|}{h}\right)^2 & \frac12h < |x - \xi_\alpha| \le \frac32h,\\[8pt] \displaystyle 0 &\text{otherwise,} \end{cases} \label{S2} \end{equation} we obtain \begin{equation} \begin{aligned} \rho_{k-2} &= \frac1{24}\left(\Delta - \frac12\right)^4\\[8pt] \rho_{k-1} &= \frac{19}{96} - \frac{11}{24}\,\Delta + \frac14\,\Delta^2 + \frac{1}{6}\,\Delta^3 - \frac{1}{6}\,\Delta^4\\[8pt] \rho_{k\phantom{-1}} &= \frac{115}{192} - \frac58\,\Delta^2 + \frac14\,\Delta^4\\[8pt] \rho_{k+1} &= \frac{19}{96} + \frac{11}{24}\,\Delta + \frac14\,\Delta^2 - \frac{1}{6}\,\Delta^3 - \frac{1}{6}\Delta^4\\[8pt] \rho_{k+2} &= \frac1{24}\left(\Delta + \frac12\right)^4 \end{aligned} \label{quartic} \end{equation} which is equivalent to the usual quartic charge deposition rule. We take up the matter of particle shapes in some detail in \ref{particle-shapes}. In particular, we demonstrate a cubic $\rho_{k}$ spanning only three grid points. All that remains is to approximate \eqref{field} to second-order accuracy. Evaluating \eqref{Kij} for our linear basis $\Psi^{\scriptscriptstyle(1)}_i$ is straightforward. For any $i$, we can see that $\Psi^{\scriptscriptstyle(1)}_i{}'(x)$ has a non-zero overlap with $\Psi^{\scriptscriptstyle(1)}_j{}'(x)$ only for $j = i$ and $j = i\pm1$, and thus we have \begin{equation} K_{ij} = \begin{cases} -\dfrac2{h^2} & j = i,\\[8pt] \phantom{-}\dfrac1{h^2} & j = i\pm1\,. \end{cases} \end{equation} Alternatively, we can use finite difference approximations and evaluate \eqref{field} using \eqref{field-fd}. Since we are considering a periodic domain, it is reasonable to choose a central difference approximation \begin{equation} \frac{d^2\varphi}{dx^2}\Biggl|_{x_i} = \frac{\varphi_{i+1} - 2\,\varphi_i + \varphi_{i-1}}{h^2} + O(h^2)\,, \end{equation} which gives \begin{equation} \widetilde K_{ij} = K_{ij}\,. \end{equation} Linear finite-elements give the same approximation to \eqref{field} as taking second-order central differences.
This is essentially a coincidence; higher-order finite element bases (quadratic, cubic, \textit{etc.}) do not yield expressions for $K_{ij}$ that can be equated to conventional differencing schemes. Nothing, other than the symmetry of the problem, forces us to choose central differences; any second-order approximation would suffice. Note that using an expression for $\widetilde K_{ij}$ that is accurate beyond second order will not increase the overall spatial order of the method unless a correspondingly more accurate interpolation scheme is used to evaluate \eqref{deposition}. The macro-particle equation of motion, which is not altered by the specifics of the spatial discretization, remains as in \eqref{xi-EOM}. For a uniform ion background, $n^{\scriptscriptstyle\textsc{(Ion)}}_0$, we have \begin{equation} \rho^{\scriptscriptstyle\textsc{(Ion)}}_i = q_{\scriptscriptstyle\textsc{I} }\,n^{\scriptscriptstyle\textsc{(Ion)}}_0\intD{\meas x} \Psi^{\scriptscriptstyle(1)}_i(x) = q_{\scriptscriptstyle\textsc{I} }\,n^{\scriptscriptstyle\textsc{(Ion)}}_0\,h\,. \end{equation} For completeness we restate Poisson's equation including the ionic background and our approximation for $\mathcal{K}_{ij}$ \begin{equation} \frac1{h^2}\left(\varphi_{i+1} - 2\varphi_i + \varphi_{i-1}\right) = - 4\pi\,\frac{q_s}h\sum_{\alpha=1}^{\Np} w_\alpha\,\rho_i(\xi_\alpha) - 4\pi\,q_{\scriptscriptstyle\textsc{I} }\,n^{\scriptscriptstyle\textsc{(Ion)}}_0\,. \end{equation} \subsection{Discretization using a truncated basis and the question of momentum conservation} \label{TruncatedBasisDerivation} In the previous section the interaction and field parts of the Lagrangian (\ref{L_Cont}) were reduced from infinite to finite degree-of-freedom quantities by discretizing on a grid. We noted that the introduction of a spatial grid breaks the translational invariance of $\mathcal{L}$, which leads to loss of momentum conservation in the reduced system. In this section we consider a reduction using a truncated global basis and investigate the question of momentum conservation in this case. We show that replacing the continuous potential by a finite collection of projections onto a truncated basis can result in a discrete system that retains translation invariance. (Of course, if the basis is not truncated, which is not useful from a computational perspective, then we would expect translation invariance to be maintained for any complete basis.) Let $\Phi_m(x)$, $m = 1,\ldots, M$ be the first $M$ elements of an orthonormal basis. We approximate the potential as \begin{equation} \varphi \approx \sum_{m=1}^M \varphi_m\,\Phi_m(x), \end{equation} where \begin{equation} \varphi_m = \intD{\meas x} \Phi^\dag_m(x)\,\varphi(x) \end{equation} and $\Phi^\dag_m(x)$ is the dual to $\Phi_m(x)$ satisfying \begin{equation} \intD{\meas x} \Phi_m(x)\,\Phi^\dag_n(x) = \delta_{mn}, \label{bv-orthog} \end{equation} for $m,n = 1,\ldots,M$. The accuracy of this approximation will depend on the number of elements kept and the convergence properties of the basis. For a truncated system to possess translation invariance requires that the basis have certain properties. These properties are made evident by shifting the origin by an amount $\delta x$ while leaving the physical system unchanged and requiring this transformation to be a symmetry of $\mathcal{L}$ \cite{Jose:1998aa}. The kinetic term, $\LL_{\rm kin}$, is obviously translation invariant under such a shift since $\delta x$ is time independent. Consider $\LL_{\rm int}$.
Let $x'$ be the new coordinate, with $x = x' + \delta x$. The particle coordinates and potential relative to $x'$ are \begin{equation} \begin{aligned} \xit_\alpha & = \xi_\alpha - \delta x , \\ \widetilde\V(x') & = \varphi(x' + \delta x) \,. \end{aligned} \end{equation} The symmetry condition is then \begin{equation} \LL_{\rm int}[\varphi, \xi_\alpha] = \LL_{\rm int}[\widetilde\V,\xit_\alpha]\,. \end{equation} Introducing our basis expansion into $\LL_{\rm int}$ and suppressing prime symbols, we have \begin{equation} \begin{aligned} \LL_{\rm int}[\widetilde\V,\xit_\alpha] &= -q_s\sum_{\alpha=1}^{\Np}\sum_{m=1}^M w_\alpha\,\widetilde\V_m\intD{\meas x} S(x - \xit_\alpha)\Phi_m(x)\\ &= -q_s\sum_{\alpha=1}^{\Np}\sum_{m=1}^M w_\alpha\,\widetilde\V_m\intD{\meas x} S(x - \xi_\alpha + \delta x)\Phi_m(x)\\ &= -q_s\sum_{\alpha=1}^{\Np}\sum_{m=1}^M w_\alpha\,\widetilde\V_m\intD{\meas x} S(x - \xi_\alpha)\Phi_m(x - \delta x) \end{aligned} \end{equation} and \begin{equation} \widetilde\V_m = \intD{\meas x} \Phi^\dag_m(x)\,\widetilde\V(x) = \intD{\meas x} \Phi^\dag_m(x)\,\varphi(x+\delta x) = \intD{\meas x} \Phi^\dag_m(x - \delta x)\,\varphi(x)\,. \end{equation} Combining these expressions and expanding to lowest order in $\delta x$, we have \begin{align} \LL_{\rm int}[\widetilde\V,\xit_\alpha] &= -q_s\sum_{\alpha=1}^{\Np}\sum_{m=1}^M w_\alpha\Biggl\{\varphi_m\intD{\meas x} S(x - \xi_\alpha)\Phi_m(x)\nonumber\\[6pt] &\hskip-36pt - \delta x\Biggl[\intD{\meas x}\varphi(x)\,\frac{d\Phi^\dag_m(x)}{dx}\intD{\meas x} S(x - \xi_\alpha)\Phi_m(x)\nonumber\\ &\hskip115pt+\intD{\meas x}\varphi(x)\Phi^\dag_m(x)\intD{\meas x} S(x - \xi_\alpha)\,\frac{d\Phi_m(x)}{dx}\Biggr]\Biggr\}\nonumber\\ &= \LL_{\rm int}[\varphi, \xi_\alpha] +\delta x\,q_s\sum_{\alpha=1}^{\Np}\sum_{m=1}^M w_\alpha\Biggl[\intD{\meas x}\varphi(x)\,\frac{d\Phi^\dag_m(x)}{dx}\intD{\meas x} S(x - \xi_\alpha)\Phi_m(x)\nonumber\\[4pt] &\hskip115pt+ \intD{\meas x}\varphi(x)\Phi^\dag_m(x)\intD{\meas x} S(x - \xi_\alpha)\,\frac{d\Phi_m(x)}{dx}\Biggr]. \label{Lint-symm} \end{align} Our symmetry condition requires that the term multiplying $\delta x$ in \eqref{Lint-symm} vanish. For the symmetry to exist independent of the particle shape $S$ and the details of the potential, this term must vanish for each $m$. We are led to the condition \begin{equation} \frac{d\Phi_m(x)}{dx} = \pm \alpha(m)\,\Phi_m(x)\quad\textrm{and}\quad \frac{d\Phi^\dag_m(x)}{dx} = \mp\alpha(m)\Phi^\dag_m(x). \label{m-c-cond} \end{equation} On a finite domain with periodic boundary conditions this condition is satisfied by the discrete Fourier basis; we are not aware of any other discrete basis that fulfills \eqref{m-c-cond} on either a finite or infinite domain. Using the discrete Fourier basis, it is straightforward to show that $\LL_{\rm field}$ is also translation invariant. We now specialize our discussion to the case of a truncated Fourier basis. Let \begin{equation} \begin{aligned} \Phi_k(x) &= e^{ikx}\\ \Phi^\dag_k(x) & = \frac1L\,e^{-ikx} \end{aligned} \label{fb-def} \end{equation} where $k = 2\,m\,\pi/L$, $m=0,\pm1,\ldots,\pm M$ and $L$ is the domain size. 
With this basis, the interaction term becomes \begin{equation} \begin{aligned} \LL_{\rm int} & = -q_s\sum_{\alpha=1}^{\Np}\sum_k w_\alpha\varphi_k\intDl 0L{\meas x} S(x - \xi_\alpha)\Phi_k(x)\\ & = -q_s L\sum_{\alpha=1}^{\Np}\sum_k w_\alpha\varphi_k\left[\intDl 0L{\meas x} S(x - \xi_\alpha)\Phi^\dag_k(x)\right]^* \\ & = -q_s L\sum_{\alpha=1}^{\Np}\sum_k w_\alpha\varphi_k\,\rho^*_k(\xi_\alpha)\,, \end{aligned} \end{equation} where \begin{equation} \rho_k(\xi_\alpha) = \intDl 0L{\meas x} S(x-\xi_\alpha)\,\Phi^\dag_k(x),\label{rho_k-fb} \end{equation} and \begin{equation} \varphi_k = \intDl 0L{\meas x} \varphi(x)\,\Phi^\dag_k(x),\label{phi_k-fb} \end{equation} and we have used the relation $\Phi_k(x)/L = [\Phi^\dag_k(x)]^*$. We also need to evaluate \eqref{field}: \begin{align} \LL_{\rm field} &= \frac1{8\pi}\sum_{k,k'}\varphi_k\,\varphi_{k'}\intDl 0L{\meas x}\frac{d\Phi_k(x)}{dx}\,\frac{d\Phi_{k'}(x)}{dx}\nonumber\\ &= -\frac1{8\pi}\sum_{k,k'}k\,k'\varphi_k\,\varphi_{k'}\intDl 0L{\meas x}\Phi_k(x)\Phi_{k'}(x)\nonumber\\ &= -\frac L{8\pi}\sum_{k,k'}k\,k'\varphi_k\,\varphi_{k'}\intDl 0L{\meas x}\Phi_k(x)\bigl[\Phi^\dag_{k'}(x)\bigr]^*\nonumber\\ &= -\frac L{8\pi}\sum_{k,k'}k\,k'\varphi_k\,\varphi_{k'}\intDl 0L{\meas x}\Phi_k(x)\Phi^\dag_{-k'}(x)\nonumber\\ &= \frac L{8\pi}\sum_{k}k^2\varphi_k\,\varphi_{-k} = \frac L{4\pi}\sum_{k>0} k^2\varphi_k\,\varphi^*_k,\nonumber \end{align} where, since $\varphi$ is real, $\varphi_{-k} = \varphi^*_k$. Finally we arrive at the discrete form of the Lagrangian \begin{equation} \mathcal{L} = \frac{m_s}2\sum_{\alpha=1}^{\Np} w_\alpha\,\dot\xi^2_\alpha - q_s\,L\sum_{\alpha=1}^{\Np}\sum_k w_\alpha\,\varphi_k\,\rho^*_k(\xi_\alpha) + \frac{L}{4\pi}\sum_{k>0} k^2\varphi_k\,\varphi^*_k\,. \label{L-tfb} \end{equation} To obtain the equations of motion, we require the action to be stationary with respect to variations of $\xi_\alpha$ and $\varphi_k$ (since $\varphi_k$ and $\varphi^*_k$ are not independent, we need only consider variations of $\varphi_k$). The equations of motion are \begin{equation} \xidd_\alpha = - \frac{q_s L}{m_s}\sum_k\frac{\partial\rho^*_k(\xi_\alpha)}{\partial\xi_\alpha}\,\varphi_k\,, \end{equation} and \begin{equation} k^2\varphi_k = 4\pi\,q_s \sum_{\alpha=1}^{\Np} w_\alpha\,\rho_k(\xi_\alpha)\,. \label{poisson-fb} \end{equation} Using \eqref{rho_k-fb} and \eqref{fb-def} it is easy to show \begin{equation} \frac{\partial\rho_k(\xi_\alpha)}{\partial\xi_\alpha} = -ik\,\rho_k(\xi_\alpha), \label{drhodxi} \end{equation} allowing us to write the equation of motion as \begin{align} \xidd_\alpha &= - i\,\frac{q_s L}{m_s}\sum_k k\,\rho^*_k(\xi_\alpha)\,\varphi_k\nonumber\\ &= - i\,\frac{q_s L}{m_s}\sum_{k>0} k\left[\rho^*_k(\xi_\alpha)\,\varphi_k - \rho_k(\xi_\alpha)\,\varphi^*_k\right]\nonumber\\ & = \frac{q_s L}{m_s}\sum_{k>0} 2k\Imag\left[\rho^*_k(\xi_\alpha)\,\varphi_k\right].\label{eom-fb} \end{align} The spatial charge density associated with a single particle is \begin{equation} q_s \sum_k \rho_k(\xi_\alpha)\,\Phi_k(x) \end{equation} and the corresponding total charge is \begin{align} q_s \sum_k \rho_k(\xi_\alpha)\intDl 0L{\meas x} \Phi_k(x) &= q_s \sum_k \rho_k(\xi_\alpha)\,L\,\delta_{k0}\nonumber\\ &= q_s\,L\, \rho_0(\xi_\alpha)\nonumber\\ &= q_s\,L\intDl 0L{\meas x} S(x-\xi_\alpha)\,\frac1L\nonumber\\ &= q_s\,. \end{align} Thus, regardless of the number of modes retained, the charge associated with each particle remains $q_s$.
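For a given particle configuration, the field solve and force evaluation in this basis reduce to closed-form arithmetic on the mode amplitudes. A minimal numerical sketch (ours; Python with {\tt numpy}, a top-hat shape of width $h_p$, and arbitrary test particles):
\begin{verbatim}
# Fourier-basis field solve and forces: rho_k from (rho_k-fb) with a
# top-hat shape of width hp, phi_k from (poisson-fb), and the
# acceleration from (eom-fb).  Particle data are arbitrary test values.
import numpy as np

L, M, hp, qs, ms = 2 * np.pi, 16, 0.2, -1.0, 1.0
k  = 2 * np.pi * np.arange(1, M + 1) / L        # k > 0 modes
xi = np.array([0.5, 2.1, 4.0])                  # centroids (test data)
w  = np.array([1.0, 1.0, 2.0])                  # weights (test data)

shape = np.sinc(k * hp / (2 * np.pi))           # np.sinc(x) = sin(pi x)/(pi x)
rho_k = np.exp(-1j * np.outer(k, xi)) * shape[:, None] / L   # rho_k(xi_a)
phi_k = 4 * np.pi * qs * (rho_k @ w) / k**2     # k^2 phi_k = 4 pi q_s sum_a ...

# (eom-fb): xidd_a = (q_s L / m_s) sum_{k>0} 2 k Im[rho_k^*(xi_a) phi_k]
acc = (qs * L / ms) * np.sum(
    2 * k[:, None] * np.imag(np.conj(rho_k) * phi_k[:, None]), axis=0)
print(acc)                  # accelerations of the three particles
print(ms * np.dot(w, acc))  # total momentum rate: zero to round-off
\end{verbatim}
The last line spot-checks the momentum conservation established below: the total momentum rate $m_s\sum_\alpha w_\alpha\,\xidd_\alpha$ vanishes to round-off for any particle data.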
The energy of this system is \begin{equation} W_L = \frac{m_s}2\sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha^2 + \frac{L}{4\pi}\sum_{k>0} k^2\varphi_k\,\varphi^*_k\,. \end{equation} Using the equations of motion we have \begin{align} \frac{dW_L}{dt} &= m_s \sum_{\alpha=1}^{\Np} w_\alpha\,\xid_\alpha\,\xidd_\alpha + \frac{L}{4\pi}\,\sum_{k>0} k^2\left(\dot\varphi_k\,\varphi^*_k + \varphi_k\,\dot\varphi^*_k\right) \nonumber\\ & = -i\,q_s\,L\sum_{\alpha=1}^{\Np}\sum_{k>0} w_\alpha\,k\,\xid_\alpha\left(\rho^*_k\,\varphi_k - \rho_k\,\varphi^*_k\right) - i\,q_s\,L\sum_{k>0}\sum_{\alpha=1}^{\Np} w_\alpha\,k\,\xid_\alpha\left(\rho_k\varphi^*_k - \rho^*_k\,\varphi_k\right) \nonumber\\ & = 0\,, \end{align} where we have used \eqref{poisson-fb} and \eqref{drhodxi} to find $\dot\varphi_k$. From \eqref{P_cont} we have \begin{align} \frac{dP}{dt} & = m_s\sum_{\alpha=1}^{\Np} w_\alpha\,\xidd_\alpha \nonumber\\ & = -i\,q_s\,L\sum_{\alpha=1}^{\Np}\sum_{k>0} w_\alpha\,k\left[\rho^*_k(\xi_\alpha)\,\varphi_k - \rho_k(\xi_\alpha)\,\varphi^*_k\right] \nonumber\\ & = -i\,\frac L{4\pi}\sum_{k>0} k^3\left(\varphi^*_k\,\varphi_k - \varphi_k\,\varphi^*_k\right) \nonumber\\ & = 0\,. \end{align} Thus the model using a truncated Fourier basis conserves both energy and momentum. This is as expected since the spatial discretization does not introduce time-dependence into the Lagrangian and the basis was specifically constructed to maintain spatial translation invariance. Consider the same system as in Section \ref{spatial-grid} with $S$ given by \eqref{top-hat-S}, where $h$ is an independent parameter. Now \eqref{rho_k-fb} becomes \begin{equation} \rho_k(\xi_\alpha) = \frac1L\,e^{-ik\xi_\alpha}\sinc\left(\tfrac12kh\right), \end{equation} where $\sinc x = \sin(x)/x$. If, as before, we take a quasi-neutral plasma with a uniform ion density, then the ions only contribute to Poisson's equation for $k=0$. Further, we see that $k=0$ does not contribute to $\xidd_\alpha$, and thus we are free to take $\varphi_0=0$. With this form of $\rho_k$, the potential becomes \begin{equation} \varphi_k = \frac{4\pi\,q_s}{k^2L}\sinc\left(\tfrac12kh\right)\sum_{\alpha=1}^{\Np} w_\alpha\,e^{-ik\xi_\alpha},\quad k>0 \label{eom-pot-fb} \end{equation} and the particle equation of motion becomes \begin{align} \xidd_\alpha &= \frac{q_s}{m_s}\sum_{k>0} (-i\,k)\sinc\left(\tfrac12kh\right)\left[e^{ik\xi_\alpha}\,\varphi_k - \,e^{-ik\xi_\alpha}\,\varphi^*_k\right]\nonumber\\[4pt] &= \frac{q_s}{m_s}\sum_{k>0}\sinc\left(\tfrac12kh\right)\left[e^{ik\xi_\alpha}E_k + \,e^{-ik\xi_\alpha}\,E^*_k\right]\nonumber\\[4pt] &= \frac{q_s}{m_s}\sum_k\sinc\left(\tfrac12kh\right)E_k\,e^{ik\xi_\alpha} \label{eom-th-fb} \end{align} where $E_k = -i\,k\,\varphi_k$. For the complete basis, by the convolution theorem, \eqref{eom-th-fb} is identical to \eqref{xi-EOM-Cont}. While, due to the truncation, the convolution theorem does not apply, we may still interpret the force in \eqref{eom-th-fb} as sampling the electric field over the effective spatial extent of the particle. In this subsection we have derived a particle algorithm that preserves the time and space translational invariance of the Lagrangian and thus conserves both energy and momentum exactly. Since the use of a grid in the reduction violates spatial translational invariance, we were led to use a continuous basis. In the course of the derivation, we found that only one such basis exists, a (possibly truncated) Fourier basis.
One may argue that the use of a Fourier basis limits the applicability of this algorithm; for example, restricting to systems with periodic boundary conditions or being unsuitable for large-scale parallel simulations. This result establishes, however, that simultaneous conservation of energy and momentum is indeed possible.\label{fb-summary} \section{Noncanonical Hamiltonian formulation} \label{BracketDerivation} It is well known \cite{Morrison:1982aa,Weinstein-Morrison81,morrison:1980:383} that the Vlasov--Maxwell system possesses a Hamiltonian structure in terms of non-canonical field variables. Specializing to the 1-D electrostatic case and treating the electric field $E$ as a dynamical variable, the Vlasov--Maxwell bracket \cite{morrison:1980:383,Morrison:1982aa} becomes \begin{equation} \PB F,G; = \intD\dxp f \pb{\fd F,f;}{\fd G,f;} + 4\pi q_s \intD\dxp \frac{\partial f}{\partial p} \left(\fd F,E;\,\fd G,f; - \fd G,E;\,\fd F,f; \right), \label{EM_Continuous_Bracket} \end{equation} where $F$ and $G$ are any functionals of $f$ and $E$ and $\pb ab$ denotes the usual phase-space Poisson bracket: \begin{equation} \pb ab = \frac{\partial a}{\partial x}\frac{\partial b}{\partial p} - \frac{\partial a}{\partial p}\frac{\partial b}{\partial x}\, . \end{equation} The Vlasov equation and the equations for the fields are obtained from this bracket and the Hamiltonian \begin{equation} H = \frac1{2m_s}\intD\dxp p^2\,f + \frac1{8\pi}\intD{\meas x} E^2 \label{H_Continuous_Bracket} \end{equation} as \begin{align} \frac{\partial f}{\partial t} &= \PB f,H; = - \frac p{m_s}\, \frac{\partial f}{\partial x} - q_s\,\frac{\partial f}{\partial p}\,E\,, \label{Vlasov}\\[4pt] \frac{\partial E}{\partial t}&= \PB E,H; = - 4\pi \intD{\meas p} \frac p{m_s}\, f = - 4\pi j. \label{dot_E} \end{align} Poisson's equation is considered as an initial condition and is satisfied for all time as a consequence of \eqref{dot_E}. We use a reduction of the distribution function, which is identical to \eqref{Sum_f_i} but written in terms of momentum: \begin{align} f(x, p, t) & = \sum_\alpha f_\alpha(x, p, t) \nonumber \\ &= \sum_\alpha w_\alpha\,S[x - \xi_\alpha(t)] \,\delta[p - \pi_\alpha(t)].\label{Sum_f_i_p} \end{align} Consider a single $f_\alpha$. The quantities $w_\alpha$, $\xi_\alpha$, and $\pi_\alpha$, which denote the macro-particle weight, centroid, and momentum, may be expressed as: \begin{align} w_\alpha & = \intD\dxp f_\alpha\,, \\[2pt] \xi_\alpha & = \frac1{w_\alpha}\intD\dxp x\,f_\alpha\,, \\[2pt] \pi_\alpha & = \frac1{w_\alpha}\intD\dxp p\,f_\alpha\,. \end{align} Therefore, they may be thought of as functionals of $f_\alpha$. For an arbitrary functional $F[f]$ there exists a corresponding function $\widetilde F(w_\alpha, \xi_\alpha, \pi_\alpha)$ such that $\widetilde F(w_\alpha, \xi_\alpha, \pi_\alpha) = F[f]$. (Both $F$ and $\widetilde F$ can also be functionals of $E$; for the moment, we are only interested in their dependence on $f$.) Then a functional derivative of $F[f]$ may be found using the chain rule as \begin{equation} \fd F,f_\alpha; = \fd w_\alpha,f_\alpha;\,\frac{\partial \widetilde F}{\partial w_\alpha} + \fd \xi_\alpha,f_\alpha;\,\frac{\partial \widetilde F}{\partial \xi_\alpha} + \fd \pi_\alpha,f_\alpha;\,\frac{\partial \widetilde F}{\partial \pi_\alpha}\,.
\end{equation} Evaluating the functional derivatives of $w_\alpha$, $\xi_\alpha$, and $\pi_\alpha$, \begin{align} \fd w_\alpha,f_\alpha; & = 1\,,\nonumber \\[2pt] \fd \xi_\alpha,f_\alpha; & = \frac{x - \xi_\alpha}{w_\alpha}\,, \\[2pt] \fd \pi_\alpha,f_\alpha; & = \frac{p - \pi_\alpha}{w_\alpha}\,,\nonumber \end{align} we find \begin{equation} \fd F,f_\alpha; = \frac{\partial \widetilde F}{\partial w_\alpha} + \frac{x - \xi_\alpha}{w_\alpha}\,\frac{\partial \widetilde F}{\partial\xi_\alpha} + \frac{p - \pi_\alpha}{w_\alpha}\,\frac{\partial \widetilde F}{\partial\pi_\alpha}\,. \end{equation} Consider \begin{align} \pb{\fd F,f_\alpha;}{\fd G,f_\alpha;} & =\pb {\frac{\partial \widetilde F}{\partial w_\alpha} + \frac{x - \xi_\alpha}{w_\alpha}\,\frac{\partial \widetilde F}{\partial\xi_\alpha} + \frac{p - \pi_\alpha}{w_\alpha}\,\frac{\partial \widetilde F}{\partial\pi_\alpha}} {\frac{\partial \widetilde G}{\partial w_\alpha} + \frac{x - \xi_\alpha}{w_\alpha}\,\frac{\partial \widetilde G}{\partial\xi_\alpha} + \frac{p - \pi_\alpha}{w_\alpha}\,\frac{\partial \widetilde G}{\partial\pi_\alpha}}\nonumber \\[4pt] & = \frac{\partial \widetilde F}{\partial\xi_\alpha}\frac{\partial \widetilde G}{\partial\pi_\alpha}\pb{\frac{x - \xi_\alpha}{w_\alpha}}{\frac{p - \pi_\alpha}{w_\alpha}} + \frac{\partial \widetilde F}{\partial\pi_\alpha}\frac{\partial \widetilde G}{\partial\xi_\alpha}\pb{\frac{p - \pi_\alpha}{w_\alpha}}{\frac{x - \xi_\alpha}{w_\alpha}}\nonumber \\[4pt] & = \frac{\partial \widetilde F}{\partial\xi_\alpha}\frac{\partial \widetilde G}{\partial\pi_\alpha}\pb{\frac{x}{w_\alpha}}{\frac{p}{w_\alpha}} + \frac{\partial \widetilde F}{\partial\pi_\alpha}\frac{\partial \widetilde G}{\partial\xi_\alpha}\pb{\frac{p}{w_\alpha}}{\frac{x}{w_\alpha}}\nonumber \\[4pt] & = \frac1{w_\alpha^2} \left(\frac{\partial \widetilde F}{\partial\xi_\alpha}\frac{\partial \widetilde G}{\partial\pi_\alpha} - \frac{\partial \widetilde F}{\partial\pi_\alpha}\frac{\partial \widetilde G}{\partial\xi_\alpha}\right)\nonumber \\[4pt] & = \frac1{w_\alpha^2} \pb{\widetilde F}{\widetilde G}_{\xi\pi}\,, \end{align} where \begin{equation} \pb ab_{\xi\pi} = \frac{\partial a}{\partial\xi_\alpha}\frac{\partial b}{\partial\pi_\alpha} - \frac{\partial a}{\partial\pi_\alpha}\frac{\partial b}{\partial\xi_\alpha}\,.
\end{equation} The first term in \eqref{EM_Continuous_Bracket} then becomes \begin{align} \intD\dxp f_\alpha \pb{\fd F,f_\alpha;}{\fd G,f_\alpha;} & = \intD\dxp f_\alpha \frac1{w_\alpha^2} \pb{\widetilde F}{\widetilde G}_{\xi\pi}\nonumber\\[4pt] & =\frac1{w_\alpha^2}\pb{\widetilde F}{\widetilde G}_{\xi\pi}\intD\dxp f_\alpha\nonumber \\[4pt] & =\frac1{w_\alpha}\pb{\widetilde F}{\widetilde G}_{\xi\pi}.\label{PB-cont-E-t1} \end{align} Now consider \begin{align} \intD\dxp\frac{\partial f_\alpha}{\partial p}\,\fd F,E;\,\fd G,f_\alpha; & = - \intD\dxp f_\alpha\,\fd F,E;\,\frac{\partial}{\partial p}\fd G,f_\alpha;\nonumber\\[4pt] & = -\intD\dxp f_\alpha\,\fd F,E;\,\frac{\partial}{\partial p}\left(\frac{\partial \widetilde G}{\partial w_\alpha} + \frac{x - \xi_\alpha}{w_\alpha}\,\frac{\partial \widetilde G}{\partial\xi_\alpha} + \frac{p - \pi_\alpha}{w_\alpha}\,\frac{\partial \widetilde G}{\partial\pi_\alpha}\right) \nonumber \\[4pt] & = -\intD\dxp f_\alpha\,\fd F,E;\,\frac1{w_\alpha}\,\frac{\partial \widetilde G}{\partial\pi_\alpha}\nonumber \\[4pt] & = -\frac{\partial \widetilde G}{\partial\pi_\alpha}\,\frac1{w_\alpha}\intD{\meas x}\fd F,E;\intD{\meas p} f_\alpha\nonumber \\[4pt] & = -\frac{\partial \widetilde G}{\partial\pi_\alpha}\intD{\meas x} S(x-\xi_\alpha)\,\fd F,E;\,,\label{PB-cont-E-t2} \end{align} where the first line follows from integration by parts and the fact that $\delta F/\delta E$ has no $p$ dependence, since $E$ is a function of $x$ only. Combining \eqref{PB-cont-E-t1} and \eqref{PB-cont-E-t2} with \eqref{EM_Continuous_Bracket} leads to the bracket \begin{equation} \PB F,G; = \frac1{w_\alpha} \pb{\widetilde F}{\widetilde G}_{\xi\pi} + 4\pi q_s\intD{\meas x} S(x - \xi_\alpha)\left(\fd G,E;\frac{\partial \widetilde F}{\partial\pi_\alpha} - \fd F,E;\frac{\partial \widetilde G}{\partial\pi_\alpha}\right)\, . \label{PB-cont-E} \end{equation} We now extend this result to a collection of $f_\alpha$. We treat each $f_\alpha$ as a separate species, which mandates that the only interaction between the various $f_\alpha$ is through the mean field. The bracket is thus just the sum of \eqref{PB-cont-E} over $\alpha$: \begin{equation} \PB F,G; = \sum_{\alpha=1}^{\Np}\frac1{w_\alpha} \pb{\widetilde F}{\widetilde G}_{\xi\pi} + 4\pi q_s\sum_{\alpha=1}^{\Np}\intD{\meas x} S(x - \xi_\alpha)\left(\fd G,E;\frac{\partial \widetilde F}{\partial\pi_\alpha} - \fd F,E;\frac{\partial \widetilde G}{\partial\pi_\alpha}\right)\, . \label{PB-s-cont-E} \end{equation} Under our reduction, the Hamiltonian becomes (hereafter we drop the tilde notation, as it should be clear from the above calculation where a functional derivative or a partial derivative is intended) \begin{equation} H = \frac1{2m_s}\,\sum_{\alpha=1}^{\Np}w_\alpha\,\pi_\alpha^2 + \frac1{8\pi}\intD{\meas x} E^2 \end{equation} and the equations of motion are \begin{align} \xid_\alpha = \PB\xi_\alpha,H; &= \frac{\pi_\alpha}{m_s} \label{xi-dot-ham}\\ \pid_\alpha = \PB\pi_\alpha,H; &= q_s \intD{\meas x} S(x-\xi_\alpha)E(x) \label{pi-dot-ham}\\ \frac{\partial E}{\partial t} = \PB E,H; &= - 4\pi q_s\sum_{\alpha=1}^{\Np}w_\alpha\,\frac{\pi_\alpha}{m_s}\,S(x-\xi_\alpha)\nonumber\\ &= -4\pi q_s\sum_{\alpha=1}^{\Np}w_\alpha\,\xid_\alpha\,S(x-\xi_\alpha) = -4\pi\,j\,. \label{E-dot-ham} \end{align} Equations \eqref{xi-dot-ham} and \eqref{pi-dot-ham} are easily seen to be equivalent to \eqref{xi-EOM-Cont}. Comparing the spatial derivative of \eqref{E-dot-ham} to the time derivative of \eqref{Poisson-Cont}, we see that \eqref{E-dot-ham} and \eqref{Poisson-Cont} are indeed equivalent.
As in the Lagrangian case, the reduction from a continuous phase-space distribution function does not break energy or momentum conservation. In the Hamiltonian setting, energy conservation follows from the antisymmetry of the Poisson bracket under $F\leftrightarrow G$ and hence is intrinsic to the theory. To complete the reduction to a finite degree-of-freedom model, we represent $E$ using a finite, discrete basis $\Psi_i$, $i=1,\dots,N_b$, as \begin{equation} E(x,t) = \sum_{i=1}^{N_b} E_i(t)\Psi_i(x)\,, \label{EB_Basis} \end{equation} where \begin{equation} E_i(t) = \sum_{j=1}^{N_b}\intD{\meas x} E(x,t)M^{-1}_{ij}\Psi_j(x) \label{Ek} \end{equation} and \begin{equation} M_{ij} = \intD{\meas x}\, \Psi_i(x)\,\Psi_j(x)\,. \label{Mass_Matrix} \end{equation} When the $\Psi_i(x)$ form a finite element basis, $M_{ij}$ is called the mass matrix. From \eqref{Ek}, we have \begin{equation} \fd E_i,E; = \sum_{j=1}^{N_b} M^{-1}_{ij}\Psi_j(x). \end{equation} Now, the $E_i$, through \eqref{EB_Basis}, provide a complete characterization of $E$, and thus any functional of $E$ can be written as a function of the $E_i$. Consequently, \begin{align} \fd ,E; &= \sum_{i=1}^{N_b}\fd E_i,E;\,\frac\partial{\partial E_i}\nonumber\\ &= \sum_{i,j=1}^{N_b} M^{-1}_{ij}\Psi_j(x)\,\frac\partial{\partial E_i}\,. \end{align} Using this expression, the bracket becomes \begin{equation} \PB F,G; =\sum_{\alpha=1}^{\Np}\frac1{w_\alpha} \pb{F}{G}_{\xi\pi}+ 4\pi q_s\sum_{i,j=1}^{N_b}\sum_{\alpha=1}^{\Np}\left(\frac{\partial G}{\partial E_i}\,\frac{\partial F}{\partial\pi_\alpha} - \frac{\partial F}{\partial E_i}\frac{\partial G}{\partial\pi_\alpha}\right) M^{-1}_{ij}\rho_j(\xi_\alpha)\,, \label{Full_Bracket_Discr} \end{equation} where $\rho_j(\xi_\alpha)$ is defined by \eqref{deposition}. The reduction of the bracket is exact in the sense that, given the representations of $f$ and $E$ [\eqref{Sum_f_i_p} and \eqref{EB_Basis}, respectively], the reduced bracket and the full bracket, restricted to functionals of the appropriate form, give the same result. Consequently, the reduced bracket inherits the Jacobi identity~\cite{Shadwick:2012aa} (and all other properties) from the full bracket. Using \eqref{EB_Basis} and \eqref{Mass_Matrix}, we can write the Hamiltonian as \begin{equation}\label{H_Discr_Bracket} H = \frac1{2m_s}\,\sum_{\alpha=1}^{\Np}w_\alpha\,\pi_\alpha^2 + \frac1{8\pi}\sum_{i,j=1}^{N_b} M_{ij}\,E_i\,E_j. \end{equation} The equations of motion are then \begin{align} \xid_\alpha & = \frac{\pi_\alpha}{m_s} \label{xi-dot-ham-d}\\ \pid_\alpha & = q_s \sum_{i=1}^{N_b} E_i\,\rho_i(\xi_\alpha) \label{pi-dot-ham-d}\\ \dot E_k & = -4\pi q_s\sum_{\alpha=1}^{\Np}\sum_{j=1}^{N_b}w_\alpha\,\frac{\pi_\alpha}{m_s}\,M^{-1}_{kj}\rho_j(\xi_\alpha) = -4\pi q_s\sum_{\alpha=1}^{\Np}\sum_{j=1}^{N_b}w_\alpha\,\xid_\alpha\,M^{-1}_{kj}\rho_j(\xi_\alpha) = -4\pi j_k\,. \label{E-dot-ham-d} \end{align} To make a connection with the model based on finite differences (Sec.~\ref{Lagrangian_PIC}), note that multiplication by the matrix $M$ is equivalent to performing an integration. For finite elements constructed from Lagrange polynomials, one may reduce the mass matrix to a diagonal form (a procedure known as ``lumping'') while preserving the accuracy of the approximation \cite{jensen:1996}.
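As a small check of the lumping procedure, the sketch below assembles the consistent mass matrix for periodic linear (``hat'') elements, whose entries follow from \eqref{Mass_Matrix}, and verifies that row-sum lumping reproduces the diagonal form quoted next; the basis size and spacing are arbitrary illustrative values.
\begin{verbatim}
import numpy as np

N_b, h = 16, 0.1                       # illustrative basis size and grid spacing
M = np.zeros((N_b, N_b))
for i in range(N_b):
    M[i, i] = 2.0 * h / 3.0            # int Psi_i^2 dx for a linear "hat" element
    M[i, (i - 1) % N_b] = h / 6.0      # overlap with left neighbour (periodic)
    M[i, (i + 1) % N_b] = h / 6.0      # overlap with right neighbour

M_lumped = np.diag(M.sum(axis=1))      # row-sum lumping
assert np.allclose(M_lumped, h * np.eye(N_b))
\end{verbatim}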
If we use linear finite elements on a grid with spacing $h$, lumping the mass matrix gives \begin{equation} M_{ij} \longrightarrow h\,\delta_{ij}.\label{Lumped_M} \end{equation} \section{Examples} \label{Examples} In this section we present two examples illustrating some properties of the energy conserving models derived in this paper. We begin with a benchmarking example: the linear growth rate of the instability caused by a small electron beam of density $n_b$ propagating in a neutralizing background plasma of density $n_0$ (the beam-plasma instability). For a small beam-to-plasma density ratio, $(n_b/n_0) \ll 1$ [more precisely, the parameter $({n_b}/{2n_0} )^{1/3}$ must be small in this linear theory], the linear growth rate of this instability is given by \begin{equation} \gamma_L = \frac{\sqrt{3}}{2}\left(\frac{n_b}{2n_0} \right)^{1/3}\omega_p. \label{growth_rate} \end{equation} \begin{figure}[htb] \centering \includegraphics{fig_linear_growth_fpic} \caption{Linear growth and saturation of the first four harmonics in the beam--plasma problem computed using the truncated Fourier model, \eqref{eom-pot-fb} and \eqref{eom-th-fb}, for $n_b/n_0=10^{-4}$. The analytical growth rate, \eqref{growth_rate}, gives $\gamma_L\approx0.03190$ for $k=1$.} \label{fig_fpic_linear_growth} \end{figure} All simulations are in dimensionless variables, where time is measured in units of the inverse plasma frequency, $\omega_p^{-1}$, momentum in units of $m_e c$, potential in units of $m_e c^2/e$, and energy in units of $m_e c^3 n_0/\omega_p$ (assuming 1-D). Here $m_e$ is the electron mass and $e$ is the electron charge. The system is assumed to be periodic and its dimensionless size is $2\pi$. In this way, the numerical growth rate is dimensionless while the physical growth rate is measured in units of $\omega_p$. In Fig.~\ref{fig_fpic_linear_growth} we show a simulation using the model of Sec.~\ref{TruncatedBasisDerivation}, \eqref{eom-pot-fb} and \eqref{eom-th-fb}. The simulation was initialized by perturbing the beam density (the positions of the beam particles) at the wavelength of the first harmonic, and the velocity of the beam was matched to the plasma wave phase velocity, i.e., $v_{\rm beam} = 1$. (To initialize the second harmonic, $k=2$, the beam velocity would have to be set to $v_{\rm beam} = 1/2$, \textit{etc.}) The beam-to-plasma density ratio for this simulation is $10^{-4}$. There are $300$ particles in each group of particles, \textit{i.e.}, beam electrons, background (plasma) electrons, and plasma ions, as well as $128$ Fourier modes. The plasma ions exactly neutralize both the beam and the plasma electrons, which is achieved by an appropriate choice of particle weights (this assures that the potential has zero bias). The beam-to-plasma density ratio was likewise set by an appropriate choice of the beam and background particle weights. \begin{figure}[htbp] \centering \includegraphics[height=0.7\textheight]{fig_fpic_energy_momentum} \caption{Momentum and energy balance for the simulation in Fig.~\ref{fig_fpic_linear_growth}.} \label{fig_fpic_energy_momentum} \end{figure} The numerical growth rate of the fundamental harmonic is approximately $0.03132$, which differs by less than $2\%$ from the theoretical value $\gamma_{L} = 0.03190$. (The regions where the growth rates are determined are indicated by dots.) Better agreement can be achieved for smaller beam-to-plasma density ratios.
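For reference, the quoted theoretical value follows directly from \eqref{growth_rate}: for $n_b/n_0 = 10^{-4}$,
\begin{equation*}
\gamma_L=\frac{\sqrt{3}}{2}\left(\frac{10^{-4}}{2}\right)^{1/3}\omega_p
\approx 0.8660\times\left(5\times10^{-5}\right)^{1/3}\omega_p
\approx 0.8660\times 0.03684\,\omega_p\approx 0.03190\,\omega_p\,,
\end{equation*}
so the measured value of $0.03132$ indeed corresponds to a relative error just under $2\%$.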
This figure also shows that the next three harmonics grow sequentially as a result of the non-linearity developing in the growth of the previous harmonics; \textit{i.e.}, the second harmonic is seeded by the non-linearity of the first harmonic when the quadratic term of the field has grown sufficiently, etc. In this scenario, the linear growth rates of the higher harmonics are multiples of the growth rate of the first harmonic, independent of how well the numerical growth rate agrees with formula (\ref{growth_rate}), as long as a clear linear stage exists; this is indeed the case in Fig.~\ref{fig_fpic_linear_growth}. Energy and momentum balance are shown in Fig.~\ref{fig_fpic_energy_momentum}. Momentum is conserved to machine precision even in the time-discretized model, while energy conservation depends on the properties of the time integrator and the time step $\Delta t$. To show the flexibility of the particle algorithm with respect to the choice of time integration scheme, we chose a symplectic integrator of fourth-order accuracy (the PEFRL algorithm of Omelyan \textit{et al.}~\cite{omelyan_optimized_2002}). For a time step of $\Delta t=0.01$, energy conservation is virtually perfect, at approximately $10^{-13}$ maximum relative error. \begin{figure}[htb] \centering \includegraphics{fig_linear_growth_lpic} \caption{Linear growth and saturation of the first four harmonics in the beam--plasma problem computed using the potential-based particle model, \eqref{xi-EOM} and \eqref{Poisson-d}, for $n_b/n_0=10^{-4}$ and the same simulation parameters as in Fig.~\ref{fig_fpic_linear_growth}.} \label{fig_pic_linear_growth} \end{figure} Similar results, shown in Fig.~\ref{fig_pic_linear_growth}, are obtained when the beam-plasma instability simulation is performed with the potential-based model, \eqref{xi-EOM} and \eqref{Poisson-d}. (The regions where the growth rates are determined are indicated by dots and correspond to the same regions used in Fig.~\ref{fig_fpic_linear_growth}.) The equations of motion were integrated with a second-order Runge-Kutta method with time step $\Delta t=0.001$. No time-splitting was used, \textit{i.e.}, all particle and field data are known at common points in time. The number of grid points was $2048$, the number of particles per cell was $4$, and $\rho_{k}$ was cubic in $\xi_\alpha$, corresponding to the shape $S_{1}$ (see Table~\ref{shapes}). The growth rate of the first few harmonics is in excellent agreement with the truncated Fourier series model. Energy conservation for this model is also very good, with a relative energy error of less than $0.6\%$ (not shown). The examples of Figs.~\ref{fig_fpic_linear_growth}--\ref{fig_pic_linear_growth} demonstrate that the energy conserving algorithms perform reliably in this benchmarking test and have low noise due to the freedom to choose smooth particle shapes. \begin{figure}[htb] \begin{center} \includegraphics{Converge-LPIC_Potential-Shape} \end{center} \caption{Dependence of the $k=1$ growth rate in the beam--plasma problem on grid size for the potential-based particle model, \eqref{xi-EOM} and \eqref{Poisson-d}. Plotted is the difference between the growth rate calculated using a given value of $\Delta x$ and the growth rate computed with very high spatial resolution, for various values of $\Delta t$. As the plot shows, the method is second order, as expected from our approximation of $\mathcal{L}$.
The panels are labeled with the particle shape $S$ and the resulting order of $\rho_{k}$ (see Table~\ref{shapes}).} \label{fig_lpic_field_energy_error_dx} \end{figure} Figure~\ref{fig_lpic_field_energy_error_dx} shows the dependence of the $k=1$ growth rate of the beam-plasma problem on the spatial resolution in the potential-based model, \eqref{xi-EOM} and \eqref{Poisson-d}. Plotted is the difference between the growth rate calculated using a given value of $\Delta x$ and the growth rate computed with very high spatial resolution (at the specified value of $\Delta t$). As expected from our approximation to $\mathcal{L}$, the growth rate is second order in the grid spacing, $\Delta x$, regardless of the particle shape or time step. \begin{figure}[htb] \begin{center} \includegraphics{Energy-LPIC_Potential-Shape} \end{center} \caption{Conservation of energy for the potential-based particle model, \eqref{xi-EOM} and \eqref{Poisson-d}. The relative energy error is shown as a function of $\Delta t$ for various spatial resolutions and particle shapes. As expected, the energy error depends only on the temporal discretization. The equations of motion were solved with a second-order Runge-Kutta integrator. The panels are labeled with the particle shape $S$ and the resulting order of $\rho_{k}$ (see Table~\ref{shapes}).} \label{fig_lpic_potential_energy_error} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics{Energy-LPIC_Field-Shape} \end{center} \caption{Conservation of energy for the field-based particle model, \eqref{xi-dot-ham-d}--\eqref{E-dot-ham-d}. The relative energy error is shown as a function of $\Delta t$ for various spatial resolutions and particle shapes. As expected, the energy error depends only on the temporal discretization. The equations of motion were solved with a second-order Runge-Kutta integrator. The panels are labeled with the particle shape $S$ and the resulting order of $\rho_{k}$ (see Table~\ref{shapes}).} \label{fig_lpic_field_energy_error} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics{Energy-PIC_Potential-Shape} \end{center} \caption{Conservation of energy for a standard (momentum conserving) PIC algorithm. The relative energy error is shown as a function of $\Delta t$ for various spatial resolutions and particle shapes. The equations of motion were solved with a second-order Runge-Kutta integrator. The panels are labeled with the order of the charge-deposition/force-interpolation spline used; see Ref.~\cite{Hockney88}.} \label{fig_pic_field_energy_error} \end{figure} \afterpage{\clearpage} The next example illustrates an important property of these energy conserving algorithms, namely, that energy conservation depends solely on the properties of the time discretization. We consider a linear plasma oscillation with an electric field amplitude of $0.1$ and integrate the equations of motion with a second-order Runge-Kutta method. The relative energy error at $t=400$ is plotted against the time step $\Delta t$ for various particle shapes (see Table \ref{shapes}) and grid resolutions. Figures~\ref{fig_lpic_potential_energy_error} and \ref{fig_lpic_field_energy_error} show the relative error for the potential-based, \eqref{xi-EOM} and \eqref{Poisson-d}, and field-based, \eqref{xi-dot-ham-d}--\eqref{E-dot-ham-d}, energy conserving algorithms, respectively. The scaling with time step for all particle shapes is $\sim O(\Delta t^3)$; the only exception is the potential-based method with linear $\rho_{k}$.
In this case the force on a particle is discontinuous, i.e., it jumps as the particle moves from one cell to another. Therefore, linear particles are not recommended for the potential-based formulation. Interestingly, for linear particles in the field-based formulation this deficiency does not show as strongly, and the scaling has the same trend as for smoother particle shapes, Fig.~\ref{fig_lpic_field_energy_error}. Simulations with the truncated Fourier basis particle model exhibit similar behavior (not shown). Figure~\ref{fig_pic_field_energy_error} shows energy conservation in the standard (momentum conserving) PIC algorithm. As expected, the relative energy error in these algorithms depends on both the time step and the spatial grid resolution. Note that for the same particle smoothness and time step, the energy conserving algorithms have a (much) smaller relative energy error for $\Delta t \lesssim 10^{-2}$; for smaller $\Delta t$, energy conservation in the PIC algorithm is limited by the maximum number of grid points, which is fixed at $512$. These examples, combined with the example in Fig.~\ref{fig_lpic_field_energy_error_dx}, demonstrate that our method has overall accuracy of second order in both time and space. \section{Conclusions} \label{Conclusions} We have derived time-explicit, energy-conserving algorithms based on two approaches: a Lagrangian formulation in terms of potentials and a Hamiltonian formulation with a non-canonical Poisson bracket in terms of fields. These models are derived without specifying any particular spatial or time discretization scheme, accuracy, or particle shape. Our general method allows the Lagrangian-based derivation to relax a number of restrictions imposed previously. Continuous quantities are reduced either by a grid reduction (i.e., finite differences) or by truncated bases. When a grid reduction is used, mass matrices do not appear, which decreases the computational load and improves the efficiency of memory usage. The important role of the particle shape and its relation to force interpolation is exhibited. A relaxed choice of particle shape helps decrease numerical noise in energy-conserving algorithms. A Hamiltonian derivation is presented here for the first time. The method uses a reduction of both the Hamiltonian and the non-canonical Poisson bracket. Since its formulation is in terms of fields, it avoids solving Poisson's equation. A model conserving both energy and momentum is derived and the conditions that make this possible are described. Its derivation uses the relation between conservation laws and Lagrangian symmetries, thus emphasizing the power of variational principles. Numerical benchmarking confirms the improvements in our algorithms. It is shown that conservation of energy in all the particle models derived here depends only on the accuracy of the time integration; in comparison, energy conservation in PIC depends on both the grid spacing and the time integration accuracy. We restricted our discussion to the case of a one-dimensional, nonrelativistic, unmagnetized, electrostatic plasma. The generalization to three-dimensional, relativistic, electromagnetic plasmas is straightforward and will be presented elsewhere, where it will also be shown how to increase the overall accuracy (in space and time) beyond second order. \section*{Acknowledgments} This work was supported in part by the US DoE under contract number DE-FG02-08ER55000 and by the University of Nebraska Atomic, Molecular, Optical, and Plasma Physics Program of Excellence.
{ "timestamp": "2012-12-14T02:02:49", "yymm": "1210", "arxiv_id": "1210.3743", "language": "en", "url": "https://arxiv.org/abs/1210.3743" }
{ "timestamp": "2012-10-30T01:05:10", "yymm": "1210", "arxiv_id": "1210.3676", "language": "en", "url": "https://arxiv.org/abs/1210.3676" }
\section{Introduction} \label{Intro} The topic of vector meson interactions with nuclei has attracted, and continues to attract, much attention. After thorough experimental and theoretical studies of the pion-nucleus interaction and of other pseudoscalar-nucleus interactions, attention turned to exploring the properties of the vector mesons in nuclei. In this limited study we do not attempt an exhaustive review of the field, which in any case has been done in other papers \cite{rapp} and more recently in \cite{Hayano:2008vn,Leupold:2009kz}. We shall place the emphasis on the new perspective that the use of the local hidden gauge theory \cite{hidden1,hidden2,hidden4} has brought to this topic. From a historical perspective, one must certainly admit that, in spite of much theoretical evidence against it from detailed calculations \cite{Rapp:1997fs,Peters:1997va,Rapp:1999ej,Urban:1999im,Cabrera:2000dx,Cabrera:2002hc}, the guess in \cite{Brown:1991kk} that the vector meson masses would be drastically reduced in a nuclear medium stimulated much experimental work. Experimental searches have finally concluded that this is not the case \cite{na60,wood,Hayano:2008vn,Leupold:2009kz}. However, a few surprises have been found on the way. As an example, let us recall the history concerning the $\omega$ mass in the medium from experiments at the ELSA (Bonn) laboratory. The analysis of the experimental results on the photoproduction of the $\omega$, observed through the decay channel $\pi^0 \gamma$, led the authors of \cite{Trnka:2005ey} to claim the first observation of in-medium modifications of the $\omega$ meson mass, by an approximate amount of 100 MeV at normal nuclear matter density. Yet, the observation in \cite{Kaskulov:2006zc} that the results of \cite{Trnka:2005ey} were tied to a particular choice of background led to a thorough search for background processes in \cite{Nanova:2010sy} that ended with the withdrawal of the $\omega$ mass shift claims. In between, suggestions that some signal reported in \cite{tesistrnka,Metag:2007zz} could be indicative of an $\omega$ meson bound state in nuclei were abandoned very early, once it was realized that a double-hump structure in the experiment was due to a different scaling of the uncorrelated $\pi^0 \gamma$ production events and the $\omega$ production process with subsequent $\pi^0 \gamma$ decay \cite{Kaskulov:2006fi}. Before the final conclusions of \cite{Nanova:2010sy}, it was also hoped that the use of the successful mixed-events method to separate the background from the signal would be sufficient to isolate the $\omega$ signal \cite{Metag:2007hq}. However, a simulation of the reaction \cite{mixed} showed that the mixed-event method produced a background essentially independent of the real background in the region of relevance to $\omega$ production, basically determined from events occurring at much lower invariant masses. The persistence of both theoretical and experimental teams in clarifying the problem has undoubtedly borne fruit, and this is now the most thoroughly studied problem in this field, one that has led to clarifications in other related problems. Yet, there is some interesting physical information that survived the close scrutiny of the former works, and this is the large width of the $\omega$ in the medium found in \cite{Kotulla:2008aa} and also studied in \cite{Kaskulov:2006zc}.
The width of the $\omega$ in the medium, extrapolated to normal nuclear matter density, was of the order of 100 MeV in \cite{Kaskulov:2006zc} and 130-150 MeV in \cite{Kotulla:2008aa}. The theoretical understanding of this large width, related to the decay channels of the $\omega$ in the nuclear medium, is a challenge that will require the combined efforts of hadron dynamics and many-body theory. In this paper we review recent developments on the interaction of vector mesons with baryons and nuclei, using effective field theory with a combination of effective Lagrangians to account for hadron interactions, and implementing unitarity exactly in coupled channels. This approach is a very efficient tool for facing many problems in hadron physics. Using this coupled-channel unitary approach with input from chiral Lagrangians, usually referred to as the chiral unitary approach, the interaction of the octet of pseudoscalar mesons with the octet of stable baryons has been studied; it leads to $J^P=1/2^-$ resonances which fit quite well the spectrum of the known low-lying resonances with these quantum numbers \cite{Kaiser:1995cy,angels,ollerulf,carmenjuan,hyodo,ikeda}. Among the new resonances predicted, the most notable is the $\Lambda(1405)$, for which all the chiral approaches find two nearby poles \cite{Jido:2003cb,Borasoy:2005ie,Oller:2005ig,Oller:2006jw,Borasoy:2006sr,Hyodo:2008xr,Roca:2008kr}, rather than one; experimental support for this is presented in \cite{magas,sekihara}. Another step forward in this direction has been the interpretation of low-lying $J^P=1/2^+$ states as molecular systems of two pseudoscalar mesons and one baryon \cite{alberto,alberto2,kanchan,Jido:2008zz,KanadaEn'yo:2008wm}. More recently, vectors instead of pseudoscalars have also been considered. In the baryon sector, the $\rho \Delta$ interaction was addressed in \cite{vijande}, where three degenerate $N^*$ states and three degenerate $\Delta$ states around 1900 MeV, with $J^P=1/2^-, 3/2^-, 5/2^-$, were found. The extrapolation to SU(3), with the interaction of the vectors of the nonet with the baryons of the decuplet, was studied in \cite{sourav}. The starting point of these works is the hidden gauge formalism \cite{hidden1,hidden2,hidden4}, which deals with the interaction of vector mesons and pseudoscalars, respecting chiral dynamics, and provides the interaction of pseudoscalars among themselves, with vector mesons, and of vector mesons among themselves. It also offers a perspective on the chiral Lagrangians as low-energy limits of the vector-exchange diagrams occurring in the theory. The results of the interaction of the nonet of vector mesons with the octet of baryons were reported in \cite{angelsvec}. The scattering amplitudes, obtained under the approximation of neglecting the three-momentum of the particles versus their mass, led to poles in the complex plane which can be associated with some well-known resonances. In \cite{angelsvec} one obtains degenerate states of $J^P=1/2^-,3/2^-$, a degeneracy that seems to be followed qualitatively by the experimental spectrum, although in some cases the spin partners have not been identified. We will also report on improvements in this theory which consider the mixing of the vector-baryon states with pseudoscalar-baryon ones.
In fact, there is in principle no reason to expect that the interaction of pseudoscalar mesons with baryons and the interaction of vector mesons with baryons should be decoupled for states which share strangeness, isospin, and $J^P$ (spin-parity) quantum numbers. The consequences of coupling these interactions, which had been treated independently in the previous works \cite{sourav} and \cite{angelsvec}, were first explored in the three-flavour sector in Ref.~\cite{Gamermann:2011mq}. In that work, an SU(6) framework~\cite{GarciaRecio:2005hy,GarciaRecio:2006wb,Toki:2007ab} was used which combines spin and flavor symmetries within an enlarged Weinberg-Tomozawa meson--baryon Lagrangian in order to accommodate vector mesons and decuplet baryons. This guarantees that chiral symmetry is recovered when interactions involving pseudoscalar Nambu-Goldstone bosons are examined\footnote{A similar study for the case of meson-meson light resonances was carried out in Ref.~\cite{GarciaRecio:2010ki}.}. Chiral symmetry constrains the pseudoscalar octet--baryon decuplet interactions. However, the interaction of vector mesons with baryons is not constrained by chiral symmetry, and thus the model presented in \cite{Gamermann:2011mq} differs from those of Refs.~\cite{sourav,angelsvec}, which are based on the hidden gauge formalism. On the other hand, in the presence of heavy quarks the analogous scheme to that of Ref.~\cite{Gamermann:2011mq} automatically embodies heavy quark spin symmetry, another well established approximate symmetry of QCD. Indeed, the model of Ref.~\cite{Gamermann:2011mq} has been successfully extended to the charm sector in \cite{GarciaRecio:2008dp,Gamermann:2010zz,Romanets:2012hm}. The vast number of resonances with charm or hidden charm found in recent years, some of which have a clear molecular structure, and the possibility of studying them copiously at the LHC or at upgraded B-factories, have injected renewed interest into this field. We will report on composite states of hidden charm emerging from the interaction of vector mesons and baryons with charm. Finally, we devote some attention to new developments on the properties of vector mesons in a nuclear medium, focusing specifically on the interactions of the $K^*$ and $J/\psi$ mesons with nuclei. \section{Formalism for $VV$ interaction} \label{sec:formalism} The formalism of the hidden gauge interaction for vector mesons is given in \cite{hidden1,hidden2} (see also \cite{hidekoroca} for a practical set of Feynman rules). The Lagrangian involving the interaction of vector mesons amongst themselves is \begin{equation} {\cal L}_{III}=-\frac{1}{4}\langle V_{\mu \nu}V^{\mu\nu}\rangle \ , \label{lVV} \end{equation} where the symbol $\langle~\rangle$ stands for the trace in SU(3) space and $V_{\mu\nu}$ is given by \begin{equation} V_{\mu\nu}=\partial_{\mu} V_\nu -\partial_\nu V_\mu -ig[V_\mu,V_\nu]\ , \label{Vmunu} \end{equation} with $g=\frac{M_V}{2f}$, where $f=93$~MeV is the pion decay constant. The quantity $V_\mu$ is the SU(3) matrix of the vectors of the nonet of the $\rho$, \begin{equation} V_\mu=\left( \begin{array}{ccc} \frac{\rho^0}{\sqrt{2}}+\frac{\omega}{\sqrt{2}}&\rho^+& K^{*+}\\ \rho^-& -\frac{\rho^0}{\sqrt{2}}+\frac{\omega}{\sqrt{2}}&K^{*0}\\ K^{*-}& \bar{K}^{*0}&\phi\\ \end{array} \right)_\mu \ .
\label{Vmu} \end{equation} The interaction of ${\cal L}_{III}$ gives rise to a contact term coming from $[V_\mu,V_\nu][V^\mu,V^\nu]$, \begin{equation} {\cal L}^{(c)}_{III}=\frac{g^2}{2}\langle V_\mu V_\nu V^\mu V^\nu-V_\nu V_\mu V^\mu V^\nu\rangle\ , \label{lcont} \end{equation} as well as to a three-vector vertex from \begin{equation} {\cal L}^{(3V)}_{III}=ig\langle (\partial_\mu V_\nu -\partial_\nu V_\mu) V^\mu V^\nu\rangle =ig\langle (V^\mu\partial_\nu V_\mu -\partial_\nu V_\mu V^\mu) V^\nu\rangle \label{l3Vsimp}\ . \end{equation} It is worth stressing the analogy with the coupling of vectors to pseudoscalars given in the same theory by \begin{equation} {\cal L}_{VPP}= -ig ~\langle [P,\partial_{\mu}P]V^{\mu}\rangle, \label{lagrVpp} \end{equation} where $P$ is the SU(3) matrix of the pseudoscalar fields. The Lagrangian for the coupling of vector mesons to the baryon octet is given by \cite{Klingl:1997kf,Palomar:2002hk} \begin{equation} {\cal L}_{BBV} = \frac{g}{2}\left(\langle\bar{B}\gamma_{\mu}[V^{\mu},B]\rangle+\langle\bar{B}\gamma_{\mu}B\rangle \langle V^{\mu}\rangle \right), \label{lagr82} \end{equation} where $B$ is now the SU(3) matrix of the baryon octet \cite{Eck95,Be95}. Similarly, one also has a Lagrangian for the coupling of the vector mesons to the baryons of the decuplet, which can be found in \cite{manohar}. Starting from these Lagrangians one can draw the Feynman diagrams that lead to the $PB \to PB$ and $VB \to VB$ interactions, by exchanging a vector meson between the pseudoscalar or vector meson and the baryon, as depicted in Fig.~\ref{f1}. \begin{figure}[tb] \epsfig{file=f1a.eps, width=7cm} \epsfig{file=f1b.eps, width=7cm} \caption{Diagrams obtained in the effective chiral Lagrangians for the interaction of pseudoscalar [a] or vector [b] mesons with the octet or decuplet of baryons.}% \label{f1}% \end{figure} It was shown in \cite{angelsvec} that, in the limit of small three-momenta of the vector mesons, the vertices of Eq. (\ref{l3Vsimp}) and Eq. (\ref{lagrVpp}) give rise to the same expression. This makes the work technically easy, allowing the use of many previous results. A caveat must be made in the case of vector mesons due to the mixing of $\omega_8$ and the SU(3) singlet, $\omega_1$, which gives the physical $\omega$ and $\phi$ states. In this case, all one must do is take the matrix elements known for the $PB$ interaction and, wherever $P$ is the $\eta_8$, multiply the amplitude by the factor $1/\sqrt 3$ to get the corresponding $\omega$ contribution and by $-\sqrt {2/3}$ to get the corresponding $\phi$ contribution. Upon the approximation, consistent with neglecting the three-momentum versus the mass of the particles (in this case the baryon), of keeping just the $\gamma^0$ component of Eq. (\ref{lagr82}), the transition potential corresponding to the diagram of Fig.~\ref{f1}(b) is given by \begin{equation} V_{i j}= - C_{i j} \, \frac{1}{4 f^2} \, (k^0 + k'^0)~ \vec{\epsilon}\,\vec{\epsilon}\,', \label{kernel} \end{equation} where $k^0, k'^0$ are the energies of the incoming and outgoing vector mesons. The same occurs in the case of the decuplet. The $C_{ij}$ coefficients of Eq. (\ref{kernel}) can be obtained directly from \cite{angels,bennhold,inoue} with the simple rules given above for the $\omega$ and the $\phi$, substituting $\pi$ by $\rho$ and $K$ by $K^*$ in the matrix elements.
The scattering matrix is constructed by solving the coupled-channel Bethe--Salpeter equation in the on-shell factorization approach of \cite{angels,ollerulf}, \begin{equation} T = [1 - V \, G]^{-1}\, V, \label{eq:Bethe} \end{equation} with $G$ being the loop function of a vector meson and a baryon, which we calculate in dimensional regularization using the formula of \cite{ollerulf} and similar values for the subtraction constants. In the present case the $\rho$ and the $K^*$ have a significant width, and the $G$ functions involving these mesons must be convoluted with their corresponding spectral functions. The iteration of diagrams implicit in the Bethe--Salpeter equation propagates, in the case of the vector mesons, the $\vec{\epsilon}\,\vec{\epsilon}\,'$ term of the interaction. Hence, the factor $\vec{\epsilon}\,\vec{\epsilon}\,'$ appearing in the potential $V$ also factorizes in the $T$ matrix for the external vector mesons. As a consequence, the interaction is spin independent, and one finds degenerate states with $J^P=1/2^-$ and $J^P=3/2^-$. \section{Incorporating the pseudoscalar meson-baryon channels} Improvements to the work of \cite{angelsvec} have been made by incorporating intermediate states of a pseudoscalar meson and a baryon in \cite{garzon}. In practice this is implemented by including the diagrams of Fig.~\ref{box}. However, arguments of gauge invariance \cite{Rapp:1997fs,Peters:1997va,Rapp:1999ej,Urban:1999im,Cabrera:2000dx,Cabrera:2002hc,kanchan1,kanchan2} demand that the meson pole term be accompanied by the corresponding Kroll-Ruderman contact term; see Fig.~\ref{fig:vbpb}. \begin{center} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{intbox.eps} \end{center} \caption{Diagram for the $VB \rightarrow VB$ interaction incorporating the intermediate pseudoscalar-baryon states.} \label{box} \end{figure} \end{center} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.5]{vbpb.eps} \end{center} \caption{Diagrams of the $VB\rightarrow PB$ transition. (a) meson pole term, (b) Kroll-Ruderman contact term.} \label{fig:vbpb} \end{figure} In the intermediate $B$ states of Fig.~\ref{box} we include baryons of the octet and the decuplet. The results of the calculations are a small shift and a broadening of the resonances compared with what is obtained with the basis of vector-baryon states alone. In Fig.~\ref{res1} we observe two peaks for the states having quantum numbers $S=0$ and $I=1/2$: one around 1700 MeV, in the $\rho N$ and $K^* \Lambda$ channels, and another near 1980 MeV, which appears in all the channels except $\rho N$. It can be seen that the mixing with the $PB$ channels (solid lines) affects the two spin sectors, $J^P=1/2^-$ and $3/2^-$, differently, as a consequence of the extra mechanisms contributing mainly to $J^P=1/2^-$. Indeed, the $PB$-$VB$ mixing mechanism is more important in the $J^P=1/2^-$ sector because the Kroll-Ruderman term only allows $1/2^-$ pseudoscalar-baryon intermediate states in the box diagram. The most important feature is a breaking of the degeneracy for the peak around 1700 MeV. This is most welcome, since it allows us to associate the $1/2^-$ peak found at 1650 MeV with the $N^*(1650)(1/2^-)$, while the $3/2^-$ peak at 1700 MeV can naturally be associated with the $N^*(1700)(3/2^-)$. However, let us recall that in the baryon lines of Fig.~\ref{box} we only include ground states ($N$ and $\Delta$).
A resonance like the $N^*(1520)(3/2^-)$ also appears dynamically generated in the scheme upon extending the space to $\pi N$ (d-wave) and $\pi \Delta$ (d-wave), and will be reported elsewhere \cite{javinew}. The resonances obtained are summarized in Tables \ref{tab:pdg12} and \ref{tab:pdg32}. There are states which one can easily associate with known resonances, while ambiguities remain in other cases. The near degeneracy in spin that the theory predicts is clearly visible in the experimental data, as one sees a few states with about 50 MeV or less mass difference between them. In some cases, the theory predicts quantum numbers for resonances which have no assigned spin and parity. It would be interesting to pursue the experimental determination of these quantum numbers to test the theoretical predictions. In addition, the predictions made here for resonances not yet observed should be a stimulus for further searches for such states. In this sense it is worth noting the experimental program at Jefferson Lab \cite{Price:2004xm} to investigate the $\Xi$ resonances. With admitted uncertainties of about 50 MeV in the masses, we are confident that the predictions shown here stand on solid ground, and we look forward to progress in the area of baryon spectroscopy and in the understanding of the nature of the baryonic resonances. Before finishing this section, we note that the coupled vector meson-baryon and pseudoscalar-baryon problem has also been studied by taking the s-, t- and u-channel diagrams together with a contact term originating from the hidden local symmetry Lagrangian to obtain the $VB$ interaction \cite{xx1}. The $PB$ interaction was calculated using the Kroll-Ruderman contact term, which, contrary to \cite{garzon}, led the authors to refrain from investigating the possible effects of coupling $PB$ to $VB$ channels with spin 3/2. In the spin 1/2 case, some new resonances were found to be generated by the coupled $PB$ and $VB$ dynamics \cite{xx2,kanchan1,kanchan2}. The coupling constants of the low-lying resonances to the $VB$ channels were also obtained in Refs.~\cite{xx2,kanchan1}, which can be useful in studying the photoproduction of these states. \begin{figure}[ht!] \begin{center} \includegraphics[scale=1.2]{spin1.eps} \end{center} \caption{$|T|^2$ for the S=0, I=1/2 states. Dashed lines correspond to tree level only and solid lines are calculated including the box diagram potential.
Vertical dashed lines indicate the channel threshold.} \label{res1} \end{figure} \begin{table}[ht] \begin{center} \begin{tabular}{c|c|cc|ccccc} \hline\hline $S,\,I$ &\multicolumn{3}{c|}{Theory} & \multicolumn{5}{c}{PDG data}\\ \hline & pole position & \multicolumn{2}{c|}{real axis} & & & & & \\ & $M_R+i\Gamma /2$ & mass & width &name & $J^P$ & status & mass & width \\ \hline $0,1/2$ & $1690+i24^{*} $ & 1658 & 98 & $N(1650)$ & $1/2^-$ & $\star\star\star\star$ & 1645-1670 & 145-185\\ & $1979+i67 $ & 1973 & 85 & $N(2090)$ & $1/2^-$ & $\star$ & $\approx 2090$ & 100-400 \\ \hline $-1,0$ & $1776+i39 $ & 1747 & 94 & $\Lambda(1800)$ & $1/2^-$ & $\star\star\star$ & 1720-1850 & 200-400 \\ & $1906+i34^{*} $ & 1890 & 93 & $\Lambda(2000)$ & $?^?$ & $\star$ & $\approx 2000$ & 73-240\\ & $2163+i37 $ & 2149 & 61 & & & & & \\ \hline $-1,1$ & $ - $ & 1829& 84 & $\Sigma(1750)$ & $1/2^-$ & $\star\star\star$ & 1730-1800 & 60-160 \\ & $ - $ & 2116 & 200-240 & $\Sigma(2000)$ & $1/2^-$ & $\star$ & $\approx 2000$ & 100-450 \\ \hline $-2,1/2$& $2047+i19^{*} $ & 2039 & 70 & $\Xi(1950)$ & $?^?$ & $\star\star\star$ & $1950\pm15$ & $60\pm 20$ \\ & $ - $ & 2084 & 53 & $\Xi(2120)$ & $?^?$ & $\star$ & $\approx 2120$ & 25 \\ \hline\hline \end{tabular} \caption{The properties of the nine dynamically generated resonances and their possible PDG counterparts for $J^P=1/2^-$. The numbers with asterisk in the imaginary part of the pole position are obtained without convoluting with the vector mass distribution of the $\rho$ and $K^*$.} \label{tab:pdg12} \end{center} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{c|c|cc|ccccc} \hline\hline $S,\,I$ &\multicolumn{3}{c|}{Theory} & \multicolumn{5}{c}{PDG data}\\ \hline & pole position & \multicolumn{2}{c|}{real axis} & & & & & \\ & $M_R+i\Gamma /2$ & mass & width &name & $J^P$ & status & mass & width \\ \hline $0,1/2$ & $1703+i4^{*} $ & 1705 & 103 & $N(1700)$ & $3/2^-$ & $\star\star\star$ & 1650-1750 & 50-150\\ & $1979+i56 $ & 1975 & 72 & $N(2080)$ & $3/2^-$ & $\star\star$ & $\approx 2080$ & 180-450 \\ \hline $-1,0$ & $1786+i11 $ & 1785 & 19 & $\Lambda(1690)$ & $3/2^-$ & $\star\star\star \star$ & 1685-1695 & 50-70 \\ & $1916+i13^{*} $ & 1914 & 59 & $\Lambda(2000)$ & $?^?$ & $\star$ & $\approx 2000$ & 73-240\\ & $2161+i17 $ & 2158 & 29 & & & & & \\ \hline $-1,1$ & $ - $ & 1839& 58 & $\Sigma(1940)$ & $3/2^-$ & $\star\star\star$ & 1900-1950 & 150-300\\ & $ - $ & 2081 & 270 & & & & & \\ \hline $-2,1/2$& $2044+i12^{*} $ & 2040 & 53 & $\Xi(1950)$ & $?^?$ & $\star\star\star$ & $1950\pm15$ & $60\pm 20$ \\ & $2082+i5^{*} $ & 2082 & 32 & $\Xi(2120)$ & $?^?$ & $\star$ & $\approx 2120$ & 25 \\ \hline\hline \end{tabular} \caption{The properties of the nine dynamically generated resonances and their possible PDG counterparts for $J^P=3/2^-$. The numbers with asterisk in the imaginary part of the pole position are obtained without convoluting with the vector mass distribution of the $\rho$ and $K^*$.} \label{tab:pdg32} \end{center} \end{table} \section{Hidden charm baryons from vector-baryon interaction} Following the idea of \cite{angelsvec} it was found in \cite{wu} that several baryon states emerged as hidden charm composite states of mesons and baryons with charm. In particular, in the context of vector-baryon interaction, a hidden charm baryon that couples to $J/\psi N$ and other related channels was found, as shown in Tables \ref{jpsicoupling} and \ref{jpsiwidth}. This will play a role later on when we discuss the $J/\psi$ suppression in nuclei. 
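All of the pole positions and couplings quoted in the tables derive from the coupled-channel equation \eqref{eq:Bethe}, which is straightforward to implement numerically. The sketch below is purely illustrative: the channel masses, kernel strengths $C_{ij}$, and loop-function parameters are synthetic placeholders (not the actual coefficients or subtraction constants of the model), chosen only to show how a pole of $[1-VG]^{-1}$ appears as a peak in $|T|^2$.
\begin{verbatim}
import numpy as np

f = 0.093                                  # pion decay constant in GeV
chan = np.array([[0.770, 0.939],           # toy channel 1: (vector, baryon) masses
                 [0.892, 1.116]])          # toy channel 2
C = np.array([[2.0, 1.0],                  # made-up C_ij coefficients
              [1.0, 0.0]])

def G_loop(s, mV, mB, a=-2.0, mu=0.63):
    # Toy meson-baryon loop: crude real part plus the unitarity cut above threshold
    re = 2 * mB * (a + np.log(s / mu**2)) / (16 * np.pi**2)
    lam = max((s - (mV + mB)**2) * (s - (mV - mB)**2), 0.0)
    q = np.sqrt(lam) / (2 * np.sqrt(s))    # CM momentum, zero below threshold
    return re - 1j * 2 * mB * q / (8 * np.pi * np.sqrt(s))

for E in np.linspace(1.60, 2.10, 6):       # total energy in GeV
    s = E * E
    k0 = [(s + mV**2 - mB**2) / (2 * E) for mV, mB in chan]
    V = np.array([[-C[i, j] * (k0[i] + k0[j]) / (4 * f**2) for j in range(2)]
                  for i in range(2)])
    G = np.diag([G_loop(s, mV, mB) for mV, mB in chan])
    T = np.linalg.inv(np.eye(2) - V @ G) @ V          # Eq. (eq:Bethe)
    print(f"E = {E:.3f} GeV, |T_11|^2 = {abs(T[0, 0])**2:10.1f}")
\end{verbatim}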
The calculations in the charm sector require an extension of the hidden gauge Lagrangians to SU(4), but the symmetry is explicitly broken by using the physical masses of the hadrons involved in the processes. Accordingly, when the exchange of a heavy vector meson is implied, the appropriate reduction of the Feynman diagram is taken into account. In section \ref{supp} we report on the study of $J/\psi$ propagation in nuclei. \begin{table}[ht] \renewcommand{\arraystretch}{1.1} \setlength{\tabcolsep}{0.4cm} \begin{center} \begin{tabular}{cccccc}\hline $(I, S)$& $z_R$ & \multicolumn{4}{c}{$g_a$}\\ \hline $(1/2, 0) $ & & $\bar{D}^{*} \Sigma_{c}$ & $\bar{D}^{*} \Lambda^{+}_{c}$& $J/\psi N$ \\ & $4415-9.5i$ & $2.83-0.19i $ &$-0.07+0.05i $ & $-0.85+0.02i$ \\ & &$ 2.83 $ &$0.08 $ & $0.85$ \\ \hline $(0, -1) $ & & $\bar{D}^{*}_{s} \Lambda^{+}_{c}$ & $\bar{D}^{*} \Xi_{c}$ & $\bar{D}^{*} \Xi'_{c}$ & $J/\psi \Lambda$\\ & $4368-2.8i $ & $1.27-0.04i $ &$ 3.16-0.02i $ & $-0.10+0.13i $ & $0.47+0.04i $ \\ & & $1.27 $ & $3.16 $ & $0.16 $ & $0.47 $ \\ & $4547-6.4i $ & $0.01+0.004i$ & $0.05-0.02i$ & $2.61-0.13i $ & $-0.61-0.06i $ \\ & & $0.01 $ & $0.05 $ & $2.61$ & $0.61 $ \\ \hline\end{tabular} \caption{Pole position ($z_R$) and coupling constants ($g_a$) to various channels for the states from $VB\rightarrow VB$ including the $J/\psi N$ and $J/\psi\Lambda$ channels.} \label{jpsicoupling} \end{center} \renewcommand{\arraystretch}{1.1} \setlength{\tabcolsep}{0.4cm} \begin{center} \begin{tabular}{ccccc}\hline $(I, S)$ & $z_R$ & \multicolumn{2}{c}{Real axis} & $\Gamma_i$ \\ & & $M$ & $\Gamma$ & \\ \hline $(1/2, 0)$ & & & & $J/\psi N$\\ & $4415-9.5i$ & $4412$ & $47.3$ & $19.2$ \\ \hline $(0, -1)$ & & & & $J/\psi\Lambda$\\ & $4368-2.8i$ & $4368 $& $28.0 $ & $5.4$ \\ & $4547-6.4i $ & $4544 $& $36.6 $ & $13.8$ \\ \hline\end{tabular} \caption{Pole position ($z_R$), mass ($M$), total width ($\Gamma$, including the contribution from the light meson and baryon channel) and the decay widths for the $J/\psi N$ and $J/\psi\Lambda$ channels ($\Gamma_i$). The units are in MeV.} \label{jpsiwidth} \end{center} \end{table} \section{The properties of $K^*$ in nuclei} Much work on the vector mesons $\rho,\phi,\omega$ in nuclei has been done by looking for dileptons \cite{na60,wood,hades,buss,naruki}. Perhaps the fact that the $K^*$ cannot be studied in this way is what has prevented attention from being directed to the renormalization of the $K^*$ in nuclei. Recently, however, this problem has been addressed in \cite{lauraraquel} with very interesting results. The $K^{*-}$ width in vacuum is determined in \cite{lauraraquel} by evaluating the imaginary part of the free $\bar K^*$ self-energy at rest, ${\rm Im} \Pi^0_{\bar K^*}$, due to the decay of the $\bar{K}^*$ meson into $\bar{K}\pi$ pairs, using the model parameters of the Lagrangians described in Sect.~\ref{sec:formalism}. The obtained width, $\Gamma_{K^{*-}}=-\mathrm{Im}\Pi_{\bar{K}^*}^{0}/m_{\bar K^*}=42$~MeV, is quite close to the experimental value of $50.8\pm 0.9$ MeV.
\begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth,height=5cm]{Fig2.eps} \hfill \caption{Self-energy diagrams from the decay of the $\bar{K}^*$ meson in the medium.} \label{fig:1} \end{center} \end{figure} The $\bar{K}^*$ self-energy in matter results, on the one hand, from its decay into ${\bar K}\pi$, $\Pi_{\bar{K}^*}^{\rho,{\rm (a)}}$, including both the self-energy of the antikaon \cite{Tolos:2006ny} and that of the pion \cite{Oset:1989ey,Ramos:1994xy} (see the first diagram of Fig.~\ref{fig:1} and some specific contributions in diagrams $(a1)$ and $(a2)$ of Fig.~\ref{fig:3}). Moreover, vertex corrections required by gauge invariance are also incorporated; they are associated with the last three diagrams in Fig.~\ref{fig:1}. Another contribution to the $\bar K^*$ self-energy comes from its interaction with the nucleons in the Fermi sea, as displayed in diagram (b) of Fig.~\ref{fig:3}. This accounts for the direct quasi-elastic process $\bar K^* N \to \bar K^* N$, as well as for other absorption channels $\bar K^* N\to \rho Y, \omega Y, \phi Y, \dots$ with $Y=\Lambda,\Sigma$. This contribution is determined by integrating the medium-modified $\bar K^* N$ amplitudes, $T^{\rho,I}_{\bar K^*N}$, over the Fermi sea of nucleons and, therefore, it is sensitive to the resonant structures in these amplitudes. In particular, the in-medium amplitudes retain clear traces of two resonances, generated from the $\bar K^* N$ interaction and related channels at 1783 MeV and 1830 MeV \cite{angelsvec}, which can be identified with the experimentally observed $J^P = 1/2^-$ states $\Lambda(1800)$ and $\Sigma(1750)$, respectively. Note that the self-energy $\Pi_{\bar{K}^*}^{\rho,{\rm (b)}}$ has to be determined self-consistently, since it is obtained from the in-medium amplitude ${T}^\rho_{\bar K^*N}$, which contains the $\bar K^*N$ loop function ${G}^\rho_{\bar K^*N}$, and this last quantity is itself a function of the complete self-energy $\Pi_{\bar K^*}^{\rho}=\Pi_{\bar{K}^*}^{\rho,{\rm (a)}} +\Pi_{\bar{K}^*}^{\rho,{\rm (b)}}$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{new_diagrams.eps} \caption{Contributions to the $\bar K^*$ self-energy, depicting their different inelastic sources.} \label{fig:3} \end{center} \end{figure} The two contributions to the $\bar K^*$ self-energy, coming from the decay into $\bar K \pi$ pairs in the medium [Figs.~\ref{fig:3}(a1) and \ref{fig:3}(a2)] or from the $\bar K^* N$ interaction [Fig.~\ref{fig:3}(b)], provide different sources of inelastic $\bar K^* N$ scattering, which add incoherently to the $\bar K^*$ width. Note that the $\bar K^* N$ amplitudes mediated by intermediate $\bar K N$ or $\pi Y$ states are not unitarized, in contrast to what is done for the contributions from intermediate $VB$ states. The problem arises because the exchanged pion may be placed on its mass shell, which forces one to keep track of the proper analytical cuts, making the iterative process more complicated. A technical solution can be found by calculating the box diagrams of Figs.~\ref{fig:3}(a1) and \ref{fig:3}(a2), taking all the cuts into account properly, and adding the resulting $\bar K^* N \to \bar K^* N$ terms to the $VB \to V^\prime B^\prime$ potential coming from vector-meson exchange, in a similar way as done for the study of the vector-vector interaction in Refs.~\cite{raquel,gengvec}. As we saw in the former sections, the generated resonances barely change their position for spin 3/2, and only by a moderate amount in some cases for spin 1/2.
Their widths are somewhat enhanced due to the opening of the newly allowed $PB$ decay channels \cite{garzon}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{Fig5_colouronline.eps} \hfill \caption{$\bar K^*$ self-energy for $\vec{q}=0 \, {\rm MeV/c}$ and $\rho_0$.} \label{fig:auto-spec} \end{center} \end{figure} The full $\bar K^*$ self-energy as a function of the $\bar K^*$ energy, for zero momentum at normal nuclear matter density, is shown in Fig.~\ref{fig:auto-spec}. We explicitly indicate the contribution to the self-energy coming from the self-consistent calculation of the $\bar K^* N$ effective interaction (dashed lines) and the self-energy from the $\bar K^* \rightarrow \bar K \pi$ decay mechanism (dot-dashed lines), as well as the combined result from both sources (solid lines). Around $q^0= 800$--$900$ MeV we observe an enhancement of the width as well as some structures in the real part of the $\bar K^*$ self-energy. The origin of these structures can be traced back to the coupling of the $\bar K^*$ to the in-medium $\Lambda(1783) N^{-1}$ and $\Sigma(1830) N^{-1}$ excitations, which dominate the $\bar K^*$ self-energy in this energy region. However, at lower energies, where the $\bar K^* N\to V B$ channels are closed, or at large energies, beyond the resonance-hole excitations, the width of the $\bar K^*$ is governed by the $\bar K \pi$ decay mechanism in dense matter. As we can see, around $q^0=m_{\bar{K}^*}$ the $\bar K^*$ feels a moderately attractive optical potential and acquires a width of $260$ MeV, which is about five times its width in vacuum. A method to measure this large width experimentally was devised in \cite{lauraraquel}, suggesting the use of the transparency ratio, defined as \begin{equation} T_{A} = \frac{\tilde{T}_{A}}{\tilde{T}_{^{12}C}} \hspace{1cm} ,{\rm with} \ \tilde{T}_{A} = \frac{\sigma_{\gamma A \to K^+ ~K^{*-}~ A'}}{A \,\sigma_{\gamma N \to K^+ ~K^{*-}~N}} \ . \end{equation} The quantity $\tilde{T}_A$ is the nuclear $K^{*-}$-photoproduction cross section divided by $A$ times the same quantity on a free nucleon. It describes the loss of flux of $K^{*-}$ mesons in the nucleus and is related to the absorptive part of the $K^{*-}$-nucleus optical potential and, therefore, to the $K^{*-}$ width in the nuclear medium. In order to remove other nuclear effects not related to the absorption of the $K^{*-}$, it was also suggested to take this ratio with respect to that of a nucleus like $^{12}$C, giving $T_A$ (see \cite{luismagas}). Results for $T_A$ as a function of $A$ can be seen in \cite{lauraraquel}, indicating a sizable depletion of $\bar K^*$ production in nuclei, which we encourage to be measured. \section{$J/\psi$ suppression} \label{supp} $J/\psi$ suppression in nuclei has been a hot topic \cite{Vogt:1999cu}, among other reasons for its possible interpretation as a signature of the formation of the quark-gluon plasma in heavy-ion reactions \cite{Matsui:1986dk}, although many other interpretations have been offered \cite{Vogt:2001ky,Kopeliovich:1991pu,Sibirtsev:2000aw}. In a recent paper \cite{raquelxiao}, a study was made of different $J/\psi N$ reactions which lead to $J/\psi$ absorption in nuclei. The reactions considered are the transition of $J/\psi N$ to $VN$, with $V$ being a light vector ($\rho, \omega,\phi$), together with the inelastic channels $J/\psi N \to \bar D \Lambda_c$ and $J/\psi N \to \bar D \Sigma_c$.
Analogously, the mechanisms where an exchanged $D$ collides with a nucleon, giving rise to $\pi \Lambda_c$ or $\pi \Sigma_c$ states, are also considered. The total, elastic and inelastic cross sections, obtained from the unitarized $J/\psi N \to J/\psi N$ amplitude where only intermediate vector-baryon states are considered, are shown in Fig.~\ref{crosec}. We can clearly see the peak around 4415 MeV produced by the hidden-charm resonance, described in Tables \ref{jpsicoupling} and \ref{jpsiwidth}, dynamically generated from the interaction of $J/\psi N$ with its coupled $VB$ channels. \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\textwidth]{crossection3.eps} \caption{The total, elastic and inelastic cross sections obtained from the unitary $J/\psi N \to J/\psi N$ amplitude involving only intermediate vector-baryon states.}\label{crosec} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{sigtot_sum1aa.eps} \includegraphics[width=0.45\textwidth]{sigtot_sum2aa.eps} \caption{The cross sections for $J/\psi N \to \bar{D} \Lambda_c$ (top) and $J/\psi N\to \bar{D} \Sigma_c$ (bottom).}\label{figcro} \end{center} \end{figure} The cross sections for the inelastic transitions $J/\psi N \to \bar{D} \Lambda_c$ and $J/\psi N\to \bar{D} \Sigma_c$ are shown in Fig.~\ref{figcro}. We can see that the first cross section is sizable and bigger than that from the $VB$ channels. The cross sections for $J/\psi N \to \bar D \pi \Lambda_c$ or $\bar D \pi \Sigma_c$ are small in the region of interest and are not plotted here. The total $J/\psi N$ inelastic cross section, obtained as the sum of all the inelastic cross sections from the different sources discussed above, is shown in Fig.~\ref{sigin} (left). With the inelastic cross section obtained, the transparency ratio for electron-induced $J/\psi$ production in nuclei at beam energies around 10 GeV has been studied. The results are shown in Fig.~\ref{sigin} (right), where the transparency ratio of $^{208}$Pb relative to that of $^{12}$C is displayed as a function of the energy. It is clear that one finds sizable reductions in the rate of $J/\psi$ production in electron-induced reactions. It should be noted that the calculation of the transparency ratio discussed so far does not consider the shadowing of the photons and assumes that they can reach every point without being absorbed. However, for $\gamma$ energies of around 10 GeV, as suggested here, the photon shadowing, or initial photon absorption, cannot be ignored. Taking this into account is easy, since one must simply multiply the ratio $T_A$ by the ratio of $N_{\rm eff}$ for a nucleus of mass $A$ relative to $^{12}$C. This ratio for $^{208}$Pb to $^{12}$C at $E_\gamma =10$ GeV is of the order of $0.8$, although with uncertainties \cite{Bianchi:1995vb}. This factor is applied to the lower curve of Fig.~\ref{sigin} (right) for a proper comparison with experiment. The results for the transparency ratio imply that $30 - 35$\% of the $J/\psi$ produced in heavy nuclei are absorbed inside the nucleus. This is very much in line with the depletion of $J/\psi$ in matter observed in other reactions and offers another perspective on the interpretation of the $J/\psi$ suppression in terms of hadronic reactions, which has also been advocated before \cite{Sibirtsev:2000aw}.
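For orientation, the size of such absorption effects can be estimated with a standard eikonal (Glauber-type) formula, in which a $J/\psi$ produced at a point inside the nucleus escapes along the beam direction with survival probability $e^{-\sigma\int\rho\,dl}$. The sketch below assumes uniform-density spheres and a made-up $J/\psi N$ inelastic cross section; it is not the calculation of \cite{raquelxiao}, which uses realistic densities and the energy-dependent cross sections discussed above.
\begin{verbatim}
import numpy as np

rho0  = 0.17        # fm^-3, normal nuclear matter density
sigma = 0.5         # fm^2 (= 5 mb), made-up J/psi N inelastic cross section

def transparency(A, n=200):
    # Production point weighted by a uniform-sphere density; the J/psi then
    # escapes along +z with survival probability exp(-sigma * rho0 * path).
    R = 1.2 * A ** (1.0 / 3.0)             # fm, uniform-sphere radius
    num, den = 0.0, 0.0
    for b in np.linspace(0.0, R, n):       # impact parameter of the escape line
        zmax = np.sqrt(max(R**2 - b**2, 0.0))
        dz = 2.0 * zmax / n
        for z in np.linspace(-zmax, zmax, n):
            path = zmax - z                # distance to the surface along +z
            num += b * dz * np.exp(-sigma * rho0 * path)
            den += b * dz
    return num / den

print(transparency(208) / transparency(12))   # Pb relative to C
\end{verbatim}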
Apart from novelties in the details of the calculations and the reaction channels considered, we find that the presence of the resonance that couples to $J/\psi N$ produces a peak in the inelastic $J/\psi N$ cross section and a dip in the transparency ratio. However, this dip is washed away when effects of Fermi motion are taken into account. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{sigtot_sum5.eps} \includegraphics[width=0.45\textwidth]{totTA_g12aver.eps} \caption{Left: The total inelastic cross section of $J/\psi N$. Right: The transparency ratio of $J/\psi$ photoproduction as a function of the energy in the CM of the $J/\psi$ with nucleons of the nucleus. Solid line: effects due to $J/\psi$ absorption. Dashed line: including photon shadowing \cite{Bianchi:1995vb}.}\label{sigin} \end{center} \end{figure} \section{Conclusions} We have made a survey of recent developments concerning the interaction of vector mesons with baryons and the properties of some vector mesons in a nuclear medium. We showed that the interaction is strong enough to produce resonant states which can qualify as quasibound states of a vector meson and a baryon in coupled channels. This adds to the wealth of composite states already established from the interaction of pseudoscalar mesons with baryons. At the same time we reported on studies of the mixing of the pseudoscalar-baryon states with the vector-baryon states, which breaks the spin degeneracy of the original model. The mechanisms of vector-baryon interaction extended to the charm sector also produced some hidden charm states which couple to the $J/\psi N$ channel and have some repercussion for the $J/\psi$ suppression in nuclei. We also showed results for the spectacular renormalization of the $\bar K^*$ in nuclei, where the width becomes as large as 250 MeV at normal nuclear matter density, and we suggested experiments that could test this large change. \section*{Acknowledgments} This work is partly supported by DGICYT contract numbers FIS2011-28853-C02-01 and FIS2011-24154, by the Generalitat Valenciana in the program Prometeo 2009/090, and by Grant No. 2009SGR-1289 from Generalitat de Catalunya. L.T. acknowledges support from the Ramon y Cajal Research Programme, and from FP7-PEOPLE-2011-CIG under contract PCIG09-GA-2011-291679. We acknowledge the support of the European Community-Research Infrastructure Integrating Activity Study of Strongly Interacting Matter (acronym HadronPhysics3, Grant Agreement n. 283286) under the Seventh Framework Programme of the EU.
{ "timestamp": "2012-10-16T02:02:16", "yymm": "1210", "arxiv_id": "1210.3738", "language": "en", "url": "https://arxiv.org/abs/1210.3738" }
\section{Introduction} Gamma ray bursts (GRBs) are the most energetic explosions known in the Universe, second only to the Big Bang. Discovered in the 1960s, they were widely believed to originate in the Milky Way because of their relatively high flux of photons: a cosmological origin would require an unprecedented emission mechanism to account for such a high energy output. It was not until 1997, when the first measurement of redshift was performed on a GRB afterglow, that the cosmological nature of these objects was established beyond doubt \citep{1997Natur.387..878M}. GRB afterglows fade within a few hours and, as a consequence, the redshifts of most GRBs are unknown. In the past, several studies were carried out to determine indirectly the Galactic or extra-Galactic nature of the bursts by analyzing their spatial distribution in the sky \citep{1981Ap&SS..80....3M, 1992Natur.355..143M, 1998A&A...339....1B}; historically, this served as a strong argument against a Galactic origin of GRBs \citep{1999ApJS..122..465P}. This technique has also been used to suggest a more local nature of long-lag bursts by showing that they may be related to the super-Galactic structure \citep{2002ApJ...579..386N, 2008A&A...484..143F}. The observed light curve varies from burst to burst, particularly during the prompt phase, when the gamma rays are emitted and one or multiple peaks with a variety of shapes are observed. However, some bursts present a fast rise and exponential decay (FRED hereafter) behavior. These have been correlated with other properties of the bursts \citep{1994ApJ...426..604B}, suggesting that they may be of a different nature than other GRBs. There has been at least one reported GRB that upon closer examination turned out to be a phenomenon from within the Milky Way \citep{2008Natur.455..506C,2008Natur.455..503S}. This source displayed a FRED structure, which leads us to believe that there could be others like it. We aim to estimate the most probable degree of contamination by Galactic sources in certain samples of FREDs. We have organized the paper as follows: in Section~2 we establish the selection criteria of the studied samples. Section~3 describes the methodology used for quantifying the anisotropy and determining the probability of observing these values for both extra-Galactic and Galactic sources while taking into account the exposure of \emph{Swift}. We discuss the results from our analysis in Section~4 and give our main conclusions in Section~5. \section{Sample selection} To achieve a homogeneous sample, only \emph{Swift}-detected GRBs were taken into account. From the catalog of 596 GRBs detected by \emph{Swift} before March 2011, 111 GRBs were selected because they had a FRED structure reported in a GCN. Using the information available in peer-reviewed papers\footnote{Only two peer-reviewed papers were relevant for the sample selection \citep{2011AA...529A.110C,2009AJ....138.1690P}} and other GCN circulars related to the 111 FRED GRBs, the following subsamples were selected\footnote{For a list of the specific selected bursts see Appendix~A}: \begin{itemize} \item \textbf{Sample 1:} All 111 FREDs detected by \textit{Swift} until February 2011. \item \textbf{Sample 2:} 77 FREDs without any measured redshift. \item \textbf{Sample 3:} 71 FREDs without stated high-redshift criteria. \item \textbf{Sample 4:} 59 FREDs without any type of indirect redshift indication. \item \textbf{Sample 5:} 49 FREDs without any redshift indications or multiple peaks.
\end{itemize} \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{Samples} \caption{Sources in each one of the samples. To avoid redundancy, only the sources not present in the subsequent samples were included.} \label{Samples} \end{figure} It is important to note that only sample 5 included solely those bursts that consisted of one pure FRED peak. \section{Anisotropy quantification} It has been shown \citep{1989ApJ...346..960H} that the mean dipolar and quadripolar moments of the Galactic coordinates ($\cos b$ and $\sin^2 b$, where $b$ is the Galactic latitude) are good tools to quantify the isotropy with respect to the Galactic plane~\citep{1994wedo.book.....C}. The degree of isotropy of each sample was calculated using the coordinates available from the gamma-ray burst coordinate network (GCN) circulars for each burst. The results are shown in Table~\ref{SampleMoments}. \begin{table} \centering \begin{tabular} {| c | c | c |} \hline Sample & $\langle\cos b\rangle$ & $\langle\sin^2 b\rangle$ \\ \hline \#1 & 0.7883 & 0.3397 \\ \hline \#2 & 0.8221 & 0.2860 \\ \hline \#3 & 0.8184 & 0.2909 \\ \hline \#4 & 0.8344 & 0.2673 \\ \hline \#5 & 0.8397 & 0.2622 \\ \hline \end{tabular} \caption{Dipolar and quadripolar moments of the samples as a quantitative measurement of the degree of isotropy in the samples.} \label{SampleMoments} \end{table} \subsection{Exposure map} Owing to the nature of its instruments, orbit, and mission, \textit{Swift}'s pointing toward the sky is not homogeneous. It is of particular relevance to note that there has been less integrated exposure time toward the Milky Way's disk than toward the Galactic poles. This fact would represent a bias for this study if left unaccounted for; we therefore created a map by integrating the exposure mask function for the BAT instrument, multiplied by the exposure times of all observations carried out between April 16, 2005 and February 1, 2011, taking into account the pointing and rotation of the BAT instrument\footnote{The method used to derive the exposure map is the same as the one detailed in Veres \emph{et al.}, 2010\nocite{2010AIPC.1279..457V}}. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{ExpMap-Moll} \caption{Swift exposure map in Galactic coordinates derived for this study. Colors represent the exposure time (in seconds).} \label{Exposure} \end{figure} \subsection{Monte Carlo simulations} Monte Carlo simulations were carried out to determine the probability mass function (PMF) of the average dipolar and quadripolar moments of random GRB distributions. To this end, random coordinates were generated, taking care that they were homogeneously distributed on a spherical surface. These random points were then used to determine the PMF of the dipolar and quadripolar moments of random sources in the sky, in order to quantify by how much the observed samples' values deviate from those of a completely isotropic sample. To do this we generated the same number of random points as in each sample, recorded the mean dipolar and quadripolar moments, and iterated a statistically significant number of times ($10^6$--$10^9$ iterations). The histogram of the recorded values was then used to determine the values corresponding to one, two, and three standard deviations ($\sigma, 2\sigma, 3\sigma$).
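For concreteness, the following minimal sketch (our own illustration, not the code used for this study) shows the isotropic step of such a simulation in Python. Directions drawn uniformly on the sphere have $\sin b$ uniform on $[-1,1]$, so the reference moments are $\langle\cos b\rangle=\pi/4\simeq 0.785$ and $\langle\sin^2 b\rangle=1/3$; the exposure weighting of the next subsection, which shifts these reference values, is ignored here.
\begin{verbatim}
# Minimal sketch of the isotropic Monte Carlo step (illustration only):
# estimate the distributions of the mean dipolar <cos b> and quadripolar
# <sin^2 b> moments for samples of n_sources isotropic directions.
import numpy as np

rng = np.random.default_rng(0)

def isotropic_moments(n_sources, n_iter=10**5):
    # For a direction uniform on the sphere, sin(b) is uniform on [-1, 1].
    sin_b = rng.uniform(-1.0, 1.0, size=(n_iter, n_sources))
    dip  = np.sqrt(1.0 - sin_b**2).mean(axis=1)   # <cos b> per sample
    quad = (sin_b**2).mean(axis=1)                # <sin^2 b> per sample
    return dip, quad

dip, quad = isotropic_moments(n_sources=49)       # size of sample 5
lo, hi = np.percentile(quad, [2.28, 97.72])       # two-sigma band
print(f"<sin^2 b>: mean {quad.mean():.4f}, 2-sigma band ({lo:.4f}, {hi:.4f})")
\end{verbatim}
Note that without the exposure weighting this band differs from the one actually used in the analysis below.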
\subsection{Metropolis-Hastings algorithm} To account for the anisotropy of \textit{Swift}'s exposure of the sky, it was necessary to factor in the probability that a particular random source would be detected by \textit{Swift}. We used the Metropolis-Hastings algorithm for this, which can be summarized in four steps: \begin{enumerate} \item Create a random source (set of coordinates). \item Draw a random number uniformly over the range of values of the map (or mask). \item Compare the value of the map at those coordinates to the random number. \item If the value of the map at that point exceeds the random number, the random source is included in the sample for further analysis. Otherwise a new random source is created, and the process is repeated until the required number of sources is obtained. \end{enumerate} This effectively generates random sources that are more likely to appear where the exposure is higher (a minimal code sketch of this accept/reject scheme is given below). This method was tested by generating a statistically significant number of random sources and checking that the resulting map was proportional to the weighting mask, well within normal statistical fluctuations. \subsection{Contamination by Galactic sources} Considering i) that the matter density of the Milky Way, and consequently the number of stellar sources, is roughly correlated with the amount of interstellar dust, and ii) that interstellar dust is transparent to gamma rays, we used maps of dust IR emission~\citep{1998ApJ...500..525S} as a weighting mask for the Metropolis-Hastings algorithm to generate random Galactic sources. The isotropically generated samples were contaminated by increasing the number (N) of Galactically generated random sources, in order to observe how this affected the PMF of their dipolar and quadripolar moments. We considered all possible combinations for the number of GRBs in the different samples and took into account the \textit{Swift} exposure map for each generated source. \section{Results} The Monte Carlo simulations of the isotropically generated random samples (weighted by the \textit{Swift} exposure map) showed that the dipolar and quadripolar moments of the real samples consistently deviated from the average, as shown in Figure~\ref{isotropic}. \begin{figure} \includegraphics[width=0.5\textwidth]{isotropic-dip} \includegraphics[width=0.5 \textwidth]{isotropic-quad} \caption{Dipolar (\textbf{top}) and quadripolar (\textbf{bottom}) moment PMFs for samples of isotropically generated sources weighted by \textit{Swift}'s exposure map for each sample size, and the observed values for each sample (vertical lines).} \label{isotropic} \end{figure} Table \ref{Percentiles} lists the percentile of the population in which each sample is located. Considering that, by definition, one standard deviation lies between the 15.87th and 84.13th percentiles, two standard deviations between the 2.28th and the 97.72nd, and three between the 0.13th and the 99.87th, we observe that, with the exception of the first sample, all samples have dipolar and quadripolar moments located outside two standard deviations.
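As an aside, the accept/reject weighting described in the Metropolis-Hastings subsection above can be sketched as follows (our illustration only; the toy map below stands in for the integrated BAT exposure, and the same routine applies with the dust map when generating Galactic sources):
\begin{verbatim}
# Sketch of the accept/reject (Metropolis-Hastings style) weighting:
# candidates are drawn isotropically and kept with probability
# proportional to the value of the weighting map at their coordinates.
import numpy as np

rng = np.random.default_rng(1)

def weighted_sources(n_sources, weight_map):
    # weight_map(l, b) is assumed normalised to [0, 1].
    accepted = []
    while len(accepted) < n_sources:
        l = rng.uniform(0.0, 2.0 * np.pi)          # Galactic longitude
        b = np.arcsin(rng.uniform(-1.0, 1.0))      # latitude, isotropic
        if rng.uniform() < weight_map(l, b):       # accept/reject step
            accepted.append((l, b))
    return np.array(accepted)

# Toy stand-in for the exposure map: more exposure toward the poles.
toy_map = lambda l, b: 0.3 + 0.7 * np.sin(b)**2
sample = weighted_sources(49, toy_map)
print("weighted <sin^2 b>:", (np.sin(sample[:, 1])**2).mean())
\end{verbatim}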
\begin{table} \centering \begin{tabular}{|c|c|c|} \hline Sample & Dipolar & Quadripolar \\ \hline \#1 & 78.24 & 13.26 \\ \hline \#2 & 97.91 & 0.59 \\ \hline \#3 & 96.72 & 1.03 \\ \hline \#4 & 98.52 & 0.44 \\ \hline \#5 & 97.83 & 0.39 \\ \hline \end{tabular} \caption{Percentile of the values observed for the dipolar and quadripolar moments of each sample.} \label{Percentiles} \end{table} The probability distributions of samples that contained both isotropically and Galactically generated sources allowed us to compare how the contamination by Galactic sources affected the likelihood of obtaining certain moment values; see, for example, Fig.~\ref{49Gaussians}. This technique is similar to the one used in the past for studying the degree of contamination by Galactic repeater gamma-ray sources present in two GRB catalogs \citep{1998A&A...336...57G}. \begin{figure} \includegraphics[width=0.5\textwidth]{dipplot} \includegraphics[width=0.5 \textwidth]{quadplot} \caption{Example of dipolar (\textbf{top}) and quadripolar (\textbf{bottom}) moment probability distributions for samples of 49 randomly generated sources with an increasing number (\textbf{N}) of those sources being of Galactic origin, and the observed value for sample \#5 (dashed vertical line). Each line has a different number of Galactic sources, starting with N=0 (red); only every fifth line was colored for easier reading.} \label{49Gaussians} \end{figure} By observing the probability of the observed values in each one of the curves that resulted from the simulations, we determined the relative probability that each combination of isotropically and Galactically generated sources would yield the observed moments. Figure \ref{Probabilities} shows this probability as a function of the number of Galactic sources introduced in each sample. \begin{figure} \includegraphics[width=0.5\textwidth]{dipprob} \includegraphics[width=0.5 \textwidth]{quadprob} \caption{Relative probability of obtaining the dipolar (\textbf{top}) and quadripolar (\textbf{bottom}) moments $\pm 0.002$ measured for our samples. } \label{Probabilities} \end{figure} \section{Conclusions} With the exception of the first sample, all observed samples show dipolar and quadripolar moments outside two standard deviations from the mean of an isotropically generated distribution. Although this result is not conclusive, there is a high probability that the samples are not of a purely extra-Galactic nature. The probability of obtaining the dipolar and quadripolar moments measured in the samples is much higher when a significant number of Galactic sources is included than it is for purely isotropically generated sources. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Sample \#:} & 1 & 2 & 3 & 4 & 5 \\ \hline \textbf{Dipolar:} & 13 & 20 & 17 & 18 & 16 \\ \hline \textbf{Dipolar \%:} & 11.71\% & 25.97\% & 23.94\% & 30.51\% & 32.65\% \\ \hline \textbf{Quadripolar:} & 12 & 22 & 20 & 21 & 18 \\ \hline \textbf{Quad. \%:} & 10.81\% & 28.57\% & 28.17\% & 35.59\% & 36.73\% \\ \hline \end{tabular} \caption{Number of Galactic sources that yields the highest probability of obtaining the observed dipolar and quadripolar moments, and the percentage of the sample size they imply.} \label{contamination} \end{table} As shown in Table \ref{contamination}, if we consider the amount of contamination that yields the highest probability of obtaining the observed values, the number of Galactic sources that are probably contaminating the \emph{Swift} GRB catalog is between 16 and 22. This value represents approximately $3\%$ of the catalog used for this study. Sample 5 has been narrowed down to the point that one out of every three of its sources is likely Galactic; accordingly, it is of great interest to study these sources in more detail to determine whether there are other indications that they are not GRBs. The high Galactic extinction discourages optical ground-based spectroscopy of most low Galactic latitude GRBs. We showed that a large part of those abandoned follow-ups could reveal a missing population of Galactic events. We encourage ground observers to follow up those events, since it might lead to the discovery of unknown high-energy phenomena in our Galaxy. \begin{acknowledgements} We have made use of J. Greiner's GRB Table (http://www.mpe.mpg.de/~jcg/grbgen.html). This research has been partially supported by the Spanish Ministry of Economy and Competitiveness under the programmes AYA2011-24780/ESP, AYA2009-14000-C03-01/ESP, and AYA2012-39362-C02-02, and by OTKA grant K077795. \end{acknowledgements} \bibliographystyle{aa}
{ "timestamp": "2013-01-23T02:02:56", "yymm": "1210", "arxiv_id": "1210.3699", "language": "en", "url": "https://arxiv.org/abs/1210.3699" }
\section{Introduction} \label{introduction} Microsphere composites are used in a multitude of industrial applications. Good examples are as ultra-low density fillers in engineering materials such as composites, coatings, sealants, explosives, automotive components, paints, crack fillers and elastomers, and as blowing agents in printing inks \citep{ash07}. The microsphere is a spherical particle, a few microns in diameter, with a thermoplastic shell, the shell thickness to diameter ratio typically being of the order of $0.01$. The use of microspheres in materials brings forth numerous benefits, which include reducing density, improving stability, increasing impact strength, providing a smoother surface finish, increasing thermal insulation, increasing compressibility and often reducing costs. A scanning electron microscopy image of the microstructure of a hollow glass microsphere composite (with very high volume fraction) is shown in Fig.\ \ref{fig:HGM}, taken from \cite{li11}. \begin{figure}[ht] \centering \subfigure[]{ \includegraphics[scale=0.26]{figures/14.eps} \label{figures/14.eps} } \subfigure[]{ \includegraphics[scale=0.26]{figures/15.eps} \label{figures/15.eps} } \caption[]{Scanning electron microscopy image of (a) a Hollow Glass Microsphere composite material and (b) its surface/shell structure. In this situation (with applications to thermal conductivity) the composite is densely filled with microspheres. Reproduced (with kind permission) from \cite{li11}.} \label{fig:HGM} \end{figure} The application of interest in this paper is that of acoustics, using microsphere composites as a means of reducing sound reflection. The composite consists of an elastomeric matrix phase, inside which are located a large number of randomly distributed Expancel microspheres, see Fig.\ \ref{figures/14p.eps} \cite{shorter2008}. Of specific interest is how sound reflection can be affected by a macroscopic hydrostatic pressure applied to the microsphere composite. These materials have been found to be useful in such conditions because the presence of the reinforcing shell delays the onset of the cavity collapse and the consequent degradation in the acoustic performance of the composite. In order to understand exactly how the acoustic characteristics of the material are affected by applied pressure, it is necessary to develop models that describe how the composite deforms mechanically under this applied loading. Experimentally it is known that the constitutive \textit{pressure-relative volume change} curve is nonlinear (and hysteretic during unloading), but the dominant physical mechanisms contributing to this nonlinearity are still not fully understood. In Fig.\ \ref{figures/15p.eps} we illustrate the constitutive behaviour of the material with some experimentally determined load curves associated with the composite for increasing volume fractions of the microsphere material \cite{shorter2008}. \begin{figure}[ht] \centering \subfigure[]{ \includegraphics[scale=0.55]{figures/Shorter_image.eps} \label{figures/14p.eps} } \subfigure[]{ \psfrag{s}{stress (MPa)} \psfrag{e}{strain $\%$} \includegraphics[scale=0.45]{figures/const_behave.eps} \label{figures/15p.eps} } \caption[]{In (a) we show an image of a Silicone Room Temperature Vulcanizing (RTV) microsphere elastomer filled with 5\% volume of Expancel microspheres. In (b) we show an experimentally determined stress-strain curve associated with this composite under uniaxial tension.
The solid curve corresponds to the unfilled elastomer and the others to increasing volume fractions of the microsphere phase. Reproduced (with kind permission) from \cite{shorter2008}.} \label{shorterimage} \end{figure} A wide range of work on the modelling of microsphere filled composites has appeared in the literature. For the prediction of their acoustic properties \textit{without applied pressure}, a number of models have been proposed, see e.g.\ \cite{Gau82} and \cite{Baird-Kerr-Townend99} in the elastic and viscoelastic case respectively. Such predictions typically rely upon the use of classical multiple scattering models, the most commonly used being those of \cite{Wat-61, Kus-74a, Bos-74, Gau82, Ans-89}. We note that a useful comparison of experiments and various theories can be found in \cite{Ans-93}. \cite{Gau84} considered the pressure dependence of dynamic moduli, albeit in the simplified case of porous solids, i.e.\ in the absence of the shell phase, so that the effect of the pressure is simply to reduce the pore size, the main interest lying in the dynamic material properties. Here, although we are certainly interested in the dynamic response, we wish first to understand the origins of nonlinearity in the \textit{pressure-relative volume change} loading curve associated with the composite. Significant work has been carried out on the deformation of porous materials, see e.g.\ \cite{Mac-50} for an early model for the associated effective linear elastic properties, and for rubber foam materials, see e.g.\ \cite{Gen59}, \cite{Gib82} and \cite{Lak93}, where the principal mechanisms of deformation are well understood. However, the composite under consideration here has a more complex microstructure, due principally to the presence of the microspheres, and there is a lack of understanding as to how they behave within the elastomeric substrate under applied pressure. Microspheres are also present in the context of syntactic foams (a material comprising a polymeric matrix filled with microspheres). Such materials open up the possibility of low density materials with high tolerance to damage. Much of the recent modelling work in this area however has focused on the effective \textit{linear} elastic properties of the composite. A variety of static homogenization techniques have been used, see e.g.\ \cite{Bardella-Genna01, Gupta-Woldesenbet04, porfiri-gupta09, Tagliavia-Porfiri-Gupta09}. Few models deal with the nonlinear response of a microsphere composite under loading. \cite{Ker02} proposed an elasto-plastic model for the load curve and subsequent prediction of dynamic material properties. One criticism of this model would be that plasticity yields permanent deformation. However, it is well acknowledged that although the load-unload curve is hysteretic, when all load has been removed the material (eventually) returns to its original configuration \citep{private}. Therefore it does not appear that plastic deformation is the cause of nonlinearity. In a related application \cite{Pan08} considered the acoustic response of inhomogeneous media under applied pressure, although it appears that the microstructure is rather different from that considered here. In \cite{shorter2008}, \cite{shorter2010} the problem of the buckling of a single, isolated spherical shell was considered (as a result of axial compression, rather than hydrostatic pressure) using Finite Element Analysis, and results were subsequently compared with experiments involving a table tennis ball embedded inside a transparent elastomer.
Comparisons were made between perfectly bonded and unbonded spherical shells and the subsequent buckling response. The principal argument of the paper was to propose that microsphere buckling is a dominant contributor to the nonlinear behaviour of the pressure-relative volume change curve. The computational work carried out in \cite{shorter2008} suggests that a model incorporating microsphere buckling could successfully predict the nonlinearity of the load curve. Therefore here we develop a fundamental mathematical model for the loading portion of the pressure-relative volume change curve by incorporating volumetric changes and local microsphere shell buckling effects. We assume that there is a distribution of shell thickness to radius ratios and we will suppose that the microspheres are distributed dilutely, so that interaction effects between microspheres can be neglected. Interaction effects will be considered in future work. In order to incorporate the effect of buckling of the microsphere shell, we must understand how a spherical shell buckles inside an elastic medium under far-field hydrostatic pressure. Although a great deal of classical work exists regarding the buckling of spherical shells (where the imposed pressure is on the surface of the shell itself), see for example \cite{wesolowski67,Koiter69,wang_ertepinar72} and more recently \cite{Fu98,goriely_benamar05'}, there is a surprising lack of work regarding the buckling of shells (of any geometry) that are embedded inside another medium. Of specific interest is how the host medium affects the classical buckling pressure. Initial work on the buckling pressure of a spherical shell embedded in an unbounded uniform elastic medium has been carried out by \cite{fox-allwright01} and \cite{Jones-Chapman-Allwright07}. We shall discuss these models and their assumptions later on in the paper, particularly that of \cite{fox-allwright01}, which is the model that we shall adopt for buckling here. As described above, \cite{shorter2008} also carried out some experimental work related to this problem. The fundamental objective here then is to determine a model for the loading portion of the \textit{pressure-relative volume change curve} by considering a distribution of shell thickness to radius ratios for microspheres which are dilutely dispersed throughout the material. We also introduce nonlinear (finite) elasticity in order to incorporate large deformation of the rubber composite in the post-buckling regime. The theory provides a modelling tool with which to assess various scenarios. In particular we are able to assess the sensitivity of effective properties to changes in specific parameters, e.g.\ the distribution of microspheres, the nonlinearity and constitutive behaviour of the constituent materials that make up the composite, and the gas law inside the microspheres. \section{Preliminaries and background} \label{Preliminaries and background} We consider a composite material with two constituents (or \textit{phases} as we shall term them here) known as the matrix and inclusion phase. The matrix phase is a (possibly compressible) homogeneous rubber material and the inclusion phase consists of a distribution of thin spherical shells (possibly filled with some gas) of initial radius $A$ and shell thickness $H$. We allow for the possibility of a distribution of microsphere shell thickness to radius ratios $X=H/A$. This distribution is governed by a probability distribution function $F(X)$. The volume fraction of the inclusion phase is denoted by $\Phi$.
We are specifically interested in the problem where the material is subjected to an external hydrostatic pressure $\hat{p}$. We shall state all pressures relative to atmospheric pressure $p_{\textsc{{atm}}}$ so that upon defining $p=\hat{p}-p_{\textsc{{atm}}}$, $p=0$ corresponds physically to atmospheric pressure. Similarly, we assume that the gas inside the microspheres is also initially at atmospheric pressure so that denoting $\hat{p}_{\textsc{{in}}}$ as the internal hydrostatic pressure we can initially set $p_{\textsc{{in}}}=\hat{p}_{\textsc{{in}}}-p_{\textsc{{atm}}}=0$. Henceforth all pressures are thus defined relative to atmospheric pressure. Consider for the moment a single microsphere embedded in an \textit{unbounded} matrix material so that we assume that the pressure is applied in the `far-field'. At a critical far-field pressure $p_c$, this shell will buckle and the microsphere will lose its compressive rigidity for $p>p_c$. Since we assume that we have a distribution of microspheres, each with a different $X=H/A$, the microspheres will buckle successively as the external pressure is continuously increased. We illustrate this in Fig.\ \ref{sample_cube}. The prediction of the pressure-volume relation, given the volume fraction $\Phi$ of microspheres, the constitutive behaviour of the matrix phase, the elastic properties of the microsphere shell, the knowledge of gas internal to the microspheres and the overall distribution of $X$, is clearly a non-trivial problem. The effects of interaction on buckling have thus far not been studied and therefore here we consider the case where buckling of a microsphere depends only on the far-field hydrostatic pressure $p$ and \textit{not} on the influence of other microspheres. We anticipate therefore that this model is valid for a dilute dispersion of microspheres. We note however, that in many homogenization theories, it is often surprising how accurate dilute dispersion theories are even in the non-dilute regime \citep{Parnell-Abrahams-Brazier-Smith10}. Later work will consider interaction effects in more detail. \begin{figure} \centering \psfrag{p}{\scriptsize{$p$}} \psfrag{d}{\scriptsize{deformation}} \includegraphics[scale=0.4]{figures/hydrostatic_deformation.eps} \caption{A cube of the microsphere composite material (no scale is implied) is subjected to hydrostatic pressure $p$ in the far field with inset figure indicating that the microspheres are spherical pre-buckling. This spherical symmetry is retained until the onset of buckling, after which the shells deform significantly, losing their rigidity in this state, as indicated in the figure inset. We do not indicate the precise structure of the shell post-buckling: this requires detailed post-buckling analysis.} \label{sample_cube} \end{figure} We shall consider each microsphere to have a fixed initial radius $A$ and let the shell thickness $H$ vary, so that $X=H/A$ is governed by a probability distribution function $F(X)$. Alternatively we could consider $H$ fixed and vary $A$ but it transpires that the analysis of the former is more straightforward. (Note that for a single microsphere inclusion the scale invariance means that varying $A$ or $H$ for any fixed $X$ must yield the same result.) In some cases we need to refer to the middle surface of the shell whose radius we denote by $\hat{A}=A-H/2$, and the shell thickness to middle radius ratio as $\hat{X}=H/\hat{A}$. Also the probability distribution function can therefore be given in terms of $\hat{X}$, i.e.\ $F(\hat{X})$. 
With reference to Fig.\ \ref{singleCS} we define a (fictitious) radius $S>A$ by the condition $\Phi=(A/S)^3$ where $\Phi$ is the prescribed volume fraction of microspheres. We then consider how the material deforms, and how the microsphere buckles, given some hydrostatic pressure $p$ in the far-field with this region inside $S$ (containing the microsphere and which we will term the \textit{composite sphere} (CS)) embedded in a purely matrix material. The prescription of the radius $S$ allows us to consider the volume change of the matrix region under compression. \begin{figure} \centering \psfrag{S}{\scriptsize{$S$}} \psfrag{D}{\scriptsize{$A-H$}} \psfrag{A}{\scriptsize{$A$}} \psfrag{p}{\scriptsize{$p$}} \includegraphics[scale=0.5]{figures/single_CS.eps} \caption{A single `composite sphere' loaded by the hydrostatic far-field pressure $p$.} \label{singleCS} \end{figure} We denote by $\kappa,\mu,E,\nu$ the bulk, shear and Young's moduli and Poisson's ratio respectively from linear elasticity and we note the relations $\nu=(3\kappa-2\mu)/(2(3\kappa+\mu))$ and $E=9\kappa\mu/(3\kappa+\mu)$ since later we usually specify $\mu$ and $\kappa$. We will make use of the subscripts $s$ and $m$ when we wish to refer to the shell and to the matrix medium, respectively. Before the microsphere shell buckles (which we shall term the \textit{pre-buckling} stage) we consider the elastic behaviour of both the matrix and microsphere shell to be linear. As will be shown, this is reasonable since the shell stiffness is significantly higher than that of the matrix phase and therefore induced strains in both media will be small (see also section 7 in \cite{Jones-Chapman-Allwright07} for more details). After the microsphere shell buckles (which we term the \textit{post-buckling} stage) we make the assumption that the shell will lose almost all of its rigidity and therefore that the post-buckled microsphere can be replaced by a cavity (whilst still ensuring continuity of displacement and traction between the matrix and fluid as we shall show later). In this post-buckling regime we incorporate nonlinear elastic behaviour by permitting large strains and also nonlinear constitutive behaviour. In order to justify the linear pre-buckling and nonlinear post-buckling assumptions, respectively, we consider the following example. We are here interested in understanding how the stiffness of the shell can make the material more rigid as compared with the case when the shell is absent. To this end we can consider for example the case of a thin glassy shell (see \cite{Baird-Kerr-Townend99}), with surrounding polymeric elastomer composed of a polyurethane material. The matrix Young's modulus can be taken as $E_m=3.6$ MPa and Poisson ratio is typically close to $0.5$ \citep{Diaconu-Dorohoi05}. In terms of bulk and shear moduli, we choose the parameter set \begin{align}\label{material-constants} \mu_s &= 1.26\ \textrm{GPa},& \kappa_s &= 2.1\ \textrm{GPa}, \notag\\ \mu_m &=1.2\ \textrm{MPa},&\kappa_m &= 4\ \textrm{GPa}. \end{align} Note that with this choice, $\nu_m=0.49985$ i.e.\ the matrix is considered essentially incompressible. Next, let us take a microsphere with shell to radius ratio $X=0.01$ and for an imposed scaled far-field pressure $p/\mu_m$, we evaluate the scaled displacement (with notation reported in section \ref{Linear elasticity}) $u_r^m(A)/A$ at $r=A$ (the radius at which the displacement is maximum). 
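As a quick consistency check of the moduli relations stated above (and of the quoted values $\nu_m=0.49985$ and $E_m=3.6$ MPa), the following short computation, our own illustration, evaluates $\nu$ and $E$ from the constants in \eqref{material-constants}:
\begin{verbatim}
# Check of nu = (3k - 2mu)/(2(3k + mu)) and E = 9 k mu/(3k + mu)
# for the shell (s) and matrix (m) constants quoted in the text.
mu_s, ka_s = 1.26e9, 2.1e9   # shell, Pa
mu_m, ka_m = 1.2e6, 4.0e9    # matrix, Pa

nu = lambda ka, mu: (3*ka - 2*mu) / (2*(3*ka + mu))
E  = lambda ka, mu: 9*ka*mu / (3*ka + mu)

print(f"nu_s = {nu(ka_s, mu_s):.3f},   E_s = {E(ka_s, mu_s)/1e9:.2f} GPa")
print(f"nu_m = {nu(ka_m, mu_m):.5f}, E_m = {E(ka_m, mu_m)/1e6:.2f} MPa")
# -> nu_s = 0.250, E_s = 3.15 GPa; nu_m = 0.49985, E_m = 3.60 MPa
\end{verbatim}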
In Fig.\ \ref{displacement} we plot this maximum displacement as a function of the imposed pressure when the shell is and is not present (left and right on the figure, respectively), and in both cases we note that this is predicted by linear elasticity theory. The dashed line denotes the critical pressure for this shell to radius ratio, predicted by the Fok-Allwright buckling criterion \eqref{F-A-criteria} \citep{fox-allwright01}. When the shell \textit{is} included, values of $u_r^m(A)/A$ remain small for applied pressures below this critical value (see the left side of Fig.\ \ref{displacement}). On the contrary, in the absence of the shell, when reasonable values of the pressure are applied, the linear theory is no longer appropriate due to the large values of the scaled displacements $u_r^m(A)/A$ in this case (see the right side of Fig.\ \ref{displacement}). We conclude that in this latter regime we \textit{must} therefore incorporate full nonlinearity in order to permit finite deformations. Although we are concerned with matrix materials that ostensibly behave incompressibly (typically $\mu_m/\kappa_m$ is small, in particular of the order of $10^{-4}$, see \cite{ogden76}), in section \ref{Post-buckling with slight compressibility} we also calculate the volume change post-buckling for a \textit{nearly-incompressible} theory. As will be shown, it is difficult to distinguish between results for the slightly-compressible and incompressible cases, as should be expected. \begin{figure} \centering \psfrag{0.5}[c][l]{\small{$0.5$}} \psfrag{1.0}[c][l]{\small{$1.0$}} \psfrag{1.5}[c][l]{\small{$1.5$}} \psfrag{2.0}[c][l]{\small{$2.0$}} \psfrag{0.005}[c][b]{\small{$0.005\hspace{0.4cm}$}} \psfrag{0.010}[c][b]{\small{$0.010\hspace{0.4cm}$}} \psfrag{0.015}[c][b]{\small{$0.015\hspace{0.4cm}$}} \psfrag{0.020}[c][b]{\small{$0.020\hspace{0.4cm}$}} \psfrag{0.025}[c][b]{\small{$0.025\hspace{0.4cm}$}} \psfrag{0.1}[c][l]{\small{$0.1$}} \psfrag{0.2}[c][l]{\small{$0.2$}} \psfrag{0.3}[c][l]{\small{$0.3$}} \psfrag{0.4}[c][l]{\small{$0.4$}} \psfrag{B}{\footnotesize{$-\dfrac{u_r^m}{A}$}} \psfrag{A}{\footnotesize{$\dfrac{p}{\mu_m}$}} \includegraphics[scale=0.75]{figures/displacement.eps} \caption{Plot of the scaled radial displacement $u_r^m(A)/A$ determined using linear elasticity when the shell is (left) and is not (right) included in the model. Associated values of $X$ used are $X=0.01$ and $X=0$ respectively. The dashed line is the critical pressure predicted by \eqref{F-A-criteria}.} \label{displacement} \end{figure} \section{Pre-buckling behaviour and the buckling model} \label{Pre-buckling behaviour} In the pre-buckling stage, we consider both shell and matrix phase to be compressible linear elastic materials (noting that the matrix is almost incompressible) which are perfectly bonded, and we assume that gas resides inside the microsphere providing a constant internal pressure $p_{\textsc{{in}}}$. We wish to determine the total volume change (relative to the initial volume) of the material and in order to do this we determine the volume change in the composite sphere (CS) when the pressure $p$ is imposed at infinity.
\subsection{Linear elasticity} \label{Linear elasticity} Under the assumption of linear isotropic elasticity, the governing equations of the corresponding static boundary value problem with no body forces are given as follows \begin{equation}\label{linear_equilibrium} \sigma_{ij,i}=0, \end{equation} \begin{equation}\label{strain_tensor} \epsilon_{ij}=\dfrac{1}{2}\left(u_{i,j}+u_{j,i}\right), \end{equation} \begin{equation}\label{linear_constitutive_equations} \sigma_{ij}=\lambda\epsilon_{kk}\delta_{ij}+2\mu \epsilon_{ij}, \end{equation} where $\sigma_{ij}, \epsilon_{ij}$, and $u_i$ are the components of the stress and strain tensors, and the displacements, respectively, and we have introduced the Kronecker delta tensor $\delta_{ij}$. The matrix material is homogeneous with Lam\'e constants $\lambda,\mu$ where $\lambda=\kappa-2\mu/3$. Since the problem is linearly elastic, the geometry is spherically symmetric, and a purely radial stress is applied, spherical symmetry is preserved ($u_r=u_r(r), u_{\theta}=u_{\phi}=0$). Hence, equation (\ref{linear_equilibrium}) reduces to a single second order ordinary differential equation which is independent of the Lam\'{e} moduli. The general solution for the displacement in the shell and medium regions is therefore of the form \begin{equation}\label{solution} u_r^{i}(r)=A_i r + \frac{B_i}{r^2}, \end{equation} where $i=s,m$ refers to the shell and matrix respectively, and $A_i,B_i$ are constants that are fully determined by imposing continuity of displacement and radial stress on $r=A$ and the following loading boundary conditions \begin{equation} \sigma^s_{rr}(A-H)=-p_{\textsc{{in}}},\quad \sigma^m_{rr}(r)\Big|_{r\rightarrow\infty}=-p. \end{equation} \subsection{Pre-buckling: relative volume change for each CS} \label{Pre-buckling: relative volume change for each CS} Let us consider a single CS of initial radius $S$ and volume $V$ containing a microsphere of undeformed radius $A$ and $X=H/A$. When we increase the far-field pressure $p$ ($0<p<p_c$), this volume $V=(4/3)\pi S^3$ reduces to $v=(4/3)\pi s^3$, where $s=S+u^m_r(S)$ denotes the deformed radius of the CS, referring to \eqref{solution}. The relative volume change, say $\delta v$, occurring in the pre-buckling stage is therefore given by \begin{equation}\label{relative_change_volume_pre} \delta v= \frac{V-v}{V}=1-\left(\frac{s}{S}\right)^3. \end{equation} Note that we have assumed the inner pressure inside the microsphere to remain constant under loading. This appears to be reasonable since volume changes will be small, but it will not remain valid in the post-buckling regime, as we shall consider in section \ref{postb}. \subsection{Microsphere buckling} \label{Microsphere buckling} In this section we discuss the buckling of a spherical shell inside an unbounded elastic matrix medium, loaded by a far-field hydrostatic pressure $p$. We employ a buckling model introduced by \cite{fox-allwright01}. Given a distribution of sizes of microspheres inside the material, our aim is to determine which of them, for a given imposed pressure $p$, have buckled and which remain unbuckled. In \cite{fox-allwright01} a criterion was derived for the buckling of a spherical shell embedded in an elastic material and loaded by a far-field hydrostatic pressure, under the main assumptions that deformations are axisymmetric and the shell is inextensible. They also neglected the inner gas pressure, so in our model we must set $p_{\textsc{{in}}}=0$ in the pre-buckling phase.
This latter simplification is, in fact, not too severe as the displacement is affected very little by internal pressure pre-buckling. The assumption of axisymmetric buckling is not a restriction; \cite{wesolowski67} showed that the critical mode number for buckling is the same whether the eigenmode is symmetric or not. Also, for glassy shells in the present model, it is easy to show that the assumption of inextensibility is consistent with the estimates found for the radial and shear stresses. \cite{fox-allwright01} obtained a formula for the critical pressure $p(\hat{X},n)$ in the form \begin{equation}\label{F-A-criteria} \frac{p(\hat{X},n)}{E_s}=\frac{2}{3} \frac{1+\nu_m}{1-\nu_m} \left(1+\frac{1-\nu_s}{1+\nu_m}\frac{E_m}{E_s}\frac{1}{\hat{X}}\right)\left(p_1(n) \hat{X}^3+p_2(n) \hat{X}+p_3(n)\right) \end{equation} where $\hat{X}=H/\hat{A}$ and $n$ are the shell thickness to \textit{middle} radius ratio of the microsphere and the mode number respectively. Note that in the Fok-Allwright approach, $n$ is a natural number greater than 1; dilatational ($n=0$) and rigid-body ($n=1$) modes are not considered. The functions $p_1, p_2$ and $p_3$ are given by \begin{align} \label{abc} & p_1= \frac{n(n+1)-(1-\nu_s)}{12(1-\nu_s^2)}, \quad p_2= \frac{2}{(n-1)(n+2)(1+\nu_s)},\notag\\ & p_3= \frac{E_m}{E_s}\, \frac{(2 n^3-n^2+3n+2)-\nu_m(2n^3-3n^2+5n+2)}{(n-1)^2(n+2)\left(3n+2-2\nu_m(2n+1)\right)(1+\nu_m)}. \end{align} The standard approach would be to specify the shell ratio $\hat{X}$ and the material constants $\nu_s,\nu_m, E_s,E_m$, substitute these into \eqref{F-A-criteria}, and obtain the critical buckling pressure by minimizing with respect to $n$ (which also gives the corresponding buckling mode $n$). Here, however, we need to approach the problem slightly differently, since we have a distribution of microsphere sizes and we wish to know what the state of that distribution of microspheres (buckled/unbuckled) is at a given pressure. It will prove convenient, therefore, to treat $n$ as a parameter, as we now explain. With $\hat{X}$ for now \textit{unspecified}, by a continuity argument we treat $n>1$ as a given real number and determine the minimum by insisting that $\partial p(\hat{X},n)/\partial n=0$, solving for $\hat{X}$. From elementary algebraic considerations it is straightforward to show the existence of a minimum for $p(\hat{X},n)$ via \begin{equation}\label{third_degree} p_1'(n)\hat{X}^3+p_2'(n) \hat{X}+p_3'(n)=0 \end{equation} where the prime denotes differentiation with respect to argument. The real positive root of \eqref{third_degree} depends on the mode number $n$; we denote it by $\hat{X}_c(n)$, where the subscript $c$ refers to \textit{critical}. Thus, we specify $n\in(1,\infty)$, determine the corresponding $\hat{X}_c$ from \eqref{third_degree}, and then the corresponding $p_c=p(\hat{X}_c,n)$ from \eqref{F-A-criteria}. In this manner we therefore know that shells in the range $\hat{X}\leq\hat{X}_c$ are buckled whereas those with $\hat{X}>\hat{X}_c$ remain unbuckled. In Fig.\ \ref{buckling criterions} we plot the predictions given by the \cite{fox-allwright01} model for the critical pressure $p_c$ as a function of the critical size $\hat{X}_c$, letting the buckling mode parameter $n$ lie in the range $[13,1000]$, assuming $p_{\textsc{{in}}}=0$ and given the material constants in \eqref{material-constants}. This range of buckling modes gives rise to realistic pressures; choosing a lower $n$ corresponds to a higher pressure.
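This parametric construction is straightforward to implement. The following sketch (our own illustration, using SymPy for the derivatives in \eqref{third_degree} and NumPy for the cubic roots, with the constants of \eqref{material-constants}) traces out the curve $(\hat{X}_c(n), p_c(n))$:
\begin{verbatim}
# Sketch of the parametric buckling curve (X_c(n), p_c(n)) from the
# Fok-Allwright criterion: for each mode n, X_c solves the cubic
# p1'(n) X^3 + p2'(n) X + p3'(n) = 0, and p_c = p(X_c, n).
import numpy as np
import sympy as sp

n, X = sp.symbols('n X', positive=True)

mu_s, ka_s = 1.26e9, 2.1e9      # shell moduli (Pa)
mu_m, ka_m = 1.2e6, 4.0e9       # matrix moduli (Pa)
nu = lambda ka, mu: (3*ka - 2*mu) / (2*(3*ka + mu))
Ey = lambda ka, mu: 9*ka*mu / (3*ka + mu)
nu_s, nu_m = nu(ka_s, mu_s), nu(ka_m, mu_m)
E_s, E_m = Ey(ka_s, mu_s), Ey(ka_m, mu_m)

p1 = (n*(n+1) - (1-nu_s)) / (12*(1-nu_s**2))
p2 = 2 / ((n-1)*(n+2)*(1+nu_s))
p3 = (E_m/E_s)*((2*n**3 - n**2 + 3*n + 2) - nu_m*(2*n**3 - 3*n**2 + 5*n + 2)) \
     / ((n-1)**2*(n+2)*(3*n + 2 - 2*nu_m*(2*n+1))*(1+nu_m))

p_over_Es = sp.Rational(2,3)*(1+nu_m)/(1-nu_m) \
            * (1 + (1-nu_s)/(1+nu_m)*(E_m/E_s)/X) * (p1*X**3 + p2*X + p3)

dp1, dp2, dp3 = (sp.lambdify(n, sp.diff(f, n)) for f in (p1, p2, p3))
p_fun = sp.lambdify((X, n), p_over_Es)

for n_val in (13, 50, 200, 1000):
    roots = np.roots([dp1(n_val), 0.0, dp2(n_val), dp3(n_val)])
    X_c = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    print(f"n={n_val:5d}: X_c={X_c:.4f}, p_c/mu_m={p_fun(X_c, n_val)*E_s/mu_m:.3f}")
\end{verbatim}
The cubic in \eqref{third_degree} has exactly one positive real root here, since $p_1'>0$ while $p_2'$ and $p_3'$ are negative.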
\begin{figure} \psfrag{0.5}[c][l]{\small{$0.5$\hspace{0.1cm}}} \psfrag{1.0}[c][l]{\small{$1.0$\hspace{0.1cm}}} \psfrag{1.5}[c][l]{\small{$1.5$\hspace{0.1cm}}} \psfrag{2.0}[c][l]{\small{$2.0$\hspace{0.1cm}}} \psfrag{2.5}[c][l]{\small{$2.5$\hspace{0.1cm}}} \psfrag{0.005}[c][b]{\small{$0.005$}} \psfrag{0.010}[c][b]{\small{$0.010$}} \psfrag{0.015}[c][b]{\small{$0.015$}} \psfrag{0.020}[c][b]{\small{$0.020$}} \psfrag{0.025}[c][b]{\small{$0.025$}} \psfrag{B}[cb][lt]{\footnotesize{$\dfrac{p_c}{\mu_m}$}} \psfrag{A}[cl][cr]{\footnotesize{$\hat{X}_c$}} \centering \includegraphics[scale=0.9]{figures/FA13-1000.eps} \caption{Plot of the critical hydrostatic pressure $p_c/\mu_m$ as a function of the critical ratio $\hat{X}_c$ predicted by the \cite{fox-allwright01} model, obtained for $n\in[13,1000]$, when we use the material constants specified in \eqref{material-constants} and we assume $p_{\textsc{{in}}}=0$.} \label{buckling criterions} \end{figure} \section{Post-buckling behaviour: nonlinear elastic response} \label{postb} As emphasized before, as we increase the hydrostatic load $p$ gradually, an increasing number of shells will transition to a buckled state. When a shell buckles, there will be some complex modification to the structure of the shell and the local matrix medium. Buckling of the shell will result in a local loss of rigidity and (initially at least) a macroscopic increase in compressibility. Modelling the exact modification to the numerous shell structures is a formidable task; hence, in order to describe the post-buckling behaviour, we shall make the following simplification. For pressures $p>p_c$, for a CS region, we assume that the spherical shell region is replaced by a spherical cavity which, at the pressure $p_c$, has the same radius as the microsphere at its buckling pressure. The post-buckling stage of behaviour will be analyzed under the nonlinear deformation assumption, because the loss of rigidity of the microsphere will permit finite deformations if appreciable pressure $p/\mu_m$ is applied (note again Fig.\ \ref{displacement}). We wish to understand how an additional increase in pressure in the CS region, past the buckling pressure $p_c$, decreases the volume of the CS further, and subsequently we shall derive the influence of this on the macroscopic volume of the composite material. From the pre-buckling analysis we can determine exactly the predictions of the deformed radii $s_c$ and $a_c$ of the CS (with corresponding initial radii $S$ and $A$ respectively), where the subscript $c$ indicates the critical value at the buckling pressure $p_c$. We can therefore begin our nonlinear analysis from those values, further increasing the load pressure until we reach the chosen load $p$. This two-stage linear-nonlinear approach is, however, not particularly appealing; in fact we are able to consider the volume change in the post-buckling regime by considering an alternative (nonlinear) problem from the outset (i.e.\ increasing the far-field pressure from zero) that is statically equivalent to the linear problem in the pre-buckling regime, as we shall now show. With reference to Fig.\ \ref{staticequiv}, consider a full nonlinear elasticity formulation of the deformation associated with an unstressed medium within which resides a \textit{cavity} with the same radius $A$ as the initially undeformed microsphere.
We consider the deformation of the spherical cavity due to a far-field pressure with an additional internal `shell pressure' denoted by $p_{\textsc{{in}}}^s$ which mimics the residual presence of the shell. We choose this pressure by referring to the (linear) pre-buckling analysis, considering this to be the maximum pressure that the shell exerts on the matrix pre-buckling, i.e.\ \begin{equation}\label{newboundary-condition} p_{\textsc{in}}^s= -\sigma^m_{rr}(A),\ \textrm{at}\ p=p_c, \end{equation} such that at the critical pressure we recover to a good approximation the linear elastic solution, i.e.\ we obtain agreement for $a_c$ and $s_c$. This therefore gives the correct starting point for volume change calculations for $p>p_c$. \begin{figure} \psfrag{A}{$A$} \psfrag{ac}{$a_c$} \psfrag{a}{$\bar{a}$} \psfrag{p}{$p$} \psfrag{pc}{$p_c$} \psfrag{pin}{$p^s_{\textnormal{IN}}$} \psfrag{I}{Increase of external pressure} \psfrag{S}{Statically equivalent} \psfrag{N}{Nonlinear} \psfrag{L}{Linear} \centering \includegraphics[scale=0.65]{figures/static_equiv.eps} \caption{To determine the nonlinear deformation post-buckling we use a problem that is statically equivalent to the true problem, which is drawn schematically here. Above is the linear-nonlinear process of deformation and below the fully nonlinear process. The presence of the shell is represented by the internal pressure $p_{\textnormal{IN}}^s$ in the latter, which ensures the correct radius when $p=p_c$.} \label{staticequiv} \end{figure} We shall consider several assumptions for the nonlinear elastic matrix, and we make use of a \textit{bar} on quantities in the post-buckling regime; in particular the deformed counterparts of the undeformed radii $A,S$ and of the initial volume $V$ are now denoted by $\bar{a}$, $\bar{s}$, and $\bar{v}$ respectively. We must now determine the deformation of the cavity subject to the imposed external pressure. Since the problem is spherically symmetric we assume that the deformation is purely radial and we therefore write this \textit{radial deformation} in spherical polar coordinates as \begin{equation}\label{radial-deformation_spherical} r=r(R),\qquad \theta=\Theta,\qquad \phi=\Phi, \end{equation} where $(R,\Theta,\Phi)$ are the polar coordinates in the reference configuration and $(r,\theta,\phi)$ are the polar coordinates in the current configuration, respectively, with $\textrm{d}r/\textrm{d}R>0$. By the result of Ericksen \citep{ericksen55}, this deformation is not a controllable deformation that is possible in every \textit{compressible} homogeneous and isotropic hyperelastic material. Therefore this \textit{inhomogeneous} deformation for compressible materials has to be discussed in the context of special materials. For example, for a special \textit{Blatz-Ko material} an analytical solution has been found in a parametric way \citep{Chung-horgan-abeyaratne86,horgan89,horgan95zamp}. Six other classes of compressible materials, for which the solution can also be found analytically, have received much attention in the literature \citep{carrol88,carrol91a,carrol91b,murphy92}. For an overview of such results see \cite{horgan2001}. In the incompressible case, thanks to the constraint of incompressibility, the radial deformation \eqref{radial-deformation_spherical} can be treated in a much more straightforward manner and indeed is a universal solution (we refer to section 57 of \cite{truesdell-noll92} for more details).
This deformation satisfies the balance equations with zero body force, its equilibrium is supported by suitable surface tractions alone, and it is the same for all materials (in the class of \textit{constrained} materials). The choice of strain energy function therefore merely dictates the stress field induced by the deformation. The special compressible solutions for radial deformation referred to above are not well suited to describing the constitutive response of a rubber-like material and therefore we choose not to use them here. Our analysis is therefore concerned firstly with purely incompressible materials before we move on to describe nearly-incompressible materials in the context of an asymptotic theory with a small parameter $\mu/\kappa\ll 1$. In the latter case we use a constitutive model proposed in the literature by \cite{horgan-murphy08} which was based on experimental behaviour (see for example \cite{penn70}). For ease of exposition we have placed details of the theory of nonlinear elasticity associated with the subsections to follow in \ref{app:nonlin}. \subsection{Post-buckling with incompressibility} \label{Post-buckling with incompressibility} The polar components of the deformation gradient associated with \eqref{radial-deformation_spherical} are given by \begin{equation}\label{deformation_gradient} \bm{F}= \textrm{diag} (\textrm{d}r/\textrm{d}R,r/R,r/R) \end{equation} and for an incompressible material the constraint of incompressibility $\det{\bm{F}}=1$ implies that \begin{equation}\label{incompressibility_condition} r(R)=\left(R^3+\alpha\right)^{1/3}, \end{equation} where $\alpha=\bar{a}^3-A^3$. The equilibrium equations in the absence of body forces are $\textrm{div}{\bm{T}}=0$ where $\bm{T}$ is the Cauchy stress tensor. In the radially symmetric case these reduce to the single ordinary differential equation \begin{equation}\label{incomp_equation} \dfrac{\textrm{d} T_{rr}}{\textrm{d}r}+\dfrac{2}{r}\left(T_{rr}-T_{\theta\theta}\right)=0 \end{equation} where $\bm{T}$ is derived from a \textit{strain energy function} (SEF) $W$, as described in \ref{app:nonlin}. In this subsection we shall consider two incompressible models, the neo-Hookean material with SEF $W_{\textnormal{NH}}$ and the Mooney-Rivlin material with SEF $W_{\textnormal{MR}}$, whose forms are given in \ref{app:nonlin}. Note that up to the point of buckling we have assumed that the pressure inside the microsphere, $p_{\textsc{{in}}}$, is zero, since changes in volume were very small. In the post-buckling stage we require the additional pressure $p_{\textsc{{in}}}^s$ initially, to maintain continuity as described in Fig.\ \ref{staticequiv}, but we shall also now assume that the inner pressure $p_{\textsc{{in}}}$ can be non-constant. This is motivated by the fact that volume changes can now be large, and therefore in this post-buckling stage the gas interior to the cavity can at some point be compressed so much as to act to stiffen the material. We may therefore anticipate a Boyle's law type relation for a massless ideal gas, of the form \begin{equation} p_{\textsc{{in}}}= p_{\textsc{{in}}}^s+p_{\textsc{{in}}}^b \end{equation} where $p_{\textsc{{in}}}^s$ is a constant that accounts for the residual effect of the buckled shell and \begin{equation} p_{\textsc{{in}}}^b = p_{\textsc{{atm}}}\left(\left(\frac{A}{\bar{a}}\right)^{3\eta}-1\right). \label{boyle} \end{equation} In the latter expression the pressure and volume are related through a polytropic relationship with exponent $\eta=1.4$, the heat capacity ratio of a diatomic gas. Note that all pressures are stated relative to atmospheric pressure, which motivates the form in \eqref{boyle}. Given a SEF $W$, it is straightforward to integrate \eqref{incomp_equation} and apply the traction boundary conditions \begin{equation}\label{new_boundary_conditions} T_{rr}(R)\Big|_{R\rightarrow\infty}=-p,\qquad T_{rr}(A)= -p_{\textsc{{in}}} = -p_{\textsc{{in}}}^s -p_{\textsc{{in}}}^b \end{equation} in order to obtain an expression linking the deformed inner radius $\bar{a}$ and the imposed pressure difference. For the \textit{neo-Hookean} case we obtain \begin{equation}\label{neo-hookean_solution} \frac{p-p_{\textsc{{in}}}^s -p_{\textsc{{in}}}^b}{\mu_m}=\frac{1}{2}\left(\frac{A}{\bar{a}}\right)^4+2\left(\frac{A}{\bar{a}}\right)-\frac{5}{2}, \end{equation} whereas for the \textit{Mooney-Rivlin} model, we find \begin{multline}\label{mooney-rivlin_solution} \frac{p-p_{\textsc{{in}}}^s-p_{\textsc{{in}}}^b}{\mu_m}=\left(\frac{1}{2}+\gamma\right)\left[\frac{1}{2}\left(\frac{A}{\bar{a}}\right)^4+2\left(\frac{A}{\bar{a}}\right)-\frac{5}{2}\right]+\\ \left(\frac{1}{2}-\gamma\right)\left[\left(\frac{A}{\bar{a}}\right)^2-2\left(\frac{\bar{a}}{A}\right)+1\right]. \end{multline} Note that $\gamma$ is a material constant with $-1/2 \le \gamma \le 1/2$, and when $\gamma=1/2$ in \eqref{mooney-rivlin_solution} we recover the neo-Hookean solution \eqref{neo-hookean_solution}. Using the expressions \eqref{neo-hookean_solution} or \eqref{mooney-rivlin_solution} for $\bar{a}$, from \eqref{incompressibility_condition} we can then obtain the (post-buckling) deformed radius of the CS, i.e.\ $\bar{s}=r(S)$, predicted by the neo-Hookean \eqref{neo-hookean} or Mooney-Rivlin \eqref{Mooney-Rivlin} models, respectively. \subsection{Post-buckling with slight compressibility} \label{Post-buckling with slight compressibility} A great deal of work has been presented in the literature regarding the constrained theory of elasticity (e.g.\ incompressible materials), where many solutions have been obtained in order to describe approximations to real materials. They are approximations because, of course, no material is in reality completely incompressible. Numerous constitutive models have been proposed in order to model the true behaviour of the material when there is a slight deviation from incompressibility. We assume, as before, that the material is homogeneous, isotropic, and hyperelastic. An early contribution to this theory was made by \cite{Spencer70}, and an application was considered by \cite{faulkner71}, who studied the time-dependent radial deformation of a thick-walled spherical shell of almost incompressible material. More recent work has been described by Ogden \citep{ogden76,ogden78,ogden97} and by Horgan and Murphy \citep{horgan-murphy07,horgan-murphy07b,horgan-murphy08}. In order to describe the linearity between pressure and volume change (assumed to hold for pressures up to $50$ MPa) summarized by \cite{penn70} for natural rubber, \cite{horgan-murphy08} derived several different forms of the strain energy function, and we take the form $W_{\textnormal{HM}}$ in \eqref{horgan-murphy}. In the case of slight compressibility, we have \begin{equation}\label{compr_condition} \epsilon = \dfrac{\mu_m}{\kappa_m}\ll 1.
\end{equation}
We then consider a regular perturbation problem, seeking asymptotic expansions in powers of $\epsilon$ for the relevant solutions, with the results obtained in section \ref{Post-buckling with incompressibility} arising as the leading order terms. Thus, we anticipate that the leading order deformation will be described by \eqref{incompressibility_condition} and we seek corrections to this, associated with the strain energy function \eqref{horgan-murphy}. We thus derive the deformed radii $\bar{a}$ and $\bar{s}$, which are modified slightly on account of the compressibility of the matrix. We assume that we can write for the deformed radial coordinate
\begin{equation}\label{sligthly_compressible_deformation}
r(R)=r_0(R)+\epsilon r_1(R)+\epsilon^2 r_2(R)+O(\epsilon^3),
\end{equation}
where
\begin{equation}
r_0(R)=\left(R^3+\alpha\right)^{1/3}
\end{equation}
is determined from the incompressible theory (equivalently $\epsilon\rightarrow0$). Employing the asymptotic scheme, whose details we provide in \ref{app:nonlin} for ease of exposition, we derive the correction term $r_1(R)$ in \eqref{sligthly_compressible_deformation} as that given in \eqref{r1}. This allows us to derive the additional volume change due to the compressibility of the matrix medium.

\subsection{Post-buckling: relative volume change for each CS} \label{Post-buckling: relative volume change for each CS}

For $p\geq p_c$, let us consider an initial composite sphere of radius $S$ containing a microsphere with initial shell thickness to radius ratio $X=H/A$. The relative volume change for a buckled microsphere is now given by
\begin{equation}\label{relative_change_volume_buckled}
\delta \bar{v}= \dfrac{V-\bar{v}}{V}=1-\left(\dfrac{\bar{s}}{S}\right)^3,
\end{equation}
where $\bar{s}=r(S)$. For an incompressible matrix, therefore, $\bar{s}$ is evaluated via \eqref{incompressibility_condition} for neo-Hookean or Mooney-Rivlin materials. Alternatively, for a slightly compressible matrix with strain energy function \eqref{horgan-murphy} it is evaluated via \eqref{sligthly_compressible_deformation}.

\section{Predicted pressure-relative volume change curves for the microsphere material} \label{Relative volume change for the entire material}

\subsection{Total relative volume change for the material}

In sections \ref{Pre-buckling: relative volume change for each CS} and \ref{Post-buckling: relative volume change for each CS} we calculated the relative volume change for each composite sphere associated with the pre-buckling and post-buckling stages respectively. The choice of the former or the latter depends upon whether the value of the applied pressure is below or above the theoretical critical pressure required to buckle the microsphere with shell thickness to radius ratio $X$ in the composite sphere, according to the Fok-Allwright theory. Our principal goal is now to use these two models in order to predict the pressure-relative volume change curve for the material as a whole when there is a distribution of different shell thicknesses. Let us introduce a probability distribution function $F(\hat{X})$ which describes the distribution of the microsphere shell thickness to radius ratios. Thus we impose the far-field hydrostatic pressure $p$ and then use the buckling model in section \ref{Microsphere buckling} to predict the critical ratio $X_c$, below which buckling occurs and above which it does not. In this way we establish, at each given pressure $p$, the proportion of microspheres that are in a buckled state.
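Before assembling the macroscopic response, it may help to sketch the per-sphere computation of the preceding subsections in code. The following minimal R sketch is illustrative only: the values of $A$, $S$ and $\mu_m$ are hypothetical, and we take $p_{\textsc{{in}}}^s=p_{\textsc{{in}}}^b=0$. It solves the neo-Hookean relation \eqref{neo-hookean_solution} for $\bar{a}$ at a given applied pressure and then evaluates the relative volume change \eqref{relative_change_volume_buckled} via the incompressibility condition \eqref{incompressibility_condition}.
\begin{verbatim}
# Illustrative R sketch: per-sphere post-buckling volume change for a
# neo-Hookean matrix, taking p_in^s = p_in^b = 0 and hypothetical A, S, mu_m.
A  <- 1      # initial inner (microsphere) radius
S  <- 5      # initial composite-sphere radius
mu <- 1      # matrix shear modulus mu_m; pressures are in units of mu_m

# Right-hand side of the neo-Hookean pressure relation
nh_pressure <- function(abar) mu * (0.5*(A/abar)^4 + 2*(A/abar) - 2.5)

delta_v_buckled <- function(p) {
  # abar < A in compression; nh_pressure is decreasing in abar on (0, A]
  abar <- uniroot(function(a) nh_pressure(a) - p, c(1e-3*A, A))$root
  sbar <- (S^3 + abar^3 - A^3)^(1/3)   # r(R) = (R^3 + alpha)^(1/3)
  1 - (sbar/S)^3
}
\end{verbatim}
The Mooney-Rivlin case follows by replacing \texttt{nh\_pressure} with the right-hand side of \eqref{mooney-rivlin_solution}, and Boyle's law is incorporated by adding $p_{\textsc{{in}}}^b(\bar{a})$ from \eqref{boyle} to the pressure balance.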
Then, in order to determine the macroscopic pressure-relative volume change curve, we use \eqref{relative_change_volume_pre} and \eqref{relative_change_volume_buckled} for the proportions of microspheres that are in the pre-buckled and post-buckled states, respectively. The relative volume change of the entire material, say $\delta \mathcal{V}$, is therefore given by the sum of all of the relative volume changes in each composite sphere, the distribution of $X$ being accounted for by the probability distribution function $F(\hat{X})$. As such we write
\begin{equation}\label{rel_vol_change_entire_material}
\delta \mathcal{V}(p)=\int_0^2 \left(\delta \bar{v}(\hat{X}) \bar{\chi}(\hat{X})+ \delta v(\hat{X}) \chi(\hat{X})\right) F(\hat{X})\ \textrm{d}\hat{X},
\end{equation}
where $\bar{\chi}(\hat{X})=1-\chi(\hat{X})$ and $\chi(\hat{X})$ is the indicator function of the un-buckled state, defined as
\begin{align}
\chi(\hat{X})=\begin{cases} 0, & \hat{X}\in[0,\hat{X}_c], \\ 1, & \hat{X}\in(\hat{X}_c,2]. \end{cases}
\end{align}
Note that in \eqref{rel_vol_change_entire_material} we have allowed $\hat{X}$ to take on all possible values $\hat{X}\in[0,2]$ corresponding to $H\in[0,A]$. However, we note that the buckling theory above is applicable to thin shells only and therefore the choice of $F$ is important in order for \eqref{rel_vol_change_entire_material} to give an accurate prediction. In reality the microsphere shells are very thin ($\hat{X}=O(0.01)$) and thus the distribution function $F$ must model this, as we now describe.

\subsection{Parameter studies} \label{Some predicted pressure-volume curves}

Let us now consider how the \textit{pressure-relative volume change} curve is affected by the numerous parameters in the problem, and in particular we wish to understand the \textit{sensitivity} of the curve to these parameters. We first consider the form of the probability distribution function $F(\hat{X})$. From \cite{PBStech} it appears that the microsphere shell thickness to radius ratio distribution can be described well by a Gamma distribution. We thus define
\begin{equation}\label{gamma}
F(\hat{X})= \left(\dfrac{k}{\hat{X}_0}\right)^k \dfrac{\hat{X}^{k-1}}{\Gamma(k)} \exp\left[-\left(k/\hat{X}_0\right)\hat{X}\right],
\end{equation}
where $k>0$ is the shape parameter, $\hat{X}_0>0$ is the mean value (expectation) of $\hat{X}$ and $\Gamma(k)$ is the Gamma function evaluated at $k$. Parameter studies will involve choosing values for the elastic properties ($\kappa_s,\kappa_m,\mu_s,\mu_m$), the initial volume fraction $\Phi$ of microspheres, the parameters $k$ and $\hat{X}_0$ in \eqref{gamma} and the constitutive model for the nonlinear elastic matrix described in section \ref{postb}. Additionally we must decide whether or not to incorporate Boyle's law inside the microspheres during compression. This parameter set can therefore be chosen in many ways. We select certain cases in order to illustrate specific aspects of the model and hence test the sensitivity of the results to the particular parameters.

\subsubsection{Influence of nonlinear elastic constitutive model} \label{infnonlin}

We start by fixing the material properties as those given in \eqref{material-constants} and we take an initial volume fraction of microspheres as $\Phi=0.05$. Furthermore, within the distribution function $F(\hat{X})$ given in \eqref{gamma} we take the shape parameter $k=8$ as given in \cite{PBStech} and we consider a mean value $\hat{X}_0=0.01$. Finally, we assume that the gas inside the microspheres remains at a constant atmospheric pressure, i.e.\ $p_{\textsc{{in}}}^b=0$.
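In code, the assembly of \eqref{rel_vol_change_entire_material} used to generate the curves that follow is a one-dimensional quadrature against the Gamma density \eqref{gamma}. The R sketch below is illustrative only: the functions \texttt{delta\_v\_pre}, \texttt{delta\_v\_buck} and \texttt{Xc} are hypothetical stand-ins for the pre-buckling volume change \eqref{relative_change_volume_pre}, the post-buckling volume change \eqref{relative_change_volume_buckled} and the Fok-Allwright critical ratio, which are computed as described earlier.
\begin{verbatim}
# Illustrative R sketch: macroscopic relative volume change at pressure p.
# delta_v_pre(p, X), delta_v_buck(p, X) and Xc(p) are assumed supplied by
# the pre-/post-buckling analyses; k and X0 are the Gamma parameters.
k <- 8; X0 <- 0.01
Fdist <- function(X) dgamma(X, shape = k, rate = k / X0)  # density in eq. (gamma)

delta_V <- function(p, delta_v_pre, delta_v_buck, Xc) {
  integrand <- function(X) sapply(X, function(x) {
    dv <- if (x < Xc(p)) delta_v_buck(p, x)  # buckled:    chi_bar = 1
          else           delta_v_pre(p, x)   # un-buckled: chi = 1
    dv * Fdist(x)
  })
  integrate(integrand, 0, 2)$value
}
\end{verbatim}
A pressure-relative volume change curve is then traced by evaluating \texttt{delta\_V} over a grid of pressures.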
On the left of Fig.\ \ref{firstplot} we plot the predicted pressure-relative volume change curve for three different nonlinear elastic constitutive models in the post-buckling regime: neo-Hookean (dotted), Mooney-Rivlin with $\gamma=1/18$ (solid) and slightly compressible Horgan-Murphy (dashed), as well as a purely linear elastic deformation (dot-dashed). As can be seen from the left of Fig.\ \ref{firstplot}, all curves are strongly nonlinear, exhibiting a softening behaviour under loading after the initial linear elastic behaviour pre-buckling at small pressures. As should be expected, all four curves are initially identical, with slight differences appearing in the post-buckling regime where the linear or nonlinear elastic model becomes important. However, even in this regime the curve is fairly insensitive to the constitutive model employed, with the nonlinear models yielding almost identical results. The linear model is fairly close to the nonlinear predictions at these pressures, but as pressure increases the linear results depart significantly, as should be expected from this approximate theory.
\begin{figure} \centering \psfrag{0.2}[c][b]{\small{$0.2$}} \psfrag{0.4}[c][b]{\small{$0.4$}} \psfrag{0.6}[c][b]{\small{$0.6$}} \psfrag{0.8}[c][b]{\small{$0.8$}} \psfrag{0.002}[c][l]{\small{$0.002\hspace{0.1cm}$}} \psfrag{0.004}[c][l]{\small{$0.004\hspace{0.1cm}$}} \psfrag{0.006}[c][l]{\small{$0.006\hspace{0.1cm}$}} \psfrag{0.008}[c][l]{\small{$0.008\hspace{0.1cm}$}} \psfrag{0.010}[c][l]{\small{$0.010\hspace{0.1cm}$}} \psfrag{0.012}[c][l]{\small{$0.012\hspace{0.1cm}$}} \psfrag{0.014}[c][l]{\small{$0.014\hspace{0.1cm}$}} \psfrag{0.0005}[c][c]{\small{$0.5$}} \psfrag{0.0010}[c][c]{\small{$1.0$}} \psfrag{0.0015}[c][c]{\small{$1.5$}} \psfrag{0.0020}[c][c]{\small{$2.0$}} \psfrag{0.0025}[c][c]{\small{$2.5$}} \psfrag{0.0030}[c][c]{\small{$3.0$}} \psfrag{B}{\footnotesize{$\delta \mathcal{V}$}} \psfrag{A}{\footnotesize{$\dfrac{p}{\mu_m}$}} \psfrag{D}{\footnotesize{$\mathcal{D}\times 10^3$}} \psfrag{C}{\footnotesize{$\dfrac{p}{\mu_m}$}} \includegraphics[scale=0.60]{figures/number1.eps} \caption{Left: Predicted pressure-relative volume change curve with fixed parameters $\Phi=0.05$, $\mu_s,\kappa_s,\mu_m,\kappa_m$ as given in \eqref{material-constants} and $F(\hat{X})$ as in \eqref{gamma} with $\hat{X}_0=0.01,k=8$. Nonlinear model for the matrix is taken as neo-Hookean (dotted), Mooney-Rivlin (solid) and slightly-compressible Horgan-Murphy (dashed) with $\gamma=1/18$, and for reference we also plot the response when the post-buckling regime is determined via \textit{linear} elasticity (dot-dash). Right: Plot of the difference $\mathcal{D}$ in the prediction of $\delta \mathcal{V}$ via the alternative nonlinear models. $\mathcal{D}=\delta \mathcal{V}_{MR}-\delta \mathcal{V}_{NH}$ (solid), $\mathcal{D}=\delta \mathcal{V}_{HM}-\delta \mathcal{V}_{MR}$ (dotted), $\mathcal{D}=\delta \mathcal{V}_{HM}-\delta \mathcal{V}_{NH}$ (dashed) and $\mathcal{D}=\delta \mathcal{V}_{LIN}-\delta \mathcal{V}_{MR}$ (dot-dashed).} \label{firstplot} \end{figure}
In order to better compare the models, on the right of Fig.\ \ref{firstplot} we plot the \textit{difference} between the relative volume change predicted by the different elastic post-buckling models. For example the difference between Horgan-Murphy and Mooney-Rivlin is calculated as $\mathcal{D}=\delta \mathcal{V}_{HM}-\delta \mathcal{V}_{MR}$, and analogously for the other two possibilities. Given the scale, it is clear that the difference between any of the nonlinear models is $O(10^{-4})$.
We reiterate that at these pressures the linear elastic model is relatively close to the nonlinear models, but at higher pressures there is a significant departure. Although here we are predominantly interested in $p/\mu_m=O(1)$, we note with reference to Fig.\ \ref{nonlinearmodelbis} that for both \textit{incompressible} nonlinear materials, as $p/\mu_m\rightarrow\infty$, $\delta \mathcal{V}\rightarrow 0.05$, the initial volume fraction $\Phi$ of microspheres, as we should expect in that case (parameters chosen are those associated with Fig.\ \ref{firstplot}). The Horgan-Murphy model (dashed) predicts a slightly different limit for $\delta \mathcal{V}$, as shown in Fig.\ \ref{nonlinearmodelbis}, since this takes into account the slight compressibility of the matrix. The curve associated with the linear elastic model illustrates its unrealistic nature for large deformations.
\begin{figure} \centering \psfrag{5}[c][b]{\small{$5$}} \psfrag{10}[c][b]{\small{$10$}} \psfrag{15}[c][b]{\small{$15$}} \psfrag{20}[c][b]{\small{$20$}} \psfrag{25}[c][b]{\small{$25$}} \psfrag{0.01}[c][l]{\small{$0.01$}} \psfrag{0.02}[c][l]{\small{$0.02$}} \psfrag{0.03}[c][l]{\small{$0.03$}} \psfrag{0.04}[c][l]{\small{$0.04$}} \psfrag{0.05}[c][l]{\small{$0.05$}} \psfrag{B}{\footnotesize{$\delta \mathcal{V}$}} \psfrag{A}{\footnotesize{$\dfrac{p}{\mu_m}$}} \includegraphics[scale=0.75]{figures/NonlinearModelsbis.eps} \caption{Prediction of $\delta \mathcal{V}$ for large values of $p/\mu_m$ with all parameters as given in Fig.\ \ref{firstplot} and plotted for neo-Hookean (dotted), Mooney-Rivlin (solid) and slightly-compressible Horgan-Murphy (dashed) nonlinear models as well as a linear elastic prediction (dot-dashed).} \label{nonlinearmodelbis} \end{figure}

\subsubsection{Influence of volume fraction, shell properties and pressure law}

We concluded in section \ref{infnonlin} that the predicted curves are relatively insensitive to the nonlinear elastic model employed. As a result of this insensitivity, let us now model the matrix as an incompressible Mooney-Rivlin medium, a standard model for a rubber-like matrix medium. Thus, the solid curve from Fig.\ \ref{firstplot} is re-plotted as a reference curve in Fig.\ \ref{secondplot}, and we vary other parameters in order to assess their influence. For each curve we keep all parameters fixed except one control parameter in order to assess its particular effect. Thus, the dashed curve in Fig.\ \ref{secondplot} corresponds to changing only the volume fraction of microspheres from $\Phi=0.05$ to $\Phi=0.1$. The dotted curve corresponds to incorporating Boyle's law \eqref{boyle} for the gas interior to the microsphere instead of constant pressure. The thick dashed curve corresponds to softer shell material properties, i.e.\
\begin{equation}\label{shell_softer}
\mu_s= 0.126\ \textrm{GPa},\quad \kappa_s = 0.21\ \textrm{GPa},
\end{equation}
and the dot-dashed curve is associated with slightly more compressible matrix material properties (for the linear elastic pre-buckling portion of the curve), i.e.\
\begin{equation}\label{host_softer}
\mu_m =1.2\ \textrm{MPa},\quad \kappa_m = 0.4\ \textrm{GPa}.
\end{equation}
These correspond to a Poisson's ratio of $\nu_m=0.4985$. Let us assess each one of these in turn. Increasing the microsphere volume fraction, whilst keeping their distribution fixed, has the expected effect: the curve remains qualitatively similar, becoming relatively softer in the nonlinear regime.
Incorporating Boyle's law should add some stiffness in the post-buckling regime and this can be seen in the figure, although its effect is rather modest. The dotted and solid curves are identical until the post-buckling effects become important, around $p/\mu_m=0.2$, and then increased pressure interior to the microsphere does yield a small additional stiffness. Softer shell properties are expected to have a larger effect in the transition region from pre- to post-buckling, as the shells will clearly buckle at lower pressures. This can be seen in the figure; an order of magnitude change to the properties has modified the curve significantly. However, the post-buckling insensitivity can be seen by virtue of this curve and the solid curve remaining parallel in this regime. Finally, the small decrease in matrix Poisson's ratio corresponding to the dot-dashed curve yields the expected effect: the material becomes slightly softer in the linear region, recovering an identical nonlinear response to the reference Mooney-Rivlin case in the post-buckling region. As perhaps should be expected, there is a great sensitivity to the material properties of the shell, but the model is relatively insensitive to other parameters.
\begin{figure} \centering \psfrag{0.2}[c][b]{\small{$0.2$}} \psfrag{0.4}[c][b]{\small{$0.4$}} \psfrag{0.6}[c][b]{\small{$0.6$}} \psfrag{0.8}[c][b]{\small{$0.8$}} \psfrag{0.000}[c][l]{\small{$$}} \psfrag{0.005}[c][l]{\small{$0.005$}} \psfrag{0.010}[c][l]{\small{$0.010$}} \psfrag{0.015}[c][l]{\small{$0.015$}} \psfrag{0.020}[c][l]{\small{$0.020$}} \psfrag{A}{\footnotesize{$\dfrac{p}{\mu_m}$}} \psfrag{B}{\footnotesize{$\delta \mathcal{V}$}} \includegraphics[scale=0.65]{figures/number2.eps} \caption{Parameter study for the predicted pressure-relative volume change curve. The solid curve corresponds to the parameters assumed in Fig.\ \ref{firstplot} with a Mooney-Rivlin matrix medium. We then vary one specific aspect of the model to assess sensitivity. Different curves correspond to: increasing the microsphere volume fraction from $\Phi=0.05$ to $\Phi=0.1$ (dashed); incorporating Boyle's law \eqref{boyle} for the gas interior to the microsphere instead of constant pressure (dotted); softer shell properties given by \eqref{shell_softer} (thick dashed); slightly more compressible matrix phase given by \eqref{host_softer} (dot-dashed).} \label{secondplot} \end{figure}

\subsubsection{Influence of probability distribution function parameters}

Let us now take the Mooney-Rivlin reference curve (solid) as plotted in Fig.\ \ref{firstplot} and vary the distribution function parameters in $F$ from those of the reference material, $\hat{X}_0=0.01$ and $k=8$. In Fig.\ \ref{thirdplot} we plot the distribution function (left) and corresponding pressure-relative volume change curves (right), first keeping $k=8$ fixed and varying $\hat{X}_0$ (top) and then keeping $\hat{X}_0=0.01$ fixed and varying $k$ (bottom). As perhaps should be expected, there is great sensitivity to the average shell thickness to radius ratio. This can be seen from the large modifications to the pressure-volume curves in the top-right figure. The main effect of varying $k$ is the later (in terms of higher pressure) softening of the material, although its influence is less marked than that of variation in $\hat{X}_0$. This information is useful from the viewpoint of knowing the correct type and distribution of microspheres to use in the composite.
\begin{figure} \centering \psfrag{0.2}[c][b]{\footnotesize{$0.2$}} \psfrag{0.4}[c][b]{\footnotesize{$0.4$}} \psfrag{0.6}[c][b]{\footnotesize{$0.6$}} \psfrag{0.8}[c][b]{\footnotesize{$0.8$}} \psfrag{0.01}[c][b]{\footnotesize{$0.01$}} \psfrag{0.02}[c][b]{\footnotesize{$$}} \psfrag{0.03}[c][b]{\footnotesize{$0.03$}} \psfrag{0.04}[c][b]{\footnotesize{$$}} \psfrag{0.05}[c][b]{\footnotesize{$0.05$}} \psfrag{0.06}[c][b]{\footnotesize{$$}} \psfrag{50}[c][l]{\footnotesize{$50$}\hspace{-0.1cm}} \psfrag{100}[c][l]{\footnotesize{$100$}\hspace{-0.1cm}} \psfrag{150}[c][l]{\footnotesize{$150$}\hspace{-0.1cm}} \psfrag{200}[c][l]{\footnotesize{$200$}\hspace{-0.1cm}} \psfrag{0}[c][l]{\small{$$}} \psfrag{2}[c][l]{\footnotesize{$0.002$}\hspace{0.6cm}} \psfrag{4}[c][l]{\footnotesize{$0.004$}\hspace{0.6cm}} \psfrag{6}[c][l]{\footnotesize{$0.006$}\hspace{0.6cm}} \psfrag{8}[c][l]{\footnotesize{$0.008$}\hspace{0.6cm}} \psfrag{10}[c][l]{\footnotesize{$0.010$}\hspace{0.4cm}} \psfrag{5}[c][l]{\footnotesize{$0.005$}\hspace{0.6cm}} \psfrag{15}[c][l]{\footnotesize{$0.015$}\hspace{0.4cm}} \psfrag{20}[c][l]{\footnotesize{$0.020$}\hspace{0.4cm}} \psfrag{0.005}[c][b]{\footnotesize{$0.005$}} \psfrag{0.010}[c][b]{\footnotesize{$$}} \psfrag{0.015}[c][b]{\footnotesize{$0.015$}} \psfrag{0.020}[c][b]{\footnotesize{$$}} \psfrag{0.025}[c][b]{\footnotesize{$0.025$}} \psfrag{0.030}[c][b]{\footnotesize{$$}} \psfrag{B}{\footnotesize{$F$}} \psfrag{A}{\footnotesize{$\hat{X}$}} \psfrag{D}{\footnotesize{$\delta \mathcal{V}$}} \psfrag{C}{\footnotesize{$\dfrac{p}{\mu_m}$}} \includegraphics[scale=0.65]{figures/number3.eps} \caption{Plots of the distribution function (left) against shell thickness to radius ratio and corresponding pressure-relative volume change curves (right). The parameter $k=8$ is held fixed and we vary $\hat{X}_0$ (top), whereas we keep $\hat{X}_0=0.01$ fixed and vary $k$ (bottom). Specifically we take $\hat{X}_0=0.01, 0.02$ and $0.005$ (solid, dotted and dashed respectively) (top) and $k=8,15$ and $30$ (solid, dotted and dashed respectively) (bottom).} \label{thirdplot} \end{figure}
Finally, we illustrate how the `kink' in the load curve is associated primarily with the distribution function of the shell thicknesses. Let us choose $\hat{X}_0=0.01$ and successively increase $k$, which has the effect of tending the distribution function towards a Dirac delta function, as can be seen in Fig.\ \ref{diraclimit}, where we take $k=8$ (solid), $k=50$ (dotted) and the limit as $k\rightarrow\infty$ (dashed). The limiting case when there is only one size of microsphere shell in the composite ($k\rightarrow\infty$) is reflected in the shape of the pressure-volume curve illustrated on the right of Fig.\ \ref{diraclimit}, manifested by a discontinuity in the derivative of the curve. As $k$ is made finite and progressively reduced, the curve becomes smoother.
\begin{figure} \centering \psfrag{50}[c][l]{\footnotesize{$50$}} \psfrag{100}[c][l]{\footnotesize{$100$}} \psfrag{150}[c][l]{\footnotesize{$150$}} \psfrag{200}[c][l]{\footnotesize{$200$}} \psfrag{250}[c][l]{\footnotesize{$250$}} \psfrag{300}[c][l]{\footnotesize{$300$}} \psfrag{0}[c][l]{\small{$$}} \psfrag{0.005}[c][b]{\footnotesize{$0.005$}} \psfrag{0.010}[c][b]{\footnotesize{$$}} \psfrag{0.015}[c][b]{\footnotesize{$0.015$}} \psfrag{0.020}[c][b]{\footnotesize{$$}} \psfrag{0.025}[c][b]{\footnotesize{$0.025$}} \psfrag{0.030}[c][b]{\footnotesize{$$}} \psfrag{0.2}[c][b]{\footnotesize{$0.2$}} \psfrag{0.4}[c][b]{\footnotesize{$0.4$}} \psfrag{0.6}[c][b]{\footnotesize{$0.6$}} \psfrag{0.8}[c][b]{\footnotesize{$0.8$}} \psfrag{2}[c][l]{\footnotesize{$0.002$}\hspace{0.65cm}} \psfrag{4}[c][l]{\footnotesize{$0.004$}\hspace{0.65cm}} \psfrag{6}[c][l]{\footnotesize{$0.006$}\hspace{0.65cm}} \psfrag{8}[c][l]{\footnotesize{$0.008$}\hspace{0.65cm}} \psfrag{10}[c][l]{\footnotesize{$0.010$}\hspace{0.5cm}} \psfrag{B}{\footnotesize{$F$}} \psfrag{A}{\footnotesize{$\hat{X}$}} \psfrag{D}{\footnotesize{$\delta \mathcal{V}$}} \psfrag{C}{\footnotesize{$\dfrac{p}{\mu_m}$}} \includegraphics[scale=0.6]{figures/diraclimit.eps} \caption{Illustrating how the shape of the load curve is modified in the limit $k\rightarrow\infty$. We choose $\hat{X}_0=0.01$ and take $k=8$ (solid), $k=50$ (dotted) and the limit as $k\rightarrow\infty$ (dashed). The limiting case when there is only one size of microsphere shell in the composite ($k\rightarrow\infty$) is reflected in the shape of the pressure-volume curve illustrated on the right, i.e.\ the appearance of a discontinuity in the derivative of the curve. The curve becomes smoother in this region as $k$ is progressively reduced.} \label{diraclimit} \end{figure}

\section{Conclusions}

We have presented a model that predicts the nonlinear \textit{pressure-relative volume change} loading curve associated with a microsphere elastomeric composite material. The nonlinearity is induced by several mechanisms: (i) incorporating a distribution of microsphere shell thickness to radius ratios, so that shells buckle successively according to the applied load, (ii) modelling the post-buckling behaviour of the matrix as a nonlinear elastic material, (iii) incorporating Boyle's law for the pressure interior to the microsphere in the post-buckling regime. In this initial study we have neglected any interaction between microspheres, both in terms of the buckling analysis and the determination of the change in volume of the composite. Therefore we anticipate that the model is valid only for low volume fractions of microspheres. Parameter studies reveal that, although it appears important to include nonlinear behaviour in the post-buckling stage, the curves are largely insensitive to the chosen nonlinear elastic model. However the curves \textit{are} particularly sensitive to the properties of the microsphere, including shell properties and the distribution of shell thickness, especially the choice of mean shell thickness to radius ratio $\hat{X}_0$. Furthermore, the shape of the `kink' in the load curve is associated primarily with the distribution function of the shell thicknesses. The smaller $k$ is, the smoother the transition to the nonlinear post-buckling state.
We have also seen that when Boyle's law is incorporated in the post-buckling regime (the influence of a gas inside the shell pre-buckling is negligible and so is omitted in the model) there is competition between the softening of the material due to microsphere buckling and stiffening due to Boyle's law. It transpired that the latter contribution is small, as is visible in the transition from pre- to post-buckling in Fig.\ \ref{secondplot}. The near-incompressible nature of the matrix medium dictates that eventually the response of the composite will change, as the pressure increases, from an initially softening material to one which stiffens for large pressures ($p/\mu_m>O(1)$), as is seen in Fig.\ \ref{nonlinearmodelbis}. Note that in order to solve the post-buckling problem of a single spherical cavity embedded in an unbounded medium subject to inner and external hydrostatic pressure we used the theory of almost-incompressible materials, posing an asymptotic expansion for the Horgan-Murphy model \eqref{horgan-murphy} (see \cite{horgan-murphy08}). Follow-on work will consider the accuracy of the buckling model, by comparing with alternative (e.g.\ \cite{Jones-Chapman-Allwright07}) and new models. We shall also consider the effect of the interaction of microspheres on buckling. This is clearly important when volume fractions of microspheres become larger, a common case in practice.

\subsection*{Acknowledgements}

The authors are grateful to the Engineering and Physical Sciences Research Council for funding this work via grant EP/H050779/1. They are also grateful to Dr Philip Cotterill and Dr Peter Brazier-Smith (Thales Underwater Systems Ltd) and Dr John Smith (DSTL) for their assistance regarding various aspects of this work. The authors are also grateful to Professor Bing Li (Technical Institute of Physics and Chemistry, Chinese Academy of Science) and Dr James Busfield (Queen Mary, University of London) for their willingness to provide figures \ref{fig:HGM} and \ref{shorterimage} respectively, in order to reproduce them here.
{ "timestamp": "2012-10-16T02:01:47", "yymm": "1210", "arxiv_id": "1210.3701", "language": "en", "url": "https://arxiv.org/abs/1210.3701" }
\section{Introduction} \label{sec:intro}

This paper considers a novel problem in experimental design: suppose an experimenter is studying a physical or biological system that can be modeled by a diffusion process of the general form
\begin{equation}
d\xbold_t = \fbold(\xbold_t,\theta,u_t) dt + \Sigma(\xbold_t)^{1/2} dW_t~.
\label{eq:sde}
\end{equation}
Here, $\xbold_t$ is the state vector of the system at time $t$, which evolves according to a (deterministic) vector-valued drift function $\fbold(\xbold_t,\theta,u_t)$ that depends on an unknown parameter $\theta$, as well as a time-varying input $u_t$ under the direct control of the experimenter. In addition to the drift $\fbold$, the system is also influenced by noise, modeled as a multivariate white noise $dW_t$ with a state-dependent covariance matrix $\Sigma(\xbold_t)$~. The functional forms of $\fbold$ and $\Sigma$ are assumed known, and experiments are to be conducted to determine the value of $\theta$. We consider the problem: How can the experimenter design a protocol for setting the control signal $u_t$ in real time, using the information available up to time $t$, so as to optimize the information obtained about $\theta$? Fig.~\ref{fig:schematic} provides a schematic illustration of our system.
\begin{figure} \input{schematic.tex} \caption{Schematic representation of overall strategy. A physical or biological system is modeled as a diffusion process with unknown parameters, and experiments are to be conducted to determine the values of these parameters. Using available data up to time $t$, an estimate $\hat{\xbold}_t$ of the state of the system at time $t$ is computed. This is used to compute (in real time) a time-dependent control signal $u_t$ (underlined above), which drives the system toward states that are maximally informative for the parameters in question. The protocol, or {\em control policy}, is represented by the function $F(\xbold,t)$, which can be computed off-line.} \label{fig:schematic} \end{figure}

As a concrete example, consider a typical {\em in vitro} neuron experiment: an electrode is attached to an isolated neuron, through which the experimenter can inject a time-dependent current $I_t$ and measure the resulting membrane voltage $v_t$. By injecting a sufficiently large current, the experimenter can elicit a rapid electrical pulse, or ``spike,'' from the neuron. These are generated by the flow of ionic currents across voltage-sensitive membrane ion channels. A standard model for spike generation in neurons is the Morris-Lecar model \citep{MorrisLecar81}, which has the form
\begin{equation}
\label{eq:ML}
\begin{array}{rl}
C_m dv_t =& \big[I_t - g_K~ w_t~(v_t-E_K) - g_{Ca}~m_\infty(v_t)~(v_t-E_{Ca})\\[1ex]
&~~~~~- g_l~(v_t-E_l)\big]~dt +~\beta_v~dW_1 \\[2ex]
dw_t =& \frac{\phi}{\tau_w(v_t)}~(w_\infty(v_t) - w_t)~dt + \beta_w~\gamma(v_t,w_t)~dW_2\\
\end{array}
\end{equation}
where the auxiliary functions $w_\infty$, $\tau_w$, $m_\infty$~, and $\gamma$ are given in Appendix~\ref{app:ml}. In these equations, $I_t$ represents the externally applied current, which we treat as the control variable; the variable $v_t$ is the membrane voltage. The first equation expresses Kirchhoff's current conservation law, and the constants $E_K$, $E_{Ca}$ and $E_l$ are so-called ``reversal potentials'' associated with the potassium, calcium, and leakage currents, respectively, and represent the equilibrium voltage that would be attained if the corresponding current were the only one present.
The quantities $g_K w_t$, $g_{Ca} m_\infty(v_t)$, and $g_l$ are the corresponding conductances; when divided by the membrane capacitance $C_m$, they give the equilibration rate associated with each type of current \citep{termantrout}. The dimensionless variable $w_t$ is a ``gating variable.'' It takes values in $[0,1]$, and describes the fraction of membrane ion channels that are open at any time. The white noise terms model membrane voltage fluctuations and ion channel noise, respectively. (This model is discussed in more detail in Sect.~\ref{sec:neuro}.)

In typical experiments, the current $I_t$ is a relatively simple function of $t$, e.g., a step, square pulse, or a sinusoid, and the amplitude of $I_t$ is adjusted so that a spike is triggered. In the context of the framework outlined in Fig.~\ref{fig:schematic}, we seek instead to set $I_t$ in real-time\footnote{This can potentially be accomplished in ``dynamic clamp'' experiments, in which the recorded voltages are fed directly into a computer, which also sets the injected current. } according to the measured $v_t$ and a pre-computed control policy. The latter is designed to optimize the information gained about a parameter of interest, e.g., $g_{Ca}$ or $C_m$~, so that a precise estimate can be obtained more quickly.

In this paper, we propose a general, two-step strategy for designing optimal control policies for such dynamic experiments. First, assuming the experimenter can observe the full state vector $\xbold_t$ for all $t$, we show that there exists a function $F$ such that if the experimenter sets the control parameter to be $u_t = F(\xbold_t,t)$ at all times $t$, then the Fisher information of the unknown parameters will be maximized. We call this the {\em full observation} case. The optimal ``control policy'' $F$ in this case can be precomputed off-line using stochastic optimal control techniques. Second, in situations where one only has partial, infrequent, and/or noisy measurements of the system state (the {\em partial observation} case, which is more common in practice), we show that stochastic optimal control can be usefully combined with standard filtering methods to improve the quality of parameter estimates.

We illustrate the methods on three examples: an illustrative example taken from statistical physics (a particle in a double-well potential); the neuron example mentioned above; and an ecological experiment, in which models of the form \eqref{eq:sde} are used to describe relative population sizes in a contained glass tank or chemostat, and the rate at which resources are added to and removed from the system is the control signal. Together, these model systems represent useful contrasts in the quality of data, system behavior, and level of stochastic variation in system dynamics. For example, data from chemostat experiments can be gathered at only infrequent intervals, and are subject to variation due to binomial sampling of the tank and counting error. Their dynamics exhibit relatively high stochasticity but tend to an overall fixed level. In contrast, in neuronal recordings, membrane voltage can be measured nearly continuously and with very high precision, and the signals exhibit small, but important, stochastic fluctuations on top of regular (e.g., oscillatory) behavior. The effect and utility of adaptive control are therefore substantially different in these models.
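For readers who wish to experiment, the following minimal R sketch simulates \eqref{eq:ML} by Euler-Maruyama under a constant injected current. It is illustrative only: the auxiliary functions and constants below are standard Morris-Lecar choices with typical parameter values, not necessarily those of Appendix~\ref{app:ml}, and we simply take $\gamma(v,w)=1$.
\begin{verbatim}
# Illustrative R sketch: Euler-Maruyama simulation of the Morris-Lecar
# model (eq. ML) under constant current. Auxiliary functions and constants
# are standard/typical choices, not the paper's exact ones; gamma(v,w) = 1.
minf <- function(v) 0.5 * (1 + tanh((v + 1.2) / 18))
winf <- function(v) 0.5 * (1 + tanh((v - 2) / 30))
tauw <- function(v) 1 / cosh((v - 2) / 60)

Cm <- 20; gK <- 8; gCa <- 4.4; gl <- 2
EK <- -84; ECa <- 120; El <- -60; phi <- 0.04
bv <- 1; bw <- 0.02; dt <- 0.05; nsteps <- 20000; I <- 90

v <- -60; w <- 0.01; vtrace <- numeric(nsteps)
for (i in 1:nsteps) {
  dv <- (I - gK*w*(v - EK) - gCa*minf(v)*(v - ECa) - gl*(v - El)) / Cm
  dw <- phi * (winf(v) - w) / tauw(v)
  v <- v + dv*dt + (bv/Cm)*sqrt(dt)*rnorm(1)
  w <- min(max(w + dw*dt + bw*sqrt(dt)*rnorm(1), 0), 1)  # keep w in [0,1]
  vtrace[i] <- v
}
# plot(seq_len(nsteps)*dt, vtrace, type = "l")  # spikes for large enough I
\end{verbatim}
In the dynamic-clamp setting described above, the constant $I$ would be replaced by a feedback policy $I_t=F(v_t,t)$.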
We view the main purpose (and contribution) of the present paper as demonstrating the feasibility and utility of this overall strategy. Accordingly, the systems studied here represent the simplest possible models and experiments in their scientific domain, i.e., they are all low-dimensional, involving only one or two state variables. The strategy proposed in this paper can potentially be applied to systems with more than two state variables, but direct application of the numerical methods employed in this paper to systems with more than a few dimensions may be prohibitively expensive. Higher-dimensional systems thus require substantially new numerical techniques and ideas; this is beyond the scope of the present paper, and is the subject of on-going work.

\paragraph{Related work} Recent statistical literature has given considerable attention to providing methods for performing inference in mechanistic, non-linear dynamic systems models, both those described by ordinary differential equations \citep*{Brunel08,RamsayDE,GCC10,HuangWu08} and explicitly stochastic models \citep*{Ionides06,AitShahalia08,Wilkinson06}, along with more general modeling concerns such as providing diagnostic methods for goodness of fit and model improvement \citep{Hooker08,MullerYang10}. However, little attention has been given to the problem of designing experiments on dynamic systems so as to yield maximal information about the parameters of interest. In this paper, we use stochastic optimal control techniques to construct dynamic, data-driven experimental protocols, which can improve the accuracy of parameter estimates based on dynamic experiments. To our knowledge, this is the first time that experimental design has been employed to adaptively select time-varying inputs for explicitly stochastic, nonlinear systems for the purposes of improving statistical inference. Within the context of stochastic models, adaptation can be expected to be particularly important: information about parameters is typically maximized at particular parts of the state space, and the choice of $u_t$ that will maintain a system close to high-information regions will depend on the current state of the system and cannot be determined prior to the experiment.

In related work, experimental design has been considered for ordinary differential equation models in \citet*{Bauer00}, and the choice of subject and sampling times has been investigated in \citet{WuDing02} for mixed-effects models with ordinary differential equations. \citet{HugginsPaninski11} consider the optimal placement of probes on dendritic trees following linear dynamics. The problem considered here is distinct from the optimization of simulation experiments \citep[e.g. in][]{Kleijnen2007,marzouk} in that we consider direct real-world adaptive intervention in the system under study rather than the optimization of simulation methods. The techniques applied in this paper rely on dynamic programming and resemble methods in sequential designs, particularly for nonlinear regression problems and item response theory. See \citet{ChaudhuryMykland93} for likelihood-based methods, \citet{ChalonerVerdinelli95} for a review of Bayesian experimental design methods, \citet{Berger94} for item response theory, and \citet{BartroffLai10} for recent developments. Our problem, however, is quite distinct from the sequential choice of treatments for independent samples that is generally considered.
In this paper, dynamic programming is employed to account for the dependence of the future of the stochastic process on the current choice of the control variable rather than its effect on future design choices via an updated estimate (or posterior distribution) for the parameters of interest. Indeed, unlike most sequential design methods we will not attempt to estimate parameters during the course of the experiment (although this can certainly be incorporated in our framework if necessary). In many contexts -- particularly single neuron experiments -- this would be computationally infeasible given the time-scales involved, although see \citet{ThorbergssonHooker} for techniques in discrete systems. Where the optimal design depends on the parameter, we propose averaging the Fisher Information over a prior, corresponding to the use of a quadratic loss function in \citet{ChalonerLarntz89}. \paragraph{Organization} In Sect.~\ref{sec:ControlTheory}, we provide a precise formulation of the dynamic experimental design problem and outline our solution strategy for ``full observation'' problems where we can observe the state vector continuously. Sect.~\ref{sec:Filtering} discusses our approximate strategy for solving partial observation problems. An illustrative example, taken from statistical physics, is studied in Sect.~\ref{sect:kramers}. A detailed study of the Morris-Lecar neuron model is presented in Sect.~\ref{sec:neuro}, and of the chemostat model in Sect.~\ref{sec:chemostat}. Some final remarks and directions for further research are detailed in Sect.~\ref{sec:Conclusion}. We have implemented all the algorithms described in this paper in R, and used the implementation to compute the examples. The source code is available as an online appendix. \section{Maximizing Fisher Information in Diffusion Processes} \label{sec:ControlTheory} In this section we demonstrate that when the full state vector of the system is continuously observable, the problem of designing optimal dynamic inputs can be cast as a problem in control theory. Once this is done, we can follow well-developed methods. A brief review of some such methods is provided here; interested readers are referred to \citet{Kushner00} for further details. The solution of the ``full observation'' problem described in this section will form the basis for the more realistic but challenging ``partial observation'' problem, which we discuss in Sect.~\ref{sec:Filtering}. \subsection{Problem formulation and general solution strategy} \label{sect:Full Observations} Consider the multivariate diffusion process \eqref{eq:sde}. Our goal is to estimate $\theta$ using observations of $\xbold_t$ for $t\in[0,T]$, up to some final time $T$. At each moment in time, we assume the experimenter can adjust the control $u_t$ using all available information from the past, i.e., $\xbold_s$ for $s<t$. Our problem is to find an {\em optimal control policy,} i.e., a procedure for choosing a control value $u_t$ based on past observations such that the resulting estimator of $\theta$ is ``optimal,'' which we interpret here as meaning ``minimum asymptotic variance.'' We note that the framework proposed here is quite general and can easily accommodate other notions of optimality. 
To begin, recall that the log-likelihood of $\theta$ based on a single realization or sample path of the process $\xbold_t$ in \eqref{eq:sde} is given by
\begin{align*}
l(\theta|\xbold) &= \int_0^T \fbold(\xbold_t,\theta,u_t)^T\cdot\Sigma^{-1}\big(\xbold_t\big)\cdot d\xbold_t \\
&- \frac{1}{2}\int_0^T \fbold(\xbold_t,\theta,u_t)^T\cdot \Sigma^{-1}\big(\xbold_t\big)\cdot\fbold(\xbold_t,\theta,u_t)~dt
\end{align*}
\citep[see, e.g.,][]{Rao99}. Since we are interested in choosing $u_t$ to minimize the asymptotic variance of the parameter estimate, we should maximize the associated Fisher information
\begin{equation}
\label{fofi}
I(\theta,u) = E \int_0^T \left|\left| \frac{d}{d\theta} \fbold(\xbold_t,\theta,u_t) \right|\right|^2_{\Sigma(\xbold_t)} dt
\end{equation}
with $||\zbold||^2_{\Sigma} = \zbold^T \Sigma^{-1} \zbold$. Note that the Fisher Information \eqref{fofi} --- and hence the optimal control policy --- in general depends on the parameter $\theta$, which is, of course, unknown. This can be handled by assuming a prior distribution for $\theta$; see Sect.~\ref{sect:Parameter Dependence and Priors}. For the discussion at hand, let us assume for simplicity that a reasonably good initial guess of $\theta$ is available, which we use in computing \eqref{fofi}.

{\em A priori}, the control $u_t$ can be any function of the past history $(\xbold_s:s<t)$ and $t$, i.e., it is a stochastic process adapted to the filtration generated by $\xbold_t~.$ A basic fact of optimal control theory is that because the process $\xbold_t$ is Markovian and the functional to be optimized can be expressed as an integral of the form (\ref{fofi}), the optimal control $u_t$ at time $t$ depends only on the current state $\xbold_t$, rather than the entire past history. That is, there is a function $F:\R^d\times[0,T_{final}]\to{\cal U}$, where $\R^d$ is the state space, $T_{final}$ is the duration of the experiment, and ${\cal U}$ is the set of permissible control values, such that $u_t = F(\xbold_t,t)$ gives the optimal control policy. In other words, the controlled process is also Markovian. Hereafter we will refer to this function $F$ as the {\em optimal control policy}.

Computing the optimal control policy given an equation of the form~(\ref{eq:sde}) is a standard problem in stochastic optimal control. A number of approaches are available. Here, we follow an approach due to Kushner (see, e.g., \citet{Kushner71,Kushner00,Bensoussan} for details), which consists of two main steps: (i) a finite difference approximation is used to discretize the problem in both space and time, thus reducing the problem to one involving a finite-state Markov chain; and (ii) the well-known dynamic programming algorithm is used to construct an optimal control policy for the discretized problem. For conceptual and notational simplicity, we first explain the basic dynamic programming idea --- applicable to both continuous and discrete state spaces --- in the context of a time-discretized version of the diffusion process, deferring details of the spatial discretization to Sect.~\ref{app:details}.
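As a concrete illustration of \eqref{fofi}, the following minimal R sketch estimates $I(\theta,u)$ by Monte Carlo for an assumed scalar example with drift $f(x,\theta,u)=-\theta x+u$ and constant $\Sigma=\sigma^2$ (so that $df/d\theta=-x$), simulating Euler-discretized paths under an arbitrary fixed feedback policy; none of these specific choices come from the paper.
\begin{verbatim}
# Illustrative R sketch: Monte Carlo estimate of the Fisher information
# (eq. fofi) for the assumed scalar drift f(x, theta, u) = -theta*x + u,
# with df/dtheta = -x and constant Sigma = sigma^2.
theta <- 1; sigma <- 0.5; dt <- 0.01; Tfin <- 10; nrep <- 200
policy <- function(x, t) if (x < 0) 1 else -1   # an arbitrary feedback policy

fi_mc <- replicate(nrep, {
  x <- 0; fi <- 0
  for (t in seq(0, Tfin - dt, by = dt)) {
    u  <- policy(x, t)
    fi <- fi + x^2 / sigma^2 * dt               # ||df/dtheta||^2_Sigma dt
    x  <- x + (-theta*x + u)*dt + sigma*sqrt(dt)*rnorm(1)
  }
  fi
})
mean(fi_mc)   # Monte Carlo estimate of I(theta, u)
\end{verbatim}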
\paragraph{Dynamic programming} Let us approximate the diffusion process~(\ref{eq:sde}) at discrete times $t_i = i \dt$, $i = 1,\ldots,T$ via the Euler-Maruyama scheme \citep{kloeden1992numerical}, yielding
\begin{equation}
\xbold_{i+1} = \xbold_i + \fbold(\xbold_i,\theta,u_i)~\dt + \sqrt{\dt}~\Sigma^{1/2}(\xbold_i) \epsilonbold_i
\label{eq:euler}
\end{equation}
with the $\epsilonbold_i$ independent vectors of independent standard normal random variables, and approximate the Fisher Information by
\begin{equation}
\widehat{FI}(\theta) = \sum_{i=1}^TE\left[ \left|\left| \frac{d}{d\theta} \fbold(\Xbold_i,\theta,u_i) \right|\right|^2_{\Sigma(\Xbold_i)}\right]~\dt~.
\label{eq:fi}
\end{equation}
Note that in this section, we use $\Xbold$ to refer to non-realized random variables over which we take expectations, and use $\xbold$ to refer to specific realizations and deterministic values. Our goal is now to choose $u_i$ at each step to maximize $\widehat{FI}(\theta)~$ over all possible choices of $\ubold = (u_0, u_1, \cdots, u_T)$, with the requirement that (i) $u_i\in{\cal U}$ for all $i$, and (ii) $u_i$ is a function of $\xbold_j$ for $j<i$, i.e., $\ubold$ is adapted to the filtration generated by $(\Xbold_0,\cdots,\Xbold_T)$. To this end, we define for each timestep $i$ the {\em Fisher Information To Go} (FITG)
\begin{displaymath}
\widehat{FI}_i(\theta,\xbold) = \sup_{{\ubold}}\left( E_{\mbox{\scriptsize $\xbold$},i}\left[~\sum_{j=i}^T \left|\left|\frac{d}{d\theta} \fbold(\Xbold_j,\theta,u_j)\right|\right|^2_{\Sigma(\mbox{\scriptsize$\xbold$})} ~\right]\right)~,
\end{displaymath}
where $E_{\mbox{\scriptsize$\xbold$},i}(\cdot)$ denotes the conditional expectation over all $\Xbold_j~,j>i$, given $\Xbold_i=\xbold$, and the supremum is taken over all controls $\ubold = (u_i, u_{i+1},\cdots, u_T)$ adapted to $(\xbold, \Xbold_{i+1}, \cdots, \Xbold_T)$. The FITG is the maximal expected FI, given that we start at step $i$ in state $\Xbold_i = \xbold$~. Note that the supremum of the Fisher information $\widehat{FI}(\theta)$ in Eq.~(\ref{eq:fi}) over admissible controls is equal to $E\Big(\widehat{FI}_0(\theta,\Xbold_0)\Big)$, where the expectation averages over all initial conditions. Dynamic programming is based on the observation that we can rewrite the FITG at step $i$ recursively:
\begin{equation}
\widehat{FI}_i(\theta,\xbold) = \max_{u\in{\cal U}}\left( E_{\mbox{\scriptsize $\xbold$},i,u}\left[ \widehat{FI}_{i+1}(\theta,\Xbold_{i+1}) + \left|\left|\frac{d}{d\theta} \fbold(\xbold,\theta,u)\right|\right|^2_{\Sigma(\mbox{\scriptsize$\xbold$})} \right]\right)~,
\label{fitg}
\end{equation}
where $E_{\mbox{\scriptsize$\xbold$},i,u}(\cdot)$ is the conditional expectation given $\Xbold_i=\xbold$ and that we choose the control value $u$ at step $i$~. (Note that the choice of $u$ affects not only the second term inside the expectation, but also the first term, since the choice of $u$ affects $\Xbold_{i+1}$.) This means that if we start at the {\em final} step $i=T$ and work progressively {\em backwards} in time, then at each step $i$, the FITG $\widehat{FI}_{i+1}(\theta,\Xbold_{i+1})$ (the first term inside the expectation) will already be computed. Thus, the ``big'' optimization problem of finding the best $\ubold$ reduces to a sequence of optimizations over ${\cal U}$, which is generally straightforward to do (see below).
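In code, the backward recursion \eqref{fitg} is a pair of nested loops over timesteps and grid states. The R sketch below is illustrative only: it assumes the scalar drift $f(x,\theta,u)=-\theta x+u$ with constant $\sigma^2$ and a small finite control set, and it uses a simple one-step upwind-style discretization with reflecting boundaries rather than the split-operator scheme described in Sect.~\ref{app:details}.
\begin{verbatim}
# Illustrative R sketch: backward dynamic programming for the FITG on a
# regular grid, assuming f(x, theta, u) = -theta*x + u and constant sigma^2.
theta <- 1; sigma <- 0.5; h <- 0.1; dt <- 0.005
xs <- seq(-2, 2, by = h); nx <- length(xs)
U <- c(-1, 0, 1); Tsteps <- 200

step_probs <- function(x, u) {        # stay/up/down transition probabilities
  f  <- -theta*x + u
  pu <- max(f, 0)*dt/h + 0.5*sigma^2*dt/h^2
  pd <- max(-f, 0)*dt/h + 0.5*sigma^2*dt/h^2
  c(1 - pu - pd, pu, pd)
}

FI  <- numeric(nx)                    # FITG at step i+1; zero at i = T
pol <- matrix(NA, Tsteps, nx)         # tabulated policy F(x, i*dt)
for (i in Tsteps:1) {
  FInew <- numeric(nx)
  for (j in 1:nx) {
    up <- min(j + 1, nx); dn <- max(j - 1, 1)    # reflect at the boundary
    vals <- sapply(U, function(u) {
      p <- step_probs(xs[j], u)
      gain <- xs[j]^2 / sigma^2 * dt             # ||df/dtheta||^2_Sigma dt
      gain + p[1]*FI[j] + p[2]*FI[up] + p[3]*FI[dn]
    })
    FInew[j] <- max(vals)
    pol[i, j] <- U[which.max(vals)]
  }
  FI <- FInew
}
\end{verbatim}
At run time the tabulated policy \texttt{pol} is queried at the grid point nearest the current state, in the spirit of \eqref{optcontrol} below.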
The resulting optimal control policy $F$ is
\begin{equation}
F(\xbold, i\dt) = \argmax_{u\in{\cal U}}\left( E_{\mbox{\scriptsize$\xbold$},i,u}\left[ \widehat{FI}_{i+1}(\theta,\Xbold_{i+1}) + \left|\left|\frac{d}{d\theta} \fbold(\xbold,\theta,u)\right|\right|^2_{\Sigma(\mbox{\scriptsize $\xbold$})} \right]\right)~.
\label{optcontrol}
\end{equation}
To compute $F$ on a computer, we can discretize the state space and tabulate $F$ on a discrete grid of $\xbold$'s. In this way, dynamic programming allows us to construct a good approximation of the control policy $F$ for each time step $i$.

\subheading{Optimization over ${\cal U}$.} The optimization of $u_i$ in \eqref{optcontrol} may be done in a number of ways, depending on the allowed set of control values ${\cal U}$~. For example, if ${\cal U}$ is an interval or an open region in $\R^n,$ derivative-based optimization techniques like Newton-Raphson may be useful. In the present paper, we assume the control values are drawn from a relatively small finite set ${\cal U}$; the optimization in Eq.~(\ref{optcontrol}) is thus done by picking the maximum over this finite set via exhaustive search. We remark that in more general situations than what we consider in this paper, one may wish to allow ${\cal U}$ to be an unbounded set like $\R^k~.$ In these situations, the FITG may be maximized at very large or even infinite values of $u_t$~. As an example, consider the univariate Ornstein-Uhlenbeck process with additive control
\[
dx = \left(-\beta x + u\right) dt + \sigma dW_t
\]
for which, at step $T-2$, the objective (the expected information gain, proportional to $E\,\xbold_{T-1}^2$ when $\beta$ is the parameter of interest) satisfies
\[
E_{\xbold_{T-1}|\xbold_{T-2},u_{T-2}}\, \xbold_{T-1}^2 = \left( \left(1 -\dt \beta\right) \xbold_{T-2} + \dt u_{T-2}\right)^2 + \dt \sigma^2,
\]
which is maximized as $u_{T-2} \to \pm \infty$. Aside from the question of mathematical well-posedness of the optimization problem~(\ref{optcontrol}), large values of $|u_t|$ constitute an important practical issue: in experimental set-ups, $u_t$ may correspond to, say, an electrical current, and allowing $|u_t|$ to become too large may have undesirable consequences. In this paper we restrict $u_t$ to take values in a finite set. An alternative way to ensure that solutions to the control problem remain feasible is to add a penalty to the Fisher Information, so that \eqref{optcontrol} has a finite, well-defined solution.

\subsection{Markov chain approximation} \label{app:details}

This section provides technical details of our numerical methods for obtaining the optimal control policy. As noted above, in order to execute the dynamic programming strategy on a computer and tabulate the resulting control policy $F$, it is necessary to discretize the state space. Here we follow a discretization strategy due to Kushner \citep{Kushner71}.
The starting point is the time-$t$ distribution $\rho(t,\cdot)$ of the diffusion $\xbold_t$, which satisfies the forward equation
\begin{equation}
\partial_t\rho(t,\xbold) + {\rm div}\big(\fbold(\xbold, \theta, F(\xbold,t))\rho(t,\xbold) \big) = \frac12 \sum_{i,j} \partial_{x_i}\partial_{x_j} \big(\Sigma_{ij}(\xbold)~\rho(t,\xbold)\big)~,
\label{eq:forward}
\end{equation}
where $F$ is the optimal control policy described above in the limit $\dt\to0~.$ We discretize the optimization problem (\ref{optcontrol}) as follows: (i) we cover the relevant portions of state space with a finite grid of points with spacing $h$; (ii) we then discretize Eq.~(\ref{eq:forward}) in space and time by a finite difference approximation on this grid with a timestep $\dt^h$ depending on $h$; and (iii) we interpret the coefficients of the finite difference approximation as the transition probabilities of a finite-state Markov chain (whose states are the discrete grid points chosen in (i)) to approximate the diffusion process. The dynamic programming algorithm can then be applied directly to this finite state Markov chain in a straightforward manner.

As an example of what we mean by (iii), consider the simple diffusion equation $\partial_tu = \partial_x^2u$. The standard finite difference discretization of this PDE is
\begin{align*}
u(t+\dt,x) &= u(t,x) + \frac{\dt}{h^2}\big(u(t,x+h) + u(t,x-h) - 2u(t,x)\big) \\
&= (1-2\mu)u(t,x) + \mu(u(t,x+h) + u(t,x-h))~,
\end{align*}
where $\mu=\dt/h^2~.$ Thus, if $\mu\in(0,\sfrac{1}{2}]$, we can interpret the equation above as describing a discrete state-space Markov chain whose states are $\{0, \pm h, \pm2h, \cdots\}$ with transition probabilities $P(X_{k+1}=x|X_k=x) = 1-2\mu$ and $P(X_{k+1}=x\pm h|X_k=x) = \mu$~. Note that the condition $0<\mu\leq\sfrac{1}{2}$ imposes an upper bound on the stepsize, specifically $\dt\leq h^2/2$~. Timestep limitations like this are a general feature of finite difference discretization schemes, and a poorly-designed finite difference scheme can lead to a great increase in computational cost.

The main technical problem is thus step (ii), since we need to ensure that the coefficients of the finite difference scheme can be interpreted as probabilities. In our implementation, we employ a ``split operator'' finite difference discretization. Roughly speaking, this amounts to discretizing the drift and diffusion terms in the SDE~(\ref{eq:sde}) separately, then composing the resulting difference schemes. This is a standard technique often used for the numerical solution of partial differential equations \citep[see, e.g.,][]{strikwerda}. In our test examples, the use of such a split operator scheme permits us to use larger grid spacings and timesteps while maintaining numerical stability, and can significantly reduce overall running times.

In detail: first, suppose for simplicity that the state space in Eq.~(\ref{eq:sde}) is $\R$ (generalization to higher dimensions is straightforward, but with more complex notation). For each $h>0$, let ${\cal G}_h$ denote the regular grid $h\Z$ with spacing $h$. For the moment, assume ${\cal G}_h$ extends to all of $\R$; boundary conditions are discussed below. The grid ${\cal G}_h$ will be the state space of our discrete-time approximating Markov chain $\tilde{\xbold}_i$.
To choose time step sizes, we fix a constant $\mu>0$ and an integer $r>0$, and set the timestep to be $\dt^h = \mu h^2.$ Then our approximation can be described as follows:
\begin{quote}
\subheading{Step 1.} Suppose at time $t=i\dt^h$, the chain has state $\tilde{\xbold}_i = sh.$ Then
\begin{itemize}
\item[-] with probability $|f(sh)|\cdot\dt^h/h$, jump to $(s+1)h$ if $f(sh)>0$ and to $(s-1)h$ if $f(sh)<0$; and
\item[-] stay at $sh$ with probability $1-|f(sh)|\cdot\dt^h/h.$
\end{itemize}
\subheading{Step 2.} Let $s'h$ denote the state of the chain after the previous step. Then jump to $(s'\pm r)h$ with probability $\tfrac12\Sigma(s'h)\mu/r^2$, and stay at $s'h$ with probability $1-\Sigma(s'h)\mu/r^2$.
\end{quote}
This discretization scheme treats the drift and diffusion terms in Eq.~(\ref{eq:sde}) separately, hence the name ``split operator'': Step 1 interprets the drift term as a biased jump, and Step 2 interprets the diffusion term as one step of a symmetric random walk. It is straightforward to show that the composite finite difference discretization provides a consistent discretization of Eq.~(\ref{eq:forward}). Both conceptually and in terms of programming, split operator schemes like the one above are easy to work with; however, they also impart an additional variance to the motion of the approximating Markov chain. In the scheme above, this extra variance is on the order of $O(h\dt^h) = O(h^3)$ per step; the relative effect of this extra variance (so-called {\em numerical diffusion}) vanishes as $h\to0$.

Clearly, in order for the various expressions above to be valid transition probabilities, we must have $\mu/r^2<\inf_{x\in\R}\Sigma(x)^{-1}.$ We also need $\sup_x|f(x)|\dt^h/h<1$. Since $\dt^h/h = \mu h$, the second condition can always be achieved by taking $h$ sufficiently small. The first condition, however, places a rather stringent constraint on the timestep $\dt^h.$ The purpose of introducing the ``skip factor'' $r$ is precisely to partially alleviate this constraint, at the cost of losing some accuracy. In practice, going from $r=1$ to $r=2$ can have a large impact on the overall running time. If $r=1$, our scheme closely resembles the ``up-wind'' scheme of Kushner \citep{Kushner71}; a scheme similar to our $r=2$ case is also described there. In practice, rather than choosing $h$ and $\mu$, we usually first choose $h$ and $\dt$, and take $r\propto\Sigma_{max}\sqrt{\mu}$ with a constant of proportionality between 1 and 2, where $\Sigma_{max}$ is the maximum of $\Sigma(x)$ over the domain of interest. This guarantees that the scheme converges (see below). We usually use relatively coarse grids, as (i) the structure of the optimal control policy is not too fine, and (ii) we usually operate in the presence of observation and dynamical noise, so there is not much point trying to pin down fine structures in the control policy.

\subheading{Convergence.} In general, a sufficient condition for the Markov chain approximation method to be valid is that the approximating chain $\tilde{\xbold}_i$ satisfy
\begin{align*}
E_i(\tilde{\xbold}_{i+1} - \tilde{\xbold}_{i}) &= \fbold(\tilde{\xbold}_i, u_i)\dt + O(h^\alpha\dt)\\
\var_i(\tilde{\xbold}_{i+1} - \tilde{\xbold}_{i}) &= \Sigma(\tilde{\xbold}_i)\dt + O(h^\alpha\dt)\\
|\tilde{\xbold}_{i+1} - \tilde{\xbold}_{i}| &= O(h)
\end{align*}
\citep{Kushner71}. In the above, $E_i(\cdot)$ denotes conditional expectation given all information up to step $i$, $\var_i(\cdot)$ denotes the corresponding conditional covariance matrix, and $\alpha>0$ is a constant.
Note that the last line should be interpreted to hold surely, i.e., it says the discrete Markov chain makes jumps of $O(h)$ in size. If the conditions above are satisfied, then as $h\to0$, the optimal control policy for the approximating chain will converge to an optimal control policy for the diffusion~(\ref{eq:sde}). It is easy to see that our scheme satisfies the convergence criteria above with $\alpha=1.$

\subheading{Boundary conditions.} In practice, ${\cal G}_h$ must be a finite grid. We assume that it spans a subset of $\R^d$ sufficiently large that, on the timescale of interest, trajectories have very low probability of exiting. We then impose that when the approximating Markov chain attempts to exit the domain, it is forced to stay at its current state. This gives a simple way to obtain a Markov chain on bounded grids.

\subheading{Higher dimensions.} The generalization of the preceding scheme to higher dimensions is straightforward: we simply treat each dimension separately and independently.

\subsection{Parameter dependence and priors} \label{sect:Parameter Dependence and Priors}

Throughout the discussion above, we have constructed the Fisher Information $I(\theta,u)$ and the resulting control policy $F(\xbold,t)$ assuming a specific $\theta$. In general, $\theta$ will not be known --- there would otherwise be little point in the experiment --- and both the Fisher Information and the optimal control policy may depend on its value. We address this issue by constructing a prior $\pi(\theta)$ over plausible values of $\theta$ and maximizing $E_{\theta}(I(\theta,u))$. The choice of this prior is important: the dynamics of $\xbold_t$ may depend on the value of $\theta$, and the computed control policy may be ineffective if the value of $\theta$ assumed in computing the optimal control policy is very different from the true value. This averaging is easy to implement numerically: a grid is chosen over the relevant region of parameter space, then the FI \eqref{fofi} is averaged over this grid of $\theta$ (a weighted average can be used to implement a non-uniform prior). It is easy to check that all the algorithms described in Sect.~\ref{sec:ControlTheory}, as well as those described below, apply in a straightforward way in this setting. Averaging Fisher Information over a prior in this way corresponds to the use of a quadratic loss function in a Bayesian design setting as in \citet{ChalonerLarntz89}. We have employed this here as we wish to minimize the variance of the estimated parameter. A Bayesian D-optimal design, corresponding to averaging the log Fisher Information (or its log determinant if $\theta$ is multivariate), can be employed following the same methods.

We note that while averaging the objective over possible values of the parameter is a natural procedure in some situations, it may not always be appropriate. When an estimate of the parameters can be obtained and updated as the experiment progresses, it may be preferable to use the control policy associated with these on-line estimates, e.g., by pre-computing a set of control policies (for a grid of parameter values) and using the current best parameter estimate to choose a control policy. Alternatively, as the experiment progresses, Fisher Information could be averaged over a posterior for the parameters.
These ideas are explored in \citet{ThorbergssonHooker} in the context of discrete-state Markov models; such on-line strategies are more complicated to implement for the types of diffusion processes discussed here, and will be examined in a future publication.
\section{Partial Observations} \label{sec:Filtering}
The algorithms discussed in Sect.~\ref{sec:ControlTheory} are appropriate for a model in which the entire state vector $\xbold_t$ is observed essentially continuously in time, without error. This is rarely the case in practice. In neural recording models, only membrane voltages can be measured, leaving $w_t$ latent, although voltage measurements have both high accuracy and high frequency. In the chemostat system described in Sect.~\ref{sec:chemostat}, algae are measured by placing a slide with a sample from the ecology in a particle counter; these measurements are thus subject to sampling and counting errors, and can be taken at most every few hours. Nitrogen can also be measured, but with less accuracy and greater expense. In most applications, not all state variables will be measured; moreover, measurements will be taken at discrete times and will likely be corrupted by measurement errors.
The observation process causes two problems for the strategy outlined in Sect.~\ref{sect:Full Observations}: first, since not all state variables are being measured, the Fisher information in Eq.~(\ref{eq:fi}) is no longer the correct expression for the asymptotic estimator variance; and second, the dynamic programming methodology outlined in Sect.~\ref{sec:ControlTheory} is no longer applicable. These difficulties associated with {\em partially observed systems} are of a fundamental nature: projections of Markov processes are typically not Markovian, and the Markov property is essential to the dynamic programming algorithm (see Sect.~\ref{sect:Full Observations} and references). Moreover, while one can derive an explicit expression for the Fisher information of partially observed diffusion processes, the functional no longer has the form of a time-integral of a function of the state vector (see \cite{Louis82,ThorbergssonHooker}), making it much more difficult to work with.
In this section, we propose a simple approximation strategy aimed at overcoming these difficulties. We also comment on our implementation of maximum likelihood estimation, which we use for estimating $\theta$, and provide some theoretical justification for the approximation strategy in the context of linear systems. We assume throughout this section that the dynamics has been time-discretized (see Sect.~\ref{sect:Full Observations}) to yield a sequence of state vectors $\xbold_0, \xbold_1, \cdots, \xbold_T$, and that every $\tau$ steps a noisy observation is made, yielding the observation vectors $\Ybold = (\ybold_1,\ldots,\ybold_{n})$, where $n=T/\tau$ is the number of observations. Furthermore, we assume that a noise model $p(\ybold|\xbold)$ is given. The assumption of periodic observations is for simplicity only and can be easily relaxed.
\subsection{Filtering and estimation for partially observed systems} \label{sect:Filtering and estimation for partially observed systems}
Our strategy for dealing with partially observed systems entails the following steps: \begin{enumerate} \item[(i)] We solve the full observation problem by finding the control policy $F$ that maximizes the full-observation Fisher Information (FOFI) given by \eqref{fofi}.
\item[(ii)] During the experiment, filtering techniques are applied to provide real-time estimates of the system state, which are then plugged into $F$ to obtain a control value. \end{enumerate}
While this strategy is not expected to be optimal in general, because the control values are determined by the current state estimate rather than the entire past history, we argue that it nevertheless provides an efficient control strategy for problems where accurate filtering is feasible, i.e., when the conditional variance of the process given the observations is small. Step (i) above follows exactly the methodology from Sect.~\ref{sec:ControlTheory}. The main difference between the partial and full observation cases is the need for filtering.
For conceptual and implementation simplicity, in this paper we use {\it particle filters} \citep[see, e.g.,][]{liu}. Specifically, we discretize the diffusion process using the standard Euler-Maruyama approximation (\ref{eq:euler}), and incorporate observations via Bayes's formula. In all the examples considered in this paper, the observations are linear functions of the state vector plus Gaussian error. This allows us to avoid weighting and resampling in the particle filter. Instead, we incorporate the observations within the Euler-Maruyama scheme \eqref{eq:euler} by first applying the drift $\fbold(\Xbold_i,\theta,u_i)~\dt$ and then sampling the Gaussian disturbance $\sqrt{\dt}~\Sigma^{1/2}(\Xbold_i) \epsilonbold_i$ conditional on the observation at time $i+1$. When multiple steps are taken between observations, this update is applied for the final step. In practical terms, our particle filters are simpler and more robust than bootstrap filters, and do not differ significantly from other popular filtering schemes, e.g., the ensemble Kalman filter. One additional implementation detail: when observations are infrequent, we simply hold the control value constant between observations. In principle, we could also use the past information to extrapolate the state trajectory between observations, and use a time-dependent control. But for the examples considered in this paper, the simpler strategy appears sufficient; it is also more realistically applicable in some experiments.
\paragraph{Maximum likelihood estimates} Another advantage of using particle filters is that we can easily evaluate the likelihood as a by-product. Let $I$ be the relevant parameter interval (see Sect.~\ref{sect:Parameter Dependence and Priors}), and for simplicity assume a flat prior over $I~.$ We discretize $I$ into a grid $I_M$ of size $M$. For each putative parameter value in $I_M,$ we run a particle filter and compute an estimate of the log-likelihood using the formula \citep{Ionides06} \[ \mbox{log likelihood} = \sum_{k=1}^{n}\log\big(p(\ybold_k|\ybold_1,\cdots,\ybold_{k-1})\big) \approx \sum_{k=1}^{n}\log\left(\frac1N\sum_{i=1}^Np\big(\ybold_k\big|\xbold^{(i)}_k\big)\right) \] where $\{\xbold^{(i)}_k:i=1,\cdots,N\}$ denotes the ensemble of particles at step $k$. If the computed log-likelihood curve takes on a maximum in the interior of the interval $I,$ we do a standard 3-point quadratic interpolation around the maximum and find the maximizer; we use this as the MLE. On the rare occasions when the log-likelihood curve does not have a maximum inside the interval $I,$ we use the corresponding endpoint as the estimate (but record the estimate as being ``out of range'').
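To make the estimation procedure concrete, the following schematic sketch (our own, not the authors' code) implements the particle-filter log-likelihood estimate above, together with the grid search and 3-point quadratic refinement; it returns the refined estimate together with an in-range flag. For brevity it uses a generic weighted filter with multinomial resampling rather than the conditional-sampling variant described above, assumes a uniform parameter grid, and leaves the propagation step \texttt{step} and observation density \texttt{obs\_logpdf} as hypothetical placeholders.
\begin{verbatim}
import numpy as np

def pf_loglik(theta, y_obs, step, obs_logpdf, x0, N, rng):
    """Estimate sum_k log( (1/N) sum_i p(y_k | x_k^(i)) )."""
    particles = np.repeat(np.atleast_2d(x0), N, axis=0)
    ll = 0.0
    for y in y_obs:
        particles = step(particles, theta, rng)      # propagate to next obs
        w = np.exp(obs_logpdf(y, particles))         # p(y_k | x_k^(i))
        ll += np.log(w.mean())
        idx = rng.choice(N, size=N, p=w / w.sum())   # multinomial resample
        particles = particles[idx]
    return ll

def grid_mle(grid, loglik):
    """Grid search plus 3-point quadratic refinement of the maximizer."""
    ll = np.array([loglik(th) for th in grid])
    j = int(np.argmax(ll))
    if 0 < j < len(grid) - 1:                        # interior maximum
        a, b, c = ll[j - 1], ll[j], ll[j + 1]
        dx = grid[1] - grid[0]                       # uniform grid spacing
        return grid[j] + 0.5 * dx * (a - c) / (a - 2*b + c), True
    return grid[j], False                            # record as out of range
\end{verbatim}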
One further ``trick'' can be used to improve the performance of the grid-based MLE: for each simulation experiment, corresponding ``particles'' for different parameter values are driven using the same sequences of Gaussian random numbers. That is, if the $i$th particle in the ensemble for the $j$th parameter value is in state $\xbold^{i,j}_k$ at timestep $k$, then the particle positions $\xbold^{i,1}_{k+1}, \cdots, \xbold^{i,M}_{k+1}$ at the next timestep (where $M$ is the number of points in the parameter grid) are generated from the current positions $\xbold^{i,1}_{k}, \cdots, \xbold^{i,M}_{k}$ using the {\em same} Gaussian random numbers. (To ensure correct sampling, the particles $\xbold^{1,j}_t, \cdots, \xbold^{N,j}_t$ for fixed $j$ still receive independent Gaussian random numbers.) This {\em same-noise coupling} or {\em method of same paths} reduces the variance of the resulting estimates \citep{asmussen-glynn}, and produces smooth likelihood curves.
For comparison purposes, we also study the full observation problem in some of the examples, i.e., we assume the full state vector is continuously and noiselessly observed. For such problems, the log-likelihood associated with a particular sample path is straightforward to calculate directly, and filtering is not needed.
\subsection{Approximation Methods in Linear Systems}
We now provide a partial theoretical justification of our approximation strategy. Beginning with the approximation to the Fisher information: within a partially observed system, the complete-data log-likelihood can be written as \[ l(\theta|\xbold,\Ybold) = \sum_{i=1}^n \log p(\ybold_i|\xbold_i) + l(\theta|\xbold) \] assuming that $\theta$ does not appear in the observation process. From the standard decomposition of the observed Fisher Information \begin{align*} I(\theta|\Ybold) & = -E_{\xbold|\Ybold} \frac{d^2}{d \theta^2} l(\theta|\xbold,\Ybold) - \mbox{var}_{\xbold|\Ybold} \left[\frac{d}{d \theta} l(\theta|\xbold,\Ybold)\right] \end{align*} the (expected) Fisher Information for the partially observed diffusion process can be written as \begin{align*} I_Y(\theta,u) =& E_{\xbold} \int \left| \left|\frac{d}{d \theta} \fbold(\xbold_t,\theta,u_t)\right| \right|_{\Sigma(\xbold_t)}^2 dt \\&- E_Y \mbox{var}_{\xbold|\Ybold} \left[ \int\frac{d}{d \theta} \fbold(\xbold_t,\theta,u_t) \Sigma(\xbold_t)^{-1} \left(d\xbold_t - \fbold(\xbold_t,\theta,u_t)dt\right) \right] \end{align*} \citep[see][]{Louis82}, where we observe that the first term is the objective of our dynamic program. Note that the second term shrinks with $\mbox{var}(d\xbold|\Ybold)$, assuming that $d\fbold/d\theta$ is continuous in $\xbold$.
The argument above indicates that, provided the conditional variance of $d\xbold$ given observations $Y$ is small, maximizing FOFI will provide improved information about $\theta$. It remains necessary to demonstrate that employing the filtered estimate will approximately maximize FOFI. For linear systems, $d\xbold_t = (A\xbold_t + Bu_t)dt + dW_t$, with a quadratic objective function $\int \xbold_t^T C(t) \xbold_t~dt$, the Separation Theorem guarantees that this strategy is, in fact, optimal \citep[see][]{Kushner71}. This need not be the case for nonlinear systems or non-quadratic objectives. Extensions of the separation theorem have been developed in \citet{KilicaslanBanks09}, based on successive approximations of \eqref{eq:sde} in terms of a linear system with time-varying parameters.
Particle filter methods can also be employed to average the future cost over the current distribution of the estimated state variables \citep{ADST03}. Both of these schemes require re-running the dynamic program at each time point; this will not always be computationally feasible. In contrast, the approach of merely using the filtered estimate of the state allows the map from state variables to controls to be pre-computed. This was employed in \citet{BotchuUngarala07}, for example, and we demonstrate here that it is still helpful for estimating parameters. As above, if the system is such that a bound can be placed on $\left|\left|\xbold^* - \xbold\right|\right|$, where $\xbold^*$ is the filtered estimate, then under suitable regularity conditions this procedure will yield approximately optimal results. The detailed conditions for this to hold will depend on context-specific factors, such as whether the control policy takes discrete or continuous values, and we do not attempt to fill these in here.
\section{An illustrative example} \label{sect:kramers}
To illustrate our method, we consider a small particle trapped in a double-well potential (see Fig.~\ref{fig:kramers-pot}(a)). This is a commonly-used model in statistical and chemical physics \citep{van-kampen,hanggi1990reaction}; we use it here to illustrate our methodology, and to indicate when it may be particularly effective. The governing equation is \begin{equation} dx_t = \left(-V_A'(x_t) + u_t\right)dt + \sigma~dW_t~, \label{eq:langevin} \end{equation} where $x_t\in\R$ is the position of the particle and the potential $V_A$ is given by \begin{equation} V_A(x) = x^4 - 2x^2 + A e^{-(x/w)^2/2}~. \label{eq:double-well} \end{equation} Here, we study the model with $w=0.3$, $\sigma=0.1$, and $A\approx 3.8$, which we randomly picked from an interval centered around $A=4$~. In this regime, the unforced system (i.e., with $u_t\equiv 0$) exhibits dynamics on two separate timescales: on the shorter timescale, the particle position fluctuates about the bottom of one of the wells due to thermal fluctuation; on longer timescales, on the order of $e^{2(\Delta V)/\sigma^2}$ where $\Delta V = A+1$ is the depth of the well, the system jumps between the two potential wells \citep{van-kampen,freidlin-wentzell}. A sample path is shown in Fig.~\ref{fig:kramers-pot}(c) (bottom, dark black curve).
\begin{figure} \begin{center} \begin{tabular}{ccc} \resizebox{1.8in}{1.25in}{\includegraphics[bb=0in 0in 4in 3in]{kramers-pot.eps}}&& \resizebox{2.2in}{1.25in}{\includegraphics[bb=0in 0in 4in 3in]{kramers-control-t0.eps}}\\ (a) The potential $V(x)$ && (b) Optimal control at $t=0$\\ \end{tabular}\\[4ex] \resizebox{4.5in}{1.25in}{\includegraphics[bb=0in 0in 5in 2in]{kramers-paths.eps}}\\ (c) Controlled and uncontrolled sample paths \end{center} \caption{Particle in a double-well potential. In (a), the potential (\ref{eq:double-well}) is shown. Panel (b) shows a plot of the computed optimal control policy, as a function of position $x$, for $t=0$ and $T=30$. Panel (c) compares the controlled (lighter, blue curve) and uncontrolled (darker, black curve) sample paths.} \label{fig:kramers-pot} \end{figure}
Models of this type are paradigms for physical systems with multiple metastable states, and the calculation of transition rates between metastable states (due to large deviations in the driving white noise process) is of interest in, e.g., reaction rate theory. By extension, the heights of the relevant potential barriers are also of interest.
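Before turning to estimation, we note that the dynamics~(\ref{eq:langevin}) are easy to simulate. The following minimal Euler--Maruyama sketch is our own illustration, using the parameter values quoted above and zero control; it also accumulates the FOFI integrand $(\partial f/\partial A)^2/\sigma^2$ along the path, which is the quantity the dynamic program seeks to maximize in expectation.
\begin{verbatim}
import numpy as np

A, w, sigma = 3.8, 0.3, 0.1            # parameter values from the text
dt, T = 0.01, 30.0

def Vprime(x):                         # V_A'(x) for Eq. (eq:double-well)
    return 4*x**3 - 4*x - A * (x / w**2) * np.exp(-(x / w)**2 / 2)

def dfdA(x):                           # sensitivity of the drift to A
    return (x / w**2) * np.exp(-(x / w)**2 / 2)

rng = np.random.default_rng(0)
n = int(T / dt)
x = np.empty(n)
x[0] = 1.0                             # start in the right-hand well
for i in range(n - 1):                 # Euler-Maruyama, zero control
    x[i+1] = (x[i] - Vprime(x[i]) * dt
              + sigma * np.sqrt(dt) * rng.standard_normal())

# accumulated full-observation Fisher information about A along the path
FI = np.sum(dfdA(x)**2 / sigma**2) * dt
\end{verbatim}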
Our goal here is to estimate the barrier height $A$ in Eq.~(\ref{eq:double-well}). For this 1D model, it is straightforward to solve the optimal control problem. Specifically, we discretize the interval $(-5,5)$ (with high probability the position $x_t$ stays within this range) using 100 grid points. The control values $u_t$ are drawn from the finite set $\{0, \pm2, \pm4, \cdots, \pm10\}$. We also assume a uniform prior for $A$ on the interval $[2,5]$, which we discretize into 10 parameter values. The diffusion process~(\ref{eq:langevin}) is discretized in time with a timestep of $\dt=0.01$; this is sufficiently small to satisfy the criteria set forth in Sect.~\ref{app:details}. An optimal control policy is computed over the time interval $t\in[0,T]$ for various values of $T$. The computed control for $t\ll T$ stabilizes rather quickly as $T$ increases; the control at $t=0$ is shown in Fig.~\ref{fig:kramers-pot}(b). Not surprisingly, the control encourages more frequent jumps by pushing left when the particle is in the right well, and vice versa.
To see the control policy ``in action,'' we carry out simulations of the controlled diffusion process for $T=4$ and for $T=30$. For each choice of $T$, we carry out 256 independent trials, and use the results to estimate the barrier height $A$. The diffusion process is observed every 0.25 units of time; at observation times the system state is estimated, and the control value updated. The observations are assumed to have additive, Gaussian observation noise of standard deviation 0.05. Because of the observation noise, even though this is a 1D model (and so the full state ``vector'' is observed), filtering is still needed. Here, we use a particle filter with $10,000$ particles; far fewer particles would have sufficed for the controlled process, but the uncontrolled process needed more particles to obtain a reasonable parameter estimate. A sample trajectory subjected to the optimal control is shown in Fig.~\ref{fig:kramers-pot}(c) (lighter / blue curve).
\begin{table} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} Duration & Control & N & In range & Mean & Bias & Std.~Dev. & Std.~Dev.~Err. \\\hline\hline 4 & Dynamic & 256 & 100\% & 4.233 & 0.3933 & 0.05947 & 0.002628 \\ 4 & 0 & 256 & 98\% & 4.242 & 0.4016 & 0.3133 & 0.01385 \\ 30 & Dynamic & 256 & 100\% & 4.225 & 0.3854 & 0.02098 & 9.272e-4 \\ 30 & 0 & 256 & 99.6\% & 4.223 & 0.3832 & 0.2888 & 0.01277 \\ \end{tabular}\\[2ex] (a) Continuous noise-free observations\\[4ex] \begin{tabular}{c|c|c|c|c|c|c|c} Duration & Control & N & In range & Mean & Bias & Std.~Dev. & Std.~Dev.~Err \\\hline\hline 4 & Dynamic & 256 & 100\% & 4.225 & 0.3846 & 0.1094 & 0.004833 \\ 4 & 0 & 256 & 77\% & 4.202 & 0.3623 & 0.6048 & 0.02673 \\ 30 & Dynamic & 256 & 100\% & 4.197 & 0.357 & 0.04881 & 0.002157 \\ 30 & 0 & 256 & 71\% & 4.316 & 0.4763 & 0.5953 & 0.02631 \\ \end{tabular}\\[2ex] (b) Infrequent, noisy observations \end{center} \caption{Comparison of energy barrier height estimates using controlled and uncontrolled diffusions.} \label{tab:kramers} \end{table}
Table~\ref{tab:kramers} shows the results of these trials. In Table~\ref{tab:kramers}(a), results are shown for full observation trials, in which the system is observed at every timestep, i.e., observation period = $\dt=0.01$~.
In addition to the controlled diffusion process, we also computed $A$ estimates for the diffusion process with a range of constant control values $u\in{\cal U}$, and found that $u=0$ gives minimum-variance estimates among the control values tested. As can be seen, the standard deviation of the estimate based on the controlled diffusion is significantly smaller for both $T=4$ and $T=30$ --- for $T=4$ by a factor of $\approx5,$ and for $T=30$ by a factor of $\approx14.$ Table~\ref{tab:kramers}(b) shows the corresponding results for the case of noisy-and-infrequent observations. For $T=4$, we obtain roughly a factor of 6 reduction in standard deviation, while for $T=30$, roughly a factor of 11. The reduction in the estimator variance is significant, though as expected somewhat less than in the full observation case, at least over longer timescales. More telling is the fraction of estimates that were ``in-range'' (see the discussion of MLE in Sect.~\ref{sec:Filtering}): the controlled diffusion process always produced estimates that were within the prior parameter range, whereas the uncontrolled diffusion produced a nontrivial number of parameter estimates outside the range.
The reason that the optimal control strategy is particularly effective in this model is that while the barrier height $A$ has a significant impact on the dynamics of the system over long timescales, it only impacts the dynamics in a small part of state space, and on shorter timescales it has relatively little effect. The optimal control policy is able to drive the system into crossing the barrier much more frequently, thereby yielding more information about $A$. We expect that, in general, our method will be particularly effective in situations like this.
\section{Morris-Lecar (ML) Neuron Model} \label{sec:neuro}
The rest of the paper consists of applications of the optimal control methodology to more complex models with different types of dynamics, with the goal of examining how the type of dynamics can affect the efficacy of parameter estimation in the presence of dynamic control. The first is the Morris-Lecar (ML) neuron model mentioned in the Introduction. This model is often used to describe membrane voltage oscillations and other properties of neuronal dynamics; it captures how membrane voltage and ion channels interact to generate electrical impulses, or ``spikes,'' which are the primary means of neuronal information transmission. The ML model is planar, and hence more amenable to analysis than higher-dimensional models like the Hodgkin-Huxley equations \citep{HodgkinHuxley52}. At the same time, it faithfully captures certain important aspects of neuronal dynamics \citep{ermentrout-rinzel}, making it a commonly-used model in computational neuroscience. It has a rich bifurcation structure, and exhibits two dramatically different timescales. See \citet{termantrout} and \citet{ermentrout-rinzel} for a derivation and description of this model and its behavior.
The ML model is given by \eqref{eq:ML}. To provide further details, the second equation can be interpreted as the master equation for a two-state Markov chain describing the opening and closing of ion channels. The terms with $\beta_v$ and $\beta_w\gamma(v,w)$ are independent noise terms modeling different sources of noise: the $\beta_v$ term models voltage fluctuations, and is simply additive white noise.
The $\beta_w\gamma(v,w)$ term models random fluctuations in the number of open ion channels due to finite-size effects; here we use the function \begin{equation} \gamma(v,w) = \sqrt{\frac{\varphi}{\tau_w(v)} \Big(w_\infty(v)\cdot(1-2w) + w\Big)}, \end{equation} to scale the Wiener process. This function arises from an underlying Markov chain model; see, e.g., \citet{kurtz71,kurtz81,smith}.
Here, we assume a typical experimental set-up in which an electrode (a ``dynamic clamp'') is attached to the neuron, through which an experimenter can inject a current $I_t$ and measure the resulting voltage $v_t$; the gating variable $w_t$ is not directly observable. The electrode is usually attached directly to a computer, which records the measured voltage trace and generates the time-dependent injected current, making this type of experiment a natural candidate for our method. Neuronal membrane voltages are measured with very high precision (with signal-to-noise ratios of 1000 to 1 or more). On the other hand, dynamical events of interest can take place on timescales of milliseconds or less. So for this application it is vital that one pre-computes as much as possible --- there may not be enough time for extensive on-line computation, though there may be sufficient time for an efficiently-programmed state estimator. These speed requirements may necessitate the use of computationally cheaper --- but less accurate --- state estimation techniques, e.g., the ensemble Kalman filter. Since we are interested in evaluating the utility of dynamic control via simulations, we continue to use particle filters here.
We focus on a parameter regime in which the noiseless system can switch between having a globally-attracting fixed point and an unstable fixed point surrounded by a stable limit cycle (Fig.~\ref{fig:ml-hopf-portrait}(a) illustrates the latter). The limit cycle represents a ``tonically'' (periodically) spiking neuron; the effect of noise is to ``smear out'' this limit cycle. The precise parameters we use come from \citet{termantrout}; they are summarized in Appendix~\ref{app:ml}. The key parameter here is the injected DC current; in Fig.~\ref{fig:ml-hopf-portrait}(a) this is $I\equiv 100.0$. As one decreases $I$ from this value, the system undergoes a so-called ``subcritical Hopf bifurcation'': the unstable fixed point becomes stable, and at the same time an unstable periodic orbit emerges around the fixed point \citep{termantrout}. Post-bifurcation, the system is bistable: while tonic spiking continues to be viable, a quiescent state (corresponding to the stable fixed point) has emerged. If we decrease $I$ even further, the stable and unstable cycles collide in a saddle-node bifurcation, leaving behind a single stable fixed point.
In this example, our control is $u_t = I_t / C_m.$ The control values we use are $u\in\{0, 3.5, 5\},$ corresponding to $I\in\{0, 70, 100\}$ (we take $C_m=20$). The three values of the injected current place the system in the stable fixed point, bistable, and limit cycle regimes, respectively.
\begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[bb=0in 0in 3.5in 3.5in,scale=0.6]{ml-portrait.eps}& \includegraphics[bb=0in 0in 3.5in 3.5in,scale=0.6]{ml-traces.eps}\\ (a) Noiseless ML phase portrait & (b) Controlled and uncontrolled trajectories\\ \end{tabular} \end{center} \caption{ML phase portrait of a stimulated neuron. Time is measured in ms, voltage in mV, current in pA. In (a), the phase portrait of the noiseless ML system is shown.
A stable limit cycle surrounds an unstable cycle, which in turn surrounds a sink. In (b), two trajectories for the noisy system are shown: the dotted curve is a trajectory without control, while the solid curve is a controlled trajectory.} \label{fig:ml-hopf-portrait} \end{figure}
\paragraph{Simulation results} We have performed simulations in which we estimated the calcium conductance constant $g_{Ca}$ from simulated data; this parameter is chosen over others because it gives rise to a more complex control policy (as we show below). In detail, we assume a flat prior for $g_{Ca}$ on the interval $[4,5]$; in all experiments reported below, we fix a randomly-generated $g_{Ca}$ value of 4.415, which we take to be the ``true'' value. The optimal control is computed using a timestep of $\approx 2$ ms, and we assume that measurements are available at every time step. State space is discretized by cutting the region $[-80,80]\times[0,1]$ into $72\times72$ bins. Unless otherwise stated, ``constant control'' means $u_t\equiv 5$.
\begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=2in]{ml-fi.eps} & \includegraphics[height=2in]{ml-longcntl.eps} \\ (a) & (b) \\ \end{tabular} \end{center} \caption{Control policy for $g_{Ca}$. In (a), the FI is shown. In (b), the control policy.} \label{fig:ml-gca-control} \end{figure}
Fig.~\ref{fig:ml-hopf-portrait}(b) shows examples of controlled and uncontrolled (with $u=5$, or $I=100$) trajectories: the perturbations are measurable, but not large. Simulation results for estimates of $g_{Ca}$ using trajectories of duration $T=1000$ ms are given in Table~\ref{mlsim:tab1}. Here, there is a measurable bias in the estimates, which one can correct by simulation.
\begin{table} \begin{center} \begin{tabular}{c|c|l|l|l} Observation & Control & Mean & Bias & Std. Dev. \\\hline Full & Dynamic & 4.416 & 6.315e-4 & 0.01106 \\ Full & 5.0 & 4.416 & 8.82e-4 & 0.01405 \\ Noisy partial & Dynamic & 4.413 & -0.002062 & 0.01579 \\ Noisy partial & 5.0 & 4.416 & 0.001458 & 0.01847 \\ \end{tabular} \end{center} \caption{Simulation results for the Morris-Lecar example. ``Full observation'' runs use exact information from the entire state-space trajectory, while ``Noisy partial'' runs correspond to only observing $v_t$ corrupted by observational noise. Here, $T=1000$.} \label{mlsim:tab1} \end{table}
For comparison, Table~\ref{mlsim:tab2} shows the results using different values of constant control. Here, dynamic control outperforms constant control in reducing estimator variance. To better understand why, we examine the structure of the control policy and the Fisher information. Fig.~\ref{fig:ml-gca-control}(a) shows the Fisher information function for $g_{Ca}$, which is easily shown to be proportional to \begin{displaymath} \big[m_\infty(v_t)~(v_t-E_{Ca})\big]^2~; \end{displaymath} the controlled trajectory is superposed. From the geometry, one expects that the optimal control should either find a way to keep the trajectory at a constant voltage or, failing that, try to increase the firing rate, so that trajectories cross the information-rich region as much as possible. As can be seen, the controlled trajectory does exactly that: starting at the resting voltage (around $-80$ mV), it jumps toward 0 mV, and runs along the ``information-rich'' strip around $v=20$ mV before dipping back toward the resting voltage. In comparison, the uncontrolled trajectory in Fig.~\ref{fig:ml-hopf-portrait}(b) appears to spend less time in this region.
Fig.~\ref{fig:ml-gca-control}(b) shows how the optimal control policy accomplishes this by suitably pushing the trajectory at critical parts of its cycle, thus increasing the overall firing rate.
\begin{table} \begin{center} \begin{tabular}{c|c|l|l|l} Observation & Constant control & Mean & Bias & Std. Dev. \\\hline Noisy partial & 0.0 & 4.413 & -0.001552 & 0.06893 \\ Noisy partial & 3.5 & 4.416 & 5.43e-4 & 0.05685 \\ Noisy partial & 5.0 & 4.416 & 0.001458 & 0.01847 \\ \end{tabular} \end{center} \caption{Simulation results for the Morris-Lecar example with constant controls.} \label{mlsim:tab2} \end{table}
\begin{figure} \begin{center} \includegraphics*[height=1in]{ml-vtrace.eps}\\[1ex] \includegraphics[height=1in]{ml-wtrace.eps}\\[-0.5ex] \end{center} \caption{Trajectories of the Morris-Lecar model under constant control (dashed) and optimal control (solid).} \label{fig:ml-traces} \end{figure}
To see more clearly the effect of the dynamic control, we plot time traces of the controlled and uncontrolled trajectories in Fig.~\ref{fig:ml-traces}. As can be seen, the dynamic control not only tries to push the trajectory into the information-rich region, it also makes sure the trajectory spends more time there.
Some final remarks on the structure of the optimal control: \begin{enumerate} \item We note that the optimal control depends a great deal on the parameter being estimated. If one were to try to estimate $C_m$, for example, the resulting control policy would be essentially constant over most of phase space. \item In this dynamic regime, with FI as in Fig.~\ref{fig:ml-gca-control}(a) (a unimodal function of $v$), it is natural to expect that increasing the firing rate leads to an increase in the FI. This can be accomplished either by dynamic control or by static control. In more general situations, e.g., neuron models with additional currents and exhibiting dynamics on multiple timescales \citep{dayan2001theoretical}, we expect dynamic control to yield a greater gain in FI. However, since such models usually entail $>3$ degrees of freedom, they would be difficult to study using present numerical methods, and we leave the study of such models for future work. \end{enumerate}
\section{Chemostat Growth Models} \label{sec:chemostat}
Our second example comes from experimental ecology. In these experiments, a glass tank or chemostat is inoculated with a population of algae. The tank is bubbled to ensure that the contents are mixed, and to prevent oxygen deprivation. A nitrogen-rich medium is continuously injected while the contents of the tank are evacuated at the same rate as the injection. Algae are assumed to consume nitrogen in proportion to their population size and nitrogen concentration, until the population size stabilizes due to saturation.
For ecological models, a set of stochastic differential equations can be proposed with drift terms \begin{align} dN_t & = \left(\delta(t) \eta_I(t) - \frac{ \rho C_t N_t }{ \kappa + N_t} - \delta(t)N_t\right)dt \label{eqn:chemo} \\ dC_t & = \left(\frac{ \chi \rho C_t N_t }{ \kappa + N_t } - \delta(t) C_t\right)dt \nonumber \end{align} where the model has a mechanistic interpretation for an infinite population given in terms of: \begin{description} \item[$N_t$] ($\mu$ mol/l) represents the nitrogen concentration in the chemostat \item[$C_t$] ($10^9$ cells per liter) gives the relative algal density \item[$\delta(t)$] (percentage per day) is the dilution rate, i.e., the rate at which medium is injected and the chemostat evacuated \item[$\eta_I(t)$] ($\mu$ mol/l) is the nitrogen concentration in the medium \item[$\rho$] ($\mu$ mol/$10^9$ cells) is the rate of algal consumption of nitrogen \item[$\kappa$] ($\mu$ mol/l) is a half-saturation constant indicating the value of $N$ at which $N/(\kappa + N)$ is half-way to its asymptote \item[$\chi$] ($10^9$ cells/$\mu$ mol) is the algal conversion rate, i.e., how fast algae turn consumed nitrogen into new algae. \end{description}
This system is made stochastic by multiplicative log-normal noise. This is equivalent to a diffusion process on $N^*_t = \log(N_t)$ and $C^*_t = \log(C_t)$ with additive noise: \begin{align*} dN^*_t & = \left(\delta(t) \eta_I(t) - \frac{ \rho e^{C^*_t+N^*_t} }{ \kappa + e^{N^*_t}} - \delta(t)e^{N^*_t}\right)e^{-N^*_t}dt + \sigma_1 dW_1 \\ dC^*_t & = \left(\frac{ \chi \rho e^{C^*_t+N^*_t} }{ \kappa + e^{N^*_t}} - \delta(t)e^{C^*_t} \right)e^{-C^*_t}dt + \sigma_2 dW_2. \end{align*} Here, the diffusion terms provide an approximation to stochastic variation for large but finite populations, and account for other sources of extra-demographic variation. While alternative parametrizations of stochastic evolution can be employed, the diffusion approximation is convenient for our purposes. A typical goal is to estimate the algal conversion rate $\chi$ and the half-saturation constant $\kappa$.
This experiment involves several experimental parameters that can be used as the dynamic input: \begin{itemize} \item The dilution rate of the chemostat $\delta(t)$ \item The concentration of nitrogen input $\eta_I(t)$ \item The times at which the samples are taken from the chemostat \item The quantities that are observed. \end{itemize} In this paper we use $\delta(t)$ as the control parameter, which fits well within the design methodology outlined above. Realistic values for the parameters, estimated from previous experiments, are $(\eta_I,\rho,\chi,\kappa) = (160,270,0.0027,4.4)$. We chose $\sigma_1 = \sigma_2 = 0.1$ based on visual agreement with past experiments; since these act multiplicatively on the dynamics, it is reasonable for them to be of the same order of magnitude. We hold $\eta_I(t)$ constant; within the experimental apparatus, $\eta_I(t)$ can only be modified by changing between discrete sources of medium. Chemostats are typically inoculated with a small number of algal cells, and the models above give algal density relative to a total population carrying capacity in the tens of millions. Under these circumstances, initial conditions can be given by $N^*_0 = \log \eta_I$ (the input concentration) and $C^*_0 \approx \log 10^{-5}$, representing an initial population of a few hundred cells.
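For concreteness, the log-transformed drift and a single Euler--Maruyama step of the controlled diffusion can be coded as follows. This is a sketch of our own, using the parameter values quoted above, with the dilution rate \texttt{delta} playing the role of the control input.
\begin{verbatim}
import numpy as np

eta_I, rho, chi, kappa = 160.0, 270.0, 0.0027, 4.4
sigma1 = sigma2 = 0.1

def drift(Nstar, Cstar, delta):
    """Drift of the log-transformed chemostat system."""
    N, C = np.exp(Nstar), np.exp(Cstar)
    uptake = rho * C * N / (kappa + N)     # algal nitrogen uptake
    dN = (delta * eta_I - uptake - delta * N) / N
    dC = (chi * uptake - delta * C) / C
    return dN, dC

def em_step(Nstar, Cstar, delta, dt, rng):
    """One Euler-Maruyama step of the controlled diffusion."""
    dN, dC = drift(Nstar, Cstar, delta)
    z1, z2 = rng.standard_normal(2)
    return (Nstar + dN * dt + sigma1 * np.sqrt(dt) * z1,
            Cstar + dC * dt + sigma2 * np.sqrt(dt) * z2)
\end{verbatim}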
The experimental apparatus places further constraints on the problem: the dilution rate must be positive, and cannot exceed the maximal rate at which medium can be pumped through the chemostat. We focus on the estimation of $\kappa.$ In this model system, one can measure both $C_t$ and $N_t$, though nitrogen measurements are both more costly to make and prone to distortion by various factors. Continuous measurements are not practical for either quantity; one can at best make measurements a few times a day. Here, we focus mainly on the complete-but-infrequent observation regime, where we assume we can make (noisy) measurements of both state variables.
This system represents a marginal case for adaptive designs, in that the system has a stable fixed point for each dilution rate and most information is gained near the fixed point. Further, the value of $\kappa$ mostly affects the fixed point of $N_t$. We thus expect that adaptive controls will exhibit little improvement over the best choice of a constant control policy, and that estimates based solely on $C_t$ -- by far the easier quantity to measure -- will have poor statistical properties whatever policy is used. Figure~\ref{fig:chemo-phase} shows an example trajectory of this model along with the fixed point as a function of $\kappa$. A more detailed analysis of the dynamics of this system, along with our choice of controls, is deferred to Appendix \ref{chemo.appendix}.
\begin{figure} \begin{center} \includegraphics[bb=0in 0in 4in 3in,scale=0.8]{chemo-traj1.eps} \end{center} \caption{The state space of the chemostat model. The solid curve is a single trajectory for the uncontrolled system. The horizontal dashed lines are the nullclines of the noise-free system. The thick gray line shows the movement of the fixed point as $\kappa$ varies from 2 to 12. Here, $\delta=0.3$ and $\kappa=4.4~.$} \label{fig:chemo-phase} \end{figure}
\paragraph{Simulation results} We now examine the effectiveness of the optimal control methodology via simulations of the chemostat model. We start at $(C^*_0, N^*_0)=(-4,2)$, corresponding to typical experimental conditions. With this initial condition, it typically takes a trajectory about 7 days or so (physical time) to reach equilibrium. In our simulations, we carry out experiments of duration $T$ for various values of $T$, ranging from 4 to 30 days. All simulation results are based on 256 realizations of the (controlled) diffusion process.
\begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=4cm]{chemo-ctraj.eps} & \includegraphics*[height=4cm]{chemo-controls.eps} \\ (a) A controlled trajectory & (b) The control $u_t$ \\ \includegraphics[height=4cm]{chemo-payoff.eps} & \includegraphics[height=4cm]{chemo-ctraj-n.eps} \\ (c) Payoff / $T$ versus $T$ & (d) The controlled $N^*_t$ \\ \end{tabular} \end{center} \caption{(a) A controlled trajectory of the chemostat system \eqref{eqn:chemo}, along with the control used (panel b) and the resulting $N^*_t$ (panel d). Panel (c) shows $\overline{FI}(T) / T$, where $\overline{FI}(T)$ is the mean ``pay-off,'' i.e., Fisher information averaged over initial conditions and experimental realizations.} \label{fig:chemo-traj} \end{figure}
The optimal control policies, for different values of $T$, are shown in Fig.~\ref{fig:chemo-controls}.
The control values are drawn from the set $\{0.1, 0.3, 0.5, 0.68\}$ (remaining below the dilution rate at which all the algae are eventually removed from the system), and we assume a flat prior for $\kappa$ over the interval $[3.5,5.5].$ As can be seen, the computed control policy is nontrivial and depends on $T.$ However, we observed that after $T\approx 10$, the control policy stops changing. The ``long-time,'' steady-state control policy uses only the extreme values 0.1 and 0.68; on shorter timescales, the control policy uses all available values. Fig.~\ref{fig:chemo-traj}(c) shows the quantity $\overline{FI}(T)/T$, where $\overline{FI}(T)$ is the mean Fisher information (averaged over experimental realizations and initial conditions) as a function of duration $T.$ As can be seen, by $T=30$, the system has begun to approach the asymptotic regime, in which we expect $\overline{FI}(T)\propto T.$ To illustrate the effects of the optimal control, a controlled trajectory is shown in Fig.~\ref{fig:chemo-traj}(a). The corresponding control values are shown in Fig.~\ref{fig:chemo-traj}(b), as are the corresponding values of $N^*_t$ in Fig.~\ref{fig:chemo-traj}(d) (not much of interest happens to $C^*_t$).
\begin{table} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} T & Control & N & In-range & Mean & Bias & Std.~Dev. & Std.~Dev.~Err \\\hline\hline 4 & Dynamic & 256 & 54\% & 4.555 & 0.1551 & 0.7837 & 0.03464 \\ 4 & 0.1 & 256 & 62\% & 4.658 & 0.2577 & 0.7157 & 0.03163 \\ 4 & 0.3 & 256 & 26\% & 4.514 & 0.1137 & 0.9043 & 0.03996 \\ 4 & 0.68 & 256 & 16\% & 4.624 & 0.2243 & 0.938 & 0.04145 \\\hline 7 & Dynamic & 256 & 100\% & 4.402 & 0.002351 & 0.0395 & 0.001746 \\ 7 & 0.1 & 256 & 100\% & 4.399 & -0.001245 & 0.04593 & 0.00203 \\ 7 & 0.3 & 256 & 35\% & 4.679 & 0.2788 & 0.8459 & 0.03738 \\ 7 & 0.68 & 256 & 18\% & 4.463 & 0.06275 & 0.9384 & 0.04147 \\\hline 15 & Dynamic & 256 & 100\% & 4.402 & 0.001752 & 0.00955 & 4.221e-4 \\ 15 & 0.1 & 256 & 100\% & 4.403 & 0.002518 & 0.01284 & 5.674e-4 \\ 15 & 0.3 & 256 & 100\% & 4.399 & -8.251e-4 & 0.01969 & 8.704e-4 \\ 15 & 0.68 & 256 & 19\% & 4.446 & 0.04579 & 0.9314 & 0.04116 \\\hline 30 & Dynamic & 256 & 100\% & 4.402 & 0.001827 & 0.004443 & 1.964e-4 \\ 30 & 0.1 & 256 & 100\% & 4.402 & 0.001629 & 0.005766 & 2.548e-4 \\ 30 & 0.3 & 256 & 100\% & 4.403 & 0.002547 & 0.01075 & 4.752e-4 \\ 30 & 0.68 & 256 & 21\% & 4.532 & 0.1324 & 0.9216 & 0.04073 \\ \end{tabular} \end{center} \caption{Full observation results for the chemostat model. We perform $N$ independent trials for each experimental set-up. {\em In-range} = fraction of trials whose estimate falls within the range of the uniform prior; {\em bias} = difference between the mean over trials and the true value; {\em std.~dev.} = standard deviation of the estimator; and {\em std.~dev.~err.} = our estimate of the standard error of the standard deviation.} \label{tab:chemo-perfo} \end{table}
To assess the quality of the $\kappa$ estimates, we estimated the variance of the MLE $\hat{\kappa}$ in simulations. Table~\ref{tab:chemo-perfo} shows the results of these simulations. As can be seen, on shorter timescales, dynamic and constant controls do not differ much in terms of performance. But on longer timescales (e.g., $T=30$), the optimal control performs significantly better than most constant control values, with the exception of $\delta = 0.1$, for which the constant-control system comes close to the performance of the dynamic control.
The reason for this is that, given enough time, trajectories in this system all converge to the stable fixed point. At and near this fixed point, the optimal policy sets $\delta = 0.1$, so once the trajectory is close to the fixed point, the optimal policy is largely indistinguishable from the constant policy. In principle, the dynamic control can offer a potential gain in information on shorter timescales (during the transient phase of the dynamics). But in this particular system, the dilution rate $\delta(t)$ has little effect on the dynamics during the transient, and the controlled and uncontrolled dynamics do not differ much.
In Appendix \ref{chemo.appendix}, we present further simulations based on infrequent, noisy observations and on a regime with observations of only $C_t$. As expected, in both of these regimes, the use of a dynamic control does not provide improvement over an optimal choice of constant control. As is also expected, $C_t$-only observations yield almost no information about $\kappa$ in any control regime.
\section{Discussion} \label{sec:Conclusion}
There has been increasing interest in combining experimental data and statistical methods with mechanistic dynamical models describing system behavior. This paper adds to this literature by considering the problem of designing inputs to such experiments so as to improve the precision of the resulting parameter estimates. In particular, we have demonstrated that in diffusion processes, maximizing Fisher Information about a parameter can be cast as a problem of optimal control, and shown that using this strategy can substantially improve parameter estimates. In order to make control methods feasible when systems are observed with noise or only some state variables are observable, we employ the strategy of estimating the value of the state on-line and using this within a pre-computed control policy.
We have demonstrated this approach on three examples that showcase when this form of adaptive control is most likely to be useful. One situation in which we expect significant benefit from dynamic control is when visits to the information-rich regions of state space are relatively rare, as in the double-well example. As we have seen, optimal control can be effective in increasing the frequency of visits to information-rich regions. In systems where trajectories naturally return to information-rich regions in a recurrent fashion even with static controls, our method may still yield modest benefits, but the exact degree of information gain will depend on the specific dynamic situation. In systems with stable fixed points that are already informative about the parameters of interest, dynamic control is not likely to be better than simply choosing an optimal static control. However, our methods also yield information about which constant control is likely to be most useful; this may be helpful in, e.g., chemostat experiments, where measurements are relatively infrequent and dynamic control is relatively easy to implement.
There remain numerous unresolved problems and open areas in which these methods can be extended. Computational cost, both in speed and memory, represents the largest limiting factor in employing these methods. In particular, the storage and computation costs of the policy scale exponentially with the number of state variables.
Possible remedies involve the application of sparse grids for discretizing the state variables \citep{xiu} or -- more heuristically -- intensive Monte Carlo methods to simulate the state variables forward, combined with machine learning methods to estimate control strategies. It may also be possible to pre-compute control policies only for high-probability regions of the state space and employ techniques of approximate dynamic programming \citep{powell}.
Further extensions involve alternative targets for the control policy. \citet{ThorbergssonHooker} explores maximizing the Fisher Information for a partially observed Markov decision process, although this exacerbates the computational difficulties discussed above. Multiple parameters of interest can be readily accommodated by maximizing the trace or determinant of the Fisher Information matrix. We have dealt with parameter uncertainty before the experiment by averaging the Fisher Information over the prior within our objective. Alternative strategies, such as maximizing the minimum Fisher Information over a range of parameters, can be implemented within the numerical control strategy. We have also not experimented with updating our prior as the experiment progresses; where the optimal control policy depends strongly on the system parameters, this may be expected to produce significant improvements.
\bibliographystyle{chicago}
{ "timestamp": "2013-06-10T02:00:46", "yymm": "1210", "arxiv_id": "1210.3739", "language": "en", "url": "https://arxiv.org/abs/1210.3739" }
\section{Introduction} The development of solution methods and computational algorithms for multidimensional extremal problems is one of the important concerns in linear tropical (idempotent) algebra \cite{Baccelli92Synchronization,Cuninghamegreen94Minimax,Kolokoltsov97Idempotent,Golan03Semirings,Heidergott05Maxplus,Litvinov07Themaslov,Butkovic10Maxlinear}. The problems under consideration involve the minimization of linear and nonlinear functionals defined on finite-dimensional semimodules over idempotent semifields, and may have additional constraints imposed on the feasible solution set in the form of linear tropical equalities and inequalities. Among these problems are idempotent analogues of linear programming problems \cite{Zimmermann03Disjunctive,Butkovic09Introduction,Butkovic10Maxlinear} and their extensions with nonlinear objective functions \cite{Krivulin05Evaluation,Krivulin06Solution,Krivulin06Eigenvalues,Krivulin09Methods,Krivulin11Algebraic,Krivulin11Analgebraic,Krivulin11Anextremal,Gaubert12Tropical,Krivulin12Anew}. There are also solutions to certain problems where both the objective function and the constraints are nonlinear \cite{Tharwat08Oneclass,Butkovic09Onsome}.
Many extremal problems are formulated and solved only in terms of one idempotent semifield, say the classical semifield $\mathbb{R}_{\max,+}$ in \cite{Butkovic09Introduction,Butkovic10Maxlinear,Gaubert12Tropical}. Some other problems, including those considered in \cite{Zimmermann03Disjunctive,Krivulin06Eigenvalues,Krivulin09Methods,Krivulin11Algebraic,Krivulin11Analgebraic,Krivulin11Anextremal,Krivulin12Anew}, are treated in a more general setting, which includes the semifield $\mathbb{R}_{\max,+}$ as a particular case. Furthermore, proposed solutions frequently take (see, e.g., \cite{Tharwat08Oneclass,Butkovic09Introduction,Butkovic09Onsome,Butkovic10Maxlinear,Gaubert12Tropical}) the form of an iterative algorithm that produces a solution if one exists, or indicates that there is no solution otherwise. In other cases, as in \cite{Zimmermann03Disjunctive,Krivulin05Evaluation,Krivulin06Eigenvalues,Krivulin09Methods,Krivulin11Algebraic,Krivulin11Analgebraic,Krivulin11Anextremal,Krivulin12Anew}, direct solutions are given in closed form. Finally, note that most of the existing approaches offer only particular solutions rather than comprehensive solutions to the problems.
In this paper, we consider a multidimensional extremal problem that generalizes the problems examined in \cite{Krivulin05Evaluation,Krivulin06Eigenvalues,Krivulin09Methods,Krivulin11Algebraic,Krivulin11Analgebraic,Krivulin11Anextremal,Krivulin12Anew}. Particular cases of the problem arise in various applications, including growth rate estimation for the state vector in stochastic dynamic systems with event synchronization \cite{Krivulin05Evaluation,Krivulin09Methods} and single facility location with Chebyshev and rectilinear metrics \cite{Krivulin11Analgebraic,Krivulin11Anextremal}. Building on and further developing the methods and techniques proposed in \cite{Krivulin05Evaluation,Krivulin06Solution,Krivulin06Eigenvalues,Krivulin09Methods,Krivulin11Algebraic,Krivulin11Analgebraic,Krivulin11Anextremal,Krivulin12Anew,Krivulin12Solution}, we give a complete solution to the problem in a closed form that provides an appropriate basis for both formal analysis and the development of efficient computational procedures.
The rest of the paper is organized as follows.
We begin with an introduction to idempotent algebra and outline basic results that underlie the subsequent solutions. Furthermore, examples of tropical extremal problems are presented and their solutions are briefly discussed. A new extremal problem is then introduced, and a closed-form solution to the problem under general conditions is established. Finally, solutions to some particular cases and extensions of the problem are given.
\section{Basic Definitions and Results}
We begin with basic algebraic definitions and preliminary results from \cite{Krivulin06Solution,Krivulin06Eigenvalues,Krivulin09Methods} to provide a formal framework for the analysis and solutions presented in the paper. Additional details and a thorough investigation of the theory can be found in \cite{Baccelli92Synchronization,Cuninghamegreen94Minimax,Kolokoltsov97Idempotent,Golan03Semirings,Heidergott05Maxplus,Litvinov07Themaslov,Butkovic10Maxlinear}.
\subsection{Idempotent Semifield}
We consider a set $\mathbb{X}$ that is closed under two binary operations, addition $\oplus$ and multiplication $\otimes$, and equipped with the respective neutral elements, zero $\mathbb{0}$ and identity $\mathbb{1}$. We suppose that the algebraic system $\langle\mathbb{X},\mathbb{0},\mathbb{1},\oplus,\otimes\rangle$ is a commutative semiring with idempotent addition and invertible multiplication. Since every $x\in\mathbb{X}_{+}$, where $\mathbb{X}_{+}=\mathbb{X}\setminus\{\mathbb{0}\}$, has a multiplicative inverse $x^{-1}$, the semiring is commonly referred to as an idempotent semifield.
Integer powers are introduced in the usual way to represent iterated multiplication. Moreover, we assume that powers with rational exponents are also defined, and so consider the semifield to be radicable. In what follows we omit the multiplication sign $\otimes$, as is usual in conventional algebra. The power notation is always used in the above sense.
The idempotent addition naturally induces a partial order on the semifield. Furthermore, we assume that this partial order can be completed to a total order, thus allowing the semifield to be linearly ordered. In the following, the relation signs and the $\min$ symbol are understood in terms of this linear order.
Examples of linearly ordered radicable idempotent semifields include \begin{align*} \mathbb{R}_{\max,+} &= \langle\mathbb{R}\cup\{-\infty\},-\infty,0,\max,+\rangle, \\ \mathbb{R}_{\min,+} &= \langle\mathbb{R}\cup\{+\infty\},+\infty,0,\min,+\rangle, \\ \mathbb{R}_{\max,\times} &= \langle\mathbb{R}_{+}\cup\{0\},0,1,\max,\times\rangle, \\ \mathbb{R}_{\min,\times} &= \langle\mathbb{R}_{+}\cup\{+\infty\},+\infty,1,\min,\times\rangle, \end{align*} where $\mathbb{R}$ is the set of reals and $\mathbb{R}_{+}=\{x\in\mathbb{R}|x>0\}$.
\subsection{Idempotent Semimodule}
Consider the Cartesian product $\mathbb{X}^{n}$ with column vectors as its elements. A vector with all components equal to $\mathbb{0}$ is called the zero vector and is denoted by $\mathbb{0}$. The operations of vector addition $\oplus$ and scalar multiplication $\otimes$ are routinely defined component-wise through the scalar operations introduced on $\mathbb{X}$. The set $\mathbb{X}^{n}$ with these operations forms a finite-dimensional idempotent semimodule over $\mathbb{X}$. A vector is called regular if it has no zero components. The set of all regular vectors of order $n$ over $\mathbb{X}_{+}$ is denoted by $\mathbb{X}_{+}^{n}$.
For any nonzero column vector $\bm{x}=(x_{i})\in\mathbb{X}^{n}$, we define a row vector $\bm{x}^{-}=(x_{i}^{-})$, where $x_{i}^{-}=x_{i}^{-1}$ if $x_{i}\ne\mathbb{0}$, and $x_{i}^{-}=\mathbb{0}$ otherwise, $i=1,\ldots,n$.
\subsection{Matrix Algebra}
For conforming matrices with entries from $\mathbb{X}$, addition and multiplication of matrices, together with multiplication by scalars, follow the conventional rules using the scalar operations defined on $\mathbb{X}$. A matrix with all entries equal to $\mathbb{0}$ is called the zero matrix and is denoted by $\mathbb{0}$. A matrix is row (column) regular if it has no zero rows (columns).
Consider the set of square matrices $\mathbb{X}^{n\times n}$. As in conventional algebra, a matrix is diagonal if its off-diagonal entries are equal to $\mathbb{0}$. A diagonal matrix with all diagonal entries equal to $\mathbb{1}$ is called the identity matrix and is denoted by $I$. Finally, for any square matrix $A$, the exponent notation stands for repeated multiplication, with the convention that $A^{0}=I$. The set $\mathbb{X}^{n\times n}$ with the matrix addition and multiplication forms an idempotent semiring with identity.
For any matrix $A=(a_{ij})$, its trace is given by $$ \mathop\mathrm{tr}A = \bigoplus_{i=1}^{n}a_{ii}. $$
A matrix is called reducible if simultaneous permutations of rows and columns put it into a block-triangular normal form, and irreducible otherwise. The normal form of a matrix $A\in\mathbb{X}^{n\times n}$ is given by \begin{equation}\label{E-MNF} A = \left( \begin{array}{cccc} A_{11} & \mathbb{0} & \ldots & \mathbb{0} \\ A_{21} & A_{22} & & \mathbb{0} \\ \vdots & \vdots & \ddots & \\ A_{s1} & A_{s2} & \ldots & A_{ss} \end{array} \right), \end{equation} where $A_{ii}$ is either an irreducible or a zero matrix of order $n_{i}$, whereas $A_{ij}$ is an arbitrary matrix of size $n_{i}\times n_{j}$ for all $i=1,\ldots,s$, $j<i$, and $n_{1}+\cdots+n_{s}=n$.
\subsection{Spectrum of Matrices}
Any matrix $A\in\mathbb{X}^{n\times n}$ defines on the semimodule $\mathbb{X}^{n}$ a linear operator with certain spectral properties. Specifically, if the matrix $A$ is irreducible, then it has a unique eigenvalue, which is given by \begin{equation} \lambda = \bigoplus_{m=1}^{n}\mathop\mathrm{tr}\nolimits^{1/m}(A^{m}), \label{E-lambda} \end{equation} and all corresponding eigenvectors are regular.
Let the matrix $A$ be reducible and have the form \eqref{E-MNF}. All eigenvalues of $A$ are then among the eigenvalues $\lambda_{i}$ of the diagonal blocks $A_{ii}$, $i=1,\ldots,s$. The value $\lambda=\lambda_{1}\oplus\cdots\oplus\lambda_{s}$ is always an eigenvalue; it is calculated by \eqref{E-lambda} and is called the spectral radius of $A$.
\subsection{Linear Inequalities}
Suppose we are given a matrix $A\in\mathbb{X}^{m\times n}$ and a regular vector $\bm{d}\in\mathbb{X}_{+}^{m}$. The problem is to solve, with respect to the unknown vector $\bm{x}\in\mathbb{X}^{n}$, the linear inequality \begin{equation} A\bm{x} \leq \bm{d}. \label{I-Axd} \end{equation} Clearly, if $A=\mathbb{0}$, then any vector $\bm{x}$ is a solution. Assume now that the matrix $A\ne\mathbb{0}$ has some zero columns. It is easy to see that each zero column in $A$ allows the corresponding element of the solution vector $\bm{x}$ to take arbitrary values. The other elements can be found from a reduced inequality whose matrix is formed by omitting the zero columns from $A$, and which is therefore column-regular. The solution to the inequality for column-regular matrices is as follows.
\begin{lemma}\label{L-IAxd} A vector $\bm{x}$ is a solution of inequality \eqref{I-Axd} with a column-regular matrix $A$ and a regular vector $\bm{d}$ if and only if $$ \bm{x} \leq (\bm{d}^{-}A)^{-}. $$ \end{lemma}
For a given square matrix $A\in\mathbb{X}^{n\times n}$ and a vector $\bm{b}\in\mathbb{X}^{n}$, we now find all regular solutions $\bm{x}\in\mathbb{X}_{+}^{n}$ to the inequality \begin{equation} A\bm{x}\oplus\bm{b} \leq \bm{x}. \label{I-Axbx} \end{equation} To solve the problem, we follow an approach based on the function $\mathop\mathrm{Tr}$ that maps each square matrix $A\in\mathbb{X}^{n\times n}$ to the scalar $$ \mathop\mathrm{Tr}(A) = \bigoplus_{m=1}^{n}\mathop\mathrm{tr} A^{m}. $$ For any matrix $A\in\mathbb{X}^{n\times n}$, we introduce the matrix $$ A^{\ast} = I\oplus A\oplus\cdots\oplus A^{n-1}. $$
Assume a matrix $A$ to be represented in its normal form \eqref{E-MNF}. We define a diagonal matrix $$ D = \left( \begin{array}{ccc} A_{11} & & \mathbb{0} \\ & \ddots & \\ \mathbb{0}& & A_{ss} \end{array} \right), $$ and a lower triangular matrix $$ T = \left( \begin{array}{cccc} \mathbb{0}& \ldots & \ldots & \mathbb{0} \\ A_{21} & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \vdots \\ A_{s1} & \ldots & A_{s,s-1} & \mathbb{0} \end{array} \right), $$ which represent the diagonal and strictly triangular parts in the decomposition of $A$ in the form \begin{equation} A = D\oplus T. \label{E-ADT} \end{equation} Note that if the matrix $A$ is irreducible, we put $D=A$ and $T=\mathbb{0}$.
\begin{theorem}\label{T-IAxbx} Let $\bm{x}$ be the general regular solution of inequality \eqref{I-Axbx} with a matrix $A$ in the form of \eqref{E-ADT}. Then the following statements are valid: \begin{enumerate} \item If $\mathop\mathrm{Tr}(A)\leq\mathbb{1}$, then $\bm{x}=(D^{\ast}T)^{\ast}D^{\ast}\bm{u}$ for all $\bm{u}\in\mathbb{X}_{+}^{n}$ such that $\bm{u}\geq\bm{b}$. \item If $\mathop\mathrm{Tr}(A)>\mathbb{1}$, then there is no regular solution. \end{enumerate} \end{theorem}
\section{Tropical Extremal Problems}
We now turn to the discussion of multidimensional extremal problems formulated in terms of idempotent algebra. The problems are set up to minimize both linear and nonlinear functionals defined on semimodules over idempotent semifields, subject to constraints in the form of linear equalities and inequalities. In this section, the symbols $A$ and $C$ stand for given matrices, $\bm{b}$, $\bm{d}$, $\bm{p}$, $\bm{q}$, $\bm{g}$ and $\bm{h}$ for vectors, and $r$ and $s$ for numbers.
We start with an idempotent analogue of the linear programming problems examined in \cite{Butkovic09Introduction,Butkovic10Maxlinear} and defined in terms of the semifield $\mathbb{R}_{\max,+}$: find the solution $\bm{x}$ to the problem \begin{gather*} \min\ (\bm{p}^{T}\bm{x}\oplus r), \\ A\bm{x}\oplus\bm{b}\leq C\bm{x}\oplus\bm{d}. \end{gather*} A solution technique based on an iterative algorithm, called the alternating method, is proposed there; it produces a solution if one exists, or indicates that there is no solution otherwise. The technique is extended in \cite{Gaubert12Tropical} to provide an iterative computational scheme for a problem with a nonlinear objective function, given by \begin{gather*} \min\ (\bm{p}^{T}\bm{x}\oplus r)(\bm{q}^{T}\bm{x}\oplus s)^{-1}, \\ A\bm{x}\oplus\bm{b}\leq C\bm{x}\oplus\bm{d}. \end{gather*} There are certain problems which can be solved directly in a closed form.
Specifically, an explicit formula is proposed in \cite{Zimmermann03Disjunctive}, within the framework of optimization of max-separable functions under disjunctive constraints, for the solution of the problem \begin{gather*} \min\ \bm{p}^{T}\bm{x}, \\ C\bm{x}\geq\bm{b}, \\ \bm{g}\leq\bm{x}\leq\bm{h}. \end{gather*} Furthermore, in \cite{Krivulin05Evaluation,Krivulin06Eigenvalues,Krivulin09Methods,Krivulin11Algebraic,Krivulin11Analgebraic}, a problem is examined which seeks regular solutions $\bm{x}$ that provide $$ \min\ \bm{x}^{-}A\bm{x}. $$ To get a closed-form solution to this problem, an approach is applied that uses results of the spectral theory of linear operators in idempotent algebra. Finally, a closed-form solution based on a technique of solving linear equations and inequalities is derived in \cite{Krivulin12Anew} for the problem \begin{gather*} \min\ (\bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x}), \\ A\bm{x}\leq\bm{x}. \end{gather*} In the rest of the paper, we consider a problem with a general objective function that contains the objective functions of the last two problems as particular cases. When no additional constraints are imposed on the solution, a general solution to the problem is given in closed form. \section{A New General Extremal Problem} Given a matrix $A\in\mathbb{X}^{n\times n}$ and vectors $\bm{p},\bm{q}\in\mathbb{X}^{n}$, consider the problem of finding $\bm{x}$ that provides \begin{equation} \min_{\bm{x}\in\mathbb{X}_{+}^{n}}(\bm{x}^{-}A\bm{x}\oplus\bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x}). \label{P-xAxxpqx} \end{equation} A complete explicit solution to the problem under general conditions, as well as to some particular cases and extensions, is given in the subsequent sections. \subsection{The Main Result} We start with a solution to the problem in a general setting that is appropriate for many applications. \begin{theorem}\label{T-xAxxpqx} Suppose $A$ is a matrix in the form \eqref{E-MNF}, $\bm{p}$ is a vector, $\bm{q}$ is a regular vector, $\lambda$ is the spectral radius of $A$, and $$ \Delta = (\bm{q}^{-}\bm{p})^{1/2}, \qquad \mu = \lambda\oplus\Delta\ne\mathbb{0}. $$ Define a matrix $$ A_{\mu} = \mu^{-1}A = D_{\mu}\oplus T_{\mu}, $$ where $D_{\mu}$ and $T_{\mu}$ are the respective diagonal and lower triangular parts of $A_{\mu}$, and a matrix $$ B = (D_{\mu}^{\ast}T_{\mu})^{\ast}D_{\mu}^{\ast}. $$ Then the minimum in \eqref{P-xAxxpqx} is equal to $\mu$ and attained if and only if $$ \bm{x} = B\bm{u} $$ for all regular vectors $\bm{u}$ such that $$ \mu^{-1}\bm{p} \leq \bm{u} \leq \mu(\bm{q}^{-}B)^{-}. $$ \end{theorem} \begin{proof} We show that both $\lambda$ and $\Delta$ are lower bounds for the objective function in \eqref{P-xAxxpqx}, and then obtain all regular vectors $\bm{x}$ that yield the value $\mu=\lambda\oplus\Delta$ of the function. To verify that $\lambda$ is a lower bound, we write $$ \bm{x}^{-}A\bm{x}\oplus\bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x} \geq \bm{x}^{-}A\bm{x}. $$ Assume the matrix $A$ to be irreducible and $\lambda$ to be its unique eigenvalue. We take a corresponding eigenvector $\bm{x}_{0}$ and note that for all $\bm{x}\in\mathbb{X}_{+}^{n}$, $$ \bm{x}_{0}^{-}\bm{x}_{0} = \mathbb{1}, \qquad \bm{x}\bm{x}_{0}^{-} \geq(\bm{x}^{-}\bm{x}_{0})^{-1}I. $$ Furthermore, we have \begin{multline*} \bm{x}^{-}A\bm{x} = \bm{x}^{-}A\bm{x}\bm{x}_{0}^{-}\bm{x}_{0} = \bm{x}^{-}A(\bm{x}\bm{x}_{0}^{-})\bm{x}_{0} \\ \geq \bm{x}^{-}A\bm{x}_{0}(\bm{x}^{-}\bm{x}_{0})^{-1} = \lambda\bm{x}^{-}\bm{x}_{0}(\bm{x}^{-}\bm{x}_{0})^{-1} = \lambda.
\end{multline*} Consider now an arbitrary matrix $A$ of the form \eqref{E-MNF}. Any vector $\bm{x}$ admits a decomposition into subvectors $\bm{x}_{1},\ldots,\bm{x}_{s}$ according to the decomposition of $A$ into column blocks. With the above result for irreducible matrices, we obtain $$ \bm{x}^{-}A\bm{x} = \bigoplus_{i=1}^{s}\bigoplus_{j=1}^{i} \bm{x}_{i}^{-}A_{ij}\bm{x}_{j} \geq \bigoplus_{i=1}^{s} \bm{x}_{i}^{-}A_{ii}\bm{x}_{i} \geq \bigoplus_{i=1}^{s}\lambda_{i} = \lambda. $$ Now we show that $\Delta=(\bm{q}^{-}\bm{p})^{1/2}$ is also a lower bound for the objective function. We have $$ \bm{x}^{-}A\bm{x}\oplus\bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x} \geq \bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x}. $$ Let us take any vector $\bm{x}\in\mathbb{X}_{+}^{n}$ and denote $$ r = \bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x}. $$ The last equality leads to the two inequalities $$ r \geq \bm{q}^{-}\bm{x} > \mathbb{0}, \qquad r \geq \bm{x}^{-}\bm{p}. $$ Multiplication of the first inequality by $r^{-1}\bm{x}^{-}$ from the right gives $\bm{x}^{-}\geq r^{-1}\bm{q}^{-}\bm{x}\bm{x}^{-}\geq r^{-1}\bm{q}^{-}$. Substitution of $\bm{x}^{-}\geq r^{-1}\bm{q}^{-}$ into the second results in $r\geq r^{-1}\bm{q}^{-}\bm{p}=r^{-1}\Delta^{2}$, whence it follows that $$ \bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x} = r \geq \Delta. $$ By combining both bounds, we conclude that $$ \bm{x}^{-}A\bm{x}\oplus\bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x} \geq \lambda\oplus\Delta = \mu. $$ It remains to find all regular solutions $\bm{x}$ of the equation $$ \bm{x}^{-}A\bm{x}\oplus\bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x} = \mu. $$ Since $\bm{x}^{-}A\bm{x}\oplus\bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x}\geq\mu$ for all $\bm{x}\in\mathbb{X}_{+}^{n}$, the set of regular solutions of the equation coincides with that of the inequality $$ \bm{x}^{-}A\bm{x}\oplus\bm{x}^{-}\bm{p}\oplus\bm{q}^{-}\bm{x} \leq \mu, $$ which itself is equivalent to the system of inequalities \begin{align} \bm{x}^{-}A\bm{x}\oplus\bm{x}^{-}\bm{p} &\leq \mu, \label{I-xAxxpmu} \\ \bm{q}^{-}\bm{x} &\leq \mu. \label{I-qxmu} \end{align} Let us consider inequality \eqref{I-xAxxpmu}. After multiplication of the inequality by $\mu^{-1}\bm{x}$ from the left, we write $$ A_{\mu}\bm{x}\oplus\mu^{-1}\bm{p}\leq\mu^{-1}\bm{x}\bm{x}^{-}A\bm{x}\oplus\mu^{-1}\bm{x}\bm{x}^{-}\bm{p}\leq\bm{x}, $$ and then arrive at the inequality $$ A_{\mu}\bm{x}\oplus\mu^{-1}\bm{p} \leq \bm{x}. $$ On the other hand, left multiplication of the obtained inequality by $\mu\bm{x}^{-}$ directly yields inequality \eqref{I-xAxxpmu}, and thus both inequalities are equivalent. Since $\mathop\mathrm{Tr}(A_{\mu})=\mathop\mathrm{Tr}(\mu^{-1}A)\leq\mathbb{1}$, we can apply Theorem~\ref{T-IAxbx} to the last inequality so as to get the general solution of inequality \eqref{I-xAxxpmu} in the form $$ \bm{x} = (D_{\mu}^{\ast}T_{\mu})^{\ast}D_{\mu}^{\ast}\bm{u} = B\bm{u}, $$ where $\bm{u}\in\mathbb{X}_{+}^{n}$ is any vector such that $\bm{u}\geq\mu^{-1}\bm{p}$. Substitution of the solution into inequality \eqref{I-qxmu} gives the inequality $\bm{q}^{-}B\bm{u}\leq\mu$. Application of Lemma~\ref{L-IAxd} to the last inequality yields $\bm{u}\leq\mu(\bm{q}^{-}B)^{-}$. By combining the lower and upper bounds obtained for the vector $\bm{u}$, we finally arrive at the solution $$ \bm{x} = B\bm{u}, $$ for all $\bm{u}\in\mathbb{X}_{+}^{n}$ such that $$ \mu^{-1}\bm{p} \leq \bm{u} \leq \mu(\bm{q}^{-}B)^{-}. \qedhere $$ \end{proof} \subsection{Particular Cases and Extensions} Consider problem \eqref{P-xAxxpqx} with an irreducible matrix $A$. Since in this case $D_{\mu}=A_{\mu}$, $T_{\mu}=\mathbb{0}$, and $B=A_{\mu}^{\ast}$, the statement of Theorem~\ref{T-xAxxpqx} takes a reduced form. \begin{corollary} If $A$ is an irreducible matrix, then the solution set of \eqref{P-xAxxpqx} is given by $$ \bm{x} = A_{\mu}^{\ast}\bm{u}, $$ for all $\bm{u}\in\mathbb{X}_{+}^{n}$ such that $$ \mu^{-1}\bm{p} \leq \bm{u} \leq \mu(\bm{q}^{-}A_{\mu}^{\ast})^{-}. $$ \end{corollary}
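To illustrate the computational content of this corollary, the following sketch evaluates the closed-form solution over the semifield $\mathbb{R}_{\max,+}$, where $\oplus=\max$, multiplication is ordinary addition, $\mathbb{0}=-\infty$, $\mathbb{1}=0$, $x^{-1}=-x$, and $x^{1/m}=x/m$. The matrix $A$ and the vectors $\bm{p}$, $\bm{q}$ below are arbitrary illustrative data of our own choosing, not tied to any application.
\begin{verbatim}
import numpy as np

NEG = -np.inf  # the zero of the semifield

def mp_mul(A, B):
    # max-plus matrix product: (AB)_ij = max_k (a_ik + b_kj)
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_power(A, m):
    P = np.where(np.eye(A.shape[0]) == 1, 0.0, NEG)  # identity I
    for _ in range(m):
        P = mp_mul(P, A)
    return P

def spectral_radius(A):
    # lambda = max over m of tr(A^m)/m, cf. (E-lambda)
    n = A.shape[0]
    return max(np.max(np.diag(mp_power(A, m))) / m
               for m in range(1, n + 1))

def star(A):
    # A^* = I + A + ... + A^{n-1}, taken entrywise as a maximum
    n = A.shape[0]
    return np.max([mp_power(A, m) for m in range(n)], axis=0)

A = np.array([[0., 2., 1.], [1., 0., 3.], [2., 1., 0.]])
p = np.array([[1.], [0.], [2.]])
q = np.array([[0.], [1.], [0.]])

lam = spectral_radius(A)
Delta = np.max(p - q) / 2        # Delta = (q^- p)^{1/2}
mu = max(lam, Delta)             # mu = lambda (+) Delta
B = star(A - mu)                 # B = A_mu^* for irreducible A
x = mp_mul(B, p - mu)            # x = B u with u = mu^{-1} p

obj = max(np.max(mp_mul(A, x) - x), np.max(p - x), np.max(x - q))
print(mu, obj)                   # the two numbers agree
\end{verbatim}
Running the sketch prints two equal numbers: the bound $\mu$ and the objective value at $\bm{x}=B\bm{u}$ with $\bm{u}=\mu^{-1}\bm{p}$, the lower end of the admissible interval for $\bm{u}$.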
In particular, when $A=\mathbb{0}$, we have $B=A_{\mu}^{\ast}=I$ and $\mu=\Delta$. The solution set then reduces to $$ \Delta^{-1}\bm{p} \leq \bm{x} \leq \Delta\bm{q}, $$ which coincides with that in \cite{Krivulin12Anew}. Suppose now that the vector $\bm{q}$ in problem \eqref{P-xAxxpqx} is irregular. In this case, the row vector $\bm{q}^{-}B$ in the inequality $$ \bm{q}^{-}B\bm{u}\leq\mu $$ may fail to be column-regular, which prevents direct application of Lemma~\ref{L-IAxd} as in Theorem~\ref{T-xAxxpqx}. Let $J=\mathop\mathrm{supp}(\bm{q}^{-}B)$ be the set of indices of nonzero elements in the row vector $\bm{q}^{-}B$. Denote by $(\bm{q}^{-}B)_{J}$ and $\bm{u}_{J}$ the subvectors that have only components with indices from $J$. The solution of the above inequality is given by the constraints $\bm{u}_{J}\leq\mu(\bm{q}^{-}B)_{J}^{-}$ on the subvector $\bm{u}_{J}$, whereas the remaining components of the vector $\bm{u}$ can take arbitrary values. Now we can somewhat relax the conditions of Theorem~\ref{T-xAxxpqx} as follows. \begin{theorem} Under the assumptions of Theorem~\ref{T-xAxxpqx}, let $\bm{q}\ne\mathbb{0}$ be an arbitrary vector and $J=\mathop\mathrm{supp}(\bm{q}^{-}B)$. Then the minimum in \eqref{P-xAxxpqx} is equal to $\mu$ and attained if and only if $$ \bm{x} = B\bm{u} $$ for all regular vectors $\bm{u}$ such that $$ \mu^{-1}\bm{p} \leq \bm{u}, \quad \bm{u}_{J} \leq \mu(\bm{q}^{-}B)_{J}^{-}. $$ \end{theorem} Finally, note that when $\bm{q}=\mathbb{0}$ we have $J=\emptyset$, and so the upper bound for $\bm{u}$ disappears. \section{Conclusion} A complete closed-form solution has been derived for a tropical extremal problem with a nonlinear objective function and without constraints. The solution involves only simple matrix and vector operations in terms of idempotent algebra and provides a basis for the development of efficient computational algorithms and their software implementation. A suggested line of further research is to obtain solutions to the problems under constraints in the form of tropical linear equalities and inequalities. Practical examples of successful application of the obtained results are also of great interest.
{ "timestamp": "2012-10-16T02:00:59", "yymm": "1210", "arxiv_id": "1210.3658", "language": "en", "url": "https://arxiv.org/abs/1210.3658" }
\section{Introduction} In recent years, models based on the hypothesis that our universe is a four-dimensional space-time hypersurface (3-brane) embedded in a fundamental multi-dimensional space \ci{rushap},\ci{otherbr} have become quite popular; see, for example, the reviews \ci{RuBar}-\ci{loc6} and the references therein. The number of extra dimensions, their characteristic size and the number of physical fields which spread into the bulk space may be different in various approaches. At the same time it is assumed that the size of the additional space is large enough that the additional dimensions can, in principle, be detected in terrestrial experiments planned for the near future and/or in astrophysical observations. The four-dimensionality of our world can be ensured, in particular, by the mechanism of localization of matter fields on three-dimensional hypersurfaces in multidimensional space, i.e. 3-branes. Different scenarios of domain-wall description and their applications to elementary particle physics and cosmology can be found in a number of reviews \ci{rev1} - \ci{loc6}. Especially interesting is the influence of gravity, which plays an important role in the (quasi)localization of matter fields on the brane \ci{rev12} - \ci{singl1}. The question arises: under what circumstances is the (quasi)localization of spin-zero matter fields on a brane still possible when the minimal interaction with gravity is present? This work is partially devoted to answering this question. In this paper we consider a model of the formation of domain walls of finite thickness (``thick'' branes) by self-interacting scalar fields and gravity in five-dimensional noncompact space-time \ci{aags2} with anti-de Sitter geometries on both sides of the brane. The formation of a ``thick'' brane with light particles localized on it was obtained earlier in \ci{aags1} with the help of background scalar and gravitational fields whose vacuum configurations have nontrivial topology. The appearance of scalar states with (almost) zero mass on the brane turned out to be possible. However, as was previously shown \ci{rev19}, the existence of a centrifugal potential in the second variation of the scalar-field action may lead to the absence of localized modes on the brane. In the present work the scalar matter is composed of two fields with an $O(2)$ symmetric self-interaction. One of them (the ``branon'' \cite{branon}) is mixed with the gravity scalar modes and plays the role of the brane-formation mode (due to a kink background), while the other is a fermion mass generating (FMG) field (replacing a Higgs field). A soft breaking of the $O(2)$ symmetry by tachyonic mass terms for both fields is introduced, which eventually generates spontaneous breaking of translational symmetry due to the formation of a kink-type field v.e.v. Furthermore, for special values of the tachyonic mass terms there exists a critical point of spontaneous $\tau$ symmetry breaking, at which a v.e.v. of the FMG scalar field arises. In the first phase the only nontrivial v.e.v. is given by a kink configuration. But the branon fluctuations around the kink in the presence of gravity are suppressed by the universal repulsive centrifugal potential which survives in the zero-gravity limit \ci{rev19}. Thus gravity induces a discontinuity in the branon field spectrum. However, the FMG field in this phase decouples from the branons, is massive, and exhibits a more regular weak-gravity behavior. In the second phase the Higgs-type field obtains a localized v.e.v. to be used for the generation of fermion masses \ci{aags1}.
Both fields, branons and FMG scalars, are mixed, and the scalar mass spectrum and eigenstates must be found by functional matrix diagonalization. The work starts (Section 2) with a brief motivation of the necessity of two scalar fields to provide fermion localization on a domain wall \ci{loc1} -\ci{loc5} and to supply the localized Dirac fermions with masses. In Section 3 the model of two scalar fields with minimal coupling to gravity is formulated for an arbitrary potential and the equations of motion are derived. In Subsection 3.2 the scalar potential is restricted to a quartic $O(2)$ symmetric potential with a soft breaking of the $O(2)$ symmetry quadratic in the fields (as it could arise from the fermion-induced effective action \ci{aags2}). For this Lagrangian the gaussian normal coordinates are introduced and the appropriate equations of motion are obtained. The existence of two phases, which differ in the presence or absence of a v.e.v. for the FMG field, is revealed, and the solutions for the classical background of both scalar fields are found in the leading approximation of the gravity coupling expansion. In Subsection 3.3 the next-to-leading approximation is worked out. In Section 4 the full action is derived up to quadratic order in fluctuations in the vicinity of a background metric. It is dedicated to the separation of the equations with respect to the different degrees of freedom. At the end of this section the action of the scalar fields for the brane and gravity is obtained in gauge invariant variables. In Section 5 the mass spectrum for the ``thick'' brane in the theory with a quartic $O(2)$ symmetric potential and a soft breaking of the $O(2)$ symmetry quadratic in the fields is investigated around the critical point in the weak gravity expansion. In conclusion, we discuss the results and prospects of the proposed model. \section{Motivation of the two scalar field model} Let us start by elucidating how to trap fermion matter on a domain wall -- a ``thick brane''. The latter emerges in a model of five-dimensional fermion bi-spinors $\psi(X)$ coupled to a scalar field $\Phi(X)$. The extra-dimension coordinate is assumed to be space-like, $$ (X_\alpha) = (x_\mu, z)\ , \quad (x_\mu) = (x_0, x_1, x_2, x_3)\ , \quad (\eta_{\alpha\alpha}) = (+,-,-,-,-) $$ and the subspace of $x_\mu$ corresponds to the four-dimensional Minkowski space. The extra-dimension size is assumed to be infinite (or large enough). The fermion wave function then obeys the Dirac equation \begin{equation} [\,i\gamma_\alpha \partial^\alpha - \Phi(X)\,]\psi(X) = 0\ , \quad \gamma_\alpha = (\gamma_\mu, -i\gamma_5)\ ,\quad \{\gamma_\alpha, \gamma_\beta\} = 2\eta_{\alpha\beta}\ , \la{5dir} \end{equation} with $\gamma_\alpha$ being a set of four-dimensional Dirac matrices in the chiral representation. The trapping of light fermion states on a four-dimensional hyper-plane -- the domain wall, the ``thick brane'' -- is provided by a localization mechanism in the fifth dimension at $z = z_0$. It is facilitated by a certain $z$-dependent background configuration of the scalar field $\langle\Phi(X)\rangle_0 = \varphi (z)$, which provides the appearance of zero-modes in the four-dimensional fermion spectrum.
For the four-dimensional space-time interpretation, Eq.\,\gl{5dir} can be decomposed into an infinite set of fermions with different masses calculable from the following squared Dirac equation, \begin{eqnarray} &&[\,i\gamma_\alpha \partial^\alpha + \varphi(z)\,][\,i\gamma_\alpha \partial^\alpha - \varphi(z)\,]\psi(X) \equiv (- \partial_\mu \partial^\mu - \widehat m^2_z) \psi(X)\ ;\nonumber\\ &&\widehat m^2_z = - \partial_z^2 + \varphi^2 (z) - \gamma_5 \varphi'(z) = \widehat m^2_{+} P_L + \widehat m^2_{-} P_R\ , \la{f-s} \end{eqnarray} where $P_{L,R} = \frac12 (1 \pm \gamma_5)$ are projectors on the left- and right-handed states. Thus the mass squared operator $\widehat m^2_z$ consists of two chiral partners \begin{eqnarray} \widehat m_\pm^2 &=& - \partial_z^2 + \varphi^2 (z) \mp \varphi'(z) = [\,-\partial_z \pm \varphi(z)\,][\,\partial_z \pm \varphi(z)\,]\ ; \la{fact}\\ \widehat m_{+}^2\,q^+ &=& q^+\,\widehat m_{-}^2,\quad \widehat m_{-}^2\,q^- = q^-\,\widehat m_{+}^2\ ,\quad q^\pm \equiv \mp \partial_z + \varphi(z)\ . \la{susy} \end{eqnarray} Due to such a supersymmetry \ci{susy1}-\ci{susy3}, for non-vanishing masses the left- and right-handed spinors in \eqref{susy} form a bi-spinor describing a dim-4 massive Dirac particle which is, in general, not localized at any point of the extra dimension for asymptotically constant field configurations $\varphi(z)$. Such a spectral equivalence may be broken by a normalizable zero mode of one of the mass operators $\widehat m_\pm^2$. This mode is read off from Eqs.\ \gl{fact} and \gl{susy}, \begin{equation} q^-\psi^{+}_0(x,z) = 0\ , \quad \psi^{+}_0(x,z) = \psi_L(x) \, \exp\left\{-\int^z_{z_0} dw\varphi(w)\right\}\ , \end{equation} where $\psi_L(x) = P_L \psi (x)$ is a free-particle Weyl spinor in the four-dimensional Minkowski space. Evidently, if a scalar field configuration has the appropriate asymptotic behavior, $$ \varphi(z)\stackrel{z \rightarrow \pm\infty}{\sim} \pm C_\pm |z|^{\nu_\pm}\ ,\quad \mbox{\rm Re} \nu_\pm > -1\ , \quad C_\pm > 0\ , $$ then the wave function $\psi^{+}_0(x,z)$ is normalizable on the $z$ axis and the corresponding left-handed fermion is a massless Weyl particle localized in the vicinity of a four-dimensional domain wall. If $\varphi(z)$ is asymptotically constant, with $ C_\pm > 0$ and $\nu_\pm = 0$, then there is a gap for the massive Dirac states. In this paper we restrict ourselves to generating parity-symmetric branes by field configurations of definite parity. An example of a parity-odd topological configuration is realized by a kink-like scalar field background (of possibly dynamical origin, see below) \begin{equation} \varphi^{+} = M\, \mbox{\rm tanh}(Mz)\ . \la{soli} \end{equation} The two mass operators have the following potentials \begin{equation} \widehat m^2_{+} =- \partial_z^2 + M^2 \left[\,1-2{\rm sech}^2(Mz)\,\right];\quad \widehat m^2_{-} =- \partial_z^2 + M^2, \la{chpot} \end{equation} and the left-handed normalized zero-mode is localized around $z=0$, \begin{equation} \psi^{+}_0(x,z) = \psi_L(x)\,\psi_0 (z)\ ,\qquad \psi_0 (z) \equiv \sqrt{M/2}\ {\rm sech}(Mz)\ . \la{locmod} \end{equation} Evidently the threshold for the continuum is at $ M^2$ and the heavy Dirac particles may have any masses $m > M$. The corresponding wave functions are spread out in the fifth dimension.
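This localization pattern is easy to check numerically. The following short script (a sketch of our own making, with the arbitrary choices $M=1$, a finite box in place of the infinite $z$ axis and a uniform grid; it is not part of the derivation) discretizes $\widehat m^2_{+}$ from \gl{chpot} and confirms a single bound state at zero mass with the $\mathrm{sech}$ profile of \gl{locmod}, the continuum starting at $M^2$:
\begin{verbatim}
import numpy as np

M, L, N = 1.0, 20.0, 2000            # mass scale, half-box size, grid points
z = np.linspace(-L, L, N)
h = z[1] - z[0]
V = M**2 * (1.0 - 2.0 / np.cosh(M * z)**2)   # potential of m^2_+ in (chpot)

# dense tridiagonal discretization of -d^2/dz^2 + V(z)
H = (np.diag(2.0 / h**2 + V)
     - np.diag(np.ones(N - 1) / h**2, k=1)
     - np.diag(np.ones(N - 1) / h**2, k=-1))
vals, vecs = np.linalg.eigh(H)

print(vals[0])   # ~ 0   : the localized chiral zero mode
print(vals[1])   # ~ M^2 : bottom of the (box-discretized) continuum
# the ground state reproduces psi_0(z) = sqrt(M/2) sech(M z) of (locmod)
psi0 = vecs[:, 0] / np.sqrt(h)
print(abs(np.sum(psi0 * np.sqrt(M / 2) / np.cosh(M * z)) * h))  # ~ 1
\end{verbatim}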
But the fermions of the Standard Model are mainly massive and composed of both left- and right-handed spinors. Therefore, for light fermions on a brane one needs at least two five-dimensional fermions $\psi_1(X), \psi_2(X)$ in order to generate the left- and right-handed parts of a four-dimensional Dirac bi-spinor as zero modes. The required zero modes with different chiralities for $\langle\Phi(X)\rangle_0 = \varphi^+(z)$ arise when the two fermions couple to the scalar field $\Phi(X)$ with opposite charges, \begin{equation} [\,i\not\!\partial - \tau_3\Phi(X)\,]\Psi(X) = 0\ ,\quad \not\!\partial \equiv \widehat\gamma_\alpha \partial^\alpha\ ,\quad \Psi(X) =\left\lgroup\begin{array}{c}\psi_1(X)\\ \psi_2(X)\end{array}\right\rgroup\ , \la{2fer} \end{equation} where $\widehat\gamma_\alpha \equiv \gamma_\alpha\otimes {\bf 1}_2$ are Dirac matrices and $\tau_a \equiv {\bf 1}_4 \otimes \sigma_a,\ a=1,2,3 $ are the generalizations of the Pauli matrices $\sigma_a$ acting on the bi-spinor components $\psi_i(X)$. In this way one obtains a massless Dirac particle on the brane, and the next task is to supply it with a light mass. As the mass operator mixes the left- and right-handed components of the four-dimensional fermion, it is embedded in the Dirac operator \gl{2fer} with the mixing matrix $\tau_1 m_f$ of the fields $\psi_1(X)$ and $\psi_2(X)$. To realize the Standard Model mechanism of fermion mass generation by means of dedicated scalars, one has to introduce a second scalar field $H(x)$, replacing the bare mass $\tau_1 m_f \longrightarrow \tau_1 H(x)$ in the Lagrangian density \cite{aags1}, \begin{eqnarray} {\cal L}^{(5)} (\overline{\Psi},\Psi,\Phi, H) = \overline{\Psi} ( i\!\not\!\partial - \tau_3 \Phi -\tau_1 H) \Psi . \la{aux} \end{eqnarray} Both scalar fields may be dynamical, and their self-interaction should justify the spontaneous symmetry breaking by certain classical configurations trapping light massive fermions on the domain wall. If the lagrangian of the scalar fields is symmetric under the reflections $\Phi \longrightarrow -\,\Phi$ and $H \longrightarrow -\,H$, then the invariance may hold under the discrete $\tau$-symmetry transformations, \begin{eqnarray} &&\Psi \longrightarrow \tau_1 \Psi\ ;\quad \Phi \longrightarrow -\,\Phi\ ;\\ &&\Psi \longrightarrow \tau_2 \Psi\ ;\quad \Phi, H \longrightarrow -\,\Phi, -\,H\ ;\\ &&\Psi \longrightarrow \tau_3 \Psi\ ;\quad H \longrightarrow -\,H\ . \la{tausim} \end{eqnarray} The $\tau_2$ symmetry can in fact be extended to the continuous $U_\tau(1)$ symmetry under rotations, \begin{equation} \Psi \longrightarrow \exp\{i \alpha \tau_2/2\} \Psi ;\quad \Phi \longrightarrow \cos\alpha \Phi +\sin\alpha H,\quad H \longrightarrow - \sin\alpha \Phi +\cos\alpha H , \label{utau} \end{equation} which could be a high-energy symmetry if the scalar field lagrangian respects it for large values of the fields. But when fully intact these symmetries do not allow the fermions to acquire a mass unless translational invariance is spontaneously broken in the scalar sector. There may be several patterns of partial $\tau$ symmetry breaking by scalar field backgrounds. The first one is generated by a $z$-inhomogeneous v.e.v. of only one of the fields, say, the field $\Phi(z)$ with $H (z) = 0$. Then the $\tau_3$ symmetry certainly survives but the $\tau_{1,2}$ symmetries are broken. Still, if the function $\Phi(z)$ is odd under reflection in $z$, the latter symmetries can be restored when supplemented by the reflection $z \longrightarrow - z$ ($\tau_{1,2} P$(arity) symmetries). The second pattern is supported by $z$-inhomogeneous v.e.v.'s of both scalar fields provided that $\Phi(z) \not\sim H(z)$.
Then, in general, none of the $\tau$ symmetries holds. But if $\Phi(z)$ and $H (z)$ are odd and even functions respectively, the $\tau_1 P$ symmetry may again survive. Thus one may anticipate a phase transition between the phases with different symmetry patterns, which is presumably of the second order if the v.e.v. $H (z)$ is continuous in the coupling constants of the model. This realization is welcome for implementing light fermion masses near a phase transition, which are governed by a small deviation of the scalar-field-potential parameters around a scaling point and are much less than the localization scale $M$. Further on we assume that the dynamics of the fermions and scalar fields is $\tau$- and $U_\tau(1)$-symmetric \eqref{tausim}, \eqref{utau} at high energies, whereas at low energies the $U_\tau(1)$ symmetry is broken softly and the $\tau$ symmetry is violated spontaneously. Accordingly, the scalar field potential contains even powers of the fields $\Phi(z)$ and $H (z)$, and its profile induces the required spontaneous symmetry breaking. A concrete model for two phases with broken translational invariance is presented in the next section. \section{Formulation of the model in bosonic sector} \subsection{General two-boson potentials: conformal coordinates} Eventually we want to examine the properties of scalar matter generating gravity. Therefore let us supply the five-dimensional space with gravity, providing it with a pseudo-Riemannian metric tensor $g_{AB}$. In flat space and for a rectangular coordinate system this tensor reduces to $\eta_{AB}$. We define the dynamics of two real scalar fields $ \Phi(X)$ and $ H(X)$ with a minimal interaction with gravity by the following action functional, \begin{equation} S[g, \Phi, H ] = \int {d^5 X}\sqrt {\left| g \right|} {\cal L}(g, \Phi, H ), \label{1} \end{equation} \begin{equation} {\cal L} = \left\{ { - \frac12 M_\ast ^3 R + \frac12 (\partial _A \Phi \partial ^A \Phi +\partial _A H \partial ^A H) - V\left( \Phi, H \right)} \right\}, \end{equation} where $R$ stands for the scalar curvature, $ \left| g \right| $ is the determinant of the metric tensor, and $ M_\ast $ denotes the five-dimensional gravitational Planck scale. The equations of motion are \begin{eqnarray}&& R_{AB} - \frac{1}{2}g_{AB} R = \frac{1}{{M_\ast^3 }}T_{AB} ,\nonumber\\ && D^2 \Phi = - \frac{{\partial V}}{{\partial \Phi }} ,\quad D^2 H = - \frac{{\partial V}}{{\partial H }}, \end{eqnarray} where $D^2$ is the covariant d'Alembertian, and the energy-momentum tensor reads, \begin{equation} T_{AB} = \partial _A \Phi \partial _B \Phi + \partial _A H \partial _B H - g_{AB} \left(\frac12 \partial _C \Phi \partial ^C \Phi + \frac12 \partial _C H \partial ^C H - V\left( \Phi, H \right) \right). \end{equation} In order to build a thick $3 + 1$-dimensional brane we study classical vacuum configurations which do not spontaneously violate 4-dimensional Poincare invariance. In this Section the metric is represented in the conformally flat form, $g_{AB} = A^2 \left(z \right) \eta_{AB} $. This kind of metric is well suited for the interpretation of the scalar fluctuation spectrum and its resonance effects (i.e. scattering states). For this metric the equations of motion read, \begin{eqnarray} &&\left(\frac{A'}{A^2}\right)'= - \frac{ \Phi'^2 + H'^2}{3 M^3_\ast A},\quad -2A^5 V( \Phi, H) = 3 M^3_\ast \Bigl( A^2A'' + 2 A (A')^2\Bigr), \label{eoms0}\\ &&\left(A^3 \Phi'\right)'= A^5\frac{\partial V}{\partial \Phi} , \quad \left(A^3 H'\right)'= A^5\frac{\partial V}{\partial H}.
\label{eoms}\end{eqnarray} One can prove \ci{aags1} that only three of these equations are independent. Following the arguments of the previous section, we assume that the potential is analytic in the scalar fields, exhibits the discrete symmetry under the reflections $ \Phi \longrightarrow -\, \Phi$ and $ H \longrightarrow -\, H$, and has a set of minima at nonvanishing v.e.v.'s of the scalar fields. Correspondingly there exist constant background solutions $\{ \Phi_{min},\ H_{min}\}$ which are compatible with the Einstein equations provided that $\langle V ( \Phi, H)\rangle = V(\{ \Phi_{min},\ H_{min}\}) \equiv \lambda_{cosm} M^3_\ast < 0$, i.e. for a negative cosmological constant $\lambda_{cosm}$. In this case the warped geometry will be of anti-de Sitter type, $1/A \sim \pm k z$, with AdS curvature $k = \sqrt{-\lambda_{cosm}/6}$ as in the Randall-Sundrum model II \cite{RSII}. \subsection{Minimal realization in $\phi^4$ theory: gaussian normal coordinates } In this Subsection we study the formation of a brane in the theory with a minimal stable potential admitting kink solutions. It possesses a quartic scalar self-interaction and wrong-sign mass terms for both scalar fields. This potential is designed with $U_\tau(1)$ symmetry of the dim-4 vertices but with different quadratic couplings. The conveniently normalized effective action has the form, \begin{eqnarray} &&S_{eff}(\tilde\Phi ,g) = \frac12 M_\ast^3\int {d^5 X} \sqrt {|g|} \Bigl\{ -R +2 \lambda_{cosm} +\frac{3\kappa}{M^2}\Big(\partial _A \tilde\Phi \partial ^A \tilde\Phi +\partial _A \tilde H \partial ^A \tilde H \nonumber\\ &&\phantom{S_{eff}(\Phi ,g) = \frac12 M_\ast^3\int {d^5 X} \sqrt {|g|}} + 2M^2 \tilde\Phi^2 +2\Delta_H \tilde H^2- (\tilde\Phi^2 + \tilde H^2)^2 -\tilde V_0\Big) \Bigr\}, \label{36} \end{eqnarray} where the normalization $\kappa$ of the kinetic term of the scalar fields is chosen differently from \eqref{1} in order to simplify the equations of motion (see below)\footnote{ It could be inherited from the low-energy effective action of composite scalar fields induced by the one-loop dynamics of five-dimensional pre-fermions \cite{aags2}.}. They are connected as follows, \begin{equation} \Big[ \Phi, H\Big] = \left(\frac{3\kappa M^3_\ast}{M^2}\right)^{1/2} \Big[\tilde\Phi, \tilde H\Big]. \label{variab}\end{equation} For relating it to the weak gravity limit we assume that $ \kappa \sim M^3/M_\ast^3 $ is a small parameter which characterizes the interaction of gravity and matter fields. Let us take $M^2 > \Delta_H $; then the true minima are achieved at $\tilde\Phi_{min} = \pm M,\ \tilde H_{min} = 0 $, and the constant shift of the potential energy must be set to $\tilde V_0 = M^4$ in order to determine properly the cosmological constant $\lambda_{cosm}$. Now we change the coordinate frame to the warped metric in gaussian normal coordinates, \begin{equation} ds^2 = \exp \left( { - 2\rho \left( y \right)} \right)dx_\mu dx^\mu - dy^2 ,\quad y = \int^z_0 dz' A(z').\label{met}\end{equation} This choice happens to be more tractable for analytic calculations than the conformal one used for \eqref{eoms}. With the definition \eqref{met} the function $y(z)$ is monotonic and $z \rightarrow - z \Longrightarrow y \rightarrow - y $. The equations
of motion \eqref{eoms} for this metric take the form, \begin{eqnarray} &&\tilde\Phi'' = - 2M^2\tilde\Phi + 4\rho'\tilde\Phi' + 2\tilde\Phi(\tilde\Phi^2 + \tilde H^2) ,\label{38}\\ && \tilde H'' = - 2 \Delta_H \tilde H + 4\rho' \tilde H ' + 2 \tilde H(\tilde\Phi^2 + \tilde H^2) ,\label{39}\\ &&\rho'' = \frac{{\kappa }}{{M^2 }}\ (\tilde\Phi'^2 + \tilde H'^2), \label{40A}\\ &&\lambda_{cosm} = - 6\rho '^2 + \frac{3{\kappa }}{{2M^2 }}\left\{ (\tilde\Phi)'^2 +(\tilde H')^2 + 2M^2\tilde\Phi ^2 + 2\Delta_H \tilde H^2 - (\tilde\Phi^2 + \tilde H^2)^2 - M^4 \right\} .\label{401} \end{eqnarray} When compared to Eqs. \eqref{eoms}, one finds that in the gaussian coordinates the equations \eqref{38}, \eqref{39}, \eqref{40A} are algebraically simpler, being linear in the metric factor $\rho(y)$. This allows one to calculate the first few orders of gravitational perturbation theory analytically. As expected, for the constant background solutions $\tilde\Phi_{min} = \pm M,\ \tilde H_{min} = 0 $ the cosmological constant $\lambda_{cosm}$ completely determines the metric factor, $\rho' = \sqrt{-\lambda_{cosm}/6}$. In general, for any classical solution, the right-hand side of (\ref{401}) is an integration constant, as can be proven by differentiating this equation. Thus $\lambda_{cosm}$ is indeed a true constant at the classical level. The above equations contain terms of different orders in the small parameter $ \kappa $, and accordingly they can be solved by perturbation theory assuming that \[ \frac{{\left| {\rho '(y)} \right|}}{M} = O(\kappa ) = \frac{{\left| {\rho ''(y)} \right|}}{{M^2 }}. \] Then in the leading order in $ \kappa $ the equations for the fields $ \tilde\Phi \left(y \right) , \tilde H \left(y \right) $ do not contain the metric factor, and the metric is completely governed by matter order by order in $\kappa$. Depending on the relation between the quadratic couplings $M^2$ and $\Delta_H$, there are two types of $z$-inhomogeneous solutions of the equations (\ref{401}) which have the form of a two-component kink \cite{aags1}. {\it For gravity switched off} the first one holds for $\Delta_H \leq M^2/2$, \begin{equation} \tilde\Phi \rightarrow \Phi_0 = \pm M\tanh \left( {My} \right) + O\left( {\kappa } \right),\quad \tilde H(y) = 0 , \label{kink0}\end{equation} and therefore the metric factor to the leading order in $\kappa$ reads, \begin{equation} \rho_1 \left( y \right) = \frac{2{\kappa }}{3}\left\{ {\ln \cosh \left( {M y} \right)+\frac{1}{4} \tanh^2( M y)}\right\}+ O\left( {\kappa ^2 } \right) , \label{rho0}\end{equation} which is chosen to be an even function of $y$ in order to preserve the remaining $\tau$ symmetry. The second one arises only when $M^2/2 \leq \Delta_H \leq M^2$, i.e. $2\Delta_H = M^2 +\mu^2, \ \mu^2 < M^2$, \begin{equation} \Phi_0(y) = \pm M\tanh \left( {\beta M y} \right), \quad H_0 (y) = \pm \frac{\mu}{\cosh \left( {\beta M y} \right)} ,\quad \beta = \sqrt{1 - \frac{\mu^2}{M^2}}, \label{zeroap} \end{equation} wherefrom one can find the metric factor to the leading order in $\kappa$ in the following form, \begin{equation} \rho_1 \left( y \right) = \frac{{\kappa }}{3}\left\{ \left(3 - \beta^2\right) {\ln \cosh \left( {\beta M y} \right)+\frac{1}{2} \beta^2 \tanh^2(\beta M y)}\right\}+ O\left( {\kappa ^2 } \right) , \label{rho1}\end{equation} which is likewise symmetric under $y \rightarrow - y$.
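The background \eqref{zeroap} can be verified symbolically. The short script below (an optional consistency check of our own, not used elsewhere in the text) merely restates Eqs.~\eqref{38} and \eqref{39} at $\kappa=0$, i.e. with the $\rho'$ terms dropped, and confirms that the two-component kink solves them:
\begin{verbatim}
import sympy as sp

y, M, mu = sp.symbols('y M mu', positive=True)
beta = sp.sqrt(1 - mu**2 / M**2)
Phi0 = M * sp.tanh(beta * M * y)     # kink component of (zeroap)
H0 = mu / sp.cosh(beta * M * y)      # FMG component of (zeroap)
DeltaH = (M**2 + mu**2) / 2          # 2 Delta_H = M^2 + mu^2

# eqs. (38) and (39) at kappa = 0 (no rho' terms)
eqPhi = sp.diff(Phi0, y, 2) + 2*M**2*Phi0 - 2*Phi0*(Phi0**2 + H0**2)
eqH   = sp.diff(H0, y, 2) + 2*DeltaH*H0 - 2*H0*(Phi0**2 + H0**2)

print(sp.simplify(eqPhi.rewrite(sp.exp)))  # 0
print(sp.simplify(eqH.rewrite(sp.exp)))    # 0
\end{verbatim}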
One can see that the asymptotic AdS curvature $k$ (defined in the limit $y \gg 1/M$ when $\rho (y) \sim k y$) is somewhat different in the $\tau$ symmetry unbroken and broken phases, \begin{equation} k_{unbroken} = \frac23 \kappa M \quad\mbox{vs.}\quad k_{broken} = \frac23 \kappa M \Big(1 + \frac{\mu^2}{2M^2}\Big)\sqrt{1 - \frac{\mu^2}{M^2}} < k_{unbroken}. \label{asymp} \end{equation} As the scalar potential is invariant under the reflections $\tilde\Phi(y) \longrightarrow - \tilde\Phi(y)$ and $\tilde H (y)\longrightarrow - \tilde H(y)$, one finds replicas of the kink-type solutions, which can be uniquely selected by the coupling to fermions if one specifies their chirality ($+M$ for left-handed ones) and the sign of the induced masses ($+\mu$ for positive masses). We choose the positive signs further on. Evidently the second solution generates the fermion mass in \eqref{aux}, whereas the first kink leaves the fermions massless. This solution breaks the $\tau$ symmetry and is of main interest for our model building. Thus there are two phases with different scalar backgrounds, and it can be shown (see below) that if $\Delta_H < M^2/2$ the first kink provides a local minimum, but for some $M^2/2 < \Delta_H < M^2$ it gives a saddle point, whereas the second kink with $\tilde H\not= 0$ guarantees local stability. \subsection{Relationship to conformal coordinate metric\label{confgauss}} To the leading order in $\kappa$ one can derive a simple relation between the conformal factor $A(z)$ and $\rho_1(y)$. Namely, with a certain ansatz for $A(z)$ the first equation for the metric factor in \eqref{eoms0}, taken in the variables \eqref{variab}, is linearized,\begin{equation} A(z) = \frac{1}{1 +f(z)},\,\, f(0) = 0;\qquad f'' = \kappa \frac{(\tilde\Phi')^2 + (\tilde H')^2}{M^2}\Big(1+ f\Big) . \label{conffac} \end{equation} Then the expansion in powers of the gravitational coupling constant $\kappa$ is given by $f = \sum_{n=1}^\infty \kappa^n f_n$, and the leading order $f_1$ obviously coincides in functional dependence with \eqref{rho0} for the $\tau$-symmetry unbroken phase or \eqref{rho1} for the broken phase, \begin{equation} \kappa f_1(z) = \rho_1(y\rightarrow z) = \frac{{\kappa }}{3}\left\{ \left(3 - \beta^2\right) {\ln \cosh \left( {\beta M z} \right)+\frac{1}{2} \beta^2 \tanh^2(\beta M z)}\right\}+ O\left( {\kappa ^2 } \right) . \end{equation} However, the perturbative expansion in $\kappa$ is not valid for all $z$. Indeed, for \mbox{$\beta Mz \gg 1/\kappa \gg 1$} the asymptotic $f(z)$ grows linearly, and the second term on the right-hand side of Eq. \eqref{conffac} dominates over the first one, which generated the perturbation series. Although in the conformal reference frame the coordinate asymptotics at $Mz \gg 1$ is given by the leading order in $\kappa$ as $f(z) \rightarrow k z$, the next orders in $\kappa$ have a more complicated nonanalytic structure. Thereby the perturbation theory in gaussian normal coordinates happens to be more tractable. \subsection{Next approximation in $\kappa$: unbroken $\tau$ symmetry} Let us find the modifications of the kink profiles and the shift of the critical point under the influence of gravity. In the unbroken phase (zero order in $\mu$) the expansion in $\kappa$ reads, \begin{equation} \tilde\Phi = M \sum_{n=0}^\infty \kappa^n \Phi_n,\quad \rho = \sum_{n=1}^\infty \kappa^n \rho_n .
\label{phiexp} \end{equation} In order to simplify the asymptotic behavior and the analytic structure, we also introduce the coupling dependence into the argument of the iterated functions, similarly to Eq.~\eqref{zeroap}: $ \beta \rightarrow \beta (\kappa) $ with the expansion, \begin{equation} \frac{1}{\beta^2(\kappa)} = \sum_{n=0}^\infty \kappa^n \left(\frac{1}{\beta^2}\right)_n ;\quad \left(\frac{1}{\beta^2}\right)_0 = 1. \label{betaexp} \end{equation} After the rescaling $y = \tau / (\beta M), \tilde\Phi \rightarrow M \tilde\Phi$, the next-to-leading order for $\tilde\Phi$ obeys the equation, \begin{equation} (\partial_\tau^2+2-6\Phi_{0}^2)\Phi_{1}=4\rho_{1}'\Phi_{0}' - 2 \kappa\left(\frac{1}{\beta^2}\right)_1 \Phi_0(1-\Phi_0^2)\equiv {\cal G}_1(\tau),\label{phione} \end{equation} where the definitions \eqref{kink0} and \eqref{rho0} have been used. Its real parity-odd solution can be found by integration of \eqref{phione}, \begin{equation} \Phi_{1} =\frac{1}{\cosh^2\tau}\int^{\tau}_0 d\tau' \cosh^4\tau'\int^{\tau'}_{-\infty} d\tau''\frac{1}{\cosh^2\tau''} {\cal G}_1(\tau''). \end{equation} It decreases at infinity for $\left(\frac{1}{\beta^2}\right)_1 = 4/3$ and looks as follows, \begin{equation} \Phi_{1}=-\frac{2}{9}\frac{\sinh{\tau}}{\cosh^3{\tau}}. \label{phi1} \end{equation} Therefrom the appropriately iterated function $\tilde\Phi(y)$ can be represented as, \begin{equation} \tilde\Phi(y) = M\tanh{\beta M y}\left(1 -\kappa\frac{2}{9 \cosh^2{\beta M y}}\right) + {\cal O}(\kappa^2);\quad \beta = 1 - \frac23\kappa . \label{phi12} \end{equation} The second approximation $\rho_{2}'$ of the metric factor, derived directly from \eqref{phi1}, obeys the equation, \begin{equation} \rho_{2}''= 2\tilde\Phi_{0}'\tilde\Phi_{1}', \end{equation} which can be integrated to, \begin{equation} \rho_{2}'=- \frac{2M}{135}\tanh{My}\left(38+ \frac{19}{\cosh^2{My}} + \frac{18}{\cosh^4{My}}\right) . \end{equation} Accordingly the iterated result can be assembled into, \begin{eqnarray} \rho (\tau) &=& \frac23 \kappa(1 -\frac{8}{45}\kappa) \log\cosh{\tau} + \frac{1}{6}\kappa\left(1 - \frac{26}{45}\kappa\right) \nonumber\\&& - \frac{1}{6}\kappa\left(1 - \frac{8}{45}\kappa\right)\frac{1}{\cosh^2{\tau}} + \frac{\kappa^2}{15} \frac{1}{\cosh^4{\tau}} + {\cal O}(\kappa^3) ,\nonumber\\ &=& - \frac13 \kappa(1 -\frac{8}{45}\kappa) \log(1- \tanh^2{\tau}) \nonumber\\&&+ \frac{\kappa}{6}(1 -\frac{44}{45}\kappa) \tanh^2{\tau} + \frac{\kappa^2}{15} \tanh^4{\tau}+ {\cal O}(\kappa^3) ,\label{rho12} \end{eqnarray} where the first expansion is ordered in accordance with its decrease at large $y$, and the second one better characterizes the vicinity of $y = 0$, where the normalization $\rho(0) = 0$ is employed. \subsection{Next approximation in $\kappa$: broken $\tau$ symmetry phase} Above the phase transition point one discovers nontrivial solutions for $\tilde H (\tau)$ which satisfy the properly normalized Eq. \eqref{39}.
When weak gravity is present, all functions and constants are taken to depend on $\kappa$, \begin{eqnarray} &&\tilde H(\tau) = M\sum^\infty_{n,m=0}\kappa^n \Bigl(\frac{\mu}{M}\Bigr)^{2m+1}H_{n,m}(\tau);\quad \tilde\Phi(\tau) = M\sum^\infty_{n,m=0}\kappa^n \Bigl(\frac{\mu}{M}\Bigr)^{2m}\Phi_{n,m}(\tau);\quad \Phi_{n,0}\equiv \Phi_{n}, \nonumber\\&& \rho(\tau) = \kappa \sum^\infty_{n,m=0}\kappa^n \Bigl(\frac{\mu}{M}\Bigr)^{2m}\rho_{n+1,m}(\tau);\quad \rho_{n,0}\equiv \rho_{n}, \nonumber\\&&\Delta_H = \Delta_{H,c}(\kappa) + \frac12 \mu^2 ,\end{eqnarray} as well as, \begin{equation} \frac{1}{\beta^2} = \sum^\infty_{n,m=0}\kappa^n \Bigl(\frac{\mu}{M}\Bigr)^{2m} \Bigl(\frac{1}{\beta^2}\Bigr)_{n,m};\quad \Bigl(\frac{1}{\beta^2}\Bigr)_{0,0} = 1;\quad \Bigl(\frac{1}{\beta^2}\Bigr)_{0,1} = 1;\quad \Bigl(\frac{1}{\beta^2}\Bigr)_{1,0} =\frac43 . \end{equation} The position of the critical point $\mu = 0$ is generically shifted, \begin{equation} \Delta_{H,c}(\kappa) =\frac12 M^2 \sum^\infty_{n=0}\kappa^n \Delta_H^{n} = \frac12 M^2 \left(1 - \frac{44}{27}\kappa\right) + {\cal O}(\kappa^2), \label{critshift} \end{equation} which can be established from the consistency of the integrated equations of motion. Indeed, in the leading approximation in its normalization scale $\mu$, the function $\tilde H (\tau)$ satisfies the equation, \begin{eqnarray}&& (\partial_\tau^2+1-2\Phi_{0,0}^2)H_{1,0}= \label{Hone}\\&& -\kappa\left(\Delta^{1}_H + \left(\frac{1}{\beta^2}\right)_{1,0}\right) H_{0,0} + 4\rho_{1}' H_{0,0}' + 2 \kappa\left(\frac{1}{\beta^2}\right)_{1,0} H_{0,0}\Phi_{0,0}^2 + 4 H_{0,0}\Phi_{0,0}\Phi_{1,0}\equiv {\cal F}_1(\tau).\nonumber \end{eqnarray} Its solution can be found by integration of \eqref{Hone}, \begin{equation} H_{1,0} =\frac{1}{\cosh\tau}\left[C^H_{1,0} + \int^{\tau}_0 d\tau' \cosh^2\tau'\int^{\tau'}_{0} d\tau''\frac{1}{\cosh\tau''} {\cal F}_1(\tau'')\right], \end{equation} and it is given by, \begin{equation} H_{1,0} = \frac{2}{27 \cosh\tau} \left(C^H_{1,0} - 2\log\cosh\tau + 3\tanh^2\tau\right) ,\label{h10} \end{equation} provided that \eqref{critshift} holds. The integration constant $C^H_{1,0}$ is not fixed at this order in $\kappa, \mu$. Mixed orders in $\kappa$ and $\mu^2/ M^2$ are practically irrelevant, as in realistic models $\kappa \sim 10^{-15}$ and $\mu^2/ M^2 \sim 10^{-3}$ (see \cite{aags2} and Sect.~7). Correspondingly, $\kappa\mu^2/ M^2 \ll \kappa \ll \mu^2/ M^2$. Therefore the overlapping of the classical solutions \eqref{zeroap}, \eqref{rho1} with the solutions \eqref{phi12}, \eqref{rho12}, \eqref{h10} provides our calculations with the required precision when the perturbation expansion works well. The latter seems to be flawless for the classical equations of motion. \section{Field fluctuations around the classical solutions} \subsection{Quadratic action and infinitesimal diffeomorphisms} We consider small localized deviations of the fields from the background values and find the corresponding quadratic action. The action \eqref{1} is invariant under diffeomorphisms. Infinitesimal diffeomorphisms correspond to the Lie derivative along an arbitrary vector field $ \tilde \zeta ^A (X) $, defining the coordinate transformation $ X \to \tilde X = X +\tilde \zeta \left (X \right) $.
Let us introduce the fluctuations of the metric $h_{AB}(X)$ and of the scalar fields $\phi(X)$ and $\chi(X)$ on the background solutions of the equations of motion, \begin{eqnarray} &&g_{AB} \left( X \right) = A^2 \left( z \right)\left( {\eta _{AB} + h_{AB} \left( X \right)} \right);\nonumber\\&& \Phi \left( X \right) = \Phi \left( z \right) + \phi \left( X \right);\quad H \left( X \right) = H \left( z \right) + \chi \left( X \right) \label{conf} . \end{eqnarray} Since 4D Poincare symmetry is not broken, we select the corresponding 4D part of the metric $h_{\mu\nu}$ and introduce the notation $h_{5\mu} \equiv v_\mu$ for gravivectors and $h_{55} \equiv S$ for graviscalars. By rescaling the vector fluctuations $ \tilde \zeta _ \mu = A ^ 2 \zeta _ \mu $ and the scalar ones $ \tilde \zeta _5 = A \zeta _5 $, we obtain the following gauge transformations to first order in $ \zeta^ A(X) $, \[ h_{\mu \nu } \to h_{\mu \nu } - \left( {\zeta _{\mu ,\nu } + \zeta _{\nu ,\mu } - \frac{{2A'}}{{A^2 }}\eta _{\mu \nu } \zeta _5 } \right) ,\qquad v_\mu \to v_\mu - \left( {\frac{1}{A}\zeta _{5,\mu } + \zeta '_\mu } \right) ,\]\begin{equation} S \to S - \frac{2}{A}\zeta '_5 ,\qquad \phi \to \phi + \zeta _5 \frac{{\Phi '}}{A} ,\qquad \chi \to \chi + \zeta _5 \frac{{H '}}{A} ,\label{10} \end{equation} up to terms of order $O\left( {\zeta ^2 ,h^2 ,h\zeta } \right)$. Here ``,''\ denotes a partial derivative. Now we expand the action to quadratic order in the fluctuations. The full action after this procedure is a sum, \begin{equation} {\cal L}_{(2)}= {\cal L}_{h}+{\cal L}_{\phi,\chi}+{\cal L}_S+{\cal L}_{V}, \end{equation} where \begin{eqnarray} \sqrt {\left| g \right|}{\cal L}_{h}&\equiv&\ -\frac12 M^3_\ast A^3\ \Bigl\lbrace -\frac{1}{4}\ h_{\alpha\beta,\nu}h^{\alpha\beta,\nu} -\frac{1}{2}\ h^{\alpha\beta}_{,\beta}h_{,\alpha}+\frac{1}{2}\ h^{\alpha\nu}_{,\alpha}h^{\beta}_{\nu,\beta}+\frac{1}{4}\ h_{,\alpha}h^{,\alpha} \nonumber\\ &&+ \frac{1}{4}h'_{\mu\nu}h'^{\mu\nu}-\frac{1}{4}h'^2 \Bigr\rbrace , \end{eqnarray} \begin{eqnarray} \sqrt {\left| g \right|} {\cal L}_{\phi,\chi}&\equiv&\frac{1}{2} A^3 (\phi_{,\mu}\phi^{,\mu}- \phi'^2+ \chi_{,\mu} \chi^{,\mu}- (\chi')^2) - \frac{1}{2}\ A^5 \Big( \frac{\partial^2 V}{\partial\Phi^2} \phi^2 + 2 \frac{\partial^2 V}{\partial\Phi\partial H} \phi \chi+ \frac{\partial^2 V}{\partial H^2} \chi^2\Big)\nonumber\\ &&+ \frac{1}{2} A^3 h'(\Phi'\phi+ H'\chi) , \end{eqnarray} \begin{eqnarray} \sqrt {\left| g \right|}{\cal L}_{S}&\equiv& \frac{1}{4}\Big(-A^5VS^2+ S \Bigl( M^3_\ast A^3 \left(h^{\mu\nu}_{,\mu\nu}-h^{,\mu}_{,\mu} \right) + M^3_\ast \left( A^3\right)'h'\nonumber\\&&+ 2 \big(A^3(\Phi'\phi+ H'\chi)\big)' - 4A^3(\Phi'\phi'+ H'\chi')\Bigr)\Big),\label{S} \end{eqnarray} \begin{eqnarray} \sqrt {\left| g \right|}{\cal L}_{V}&\equiv& - \frac{1}{8}M^3_\ast A^3 v_{\mu\nu}v^{\mu\nu}+ \frac12 v^\mu \Big[ - M^3_\ast A^3 \left(h_{\mu\nu}^{,\nu}-h_{,\mu} \right)'\nonumber\\&&+ 2A^3(\Phi'\phi_{,\mu}+ H'\chi_{,\mu}) + M^3_\ast\Bigl( A^3\Bigr)'S_{,\mu}\Big] \label{V} , \end{eqnarray} where $v_{\mu\nu}=v_{\mu,\nu}-v_{\nu,\mu}$, $h=h_{\mu\nu}\eta^{\mu\nu}$. The transformations (\ref{10}) allow one to eliminate the gauge degrees of freedom. \subsection{Disentangling the physical degrees of freedom} The physical sector can be determined after separating the different spin components of the field $h_{\mu\nu}$.
This can be accomplished by describing the ten components of the 4-dim metric in terms of traceless-transverse tensor, vector and scalar components \cite{rev15,bar}, \begin{eqnarray} h_{\mu\nu} = b_{\mu\nu} + F_{\mu,\nu} + F_{\nu,\mu} + E_{,\mu\nu} + \eta_{\mu\nu} \psi, \label{deco} \end{eqnarray} where $b_{\mu\nu}$ and $F_\mu$ obey the relations $b_{\mu\nu}^{,\mu} = b = 0 = F_{\mu}^{,\mu}$. Obviously, the gravitational fields $b_{\mu\nu}$ are gauge invariant and thereby describe graviton fields in the 4-dim space. Let us expand the gauge parameter $\zeta_\mu$ and the vector field $v_\mu$ into transverse and longitudinal parts, \begin{equation} \zeta_\mu = \zeta_\mu^\perp + \partial_\mu C, \qquad \partial^\mu \zeta_\mu^\perp = 0; \qquad v_\mu = v_\mu^\perp + \partial_\mu \eta, \qquad \partial^\mu v_\mu^\perp = 0. \end{equation} Then the vector fields transform as follows, \begin{equation} F_\mu \rightarrow F_\mu - \zeta_\mu^\perp, \quad v_\mu^\perp \rightarrow v_\mu^\perp - {\zeta'_\mu}^\perp, \end{equation} i.e. the expression $F'_\mu - v_\mu^\perp$ is gauge invariant. In turn, the scalar components $\eta, E, \psi, S, \phi, \chi$ change under gauge transformations in the following way, \begin{eqnarray} & & \eta \rightarrow \eta - \frac{1}{A}\zeta_5 - C' ; \qquad E \rightarrow E - 2C, \nonumber\\ & & \psi \rightarrow \psi + \frac{2A'}{A^2}\zeta_5, \qquad S \rightarrow S - \frac{2}{A}\zeta'_5, \qquad \phi \rightarrow \phi + \frac{\Phi'}{A}\zeta_5 , \qquad \chi \rightarrow \chi + \frac{H'}{A}\zeta_5. \end{eqnarray} Therefrom we can find four independent gauge invariants, \begin{eqnarray} \frac12 E' - \eta - \frac{A}{2A'} \psi;\quad -\psi + \frac{2A'}{A \Phi'} \phi;\quad \frac12 A S + \left(\frac{A}{\Phi'} \phi\right)';\quad H' \phi - \Phi' \chi . \end{eqnarray} Using the parametrization (\ref{deco}) we can calculate the components of the quadratic action, \begin{eqnarray} && h \equiv h^\mu_\mu = \square E + 4 \psi;\quad h^{\alpha\beta}_{,\beta} = \square ( F^\alpha +E^{,\alpha}) + \psi^{,\alpha};\quad h^{\alpha\beta}_{,\alpha\beta} = \square^2 E + \square \psi;\nonumber\\ && h^{\mu\nu}_{,\mu\nu}-h^{,\mu}_{,\mu} = - 3 \square \psi;\quad h_{\mu\nu}^{,\nu}-h_{,\mu} = \square F_\mu - 3 \psi_{,\mu} .
\end{eqnarray} Thus, the decomposition \eqref{deco} entails a partial separation of the degrees of freedom in the lagrangian quadratic in fluctuations, \begin{eqnarray} \sqrt{\left| g \right|} {\cal L}_{(2)} &=& \frac{1}{8} M^3_\ast A^3\ \Bigl\lbrace \ b_{\mu\nu,\sigma} b^{\mu\nu,\sigma} - (b')_{\mu\nu} (b')^{\mu\nu} - f_{\mu\nu} f^{\mu\nu}\Bigr\rbrace\nonumber\\ &&+ \frac{3}{4} M^3_\ast A^3\ \Bigl\lbrace - \psi_{,\mu} \psi^{,\mu} + \psi_{,\mu} S^{,\mu} + 2 (\psi')^2 + 4 \frac{A'}{A} \psi' S\Bigr\rbrace\nonumber\\ &&+ \frac12 A^3\ \Bigl\lbrace \phi_{,\mu} \phi^{,\mu} - (\phi')^2 + \chi_{,\mu} \chi^{,\mu}- (\chi')^2 - \ A^2 \Big( \frac{\partial^2 V}{\partial\Phi^2} \phi^2 + 2 \frac{\partial^2 V}{\partial\Phi\partial H} \phi \chi+ \frac{\partial^2 V}{\partial H^2} \chi^2\Big)\nonumber\\ &&-\frac{1}{2}A^2 V(\Phi, H) S^2 + 4 \psi'(\Phi' \phi +H' \chi) + S\Bigl(- \Phi' \phi' - H' \chi'+ A^2 \Big(\frac{\partial V}{\partial\Phi} \phi + \frac{\partial V}{\partial H} \chi\Big) \Bigr)\Bigr\rbrace\nonumber\\ &&+ \frac{3}{4} M^3_\ast A^3\ \square(E' - 2\eta) \Bigl(\frac{A'}{A} S + \psi' + \frac{2}{3M^3_\ast} (\Phi' \phi + H' \chi )\Bigr), \label{decoup} \end{eqnarray} where $ f_\mu \equiv F'_\mu - v_\mu^\perp,\quad f_{\mu\nu} \equiv f_{\mu,\nu} - f_{\nu,\mu} $. We see that some redundant degrees of freedom exist: one of the vectors $F'_\mu, v_\mu^\perp$ and one of the scalars $E', \eta$. They can be removed so as to set $v_\mu = 0$. Obviously, in the quadratic approximation the graviton, gravivector and graviscalar are decoupled from each other. From the last line it follows that the scalar $E'$ is a Lagrange multiplier and generates a gauge-invariant constraint, \begin{eqnarray} \label{cond1} \frac{A'}{A} S + \psi '= - \frac {2 } {3M ^ 3_ \ast}(\Phi' \phi + H' \chi ) . \end{eqnarray} Thus, taking this constraint into account, only two independent scalar fields remain. \section{Scalar field action in gauge invariant variables} It is convenient to perform the further analysis of the scalar spectrum in gauge invariant variables. Let us perform the following rotation in the $(\phi,\chi)$ sector: \begin{eqnarray} \phi=\check{\phi}\cos{\theta}+\check{\chi}\sin{\theta},\quad \chi=-\check{\phi}\sin{\theta}+\check{\chi}\cos{\theta},\nonumber\\ \cos{\theta}=\frac{\Phi'}{\mathcal{R}},\quad\sin{\theta}=\frac{H'}{\mathcal{R}},\quad \mathcal{R}^2=(\Phi')^2+(H')^2 . \end{eqnarray} While $\check{\chi}$ is gauge invariant, $\check{\phi}$ is not. We can remove the redundant gauge freedom by introducing three gauge invariant variables: \begin{eqnarray} \check{\psi}=\psi-\frac{2A'}{A\mathcal{R}}\check{\phi},\quad \check{S}=S+\frac{2}{\mathcal{R}}\check{\phi}'-\frac{2A}{\mathcal{R}^2} \left(\frac{\mathcal{R}}{A}\right)'\check{\phi},\quad \check{\eta}=E'-2\eta-\frac{2}{\mathcal{R}}\check{\phi} .
\end{eqnarray} Accordingly, the scalar part of the lagrangian quadratic in fluctuations takes the form: \begin{eqnarray} \sqrt{\left| g \right|} {\cal L}_{(2),scal} &=& \frac{3}{4} M^3_\ast A^3\ \Bigl\lbrace - \check{\psi}_{,\mu} \check{\psi}^{,\mu} + \check{\psi}_{,\mu} \check{S}^{,\mu} + 2 (\check{\psi}')^2 + 4 \frac{A'}{A} \check{\psi}' \check{S}\Bigr\rbrace+ \frac12 A^3\ \Bigl\lbrace \check{\chi}_{,\mu} \check{\chi}^{,\mu}- (\check{\chi}')^2 -\nonumber\\ && -\Big[(\theta')^2+\frac{A^2}{\mathcal{R}^2} \Big( \frac{\partial^2 V}{\partial\Phi^2} (H')^2 - 2 \frac{\partial^2 V}{\partial\Phi\partial H} \Phi'H'+ \frac{\partial^2 V}{\partial H^2} (\Phi')^2\Big)\Big]\check{\chi}^2\Bigr\rbrace+A^3\mathcal{R}\theta'\check{S}\check{\chi}-\nonumber\\ && -\frac{1}{4}A^5 V(\Phi, H) \check{S}^2+ \frac{3}{4} M^3_\ast A^3\ \square\check{\eta}\Bigl(\frac{A'}{A} \check{S} + \check{\psi}'\Bigr) , \end{eqnarray} where $\theta'=(\arctan{\frac{H'}{\Phi'}})'=(H''\Phi'-\Phi''H')/\mathcal{R}^2$. From the last line it follows that the scalar field $\check{\eta}$ is a gauge invariant Lagrange multiplier and generates a gauge invariant constraint, \begin{eqnarray} \label{cond2} \frac{A'}{A} \check{S} + \check{\psi}'= 0. \end{eqnarray} Thus, after taking this constraint into account, only two independent scalar fields remain, and the scalar action takes the following form, \begin{eqnarray} \sqrt{\left| g \right|}{\cal L}_{(2),scal}= \frac{A^5\mathcal{R}^2}{8(A')^2}\left\{\partial_{\mu}\check{\psi}\partial^{\mu}\check{\psi} -(\partial_z\check{\psi})^2\right\}- \frac{A^4}{A'}\mathcal{R}\theta'(\partial_z\check{\psi})\check{\chi}+\nonumber\\ +\frac{A^3}{2}\left\{\partial_{\mu}\check{\chi}\partial^{\mu}\check{\chi} -(\partial_z\check{\chi})^2- \left((\theta')^2+\frac{A^2}{\mathcal{R}^2}\begin{pmatrix}H'\\ -\Phi'\end{pmatrix}^{\dag}\partial^2V\begin{pmatrix}H'\\-\Phi'\end{pmatrix}\right)\check{\chi}^2\right\} . \end{eqnarray} To normalize the kinetic terms the fields should be redefined as $\hat{\chi}=A^{3/2}\check{\chi}$ and $\hat{\psi}=\Omega\check{\psi}$, where $\Omega=A^{5/2}\mathcal{R}/(2A')$: \begin{eqnarray} \sqrt{\left| g \right|} {\cal L}_{(2),scal}=\frac{1}{2}\left\{\partial_{\mu}\hat{\psi}\partial^{\mu}\hat{\psi} -(\partial_z\hat{\psi})^2-\frac{\Omega''}{\Omega}\hat{\psi}^2\right\} -2\theta'\hat{\chi}\left(\partial_z-\frac{\Omega'}{\Omega}\right)\hat{\psi}\nonumber\\ +\frac{1}{2}\left\{\partial_{\mu}\hat{\chi}\partial^{\mu}\hat{\chi} -(\partial_z\hat{\chi})^2-\left(\frac{(A^{3/2})''}{A^{3/2}}+(\theta')^2 +\frac{A^2}{\mathcal{R}^2}\begin{pmatrix}H'\\-\Phi'\end{pmatrix}^{\dag} \partial^2V\begin{pmatrix}H'\\-\Phi'\end{pmatrix}\right)\hat{\chi}^2\right\} \end{eqnarray} \section{Fluctuations in different phases and at critical point} \subsection{Fluctuations around a $\tau$ symmetric background} When $H(z) = 0$ the two scalar sectors decouple because $\theta = 0$. The operator which describes the branon mass spectrum, \begin{equation}\hat m^2_\psi = -\partial_z^2 + \frac{\Omega''}{\Omega} = \Big(\partial_z + \frac{\Omega'}{\Omega}\Big)\Big( -\partial_z + \frac{\Omega'}{\Omega}\Big),\end{equation} is positive on functions $\hat{\psi}(z)$ normalizable along the fifth dimension $z$. Indeed, the would-be zero mode is singular, $\hat{\psi}(z) \sim \Omega \sim 1/z\Big|_{z\rightarrow 0}$. It corresponds to the centrifugal barrier in the potential $ \Omega''/\Omega$ at the origin \cite{rev19}. Thus in the presence of gravity there is no (normalizable) Goldstone zero-mode related to the spontaneous breaking of translational symmetry.
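To make the barrier explicit, one can add the following short estimate (ours, using only the evenness of the background): by \eqref{conffac}, $f'(0)=0$ while $f''(0)=\kappa\bigl(\tilde\Phi'(0)^2+\tilde H'(0)^2\bigr)/M^2>0$, so that $A'(z)\simeq -f''(0)\,z$ near the origin, whereas $\mathcal{R}(0)\ne 0$; hence, to leading order in $z$,
\begin{equation*}
\Omega(z)=\frac{A^{5/2}\mathcal{R}}{2A'}\simeq -\,\frac{\mathcal{R}(0)}{2f''(0)\,z}\,,
\qquad
\frac{\Omega''}{\Omega}\simeq \frac{2}{z^{2}}\,,
\qquad z\rightarrow 0\,,
\end{equation*}
and the would-be zero mode $\hat\psi\sim\Omega$ fails to be square-integrable at $z=0$.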
The cause of this absence is evident: the corresponding brane fluctuation represents, in fact, a gauge transformation \eqref{10} and does not appear in the invariant part of the spectrum. One could say that in the presence of gravity induced by a brane the latter becomes more rigid, as only massive fluctuations are possible around it. Of course, the gauge transformation \eqref{10} leaves invariant only the quadratic action, and thereby a trace of the Goldstone mode may influence the higher-order vertices of interaction between gravity and scalar fields. This option is beyond the scope of the present investigation. As to possible localized states with $m^2_\psi > 0$, they may exist with masses of order $M$. However, for the action \eqref{36} they happen to be unstable resonances, as will become evident from the spectral problem formulated in gaussian normal coordinates. The fluctuations of the second, mass generating field $H(x)$ do not develop any centrifugal barrier, and since $\langle H\rangle = 0$ their mass spectrum is described by the operator, \begin{equation} \hat m^2_\chi = - \partial_z^2 + \frac{(A^{3/2})''}{A^{3/2}} +A^2 \frac{\partial^2 V(\Phi,H)}{\partial H^2}\Big|_{H = 0} \equiv - \partial_z^2 + {\cal V}(z). \label{higg} \end{equation} Its potential is not singular, and for background solutions delivering a minimum this operator must be positive. For the minimal potential with quartic self-interaction \eqref{36} (in terms of the rescaled variables \eqref{variab}) one can come to more quantitative conclusions. Indeed, for gravity switched off the background $\tilde\Phi(z) = \Phi_0(z)$ (note the replacement $y \rightarrow z$!) is defined by \eqref{kink0}. Accordingly, the mass spectrum operator receives the potential \begin{equation} {\cal V}(z) = - 2\Delta_H + 2 \Phi_0^2 = (M^2 - 2\Delta_H) + M^2\Big(1 - \frac{2}{\cosh^2 Mz}\Big). \end{equation} The only localized state of the mass operator $\hat m^2_\chi$ is $\hat\chi \rightarrow \chi_0 \simeq 1/\cosh(Mz)$ with the corresponding mass $m^2_0 = M^2 - 2\Delta_H$, as expected. Thus in the unbroken phase with $M^2 > 2\Delta_H$ the lightest scalar fluctuation in the $\chi$ channel possesses a positive mass and the system is stable. At the critical point, $M^2 = 2\Delta_H$, the lightest fluctuation is massless, and for $M^2 < 2\Delta_H \leq 2M^2$ the localized state $\chi_0$ represents a tachyon and brings an instability, providing a saddle point. Instead, the solution \eqref{zeroap} provides a true minimum (see \cite{aags1}). Qualitatively, the spectrum pattern in the gravity background remains similar. But the derivation of localized eigenfunctions uniformly in the coordinate $z$ encounters certain difficulties, as explained in Subsection \ref{confgauss}, and therefore it will be done in gaussian normal coordinates. \subsection{Fluctuations in gaussian normal coordinates} To simplify the analytical calculations let us represent the quadratic action for the scalar fields in the gaussian normal coordinates $x_\mu, y$, \begin{equation} ds^2 = A^2 \left( z \right)\left( {dx_\mu dx^\mu - dz^2 } \right) = \exp \left( { - 2\rho \left( y \right)} \right)dx_\mu dx^\mu - dy^2 .\end{equation} We recall the transition formulas, \[ z = \int {\exp \rho \left( y \right)dy} ,\quad A\left( z \right) = \exp \left( { - \rho \left( y \right)} \right) . \] Below the prime denotes differentiation with respect to $y$. Further on we focus on the minimal potential with quartic self-interaction \eqref{36} in terms of the rescaled variables \eqref{variab}.
To simplify the form of the action, let us introduce $\widetilde{\mathcal{R}}=\exp(\rho)\mathcal{R}$ and, in addition, redefine the fields in order to normalize the kinetic terms: $\hat{\psi}=\exp(-\rho/2)\tilde{\psi}$, $\hat{\chi}=\exp(-\rho/2)\tilde{\chi}$. \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\!S_{(2),scal}=\int d^4xdy\left[\frac{1}{2}\partial_{\mu}\tilde{\psi}\partial^{\mu}\tilde{\psi} +\frac{1}{2}\partial_{\mu}\tilde{\chi}\partial^{\mu}\tilde{\chi} -2\exp(-2\rho)\theta'\tilde{\chi}\left(\partial_y+\rho'+\frac{\rho''}{\rho'} -\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}}\right)\tilde{\psi}-\right.\\ &&\!\!\!\!\!\!\!\!\!\!\!-\frac{1}{2}\exp(-2\rho)\tilde{\psi}\left\{\left(-\partial_y+\frac{\rho''}{\rho'} -\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}}\right)\left(\partial_y +\frac{\rho''}{\rho'}-\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}}\right) +2\rho'\partial_y+3(\rho')^2+3\rho''-4\rho' \frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}}\right\}\tilde{\psi}-\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\left.-\frac{1}{2}\exp(-2\rho)\tilde{\chi}\left\{-\partial_y^2+(\theta')^2 +\frac{1}{\widetilde{\mathcal{R}}^2}\begin{pmatrix}\tilde H'\\ -\tilde\Phi'\end{pmatrix}^{\dag}\partial^2V\begin{pmatrix}\tilde H'\\ -\tilde\Phi'\end{pmatrix}+2\rho'\partial_y+3(\rho')^2-\rho''\right\}\tilde{\chi}\right],\nonumber \end{eqnarray} where the second variation of the field potential reads, \begin{eqnarray} \partial^2V = \left(\begin{array}{cc}- 2M^2 +6 \tilde\Phi^2 + 2 \tilde H^2 & 4 \tilde\Phi \tilde H\\ 4\tilde\Phi\tilde H& -2\Delta_H + 2\tilde\Phi^2 +6 \tilde H^2\end{array}\right) . \end{eqnarray} Let us perform the mass spectrum expansion, \begin{eqnarray} &&\tilde{\psi}(X)=\exp(\rho)\sum_m \Psi^{(m)}(x)\psi_m(y),\quad\tilde{\chi}(X)=\exp(\rho)\sum_m\Psi^{(m)}(x)\chi_m(y),\nonumber\\ &&\partial_{\mu}\partial^{\mu}\Psi^{(m)}=-m^2\Psi^{(m)}, \end{eqnarray} where the factor $\exp(\rho)$ is introduced to eliminate first derivatives in the equations. We obtain the following equations, \begin{eqnarray} \left(-\partial_y+\frac{\rho''}{\rho'}-\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}} +2\rho'\right)\left(\partial_y+\frac{\rho''}{\rho'} -\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}}+2\rho'\right)\psi_m-\nonumber\\ -2\theta'\left(\partial_y-\frac{\rho''}{\rho'} +\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}} -2\rho'+\frac{\theta''}{\theta'}\right)\chi_m=\exp(2\rho)m^2\psi_m,\\ \left(-\partial_y^2+(\theta')^2+\frac{1}{\widetilde{\mathcal{R}}^2}\begin{pmatrix}\tilde H'\\ -\tilde\Phi'\end{pmatrix}^{\dag}\partial^2V\begin{pmatrix}\tilde H'\\-\tilde\Phi'\end{pmatrix}+4(\rho')^2 -2\rho''\right)\chi_m+\nonumber\\ +2\theta'\left(\partial_y+\frac{\rho''}{\rho'} -\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}} +2\rho'\right)\psi_m=\exp(2\rho)m^2\chi_m . \label{flucspec} \end{eqnarray} This is a coupled-channel equation, of second order in derivatives, in which the spectral parameter $m^2$ plays the role of a coupling constant for a non-derivative part of the potential. The latter part is manifestly negative for all $m^2 >0$. Then, as the exponent $\rho(y)$ is positive and growing at very large $y$, it becomes evident that the mass term makes the potential unbounded from below. Thus any eigenfunction of the spectral problem \eqref{flucspec} is at best a resonance state, though it may be quasilocalized in a finite volume around a local minimum of the potential.
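To make this manifest, one may move the spectral term to the potential side. Schematically, suppressing the mixing with $\psi_m$ for illustration and denoting by ${\cal U}(y)$ the non-derivative terms of the $\chi_m$ equation in \eqref{flucspec}, \begin{equation*} \Bigl[-\partial_y^2+{\cal U}(y)-m^2\exp(2\rho(y))\Bigr]\chi_m=0\,, \end{equation*} so for any fixed $m^2>0$ the growing warp factor $\exp(2\rho(y))$ eventually dominates and drives the effective potential to $-\infty$ at large $y$; a state localized around the brane can therefore only be a resonance.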
In \cite{aags2} the probability for quantum tunneling of quasilocalized light resonances with masses $m\ll M$ was estimated as $\sim \exp\{-\frac{3}{\kappa}\ln\frac{2M}{m}\}$, which for phenomenologically acceptable values of $\kappa \sim 10^{-15}$ and $M/m \gtrsim 30$ means an enormous suppression. Moreover, in perturbation theory the decay does not occur, as the turning point to the unbounded part of the potential is situated at $y\sim 1/\kappa$. Therefore one can calculate the localization of resonances following a perturbative scheme. In the limit $\kappa\longrightarrow 0$ we obtain, \begin{eqnarray} \left(-\partial_y+\frac{\rho_1''}{\rho_1'}-\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}} \right)\left(\partial_y+\frac{\rho_1''}{\rho_1'} -\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}}\right)\psi_m-2\theta' \left(\partial_y-\frac{\rho_1''}{\rho_1'}+\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}} +\frac{\theta''}{\theta'}\right)\chi_m=m^2\psi_m,\\ \left(-\partial_y^2+(\theta')^2+\frac{1}{\widetilde{\mathcal{R}}^2}\begin{pmatrix}\tilde H'\\ -\tilde\Phi'\end{pmatrix}^{\dag}\partial^2V\begin{pmatrix}\tilde H'\\-\tilde\Phi'\end{pmatrix}\right)\chi_m +2\theta'\left(\partial_y+\frac{\rho_1''}{\rho_1'} -\frac{\widetilde{\mathcal{R}}'}{\widetilde{\mathcal{R}}}\right)\psi_m=m^2\chi_m \end{eqnarray} where $\rho_1$ is the first-order term of $\rho$ in $\kappa$. \subsection{Phase transition point in the presence of gravity} In the unbroken phase $\tilde H(y) = 0$ and the equation for $\chi$ takes the form, \begin{equation} \Bigl[-\partial_\tau^2+ \frac{1}{\beta^2 M^2}e^{-2\rho}\Bigl(-2\Delta_H+2\Phi^2\Bigr) + 4 (\rho')^2 - 2 \rho''\Bigr]\chi_m= \frac{m^2}{M^2\beta^2}e^{2\rho}\chi_m , \label{gauchi} \end{equation} where the variable $\tau = \beta M y$ is employed and derivatives are taken with respect to it. Let us perform the perturbative expansion in $\kappa$, \begin{equation}\chi_m = \sum_{n=0} \kappa^n \chi_{m,n},\quad \Delta_{H,c} = \frac12 M^2 \sum_{n=0} \kappa^n \Delta^n_H;\quad m^2 = \sum_{n=0} \kappa^n (m^2)_n\end{equation} and use also the expansions \eqref{phiexp} and \eqref{betaexp}. The limit of switched-off gravity is smooth, and the differential operator on the left-hand side of \eqref{gauchi} can be factorized, \begin{equation} \Bigl[\frac{M^2-2 \Delta_H}{M^2}+(-\partial_\tau+\tanh\tau)(\partial_\tau+ \tanh\tau)\Bigr]\chi_{m,0} =\frac{(m^2)_0}{M^2}\chi_{m,0} , \end{equation} which corresponds to $\Delta^0_H = 1$ for zero scalar mass (the phase transition point). In general, for $M^2-2\Delta_H >0$, one finds one localized state with positive $m^2$, \begin{equation} \chi=\frac{1}{\cosh\tau}+O(\kappa),\quad m^2=M^2-2\Delta_H+O(\kappa), \end{equation} as already established in the previous subsection. Let us now examine the phase transition point, where $m^2 = 0$, and calculate the next order in $\kappa$: \begin{eqnarray} & 0=\Bigl[-\partial_\tau^2 +1 - \frac{2}{\cosh^2\tau} \Bigr]\chi_1+\nonumber\\ &+\Bigl[\Bigl(\frac{1}{\beta^2}\Bigr)_1\Bigl(1 - \frac{2}{\cosh^2\tau}\Bigr)- \Delta_H^{1}+4\Phi_0\Phi_1-2\rho_1''\Bigr]\chi_0 . \label{zerochi} \end{eqnarray} The critical point is found at $\Delta_H=\Delta_{H,c}=\frac{1}{2}M^2(1 -\frac{44}{27}\kappa+O(\kappa^2))$, exactly as obtained in \eqref{critshift}. Accordingly, there exists a normalizable solution of \eqref{zerochi}, which is the zero mode corresponding to the second-order phase transition.
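Recall that at leading order the zero mode can be read off directly from the factorized form above: a normalizable solution is annihilated by the first-order operator, \begin{equation*} (\partial_\tau+\tanh\tau)\,\chi_{m,0}=0 \quad\Longrightarrow\quad \chi_{m,0}\propto\frac{1}{\cosh\tau}\,, \end{equation*} and indeed $(\partial_\tau+\tanh\tau)\,\frac{1}{\cosh\tau}=-\frac{\tanh\tau}{\cosh\tau}+\frac{\tanh\tau}{\cosh\tau}=0$. Hence at $M^2=2\Delta_H$, i.e. $\Delta^0_H=1$, the eigenvalue $(m^2)_0$ vanishes on this state.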
In this case the corresponding first correction to $\chi$ takes the form, \begin{equation} \chi_1=\frac{1}{9}\frac{1}{\cosh\tau}\Bigl[\frac{1}{\cosh^2\tau}-\frac{40}{3}\ln(2\cosh\tau) + \frac{38}{3} + C_1\Bigr], \end{equation} where the choice of the constant $C_1 = 0$ makes this correction orthogonal to $\chi_0$. Thus, in the scalar sector not mixed with the branon (gravity) fluctuations, the localization of a massless state occurs in the presence of gravity. It can also be shown that for $\Delta_H < \Delta_{H,c}$ the quasilocalization of light states in this sector takes place. When $\Delta_H>\Delta_{H,c}$, the squared mass becomes negative, signalling the instability of the unbroken phase. In the broken phase the mixing terms are nonzero, and one has to study the spectrum by perturbation theory near the critical point. The calculations are not presented in this paper because of their high complexity, but to leading order in $\kappa$ they provide the same mass for the light scalar state as in the model \cite{aags1} without gravity, namely, $m^2=2\mu^2+O(\mu^4/M^2)$. This state is associated with the fermion mass generation (Sect.~2) and plays the role of the Higgs field of the Standard Model. \section{Conclusions: consistency of scales and of gravitational coupling with modern data} To consider phenomenological implications, we have to study the interaction of the scalar matter with fermions, \begin{equation} \mathcal{L}_{f}=\bar{\Psi}(i\partial\!\!\!/-g_{K}\tau_3\Phi-g_{H}\tau_1H)\Psi, \end{equation} where in general we can introduce different Yukawa constants for different fermions of the Standard Model (SM). The localization profile depends on the first coupling $g_K$, \begin{equation} \psi_0=\exp\Bigl(-g_K\int^ydy'\Phi(y')\Bigr)=\frac{1}{\cosh^{\alpha}{M\beta y}},\quad \alpha=\frac{g_K}{\beta}=g_K+O\Bigl(\frac{\mu^2}{M^2}\Bigr). \end{equation} Correspondingly, in the leading order in $\mu$ and $\kappa$ the fermion mass is given by \begin{equation} m_f=\frac{\int_{-\infty}^{+\infty}\psi_0(y)^2H(y)\,dy}{\int_{-\infty}^{+\infty}\psi_0(y)^2\,dy} =g_{H}\mu\frac{\Gamma\Bigl(\alpha+\frac{1}{2}\Bigr)^2}{\Gamma\Bigl(\alpha\Bigr)\Gamma\Bigl(\alpha+1\Bigr)}, \end{equation} where the $\Gamma$ functions arise from integrals of powers of $1/\cosh(M\beta y)$, via the standard formula $\int_{-\infty}^{+\infty}\cosh^{-s}u\,du=\sqrt{\pi}\,\Gamma(s/2)/\Gamma\bigl((s+1)/2\bigr)$. As shown in Sections 3 and 6, the scalar fluctuations have a single normalizable state associated with the fermion mass generation, \begin{equation} \Phi=\Phi_0(y)+O\Bigl(\frac{\mu}{M}\Bigr),\quad H=H_0(y)+\chi_{0}(y)h(x)+O\Bigl(\frac{\mu^2}{M^2}\Bigr),\quad \chi_{0}=\frac{1}{\cosh{M\beta y}}, \end{equation} with the mass $m_h=\sqrt{2}\mu\Bigl(1+O\Bigl(\frac{\mu^2}{M^2}\Bigr)\Bigr)$. For $\mu\ll M$ the low-energy four-dimensional Lagrangian, including only the lightest states, takes the following form, \begin{eqnarray} \mathcal{L}_{low}&=&\frac{3\kappa M_{\ast}^3}{2M^3}\int_{-\infty}^{+\infty}\chi_0(y)^2dy\cdot \Bigl(\partial_{\mu}h\partial^{\mu}h-m_h^2h^2\Bigr)+2\int_{-\infty}^{+\infty}\psi_0(y)^2dy\cdot\bar{\psi} \Bigl(i\partial\!\!\!/-m_f\Bigr)\psi-\nonumber\\ &&-2g_H\int_{-\infty}^{+\infty}\psi_0(y)^2\chi_0(y)dy\cdot\bar{\psi}h\psi . \end{eqnarray} After the normalization, \begin{equation} h\rightarrow h\sqrt{\left.\frac{2}{3\kappa}\Bigl(\frac{M}{M_{\ast}}\Bigr)^3 \middle/\int_{-\infty}^{+\infty}\chi_0(y)^2dy\right.},\quad \psi\rightarrow \left.\psi\middle/\sqrt{2\int_{-\infty}^{+\infty}\psi_0(y)^2dy}\right. , \end{equation}
we obtain the following Yukawa coupling constant between the Higgs-like boson and a fermion, \begin{equation} g_f=\sqrt{\frac{2}{3\kappa}\Bigl(\frac{M}{M_{\ast}}\Bigr)^3}g_{H} \frac{\int_{-\infty}^{+\infty}\psi_0(y)^2\chi_0(y)dy}{\sqrt{\int_{-\infty}^{+\infty}\chi_0(y)^2dy} \int_{-\infty}^{+\infty}\psi_0(y)^2dy}=\sqrt{\frac{2}{3\kappa}\Bigl(\frac{M}{M_{\ast}}\Bigr)^3} \frac{m_f}{m_h}. \end{equation} We can compare it with the analogous couplings $\lambda$, $g_{t,SM}$ of the standard Higgs model. We adopt the following normalization of the coupling constants in the Higgs potential of the Standard Model, \begin{equation} V_{SM}\big(h(x)\big) \equiv - m^2 h^2 + \lambda h^4,\quad \langle h \rangle = \frac{v}{\sqrt{2}} = \frac{m}{\sqrt{2\lambda}} . \end{equation} The scale $v \simeq 246\,\mathrm{GeV}$ stands for the v.e.v.\ of the Higgs field $h$ in the Standard Model \cite{PDG}. For the top-quark channel, which dominates the Higgs boson decay via the one-loop mechanism, one obtains, \begin{equation} m_h=\sqrt{2\lambda} v,\quad m_t=\frac{1}{\sqrt{2}}g_{t,SM}\cdot v\Rightarrow g_{t, SM}=2\sqrt{\lambda}\frac{m_t}{m_h}. \end{equation} Accordingly, the relation between the Yukawa coupling constants is given by, \begin{equation} \lambda\frac{g_t^2}{g_{t,SM}^2}=\frac{1}{6\kappa}\Bigl(\frac{M}{M_{\ast}}\Bigr)^3 . \label{yukawa} \end{equation} Let us now invoke the gravity scales coming from the reduction of the five-dimensional Einstein-Hilbert action to the four-dimensional one \cite{aags2}, \begin{equation} M^3_\ast = k M^2_{P}, \label{planck} \end{equation} which can be derived from the graviton kinetic action \eqref{decoup} by taking the wave function $b'_{\mu\nu}= 0$ for the massless graviton. It determines the four-dimensional gravity scale, the Planck mass $M_P \simeq 2.5\cdot 10^{18}\,\mathrm{GeV}$ \cite{PDG}. From the experimental bounds on the AdS curvature in the extra dimension \cite{adelb} one can estimate the minimal values of the mass scales $M_\ast$, $M$, as well as of the dimensional gravitational coupling $\kappa$. Indeed, combining \eqref{planck}, \eqref{asymp} and \eqref{yukawa}, one gets \begin{equation} M = \sqrt{3\sqrt{\lambda}\ k M_P \frac{g_t}{g_{t,SM}}} ;\quad \kappa = \frac{1}{2\sqrt{\lambda}}\frac{M}{M_P}\frac{g_{t,SM}}{g_t} . \end{equation} The modern bound for the AdS curvature is $k > 0.004\,\mathrm{eV}$. Likewise, the excess of $\gamma\gamma$ pair production recently observed at the LHC \cite{lhc} could be explained by the Higgs particle decay $h\rightarrow\gamma\gamma$ via a virtual $\bar t t$ triangle loop if the Yukawa coupling is noticeably larger than the SM value, $\frac{g_t}{g_{t,SM}} = 1 \div 1.5$. Altogether this entails the following bounds for the scales and couplings of our model (numerically, $k\, M_P \simeq 10^{7}\,\mathrm{GeV}^2$, so that $\sqrt{k M_P} \simeq 3\,\mathrm{TeV}$ sets the scale of $M$): \begin{equation} M > 3.5\,\mathrm{TeV};\quad M_\ast > 3\cdot 10^8\,\mathrm{GeV};\quad \kappa > 2\cdot 10^{- 15} . \end{equation} Thus we conclude that the gravitational corrections to the localization mechanism are indeed very small, except for the branon spectrum. However, the finite thickness of the brane may affect high-energy scattering processes already at the next LHC run and show up in appearance/disappearance processes, in particular in missing-energy events \cite{rev12}, \cite{nesv}. \section*{Acknowledgments} We acknowledge the financial support of Grant RFBR 10-02-00881-a and of SPbSU grant 11.0.64.2010. One of us (A.A.) was partially supported by projects FPA2010-20807, 2009SGR502, CPAN (Consolider CSD2007-00042).
{ "timestamp": "2013-06-04T02:05:04", "yymm": "1210", "arxiv_id": "1210.3698", "language": "en", "url": "https://arxiv.org/abs/1210.3698" }
\section{Introduction} Scaling limits of discrete particle systems are a central question in Statistical Mechanics. In the case of interacting particle systems, where particles evolve according to some rule of interaction, it is of interest to characterize, in the continuum limit, the time trajectory of the spatial density of particles. Such limits are given in terms of solutions of partial differential equations, and different particle systems are governed by different types of partial differential equations; there is a large literature on the subject. As a reference, we cite the book \cite{kl}. In this work we are concerned with the convergence of solutions of a particular partial differential equation emerging from particle systems, which we describe as follows. Given $\alpha>0$, denote by $\rho^\alpha$ the unique weak solution of the heat equation with Robin's boundary conditions given by \begin{equation*} \left\{ \begin{array}{ll} \partial_t \rho(t,u) \; =\; \Delta \rho(t,u)\,, &t \geq 0,\, u\in (0,1)\,,\\ \partial_u \rho(t,0) \; =\;\partial_u \rho(t,1)= \alpha(\rho(t,0)-\rho(t,1))\,, &t \geq 0\,,\\ \rho(0,u) \;=\; \rho_0(u), &u \in (0,1)\,. \end{array} \right. \end{equation*} Such an equation is related to a particle system with exclusion dynamics, see \cite{fgn,fl}, in a sense which will be made precise later. We notice that the boundary conditions of Robin's type as given above represent a passage of mass between $u=0$ and $u=1$. These boundary conditions arise from considering the particle systems evolving on the discrete torus. Moreover, they reflect Fick's Law: the rate at which the mass crosses the boundary is proportional to the difference of the densities in each medium. The main theorem we present here is the following convergence in $L^2$: \begin{equation*} \displaystyle \lim_{\alpha\to 0} \rho^\alpha \; = \; \rho^0\quad \textrm{ and }\quad \displaystyle \lim_{\alpha\to \infty} \rho^\alpha \; = \; \rho^\infty\;, \end{equation*} where $\rho^0$ is the unique weak solution of the heat equation with Neumann's boundary conditions \begin{equation*} \begin{cases} \partial_t \rho(t,u) \; =\; \Delta \rho(t,u)\,,&t \geq 0,\, u\in (0,1)\,,\\ \partial_u \rho(t,0) \; =\;\partial_u \rho(t,1)= 0\,,&t \geq 0\,,\\ \rho(0,u) \;=\; \rho_0(u)\,, &u \in (0,1)\,,\\ \end{cases} \end{equation*} and $\rho^\infty$ is the unique weak solution of the heat equation with periodic boundary conditions \begin{equation*} \begin{cases} \partial_t \rho(t,u) \; =\; \Delta \rho(t,u)\,,\qquad & t \geq 0,\, u\in \bb T\,,\\ \rho(0,u) \;=\; \rho_0(u)\,, &u \in \bb T\,, \end{cases} \end{equation*} where $\bb T$ above is the continuous torus. The outline of its proof is the following. Based on energy estimates coming from the particle system, we obtain that the set $\{\rho^\alpha\,;\,\alpha>0\}$ is bounded in a Sobolev-type norm, implying its relative compactness. On the other hand, a careful analysis shows that the limits of $\rho^\alpha$ along subsequences are weak solutions of the corresponding equations, when $\alpha$ goes to zero or to infinity. Uniqueness of weak solutions in each case then ensures the convergence. When $\alpha$ goes to zero or to infinity, the corresponding limits of $\rho^\alpha$ are driven by partial differential equations of a different kind from the original one. For this reason, we employ the term \emph{phase transition}. To the best of our knowledge, this type of result is not a standard one in the partial differential equations literature, and there is no previous nomenclature for it.
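At a purely formal level, the boundary conditions themselves already suggest this dichotomy; the following heuristic is made rigorous in the sections below. In the Robin's condition \begin{equation*} \partial_u \rho(t,0) \;=\; \partial_u \rho(t,1) \;=\; \alpha\big(\rho(t,0)-\rho(t,1)\big)\,, \end{equation*} if $\alpha\to 0$ and the densities remain bounded, the common value of the derivatives at the boundary vanishes, which is Neumann's condition; if $\alpha\to\infty$ and the derivatives remain bounded, then $\rho(t,0)-\rho(t,1)\to 0$, which, together with the equality of the derivatives at $u=0$ and $u=1$, amounts to the periodic conditions.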
One of the novelties of this work, besides the aforementioned theorem, is the approach itself: we use the framework of probability theory to obtain knowledge on the behavior of solutions of a class of heat equations with Robin's boundary conditions as given above. Next, we describe the particle system which provides the bound on a Sobolev-type norm of $\rho^\alpha$. This particle system belongs to the class of Markov processes and evolves on the $1$-dimensional discrete torus with $n$ sites. The elements of its state space are called configurations and are such that at each site of the torus there is at most one particle, whence the name exclusion process. Its dynamics can be informally described as follows. To each bond of the torus is associated an exponential clock, in such a way that clocks associated to different bonds are independent. When a clock rings at a bond, the occupations at the vertices of that bond are interchanged. Of course, if both vertices are occupied or empty, nothing happens. All the clocks have parameter one, except the one at a particular bond, whose parameter is given by $\alpha n^{-\beta}$, where $\alpha,\beta>0$. In other words, this bond slows down the passage of particles across it. It is the existence of this special bond that gives rise to the boundary conditions of the associated partial differential equation. For $\beta=1$, the hydrodynamic limit of this exclusion process is a particular case of the processes studied in \cite{fl}. There, it was proved that the hydrodynamic limit is driven by a generalized partial differential equation involving a Radon-Nikodym derivative with respect to the Lebesgue measure plus a Dirac delta. As an additional result, we deduce here another proof of this hydrodynamic limit, identifying $\rho^\alpha$ as the solution of a classical equation, namely the heat equation with Robin's boundary conditions as given above. Furthermore, by the results proved in \cite{fgn,fl}, we get that $\rho^\alpha$ has a Sobolev-type norm bounded by a constant that does not depend on $\alpha$. Such a constant corresponds to the entropy bound of any measure defined on the state space of the process with respect to its invariant measure. We point out that, although the hydrodynamic limit for this process was already known, this different characterization of the limit density of particles, given in terms of a classical partial differential equation, is new. The most delicate step in the proof of this last result is the uniqueness of weak solutions of the heat equation with Robin's boundary conditions, which requires the construction of an inverse of the laplacian operator acting on a suitable domain. The motivation for this work came from \cite{fgn}, where the hydrodynamic limit for the exclusion process with a slow bond was shown to be given by the heat equation with periodic boundary conditions or the heat equation with Neumann's boundary conditions, depending on whether $\beta<1$ or $\beta>1$, respectively. This suggested to us that, when taking the limit in $\alpha$ in the partial differential equation corresponding to $\beta=1$, one should recover both of these equations. The paper is divided as follows. We give definitions and state our results in Section \ref{s2}. In Section \ref{s3}, we prove uniqueness of weak solutions of the heat equation with Robin's boundary conditions.
In Section \ref{s4}, we introduce the exclusion process with a slow bond, we state and sketch the proof of its hydrodynamic limit, and we obtain bounds on a Sobolev-type norm of $\rho^\alpha$. In Section \ref{s5}, we prove our main result, namely the phase transition for the heat equation with Robin's boundary conditions. In the Appendix, we present some results that are needed in due course. \textbf{Notations:} We denote by $\bb T$ the continuous one-dimensional torus $\bb R/\bb Z$. By an abuse of notation, we denote by $\<\cdot,\cdot\>$ both the inner product in $L^2(\bb T)$ and in $L^2[0,1]$, and we denote by $\|\cdot\|_{L^2[0,1]}$ the corresponding norm. We denote by ${\bf{1}}_A(x)$ the function which is equal to one if $x\in A$ and zero if $x\notin A$, and by $\Delta$ the second derivative in space. \section{Statement of results}\label{s2} In this section, we begin by defining weak solutions of the partial differential equations that we deal with, namely the heat equation with periodic, Robin's and Neumann's boundary conditions. In the sequel, we present the exclusion process with a slow bond, we explain its relation with those equations, and we show how to obtain from it the boundedness of a Sobolev-type norm of weak solutions of the heat equation with Robin's boundary conditions. Finally, we state our main result. \begin{definition}\label{space C^n,m} For $n,m\in{\mathbb{N}}$ and $A,B$ intervals of $\bb R$ or $\bb T$, let $C^{n,m}(A\times B)$ be the space of real-valued functions defined on $A\times B$ of class $C^n$ in the first variable and of class $C^m$ in the second variable. For functions of one variable, we simply write $C^n(A)$. \end{definition} Now, we define weak solutions of the partial differential equations that we deal with. \begin{definition}\label{def edp 1} We say that $\rho$ is a weak solution of the heat equation with periodic boundary conditions \begin{equation} \label{he} \left\{ \begin{array}{ll} \partial_t \rho(t,u) \; =\; \Delta \rho(t,u)\,, \qquad& t \geq 0,\, u\in \bb T\,,\\ \rho(0,u) \;=\; \rho_0(u), &u \in \bb T\,, \end{array} \right. \end{equation} if $\rho$ is measurable and, for any $t\in{[0,T]}$ and any $H\in C^{1,2}([0,T]\times\mathbb{T})$, \begin{equation}\label{eqint1} \begin{split} &\<\rho_t,\,H_t\>-\<\rho_0,\,H_0\>- \int_0^t\big\< \rho_s,\, \partial_s H_s+\Delta H_s\big\>\, ds\;=\;0\,. \end{split} \end{equation} \end{definition} Above and in the sequel, a subindex in a function denotes a variable, \emph{not a derivative}. For instance, by $H_s(u)$ we mean $H(s,u)$. To define weak solutions of the heat equation with Robin's or Neumann's boundary conditions, it is necessary to introduce the notion of Sobolev spaces. \begin{definition}\label{Sobolevdefinition} Let $\mc H^1$ be the set of all locally summable functions $\zeta: [0,1]\to\bb R$ such that there exists a function $\partial_u\zeta\in L^2[0,1]$ satisfying \begin{equation*} \<\partial_uG,\zeta\>\,=\,-\<G,\partial_u\zeta\>\,, \end{equation*} for all $G\in C^{\infty}(0,1)$ with compact support. For $\zeta\in\mc H^1$, we define the norm \begin{equation*} \Vert \zeta\Vert_{\mc H^1}\,:=\, \Big(\Vert \zeta\Vert_{L^2[0,1]}^2+\Vert\partial_u\zeta\Vert_{L^2[0,1]}^2\Big)^{1/2}\,. \end{equation*} Let $L^2(0,T;\mc H^1)$ be the space of all measurable functions $\xi:[0,T]\to \mc H^1$ such that \begin{equation*} \Vert\xi \Vert_{L^2(0,T;\mc H^1)}^2 \, :=\,\int_0^T \Vert \xi_t\Vert_{\mc H^1}^2\,dt\,<\,\infty\,.
\end{equation*} \end{definition} \begin{definition}\label{heat equation Robin} We say that $\rho$ is a weak solution of the heat equation with Robin's boundary conditions given by \begin{equation}\label{her} \left\{ \begin{array}{ll} \partial_t \rho(t,u) \; =\; \Delta \rho(t,u)\,, &t \geq 0,\, u\in (0,1)\,,\\ \partial_u \rho(t,0) \; =\;\partial_u \rho(t,1)= \alpha(\rho(t,0)-\rho(t,1))\,, \qquad &t \geq 0,\\ \rho(0,u) \;=\; \rho_0(u), &u \in (0,1)\,, \end{array} \right. \end{equation} if $\rho$ belongs to $L^2(0,T;\mathcal{H}^1)$ and, for all $t\in [0,T]$ and for all $H\in C^{1,2}([0,T]\times [0,1])$, \begin{equation}\label{eqint2} \begin{split} \< \rho_t,H_t\>\!-\!\<\rho_0,H_0\> \!-\!\! \int_0^t\!\!\!\big\< \rho_s, \partial_s H_s +&\Delta H_s\big\> ds\!-\!\!\int_0^t\!\!\!(\rho_s(0)\partial_uH_s(0)-\rho_s(1)\partial_uH_s(1))\,ds\\ &+ \int_0^t \alpha(\rho_s(0)-\rho_s(1))(H_s(0)-H_s(1))\,ds=0\,. \end{split} \end{equation} \end{definition} \begin{definition}\label{heat equation Neumann} We say that $\rho$ is a weak solution of the heat equation with Neumann's boundary conditions \begin{equation}\label{hen} \left\{ \begin{array}{ll} \partial_t \rho(t,u) \; =\; \Delta \rho(t,u)\,, &t \geq 0,\, u\in (0,1)\,,\\ \partial_u \rho(t,0) \; =\;\partial_u \rho(t,1)= 0\,, \qquad &t \geq 0\,,\\ \rho(0,u) \;=\; \rho_0(u), &u \in (0,1)\,, \end{array} \right. \end{equation} if $\rho$ belongs to $L^2(0,T;\mathcal{H}^1)$ and, for all $t\in [0,T]$ and for all $H\in C^{1,2}([0,T]\times [0,1])$, \begin{equation}\label{eqint3} \begin{split} &\< \rho_t,H_t\>\!-\!\<\rho_0,H_0\> \!- \!\!\!\int_0^t\!\!\!\!\big\< \rho_s, \partial_s H_s\!+\!\Delta H_s\big\> ds \!-\!\!\!\int_0^t\!\!\!(\rho_s(0)\partial_u H_s(0)\!-\!\rho_s(1)\partial_uH_s(1))ds=0.\\ \end{split} \end{equation} \end{definition} Since in Definitions \ref{heat equation Robin} and \ref{heat equation Neumann} we required that $\rho\in L^2(0,T;\mathcal{H}^1)$, the integrals at the boundary points are well-defined. For more details on Sobolev spaces, we refer the reader to \cite{e,l}. Heuristically, in order to establish the integral equation for the weak solution of each one of the equations above, one should multiply both sides of the differential equation by a test function $H$, then integrate both in space and time and, finally, perform a formal integration by parts twice. Applying the respective boundary conditions, we are then led to the corresponding integral equation. This reasoning also shows that any strong solution is a weak solution of the respective equation. We define a measure $W_\alpha$ on $\bb T$ by \begin{equation}\label{W} W_\alpha(du)=du+\frac{1}{\alpha}\,\delta_0(du)\,, \end{equation} that is, $W_\alpha$ is the sum of the Lebesgue measure and the Dirac measure concentrated on $0\in\bb T$ with weight $1/\alpha$. We denote by $\<\cdot,\cdot\>_\alpha$ the inner product in $L^2$ of $\bb T$ with respect to the measure $W_\alpha$. \begin{definition}\label{def7} Let $L^2_{W_{\alpha}}([0,T]\times\bb T)$ be the Hilbert space composed of measurable functions $f:[0,T]\times\bb T\rightarrow{\bb R}$ with $\|f\|_{\alpha}^2:= \<\!\<f,f\>\!\>_{\alpha}<\infty$, where, for $f,g: [0,T] \times \mathbb{T} \to \mathbb{R}$, \begin{equation*} \<\!\<f,g\>\!\>_{\alpha}\,=\, \int_0^T \int_{\bb T} f_s( u) \, g_s(u)\,{W_\alpha}(du)\,ds\,. \end{equation*} By $\<\!\<\cdot,\cdot\>\!\>$ we denote the usual inner product in the Hilbert space $L^2([0,T]\times\bb T)$; that is, \begin{equation*} \<\!\<f,g\>\!\>\,=\, \int_0^T \int_{\bb T} f_s( u) \, g_s(u)\,du\,ds\,.
\end{equation*} By abuse of notation, we will use the same notation $\<\!\<\cdot,\cdot\>\!\>$ for the inner product on the Hilbert space $L^2([0,T]\times[0,1])$. \end{definition} \begin{proposition}\label{uniq_sobolev} For any $\alpha>0$, there exists a weak solution $\rho^\alpha:[0,T]\times[0,1]\to [0,1]$ of \eqref{her}. Moreover, such a solution is unique and satisfies the inequality \begin{equation*} \sup_{H}\Big\{\<\!\<\rho^{\alpha},\partial_u H\>\!\>-2\<\!\<H,H\>\!\>_{\alpha}\Big\}\leq K_0\,, \end{equation*} where $K_0$ is a constant that does not depend on $\alpha$ and the supremum is taken over functions $H\in C^{\,0,1}([0,T]\times\bb T)$, see Definition \ref{space C^n,m}. \end{proposition} The uniqueness of weak solutions stated in the proposition above is proved in Section \ref{s3}, via the construction of the inverse of the laplacian operator defined on a suitable domain. The existence of a weak solution and the inequality above are proved through the hydrodynamic limit of the symmetric exclusion process with a slow bond, as shown in Section \ref{s4}. We state now our main result: \begin{theorem}\label{pdePT} For $\alpha>0$, let $\rho^\alpha:[0,T]\times[0,1]\to [0,1]$ be the unique weak solution of \eqref{her}. Then, \begin{equation*} \displaystyle \lim_{\alpha\to 0} \rho^\alpha \; = \; \rho^0\quad \textrm{ and }\quad \displaystyle \lim_{\alpha\to \infty} \rho^\alpha \; = \; \rho^\infty \end{equation*} in $L^2([0,T]\times [0,1])$, where $\rho^0$ and $\rho^\infty$ are the unique weak solutions of equations \eqref{hen} and \eqref{he}, respectively. \end{theorem} \section{Uniqueness of Weak Solutions}\label{s3} We present here the proof of uniqueness of weak solutions of \eqref{her}. Since the equation is linear, it is sufficient to consider the initial condition $\rho_0(\cdot)\equiv 0$. We begin by defining the inverse of the laplacian operator on a suitable domain. Denote by $L^2[0,1]^{\perp 1}$ the set of functions $g\in L^2[0,1]$ such that \begin{equation*} \int_{0}^1 g(u)\,du\,=\, 0\,. \end{equation*} \begin{definition}\label{operatorext} Let $\mathbb H^\alpha$ be the space of functions $H:[0,1]\rightarrow\bb{R}$ satisfying \begin{itemize} \item $H\in C^1([0,1])$ and, moreover, the derivative $\partial_u H$ is absolutely continuous; \item $\Delta H(u)$ exists Lebesgue almost everywhere and $\Delta H\in L^2[0,1]^{\perp 1}$; \item $H$ satisfies the boundary conditions \begin{equation}\label{boundary conditions in Hbc} \partial_u H(0)=\partial_u H(1)= \alpha(H(0)- H(1))\,. \end{equation} \end{itemize} \end{definition} In order to obtain the uniqueness of weak solutions, we will construct an inverse of the laplacian operator. However, the laplacian operator is not injective on the domain $\mathbb H^\alpha$. For this reason, let us define $\mathbb H^\alpha_0$ as the set of functions $H\in\mathbb H^\alpha$ such that $H(0)=0$. \begin{proposition} The operator $\Delta:\mathbb H^\alpha_0\to L^2[0,1]^{\perp 1}$ is injective. \end{proposition} \begin{proof} Since the operator is linear, it is enough to show that its kernel reduces to the null function. For that purpose, let $H\in \mathbb H^\alpha_0$ be such that $\Delta H\equiv 0$ Lebesgue almost everywhere. Since $\partial_uH$ is absolutely continuous, this implies that $\partial_u H$ is constant. Hence $H(u)=a+bu$. The unique value of $b$ for which a function $H$ of this form satisfies \eqref{boundary conditions in Hbc} is $b=0$: indeed, \eqref{boundary conditions in Hbc} gives $b=\alpha\big(a-(a+b)\big)=-\alpha b$, whence $b=0$. Since $H(0)=0$, we have that $a=0$, thus $H$ is identically zero.
\end{proof} For $g\in L^2[0,1]^{\perp 1}$, define \begin{equation*} [(-\Delta)^{-1}_\alpha g](u)\,:=\,\int_{0}^1 G_\alpha(u,r)\,g(r)\,dr\,, \end{equation*} where the function $G_\alpha:[0,1]\times [0,1]\to \bb R$ is given by \begin{equation*} G_\alpha(u,r)\,=\,\frac{\alpha}{\alpha+1}\,u(1-r)-(u-r){\bf 1}_{\{0\leq r\leq u\leq 1\}}\,. \end{equation*} \begin{proposition}\label{propinvlaplaciano} Let $g\in L^2[0,1]^{\perp 1}$. The operator $(-\Delta)^{-1}_\alpha $ enjoys the following properties: \begin{enumerate} \item[(a)] $(-\Delta)^{-1}_\alpha g\in C^1([0,1])$. Moreover, its first derivative is absolutely continuous. \vspace{0.3cm} \item[(b)] \noindent $\partial_u [(-\Delta)^{-1}_\alpha g](0)=\partial_u [(-\Delta)^{-1}_\alpha g](1)= \alpha([(-\Delta)^{-1}_\alpha g](0)-[(-\Delta)^{-1}_\alpha g](1))$. \vspace{0.3cm} \item[(c)] $(-\Delta)^{-1}_\alpha g\in \mathbb H^\alpha_0$. \vspace{0.3cm} \item[(d)] $(-\Delta) \big[(-\Delta)^{-1}_\alpha g\big]=g$. \vspace{0.3cm} \item[(e)] The operators $(-\Delta):\mathbb H^\alpha_0\to L^2[0,1]^{\perp 1}$ and $(-\Delta)^{-1}_\alpha :L^2[0,1]^{\perp 1}\to \mathbb H^\alpha_0$ are symmetric and non-negative. \end{enumerate} \end{proposition} \begin{proof} By the definition of $(-\Delta)^{-1}_\alpha$, \begin{equation*}\label{eeq2.10} [(-\Delta)^{-1}_\alpha g](u)\,=\, \frac{\alpha}{\alpha+1}\,u\int_0^1(1-r)g(r)\,dr-u\int_0^u g(r)\,dr+\int_0^u r\, g(r)\,dr\,. \end{equation*} By differentiation, we obtain $$\partial_u[(-\Delta)^{-1}_\alpha g](u) = \frac{\alpha}{\alpha+1}\int_0^1(1-r)g(r)\,dr-\int_0^u g(r)dr\,,$$ implying (a). Item (b) follows from the assumption $g\in L^2[0,1]^{\perp 1}$. Items (a) and (b) together imply (c). Differentiating the previous equality once more and recalling (c), we are led to (d). It remains to prove (e). Fix $G,H\in \mathbb H^\alpha_0$. Integration by parts gives \begin{equation*} \< -\Delta G,H\>\,=\,\<\partial_u G,\partial_u H\>+\partial_u G(0) H(0) -\partial_u G(1) H(1)\,. \end{equation*} Since $G,H\in \mathbb H^\alpha$, these functions satisfy \eqref{boundary conditions in Hbc}. As a consequence, \begin{equation*} \< -\Delta G,H\>\,=\,\<\partial_u G,\partial_u H\>+\,\frac{1}{\alpha}\,\partial_u G(0) \partial_u H(0)\,, \end{equation*} which implies symmetry and non-negativity of $-\Delta$. The same argument applies to $(-\Delta)^{-1}_\alpha $, by item (d). \end{proof} \begin{lemma}\label{H2bc} Let $\rho$ be a weak solution of \eqref{her}. Then, for all $H\in\mathbb H^\alpha$ and for all $t\in [0,T]$, \begin{equation}\label{ext} \< \rho_t, H\> \,-\, \< \rho_0 , H\> \,=\, \int_0^t \< \rho_s , \Delta H \>\, ds\,. \end{equation} \end{lemma} \begin{proof} Fix $H\in \mathbb H^\alpha$. Let $\{g_n\}_{n\in\bb N}\subset C([0,1])$ be a sequence of functions converging to $\Delta H$ in $L^2[0,1]$ and such that $\int_0^1 g_n(u)\,du=0$ for all $n\in\bb N$. Notice that this is possible because $\Delta H$ has zero mean. Define \begin{equation*} G_n(u):=H(0)+\partial_uH(0)\, u+ \int_0^u \int_0^v g_n(r) \, dr\,dv\,. \end{equation*} Since $g_n$ has zero mean, we have $\partial_uG_n(0)=\partial_uG_n(1)=\partial_u H(0)=\partial_u H(1)$. Besides that, $G_n(0)=H(0)$. It is easy to verify that $G_n\in C^{1,2}([0,T]\times [0,1])$, see Definition \ref{space C^n,m}.
Since $\rho(\cdot)$ is a weak solution of equation \eqref{her} and $G_n\in C^{1,2}([0,T]\times [0,1])$, we get that \begin{equation}\label{g_n} \begin{split} \< \rho_t, G_n\> \,-\, \< \rho_0, G_n\> \,=&\, \int_0^t \< \rho_s , g_n \>\, ds+\int_0^t(\rho_s(0)-\rho_s(1))\partial_uH(1)\,ds\\ -&\int_0^t\alpha(\rho_s(0)-\rho_s(1))(H(0)-G_n(1))\,ds\,. \end{split} \end{equation} We want to take the limit $n\to\infty$ in the previous equation. To this end, notice that $G_n(1)$ converges to \begin{equation*} H(0)+\partial_u H(0)+ \int_0^1 \int_0^v \Delta H(r) \, dr\,dv\,. \end{equation*} Since $\partial_u H$ is absolutely continuous, the Fundamental Theorem of Calculus can be applied twice, showing that the previous expression is equal to $H(1)$. Therefore, taking the limit $n\to \infty$ in \eqref{g_n}, we obtain that \begin{equation*} \begin{split} \< \rho_t, H\> \,-\, \< \rho_0, H\> \,=&\, \int_0^t \< \rho_s ,\Delta H\>\, ds+\int_0^t(\rho_s(0)-\rho_s(1))\partial_uH(1)\,ds\\ -&\int_0^t\alpha(\rho_s(0)-\rho_s(1))(H(0)-H(1))\,ds\,. \end{split} \end{equation*} By \eqref{boundary conditions in Hbc}, the last two integral terms on the right-hand side of the previous expression cancel, which ends the proof. \end{proof} \begin{proposition}\label{prop243} Let $\rho$ be a weak solution of \eqref{her} with $\rho_0(\cdot)\equiv{0}$. Then, for all $t\in[0,T]$, it holds that \begin{equation}\label{e2.9} \big\< \rho_t,(-\Delta)^{-1}_\alpha\rho_t\big\>=-2\int_0^t\<\rho_s,\rho_s\>\,ds\,.\\ \end{equation} In particular, since equation \eqref{her} is linear, there exists at most one weak solution with initial condition $\rho_0(\cdot)$. \end{proposition} \begin{proof} We first claim that $\<\rho_t,1\>=0$ for any time $t\in [0,T]$, provided $\rho_0(\cdot)\equiv{0}$. This is a consequence of taking the function $H\equiv 1$ in the integral equation \eqref{eqint2}. Since $\rho$ is bounded, we also have that $\rho_t\in L^2[0,1]^{\perp 1}$. In other words, the function $\rho_t$ is in the domain of the operator $(-\Delta)^{-1}_\alpha$. Take a partition $0=t_0<t_1<\cdots<t_n=t$ of the interval $[0,t]$. Writing a telescopic sum, we get \begin{equation*} \begin{split} \<\rho_t,(-\Delta)^{-1}_\alpha\rho_t\>-\<\rho_0,(-\Delta)^{-1}_\alpha\rho_0\> =& \sum_{k=0}^{n-1} \< \rho_{t_{k+1}},(-\Delta)^{-1}_\alpha\rho_{t_{k+1}}\>-\< \rho_{t_k},(-\Delta)^{-1}_\alpha\rho_{t_k}\>\,. \end{split} \end{equation*} By summing and subtracting the term $\< \rho_{t_{k+1}},(-\Delta)^{-1}_\alpha\rho_{t_k}\>$ for each $k$, the right-hand side of the previous expression can be rewritten as \begin{equation}\label{eq1} \begin{split} &\sum_{k=0}^{n-1} \< \rho_{t_{k+1}},(-\Delta)^{-1}_\alpha\rho_{t_{k+1}}\>-\< \rho_{t_{k+1}},(-\Delta)^{-1}_\alpha\rho_{t_k}\>\\ +&\sum_{k=0}^{n-1}\< \rho_{t_{k+1}},(-\Delta)^{-1}_\alpha\rho_{t_k}\>-\<\rho_{t_k},(-\Delta)^{-1}_\alpha\rho_{t_k}\>\,. \end{split} \end{equation} We begin by estimating the second sum above; the first one can be estimated in a similar way, because $(-\Delta)^{-1}_\alpha $ is a symmetric operator. From item (c) of Proposition \ref{propinvlaplaciano} and Lemma \ref{H2bc} we get that \begin{equation}\label{eq2.10} \begin{split} &\< \rho_{t_{k+1}},(-\Delta)^{-1}_\alpha\rho_{t_{k}}\>-\< \rho_{t_{k}},(-\Delta)^{-1}_\alpha\rho_{t_k}\>= -\!\int_{t_k}^{t_{k+1}}\!\! \<\rho_s,\rho_s\>\,ds+\!\int_{t_k}^{t_{k+1}}\!\!\<\rho_s,\rho_s-\rho_{t_k}\>\,ds.\\ \end{split} \end{equation} The sum over $k$ of the first integral on the right-hand side of the last equality is exactly $-\int_{0}^t\<\rho_s,\rho_s\>ds$.
We claim now that the sum over $k$ of the last integral on the right-hand side of the expression above goes to zero as the mesh of the partition goes to zero. To this end, we approximate $\rho$ by a smooth function vanishing simultaneously in a neighborhood of $0$ and $1$. Let $\iota_\delta:\bb R\to \bb R$ be a smooth approximation of the identity. We extend $\rho_s(\cdot)$ by zero outside of the interval $[0,1]$. It is classical that the convolution $\rho_s*\iota_\delta$ is smooth and converges to $\rho_s(\cdot)$ in $L^2[0,1]$ as $\delta\to 0$. Let $\Phi_\delta:[0,1]\to \bb R$ be a smooth function bounded by one, equal to zero on $[0,\delta)\cup (1-\delta, 1]$ and equal to one on $(2\delta,1-2\delta)$. Define $$\rho^\delta_s(u)\,=\,(\rho_s*\iota_\delta)(u)\,\Phi_\delta(u)\,.$$ Then $\rho^\delta_s(\cdot)$ converges to $\rho_s(\cdot)$ in $L^2[0,1]$. Furthermore, since $\rho^\delta_s(\cdot)$ is smooth and vanishes near $0$ and $1$, it is simple to verify that $\rho^\delta_s\in \mathbb H^\alpha_0$. Adding and subtracting $\rho^\delta$, the second integral on the right-hand side of equality \eqref{eq2.10} can be written as \begin{equation*} \int_{t_k}^{t_{k+1}}\<\rho_s-\rho^\delta_s,\rho_s-\rho_{t_k}\>\,ds+ \int_{t_k}^{t_{k+1}}\<\rho^\delta_s,\rho_s-\rho_{t_k}\>\,ds\,. \end{equation*} Fix $\varepsilon>0$. Since $\rho^\delta$ approximates $\rho$, the Dominated Convergence Theorem gives us that the sum over $k$ of the first integral in the expression above is bounded in modulus by $\varepsilon$ for some small $\delta(\varepsilon)$. Take now $\delta=\delta(\varepsilon)$. Since $\rho^\delta_s\in \mathbb H^\alpha_0$, applying Lemma \ref{H2bc} we get that the second integral above is equal to \begin{equation*} \int_{t_k}^{t_{k+1}}\int_{t_k}^s \<\rho_r,\Delta \rho^\delta_s\>\,dr\,ds\,, \end{equation*} whose absolute value is bounded from above by $C(\rho,\delta)(t_{k+1}-t_k)^2$. This is enough to conclude the proof of \eqref{e2.9}. Let us now prove the uniqueness of weak solutions. As above, we take $\rho_0(\cdot)\equiv{0}$, and therefore we want to prove that $\rho_t(\cdot)\equiv{0}$. Since $\rho_t\in{L^2[0,1]^{\perp 1}}$, by item (e) of Proposition \ref{propinvlaplaciano} we have that $\< \rho_t,(-\Delta)^{-1}_\alpha\rho_t\>\geq 0$ for all $t\in[0,T]$. From \eqref{e2.9} and Gronwall's inequality, we conclude that $\< \rho_t,(-\Delta)^{-1}_\alpha\rho_t\>= 0$ for all $t\in[0,T]$. From item (d), for fixed $t\in[0,T]$ there exists $f_t\in\mathbb H^\alpha_0$ such that $\rho_t= (-\Delta)f_t$. Hence, \begin{equation*} \< \rho_t,(-\Delta)^{-1}_\alpha\rho_t\>=\< -\Delta f_t,f_t\>\,=\,\<\partial_u f_t,\partial_u f_t\>+\frac{1}{\alpha}\,\big(\partial_u f_t(0)\big)^2 \,. \end{equation*} Thus, for all $t\in[0,T]$, $\partial_u f_t(\cdot)=0$ Lebesgue almost everywhere. Coming back to $\rho_t= (-\Delta)f_t$, we get that $\rho_t(\cdot)$ is equal to zero. This concludes the proof. \end{proof} \section{Hydrodynamics and energy estimates}\label{s4} In this section we introduce a particle system whose scaling limits are driven by the partial differential equations introduced above. We first describe the model, then we state the hydrodynamics result and, finally, we obtain the energy estimates which are crucial for the proof of Proposition \ref{uniq_sobolev}. \subsection{Symmetric slowed exclusion} \quad \vspace{0.2cm} The symmetric exclusion process with a slow bond is a Markov process $\{\eta_t:\, t\geq{0}\}$ evolving on $\Omega:=\{0,1\}^{\bb T_n}$, where $\bb T_n=\bb Z/n\bb Z$ is the one-dimensional discrete torus with $n$ points.
It is characterized via its infinitesimal generator $\mathcal{L}_{n}$, which acts on functions $f:\Omega\rightarrow \bb{R}$ as \begin{equation*} \mathcal{L}_{n}f(\eta)=\sum_{x\in \bb T_n}\,\xi^{n}_{x,x+1}\,\big[f(\eta^{x,x+1})-f(\eta)\big]\,, \end{equation*} the rates being given by \begin{equation*} \xi^{n}_{x,x+1}\;=\;\left\{\begin{array}{cl} \alpha n^{-\beta}, & \mbox{if}\,\,\,\,x=-1\,,\\ 1, &\mbox{otherwise\,,} \end{array} \right. \end{equation*} where $\eta^{x,x+1}$ is the configuration obtained from $\eta$ by exchanging the variables $\eta(x)$ and $\eta(x+1)$, namely \begin{equation*} \eta^{x,x+1}(y)=\left\{\begin{array}{cl} \eta(x+1),& \mbox{if}\,\,\, y=x\,,\\ \eta(x),& \mbox{if} \,\,\,y=x+1\,,\\ \eta(y),& \mbox{otherwise}\,. \end{array} \right. \end{equation*} The dynamics of this process can be informally described as follows. To each bond $\{x,x+1\}$ is associated an exponential clock of parameter $\xi^{n}_{x,x+1}$. When this clock rings, the values of $\eta$ at the vertices of this bond are exchanged. This means that particles can cross all the bonds at rate $1$, except the bond $\{-1,0\}$, whose dynamics is slowed down by the rate $\alpha n^{-\beta}$, with $\alpha>0$ and $\beta\in{[0,\infty]}$. It is understood here that $n^{-\infty}=0$ and $\infty\cdot 0=0$. It is well known that the Bernoulli product measures on $\Omega$ with parameter $\gamma\in{[0,1]}$, denoted by $\{\nu^n_\gamma : 0\le \gamma \le 1\}$, are invariant for the dynamics introduced above. This means that if $\eta_0$ is distributed according to $\nu^n_\gamma$, then $\eta_t$ is also distributed according to $\nu^n_\gamma$ for any $t>0$. Moreover, the measures $\{\nu^n_\gamma : 0\le \gamma \le 1\}$ are also reversible. In order to keep notation simple, we write $\eta_t:=\eta_{tn^2}$, so that $\{\eta_t: t\ge 0\}$ turns out to be the Markov process on $\Omega$ associated to the generator $\mathcal{L}_n$ speeded up by $n^2$. We notice that we index the process neither by $\beta$ nor by $\alpha$. The trajectories of $\{\eta_t : t\ge 0\}$ live on the space $\mc D(\bb R_+, \Omega)$, i.e., the path space of c\`adl\`ag trajectories with values in $\Omega$. For a measure $\mu_n$ on $\Omega$, we denote by $\bb P^{\alpha,\beta}_{\mu_n}$ the probability measure on $\mc D(\bb R_+, \Omega)$ induced by $\mu_n$ and $\{\eta_t : t\ge 0\}$, and we denote by $\bb E_{\mu_n}^{\alpha,\beta}$ the expectation with respect to $\bb P^{\alpha,\beta}_{\mu_n}$. \subsection{Hydrodynamical phase transition}\label{hid} \quad \vspace{0.2cm} In order to state the hydrodynamic limit, we introduce the empirical measure process as follows. We denote by $\mc M$ the space of positive measures on $\bb T$ with total mass bounded by one, endowed with the weak topology. For $\eta\in{\Omega}$, let $\pi^{n}(\eta, \cdot) \in \mc M$ be given by \begin{equation*} \pi^{n}(\eta,du) \;=\; \pfrac{1}{n} \sum _{x\in \bb T_n} \eta (x)\, \delta_{x/n}(du)\,, \end{equation*} where $\delta_y$ is the Dirac measure concentrated on $y\in \bb T$. For $t\in{[0,T]}$, let $\pi^{n}_t(\eta,du):=\pi^n(\eta_t,du)$. For a test function $H:\bb T \to \bb R$, we use the following notation: \begin{equation*} \<\pi^n_t, H\>:=\int H(u)\pi_t^n(\eta,du)\;=\; \pfrac 1n \sum_{x\in\bb T_n} H (\pfrac{x}{n})\, \eta_t(x)\,. \end{equation*} We use this notation since, for $\pi_t$ absolutely continuous with respect to the Lebesgue measure with density $\rho_t$, we write $\<\rho_t, H\>$ for $\<\pi_t, H\>$. Fix $T>0$.
Let $\mc D([0,T], \mc M)$ be the space of c\`adl\`ag trajectories with values in $\mc M$, endowed with the \emph{Skorohod} topology. For each probability measure $\mu_n$ on $\Omega$, denote by $\bb Q^{\alpha,\beta}_{n,\mu_n}$ the measure on the path space $\mc D([0,T], \mc M)$ induced by $\mu_n$ and the empirical process $\pi^n_t$ introduced above. In order to state our first result, related to the hydrodynamics of this model, we need to impose some conditions on the initial distribution of the process. \begin{definition} \label{def associated measures} A sequence of probability measures $\{\mu_n\}_{n\in\bb N}$ on $\Omega$ is said to be associated to a profile $\rho_0 :\bb T \to [0,1]$ if, for every $\delta>0$ and every $H\in C(\bb T)$, \begin{equation}\label{associated} \lim_{n\to\infty} \mu_n \Big[ \eta:\, \Big\vert \pfrac 1n \sum_{x\in\bb T_n} H(\pfrac{x}{n})\, \eta(x) - \int_{\bb{T}} H(u)\, \rho_0(u) du \Big\vert > \delta \Big]\;=\; 0\,. \end{equation} \end{definition} Now, we state the dynamical phase transition at the hydrodynamic level for the slowed exclusion process introduced above. We notice that this result is an improvement of the main theorem of \cite{fgn}, since we are able to identify the hydrodynamic equation for $\beta=1$ as being the heat equation with Robin's boundary conditions given in \eqref{her}. \begin{theorem} \label{th:hlrm} Fix $\beta\in [0,\infty]$ and $\rho_0: \mathbb{T} \to [0,1]$ piecewise continuous. Let $\{\mu_n\}_{n\in\bb N}$ be a sequence of probability measures on $\Omega$ associated to $\rho_0(\cdot)$. Then, for any $t\in [0,T]$, for every $\delta>0$ and every $H\in C(\mathbb{T})$: \begin{equation*} \lim_{n\to\infty} \mathbb{P}_{\mu_n}^{\alpha,\beta} \Big[\eta_. : \, \Big\vert \pfrac{1}{n} \sum_{x\in\mathbb{T}_n} H\big(\pfrac{x}{n}\big)\, \eta_t(x) - \int_{\bb T}H(u)\rho(t,u)du \Big\vert > \delta \Big] \;=\; 0\,, \end{equation*} where: \begin{itemize} \item if $\beta\in[0,1)$, $\rho(t,\cdot)$ is the unique weak solution of \eqref{he}; \vspace{0.1cm} \item if $\beta=1$, $\rho(t,\cdot)$ is the unique weak solution of \eqref{her}; \vspace{0.1cm} \item if $\beta\in(1,\infty]$, $\rho(t,\cdot)$ is the unique weak solution of \eqref{hen}. \end{itemize} \end{theorem} \begin{proof} The proof of this result is given in \cite{fgn} for $\beta\in{[0,1)}$ and $\beta\in(1,\infty)$. We also notice that, for $\beta=\infty$, the same arguments as those used in \cite{fgn} for $\beta\in(1,\infty)$ fit the case $\beta=\infty$, and for that reason we omit the proof in this case as well. Finally, for $\beta=1$, the proof of the hydrodynamic limit can be almost entirely adapted from the strategy of \cite{fgn} and is the usual one for stochastic processes: tightness, which means relative compactness, plus uniqueness of limit points. We recall that the proof of tightness is very similar to the one given in \cite{fgn}, and for that reason it is omitted. Nevertheless, the characterization of limit points is essentially different from \cite{fgn}, since here we identify the limits as weak solutions of the heat equation with Robin's boundary conditions given in \eqref{her}. We proceed by presenting the proof of this last statement. Recall the definition of $\{\bb Q^{\alpha,\beta}_{n,\mu_n}\}_{n\in \bb N}$. In order to keep notation simple, and since $\beta=1$, we do not index these measures nor $\mathbb{P}_{\mu_n}^{\alpha, \beta}$ by $\beta$.
Let $\bb Q_*^{\alpha}$ be a limit point of $\{\bb Q^{\alpha}_{n,\mu_n}\}_{n\in \bb N}$, whose existence is a consequence of Proposition 4.1 of \cite{fgn}, and assume, without loss of generality, that $\{\bb Q^{\alpha}_{n,\mu_n}\}_{n\in \bb N}$ converges to $\bb Q_*^{\alpha}$ as $n\to \infty$. Now, we prove that $\bb Q_*^{\alpha}$ is concentrated on trajectories of measures absolutely continuous with respect to the Lebesgue measure, $\pi(t,du) = \rho(t,u)\, du$, whose density $\rho(t,u)$ is the unique weak solution of \eqref{her}. At first we notice that, by Proposition 5.6 of \cite{fgn}, $\bb Q_*^{\alpha}$ is concentrated on trajectories absolutely continuous with respect to the Lebesgue measure, $\pi_t(du)=\rho(t,u)\,du$, such that $\rho(t,\cdot)$ belongs to $L^2(0,T;\mc H^1)$. It is well known that the Sobolev space $\mc H^1$ has special properties: all its elements are absolutely continuous functions with bounded variation, see \cite{e}, therefore with well-defined lateral limits. Such a property is inherited by $L^2(0,T;\mc H^1)$, in the sense that we can integrate the lateral limits in time. Let $H\in C^{1,2}([0,T]\times [0,1])$. We begin by claiming that \begin{equation*} \begin{split} \bb Q^\alpha_* \Bigg[\pi_\cdot:\, \<\rho_t, & H_t \> - \<\rho_0, H_0 \> -\int_0^t \big\<\rho_s , \partial_sH_s +\Delta H_s\big\> \,ds \\ &-\,\int_0^t\Big(\rho_s(0)\partial_u H_s(0)-\rho_s(1)\partial_u H_s(1) \Big)\,ds \\ &+\,\int_0^t\alpha\Big(\rho_s(0)-\rho_s(1)\Big)\Big(H_s(0)-H_s(1)\Big)\,ds \,=\,0,\quad\forall t\in[0,T]\, \Bigg]\,=\,1\,. \end{split} \end{equation*} In order to prove the last equality, it is enough to show that, for every $\delta >0$, \begin{equation*} \begin{split} \bb Q^\alpha_* \Bigg[\pi_\cdot:\sup_{0\leq t\leq T}\,\Bigg\vert\, \<\rho_t,& H_t \> \,-\, \<\rho_0, H_0 \> \,-\, \int_0^t \, \big\<\rho_s , \partial_sH_s +\Delta H_s\big\> \,ds \\ &-\,\int_0^t\Big(\rho_s(0)\partial_u H_s(0)-\rho_s(1)\partial_u H_s(1) \Big)\,ds \\ &+\,\int_0^t\alpha\Big(\rho_s(0)-\rho_s(1)\Big)\Big(H_s(0)-H_s(1)\Big)\,ds \,\Bigg\vert\,>\,\delta\, \Bigg]=0\,. \end{split} \end{equation*} Since the boundary integrals are not well-defined in $\mc D\big([0,T],\mc M\big)$, we cannot use Portmanteau's Theorem directly. To avoid this technical obstacle, fix $\varepsilon>0$ and let $\iota_\varepsilon(u)=\pfrac{1}{\varepsilon}\,\textbf 1_{(0,\varepsilon)}(u)$ and $\tilde \iota_\varepsilon(u)=\pfrac{1}{\varepsilon}\,\textbf 1_{(1-\varepsilon,1)}(u)$ be approximations of the identity in the continuous torus. Now, adding and subtracting the convolution of $\rho(t,u)$ with $\iota_\varepsilon$ and $\tilde \iota_\varepsilon$, we can bound the previous probability from above by the sum of \begin{equation*}\label{prob 1sim} \begin{split} \bb Q^\alpha_* \Bigg[\pi_\cdot:\sup_{0\leq t\leq T}\,\Bigg\vert\, & \<\rho_t, H_t \> \,-\, \<\rho_0, H_0 \> \,-\, \int_0^t \, \big\<\rho_s , \partial_sH_s +\Delta H_s\big\> \,ds \\ &-\,\int_0^t\Big((\rho_s*\iota_\varepsilon)(0)\partial_u H_s(0)-(\rho_s*\tilde\iota_\varepsilon)(1)\partial_u H_s(1) \Big)\,ds \\ &+\,\int_0^t\alpha\Big((\rho_s*\iota_\varepsilon)(0)-(\rho_s*\tilde\iota_\varepsilon)(1)\Big)\Big(H_s(0)-H_s(1)\Big)\,ds \,\Bigg\vert\,>\,\delta/3\, \Bigg] \end{split} \end{equation*} and the probabilities of two sets, each of which decreases, as $\varepsilon\to 0$, to a set of null probability, as a consequence of the convolutions being suitable averages of $\rho$ around the boundary points $0$ and $1$.
Now, we claim that we can use Portmanteau's Theorem and Proposition A.3 of \cite{fgn} in order to conclude that the previous probability is bounded from above by \begin{equation*} \begin{split} \varliminf_{n\to \infty}\bb Q^{\alpha}_{n,\mu_n} \Bigg[\pi_\cdot:&\sup_{0\leq t\leq T}\,\Bigg\vert\, \<\rho_t, H_t \> \,-\, \<\rho_0, H_0 \> \,-\, \int_0^t \, \big\<\rho_s , \partial_sH_s +\Delta H_s\big\> \,ds \\ &-\,\int_0^t\Big((\rho_s*\iota_\varepsilon)(0)\partial_u H_s(0)-(\rho_s*\tilde\iota_\varepsilon)(1)\partial_u H_s(1) \Big)\,ds \\ &+\,\int_0^t\alpha \Big((\rho_s*\iota_\varepsilon)(0)-(\rho_s*\tilde\iota_\varepsilon)(1)\Big)\Big(H_s(0)-H_s(1)\Big)\,ds \,\Bigg\vert\,>\,\delta/3\, \Bigg]\,. \end{split} \end{equation*} Although the functions $H_t$, $H_0$, $\partial_sH_s +\Delta H_s$, $\iota_\varepsilon(\cdot,1)$ and $\tilde\iota_\varepsilon(\cdot,0)$ may not belong to $C(\bb T)$, we can proceed as in Section 6.2 of \cite{fgn} in order to justify the bound above. Next we outline the main arguments involved in that procedure. Firstly, we replace each one of these functions by continuous functions which coincide with the original ones on the torus, except on a small neighborhood of their discontinuity points, and whose $L^\infty$-norm is bounded from above by the $L^\infty$-norm of the respective original function. By the exclusion rule, the set where we compare this change has small probability. Thus, in the presence of continuous functions, we apply Portmanteau's Theorem and Proposition A.3 of \cite{fgn}. After this, we return to the original functions using the same arguments. Recall that we consider $\bb T_n$ embedded in $\bb T$, and notice that $(\pi^n*\iota_\varepsilon)(\frac{0}{n})=\eta^{\varepsilon n}(0)$ and $(\pi^n*\tilde\iota_\varepsilon)(\frac{1}{n})=\tilde \eta^{\varepsilon n}(n-1)$, where \begin{equation} \eta^{\varepsilon n}(0)=\frac{1}{\varepsilon n}\sum_{y=1}^{\lfloor \varepsilon n\rfloor}\eta(y) \hspace{1cm} \textrm{and} \hspace{1cm}\tilde\eta^{\varepsilon n}(n-1)=\frac{1}{\varepsilon n}\sum_{y=\lfloor n-\varepsilon n\rfloor}^{ n-1 }\eta(y), \end{equation} and $\lfloor u\rfloor$ denotes the biggest integer smaller than or equal to $u$. By the definition of $\bb Q^{\alpha}_{n,\mu_n}$ and by summing and subtracting the term $\int_{0}^tn^2 \mc L_{n}\<\pi_s^n,\, H_s \>ds$ inside the supremum above, we can bound the previous probability by the sum of \begin{equation*} \bb P^{\alpha}_{\mu_n} \Bigg[\eta_\cdot\,:\,\sup_{0\leq t\leq T}\,\Bigg\vert\, \<\pi^n_t, H_t \> \,-\, \<\pi^n_0, H_0 \> \,-\, \int_0^t \, \<\pi^n_s ,\partial_s H_s \>+n^2 \mc L_{n}\<\pi_s^n,\, H_s \> \,ds \,\Bigg\vert\,>\,\delta/6\, \Bigg] \end{equation*} and \begin{equation*} \begin{split} \bb P^{\alpha}_{\mu_n} \Bigg[\eta_\cdot\,:\,\sup_{0\leq t\leq T}\,\Bigg\vert\,& \int_0^t \, n^2 \mc L_{n}\<\pi_s^n,\, H_s \> \,ds -\, \int_0^t \, \<\pi^n_s ,\Delta H_s \> \,ds \\ &-\,\int_0^t\Big(\eta_s^{\varepsilon n}(0)\partial_u H_s(0)-\tilde\eta_s^{\varepsilon n}(n-1)\partial_u H_s(1) \Big)\,ds \\ &+\,\int_0^t\alpha\Big(\eta_s^{\varepsilon n}(0)-\tilde\eta_s^{\varepsilon n}(n-1)\Big) \Big(H_s(0)-H_s(1)\Big)\,ds \,\Bigg\vert\,>\,\delta/6\, \Bigg]\,. \end{split} \end{equation*} By Dynkin's formula, the expression inside the supremum in the first probability above is a martingale, which we denote by $\mc M^{n}_t(H)$.
A simple computation shows that $\mc M^{n}_t(H)$ converges to zero in $L^2(\bb P^{\alpha}_{\mu_n})$ as $n\to \infty$; then, by Doob's inequality, the first probability vanishes as $n\to \infty$, for every $\delta>0$. Now we treat the remaining term. Using the expression for $n^2\mc L_{n}\<\pi_s^n,\, H_s \>$, we can bound the previous probability by the sum of \begin{equation*} \bb P^{\alpha}_{\mu_n} \Bigg[\eta_\cdot\,:\,\sup_{0\leq t\leq T}\,\Bigg\vert\,\int_0^t \, \<\pi^n_s ,\Delta H_s \> \,ds- \int_0^t \pfrac{1}{n}\sum_{x\neq n-1, 0} \eta_{s}(x)\Delta_nH_s \Big(\frac{x}{n}\Big)\,ds \,\Bigg\vert\,>\,\delta/18\, \Bigg]\,, \end{equation*} \begin{equation*} \begin{split} \bb P^{\alpha}_{\mu_n} \Bigg[\eta_\cdot\,:\,\sup_{0\leq t\leq T}\,&\Bigg\vert\, \int_0^t\Big(\eta_s^{\varepsilon n}(0)\partial_u H_s(0)-\tilde\eta_s^{\varepsilon n}(n-1)\partial_u H_s(1) \Big)\,ds \\ &-\int_0^t \Big( \eta_{s}(0)\nabla_{\!n} H_s(0)\,-\,\eta_{s}(n-1)\nabla_{\!n} H_s(n-2)\Big)\,ds \,\Bigg\vert\,>\,\delta/18\, \Bigg] \end{split} \end{equation*} and \begin{equation*} \begin{split} \bb P^{\alpha}_{\mu_n} \Bigg[\eta_\cdot\,:\,\sup_{0\leq t\leq T}\,\Bigg\vert\, \int_0^t\alpha\Big(\eta_s^{\varepsilon n}&(0)-\tilde\eta_s^{\varepsilon n}(n-1)\Big) \Big(H_s(0)-H_s(1)\Big)\,ds\\ &- \int_0^t\alpha\Big(\eta_s(0)-\eta_s(n-1)\Big) \nabla_{\!n} H_s(n-1)\,ds \,\Bigg\vert\,>\,\delta/18\, \Bigg], \end{split} \end{equation*} where, for $x\in{\mathbb{T}_n}$, $$\Delta_nH\Big(\frac{x}{n}\Big)=n^2\Big(H\Big(\frac{x+1}{n}\Big)+H\Big(\frac{x-1}{n}\Big)-2H\Big(\frac{x}{n}\Big)\Big)$$ is the discrete laplacian and $$\nabla_n H\Big(\frac{x}{n}\Big)=n\Big(H\Big(\frac{x+1}{n}\Big)-H\Big(\frac{x}{n}\Big)\Big)$$ is the discrete derivative. Since $H\in C^{1,2}([0,T]\times [0,1])$, the discrete laplacian of $H_s$, namely $\Delta_nH_s$, converges uniformly to the continuous laplacian of $H_s$, that is, $\Delta H_s$, as $n\to\infty$, which is enough to conclude that the first probability is null. To prove that the remaining probabilities are null, we observe that the discrete derivative of $H$, namely $\nabla_{\!n} H_s$, converges uniformly to the continuous derivative $\partial_u H_s$ as $n\to\infty$, and that $\nabla_{\!n} H_s(n-1)$ converges uniformly to $H_s(0)-H_s(1)$ as $n\to\infty$, since $H\in C^{1,2}([0,T]\times [0,1])$. By the exclusion constraint and approximating the integrals by Riemann sums, the previous probabilities vanish as long as we show that \begin{equation*} \begin{split} \bb P^{\alpha}_{\mu_n} \Bigg[\eta_\cdot\,:\,\sup_{0\leq t\leq T}\,\Bigg\vert\, \int_0^t\Big(\eta_s^{\varepsilon n}(0)&-\eta_{s}(0)\Big)\partial_u H_s(0)\\ &-\Big(\tilde\eta_s^{\varepsilon n}(n-1)-\eta_{s}(n-1)\Big) \partial_u H_s(1) \,ds \,\Bigg\vert\,>\,\delta\, \Bigg]\\ \bb P^{\alpha}_{\mu_n} \Bigg[\eta_\cdot\,:\,\sup_{0\leq t\leq T}\,\Bigg\vert\, \int_0^t\alpha\Big\{\Big(&\eta_s^{\varepsilon n}(0)-\tilde\eta_s^{\varepsilon n}(n-1)\Big)\\&-\Big(\eta_s(0)-\eta_s(n-1)\Big)\Big\} \Big(H_s(0)-H_s(1)\Big)\,ds \,\Bigg\vert\,>\,\delta\, \Bigg] \end{split} \end{equation*} converge to zero, as $\varepsilon\to0$, for all $\delta>0$. This is a consequence of Lemma 5.4 of \cite{fgn}. \end{proof} \subsection{Energy estimates} \quad \vspace{0.2cm} The proof of Proposition \ref{uniq_sobolev} is a consequence of energy estimates obtained by means of the symmetric slowed exclusion process introduced above, and it can be summarized as follows.
Firstly, we notice that the existence of weak solutions of equation \eqref{her} is guaranteed by the tightness proved in \cite{fgn} together with the characterization of the limiting measure $\bb Q^{\alpha}_*$ given above. Secondly, uniqueness was proved in Section \ref{s3}. Finally, to prove the last statement we introduce the next proposition, usually called an \emph{energy estimate}. It says that any limit measure $\bb Q^{\alpha}_*$ is concentrated on functions with finite energy, and moreover that the expected energy is finite. This result provides the link between the particle system $\{\eta_t;\,t\geq{0}\}$ and the weak solution of the heat equation with Robin's boundary condition given in \eqref{her}.
\begin{proposition}\label{Prop_03} Let $\bb Q_*^{\alpha}$ be a limit point of $\{\bb Q^{\alpha}_{n,\mu_n}\}_{n\in \bb N}$. Then,
\begin{equation*}
\bb E_{\bb Q_*^{\alpha}} \Big[ \sup_{H} \Big\{ \<\!\<\rho, \, \partial_u H\>\!\> \,- \, 2 \<\!\<H ,\,H\>\!\>_{\alpha}\Big\} \Big] \, \le \,K_0\,,
\end{equation*}
where $K_0$ is a constant that does not depend on $\alpha$ and the supremum is taken over functions $H\in C^{\,0,1}([0,T]\times\bb T)$, see Definition \ref{space C^n,m}.
\end{proposition}
Since the proof of this proposition follows the same lines as \cite[Subsection 5.2]{fgn}, it is omitted.\medskip

As seen in this section, the measure $\bb Q^{\alpha}_*$ is concentrated on weak solutions of the heat equation with Robin's boundary conditions as given in \eqref{her}. Uniqueness of such weak solutions was proved in Section \ref{s3}. This implies that the measure $\bb Q^{\alpha}_*$ is, in fact, a Dirac measure concentrated on the unique weak solution of \eqref{her}. Denote this solution by $\rho^\alpha$. By the previous proposition, we conclude that
\begin{equation*}
\sup_{H} \Big\{ \<\!\<\rho^\alpha, \, \partial_u H\>\!\> \,- \, 2 \<\!\<H ,\,H\>\!\>_{\alpha}\Big\} \le \,K_0\,,
\end{equation*}
where $K_0$ is a constant that does not depend on $\alpha$ and the supremum is taken over functions $H\in C^{\,0,1}([0,T]\times\bb T)$. This proves Proposition \ref{uniq_sobolev}.

\section{Proof of Theorem \ref{pdePT}}\label{s5}

Since we have proved Proposition \ref{uniq_sobolev}, from now on, for fixed $\alpha>0$, we denote by $\rho^\alpha:[0,T]\times[0,1]\to [0,1]$ the unique weak solution of \eqref{her}. We notice that $\rho^\alpha$ takes values between 0 and 1 since we imposed the same condition on $\rho_0$. Our scheme of proof has the following steps. In Proposition \ref{propositL1}, we prove that the set $\{\rho^{\alpha}:\,\alpha>0\}$ is bounded in $L^2(0,T;\mc H^1)$. This guarantees the relative compactness of this set. In Proposition \ref{propositL2}, we prove that any limit of a convergent subsequence of $\{\rho^{\alpha_n}\}_{n\in\bb N}$ is in $L^2(0,T;\mc H^1)$. In Proposition \ref{novolema}, we obtain some smoothness of $\rho^\alpha$ in time, which we need in order to take limits in $\alpha$. The next step is to analyze separately each term of the integral equation \eqref{eqint2} and to obtain asymptotic results in $\alpha$ for its terms. Proposition \ref{lemmaintegrais0} and Proposition \ref{lemmaintegrais2} cover the limit of the terms in the integral equation \eqref{eqint2} that can be treated in the same way both for $\alpha\to 0$ and $\alpha\to\infty$. Proposition \ref{lemmapdePT1} and Proposition \ref{lemmapdePT2} cover the limit of an integral term in the cases $\alpha\to 0$ and $\alpha\to \infty$, respectively.
These convergences of the integral terms will show that any convergent subsequence of $\{\rho^\alpha:\,\alpha>0\}$ with $\alpha\to 0$ converges to the unique weak solution of the heat equation with Neumann's boundary conditions, and any convergent subsequence of $\{\rho^\alpha:\,\alpha>0\}$ with $\alpha\to \infty$ converges to the unique weak solution of the heat equation with periodic boundary conditions. Putting this together with the relative compactness of the set $\{\rho^\alpha:\,\alpha>0\}$, the convergence follows.

We first introduce a space of test functions that will be used in the sequel.
\begin{definition}\label{def9}
The space $C_c$ consists of functions $H\in C^{\,0,1}([0,T]\times [0,1])$ with compact support in $[0,T]\times(0,1)$.
\end{definition}
\begin{proposition}\label{propositL1}
The set $\{\rho^\alpha:\,\alpha>0\}$ is bounded in $L^2(0,T;\mc H^1)$.
\end{proposition}
\begin{proof}
We begin by observing that Proposition \ref{uniq_sobolev} implies the inequality
\begin{equation}\label{hip1}
\<\!\<\rho^{\alpha},\partial_u H\>\!\>-2\<\!\<H,H\>\!\>\leq K_0\,,
\end{equation}
for all $H\in C_c$. This is a consequence of the simple fact that, if $H$ vanishes in a neighborhood of $0$ and $1$, then $\<\!\<H,H\>\!\>=\<\!\<H,H\>\!\>_{\alpha}$, for all $\alpha>0$. An application of the Riesz Representation Theorem gives us that
\begin{equation*}
\sup_{H\in C_c}\,\Big\{ \<\!\<\rho^\alpha, \partial_u H\>\!\>- 2 \<\!\< H,H\>\!\>\Big\}\,=\,\pfrac{1}{8}\int_0^T\Vert\partial_u\rho^\alpha_t\Vert^2\,dt\,.
\end{equation*}
In order to concentrate on the main facts, the proof of the previous equality is postponed to Corollary \ref{lema3} of the Appendix. Combining the previous identity with \eqref{hip1} and recalling that $K_0$ does not depend on $\alpha$, we conclude that $\{\rho^\alpha:\,\alpha>0\}$ is bounded in $L^2(0,T;\mc H^1)$.
\end{proof}
The boundedness of $\{\rho^\alpha:\,\alpha>0\}$ in $L^2(0,T;\mc H^1)$ implies a compact embedding of $\{\rho^\alpha:\,\alpha>0\}$ in $L^2([0,T]\times [0,1])$. This is a particular case of the Rellich-Kondrachov Theorem for spaces involving time, which can be found in \cite{TE}. To verify it in detail, we list the exact steps: following the notation of \cite[Page 271, Subsection 2.2]{TE}, take $X_0=X=\mc H^1$, $X_1=L^2[0,1]$ and notice that any Hilbert space is reflexive. This verifies the hypotheses of \cite[Theorem 2.1, page 271]{TE} and corresponds to the case we consider. By this compact embedding, any sequence $\{\rho^{\alpha_n}\}_{n\in\bb N}$ has a convergent subsequence in $L^2([0,T]\times [0,1])$.\medskip

Next, we show that the limit of a convergent subsequence of $\{\rho^{\alpha}:\,\alpha>0\}$ is in the space $L^2(0,T;\mc H^1)$.
\begin{proposition}\label{propositL2}
If $\rho^*$ is the limit in $L^2([0,T]\times [0,1])$ of some sequence in the set $\{\rho^{\alpha}:\,\alpha>0\}$, then $\rho^* \in L^2(0,T;\mc H^1)$.
\end{proposition}
\begin{proof}
Suppose that $\rho^{\alpha_n}$ converges to $\rho^*$ in $L^2([0,T]\times [0,1])$, as $n\to\infty$. By Proposition \ref{uniq_sobolev}, for each $n\in\bb N$, $\rho^{\alpha_n}$ satisfies \eqref{hip1} for any $H\in C_c$:
\begin{equation*}
\int_0^T\<\rho^{\alpha_n}_s,\,\partial_u H_s\,\>\,ds-2\int_0^T\<H_s,\,H_s\,\>\,ds\leq K_0
\end{equation*}
and $K_0$ depends neither on $n$ nor on $H$. Taking the limit $n\to \infty$ in the previous inequality, we get that
\begin{equation*}
\int_0^T\<\rho^{*}_s,\,\partial_u H_s\,\>\,ds-2\int_0^T\<H_s,\,H_s\,\>\,ds\leq K_0\,.
\end{equation*}
Replacing $H$ by $yH$ in the previous inequality and optimizing the left-hand side over $y\in \bb R$ gives
\begin{equation*}
\Big(\int_0^T\<\rho^{*}_s,\,\partial_u H_s\,\>\,ds\Big)^2\;\leq\; 8K_0\int_0^T\<H_s,\,H_s\,\>\,ds\,,
\end{equation*}
so that
\begin{equation*}
\begin{split}
\varphi:\;&C_c\to{\bb R}\\
&H\mapsto \int_0^T\<\rho^{*}_s,\,\partial_u H_s\,\>\,ds
\end{split}
\end{equation*}
is a bounded linear functional with respect to the $L^2$-norm. Notice that the set $C_c$ is dense in $L^2([0,T]\times [0,1])$. Hence, by the Riesz Representation Theorem, there exists $\partial_u\rho^{*}\in L^2([0,T]\times [0,1])$ such that
\begin{equation*}
\int_0^T\<\rho^{*}_s,\,\partial_u H_s\,\>\,ds=- \int_0^T\<\partial_u\rho^{*}_s,\, H_s\,\>\,ds\,,
\end{equation*}
for all functions $H\in C_c$, which is the same as saying that $\rho^*$ belongs to $L^2(0,T;\mc H^1)$.
\end{proof}
Now, we analyze the integral equation \eqref{eqint2}. Integrating by parts, it can be rewritten as
\begin{equation}\label{eqintbyparts}
\begin{split}
\<\rho^\alpha_t,\,H_t\,\> -\<\rho^\alpha_0,\,H_0\,\>+\int_0^t\<\, \partial_u\rho^\alpha_s& , \,\partial_u H_s\,\>\, ds- \int_0^t\< \rho^\alpha_s, \, \partial_s H_s\,\>\, ds\\
+& \int_0^t\alpha(\rho^\alpha_s(0)-\rho^\alpha_s(1))(H_s(0)-H_s(1))\,ds\;=\;0\;,
\end{split}
\end{equation}
where $\partial_u\rho^\alpha$ is the weak derivative of $\rho^\alpha$. Our goal consists in analyzing the limit, as $\alpha\to 0$ or $\alpha\to\infty$, of the terms in the previous equation. Due to boundary restrictions, the last integral term above is analyzed separately: Proposition \ref{lemmapdePT1} covers the case $\alpha\to 0$ and Proposition \ref{lemmapdePT2} covers the case $\alpha\to \infty$. We begin by showing some smoothness in time of a weak solution of \eqref{her}, which will be needed in order to take limits.
\begin{proposition}\label{novolema}
For any $H\in C^{1,2}([0,T]\times [0,1])$, there exists a constant $C_H^T$ not depending on $\alpha$ such that
\begin{equation*}
\vert\,\< \rho^\alpha_t, \, H_t\,\>\, -\,\< \rho^\alpha_s, \, H_s\,\>\,\vert \leq C_H^T |\,t-s\,|^{1/2}\,,\qquad\forall s,t\in [0,T]\,.
\end{equation*}
\end{proposition}
\begin{proof}
Let $H\in C^{1,2}([0,T]\times [0,1])$. Since $\rho^\alpha$ satisfies the integral equation \eqref{eqintbyparts}, it is sufficient to estimate the absolute value of
\begin{equation*}
\begin{split}
&R_1:= \int_s^t\<\, \partial_u\rho^\alpha_r , \,\partial_u H_r\>\,dr\,,\\
&R_2:=\int_s^t\< \rho^\alpha_r, \, \partial_r H_r\,\>\, dr\,,\\
&R_3:= \int_s^t\alpha(\rho^\alpha_r(0)-\rho^\alpha_r(1))(H_r(0)-H_r(1))\,dr\,.
\end{split}
\end{equation*}
We start with the case $\alpha\geq 1$. At first we notice that Proposition \ref{fund} guarantees that $R_3$ can be rewritten\footnote{This is essentially the Fundamental Theorem of Calculus, seeing the unit interval as the torus. The proof is technical and is postponed to the Appendix.} as
\begin{equation*}
\int_s^t\partial_u\rho^\alpha_r(0)(H_r(0)-H_r(1))\,dr\;.
\end{equation*}
By the Cauchy-Schwarz inequality,
\begin{equation*}
|R_3|\leq{\Big(\int_0^T(\partial_u\rho^\alpha_r(0))^2\,dr\Big)^{1/2}2\Vert H\Vert_\infty |t-s|^{1/2}}\;.
\end{equation*}
Since $\alpha\geq 1$, we have $\<\!\<H,H\>\!\>_\alpha \leq \<\!\<H,H\>\!\>_1$. As a consequence of Proposition \ref{uniq_sobolev}, the function $\rho^\alpha$ satisfies
\begin{equation*}
\<\!\<\rho^{\alpha},\partial_u H\>\!\>-2\<\!\<H,H\>\!\>_{1}\leq K_0\,,
\end{equation*}
for all $H\in C^{\,0,1}([0,T]\times\bb T)$.
Thus, by Proposition \ref{lema3Wa} we conclude that
\begin{equation*}
\int_0^T(\partial_u\rho^\alpha_r(0))^2\,dr\leq 8K_0\,,
\end{equation*}
from which we get that
$$|R_3|\leq{(8K_0)^{1/2}\,2\Vert H\Vert_\infty |t-s|^{1/2}}.$$
Analogously, by the Cauchy-Schwarz inequality, Proposition \ref{uniq_sobolev} and Proposition \ref{lema3Wa},
$$|R_1|\leq{(8K_0)^{1/2}\,2\Vert \partial_u H\Vert_\infty |t-s|^{1/2}}\,.$$
Finally, $R_2$ can be easily bounded from above by $\Vert \partial_r H\Vert_\infty |t-s|$. The case $\alpha<1$ is easier. Since to estimate $R_1$ and $R_2$ we did not impose any restriction on $\alpha$, it remains to estimate $R_3$, which is bounded from above by $4\Vert H\Vert_\infty |t-s|$. To complete the bounds, notice that $ |t-s|\leq{T^{1/2} |t-s|^{1/2}}$, which is true because $0\leq s,t\leq T$.
\end{proof}
Now, we analyze the limit of the terms in the integral equation \eqref{eqintbyparts} along a subsequence $\rho^{\alpha_n}$. By means of the next proposition, we will be able to replace $\rho_t^{\alpha_n}$ by its $L^2$-limit in the first, second and fourth terms of the integral equation \eqref{eqintbyparts}. In fact, we will need to take the limit along a subsequence of $\alpha_n$. However, since we aim at uniqueness of limit points, this is not a problem.
\begin{proposition}\label{lemmaintegrais0}
Suppose that $\rho^{\alpha_n}$ converges to $\rho^*$ in $L^2([0,T]\times [0,1])$, as $n\to\infty$. Then, there exists a function $\tilde{\rho}$ such that $\rho^*=\tilde{\rho}$ almost everywhere and $t\mapsto\<\,\tilde{\rho}_t,\,H_t\,\>$ is a continuous map. Moreover, there exists a subsequence $n_j$ such that
\begin{equation*}
\lim_{j\to\infty} \<\,\rho^{\alpha_{n_j}}_t,\,H_t\,\>\,=\,\<\,\tilde{\rho}_t,\,H_t\,\>\,,
\end{equation*}
for all $t\in[0,T]$ and for all $H\in C^{1,2}([0,T]\times [0,1])$.
\end{proposition}
\begin{proof}
For $H\in C^{1,2}([0,T]\times [0,1])$ and $n\in \bb N$ consider the function
\begin{equation*}
\begin{split}
f_n(\cdot,H):\,&[0,T]\rightarrow{\bb R}\\
&t\mapsto\<\rho^{\alpha_n}_t,\,H_t\>\,.
\end{split}
\end{equation*}
By Proposition \ref{novolema}, the sequence $\{f_n(\cdot,H)\}_{n\in\bb N}$ is uniformly H\"older, hence equicontinuous. Since $\vert f_n(t,H)\vert\leq \Vert H\Vert_\infty$, by the Arzel\`a-Ascoli Theorem, there exists a subsequence $n_k$, depending on $H$, such that $f_{n_k}(\cdot, H)$ converges uniformly in $t$, as $k\to\infty$, to a continuous function $f(\cdot, H)$. Since $C^{1,2}([0,T]\times [0,1])$ is separable, applying a diagonal argument we can find a subsequence $n_j$ such that the convergence above along $n_j$ holds uniformly in $t$, for any function in a countable dense subset of $C^{1,2}([0,T]\times [0,1])$. By density, the operator $f(t,\cdot)$ can be extended to a bounded linear functional on $ C^2([0,1])$ with respect to the $L^2$-norm (notice that $\vert f(t,H)\vert=\lim_j\vert\<\rho^{\alpha_{n_j}}_t,H_t\>\vert\leq \Vert H_t\Vert$, since $0\leq\rho^{\alpha_n}\leq 1$ implies $\Vert\rho^{\alpha_n}_t\Vert\leq 1$), which in turn can be extended to a bounded linear functional on $L^2[0,1]$. The Riesz Representation Theorem implies the existence of a function $\tilde{\rho}_t\in L^2[0,1]$ such that $f(t,H)=\<\tilde{\rho}_t,H_t\>$. Notice that the last equality holds for all $t\in[0,T]$. Uniqueness of the limit ensures that $\rho^*=\tilde{\rho}$ almost everywhere.
\end{proof}
We point out that the hypothesis of convergence in $L^2$ in the proposition above has no special importance; convergence in another norm would work as well. The $L^2$-norm does play a role, however, in the relative compactness of $\{\rho^\alpha:\,\alpha>0\}$.
The next proposition allows us to replace $\rho_t^{\alpha_n}$ by its limit as $n\to\infty$ in the third term of equation \eqref{eqintbyparts}.
\begin{proposition}\label{lemmaintegrais2}
Suppose that $\rho^{\alpha_n}$ converges to $\rho^*$ in $L^2([0,T]\times [0,1])$. Then, for all $t\in[0,T]$ and for all $H\in C^{1,2}([0,T]\times [0,1])$,
\begin{equation*}
\lim_{n\to\infty}\int_0^t\<\,\partial_u\rho^{\alpha_n}_s,\,\partial_uH_s\,\>\,ds\,=\, \int_0^t\<\,\partial_u\rho^{*}_s,\,\partial_uH_s\,\>\,ds\,.
\end{equation*}
\end{proposition}
\begin{proof}
If $H\in C^{1,2}([0,T]\times [0,1])$, then $\partial_uH$ belongs to the set $C^{\,1,1}([0,T]\times[0,1])$. For this reason, the proof is written in terms of functions belonging to this last domain. Fix a time $t$. Consider first $H\in C^{\,0,1}([0,T]\times[0,1])$ compactly supported in $[0,t]\times(0,1)$. In this case,
\begin{equation*}
\int_0^t\<\,\partial_u\rho^{\alpha_n}_s,\,H_s\,\>\,ds=- \int_0^t\<\,\rho^{\alpha_n}_s,\,\partial_u H_s\,\>\,ds\,,
\end{equation*}
because the integrands above vanish for times greater than $t$. Since $\rho^{\alpha_n}$ converges to $\rho^*$ in $L^2([0,T]\times [0,1])$, the previous equality shows that
\begin{equation*}
\lim_{n\to\infty}\int_0^t\<\,\partial_u\rho^{\alpha_n}_s,\,H_s\,\>\,ds=\int_0^t\<\,\partial_u\rho^{*}_s,\, H_s\,\>\,ds\,.
\end{equation*}
The next step is to extend the previous equality to functions without that condition on the support. Let $H\in C^{1,1}([0,T]\times[0,1])$ and approximate this function in $L^2([0,T]\times[0,1])$ by a function $H^\varepsilon\in C^{1,1}([0,T]\times[0,1])$ with compact support in $[0,T]\times(0,1)$ and such that $\Vert H^\varepsilon\Vert_\infty\leq\Vert H\Vert_\infty$. For $\delta>0$, let us define the function $\varphi^\delta:[0,T]\to\bb R$ as
\begin{equation*}
\varphi^\delta(s)= \left\{
\begin{array}{ll}
1,& \mbox{if}\,\,s\in[0,t-\delta]\,, \\
\displaystyle\frac{t-s}{\delta},& \mbox{if}\,\,s\in[t-\delta,t]\,, \\
0, & \mbox{if}\,\,s\in[t, T]\,.\\
\end{array}
\right.
\end{equation*}
Let $H^{\varepsilon,\delta}_s(u):=H^\varepsilon_s(u)\varphi^\delta(s)$. Then, $ H^{\varepsilon,\delta}\in C^{\,0,1}([0,T]\times[0,1])$ and has compact support contained in $[0,t]\times(0,1)$. Hence, from what we proved above,
\begin{equation}\label{imp}
\begin{split}
\lim_{n\to \infty} \int_0^t\<\, \partial_u\rho^{\alpha_n}_s, \,H^{\varepsilon,\delta}_s\,\>\, ds = \int_0^t\<\, \partial_u\rho^{*}_s, \, H^{\varepsilon,\delta}_s\,\>\, ds\,.
\end{split}
\end{equation}
By the triangle inequality,
\begin{equation}\label{eqintbyparts*}
\begin{split}
\Big\vert\!\int_0^t\!\< \partial_u\rho^{\alpha_n}_s-\partial_u\rho^{*}_s, H_s\>ds&\,\Big\vert\!\leq{\Big\vert\!\int_0^t\!\< \partial_u\rho^{\alpha_n}_s, H_s-H^{\varepsilon}_s\> ds\Big\vert}+\Big\vert\!\int_0^t\!\< \partial_u\rho^{\alpha_n}_s,H^{\varepsilon}_s-H_s^{\varepsilon,\delta}\> ds\Big\vert\\
&+\Big\vert\!\int_0^t\!\<\partial_u\rho^{\alpha_n}_s-\partial_u\rho^{*}_s, H^{\varepsilon,\delta}_s\> ds\Big\vert+\Big\vert \!\int_0^t\!\<\partial_u\rho^{*}_s, H_s^{\varepsilon,\delta}-H^{\varepsilon}_s\> ds\Big\vert\\
&+ \Big\vert\!\int_0^t\<\partial_u\rho^{*}_s, H^{\varepsilon}_s-H_s\> ds\Big\vert\,.
\end{split}
\end{equation}
We shall estimate each term on the right-hand side of the previous inequality. We start with the first one.
By the Cauchy-Schwarz inequality,
\begin{equation*}
\begin{split}
&\Big\vert\int_0^t\!\!\<\, \partial_u\rho^{\alpha_n}_s, H_s-H^{\varepsilon}_s\,\>\, ds\,\Big\vert \leq\Big(\int_0^t\!\!\Vert \partial_u\rho^{\alpha_n}_s\Vert^2\, ds\Big)^{1/2} \Big(\int_0^t\!\!\Vert H_s-H^{\varepsilon}_s\Vert^2\, ds\Big)^{1/2}.\\
\end{split}
\end{equation*}
We notice that by Proposition \ref{uniq_sobolev}, $\rho^{\alpha_n}$ satisfies
\begin{equation*}
\int_0^t\<\rho^{\alpha_n}_s,\,\partial_u G_s\,\>\,ds-2\int_0^t\<G_s,\,G_s\,\>\,ds\leq K_0\,,
\end{equation*}
for all $G\in C_c$, see Definition \ref{def9}, and any $t\in{[0,T]}$. This together with Corollary \ref{lema3} ensures that $\int_0^t\Vert \partial_u\rho^{\alpha_n}_s\Vert^2\, ds\leq 8K_0\,.$ Thus,
\begin{equation*}
\begin{split}
&\Big\vert\int_0^t\<\, \partial_u\rho^{\alpha_n}_s, \,H_s-H^{\varepsilon}_s\,\>\, ds\Big\vert \leq(8K_0)^{1/2} \Big(\int_0^t\Vert H_s-H^{\varepsilon}_s\Vert^2\, ds\Big)^{1/2} .\\
\end{split}
\end{equation*}
By Proposition \ref{propositL2}, the same holds for $\rho^*$, i.e., $\int_0^t\Vert \partial_u\rho^{*}_s\Vert^2\, ds\leq 8K_0$, hence the previous inequality also holds with $\rho^{\alpha_n}$ replaced by $\rho^*$. With this, we have also estimated the last term on the right-hand side of \eqref{eqintbyparts*}. Now we estimate the second term on the right-hand side of \eqref{eqintbyparts*}. Observe that
\begin{equation*}
\begin{split}
\Big\vert\int_0^t\<\, \partial_u\rho^{\alpha_n}_s, \,H_s^{\varepsilon,\delta}-H^{\varepsilon}_s\,\>\, ds\Big\vert&= \Big\vert\int_0^t\int_{0}^1 \partial_u\rho^{\alpha_n}_s(u)[H^\varepsilon_s(u)\varphi^\delta(s)-H^{\varepsilon}_s(u)]\,du\, ds\Big\vert\\
&=\Big\vert\int_{t-\delta}^t(\pfrac{t-s}{\delta}-1)\int_{0}^1 \partial_u\rho^{\alpha_n}_s(u)\,H^\varepsilon_s(u)\,du\,ds\Big\vert\\
&\leq {\int_{0}^T {\bf{1}}_{[t-\delta,t]}(s) \Big\vert\int_{0}^1 \partial_u\rho^{\alpha_n}_s(u)\,H^\varepsilon_s(u)\,du\Big\vert\,\,ds}\,.
\end{split}
\end{equation*}
By the Cauchy-Schwarz inequality we obtain that
\begin{equation*}
\begin{split}
\Big\vert\int_0^t\<\, \partial_u\rho^{\alpha_n}_s, \,H_s^{\varepsilon,\delta}-H^{\varepsilon}_s\,\>\, ds\Big\vert\leq& \sqrt{\delta}\Big(\int_{0}^T\Big\vert \int_{0}^1 \partial_u\rho^{\alpha_n}_s(u)\,H^\varepsilon_s(u)\,du\Big\vert^2\,\,ds\Big)^{1/2}\\
\leq &\sqrt{\delta}\Vert H^\varepsilon\Vert_{\infty}\Big(\int_{0}^T\Big\vert \int_{0}^1 \partial_u\rho^{\alpha_n}_s(u)\,du\Big\vert^2\,\,ds\Big)^{1/2}\\
\leq& \sqrt{\delta}\Vert H\Vert_{\infty}(8K_0)^{1/2}\,.\\
\end{split}
\end{equation*}
By analogous calculations, we get the same estimate with $\rho^{\alpha_n}$ replaced by $\rho^*$; therefore we have also estimated the fourth term on the right-hand side of \eqref{eqintbyparts*}. Putting together the previous computations, we obtain that the left-hand side of \eqref{eqintbyparts*} is bounded from above by
\begin{equation*}
\Big\vert\int_0^t\<\, \partial_u\rho^{\alpha_n}_s- \partial_u\rho^{*}_s, \, H^{\varepsilon,\delta}_s\,\>\, ds\,\Big\vert +2(8K_0)^{1/2}\Big\{\Big(\int_0^t\Vert H_s-H^{\varepsilon}_s\Vert^2 ds\Big)^{1/2} \!\!\!+ \!\sqrt{\delta}\Vert H\Vert_{\infty} \Big\}\,.
\end{equation*}
Employing \eqref{imp}, recalling the definition of $H^\varepsilon$, sending $n\to\infty$, and then $\varepsilon$ and $\delta$ to zero, the proof ends.
\end{proof}
Finally, in the next two propositions we identify the integral equations for the limit of $\rho^{\alpha_n}$ when $\alpha_n\to 0$ or $\alpha_n\to \infty$ by treating the last term of the integral equation \eqref{eqintbyparts}.
We start by showing that the limit of $\rho^{\alpha_n}$ when $\alpha_n\to 0$ is a weak solution of the heat equation with Neumann's boundary conditions.
\begin{proposition}\label{lemmapdePT1}
Let $\{\alpha_n\}_{n\in \bb N}$ be a sequence of positive real numbers such that $\lim_{n\to\infty} \alpha_n\;=\;0\,.$ If $\{\rho^{\alpha_n}\}_{n\in\bb N}$ converges to $\rho^*$ in $L^2([0,T]\times [0,1])$, then $\rho^*$ is the unique weak solution of \eqref{hen}.
\end{proposition}
\begin{proof}
Proposition \ref{propositL2} says that $\rho^*\in L^2(0,T;\mc H^1)$, which is one of the conditions in Definition \ref{heat equation Neumann}. In order to prove that $\rho^*$ satisfies \eqref{eqint3}, the idea is to take the limit as $n\to\infty$ in \eqref{eqintbyparts} and to analyze the limiting terms. By the previous propositions, it only remains to analyze the limit of the last term in the integral equation \eqref{eqintbyparts}. A simple computation shows that, for $t\in{[0,T]}$,
\begin{equation*}
\Big\vert\int_0^t \alpha_n (\rho^{\alpha_n}_s(0)-\rho^{\alpha_n}_s(1))(H_s(0)-H_s(1))\,ds\Big\vert\;\leq \; 4 T \Vert H\Vert_\infty \, \alpha_n\,;
\end{equation*}
hence, when $\alpha_n\to 0$, the last integral in \eqref{eqintbyparts} converges to zero, as $n\to\infty$. Therefore, replacing $\rho^\alpha$ by $\rho^{\alpha_n}$ in \eqref{eqintbyparts} and taking the limit, we conclude that $\rho^*$ satisfies
\begin{equation*}
\< \,\rho^*_t,\,H_t\,\> -\<\,\rho_0,\,H_0\,\>+ \int_0^t\<\, \partial_u\rho^*_s, \, \partial_u H_s\,\>\, ds- \int_0^t\<\, \rho^*_s, \, \partial_s H_s\,\>\, ds\;=\;0\;,
\end{equation*}
for all $t\in [0,T]$ and for all $H\in C^{1,2}([0,T]\times [0,1])$. Since $\rho^*\in L^2(0,T;\mc H^1)$, performing an integration by parts in the previous equation, we get to
\begin{equation*}
\begin{split}
\<\rho^*_t,H_t\> -\<\rho_0,H_0\>&- \int_0^t\< \rho^*_s,\Delta H_s+\partial_s H_s\>ds\\
&-\int_0^t(\rho^*_s(0)\partial_uH_s(0)-\rho^*_s(1)\partial_uH_s(1))ds\;=\;0\,,
\end{split}
\end{equation*}
for all $t\in [0,T]$ and for all $H\in C^{1,2}([0,T]\times [0,1])$, concluding the proof.
\end{proof}
In the next proposition we treat the last term of the integral equation \eqref{eqintbyparts} in the case $\alpha_n\to\infty$.
\begin{proposition}\label{lemmapdePT2}
Let $\{\alpha_n\}_{n\in \bb N}$ be a sequence of positive real numbers such that $\lim_{n\to\infty} \alpha_n\;=\;\infty\,.$ If $\{\rho^{\alpha_n}\}_{n\in\bb N}$ converges to $\rho^*$ in $L^2([0,T]\times [0,1])$, then $\rho^*$ is the unique weak solution of \eqref{he}.
\end{proposition}
\begin{proof}
Proposition \ref{propositL2} says that $\rho^*\in L^2(0,T;\mc H^1)$, which is one of the conditions in Definition \ref{def edp 1}. We shall prove that $\rho^*$ satisfies \eqref{eqint1}. As before, the idea is to take the limit as $n\to\infty$ in \eqref{eqintbyparts} and to analyze the limiting terms. In this situation, we take $H\in C^{1,2}([0,T]\times\bb T)$; since then $H_s(0)=H_s(1)$, the boundary term in \eqref{eqintbyparts} vanishes and \eqref{eqintbyparts} is given by
\begin{equation*}
\begin{split}
& \<\rho^{\alpha_n}_t,\,H_t\,\> -\<\rho_0,\,H_0\,\>+\int_0^t\<\, \partial_u\rho^{\alpha_n}_s , \,\partial_u H_s\,\>\, ds- \int_0^t\< \rho^{\alpha_n}_s, \, \partial_s H_s\,\>\, ds\;=\;0\,.
\end{split}
\end{equation*}
By the first statement of Proposition \ref{lemmaintegrais0} and by Proposition \ref{lemmaintegrais2}, taking the limit as $n\to \infty$ in the previous equality, we conclude that $\rho^*$ satisfies
\begin{equation*}
\begin{split}
& \<\rho^*_t,\,H_t\,\> -\<\rho_0,\,H_0\,\>+\int_0^t\<\, \partial_u\rho^*_s , \,\partial_u H_s\,\>\, ds- \int_0^t\< \rho^*_s, \, \partial_s H_s\,\>\, ds\;=\;0\;,
\end{split}
\end{equation*}
for all $t\in[0,T]$ and for all $H\in C^{1,2}([0,T]\times\bb T)$. To obtain the integral equation \eqref{eqint1} from the equation above, we invoke Proposition \ref{lemmaintegrais2} and perform an integration by parts, which leads to
\begin{equation*}
\begin{split}
\int_0^t\<\, \partial_u\rho^*_s , \,\partial_u H_s\,\>\, ds&= \!\!\lim_{\alpha_n\to\infty}\int_0^t\<\, \partial_u\rho^{\alpha_n}_s , \partial_u H_s\,\>\, ds\\
&=\!\!\lim_{\alpha_n\to\infty}\!\int_0^t\<\rho^{\alpha_n}_s ,\Delta H_s\> ds -\!\! \int_0^t\!\!\!\Big(\rho^{\alpha_n}_s(0)-\rho^{\alpha_n}_s(1)\Big)\partial_uH_s(0)\,ds.
\end{split}
\end{equation*}
We claim that the previous limit is equal to $\int_0^t\<\rho^{*}_s , \,\Delta H_s\,\>\, ds$. At first we prove that
\begin{equation*}
\begin{split}
\lim_{\alpha_n\to\infty} \int_0^t\Big(\rho^{\alpha_n}_s(0)-\rho^{\alpha_n}_s(1)\Big)\partial_uH_s(0)\,ds\,=\,0\,.
\end{split}
\end{equation*}
By the Cauchy-Schwarz inequality and Proposition \ref{fund},
\begin{equation*}
\begin{split}
\int_0^t\!\!\Big(\rho^\alpha_s(0)-\rho^\alpha_s(1)\Big)\partial_uH_s(0)\,ds\leq \Big(\int_0^T\!\!(\partial_uH_s(0))^2\,ds\Big)^{1/2}\frac{1}{\alpha} \Big(\int_0^T\!\!(\partial_u\rho^\alpha_s(0))^2\,ds\Big)^{1/2}\!\!,
\end{split}
\end{equation*}
for all $t\in{[0,T]}$. Without loss of generality, we can assume $\alpha\geq 1$. Thus, by Proposition \ref{uniq_sobolev} and the fact that $\<\!\<H,H\>\!\>_{\alpha}\leq{\<\!\<H,H\>\!\>_{1}}$ because $\alpha\geq{1}$, we arrive at
\begin{equation*}
\<\!\<\rho^{\alpha},\partial_u H\>\!\>-2\<\!\<H,H\>\!\>_{1}\leq K_0\,,
\end{equation*}
for all $H\in C^{\,0,1}([0,T]\times\bb T)$. From Proposition \ref{lema3Wa},
\begin{equation*}
\sup_{H}\Big\{\<\!\<\rho^{\alpha},\partial_u H\>\!\>-2\<\!\<H,H\>\!\>_{1}\Big\}= \frac{1}{8}\int_0^T\Big\{ \Vert \partial_u\rho^\alpha_s\Vert^2+(\partial_u\rho^\alpha_s(0))^2\Big\}\,ds\,,
\end{equation*}
where the supremum above is taken over functions $H\in C^{\,0,1}([0,T]\times\bb T)$, see Definition \ref{space C^n,m}. Therefore,
\begin{equation*}
\begin{split}
\int_0^t(\rho^\alpha_s(0)-\rho^\alpha_s(1))\partial_uH_s(0)\,ds\leq \frac{1}{\alpha} (8K_0)^{1/2} \Big(\int_0^T(\partial_uH_s(0))^2\,ds\Big)^{1/2}\,,
\end{split}
\end{equation*}
for all $t\in{[0,T]}$ and $\alpha\geq 1$. In order to finish the proof it is enough to show that
\begin{equation*}
\lim_{\alpha_n\to\infty}\int_0^t\<\rho^{\alpha_n}_s , \,\Delta H_s\,\>\, ds=\int_0^t\<\rho^{*}_s , \,\Delta H_s\,\>\, ds,
\end{equation*}
which is a consequence of the convergence of $\rho^{\alpha_n}$ to $\rho^*$ in $L^2([0,T]\times[0,1])$ and the Cauchy-Schwarz inequality.
\end{proof}
\begin{proof}[Proof of Theorem \ref{pdePT}]
As mentioned after Proposition \ref{propositL1}, the set $\{\rho^\alpha:\,\alpha>0\}$ is relatively compact in $L^2([0,T]\times [0,1])$. Therefore, any sequence $\alpha_n\to 0$ has a subsequence $\alpha_{n_j}$ such that $\rho^{\alpha_{n_j}}$ converges to some $\rho^*$. By Proposition \ref{propositL2}, Proposition \ref{lemmapdePT1} and the uniqueness of weak solutions of \eqref{hen}, we conclude that $\rho^*$ is the unique weak solution of \eqref{hen}. Since the limit does not depend on the chosen subsequence, $\lim_{\alpha\to 0}\rho^\alpha=\rho^*$.
Analogously, employing Proposition \ref{lemmapdePT2}, we get that $\lim_{\alpha\to \infty}\rho^\alpha=\hat{\rho}\,,$ where $\hat{\rho}$ is the unique weak solution of \eqref{he}. \end{proof}
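The interpolation in $\alpha$ established by Theorem \ref{pdePT} is easy to visualize numerically. The following is a minimal finite-difference sketch, not part of the proof: it evolves the discrete heat equation on $n$ sites with all bonds of conductance $1$ except the bond between sites $n-1$ and $0$, which we assume to have conductance $\alpha/n$, in accordance with the Robin boundary condition of \eqref{her}; all parameter values are illustrative.

\begin{verbatim}
import numpy as np

def evolve(alpha, n=200, T=0.05):
    """Explicit Euler for the discrete heat equation on Z/nZ (diffusive
    scaling n^2), with the bond (n-1,0) slowed down to conductance alpha/n."""
    x = np.arange(n) / n
    rho = 0.5 + 0.4 * np.sin(2 * np.pi * x)  # smooth initial profile in [0,1]
    c = np.ones(n)                           # c[i]: conductance of bond (i, i+1 mod n)
    c[n - 1] = alpha / n                     # the slowed bond
    dt = 0.1 / n**2                          # stability: dt*n^2*(c[i]+c[i-1]) < 1
    for _ in range(int(T / dt)):
        flux = c * (np.roll(rho, -1) - rho)  # flux over each bond
        rho += dt * n**2 * (flux - np.roll(flux, 1))
    return rho

for alpha in (0.01, 1.0, 100.0):
    rho = evolve(alpha)
    print(alpha, rho[0] - rho[-1])  # jump at the slow bond shrinks as alpha grows
\end{verbatim}

For small $\alpha$ the profile develops an essentially free discontinuity at the slowed bond (Neumann-like behavior), while for large $\alpha$ the jump disappears and the profile matches the periodic evolution, in agreement with the two limits of Theorem \ref{pdePT}.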
{ "timestamp": "2013-03-26T01:02:43", "yymm": "1210", "arxiv_id": "1210.3662", "language": "en", "url": "https://arxiv.org/abs/1210.3662" }
\section*{Supplementary material for Bosonic Anderson insulators in a magnetic field}
\section*{I. HIERARCHICAL LOOP MODEL}
Here we analyze in more detail the hierarchical loop model defined in the main text. This model implements the ideas of the droplet picture in an analytically tractable and mathematically precise way. Our focus will be on the analytical calculation of magnetoconductance in the perturbative and the non-perturbative regimes. However, we have also studied the crossover between the two regimes numerically. We found the crossover, the asymptotic power laws and the subleading corrections to be very similar to those observed in the full lattice model of forward-directed paths. This suggests that the hierarchical droplet model indeed captures most of the relevant physical ingredients of magnetoconductance.
\subsection{A. Models}
Imposing the scaling of individual droplet degrees of freedom actually does not fully specify a hierarchical droplet description, but leaves some freedom in the definition of the model. The resulting models differ in the way they treat correlations between energies of spatially overlapping droplets. As we will see, this translates primarily into differences in the numerical coefficients of subleading terms.
\subsubsection{1. Normalized recursion}
We first discuss a version of the hierarchical construction different from the one in the main text. We define it by iterating the following recursive construction from the largest scale $N$ down to the lattice scale, the loops or branch segments having lengths $L_k = 2^{-k}N$ for $0 \leq k\leq K\equiv \lfloor \log_2(N) \rfloor$,
\begin{eqnarray}
S_{\mathcal{L}}&=&1, \quad {\rm if}\quad L_\mathcal{L}= 2^{-K}N,\nonumber\\
S_{\mathcal{L}}&=& \frac{S_{\mathcal{L}'_1}S_{\mathcal{L}'_2}+W_\mathcal{L}(B) S_{\mathcal{L}''_1}S_{\mathcal{L}''_2}}{1+W_\mathcal{L}(0)},\label{construction}\\
W_\mathcal{L}(B) &=& \exp\left[-f L_\mathcal{L}^\theta+i B a_\mathcal{L} L_\mathcal{L}^{1+\zeta}\right]\label{weight} \\
&=&\exp\left[-f L_\mathcal{L}^\theta+i a_\mathcal{L} \left(\frac{L_\mathcal{L}}{\ell_B}\right)^{1+\zeta}\right],\nonumber
\end{eqnarray}
which differs from Eq.~(10) in the main text by the normalizing factor $1+W_\mathcal{L}(0)$. We have defined $\ell_B\equiv B^{-1/(1+\zeta)}$ and have dropped the explicit dependence of $S_\mathcal{L}$ on $B$. Note that the normalization factor in the denominator in Eq.~(\ref{construction}) ensures that $S_\mathcal{L}(B=0)=1$. Therefore $f L_\mathcal{L}^\theta$ is precisely the free energy difference between the leading and subleading branches of paths, which this model treats as independent of loop energies at smaller scales. In weak fields the magnetic field response will be insensitive to the precise value of the small scale cutoff, $L_{\rm min}=N2^{-K}$, as long as it is much smaller than the relevant magnetic length, $L_{\rm min}\ll \ell_B$. Indeed, up to small corrections, $S_\mathcal{L}(B)\approx 1$ for all loops with $L_\mathcal{L}\ll \ell_B$.
\subsubsection{2. Non-normalized recursion}
The above model assumes that free energy differences between a dominant and subdominant branch are independent of the energies (and thus interferences) on smaller scales along those branches. A more realistic model should take into account that if positive interferences occurred along a branch, the resulting ``free energy'' of the branch is statistically smaller than if the interferences were negligible.
Such effects can be built into a hierarchical construction by modifying the recursion to
\begin{eqnarray}
S_{\mathcal{L}}&=&1, \quad {\rm if}\quad L_\mathcal{L}= 2^{-K}N,\nonumber\\
S_{\mathcal{L}}&=& S_{\mathcal{L}'_1}S_{\mathcal{L}'_2}+W_\mathcal{L}(B) S_{\mathcal{L}''_1}S_{\mathcal{L}''_2},\label{construction_nonnorm}
\end{eqnarray}
with the same weight factor $W_\mathcal{L}(B)$ (\ref{weight}), but dropping the normalization. In this case $S_{\mathcal{L}}(0)$ is not normalized to $1$ at all length scales. Instead, the explicit contribution to the free energy difference between two branches, $f L_\mathcal{L}^\theta$, is now supplemented by an extra contribution coming from the sum over paths at smaller scales. This non-normalized recursion follows a similar hierarchical construction by Derrida and Griffiths~\cite{Derrida1989}. Those authors assigned to each loop random energies or signs which, however, did not scale with the level of the hierarchy. This generated randomly fluctuating free energies, with a free-energy exponent only slightly smaller than the value $\theta=1/3$. In this version of the recursion, we retain the spirit of the Derrida-Griffiths approach, with the difference that we introduce the DP-scaling by hand through the free energy $fL_{\mathcal{L}}^\theta$, and do not consider random signs in the recursion relation.
\subsection{B. Magnetoconductance}
We now study the magnetoconductance of the above models,
\begin{eqnarray}
\ln \Delta\sigma_N(B) \equiv \overline{\ln \left| S_{\mathcal{L} =\{0N\}}(B)\right|}- \overline{\ln \left| S_{\mathcal{L}=\{0N\}}(0)\right|},
\end{eqnarray}
where $\overline{\left[ \cdots \right]}$ denotes the average over the set of reduced free energy and area variables, $\left\lbrace h \equiv (f,a) \right\rbrace $. We assume the two variables associated with each loop to be independent and identically distributed,
\begin{eqnarray}
P(\left\lbrace h \right\rbrace) = \prod_\mathcal{L} \rho(f_\mathcal{L}, a_\mathcal{L})df_\mathcal{L} da_\mathcal{L}.
\end{eqnarray}
The product runs over all loops $\mathcal{L}$, the support of $\rho$ being $\{f,a\} \in[0,\infty)\times (-\infty,\infty)$. However, as we will see, only the values of $\rho(f=0,a)\equiv \rho_a(a)$ will enter the analytical results. For quantitative calculations, we will assume a simple Gaussian form,
\begin{eqnarray}
\label{ProbDist}
\rho(f=0,a)\equiv \rho_a(a) = \rho_0 \frac{ \exp[-a^2/2a_0^2] }{\sqrt{2\pi}\,a_0}.
\end{eqnarray}
The (non-normalized) density $\rho_a(a)$ is a free input parameter of the hierarchical models. More realistic densities could be determined by studying the distributions of loop areas in the full lattice model. Let us now analyze the magnetoconductance,
\begin{eqnarray}
F(\{h \}) &\equiv& \left[ \ln \left| S_{0N}(B)\right| - \ln \left| S_{0N}(0) \right| \right] ( \{ h\}) \label{FDefinition}
\end{eqnarray}
as a functional of the disorder realization $(\{ h\})$. $F$ can be viewed as the free energy difference between a directed polymer with $B$-induced complex weights and one in zero field, where all weights are positive. In typical disorder realizations most loops do not play a significant role in modifying the interference of alternative tunneling paths. A loop $\mathcal{L}$ is involved significantly only if $f_\mathcal{L} \lesssim L_\mathcal{L}^{-\theta} $, in which case we refer to it as `active'. Large active loops are dilute, while small ones are more abundant, but contribute very little to magnetoconductance.
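Before turning to the droplet expansion, we note that the model is straightforward to sample numerically; the crossover results quoted above were obtained this way. The following minimal sketch (illustrative, not the production code) implements the normalized recursion (\ref{construction}) with the directed-polymer exponents $\theta=1/3$, $\zeta=2/3$, the Gaussian area density (\ref{ProbDist}) with $\rho_0=a_0=1$, and, as an additional assumption, unit-mean exponential free-energy variables $f$; the analytical results below depend only on $\rho(f=0,a)$, but a full density must be chosen in order to simulate.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
THETA, ZETA = 1.0 / 3.0, 2.0 / 3.0     # DP exponents, theta = 2*zeta - 1

def S(level, K, N, B):
    """One disorder realization of the normalized recursion; at B = 0
    the construction gives S = 1 identically."""
    if level == K:
        return 1.0 + 0.0j
    L = N * 2.0 ** (-level)
    f = rng.exponential(1.0)           # assumed f-density (only rho(f=0,a) enters analytics)
    a = rng.normal(0.0, 1.0)           # Gaussian area density with a0 = 1
    w0 = np.exp(-f * L ** THETA)
    wB = w0 * np.exp(1j * B * a * L ** (1.0 + ZETA))
    s1, s2, s3, s4 = (S(level + 1, K, N, B) for _ in range(4))
    return (s1 * s2 + wB * s3 * s4) / (1.0 + w0)

def ln_dsigma(N=2 ** 8, B=0.05, samples=100):
    # 4**K branches per sample, so keep N modest
    K = int(np.log2(N))
    return np.mean([np.log(abs(S(0, K, N, B))) for _ in range(samples)])

for B in (0.02, 0.05, 0.1):
    print(B, ln_dsigma(B=B))           # negative, growing in magnitude with B
\end{verbatim}

Since $S_\mathcal{L}(B=0)=1$ identically in the normalized model, the sample average of $\ln|S(B)|$ directly estimates $\ln\Delta\sigma_N(B)$.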
Since large active loops are dilute, one can expand $F$ in the spirit of a droplet or virial expansion into a sum of terms ${\mathcal V}_k$, which involve an increasing number $k$ of spatially overlapping loops,
\begin{eqnarray}
F\left(\left\lbrace h\right\rbrace\right) &=& \mathcal{V}_1 + \mathcal{V}_2 + \mathcal{V}_3 + \dots \nonumber\\
&=& \sum_{k\geq 1} \sum_{\{\mathcal{L}_1 \neq ...\neq \mathcal{L}_k\}} F^c\left( h_{\mathcal{L}_1},...,h_{\mathcal{L}_k}\right) \,. \label{VirialExpansion}
\end{eqnarray}
The sums are over all (non-ordered) sets of distinct loops. The decomposition in Eq.~(\ref{VirialExpansion}) is exact, given that the connected functions $F^c$ are defined recursively as
\begin{widetext}
\begin{align}
F^c(h_{\mathcal{L}_1}) &= F\left( \left\lbrace h_{\mathcal{L}} \left| f_{\mathcal{L}\neq \mathcal{L}_1} \to \infty \right.\right\rbrace\right), \label{OneLoopTerm}\\
F^c(h_{\mathcal{L}_1},h_{\mathcal{L}_2}) &= F\left( \left\lbrace h_{\mathcal{L}} \left| f_{\mathcal{L} \neq \mathcal{L}_1,\mathcal{L}_2} \to \infty \right.\right\rbrace\right) - F^c(h_{\mathcal{L}_1}) - F^c(h_{\mathcal{L}_2}), \label{TwoLoopTerms}\\
\vdots \nonumber\\
F^c(h_{\mathcal{L}_1},...,h_{\mathcal{L}_k}) &= F\left( \left\lbrace h_{\mathcal{L}} \left| f_{\mathcal{L}}\to \infty\, \forall \mathcal{L} \not\in \{\mathcal{L}_1,...,\mathcal{L}_k \} \right.\right\rbrace\right) - \sum_{m=1}^{k-1} \sum_{\{\mathcal{L}'_{1} \neq ...\neq \mathcal{L}'_m\}\subset \{\mathcal{L}_{1},...,\mathcal{L}_k\}} F^c(h_{\mathcal{L}'_1},... , h_{\mathcal{L}'_m}). \label{DefinitionOfFc}
\end{align}
\end{widetext}
The subtraction of the disconnected terms in Eqs.~(\ref{TwoLoopTerms},\ref{DefinitionOfFc}) ensures that $F^c$ tends to $0$ as one of its free energy arguments becomes large, $f_i\to \infty$, which turns the corresponding loop inactive. It is also easy to verify that $F^c$ vanishes unless the loops associated with its arguments belong to a single spatially entangled cluster. This follows immediately from the fact that disconnected sets of loops contribute additively to $\ln \left| S_{0N}(B)\right|$. This clustering property ensures an extensive result in the large distance limit, $N\gg \ell_B$ (for every order of the expansion $\overline{{\cal V}_k}\sim N$), i.e., we must have
\begin{eqnarray}
\ln \left| S_{0N}(B)\right| - \ln \left| S_{0N}(0) \right| = -\Delta(\xi^{-1}) N +o(N),
\end{eqnarray}
where the coefficient $\Delta(\xi^{-1})$ is expected to be self-averaging. As the notation suggests, this coefficient represents a correction to the inverse localization length $\xi^{-1}$. The disorder average is carried out term by term. Thereby, the disorder variables, especially $f_{\mathcal{L}}$, take the role played by the relative positions of particles in the cluster expansion of gases. The role of a low gas density is played by the small likelihood of large loops to be active. The term ${\cal V}_k$ in the expansion (\ref{VirialExpansion}) captures the interference contribution from \emph{exactly} $k$ active loops, similar to droplet expansions at low $T$ in related disordered systems~\cite{Doussal2006,Monthus2004,Doussal2010}. This is akin to the virial expansion, which corrects the ideal gas behavior by summing $k$-particle contributions at order $n^k$ in an expansion in the density $n$. The various contributions to ${\cal V}_k$ can easily be represented graphically by enumerating all spatially connected sets of $k$ loops, and summing over their sizes, see Figs.~\ref{VirialTerms} and \ref{VirialTermsOrder3}.
\subsection{C.
Evaluation of leading terms}
\subsubsection{1. $1^{st}$ order term}
The first term in Eq.~(\ref{VirialExpansion}) can be rewritten as
\begin{eqnarray}
\label{FirstVirialTerm}
\mathcal{V}_1 &=& \sum_{\mathcal{L}} \ln \left| \frac{1+W_\mathcal{L}(B)}{1+W_\mathcal{L}(0)}\right| \\
&=& \frac{1}{2} \sum_{\mathcal{L}} \ln \left[ 1 - \frac{4 e^{-f_{\mathcal{L}}L_\mathcal{L}^\theta} \sin^2\left(\frac{a_\mathcal{L}}{2} B L_\mathcal{L}^{1+\zeta} \right)}{\left(1 + e^{-f_{\mathcal{L}}L_\mathcal{L}^\theta}\right)^2} \right], \nonumber
\end{eqnarray}
for both the normalized and the non-normalized recursive definitions of the model. Reorganizing this as a sum over loop lengths $\ell_k= N 2^{-k}$, and performing the disorder average, we find
\begin{eqnarray}
\label{FirstVirialTerm2}
\overline{\mathcal{V}_1} &=& \frac{1}{2} \sum_{k=0}^K \frac{N}{\ell_k} \overline{\ln \left[ 1 - \frac{\sin^2\left(\frac{a}{2} B \ell_k^{1+\zeta} \right)}{\cosh^2\left(\frac{f}{2} \ell_k^\theta \right)} \right]}^{a,f}\,.
\end{eqnarray}
\subsubsection{2. $2^{nd}$ order term}
The second term in the droplet expansion, ${\cal V}_2$, picks up contributions from disorder realizations where two active loops spatially overlap. This can occur in two distinct ways, cf.\ Fig.~\ref{VirialTerms}: either (I) the smaller loop is part of the dominant branch of the larger loop, or (II) it is part of the subdominant branch. Let us refer to the bigger and smaller loop as $\mathcal{L}_1$ and $\mathcal{L}_2$, respectively, with lengths $L_{1,2}$. The following expressions apply to the normalized model; the differences for the non-normalized version will be discussed further below, when we evaluate the terms. Denoting $W_{i}(B)\equiv W_{\mathcal{L}_i}(B)$, ${\cal V}_2$ can be written as
\begin{eqnarray}
\mathcal{V}_2 = \sum\limits_{\substack{\mathcal{L}_1,\mathcal{L}_2 \\ L_{1}>L_{2}}} \left[\mathcal{V}_2^{(I)}(\mathcal{L}_1,\mathcal{L}_2) + \mathcal{V}_2^{(II)}(\mathcal{L}_1,\mathcal{L}_2)\right], \label{SecondVirialTerm}
\end{eqnarray}
where
\begin{eqnarray}
\label{SecondVirialTermDiagram1}
\mathcal{V}_2^{(I)}(\mathcal{L}_1,\mathcal{L}_2) &=& \ln \left| \frac{\frac{1+W_2(B)}{1+W_2(0)}+W_1(B)}{1+W_1(0)} \right|\\
&& -\ln \left|\frac{1+W_1(B)}{1+W_1(0)}\right|-\ln \left|\frac{1+W_2(B)}{1+W_2(0)}\right|, \nonumber \\
\mathcal{V}_2^{(II)}(\mathcal{L}_1,\mathcal{L}_2) &=& \ln \left|\frac{1+\frac{1+W_2(B)}{1+W_2(0)}W_1(B)}{1+W_1(0)}\right|\nonumber\\
&& -\ln \left|\frac{1+W_1(B)}{1+W_1(0)}\right|. \label{SecondVirialTermDiagram2}
\end{eqnarray}
Taking the disorder average and writing (\ref{SecondVirialTerm}) as a sum over loop lengths, we have
\begin{eqnarray}
\overline{\mathcal{V}_2} &=& \sum_{k_1=0}^K\sum_{k_2>k_1}^K \frac{N}{2^{k_2}} \overline{\ln \left| 1 + \frac{W_1(B) (Z_2 + Z_2^{-1}-2)}{(1+W_1(B))^2} \right|}^{a_{1,2},f_{1,2}}, \label{SecondVirialTerm2}
\end{eqnarray}
where $ Z_2= (1+W_2(B))/(1+ W_2(0))$.
\begin{figure}
\centering
\includegraphics[angle = 0,width = 0.45 \textwidth]{VirialTerms.eps}
\caption{Graphic representation of the first two virial terms in Eq.~(\ref{VirialExpansion}). {\em Left:} ${\cal V}_1$ is a sum over all loops $\mathcal{L}$, composed of a dominant (thick line) and a subdominant (thin line) branch. {\em Right:} The two contributions to ${\cal V}_2$ arise from spatially overlapping loops of length $L_1>L_2$. The two cases distinguish whether the smaller loop belongs to the dominant (I) or subdominant (II) branch.}
\vspace*{-0.2 in}
\label{VirialTerms}
\end{figure}
\subsubsection{3.
Higher order terms}
For sufficiently large loops, $L_{\mathcal{L}}\gg 1$, the disorder average simplifies. Indeed, only very small values of $f_{\mathcal{L}_i}$ are relevant, since the connected functions $F^c$ fall off rapidly when one of its arguments $f_{\mathcal{L}_i}$ is larger than $L_{\mathcal{L}_i}^{-\theta}$. On the other hand, we will see that small scales contribute negligibly to magnetoconductance as long as $B\ll1 $, so we can concentrate on $L_\mathcal{L}\gtrsim \ell_B$. Thus, for each variable $f$ in the disorder average, we can safely approximate $\int df da \rho(f,a) ... \approx \int df da \rho_a(a)... $, cf. Eq.~(\ref{ProbDist}). Introducing $F_i\equiv f_{\mathcal{L}_i} L_{\mathcal{L}_i}^\theta$, the disorder average of the $n$-th virial term becomes
\begin{eqnarray}
\overline{\mathcal{V}_n} &=& \sum\limits_{\{\mathcal{L}_1\neq ...\neq \mathcal{L}_n\}} \left( \prod\limits_{i=1}^n \frac{1}{L_{\mathcal{L}_i}^\theta}\right) I_n,\label{DisorderAverage} \\
I_n &=& \prod\limits_{i=1}^n \left( \int_0^\infty dF_i \int_{-\infty}^\infty \rho_a (a_i) d a_i \right) F^c(H_1,H_2,\cdots,H_n), \nonumber
\end{eqnarray}
using the notation $H_i \equiv (F_i,a_i)$.
\section*{II. SCALINGS IN THE DROPLET EXPANSION}
\subsection{A. Weak fields: $BN^{1+\zeta}\ll 1$}
For weak fields one can expand $I_n$ in the enclosed fluxes, the result being dominated by the largest scale $N$. Expanding Eq.~(\ref{FirstVirialTerm2}) in $B$, and integrating over the rescaled $F$ variables, we find the leading contribution to the magnetoconductance $ \ln \Delta \sigma_N(B)$,
\begin{eqnarray}
\overline{\mathcal{V}_1} \approx - \frac{1}{4}\int \rho_a(a) a^2 da\, B^2 N \sum_{k=0}^K \ell_k^2 \nonumber\\
\approx -\frac{1}{3} \int \rho_a(a) a^2 da\, B^2 N^3, \label{weak fields}
\end{eqnarray}
which is negative, as expected for bosonic magnetoconductance. Likewise, one can check from (\ref{SecondVirialTerm2}) that ${\cal V}_2 \sim O(B^2 N^{2(1+\zeta)-2\theta})$. More generally one finds that higher order terms are suppressed by the prefactors $\prod_{i=1}^n L_i^{-\theta}$ in Eq.~(\ref{DisorderAverage}) with $L_i\sim N$, which leads to the subdominant scaling ${\cal V}_k\sim O(B^2 N^{2(1+\zeta)-k\theta})$. Note that the leading scaling (\ref{weak fields}) is {\em independent} of the wandering exponent, by virtue of the relation $\theta =2\zeta-1$. One therefore obtains the same scaling as in a non-disordered case, for which the exponents $\zeta=1/2, \theta=0$ hold. However, we stress that in the disordered case the result (\ref{weak fields}) arises as a result of disorder averaging, which masks some of the physics. The {\em distribution} of ${\cal V}_1$ is wide, and the average (\ref{weak fields}) is dominated by a few rare disorder configurations. The latter occur with probability $\sim N^{-\theta}$, but contribute a large ${\cal V}_1\sim B^2 N^{2(1+\zeta)}$, while in most other realizations the wavefunctions are much less affected by quantum interference. \footnote{Since the response in the perturbative regime is strongly inhomogeneous, it is not clear whether the logarithmically disorder-{\em averaged} $\Delta\sigma_N$ with $N=R_{\rm hop}$ is the only relevant quantity determining transport. In particular one should be cautious when using these results as inputs for transport problems on larger scales, such as variable range hopping.
We are not aware of any theoretical approach which takes into account the statistical distribution of the $B$-effects on wavefunction properties, rather than assuming a homogeneous average effect on all wavefunctions.}
\begin{figure}
\centering
\includegraphics[angle = 0,width = 0.45 \textwidth]{VirialTermsOrder3.eps}
\caption{Graphic representation of the third order terms ${\cal V}_3$ in Eq.~(\ref{VirialExpansion}) with $ L_1 $ (blue) $\geq L_2$ (red) $\geq L_3$ (green).}
\vspace*{-0.2 in}
\label{VirialTermsOrder3}
\end{figure}
\subsection{B. Strong fields: $BN^{1+\zeta}\gg 1$}
The perturbative expansion holds only for weak fields, for which the distance between end points is smaller than the `magnetic length', $N \ll \ell_B$. For stronger fields, the dominant contribution comes from loops at the scale $\ell_B$. To see this, let us approximate the sum over discrete loop sizes in (\ref{FirstVirialTerm2}) as an integral, $\sum_k\approx 1/\ln(2)\int d\ell/\ell$,
\begin{eqnarray}
\label{FirstVirialTerm3}
\overline{\mathcal{V}_1} &\approx & \frac{N}{2\ln(2)} \int_1^N \frac{d\ell}{\ell^2} \, \overline{\ln \left[1 - \frac{\sin^2\left(\frac{a}{2} B \ell^{1+\zeta} \right)}{\cosh^2\left(\frac{f}{2} \ell^\theta \right)} \right]}^{a,f}\,.
\end{eqnarray}
Rescaling the free energies and changing variables to $u\equiv \ell/\ell_B$, we obtain
\begin{eqnarray}
\label{FirstVirialTerm3Repeat}
\overline{\mathcal{V}_1} &= & -c_1\frac{N}{\ell_B^{1+\theta}} = -c_1 NB^{\frac{2\zeta}{1+\zeta}} = -c_1 N B^{4/5},
\end{eqnarray}
with the numerical coefficient
\begin{eqnarray}
c_1 &\approx & -\frac{1}{2\ln(2)} \int_0^\infty \frac{du}{u^{2+\theta}} \int_0^\infty dF \times\nonumber\\
&& \quad \int_{-\infty}^\infty da \rho_a(a) \ln\left[1-\frac{\sin^2(a u^{1+\zeta}/2)}{\cosh^2(F/2)}\right].
\end{eqnarray}
For the particular choice (\ref{ProbDist}) for $\rho_a(a)$ (with $\rho_0=a_0=1$) we find
\begin{eqnarray}
c_1\approx 0.86.
\end{eqnarray}
Note that the dominant contribution comes indeed from $u=O(1)$, i.e., from loops of size $\ell_B$. We have extended the limits of the $u$-integral to $0$ and $\infty$, as it converges rapidly on both sides. This result implies a leading correction to the inverse localization length,
\begin{eqnarray}
\label{dxi}
\Delta(\xi^{-1}) = c_1 B^{\frac{2\zeta}{1+\zeta}}.
\end{eqnarray}
\subsection{C. Subleading corrections}
In the non-perturbative regime, subleading corrections are interesting to analyze in more detail, as they correct the leading behavior (\ref{dxi}). As we shall see below, there is a direct correlation between the order of a term in the virial expansion and its scaling with $B$, which justifies using the virial expansion in the first place. Roughly speaking, each loop contributes a scaling factor of $B^{\frac{\theta}{1+\zeta}}$ on disorder averaging, causing the $n$-th virial term with $n$ spatially overlapping loops to contain as many such factors. A more precise formulation follows. We begin with the second order term $\mathcal{V}_2$.
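As a cross-check on the quoted value of $c_1$, the integral above is easy to evaluate with crude numerics. The following sketch (illustrative grids and sample sizes, not the computation used for the quoted digits) performs the Gaussian $a$-average by Monte Carlo and the $F$- and $u$-integrals by Riemann sums on truncated ranges:

\begin{verbatim}
import numpy as np

THETA, ZETA = 1.0 / 3.0, 2.0 / 3.0

def c1_estimate(n_a=500, seed=2):
    rng = np.random.default_rng(seed)
    a = rng.normal(size=n_a)               # rho_a with rho_0 = a_0 = 1
    u, du = np.linspace(1e-3, 60.0, 1200, retstep=True)
    # F-grid starts slightly above 0 to tame an integrable log-singularity
    F, dF = np.linspace(1e-3, 25.0, 400, retstep=True)
    ch2 = np.cosh(F / 2.0) ** 2
    total = 0.0
    for ui in u:   # integrand ~ u for u -> 0 and ~ u^(-2-theta) for u -> inf
        s2 = np.sin(a * ui ** (1.0 + ZETA) / 2.0) ** 2
        inner = np.log1p(-np.outer(s2, 1.0 / ch2)).mean(axis=0)  # a-average
        total += du * dF * inner.sum() / ui ** (2.0 + THETA)
    return -total / (2.0 * np.log(2.0))

print(c1_estimate())  # should land near the quoted c1 ~ 0.86,
                      # up to truncation and Monte Carlo error
\end{verbatim}

Since only $\rho(f=0,a)$ enters the coefficient, no choice of $f$-density is needed here; the rescaled variable $F$ is integrated with flat measure, as in the expression for $c_1$ above.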
As before, we can rewrite $\overline{\mathcal{V}_2}$ as a sum over pairs of loop lengths, $L_j = 2^{-j} N$ and $L_k = 2^{-k} N $, and take the continuum limit of the discrete sums over $j$ and $k$:
\begin{eqnarray}
\overline{\mathcal{V}_2} &=& \sum\limits_{\substack{j,k \\k > j}} \frac{N}{L_{k}} \left[ \overline{\mathcal{V}_2^{(I)}}(L_{j},L_k) + \overline{\mathcal{V}_2^{(II)}} (L_{j},L_{k})\right] \label{SecondVirialTermAvg} \\
&\approx& \frac{N}{(\ln 2)^2}\int_1^N \frac{d\ell_1}{\ell_1} \int_1^{\ell_1} \frac{d\ell_2}{\ell_2^2}\left[ \overline{\mathcal{V}_2^{(I)}}(\ell_1,\ell_2) + \overline{\mathcal{V}_2^{(II)}} (\ell_1,\ell_2)\right]. \nonumber
\end{eqnarray}
Here, $\mathcal{V}_2^{(I)}\left(\ell_1,\ell_2\right) $ and $\mathcal{V}_2^{(II)}\left(\ell_1,\ell_2\right) $ are given by Eqs.~(\ref{SecondVirialTermDiagram1},\ref{SecondVirialTermDiagram2}) (loop $\mathcal{L}_i$ being referred to by its length $\ell_i$). The overbar denotes the average over $f_{1,2}$ and $a_{1,2}$ with the appropriate probability distributions,
\begin{multline}
\overline{\left[ \cdots\right]} \equiv \int_{0}^\infty df_1 \rho_f(f_1) \int_0^\infty df_2 \rho_f(f_2) \\
\int_{-\infty}^\infty da_1 \rho_a(a_1) \int_{-\infty}^\infty da_2 \rho_a(a_2) \left[ \cdots \right]. \nonumber
\end{multline}
Substituting $u_1 = \frac{\ell_1}{\ell_B}$ and $u_2 = \frac{\ell_2}{\ell_B}$, we obtain the subleading correction
\begin{eqnarray}
\overline{\mathcal{V}_2} &=& -c_2 \frac{N} {\ell_B ^{(1 + 2\theta)}} = -c_2 NB^{\frac{4\zeta - 1}{1+\zeta}} ,
\end{eqnarray}
with the numerical coefficient
\begin{widetext}
\begin{multline}
c_2 \approx -\frac{1}{(\ln 2)^2} \int_0^\infty \frac{du_1}{u_1^{1+\theta}} \int_0^{u_1} \frac{du_2}{u_2^{2+\theta}} \int_0^\infty dF_1 \int_0^\infty dF_2 \int_{-\infty}^\infty da_1 \rho_a(a_1) \int_{-\infty}^\infty da_2 \rho_a(a_2) \\
\times \ln \left| 1 + \frac{ e^{-F_1 + i a_1 u_1^{1+\zeta}}}{\left(1 +e^{-F_1 + i a_1 u_1^{1+\zeta}}\right)^2} \left(\frac{1+e^{-F_2 + i a_2 u_2^{1+\zeta}}}{1 + e^{-F_2}} +\frac{1 + e^{-F_2}}{1+e^{-F_2 + i a_2 u_2^{1+\zeta}}}-2 \right)\right|. \label{c2}
\end{multline}
\end{widetext}
Note that the integrals converge both for $u_{1,2}\to 0$ and $u_{1,2}\to \infty$. We have computed the $(F_1,F_2,a_1,a_2)$-integral in Eq.~(\ref{c2}) using Monte-Carlo sampling, followed by numerically carrying out the $(u_1,u_2)$-integration. With the density $\rho_a$ given in (\ref{ProbDist}), $c_2$ turns out to be negative. To the first subleading order we find the correction to the inverse localization length as
\begin{eqnarray}
\Delta(\xi^{-1}) = c_1 B^{4/5}\left(1+\frac{c_2}{c_1}B^{1/5}+O(B^{2/5})\right).
\end{eqnarray}
Note that the subleading corrections vary slowly with $B$ and thus are expected to affect fits of the magnetoconductance to a simple power law $B^\gamma$. Indeed, defining an `effective exponent' as
\begin{eqnarray}
\!\!\!\gamma = \frac{d \ln|\Delta(\xi^{-1})|}{d\ln B} = \frac{4}{5}+\frac{1}{5}\frac{B^{1/5}}{c_1/c_2+ B^{1/5}}+O(B^{2/5}),\,
\end{eqnarray}
one expects to see apparent exponents that deviate from the asymptotically exact value 4/5 for any small but finite $B\ll 1$. The sign of the correction depends on the relative sign of $c_1$ and $c_2$. The numerical data obtained for the full lattice model is consistent with a positive correction to the exponent, cf.\ the inset of Fig.~3 in the main text. However, the normalized hierarchical model predicts the opposite sign.
We believe that this qualitative difference is due to the fact that the normalized recursion neglects correlations of free energy differences at different scales, as explained above. A more realistic model, which builds in such correlations, was given in Eq.~(\ref{construction_nonnorm}), where the normalizing factors are dropped in the recursive definition of path weights. The expression for ${\cal V}_2$ is easy to derive in this case as well,
\begin{widetext}
\begin{eqnarray}
c_2 &\approx& - \frac{1}{(\ln 2)^2} \int_\Lambda^\infty \frac{du_1}{u_1^{1+\theta}} \int_\Lambda^{u_1} \frac{du_2}{u_2^{2+\theta}} \int_1^\infty dF_1 \int_0^\infty dF_2 \int_{-\infty}^\infty da_1 \rho_a(a_1) \int_{-\infty}^\infty da_2 \rho_a(a_2) \left[ \mathcal{V}_2^{(I)} + \mathcal{V}_2^{(II)} \right], \label{c2modified} \\
\mathcal{V}_2^{(I)} &=& \ln \left| \frac{1 + e^{-F_1 + ia_1 u_1^{1+\zeta}} + e^{-F_2 + ia_2 u_2^{1+\zeta}}}{1+e^{-F_1} + e^{-F_2}}\right| -\ln \left| \frac{1 + e^{-F_1 + ia_1 u_1^{1+\zeta}}}{1+e^{-F_1}}\right| - \ln \left| \frac{1 + e^{-F_2 + ia_2 u_2^{1+\zeta}}}{1+e^{-F_2}}\right|, \nonumber \\
\mathcal{V}_2^{(II)} &=& \ln \left| \frac{1 + e^{-F_1 + ia_1 u_1^{1+\zeta}}(1 + e^{-F_2 + ia_2 u_2^{1+\zeta}})}{1+e^{-F_1}(1 + e^{-F_2})}\right| - \ln \left| \frac{1 + e^{-F_1 + ia_1 u_1^{1+\zeta}}}{1+e^{-F_1}}\right|, \nonumber
\end{eqnarray}
\end{widetext}
where the lower cutoff $\Lambda$ is a number $\lesssim 1$, ensuring that the recursion ends at $L_k = \Lambda \ell_B$. We used $\Lambda = 0.125$ for the numerical computation of $c_2$ below. This cutoff is required since the $u_2$-integral in Eq.~(\ref{c2modified}) does not converge at infinitesimally small length scales. This reflects the fact that the $S_{\mathcal{L}}$ (and thus the loop free energies) have a non-trivial distribution already at $B=0$, due to interferences at small scales $L\ll \ell_B$. This distribution cannot be captured easily by the virial expansion. Instead we have to introduce a small scale cut-off at some fixed length scale $L_k \lesssim \ell_B$. We can safely assume that the small scale interference is incorporated into the free energy differences at that smallest scale. Thereby we rely on the fact that smaller loops enclose negligible flux and thus do not contribute significantly to magnetoconductance, nor affect much the free energy distribution at small scales. Finally, this prescription leads to a similar virial expansion in powers of $B^{1/5}$, however with different coefficients $c_{k>1}$. The integral~(\ref{c2modified}) yields $ c_2 \approx 4.9 \times 10^{-2}$. This has the {\em same} sign as $c_1$ and thus leads to an ``effective exponent'' bigger than 4/5, which comes closer to the phenomenology observed in the full lattice model, as one may expect. As mentioned before, the various definitions of the hierarchical construction only affect the coefficients of the {\em subleading} terms in the virial expansion.
\subsection{D. Effect of small denominators and resonances}
The quantitative effect of the subleading terms is of course non-universal, as are the coefficients $c_{1,2}$. A variation of such effects is actually also found in the full lattice sum of forward-directed paths. It may seem dangerous to evaluate path sums of products of denominators which can become arbitrarily small. While the logarithmic average of such sums is mathematically well-defined, it is known that backscattering and self-energy effects, or a Coulomb gap in the density of states, reduce the influence of such resonances.
For this reason the toy models considered in the earlier literature~\cite{Shklovskii1991,Prior2009} have restricted themselves to finite denominators. Numerically evaluating the sum over all paths as given in the main text, without restricting the occurrence of resonant denominators, we found an effective exponent of order $\gamma\approx 0.88$. However, the deviation from $4/5$ turned out to be much smaller for a toy model where we restricted onsite energies to the interval $[1/2,1]$. It is thus suggestive to attribute the stronger deviations with resonances included to an enhanced value of $c_2$.
\subsection{E. Higher terms in the droplet expansion}
It is not difficult to write down the disorder average of the higher order terms in Eq.~(\ref{VirialExpansion}) as appropriate integrals. One can check that the generic term $\overline{\mathcal{V}_k}$ varies as
\begin{eqnarray}
\overline{\mathcal{V}_k} = -c_k N B^{\frac{1+k\theta}{1+\zeta}}.
\end{eqnarray}
To illustrate the procedure, we give the diagrams contributing to ${\cal V}_3$ in Fig.~\ref{VirialTermsOrder3}. The corresponding expressions for the connected terms are given below. Subscripts 1, 2 and 3 denote three loops with lengths $L_1 \geq L_2 \geq L_3$. For brevity, we only consider the normalized model and give the {\em connected} terms in $\mathcal{V}_3^{(k)}$ as $ V_3^{(k)}(B) - V_3^{(k)}(B=0)$, where
\begin{eqnarray}
V_3^{(I)}(B) &=& \ln \left| 1 + W_1 + W_2 + W_3 + W_2 W_3 \right| ,\nonumber \\
V_3^{(II)}(B) &=& \ln \left| 1 + W_1 + W_2 + W_3\right| ,\nonumber\\
V_3^{(III)}(B) &=& \ln \left| 1 + W_1 + W_2 + W_2 W_3\right|,\nonumber \\
V_3^{(IV)}(B) &=& \ln \left| 1 + W_1 + W_2 + W_1 W_3 \right| ,\nonumber\\
V_3^{(V)}(B) &=& \ln \left| 1 + W_1 + W_1 W_2 + W_1W_3 + W_1W_2W_3\right| ,\nonumber \\
V_3^{(VI)}(B) &=& \ln \left| 1 + W_1 + W_1 W_2 + W_1 W_3\right| ,\nonumber \\
V_3^{(VII)}(B) &=& \ln \left| 1 + W_1 + W_1 W_2 + W_1 W_2 W_3\right|, \nonumber
\end{eqnarray}
with $ W_i \equiv W_{\mathcal{L}_i}(B)$. Continuing along these lines, $\ln \Delta\sigma_N(B) $ can be calculated to any desired order at a given field $B$.
\subsection{F. Remarks on fermions}
It might be interesting to generalize the hierarchical model to the case of fermions. Since the locator expansion yields path amplitudes with positive and negative signs, it would seem natural to include random signs $s_\mathcal{L}$ in a hierarchical droplet model. However, several subtleties may require further modifications to capture the details of fermionic magnetoconductance. For example, a weak field can have a significant effect on small loops whose branches have nearly opposite amplitudes. This may be reflected in a non-trivial dependence of the free energy costs $f_\mathcal{L}$ on $B$, which may enhance subleading corrections and potentially even change their exponent. It is possible that the observed effective fermionic exponents $\gamma<4/5$ in the non-perturbative regime are due to such effects. More detailed investigations are necessary to clarify these issues.
\end{document}
{ "timestamp": "2012-10-16T02:02:07", "yymm": "1210", "arxiv_id": "1210.3726", "language": "en", "url": "https://arxiv.org/abs/1210.3726" }
\section{Introduction} Quantum computers take direct advantage of superposition and entanglement to perform computations. Because quantum algorithms compute in ways which classical computers cannot, for certain problems they provide exponential speedups over their classical counterparts \cite{Micheli2006,Ni2008a,Kotochigova2006a,Deiglmayr2008,DeMille2002,Carr2009,Anmer,zhuminima}. That prospect has fostered a variety of proposals suggesting means to implement such a device \cite{DeMille2002, Sorensen2004, Wallraff2004}. DeMille has detailed a prototype design for quantum computation using ultracold polar molecules, trapped in a one-dimensional optical lattice, partially oriented in an external electric field, and coupled by the dipole-dipole interaction \cite{DeMille2002}. This offers a promising platform for quantum computing because scale-up appears feasible to obtain large networks of coupled qubits \cite{Carr2009, Friedrich2009, Lee2005, Kotochigova2006a, Yelin2006, Ni2008a}. In previous work, we focused on entanglement and on consequences of using a strong external electric field with appreciable gradient, required to prevent quenching of the dipole moments by rotation and to enable addressing individual qubit sites \cite{QiPendular}. The molecules were represented as identical, rigid dipoles a fixed distance apart and undergoing pendular oscillations imposed by the external electric field. We determined the dependence of the entanglement of the pendular qubit states, as measured by the concurrence function, on three unitless variables, all scaled by the rotational constant. The first specifies the Stark energy and intrinsic angular shape of the qubits; the second specifies the magnitude of the dipole-dipole coupling; the third specifies the thermal energy. Under conditions deemed amenable for proposed quantum computers, we found that both the concurrence and a key frequency shift, $\Delta \omega$, which plays a major role in logic gates, become very small for the ground eigenstate. For such weak entanglement to suffice for the operation of logic gates, the resolution must be high enough to detect the $\Delta \omega$ shift unambiguously. For diatomic molecules, the Stark effect is second-order; therefore a sizable external electric field is required to produce the requisite dipole moments in the laboratory frame. In a subsequent study, we examined symmetric top molecules as candidate qubits \cite{symmetricTop}. Symmetric top molecules offer advantages resulting from a first-order Stark effect, which renders the effective dipole moments nearly independent of the field strength. That permits the use of a much lower external field strength in addressing sites. Moreover, for a particular choice of qubits, the electric dipole interactions become isomorphous with NMR systems. Here we study further aspects of how to implement a set of basic quantum gates for pendular qubit states of polar diatomic or linear molecules. We apply Multi-Target Optimal Control Theory (MTOCT) \cite{Vivie1999,Vivie2004,Vivie2005,Vivie2011,Rabitz2000,Rabitz2010,Yamashita2009a,Yamashita2009b,Yamashita2010,Yamashita2011} to design laser pulses that enable resolving and inducing transitions between specified states of the qubit system. This approach has previously been employed to study optimal control for elements of quantum computation in molecular systems, using vibrational or rotational states as qubits \cite{Yamashita2009b,Bomble2010,Vivie2011,Yamashita2010,Sugny2009}.
Our use of pendular qubit states incorporates more fully the effects of the external electric field and thereby simplifies the gate operations. Section II specifies the Hamiltonian defining the pendular qubits, as well as fundamental aspects of MTOCT. In Sec. III we present simulation results using MTOCT to obtain optimized laser pulses for realizing NOT, Hadamard and CNOT logic gates; those gates, with the addition of the $\pi/8$ phase gate, provide the basis for universal quantum computation \cite{Book_Quantum}. Section IV discusses strategies to contend with cases in which the $\Delta \omega$ shift is zero or becomes too small to resolve.
\section{Theory} \subsection{Eigenstates for Polar Molecules in Pendular States} The Hamiltonian for an individual trapped polar diatomic or linear molecule in an external electric field $\boldsymbol{\epsilon}$ can, for our purposes, be reduced to just the rotational kinetic energy and Stark interaction terms \cite{QiPendular}: \begin{equation} \mathcal{H}_{S}=B\cdot\boldsymbol{J}^{2}-\mu\epsilon\cos\theta \label{eq:reduced} \end{equation} This represents a spherical pendulum: $B\boldsymbol{J}^{2}$ is the rotational energy, with $B$ the rotational constant; $\mu\epsilon\cos\theta$ is the Stark interaction, with $\mu$ the permanent dipole moment and $\theta$ the polar angle between the molecular axis and the external field direction. At the ultracold temperatures that we consider, the translational kinetic energy of the trapped molecules is very small and nearly harmonic within the trapping well, so the trapping energy is nearly constant and hence is omitted. Interactions involving open-shell electronic structure, nuclear spins, or quadrupole moments are also omitted (but could be incorporated in familiar ways \cite{PRL109}). The eigenstates of $\mathcal{H}_{S}$, resulting from mixing of the field-free rotational states by the Stark interaction, are designated as pendular states. As proposed by DeMille \cite{DeMille2002}, the qubits $\left|0\right\rangle $ and $\left|1\right\rangle $ are chosen as the two lowest $M=0$ pendular states, with $\tilde{J}=0$ and $1$, respectively. Here the tilde on $\tilde{J}$ indicates that it is no longer a good quantum number, due to the Stark mixing. However, $M$ remains good as long as azimuthal symmetry about $\boldsymbol{\epsilon}$ is maintained. The qubits thus are superpositions of spherical harmonics, \begin{equation} \left|0\right\rangle =\sum_{j}a_{j}\cdot Y_{j,\,0}\left(\theta,\,\varphi\right);\;\;\; \left|1\right\rangle =\sum_{j}b_{j}\cdot Y_{j,\,0}\left(\theta,\,\varphi\right) \label{eq:qubits definition} \end{equation} Adding a second molecule into the trap, identical to the first but at distance $r_{12}$ from it, introduces in addition to its pendular term the dipole-dipole interaction, $V_{dd}$. Averaging over the azimuthal angles \cite{QiPendular}, which for $M = 0$ states are uniformly distributed, reduces the dipole-dipole interaction to \begin{equation} V_{dd}=\Omega_{\alpha}\cdot\cos\theta_{1}\cdot\cos\theta_{2} \label{eq:vdd} \end{equation} where $\Omega_{\alpha}=\Omega\left(1-3\cos^{2}\alpha\right)$, with $\Omega=\nicefrac{\mu^{2}}{r_{12}^{3}}$. As depicted in Fig. \ref{skewmap}, $\alpha$ is the angle between the array axis and the electric field direction $\boldsymbol{\epsilon}$; $\theta_{1}$ and $\theta_{2}$ are the polar angles between the dipoles and the field direction. To exemplify logic gate operations, it is sufficient to consider just two molecules.
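The single-site quantities entering the two-molecule treatment below (the qubit eigenenergies and the cosine matrix elements) follow from diagonalizing Eq.~(\ref{eq:reduced}) in a truncated spherical-harmonic basis. The following minimal sketch, in Python, is our own illustration rather than part of the original calculations; the truncation $j_{max}$ is an arbitrary illustrative choice, and the standard matrix element $\langle j,0|\cos\theta|j+1,0\rangle = (j+1)/\sqrt{(2j+1)(2j+3)}$ is used.
\begin{verbatim}
import numpy as np

# Sketch (our own illustration): diagonalize H_S/B = J(J+1) - x cos(theta),
# with x = mu*eps/B, in a truncated basis of spherical harmonics Y_{j,0}.
# <j,0|cos(theta)|j+1,0> = (j+1)/sqrt((2j+1)(2j+3)) is the standard matrix
# element; jmax = 20 is an illustrative truncation, not taken from the text.
def pendular(x, jmax=20):
    j = np.arange(jmax + 1)
    c = (j[:-1] + 1.0) / np.sqrt((2.0 * j[:-1] + 1.0)
                                 * (2.0 * j[:-1] + 3.0))
    cos = np.diag(c, 1) + np.diag(c, -1)   # <j,0|cos(theta)|j',0>
    H = np.diag(j * (j + 1.0)) - x * cos   # reduced Hamiltonian H_S/B
    E, V = np.linalg.eigh(H)
    v0, v1 = V[:, 0], V[:, 1]              # pendular qubits |0>, |1>
    C0, C1, Cx = v0 @ cos @ v0, v1 @ cos @ v1, v0 @ cos @ v1
    return (E[0], E[1]), (C0, C1, Cx)      # (W0/B, W1/B), Eq. (eq:c01x)

(W0, W1), (C0, C1, Cx) = pendular(2.0)       # site 1: mu*eps/B = 2
(W0p, W1p), (C0p, C1p, Cxp) = pendular(3.0)  # site 2: mu*eps/B = 3
\end{verbatim}
The resulting reduced energies and cosine elements can be compared against the values quoted in Sec. III.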
In the basis set of qubit pendular states $\left\{ \left|00\right\rangle ,\,\left|01\right\rangle ,\,\left|10\right\rangle ,\,\left|11\right\rangle \right\} $, the two-molecule Hamiltonian can be expressed as $\mathcal{H}_{tot}=\mathcal{H}_{S1}+\mathcal{H}_{S2}+V_{dd}$, with \begin{eqnarray} \mathcal{H}_{S1} & = & \left(\begin{array}{cc} W_{0}\\ & W_{1} \end{array}\right)\otimes\mathbf{I}_{2};\;\mathcal{H}_{S2}=\mathbf{I}_{2}\otimes\left(\begin{array}{cc} W_{0}^{\prime}\\ & W_{1}^{\prime} \end{array}\right)\nonumber \\ V_{dd} & = & \Omega_{\alpha}\left[\left(\begin{array}{cc} C_{0} & C_{x}\\ C_{x} & C_{1} \end{array}\right)\otimes\left(\begin{array}{cc} C_{0}^{\prime} & C_{x}^{\prime}\\ C_{x}^{\prime} & C_{1}^{\prime} \end{array}\right)\right] \label{eq:vdd1} \end{eqnarray} Here $W_{0}$ and $W_{1}$ are the eigenenergies of the pendular qubit states $\left|0\right\rangle $ and $\left|1\right\rangle $, and $\mathbf{I}_{2}$ is the $2\times2$ identity matrix. $C_{0}$ and $C_{1}$ are the expectation values of $\cos\theta$ in the basis of $\left|0\right\rangle$ and $\left|1\right\rangle$, while $C_{x}$ is the transition dipole moment between $\left|0\right\rangle$ and $\left|1\right\rangle$. The matrix elements thus are defined as: \begin{equation} C_{0}=\left\langle 0\left|\cos\theta\right|0\right\rangle ;\;\; C_{1}=\left\langle 1\left|\cos\theta\right|1\right\rangle ;\;\; C_{x}=\left\langle 0\left|\cos\theta\right|1\right\rangle \label{eq:c01x} \end{equation} In Eq. \ref{eq:vdd1}, primes are used to indicate that the external field strength differs at the locations of the two molecules, as required to distinguish between the qubit sites. The entanglement of the pendular qubit states of $\mathcal{H}_{tot}$ was evaluated in Ref.~\cite{QiPendular}. From numerical results, simple approximate formulas were obtained that provide the concurrence and the key frequency shift, $\Delta \omega$, in terms of two unitless reduced variables: $x=\nicefrac{\mu \epsilon}{B}$ and $\nicefrac{\Omega_{\alpha}}{B}$. When $\nicefrac{\Omega_{\alpha}}{B} \ll 1$, the usual case, the concurrence is proportional to $\Delta \omega$, which is given by \begin{equation} \Delta \omega = \left|\Omega_{\alpha} \right| \left(C_{1}-C_{0}\right)\left(C^{\prime}_{1}-C^{\prime}_{0}\right) \label{eq:deltaomega} \end{equation} Figure \ref{ratio} plots this relation, wherein $\nicefrac{\Delta \omega}{\Omega_{\alpha}}$ depends only on $x$ and $x^{\prime}-x$.
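For concreteness, the sketch below (again our own illustration, with all arguments as placeholders to be supplied, e.g., by the pendular() sketch above) shows how Eq.~(\ref{eq:vdd1}) assembles from Kronecker products and how Eq.~(\ref{eq:deltaomega}) is evaluated.
\begin{verbatim}
import numpy as np

# Sketch (our own illustration): the 4x4 two-molecule Hamiltonian of
# Eq. (eq:vdd1) and the frequency shift of Eq. (eq:deltaomega).
def H_tot(W, Wp, C, Cp, Omega_a):
    (W0, W1), (W0p, W1p) = W, Wp
    (C0, C1, Cx), (C0p, C1p, Cxp) = C, Cp
    I2 = np.eye(2)
    HS1 = np.kron(np.diag([W0, W1]), I2)      # molecule 1 pendular term
    HS2 = np.kron(I2, np.diag([W0p, W1p]))    # molecule 2 pendular term
    M = np.array([[C0, Cx], [Cx, C1]])        # cos(theta) matrix, site 1
    Mp = np.array([[C0p, Cxp], [Cxp, C1p]])   # cos(theta) matrix, site 2
    return HS1 + HS2 + Omega_a * np.kron(M, Mp)

def delta_omega(Omega_a, C, Cp):
    (C0, C1, _), (C0p, C1p, _) = C, Cp
    return abs(Omega_a) * (C1 - C0) * (C1p - C0p)   # Eq. (eq:deltaomega)
\end{verbatim}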
\subsection{Multi-Target Optimal Control Theory (MTOCT)} Much attention has recently been devoted to applying optimal control theory to elements of quantum computation in molecular systems \cite{Vivie1999, Vivie2004, Vivie2005, Vivie2011}. The basic idea is to design laser pulses which allow manipulation of transitions within each qubit separately. For implementing basic quantum gates, the aim is to achieve large transition probabilities, with the correct phase, from a specific initial state into a final target state by application of an external laser field, while minimizing the laser energy.
For our case, we can construct the following MTOCT objective function, $\Im$, which needs to be maximized~\cite{Yamashita2009b}: \begin{multline} \Im\left[\psi_{ik}\left(t\right),\,\psi_{fk}\left(t\right),\,\boldsymbol{E}\left(t\right)\right] =\sum_{k=1}^{z}\left\{ \left|\left\langle \psi_{ik}\left(T\right)|\phi_{fk}\right\rangle \right|^{2} \vphantom{\intop_{0}^{T}\left\langle \psi_{fk}\left(t\right)\left|i\left[H-\boldsymbol{\mu}\cdot\boldsymbol{E}\left(t\right)\right]+\frac{\partial}{\partial t}\right|\psi_{ik}\left(t\right)\right\rangle dt}\right. -\alpha_{0}\intop_{0}^{T}\frac{\left|\boldsymbol{E}\left(t\right)\right|^{2}}{S\left(t\right)}dt-2Re\left\{ \left\langle \psi_{ik}\left(T\right)|\phi_{fk}\right\rangle\right.\\ \left.\left.\times\intop_{0}^{T}\left\langle \psi_{fk}\left(t\right)\left|\frac{i}{\hbar}\left[H-\boldsymbol{\mu}\cdot\boldsymbol{E}\left(t\right)\right]+\frac{\partial}{\partial t}\right|\psi_{ik}\left(t\right)\right\rangle dt\right\} \right\} \label{eq:target function} \end{multline} where $z$ is the total number of targets; for $N$ qubits, $z=2^{N}+1$, where $2^N$ is the number of input-output transitions in the gate transformation and the additional target imposes the phase constraint. Thus, for the two-dipole system, $z=5$. Here, $\psi_{ik}$ is the wave function of the $k$-th target, driven by the laser field $E\left(t\right)$, with initial condition $\psi_{ik}\left(0\right)=\phi_{ik}$, whereas $\psi_{fk}$ is the wave function of the $k$-th target driven by the same laser with final condition $\psi_{fk}\left(T\right)=\phi_{fk}$. Thus, the first term on the right-hand side represents the overlap between the laser-driven wave functions and the desired target states. In the second term, $E\left(t\right)$ is the laser field and $S\left(t\right)=\sin^{2}\left(\nicefrac{\pi t}{T}\right)$ is the laser envelope function, which guarantees an experimentally appropriate slow turn-on and turn-off of the pulse \cite{Vivie1999,PRL108}. $T$ is the total duration of the laser pulse, and $\alpha_{0}$ is a positive penalty factor chosen to weight the importance of the laser fluence. The last term enforces the time-dependent Schr\"odinger equations for the wave functions $\psi_{ik}(t)$ and $\psi_{fk}(t)$, with $\mathcal{H}_{tot}$ the Hamiltonian of the system.
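As an illustration of how the first two contributions to $\Im$ are evaluated numerically, consider the following sketch (our own; the discretization and variable names are placeholders, and interior time points are used to avoid the zeros of $S(t)$ at $t=0,T$).
\begin{verbatim}
import numpy as np

# Sketch (our own illustration): the target-overlap term and the fluence
# penalty of Eq. (target function) for a trial pulse sampled on a grid.
# finals[k] = psi_ik(T), targets[k] = phi_fk; alpha0 is the penalty factor.
def objective_terms(finals, targets, E, T, alpha0):
    overlap = sum(abs(np.vdot(f, g)) ** 2
                  for f, g in zip(finals, targets))
    t = np.linspace(0.0, T, len(E) + 2)[1:-1]   # interior grid points
    S = np.sin(np.pi * t / T) ** 2              # envelope function S(t)
    penalty = alpha0 * np.trapz(np.abs(E) ** 2 / S, t)
    return overlap, penalty
\end{verbatim}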
Requiring $\delta \Im=0$ specifies the equations satisfied by the wave function, the Lagrange multiplier, and the optimized laser field \cite{Yamashita2009b}: \begin{eqnarray} i\hbar\frac{\partial}{\partial t}\psi_{ik}\left(t\right) & = & \left\{ \mathcal{H}_{tot}-\boldsymbol{\mu}\cdot\boldsymbol{E}\left(t\right)\right\} \psi_{ik}\left(t\right)\nonumber \\ i\hbar\frac{\partial}{\partial t}\psi_{fk}\left(t\right) & = & \left\{ \mathcal{H}_{tot}-\boldsymbol{\mu}\cdot\boldsymbol{E}\left(t\right)\right\} \psi_{fk}\left(t\right)\nonumber \\ & & \psi_{ik}\left(0\right)=\phi_{ik};\;\psi_{fk}\left(T\right)=\phi_{fk} \label{eq:schrodinger}\\ & & \mbox{and}\; k=1,\dots,z\nonumber \end{eqnarray} \begin{eqnarray} E\left(t\right) & = & -\frac{z\cdot \mu \cdot S\left(t\right)}{\hbar\cdot\alpha_{0}}\cdot\sum_{k=1}^{z}\mbox{Im}\left\{ \left\langle \psi_{ik}\left(t\right)|\psi_{fk}\left(t\right)\right\rangle \right.\nonumber \\ & & \left.\left\langle \psi_{fk}\left(t\right)\left|\cos\theta_{1}+\cos\theta_{2}\right|\psi_{ik}\left(t\right)\right\rangle \right\} \label{eq:Et} \end{eqnarray} In order to examine the performance of the optimized laser pulse, we evaluate two quantities \cite{Yamashita2009b,Palao2003,Shioya2007}: the average transition probability, given by \begin{equation} \bar{P}=\frac{1}{z}\cdot\sum_{k=1}^{z}\left|\left\langle \psi_{ik}\left(T\right)\right|\left.\Phi_{fk}\right\rangle \right|^{2} \label{eq:averagetransion} \end{equation} and the fidelity, given by \begin{equation} F=\frac{1}{z^{2}}\cdot\left|\sum_{k=1}^{z}\left\langle \psi_{ik}\left(T\right)\right|\left.\Phi_{fk}\right\rangle \right|^{2} \label{eq:fidelity} \end{equation} The average transition probability involves only the overlap of $\psi_{ik}\left(T\right)$ and $\Phi_{fk}$, and does not reflect the phase difference between the laser-driven final state $\psi_{ik}\left(T\right)$ and the designed target state $\Phi_{fk}$. Since the phase information is very important for quantum logic gates, we use only the fidelity to assess our simulation results. \section{Simulation Results for Polar Diatomic Molecules} A number of polar diatomic molecules offer properties suitable for a quantum computer \cite{DeMille2002,Lee2005,Yelin2006,QiPendular,Yamashita2009b,Yamashita2011}. For our numerical study, we chose SrO, for which the dipole moment is $\mu=8.9$ Debye and the rotational constant is $B = 0.33\,\unit{cm^{-1}}$ \cite{SrO}. Since we consider trap temperatures in the microkelvin range, with $\nicefrac{k_{B}T}{B}\,\sim\,10^{-6}$, thermal excitations are negligible. To specify the pendular states and other properties requires assigning the external field strengths at the sites of the two molecules and the distance between them. We used scaled field strengths of $\nicefrac{\mu \epsilon}{B} = 2$ and 3, corresponding to $\epsilon=4.4$ and 6.6 kV/cm, respectively, at the two sites. Initially, we took $r_{12} = 500\,\unit{nm}$, a typical spacing for molecules trapped in an optical lattice \cite{DeMille2002,QiPendular}. We set the angle $\alpha\,=\,90^{\circ}$ (cf. Fig. \ref{skewmap}), the usual experimental choice. Then $\nicefrac{\Omega_{\alpha}}{B}=9.7\times10^{-5}$. The corresponding pendular eigenstate reduced energies are $\nicefrac{E_{i}}{B} = -1.65,\,1.19,\, 1.92,\, 4.77$ and the cosine matrix elements are $C_{0} = 0.480$, $C_{1} = -0.208$, $C_{0}^{\prime} = 0.579$ and $C_{1}^{\prime} = -0.164$. Thus, from Eq. \ref{eq:deltaomega} we obtain $\Delta \omega = 51 \; \unit{kHz}$. As seen in Fig.
\ref{ratio}, $\Delta \omega$ varies only modestly with the field strength in the range $\nicefrac{\mu \epsilon}{B}= 2 - 5$ considered optimum \cite{DeMille2002,QiPendular}, but $\Delta \omega$ is directly proportional to the dipole-dipole coupling strength, $\Omega_{\alpha}$. At present, it remains an open question whether, in the presence of line broadening induced by the static external electric field, adequate resolution can be obtained to resolve unambiguously a frequency shift of only $\sim$50 kHz \cite{QiPendular}. Such a small $\Delta \omega$ is also a severe handicap for our theoretical simulation of laser-driven logic gates. For instance, the laser pulse duration \cite{Bomble2010} required for realizing the CNOT gate is $\tau\,=\,\nicefrac{10\hbar}{\Delta \omega}$. Hence a frequency shift as small as 50 kHz requires a laser pulse duration of at least 31 $\unit{\mu s}$. That is much longer than the self-evolution period of the system, about 29 ps. If we set the computation time step at 0.25 ps, a single simulation run would need $1.25\times10^{8}$ steps. Our MTOCT calculations involve many iterative runs; e.g., for a CNOT gate about 440 iterations. With current computers, such a calculation would be daunting: it would need about 700 GB of RAM, and each iteration would take about 2.8 days, in total nearly 3.5 years on a standard i7 core. To make the calculation feasible, we reduced the spacing between the dipoles ten-fold, which increases $\Omega$ by 1000-fold, so that $\Delta \omega= 50\,\unit{MHz}$ and the laser pulse duration shortens to $\sim$33 ns. Then simulation runs have $\sim 10^{5}$ steps, the RAM needed shrinks to 700 MB, and the computation time to about 4 minutes per iteration, so $\sim$30 hours for 440 iterations. The reduced dipole-dipole spacing, which becomes only 50 nm, actually corresponds to the range recently proposed for plasma-enhanced, electric/electrooptical traps, for which the trap frequencies can exceed 100 MHz \cite{kais2010, Chang2009, Murphy2009}, and might be attainable in an optical ferris wheel device \cite{Franke2007}. Reducing the spacing so markedly is not considered practical, however, because it would strongly foster inelastic, spontaneous Raman scattering of lattice photons and hence induce unacceptably large decoherence \cite{DeMille2002,Carr2009,lattice}. We resorted to taking $r_{12}$ unrealistically small reluctantly, but doing so enables us to illustrate the general utility of MTOCT applied to quantum logic gates. In the simulations, the time evolution of $\psi_{ik}\left(t\right)$ and $\psi_{fk}\left(t\right)$ is calculated from Eq. \ref{eq:schrodinger} by the fourth-order Runge-Kutta method, using time steps of 0.25 ps. The penalty factor $\alpha_{0}$ is set to $5\times10^{6}$, the same as in Ref. \cite{Yamashita2009b}; it serves to minimize the fluence of the external field \cite{Yamashita2011}. For the optimized laser pulse $E(t)$ from Eq. \ref{eq:Et}, we adopted a rapidly convergent iteration using a first-order split-operator approach \cite{Rabitz1998}. The maximum iteration number was set at 600. We note that Zaari and Brown recently studied the effect of laser pulse-shaping parameters on the fidelity of quantum gates, showing that amplitude variation and frequency resolution play an important role \cite{Brown2012}; we will explore the impact of those two factors in future work.
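To make the numerical workflow concrete, the following sketch (our own illustration, in units with $\hbar=1$) shows a fourth-order Runge-Kutta step for Eq.~(\ref{eq:schrodinger}) and the evaluation of the gate metrics, Eqs.~(\ref{eq:averagetransion}) and (\ref{eq:fidelity}). Here H0 stands for the $4\times4$ Hamiltonian of Eq.~(\ref{eq:vdd1}) and mu\_mat for the matrix of $\cos\theta_{1}+\cos\theta_{2}$; both are assumed to be supplied.
\begin{verbatim}
import numpy as np

def rk4_step(psi, H, dt):
    """One 4th-order Runge-Kutta step of i dpsi/dt = H psi (hbar = 1)."""
    f = lambda p: -1j * (H @ p)
    k1 = f(psi)
    k2 = f(psi + 0.5 * dt * k1)
    k3 = f(psi + 0.5 * dt * k2)
    k4 = f(psi + dt * k3)
    return psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(psi0, H0, mu_mat, E, dt):
    """Drive psi0 with the sampled pulse E under H(t) = H0 - E(t)*mu_mat."""
    psi = psi0.astype(complex)
    for Et in E:                     # one field sample per time step
        psi = rk4_step(psi, H0 - Et * mu_mat, dt)
    return psi

def gate_metrics(finals, targets):
    """Average probability (eq:averagetransion) and fidelity (eq:fidelity)."""
    ov = np.array([np.vdot(f, g) for f, g in zip(finals, targets)])
    z = len(ov)
    return np.mean(np.abs(ov) ** 2), np.abs(ov.sum()) ** 2 / z ** 2
\end{verbatim}
As noted above, the average probability discards the relative phases of the overlaps, whereas the coherent sum in the fidelity retains them.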
In addition to the usual four-qubit basis set $\left\{ \left|00\right\rangle ,\,\left|01\right\rangle ,\,\left|10\right\rangle ,\,\left|11\right\rangle \right\}$, we also included the phase correction, introduced into the MTOCT approach by Tesch and de Vivie-Riedle \cite{Vivie2004}. In Ref. \cite{Yamashita2009b}, Mishima and Yamashita pointed out that the purpose of the phase constraint is to prevent the states from evolving to different phases, which is necessary for correct quantum logic gates. Recently, Zaari and Brown pointed out that aligning the phases of all qubits appropriately can lead to effective subsequent quantum gates (laser pulses) \cite{Brown2011}. In Table \ref{table_not}, we show both the initial and target states for the NOT gate for the two dipoles. The optimized laser pulses for NOT gates applied to dipole 1 and dipole 2 separately are shown in the upper panel of Fig. \ref{NOT}. The pulse for the NOT gate applied to dipole 1 was obtained after 146 iterations with a converged fidelity difference of $1.26\times10^{-6}$; the pulse for dipole 2 took 76 iterations with a converged fidelity of 0.985. In our simulation, the total iteration number is governed by two criteria: the fidelity itself and the change in fidelity from the previous step. If the fidelity is greater than 0.9 and the change is smaller than $10^{-5}$, the iteration stops. Both pulses have similar maximum intensity, about 1.2 kV/cm. In order to verify the performance of this pulse, as well as the time evolution of the system within the laser pulse, we chose an initial state $\cos\left(\frac{\pi}{3}\right)\left|01\right\rangle +\sin\left(\frac{\pi}{3}\right)\left|10\right\rangle $ and examined the population evolution. For an ideal pulse, the action of a NOT gate on dipole 1 should yield a final state of $\cos\left(\frac{\pi}{3}\right)\left|11\right\rangle +\sin\left(\frac{\pi}{3}\right)\left|00\right\rangle $. Correspondingly, the final state for a NOT gate on dipole 2 should be $\cos\left(\frac{\pi}{3}\right)\left|00\right\rangle +\sin\left(\frac{\pi}{3}\right)\left|11\right\rangle $. The population evolution driven by this pulse is shown in the lower panel of Fig. \ref{NOT}. Before achieving the final state, the population of each state oscillates through a number of cycles. The NOT pulse for dipole 1 produced a converged population of 0.756 for state $\left|00\right\rangle$ and 0.240 for $\left|11\right\rangle$, while the pulse for dipole 2 yielded populations of 0.263 and 0.723 for states $\left|00\right\rangle$ and $\left|11\right\rangle$, respectively. As these final yields approach the ideal case $\left(\sin^{2}{60^{\circ}}=0.75; \; \cos^{2}{60^{\circ}}=0.25\right)$, we conclude that the optimal laser pulse drives the system from an initial state to a target state according to Table \ref{table_not} for NOT gates. The converged laser pulses by which to realize the Hadamard gate for both dipoles are shown in Fig. \ref{Hadamard}, and the initial and target states are given in Table \ref{table_Hadamard}. The fidelities of the pulses for the two sites are 0.944 and 0.902, respectively. The maximum intensity for the pulse is around 1.8 kV/cm, slightly larger than that for the NOT gates. We selected $\left|00\right\rangle$ as our initial state and plotted the time evolution in Fig. \ref{Hadamard}. The population evolution of the Hadamard gate for dipole 1 is similar to the situation of the NOT gate (Fig. \ref{NOT}).
The population of the different states oscillates and finally yields 0.491 for $\left|01\right\rangle$ and 0.508 for $\left|11\right\rangle$. For the Hadamard gate on dipole 2, the populations of $\left|10\right\rangle$ and $\left|11\right\rangle$ are almost zero during the entire evolution, but the population switches between $\left|00\right\rangle$ and $\left|01\right\rangle$. This population oscillation commences at 6 ns and continues until the end of the pulse, with a mean value of 0.5 and an amplitude of around 0.5. The final converged population is 0.529 for $\left|00\right\rangle$ and 0.470 for $\left|01\right\rangle$. The population transfer, shown in Table \ref{table_Hadamard}, going from an initial state to a target state, indicates that the designed pulse successfully implemented the Hadamard gate. There is a small variation among the fidelity values we obtained in the simulation, which range from 0.902 to 0.985. This is due to the short duration of the laser pulse used in the simulation; the fidelity could be improved further by extending the pulse duration. Fig. \ref{CNOT} shows the converged laser pulse which performs the CNOT gate; the pulse is highly oscillatory, with a total duration of 33 ns. The rapid oscillation is due to the short self-evolution period of the system. The maximum amplitude of the pulse is around 1.5 kV/cm, in a range easily achieved experimentally. The laser pulse was obtained after 441 iterations and yields a fidelity of 0.975. If we use the same initial condition as for the NOT gates, the ideal final state should be $\cos\left(\frac{\pi}{3}\right)\left|01\right\rangle +\sin\left(\frac{\pi}{3}\right)\left|11\right\rangle $. The simulated population evolution due to the optimized laser pulse is plotted in the lower panel of Fig. \ref{CNOT}. The final populations are 0.0005, 0.2290, 0.0029 and 0.7675 for $\left|00\right\rangle ,\;\left|01\right\rangle ,\;\left|10\right\rangle $ and $\left|11\right\rangle $, respectively. The population evolution in Fig. \ref{CNOT} follows the initial and target assignments of Table \ref{table_cnot}, thus confirming the correct operation of the CNOT gate. In order to test, at least modestly, how the MTOCT approach responds to a change in pulse duration for the dipole-dipole system, we increased the spacing between the two dipoles to 75 nm. The frequency shift then becomes $\Delta \omega = 14.6 \, \unit{MHz}$ and the duration of the laser pulse is 110 ns. We carried out the simulations just for the CNOT gate. The optimized laser pulse is shown in Fig. \ref{CNOT_15P}. After 500 iterations, the converged fidelity is 0.90, slightly smaller than for the 50 nm spacing. Again, we tested one sample initial state. The population evolution is plotted in Fig. \ref{CNOT_15P}. As before, the population oscillation continued during the whole process, and the final populations obtained confirm the CNOT operation. This serves to indicate that the MTOCT approach is stable and provides a useful general means to implement logic gates for dipole-dipole systems. \section{Discussion} We have applied the MTOCT methodology to pendular states of a pair of polar molecules (SrO) to determine the optimum laser pulse for implementing the NOT, CNOT and Hadamard quantum logic gates. Our results confirm that, for the conditions adopted ($r_{12}=50\,\unit{nm}$, resulting in $\Delta \omega\,\sim\,50\,\unit{MHz}$), a single laser pulse (with minimum duration $\sim\,33$ ns and amplitude $<2$ kV/cm) suffices to operate these gates with high fidelity.
However, computational limitations (storage and time) did not permit us to treat conditions ($r_{12}\,=\,500$ nm, with $\Delta \omega \sim50$ kHz) that are considered congenial for experimental implementation. This shortcoming is also manifest, in different ways, in two previous applications of MTOCT to assess laser-operated logic gates for polar molecules. Table \ref{table_compare} compares our conditions with those studies. Both nominally emulated the design by DeMille \cite{DeMille2002}, with two ultracold trapped diatomic molecules entangled via the dipole-dipole interaction. The version most akin to ours, presented by Bomble et al. \cite{Bomble2010}, considered a pair of NaCs molecules, with the external static field aligned along the intermolecular axis, $r_{12}$. Then $\alpha=0^{\circ}$ (rather than $90^{\circ}$ as in our case) and hence $\Omega_{\alpha}\,=\,-2\Omega$. The qubits were taken as rotational states mixed by the Stark effect to second order (thus a fairly good approximation to the pendular eigenstates we used). The conditions adopted ($r_{12}\,=\,300$ nm, resulting in $\Delta\omega\sim120\,\unit{kHz}$) are considered suitable for experimental implementation. However, in carrying out the MTOCT computations, a very large step size of 10 ps was used; that avoided entirely the storage and time limitations we encountered (with our step size of 0.25 ps). Yet we found that replicate calculations using a 10 ps step size gave markedly irreproducible results for the optimal laser properties and population evolution of the logic gates in our simulation system. The other previous version, presented by Mishima and Yamashita \cite{Yamashita2009b, Yamashita2011}, omits altogether an external static electric field, and takes as qubits the lowest ``pure'' rotational states ($J \, =\,0$, $M\,=\,0$ and $J\,=\,1$, $M\,=\,0$) and the lowest vibrational states of each molecule. Specific alignments of the molecular axis with respect to a laboratory-fixed z-axis are considered, but that is unrealistic because without an external electric field the molecular axis distribution is isotropic and the laboratory projections of the dipole moments vanish, as emphasized elsewhere \cite{DeMille2002,QiPendular}. Also unrealistic from an experimental perspective is the choice of an extremely small distance between the molecules $\left(r_{12} \,=\, 5\,\unit{nm}\right)$. That produces large entanglement but would induce severe decoherence \cite{DeMille2002,lattice}. The MTOCT treatment of logic gates is nonetheless of interest, since the choice of ``pure'' rotational states as qubits results in $\Delta \omega \,=\,0$. Then, if the molecules are identical, transitions involving qubits on different sites cannot be resolved. In order to target the two sites individually, two different laser fields were used. In the MTOCT analysis, the separate laser pulses for gate operations were then much shorter and simpler than in our application using the same laser for both sites. For instance, to perform a CNOT gate using ``pure'' rotational states with two lasers took only $10^{3}$ ps \cite{Yamashita2009b, Yamashita2011}, whereas our use of pendular qubits with one laser required up to $10^{5}$ ps (see Figs. \ref{CNOT} and \ref{CNOT_15P}). Although the two-laser mode is theoretically inviting, it requires spatial resolution adequate for each laser to drive only one of the two molecules. That is not feasible unless the distance between the molecules is much larger than 5 nm.
Another means to contend with $\Delta \omega\,=\,0$, or with a shift too small to resolve, has been exemplified in designs employing superconducting flux qubits \cite{Groot2010}. Again, that method requires spatial resolution sufficient to enable qubits on different sites to be driven individually. Taken together, the three studies of Table \ref{table_compare} illustrate both the utility of MTOCT and the limitations imposed by present-day computational capability. Aptly, those limitations foster yearning for the arrival of a quantum computer. \section{Acknowledgments} For useful discussions, Jing Zhu thanks Ross Hoehn and Siwei Wei, and appreciates correspondence with Dr. Philippe Pellegrini. For support of this work at Purdue, we are grateful to the National Science Foundation CCI center, ``Quantum Information for Quantum Chemistry (QIQC)'', Award number CHE-1037992, and to the Army Research Office. At Texas A\&M, support was provided by the Institute for Quantum Science and Engineering, as well as the Office of Naval Research and NSF award CHE-0809651.
{ "timestamp": "2012-12-17T02:00:41", "yymm": "1210", "arxiv_id": "1210.3669", "language": "en", "url": "https://arxiv.org/abs/1210.3669" }
\section{Introduction} The configuration of the magnetic field is essential for understanding solar explosive phenomena such as flares and coronal mass ejections. The corona has been the subject of extensive modeling for decades, but these efforts have been hampered by our limited ability to determine the corona's three-dimensional structure \citep{Schrijver:2011,Sandman:2011}. Since the corona is optically thin, direct measurements of these magnetic fields are very difficult to implement, and the present observations of the magnetic fields based on the spectropolarimetric method (the Zeeman and the Hanle effects) are limited to the low layers of the solar atmosphere (photosphere and chromosphere). The problem of measuring the coronal field and its embedded electrical currents thus leads us to use numerical modelling to infer the field strength in the higher layers of the solar atmosphere from the measured photospheric field. Due to the low value of the plasma $\beta$ (the ratio of gas pressure to magnetic pressure), the solar corona is magnetically dominated \citep{Gary}. To describe the equilibrium structure of the static coronal magnetic field when non-magnetic forces are negligible, the force-free assumption is appropriate: \begin{equation} (\nabla \times\textbf{B})\times\textbf{B}=0 \label{one} \end{equation} \begin{equation} \nabla \cdot\textbf{B}=0 \label{two} \end{equation} subject to the boundary condition \begin{equation} \textbf{B}=\textbf{B}_{\textrm{obs}} \quad \mbox{on the photosphere} \label{three} \end{equation} where $\textbf{B}$ is the magnetic field and $\textbf{B}_{\textrm{obs}}$ is the measured vector field on the photosphere. Equation~(\ref{one}) states that the Lorentz force vanishes (as a consequence of $\textbf{J}\parallel \textbf{B}$, where $\textbf{J}$ is the electric current density) and Equation~(\ref{two}) describes the absence of magnetic monopoles. Based on the above assumption, the coronal magnetic field is modelled with nonlinear force-free field (NLFFF) extrapolation \citep{Inhester06,valori05,Wiegelmann04,Wheatland04,Wheatland:2009,tilaye09,Wheatland:2011,Amari:2010,Wiegelmann:2012,Jiang:2012}. From a mathematical point of view, appropriate boundary conditions for force-free modeling are the vertical magnetic field $B_{n}$ and the vertical current $J_{n}$ prescribed only for one polarity of $B_{n}$ \citep{Amari97,Amari99,Amari}. A direct use of these boundary conditions is implemented in Grad-Rubin codes \citep{Amari99}. \citet{Wheatland:2009} and \citet{Wheatland:2011} implemented the use of $B^{+}_{n}$ and $B^{-}_{n}$ solutions together with an error approximation to derive consistent solutions. Using the three components of $B$ as boundary conditions requires consistent magnetograms, as outlined in \citet{Aly89}. We use preprocessing and relaxation of the boundary condition to derive these consistent data on the boundary. As an alternative to real measurement, nonlinear force-free field (NLFFF) models are thought to be viable tools for investigating the structure, dynamics, and evolution of the coronae of solar active regions. It has been found that NLFFF models are successful in application to analytic test cases \citep{Schrijver06,Metcalf:2008}, but they are less successful when applied to real solar data. NLFFF models have nevertheless been adopted to study various magnetic field structures and properties in the solar atmosphere.
For instance, \citet{Regnier:2002,Regnier:2004,Canou:2009,Canou:2010,Guo:2010,Valori:2012} have studied a wide range of such structures using their respective NLFFF codes. Different NLFFF models have been found to have markedly different field line configurations and to provide widely varying estimates of the magnetic free energy in the coronal volume when applied to solar data \citep{DeRosa}. The main reasons for that problem are (1) the forces acting on the field within the photosphere, (2) the uncertainties on vector-field measurements, particularly on the transverse component, and (3) the large domain that needs to be modelled to capture the connections of an active region to its surroundings \citep{Tilaye:2010,Tilaye:2012}. In this study, we have taken those three points explicitly into account. However, caution is still needed when assessing results from this modeling, because many aspects of the specific approach used in this work, such as the use of preprocessed boundary data, the missing boundary data, and the departure of the model fields from the observed boundary fields, may influence the results. In this work, we use full-disk SDO/HMI and SOLIS/VSM photospheric magnetic field measurements to model the NLFFF coronal field above multiple solar active regions. A comparison of vector magnetograms for one particular active region observed with two different instruments from SOLIS and HMI, and of their corresponding force-free models, has been presented by \citet[][submitted to AJ]{Thalmann:2012} in Cartesian coordinates. We use a larger computational domain which accommodates most of the connectivity within the coronal region, and a spherical version of the optimization procedure that was implemented in \citet{Tilaye:2010}. We compare quantities like the total and free magnetic energy content and the longitudinal distribution of the magnetic pressure in the HMI- and VSM-based model volumes in spherical geometry. We relate the appearing differences to photospheric quantities such as the magnetic fluxes and electric currents, but also show the extent of agreement of NLFFF extrapolations from different data sources. \section{Instrumentation and data set} \subsection{Solar Dynamics Observatory (SDO) -- Helioseismic and Magnetic Imager (HMI)} The Helioseismic and Magnetic Imager \citep[HMI;][]{Schou:2012} is part of the Solar Dynamics Observatory (SDO) and observes the full Sun at six wavelengths and full Stokes profiles in the Fe {\Rmnum{1}} 617.3 nm spectral line. HMI consists of a refracting telescope, a polarization selector, an image stabilization system, a narrow-band tunable filter and two $4096\times4096$ pixel CCD cameras with mechanical shutters and control electronics. Photospheric line-of-sight (LOS) and vector magnetograms are retrieved from filtergrams with a plate scale of 0.5 arcsecond. From filtergrams averaged over about ten minutes, Stokes parameters are derived and inverted using the Milne-Eddington (ME) inversion algorithm of \citet{Borrero:2011} (the filling factor is held at unity). Within automatically identified regions of strong magnetic fluxes \citep{Turmon:2010}, the full-disk inversion data are from the second HMI vector data release (JSOC data series hmi.ME\_720s\_e15w1332).
The 180-degree azimuthal ambiguity in the strong-field regions is resolved using the Minimum Energy Algorithm \citep{Metcalf:1994,Metcalf:2006,Leka:2009}, taken from the AR patches in the second release as well (data series hmi.B\_720s\_e15w1332\_cutout). For the weak-field regions, where noise dominates, we adopt a radial-acute-angle method to resolve the azimuthal ambiguity. The weak-field region is defined as where the field strength is below 200 G at disk center and 400 G at the limb, varying linearly in between. The noise level is $\approx$ 10 G and $\approx$ 100 G for the longitudinal and transverse magnetic field, respectively. \subsection{Synoptic Optical Long-term Investigations of the Sun (SOLIS) -- Vector SpectroMagnetograph (VSM)} The Vector SpectroMagnetograph \citep[VSM; see][]{Jones02} is part of the Synoptic Optical Long-term Investigations of the Sun (SOLIS) synoptic facility \citep[SOLIS; see][]{Keller03}. VSM is a full-disk Stokes polarimeter. As part of daily synoptic observations, it takes four different observations in three spectral lines: Stokes $I$ (intensity), $V$ (circular polarization), $Q$, and $U$ (linear polarization) in the photospheric spectral lines Fe {\Rmnum{1}} 630.15 nm and Fe {\Rmnum{1}} 630.25 nm; Stokes $I$ and $V$ in Fe {\Rmnum{1}} 630.15 nm and Fe {\Rmnum{1}} 630.25 nm; similar observations in the chromospheric spectral line Ca {\Rmnum{2}} 854.2 nm; and Stokes $I$ in the He {\Rmnum{1}} 1083.0 nm line and the nearby Si {\Rmnum{1}} spectral line. Observations of $I$, $Q$, $U$, and $V$ are used to construct full-disk vector magnetograms, while $I$-$V$ observations are employed to create separate full-disk longitudinal magnetograms in the photosphere and the chromosphere. The vector data are provided with a plate scale of one arcsecond. The lower limits for the noise levels are a few Gauss in the longitudinal and 70 G in the transverse field measurements. Quick-look (QL) vector magnetograms were created based on an algorithm by \citet{Auer:1977}. Beginning January 2012, QL vector magnetograms are created using the weak-field approximation \citep{Ronan:1987}. The algorithm uses the Milne-Eddington model of the solar atmosphere, which assumes that the magnetic field is uniform (no gradients) through the layer of spectral line formation \citep{Unno:1956}. It also assumes symmetric line profiles, disregards magneto-optical effects (e.g., Faraday rotation), and does not distinguish the contributions of magnetic and non-magnetic components in spectral line profiles (i.e., the magnetic filling factor is set to unity). A complete inversion of the spectral data is performed later using a technique developed by \citet{Skumanich:1987}. This latter inversion (called the ME magnetogram) also employs the Milne-Eddington model of the atmosphere, but solves for magneto-optical effects and determines the magnetic filling factor (i.e., the fractional contribution of magnetic and non-magnetic components to each pixel). The ME inversion is only performed for pixels with spectral line profiles above the noise level. For pixels below the polarimetric noise threshold, the magnetic field parameters are set to zero. From the measurements, the azimuth of the transverse magnetic field can be determined with a 180-degree ambiguity. This ambiguity is resolved using the non-potential field calculation \citep[NPFC; see][]{Georgoulis05}. The NPFC method was selected on the basis of a comparative investigation of several methods for 180-degree ambiguity resolution \citep{Metcalf:2006}.
Both QL and ME magnetograms can be used for potential and/or force-free field extrapolation. However, in strong fields inside sunspots, the QL field strengths may exhibit an erroneous decrease inside the sunspot umbra due to so-called magnetic saturation. For this study, we choose to use the fully inverted ME magnetograms. \section{Method} Photospheric field measurements are often subject to measurement errors. In addition, there are finite non-magnetic forces which make the data inconsistent as a boundary for a force-free field in the corona. In order to deal with these uncertainties, one has to (1) preprocess the surface measurements in order to make them compatible with a force-free field and (2) keep a balance between the force-free constraint and the deviation from the photospheric field measurements. Both methods contain free parameters, which have to be optimized for use with data from SOLIS/VSM and SDO/HMI. \subsection{Preprocessing of HMI and VSM data} To serve as a suitable lower boundary condition for force-free modeling, vector magnetograms have to be approximately flux balanced, and, on average, the net tangential force acting on the boundary and the shear stresses along axes lying on the boundary have to reduce to zero. We use dimensionless parameters, $\epsilon_{flux}$, $\epsilon_{force}$ and $\epsilon_{torque}$, to quantify these properties \citep{Wiegelmann06sak,tilaye09,Aly89,Molodenskii69}. Even if we choose a sufficiently flux-balanced region (small $\epsilon_{flux}$), we find that the force-free conditions $\epsilon_{force}\ll1$ and $\epsilon_{torque}\ll1$ are usually not fulfilled for measured vector magnetograms. In order to fulfill those conditions, we use the preprocessing method implemented in \citet{Wiegelmann06sak}. The preprocessing scheme of \citet{tilaye09} involves minimizing a two-dimensional functional of quadratic form in spherical geometry, \begin{displaymath} \vec{B}=\emph{argmin}(L_{p}), \end{displaymath} \begin{equation} L_{p}=\mu_{1}L_{1}+\mu_{2}L_{2}+\mu_{3}L_{3}+\mu_{4}L_{4},\label{3} \end{equation} where $\vec{B}$ is the preprocessed surface magnetic field derived from the input observed field $\vec{B}_{obs}$. Each of the constraints $L_{n}$ is weighted by an as yet undetermined factor $\mu_{n}$. The first term $(n=1)$ corresponds to the force-balance condition, the second $(n=2)$ to the torque-free condition, the third $(n=3)$ controls the difference between the measured and preprocessed vector fields, and the last term $(n=4)$ controls the smoothing. The explicit forms of $L_{1}$, $L_{2}$, and $L_{4}$ can be found in \citet{tilaye09}. \subsection{Optimization principle} We solve the force-free equations (\ref{one}) and (\ref{two}) by an optimization principle, as proposed by \citet{Wheatland00} and generalized by \citet{Wiegelmann04} for Cartesian geometry. The method minimizes a joint measure of the normalized Lorentz forces and the divergence of the field throughout the volume of interest, $V$. Throughout this minimization, the photospheric boundary of the model field $\vec{B}$ is matched exactly to the observed, and possibly preprocessed, magnetogram values $\vec{B}_{obs}$.
Here, we use the optimization approach for the functional $(L_\mathrm{\omega})$ in spherical geometry \citep{Wiegelmann07,tilaye09}, along with the new method which, instead of an exact match, enforces a minimal deviation between the photospheric boundary of the model field $\vec{B}$ and the magnetogram field $\vec{B}_{obs}$ by adding an appropriate surface integral term $L_{photo}$ \citep{Wiegelmann10,Tilaye:2010}. These terms are given by \begin{displaymath} \vec{B}=\emph{argmin}(L_{\omega}) \end{displaymath} \begin{equation}L_{\omega}=L_{f}+L_{d}+\nu L_{photo} \label{4} \end{equation} \begin{displaymath} L_{f}=\int_{V}\omega_{f}(r,\theta,\phi)B^{-2}\big|(\nabla\times {\vec{B}})\times {\vec{B}}\big|^2 r^2\sin\theta dr d\theta d\phi \end{displaymath} \begin{displaymath}L_{d}=\int_{V}\omega_{d}(r,\theta,\phi)\big|\nabla\cdot {\vec{B}}\big|^2 r^2\sin\theta dr d\theta d\phi \end{displaymath} \begin{displaymath}L_{photo}=\int_{S}\big(\vec{B}-\vec{B}_{obs}\big)\cdot\vec{W}(\theta,\phi)\cdot\big( \vec{B}-\vec{B}_{obs}\big) r^{2}\sin\theta d\theta d\phi \end{displaymath} where $L_{f}$ and $L_{d}$ measure how well the force-free condition (\ref{one}) and the divergence-free condition (\ref{two}) are fulfilled, respectively, and both $\omega_{f}(r,\theta,\phi)$ and $\omega_{d}(r,\theta,\phi)$ are weighting functions. The weighting functions $\omega_{f}$ and $\omega_{d}$ in $L_{f}$ and $L_{d}$ in Eq.~(\ref{4}) are chosen to be unity within the inner physical domain $V'$ and decline with a cosine profile in the buffer boundary region \citep{Wiegelmann04,tilaye09,Tilaye:2012a}. They reach a zero value at the boundary of the outer volume $V$. The distance between the boundaries of $V'$ and $V$ is chosen to be $nd=10$ grid points wide. The third integral, $L_{photo}$, is a surface integral over the photosphere which allows us to relax the field on the photosphere towards a force-free solution without too much deviation from the original surface field data. $\vec{W}(\theta,\phi)$ is a space-dependent diagonal matrix whose elements are inversely proportional to the estimated squared measurement error of the respective field component. In principle one could compute $\vec{W}$ from the measurement noise and the errors obtained from the inversion of measured Stokes profiles to field components. Until these quantities become available, a reasonable assumption is that the magnetic field is measured more accurately in strong-field regions than in weak-field regions, and that the error in the photospheric transverse field is at least one order of magnitude higher than in the line-of-sight component. Appropriate choices of $\nu$ and $\vec{W}$ for use with SDO/HMI \citep{Wiegelmann:2012} and SOLIS/VSM \citep{Tilaye:2010} magnetograms have been investigated. For a detailed description of the current code implementation, we refer to \citet{Wiegelmann10} and \citet{Tilaye:2010}. \section{Results} Within this work, we use the full-disk data from the SOLIS/VSM and SDO/HMI instruments observed on 2011 November 9 around 17:45 UT. During this observation there were four active regions (ARs 11338, 11339, 11341 and 11342), along with other smaller sunspots, spread across the disk.
To accommodate the connectivity between those ARs and their surroundings, we adopt a non-uniform spherical grid in $r$, $\theta$, $\phi$ with $n_{r}=225$, $n_{\theta}=375$, $n_{\phi}=425$ grid points in the direction of radius, latitude, and longitude, respectively, with a field of view of $[r_{\text{min}}=1R_{\sun}:r_{\text{max}}=2R_{\sun}]\times[\theta_{\text{min}}=-50^{\circ}:\theta_{\text{max}}=50^{\circ}]\times[\phi_{\text{min}}=90^{\circ}:\phi_{\text{max}}=270^{\circ}]$. Since the plate scale of VSM is twice that of HMI, we bin the HMI vector maps to the resolution of VSM in order to compare the photospheric magnetic field and the subsequent force-free modeling. \begin{figure*}[htp!] \centering \includegraphics[viewport=10 5 828 800,clip,height=13cm,width=15.0cm]{figure1.pdf} \caption{Magnetic vector maps of VSM and HMI on part of the lower boundary. The color coding shows $B_{r}$ on the photosphere and the white arrows indicate the transverse component of the field. The vertical and horizontal axes show latitude, $\theta$, and longitude, $\phi$, on the photosphere.}\label{fig1} \end{figure*} To deal with vector magnetogram data being inconsistent with the force-free assumption, we use a preprocessing routine in spherical geometry, which derives suitable boundary conditions for force-free modeling from the measured photospheric data. Applying this procedure to both the SDO/HMI and SOLIS/VSM data reduces $\epsilon_{force}$ and $\epsilon_{torque}$ significantly. The two quantities are very well below unity after preprocessing, which gives us some confidence that the data might serve as suitable boundary conditions for force-free modeling. In doing so, we do not intend to suppress the existing forces in the photosphere; instead, we try to approximate the magnetic field at a chromospheric level, where magnetic forces are expected to be much smaller than in the layers below. Both vector magnetograms are almost flux balanced, and the field of view was large enough to cover the full disk. The unsigned magnetic flux of the longitudinal surface magnetic field from HMI is 1.57 times that of the VSM magnetogram. This is in agreement with a recent comparative study by \citet{Pietarila:2012}, who found that the factor to convert SOLIS/VSM to SDO/HMI increases with flux density from about 1 (weak fields) to about 1.5 (strong fields) for line-of-sight full-disk magnetograms. HMI inverts weak-field regions; for VSM, in contrast, zeros are assigned to pixels where the measured polarization signal is too weak to perform a reliable inversion. Disregarding these ``zero pixels'', about 20\% of the total number of pixels in the HMI and VSM full-disk vector maps remain for comparison. HMI is found to detect more of the transverse field. We used a standard preprocessing parameter set $\mu_{1} = \mu_{2} = 1$ and $\mu_{3}=0.001$, similar to the values calculated from the vector data used in previous studies of HMI data in Cartesian coordinates \citep{Wiegelmann:2012}. Table~\ref{table1} lists the values of the dimensionless parameters for the HMI and VSM data sets used. In this study, we have found that the optimal value of the smoothing parameter is $\mu_{4}=0.05$ for full-disk HMI data. These parameters control the amount of force-freeness, torque-freeness, nearness to the actually observed data, and smoothing, respectively. As the result of a parameter study, \citet{Tilaye:2010} found $\mu_{1} = \mu_{2} = 1$, $\mu_{3}=0.03$ and $\mu_{4}=0.45$ to be optimal for full-disk VSM data.
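For reference, the consistency parameters reported in Table~\ref{table1} can be evaluated from a vector magnetogram along the following lines. The sketch below is our own illustration in the flat-patch Cartesian form of the flux-balance, force and torque integrals; the paper itself evaluates the analogous expressions in spherical geometry, and the coordinates here are assumed normalized so that the ratios are dimensionless.
\begin{verbatim}
import numpy as np

# Sketch (our own, flat-patch Cartesian simplification): dimensionless
# flux-balance, net-force and net-torque parameters of a magnetogram.
# Bx, By, Bz are 2D arrays on the boundary; x, y are coordinate grids
# normalized to the patch size. Uniform pixel area is assumed, so the
# area factors cancel in the ratios.
def consistency(Bx, By, Bz, x, y):
    S = np.sum
    eps_flux = abs(S(Bz)) / S(np.abs(Bz))
    norm = S(Bx**2 + By**2 + Bz**2)
    eps_force = (abs(S(Bx * Bz)) + abs(S(By * Bz))
                 + abs(S(Bz**2 - Bx**2 - By**2))) / norm
    eps_torque = (abs(S(x * (Bz**2 - Bx**2 - By**2)))
                  + abs(S(y * (Bz**2 - Bx**2 - By**2)))
                  + abs(S(y * Bx * Bz - x * By * Bz))) / norm
    return eps_flux, eps_force, eps_torque
\end{verbatim}
Preprocessing drives the force and torque ratios toward zero while keeping the field close to the observed one, which is exactly the trade-off controlled by the weights $\mu_{n}$ above.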
The preprocessing influences the structure of the magnetic vector data. It not only smooths $\vec{B}_{t}$ (the transverse field) but also alters its values in order to reduce the net force and torque. The change in $\vec{B}_{t}$ is more pronounced than in the radial component $B_{r}$, since $\vec{B}_{t}$ is measured with lower accuracy than the longitudinal magnetic field. Figure~\ref{fig1} shows the preprocessed and observed surface vector magnetic fields obtained from the SDO/HMI and SOLIS/VSM magnetograms. To quantify the similarity of the vector components from HMI and VSM on the bottom surface, we calculate their pixel-wise correlations before and after preprocessing. The correlations were calculated from \begin{equation} C_\mathrm{ vec}= \frac{ \sum_i \vec{v}_{i} \cdot \vec{u}_{i}}{ \Big( \sum_i |\vec{v}_{i}|^2 \sum_i |\vec{u}_{i}|^2 \Big)^{1/2}}, \label{6} \end{equation} where $\vec{v}_{i}$ and $\vec{u}_{i}$ are the vectors at each grid point $i$ on the bottom surface. If the vector fields are identical, then $C_{vec}=1$; if $\vec{v}_{i}\perp \vec{u}_{i}$, then $C_{vec}=0$. Table~\ref{table2} shows the correlation ($C_{vec}$) of the 2D surface magnetic field vectors of the observed and preprocessed data from HMI and VSM for the radial and transverse components. The vector correlation between $\vec{B}_{t}$ in the preprocessed HMI and VSM surface vector maps is clearly closer to unity than for the corresponding surface vector maps without preprocessing. There is no such difference in the correlations of $\vec{B}_{r}$ before and after preprocessing. This is to be expected since the preprocessing scheme only smooths the longitudinal field, while it both smooths and alters the transverse field. The mean change due to preprocessing in the longitudinal field is $10^{-3}$\,G and in the transverse field on the order of 10\,G, i.e., well within the measurement uncertainties of HMI and VSM. \begin{figure} \centering \includegraphics[viewport=12 20 480 795,clip,height=12.2cm,width=8.9cm]{figure2.pdf} \caption{Mask function for the magnetic vector field distribution on the full disk from (a) VSM and (b) HMI. The vertical and horizontal axes show latitude, $\theta$, and longitude, $\phi$, on the photosphere, respectively.}\label{fig2} \end{figure} \begin{center} \begin{table} \caption{Flux-balance, force-free, and torque-free parameters of the SOLIS/VSM and SDO/HMI full-disk magnetograms.} \label{table1} \begin{tabular}{cccc} \hline \hline \multilineL{Data set}&\multilineL{$\epsilon_{flux}$} &\multilineL{$\epsilon_{force}$}& \multilineL{$\epsilon_{torque}$} \\ \hline HMI observed &$-0.0621$ &$0.1305$ & $0.1773$\\ HMI preprocessed &$-0.0313$&$0.0001$&$0.0002$\\ SOLIS observed &$-0.0857$&$0.4571$&$0.2947$\\ SOLIS preprocessed &$-0.0460$&$0.0015$&$0.0007$\\ \hline \end{tabular} \end{table} \end{center} Before we perform nonlinear force-free extrapolations, we use the preprocessed radial component $B_{r}$ of the VSM and HMI data to compute the corresponding potential fields, via a spherical harmonic expansion, for initializing our code. We implement the new term $L_{photo}$ in Eq.~(\ref{4}) to work with boundary data of different noise levels and qualities, or even to neglect some data points completely. SOLIS/VSM provides full-disk vector magnetograms, but for some pixels the inversion from line profiles to field values may not have been successful, so field data are missing there.
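Returning to Eq.~(\ref{6}), the vector correlation is straightforward to evaluate; a minimal sketch, assuming the two fields are given as $(3,N)$ arrays sampled at the same $N$ grid points:
\begin{verbatim}
import numpy as np

def c_vec(v, u):
    """Pixel-wise vector correlation: 1 for identical fields,
    0 for fields that are everywhere orthogonal."""
    num = np.sum(v * u)                           # sum_i v_i . u_i
    den = np.sqrt(np.sum(v**2) * np.sum(u**2))
    return num / den
\end{verbatim}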
Since the old code without the $L_{photo}$ term requires complete boundary information, it cannot be applied to SOLIS/VSM data with missing pixels. In our new code, these data gaps are treated by setting $W=0$ for the affected pixels in Eq.~(\ref{4}) \citep{Wiegelmann10,Tilaye:2010}. For the pixels for which $\vec{B}_{obs}$ was successfully inverted, we allow deviations between the model field $\vec{B}$ and the observed surface field $\vec{B}_{obs}$ using Eq.~(\ref{4}), so that the model field can be iterated closer to a force-free solution even if the observations are inconsistent. The improved optimization scheme allows us to relax the magnetic field also on the lower boundary. The relaxation of the lower boundary introduces a further modification of the vector data, in addition to that of the preprocessing applied before. The mean modification of the longitudinal field due to the relaxation of the lower boundary is $10^{-4}$\,G and the absolute values are on the order of 1\,G. The mean changes of the transverse field are on the order of 10\,G and the absolute values can reach several 100\,G. Given the noise levels of the HMI and VSM measurements of the longitudinal ($\approx10$\,G and a few G, respectively) and transverse field ($\approx100$\,G and $\gtrsim70$\,G, respectively), these modifications are on the order of the measurement error. For nonlinear force-free fields we minimize the functional in Eq.~(\ref{4}). In order to control the speed with which the lower boundary is injected during the extrapolation, we vary the Lagrangian multiplier $\nu$. Unless an exact error computation becomes available from the inversion and ambiguity removal of the photospheric magnetic field vector, a reasonable assumption is that the field is measured more accurately in strong-field regions, and one can carry out computations with masks $\propto B_{t}$ and $\propto B^{2}_{t}$. We choose the mask function $W=\big(B_{t}/\max(B_{t})\big)^{2}$, which gives more weight to strong-field regions than to weak ones, as investigated in \citet{Wiegelmann:2012}. Figure~\ref{fig2} shows the surface distribution of the mask function $W$ for VSM and HMI full-disk data. For strong-field regions $W$ is close to unity, and it declines to zero in weaker-field regions. We vary the Lagrangian multiplier $\nu$ between $0.1$ and $0.0001$ to find the optimal value for HMI full-disk data. To evaluate how well the force-free and divergence-free conditions are satisfied for different Lagrangian multipliers $\nu$, we monitor a number of expressions, such as $L_{f}$, $L_{d}$ and \begin{equation} \sigma_{j}=\Big(\sum_{i}\frac{|\vec{J}_{i}\times \vec{B}_{i}|}{B_{i}}\Big)/\sum_{i}J_{i}, \label{7} \end{equation} where $\sigma_{j}$ is the sine of the current-weighted average angle between the magnetic field $\vec{B}$ and the electric current $\vec{J}$. \begin{table} \caption{The correlations between the components of the surface fields from HMI and VSM data.} \label{table2} \begin{tabular}{cccc} \hline \hline & v & u & $C_\mathrm{vec}$\\ \hline No preprocessing &$(\vec{B}_{HMI})_{r}$&$ (\vec{B}_{VSM})_{r}$ &$0.947$\\ No preprocessing & $(\vec{B}_{HMI})_{t}$&$ (\vec{B}_{VSM})_{t}$ &$0.893$\\ Preprocessed & $(\vec{B}_{HMI})_{r}$&$ (\vec{B}_{VSM})_{r}$ &$0.965$\\ Preprocessed & $(\vec{B}_{HMI})_{t}$&$ (\vec{B}_{VSM})_{t}$ &$0.951$\\ \hline \end{tabular} \end{table} \begin{table} \caption{Evaluation of force-free field models from preprocessed HMI data. The first column shows the Lagrangian multiplier used.
Columns 2--4 show different force-free consistency evaluations. Column 5 shows the ratio of the NLFFF energy to the corresponding potential-field energy and column 6 the computing time.} \label{table3} \begin{tabular}{cccccc} \hline \hline $\nu$ & $L_{f}$ & $L_{d}$ & $\sin^{-1}(\sigma_{j})$& $E/E_{\text{pot}}$&Time\\ \hline $0.1$&$21.7$ &$13.4$&$25.8^{\circ}$&$1.06$&2h:17\,min\\ $0.05$&$19.8$ &$10.7$&$18.1^{\circ}$&$1.12$&3h:31\,min\\ $0.005$&$5.2$ &$3.9$&$8.9^{\circ}$&$1.23$&11h:47\,min\\ $0.001$&$2.9$ &$1.5$&$4.8^{\circ}$&$1.22$&4h:39\,min\\ $0.0001$&$7.7$ &$4.3$&$10.2^{\circ}$&$1.26$&48h:53\,min\\ \hline \end{tabular} \end{table} \begin{figure*}[htp!] \centering \includegraphics[bb=10 10 870 830,clip,height=14.2cm,width=14.2cm]{figure3.pdf} \caption{{\color{black}{a) The SDO/HMI magnetogram and b) the AIA image, together with selected magnetic field lines reconstructed from the c) SOLIS and d) HMI magnetograms using nonlinear force-free modelling. The color coding shows $B_{r}$ on the photosphere. Yellow field lines represent closed field lines, while field lines changing in color from yellow to brown (from bottom to top) represent the open ones. The gray area indicates the region where the magnetic field values are close to zero.}}}\label{fig3} \end{figure*} For a sufficiently small Lagrangian multiplier, $\nu = 0.001$, we found that the resulting coronal fields are closest to force- and divergence-free, as shown in Table~\ref{table3}. The weighted angle between the magnetic field and the electric current is about $5^{\circ}$ for $\nu = 0.001$. Injecting the boundary faster by choosing a larger Lagrangian multiplier ($\nu = 0.1$) speeds up the computation, but the residual forces are higher and current and field are not as well aligned, as investigated by \citet{Wiegelmann:2012} for a single AR. To understand the physics of solar flares, including the local reorganization of the magnetic field and the acceleration of energetic particles, one has to estimate the free magnetic energy available for these phenomena {\color{black}{\citep{Regnier:2007a,Aschwanden:2008,Schrijver:2009a}.}} This is the free energy that can be converted into kinetic and thermal energy. From the energy budget and the observed magnetic activity in the active region, \citet{Regnier} and \citet{Thalmann} investigated the free energy above the minimum-energy state for the flare process. We estimate the free magnetic energy as the difference between the extrapolated force-free field and the potential field with the same normal boundary conditions in the photosphere. We therefore estimate the upper limit to the free magnetic energy associated with coronal currents as \begin{equation} E_\mathrm{free}=\frac{1}{8\pi}\int_{V}\Big(B_{nlff}^{2}-B_{pot}^{2}\Big)r^{2}\sin\theta dr d\theta d\phi, \label{ten} \end{equation} where $B_{pot}$ and $B_{nlff}$ represent the potential and NLFFF magnetic fields, respectively. \begin{center} \begin{table} \caption{The magnetic energy associated with the extrapolated NLFFF configurations from full-disk SDO/HMI and SOLIS/VSM data.} \label{table4} \begin{tabular}{ccc} \hline \hline Model & $E_{nlff}\,(10^{33}\,\mathrm{erg})$& $E_\mathrm{free}\,(10^{33}\,\mathrm{erg})$\\ \hline SOLIS/VSM &$8.609$&$1.375$\\ SDO/HMI &$8.913$&$1.607$\\ \hline \end{tabular} \end{table} \end{center} The magnetic energies associated with the potential field configurations from the SDO/HMI and SOLIS/VSM data are found to be $7.306\times10^{33}$\,erg and $7.234\times10^{33}$\,erg, respectively.
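The free-energy estimate of Eq.~(\ref{ten}) is a volume integral of the difference of the two energy densities; a minimal sketch (Gaussian units, with fields in G and lengths in cm yielding erg; the grid arrays follow the same conventions as the discretization sketch above):
\begin{verbatim}
import numpy as np

def free_energy(B_nlff, B_pot, R, TH, dr, dth, dph):
    """B_* are (3, nr, nth, nph) component arrays; R and TH are the
    radius and colatitude meshes; dr, dth, dph are the grid spacings."""
    dV = R**2 * np.sin(TH) * dr * dth * dph
    e_nlff = np.sum(np.sum(B_nlff**2, axis=0) * dV) / (8 * np.pi)
    e_pot = np.sum(np.sum(B_pot**2, axis=0) * dV) / (8 * np.pi)
    return e_nlff - e_pot
\end{verbatim}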
The slightly larger potential-field energy for HMI is to be expected, as the unsigned longitudinal magnetic flux from HMI is greater than that of the VSM magnetogram. The magnetic energy of the NLFFF model obtained from the HMI data is also greater than the one obtained from the VSM data, as shown in Table~\ref{table4}. This is due to the fact that the HMI data contain more unsigned longitudinal magnetic flux and detect more transverse field than VSM. To study the influence of using preprocessed rather than observed boundary fields on the estimate of the free magnetic energy, we have computed the magnetic energy associated with the potential field and NLFFF configurations from the original SDO/HMI data both without and with preprocessing. The case of SOLIS/VSM has been studied by \citet{Tilaye:2012}. As the preprocessing procedure filters out small-scale surface field fluctuations, the magnetic energy associated with the NLFFF obtained from preprocessed SDO/HMI boundary data is smaller than the one obtained without preprocessing. The potential-field energies of the boundary data with and without preprocessing are, as expected, close in value, since the potential field calculation uses only the radial magnetic field component, which is not strongly affected by the preprocessing procedure. The magnetic energy computed from the original SDO/HMI data without preprocessing is about $9.067\times10^{33}\,\textrm{erg}$, which is about $1.7\%$ higher than the one obtained from the preprocessed and modified observational HMI boundary data. However, this energy does not correspond to a nonlinear force-free solution, since the original boundary data without preprocessing are not a consistent boundary condition for NLFFF modeling \citep{Tilaye:2012}. We investigate the magnetic field configurations of the VSM and HMI models by comparing the vector components. We calculate the vector correlation (using Eq.~(\ref{6}) in the computational volume) of the potential fields and of the NLFFF fields. The average vector correlation between the potential fields based on the HMI and VSM data is 0.97; that between the NLFFF fields is 0.94. Figures~\ref{fig3}a and \ref{fig3}b show the surface radial magnetic field component observed by the HMI instrument on 9 November 2011 and the corresponding AIA (Atmospheric Imaging Assembly) image at 171\,\AA{}, respectively. {\color{black}{The magnetic field lines obtained from the nonlinear force-free extrapolations based on the HMI and VSM data correlate well, as shown in Figures~\ref{fig3}c and \ref{fig3}d, with the footpoints of the field lines from the two magnetograms being identical.}} However, there are some differences. For example, the extrapolated field lines from the SDO/HMI magnetogram (Figure 3d) do not show transequatorial loops connecting the trailing polarity of NOAA AR 11339 (west of the central meridian in the northern hemisphere) and the trailing polarity of AR 11338 (southern hemisphere). This transequatorial loop is well represented by the NLFFF extrapolation based on SOLIS/VSM. The difference can be attributed to the presence of a patch of weak fields between the two active regions in the SDO/HMI data. With this weak-field patch, the SDO/HMI model tends to close field lines originating in the trailing polarity of AR 11339, while the SOLIS/VSM model extends them to AR 11338. Both extrapolations indicate loops connecting AR 11339 and AR 11342 (east of the central meridian in the northern hemisphere).
Although the SDO/AIA image (Figure 3b) does not show coronal loops connecting these two active regions, such loops are clearly visible in images taken by the X-ray Telescope on Hinode. These loops appear to fit the field lines from the SOLIS/VSM model better. Despite a relatively good visual agreement between the extrapolated fields, the models show some notable disagreement in the derived magnetic energy. For example, the estimated free magnetic energy obtained from SDO/HMI is $14.4\%$ higher than that of SOLIS/VSM. This is due to the fact that the HMI data include small-scale magnetic field measurements. \begin{figure}[htp!] \centering \includegraphics[viewport=5 10 500 795,clip,height=13.2cm,width=8.5cm]{figure4.pdf} \caption{Magnetic pressure $p_{m}$ in the longitudinal cross-section at $\theta=20^{\circ}$ for (a) VSM and (b) HMI. The vertical and horizontal axes show the radial distance in solar radii and the longitude, $\phi$, on the photosphere, respectively.}\label{fig4} \end{figure} \begin{figure} \centering \includegraphics[viewport=5 10 480 795,clip,height=12.5cm,width=9.5cm]{figure5.pdf} \caption{Radial component of the electric current density (the color coding shows $J_{r}$ on the photosphere), with the transverse component of the electric current density shown as white arrows. The vertical and horizontal axes show latitude, $\theta$, and longitude, $\phi$, on the photosphere, respectively.}\label{fig5} \end{figure} We study the magnetic pressure, $p_{m}$, in a longitudinal cross-section at about $\theta=20^{\circ}$, as shown in Figure~\ref{fig4}. The overall pattern of $p_{m}$ appears to be the same when calculated from the HMI and VSM NLFFF model volumes. We find that the magnetic pressure of the NLFFF model field from HMI is greater than that of VSM at the same locations in the cross-section. {\color{black}{This is expected, since the magnetic pressure is proportional to the magnetic energy density, and the magnetic energy of the NLFFF model field from HMI is larger than that of VSM.}} The surface radial ($J_{r}$) and transverse ($\vec{J}_{t}$) electric current densities of the NLFFF field models based on the HMI and VSM data are shown in Figure~\ref{fig5}. {\color{black}{The total radial surface electric current density flux of the NLFFF model based on HMI is greater than that of VSM. This agrees with the fact that the HMI instrument measures more transverse magnetic field than the VSM instrument.}} The transverse surface electric current density of the NLFFF model based on HMI spreads more around the active regions than that of VSM, as shown in Figure~\ref{fig5}. This could reflect the fact \citep[see][]{Pietarila:2012} that the scaling factor between SOLIS/VSM and SDO/HMI is different for weak and strong fluxes. This difference in scaling factor may act as a weighting function when comparing the electric currents derived from the two models. In addition, the vector correlations of the radial and transverse surface electric current densities of the NLFFF models based on HMI and VSM are 0.96 and 0.88, respectively. This indicates that the discrepancy is more pronounced in the transverse electric current densities than in the radial ones. \section{Conclusion and outlook} We have investigated the coronal magnetic field of the full solar disk on 9 November 2011 by analysing SDO/HMI and SOLIS/VSM data. We carried out nonlinear force-free coronal field extrapolations of full-disk magnetograms.
The vector magnetograms are almost perfectly flux balanced, and the field of view was large enough to cover all the weak field surrounding the active regions. Both conditions are necessary in order to carry out meaningful force-free computations. We have used the optimization method for the reconstruction of nonlinear force-free coronal magnetic fields in spherical geometry \citep{Wiegelmann07,tilaye09} to compare the final NLFFF model solutions from the HMI and VSM full-disk data. We have found that the optimal value of the smoothing parameter for preprocessing full-disk HMI data is $\mu_{4}=0.05$. We conclude that $\nu=0.001$ is the optimal choice for the HMI full-disk data set in our new code, as investigated in \citet{Wiegelmann:2012}. The magnetic field lines obtained from the nonlinear force-free extrapolations based on the HMI and VSM data correlate well. However, the models show some disagreement in the estimated free magnetic energy, which can be released during explosive events, and in the surface electric current density. {\color{black}{The reconstructed magnetic field based on SDO/HMI data yields a larger total magnetic energy, free magnetic energy, magnetic pressure in the longitudinal cross-section, and surface electric current density than that based on SOLIS/VSM data.}} Since the disagreement in the free energy can be attributed to the presence of weaker transverse fields in the SDO/HMI measurements, it is not clear how important the observed ($14.4\%$) difference in {\color{black}{free magnetic energy}} is for flare and CME processes originating in magnetic fields higher in the corona. This aspect deserves a separate study. \begin{acknowledgements} The authors thank the anonymous referee for helpful comments. Data are courtesy of NASA/SDO and the AIA and HMI science teams. SOLIS/VSM vector magnetograms are produced cooperatively by NSF/NSO and NASA/LWS. The National Solar Observatory (NSO) is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. This work was supported by NASA grant NNX07AU64G and the work of T. Wiegelmann was supported by DLR-grant $50$ OC $453$ $0501$. \end{acknowledgements} \bibliographystyle{aa}
{ "timestamp": "2012-10-16T02:01:07", "yymm": "1210", "arxiv_id": "1210.3668", "language": "en", "url": "https://arxiv.org/abs/1210.3668" }
\section{Introduction} Magnetic materials that have high spin polarization at room temperature or higher are highly desirable for spintronic applications\cite{wolf,cro2-96}. Half-metallic materials are good candidates because they can retain high spin polarization at high temperature\cite{hm}. In the case of CrO$_2$, 96\% spin polarization has been achieved experimentally\cite{cro2-96}. Since 1998, double perovskite oxides have been explored extensively for this purpose because both half-metallicity and high Curie temperature can be achieved in such materials\cite{lsmo,oxide}. It has been shown experimentally that several double perovskite oxides, such as Sr$_2$FeMoO$_6$ and Sr$_2$CrReO$_6$\cite{oxide}, can keep their ferromagnetic or ferrimagnetic phases far beyond room temperature, and more importantly, high-quality materials have been realized recently \cite{exp1,exp2,exp3,exp4,sum1,sum2}. Here, we present our first-principles exploration of double perovskite oxides Bi$_2$ABO$_6$ with A being a 3d transition metal and B a 4d/5d one. We fully optimize their structures and then investigate their stability, electronic structures, and magnetic properties. We find four half-metallic ferrimagnetic materials with negative formation energies. For the best case, Bi$_2$FeMoO$_6$, the half-metallic gap and Curie temperature reach 0.71 eV and 650 K, respectively. This means that high spin polarization could be realized at temperatures well above room temperature. More detailed results will be presented in the following. \section{Computational details} We use the pseudo-potential and plane-wave methods within density functional theory (DFT) \cite{dft}, as implemented in the package VASP\cite{vasp}. We use the generalized gradient approximation (GGA)\cite{pbe96} for the exchange-correlation potential. In addition to the usual valence states, the semicore d states are considered for Bi and the semicore p states for Cr, Mn, Fe, Mo, and Os. The scalar relativistic approximation is used\cite{relsa}, and spin-orbit coupling is neglected because it has little effect on our main conclusions (to be detailed in the following). We use the Monkhorst-Pack method to generate the k-point meshes\cite{mpmesh}, choosing 6$\times$6$\times$6 (6$\times$6$\times$4) for the structure optimizations and total energy calculations of the 10-atom (20-atom) unit cells and 12$\times$12$\times$12 for the electronic structure calculations. The cut-off energy is set to 500 eV, and the convergence criteria are $10^{-6}$ eV for the electronic steps and 0.005 eV/\AA{} on the atoms for the ionic steps. The Metropolis algorithm and its variants are used for our Monte Carlo simulations\cite{mc,metropolis}. The phase transition temperatures $T_c$ are determined by investigating the average magnetization, magnetic susceptibility, and fourth-order Binder cumulant as functions of temperature\cite{mc}. Several three-dimensional lattices of up to 30$\times$30$\times$30 magnetic unit cells with periodic boundary conditions are used in these calculations. The first 90,000 Monte Carlo steps (MCS) of a total of 150,000 MCS are used for thermal equilibration, and the remaining 60,000 MCS are used to calculate the average magnetization at a given temperature.
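As an illustrative sketch of this Monte Carlo procedure (simplified, not our production code: a single cubic sublattice with one ferromagnetic nearest-neighbor coupling is used here, whereas the actual simulations employ the two-sublattice ferrimagnetic model with the couplings $J_{\mathrm{AB}}$, $J_{\mathrm{AA}}$, and $J_{\mathrm{BB}}$ introduced below; the lattice size, coupling, and temperature are placeholders), together with the standard estimators for the average magnetization, susceptibility, and Binder cumulant:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, J, T = 10, -0.04, 0.025     # sites per edge; J and k_B T in eV
S = rng.normal(size=(L, L, L, 3))
S /= np.linalg.norm(S, axis=-1, keepdims=True)   # random unit spins

def local_field(S, i, j, k):
    """Sum of the six nearest-neighbor spins (periodic boundaries)."""
    return (S[(i+1) % L, j, k] + S[(i-1) % L, j, k]
            + S[i, (j+1) % L, k] + S[i, (j-1) % L, k]
            + S[i, j, (k+1) % L] + S[i, j, (k-1) % L])

def sweep(S):
    """One Metropolis sweep for H = J sum_<ij> S_i . S_j (J < 0: FM)."""
    for _ in range(L**3):
        i, j, k = rng.integers(0, L, size=3)
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)
        dE = J * np.dot(new - S[i, j, k], local_field(S, i, j, k))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            S[i, j, k] = new
    return S

def observables(m, N, T):
    """Mean |m|, susceptibility, and Binder cumulant from samples m."""
    m2, m4 = np.mean(m**2), np.mean(m**4)
    mabs = np.mean(np.abs(m))
    chi = N * (m2 - mabs**2) / T       # magnetic susceptibility
    u4 = 1.0 - m4 / (3.0 * m2**2)      # fourth-order Binder cumulant
    return mabs, chi, u4

for _ in range(200):                   # equilibration sweeps
    S = sweep(S)
samples = [np.linalg.norm(sweep(S).mean(axis=(0, 1, 2)))
           for _ in range(400)]        # measurement sweeps
print(observables(np.array(samples), L**3, T))
\end{verbatim}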
\section{Main calculated results and analysis} Guided by Sr$_2$FeMoO$_6$ \cite{oxide}, Bi$_2$FeCrO$_6$ \cite{exp1,exp2,bi2fecro6,bi2fecro6a}, and other similar compounds \cite{exp3,exp4,sum1,sum2}, we consider the double perovskite structure of formula Bi$_2$ABO$_6$, taking 3d transition-metal elements for A and 4d/5d ones for B. We fully optimize their crystal structures using the 10-atom unit cell. The optimized Bi$_2$ABO$_6$ has space group R3 (\#146). This crystal structure, similar to R3c (\#161), is distorted from the cubic double perovskite structure, has rhombohedral symmetry, and includes 10 internal parameters\cite{oxide,sum1,sum2}. We shall present four of the Bi$_2$ABO$_6$ compounds, with AB = FeMo, MnMo, MnOs, and CrOs, because they have negative formation energies, so that their experimental realization should be feasible. Our optimized structural parameters of the four Bi$_2$ABO$_6$ compounds are summarized in Table I. The total magnetic moments per formula unit and the partial moments in the spheres of the magnetic A and B atoms are summarized in Table II. The magnetic moments in the spheres of the other atoms are much smaller. The total moments are integers in units of the Bohr magneton $\mu_B$, a signature of half-metallicity\cite{hm}. The magnetic moment of the A atom is antiparallel to that of the B atom, which means that the magnetic order in these compounds is ferrimagnetic. \begin{table}[htb] \caption{Optimized structural parameters of double perovskite Bi$_2$ABO$_6$ with the R3 (\#146) crystal structure for AB = FeMo, MnMo, MnOs, and CrOs.}\label{table1} \begin{ruledtabular} \begin{tabular}{lcccc} AB & FeMo & MnMo & MnOs & CrOs \\ \hline $a$ (\AA) & 5.725 & 5.779 & 5.761 & 5.771 \\ $c$ (\AA) & 14.054 & 14.093 & 13.607 & 12.816 \\ $\alpha$ ($^\circ$) & 59.91 & 60.20 & 61.61 & 64.37 \\ \hline Bi $z_1$ & 0.9881 & 0.9859 & 0.9917 & 0.0294 \\ Bi $z_1^\prime$ & 0.4837 & 0.4851 & 0.4923 & 0.5071 \\ A $z_2$ & 0.2589 & 0.2572 & 0.2593 & 0.2682 \\ B $z_2^\prime$ & 0.7675 & 0.7657 & 0.7647 & 0.2681 \\ O $x_3$ & 0.5504 & 0.5612 & 0.5534 & 0.4811 \\ O $x_3^\prime$ & 0.0506 & 0.0471 & 0.0520 & 0.0558 \\ O $y_3$ & 0.9357 & 0.9220 & 0.9273 & 0.9353 \\ O $y_3^\prime$ & 0.4350 & 0.4397 & 0.4348 & 0.4287 \\ O $z_3$ & 0.1022 & 0.1036 & 0.0981 & 0.1075 \\ O $z_3^\prime$ & 0.6099 & 0.6145 & 0.6085 & 0.6009 \\ \end{tabular} \end{ruledtabular} \end{table} In Fig. 1 we present the spin-resolved density of states (DOS, in states/eV per formula unit) between -7.7 and 3 eV of the double perovskites Bi$_2$FeMoO$_6$ and Bi$_2$MnMoO$_6$. The total DOS in the majority-spin channel is zero at the Fermi level in both cases. This indicates that the two double perovskite compounds are half-metallic, in agreement with the integer magnetic moments in units of $\mu_B$. In Fig. 2 we present the spin-resolved density of states between -8 and 3 eV of the double perovskites Bi$_2$MnOsO$_6$ and Bi$_2$CrOsO$_6$. They are both half-metallic too, but in these two cases it is in the minority-spin channel that the total DOS at the Fermi level vanishes. The filled electronic states near the Fermi level originate mainly from the B atom (Mo or Os). We use the half-metallic gap $E_g$ as the key parameter describing the half-metallic property\cite{hm,lbg1,lbg2,lbgbook}. The $E_g$ values of the four compounds, from 0.25 to 0.71 eV, are summarized in Table II.
For Bi$_2$FeMoO$_6$, $E_g$ is 0.71 eV, which implies that the high spin polarization could remain robust even after the spin-orbit coupling is taken into account. \begin{figure}[!htb] \includegraphics[width=7cm]{fig1.eps} \caption{(color online) Spin-resolved density of states (DOS, in state/eV per formula unit) of double perovskite Bi$_2$ABO$_6$ for AB=FeMo (a) and AB=MnMo (b). The solid line is total DOS, and short-dashed, dot-dashed, and dotted lines refer to partial DOS projected in the atomic spheres of Bi, A, B, and O, respectively. The upper part in each panel is the majority-spin DOS, and the lower the minority-spin one.}\label{dos1} \end{figure} \begin{figure}[!htb] \includegraphics[width=7cm]{fig2.eps} \caption{(color online) Spin-resolved density of states (DOS, in state/eV per formula unit) of double perovskite Bi$_2$ABO$_6$ for AB=MnOs (a) and AB=CrOs (b). The solid line is total DOS, and short-dashed, dot-dashed, and dotted lines refer to partial DOS projected in the atomic spheres of Bi, A, B, and O, respectively. The upper part in each panel is the majority-spin DOS, and the lower the minority-spin one.}\label{dos2} \end{figure} We investigate the formation energies to assess the stability of these materials towards experimental realization. To achieve reasonable reliability, we choose stable and readily available compounds as references, and try to use reference compounds whose relevant valence states are close to those in our compounds. Therefore, we use $\mathrm{Bi_2O_3}$, $\mathrm{Cr_3O_4}$, $\mathrm{Mn_3O_4}$, $\mathrm{Fe_3O_4}$, $\mathrm{MoO_2}$, and $\mathrm{OsO_2}$ as our reference compounds for calculating the formation energies. The formation energy is defined as \begin{equation} E_f = E(\mathrm{Bi_2 ABO_6})-E_{\mathrm{ref}}, \end{equation} where $E(\mathrm{X})$ is the total energy of X, and $E_{\mathrm{ref}}$ is defined as $E(\mathrm{Bi_2O_3 })+\frac 13 E(\mathrm{A_3O_4 })+2E(\mathrm{BO_2})-\frac 76 E(\mathrm{O_2})$. This criterion is much more severe than merely using AO compounds or bulk A materials, because an O atom in the gas state has a higher energy than in compounds such as $\mathrm{Fe_3O_4}$. It should also be more accurate because the bonds in our materials form almost exclusively between metal atoms and O, not between metal atoms. The formation energies of the four compounds are summarized in Table II. The negative values mean that these compounds can probably be realized. \begin{table}[!htb] \caption{Calculated values of the formation energy ($E_f$ in eV), magnetic moment of atom A ($M_A$ in $\mu_B$), magnetic moment of atom B ($M_B$ in $\mu_B$), total magnetic moment ($M$ in $\mu_B$ per formula unit), half-metallic gap ($E_g$ in eV), and Curie temperature ($T_c$ in K) of double perovskite Bi$_2$ABO$_6$ for AB = FeMo, MnMo, MnOs, and CrOs.}\label{table2} \begin{ruledtabular} \begin{tabular}{ccccc} AB & FeMo & MnMo & MnOs & CrOs \\ \hline $M_A$ & 3.638 & 4.279 & 4.066 & 2.636 \\ $M_B$ & -1.755& -1.391& -1.041& -0.613 \\ $M$ & 2.000 & 3.000 & 3.000 & 2.000 \\ \hline $E_g$ & 0.71 & 0.47 & 0.46 & 0.25 \\ \hline $E_f$ & -0.41 & -0.26 & -0.42 & -0.29 \\ \hline $T_c$ & 650 & 255 & 174 & 201 \\ \end{tabular} \end{ruledtabular} \end{table} In order to estimate the Curie temperatures ($T_c$) of the materials, we calculate the spin exchange interactions between the nearest and next-nearest neighboring magnetic atoms (A and B) using the 20-atom unit cells.
Strictly speaking, there is some induced spin density in the spheres of the Bi and O atoms, less than $0.05\mu_B$. Because these moments are very small compared to those in the spheres of the magnetic atoms, we shall consider only the magnetic atoms in the following. The A and B atoms form a lattice of magnetic unit cells (cubic unit cells of a NaCl crystal structure) \cite{oxide,sum1,sum2}. In these calculations, we fix the structures and change the magnetic orders of the A and B atoms. In order to make the electronic steps converge for a given magnetic order, the linear mixing parameter has to be decreased to a small value, 0.1 or smaller. By comparing the total energies, we obtain the spin exchange interaction constants: $J_{\mathrm{AB}}$ for the nearest A-B pair, and $J_{\mathrm{AA}}$ and $J_{\mathrm{BB}}$ for the next-nearest A-A and B-B pairs. $J_{\mathrm{AB}}$ is dominant over the others. The resulting spin Hamiltonian reads: \begin{equation} H=\sum_{\langle ij\rangle}J_{ij}\vec{S}_i\cdot \vec{S}_j \end{equation} where $\vec{S}_i$ is the spin operator at site $i$ (in both the A and B sublattices), the summation is over spin pairs, and the spin interaction constant $J_{ij}$ is limited to the nearest and next-nearest neighboring spins. \begin{figure}[!htb] \includegraphics[width=7cm]{fig3.eps} \caption{(color online) Average normalized magnetizations as functions of temperature for double perovskite Bi$_2$FeMoO$_6$ for four different $L$ values. Monte Carlo simulations are done with $L\times L\times L$ magnetic unit cells.}\label{mag} \end{figure} We carry out Monte Carlo simulations to estimate the $T_c$ of the materials\cite{mc,metropolis}. It is well known that the Curie temperature is somewhat underestimated if the classical approximation to the Heisenberg model (2) is used in the Monte Carlo simulation, but it is much overestimated if model (2) is reduced to an Ising model. For comparison, we perform our Monte Carlo simulations with both approximate models. We present the average normalized magnetization of Bi$_2$FeMoO$_6$, from the classical Heisenberg model, as a representative example in Fig. 3. The $T_c$ value is estimated to be 650 K. The other compounds are treated in the same way, and the calculated $T_c$ values are summarized in Table II. In contrast, the Ising-model results are 1010, 396, 264, and 270 K for AB = FeMo, MnMo, MnOs, and CrOs, respectively. Therefore, the Curie temperatures of the four half-metallic ferrimagnets are at least 650, 255, 174, and 201 K for Bi$_2$FeMoO$_6$, Bi$_2$MnMoO$_6$, Bi$_2$MnOsO$_6$, and Bi$_2$CrOsO$_6$, respectively. A Curie temperature well above room temperature could thus be realized in Bi$_2$FeMoO$_6$. \section{Discussions and Conclusion} Our calculated results show that the spin exchange interaction between the nearest A and B atoms is positive, while the A-A and B-B interactions are either weak or negative depending on the specific A and B atoms\cite{sum1,sum2}. In the case of Bi$_2$FeMoO$_6$, our calculations show that the nearest A-B spin exchange energy is 39.2 meV, and the nearest A-A and B-B spin exchange energies are 0.13 and -0.71 meV, respectively. The main spin interaction is mediated by the O atom between the magnetic A and B atoms, with the A-O-B bond angle being almost 180$^{\circ}$; it is therefore an antiferromagnetic superexchange. The A atom contributes a magnetic moment different from that of the B atom, so that ferrimagnetism is formed in these double perovskite compounds.
Possible overlap of the wave functions on neighboring O atoms may play some role in these compounds, but the main mechanism for the ferrimagnetism must be the antiferromagnetic superexchange between the nearest A and B atoms. In summary, our first-principles calculations show that four double perovskite oxides, Bi$_2$ABO$_6$ (AB = FeMo, MnMo, MnOs, and CrOs), have negative formation energies, from -0.42 to -0.26 eV per formula unit. In the case of Bi$_2$FeMoO$_6$, our calculated results show that its half-metallic gap and Curie temperature reach 0.71 eV and 650 K, respectively. This indicates that these compounds could probably be realized and that high spin polarization could be achieved at high temperature. We believe that at least some of them could be synthesized soon and would prove useful for spintronic applications. \begin{acknowledgments} This work is supported by the Chinese Department of Science and Technology (Grant No. 2012CB932302) and by the Natural Science Foundation of China (Grant Nos. 11174359 and 10874232). \end{acknowledgments}
{ "timestamp": "2012-10-30T01:00:48", "yymm": "1210", "arxiv_id": "1210.3706", "language": "en", "url": "https://arxiv.org/abs/1210.3706" }
\section*{Results and discussion} \subsection*{Elastic energy of a thin shell} The elastic energy of a deformed spherical shell of radius $R$ is calculated using \emph{shallow-shell theory}~\cite{koiter_stability}. This approach considers a shallow section of the shell, small enough so that slopes measured relative to the section base are small. The in-plane displacements of the shallow section are parametrized by a two-component phonon field $u_i(\mathbf{x})$, $ i={1,2}$; the out-of-plane displacements are described by a field $f(\mathbf{x})$ in a coordinate system $\mathbf{x}=(x_1,x_2)$ tangent to the shell at the origin. We focus on \emph{amorphous} shells, with uniform elastic properties, and can thus neglect the effect of the 12 inevitable disclinations associated with crystalline order on the surface of a sphere~\cite{lidmar_virus_2003}. In the presence of an external pressure $p$ acting inward, the elastic energy for small displacements in terms of the bending rigidity $\kappa$ and Lam\'e coefficients $\mu$ and $\lambda$ reads (see {\it Supplementary Information} for details): \begin{equation} G=\int d^2x\,\left[\frac{\kappa}{2} (\nabla^2 f)^2 +\mu u_{ij}^2 + \frac{\lambda}{2} u_{kk}^2-pf\right], \end{equation} where the nonlinear strain tensor is \begin{equation} u_{ij}(\mathbf{x})=\frac{1}{2}\left(\partial_i u_j+\partial_j u_i +\partial_i f \partial_j f\right)-\delta_{ij}\frac{f}{R}. \end{equation} Here, $d^{2}x \equiv \sqrt{g} dx_{1}dx_{2}$, where $g$ is the determinant of the metric tensor associated with the spherical background metric. Within shallow shell theory, $g \approx 1$ (see {\it Supplementary Information}). If we represent the normal displacements in the form $f(\mathbf{x}) = f_0 + f^\prime(\mathbf{x})$, where $f_0$ represents the uniform contraction of the sphere in response to the external pressure, and $f^\prime$ is the deformation with reference to this contracted state so that $\int d^2x f^\prime =0$, then the energy is quadratic in fields $u_1$, $u_2$ and $f_0$. These variables can be eliminated in a functional integral of $\exp(-G[f^\prime, f_0,u_1,u_2]/k_\mathrm{B}T)$ by Gaussian integration (see {\it Supplementary Information} for details). The effective free energy $G_{\mathrm{eff}}$ which results is the sum of a harmonic part $G_0$ and an anharmonic part $G_1$ in the remaining variable $f^\prime(\mathbf{x})$: \begin{eqnarray} \label{eqn_effectivef_pressure} G_0 & = & \frac{1}{2}\int d^2 x\left[\kappa(\nabla^2 f^\prime)^2-\,\frac{pR}{2} |\nabla f^\prime|^2+\frac{Y}{R^2} {f^\prime}^2\right],\\ G_1& = &\frac{Y}{2}\int d^2 x\left[\left(\frac{1}{2}P^\mathrm{T}_{ij}\partial_i f^\prime \partial_j f^\prime\right)^2-\frac{f^\prime}{R}P^\mathrm{T}_{ij}\partial_i f^\prime \partial_j f^\prime \right]. \nonumber \end{eqnarray} where $Y = 4\mu(\mu+\lambda)/(2\mu+\lambda)$ is the two-dimensional Young modulus and $P^\mathrm{T}_{ij}=\delta_{ij}-\partial_i \partial_j/\nabla^2$ is the transverse projection operator. The ``mass'' term $Y({f^\prime}/R)^2$ in the harmonic energy functional reflects the coupling between out-of-plane deformation and in-plane stretching due to curvature, absent in the harmonic theory of flat membranes (plates). The cubic interaction term with a coupling constant $-Y/2R$ is also unique to curved membranes and is prohibited by symmetry for flat membranes. These terms are unusual because they have system-size-dependent coupling constants. Note that an inward pressure ($p >0$) acts like a negative $R$-dependent surface tension in the harmonic term. 
As required, the effective elastic energy of fluctuating flat membranes is retrieved for $R\to \infty$ and $p = 0$. In the following, we exclusively use the field $f^\prime(\mathbf{x})$ and thus drop the prime without ambiguity. When only the harmonic contributions are considered, the equipartition result for the thermally generated Fourier components $f_\mathbf{q} = \int d^2x \,f(\mathbf{x})\exp(i\mathbf{q}\cdot\mathbf{x})$ with two-dimensional wavevector $\mathbf{q}$ is \begin{equation} \label{eqn_corrfn_gaussian} \langle f_\mathbf{q}f_\mathbf{q^\prime} \rangle_0 = \frac{Ak_\mathrm{B}T \delta_{\mathbf{q},\mathbf{-q^\prime}}}{\kappa q^4 -\frac{pR}{2}q^2+ \frac{Y}{R^2}}, \end{equation} where $A$ is the area of integration in the $(x_1,x_2)$ plane. Long-wavelength modes are restricted by the finite size of the sphere, \emph{i.e.} $q \gtrsim 1/R$. In contrast to flat membranes for which the amplitude of long-wavelength ($q \to 0$) modes diverges as $k_\text{B}T/(\kappa q^4)$, the coupling between in-plane and out-of-plane deformations of curved membranes cuts off fluctuations with wavevectors smaller than a characteristic inverse length scale~\cite{zhang_scaling_1993}: $$q^* = (\ell^*)^{-1} = \left(\frac{Y}{\kappa R^2}\right)^{1/4} \equiv \frac{\gamma^{1/4}}{R},$$ where we have introduced the dimensionless {\it F\"oppl-von K\'arm\'an number} $\gamma = YR^2/\kappa$ \cite{lidmar_virus_2003}. We focus here on the case $\gamma \gg 1$, so $\ell^{*} \ll R$. As $p$ approaches $p_\mathrm{c} \equiv 4\sqrt{\kappa Y}/R^2$, the modes with $q = q^*$ become unstable and their amplitude diverges. This corresponds to the well-known buckling transition of spherical shells under external pressure~\cite{koiter_stability}. When $p > p_\mathrm{c}$, the shape of the deformed shell is no longer described by small deformations from a sphere, and the shallow shell approximation breaks down. \begin{figure*} \centering{} \includegraphics[width=170mm]{fig2.pdf} \caption{\textbf{Fluctuation spectrum in spherical harmonics.} Spherical harmonic amplitude of the shape fluctuations of elastic shells plotted against the dimensionless spherical wavenumber $l$ for a shell with $R=40r_{0}, Y=577\epsilon/r_{0}^{2}$ and $\kappa=50\epsilon$ at temperatures $k_\text{B}T/\kappa = 7.4\times 10^{-4}$ (blue), 0.07 (red) and 0.18 (yellow). The fluctuation amplitudes are scaled by $k_{\text{B}}T$ so that the spectra at different temperatures would coincide in the harmonic approximation. Each subfigure corresponds to a different value of the external pressure: $p=0$ (\textbf{a}) and $p=0.2p_{\text{c}}$ (\textbf{b}). The symbols are from Monte Carlo simulations, and the solid lines are the theoretical prediction, Eq.~\ref{eqn-almsurf}, using the renormalized elastic constants from perturbation theory (Eqs.~\ref{eqn_smallpyr}--\ref{eqn_smallpkr}), except for the lowest temperature, where the bare elastic constants are used since the anharmonic effects are negligible.}\label{fig_sphfl} \end{figure*} \subsection*{Anharmonic corrections to elastic moduli} The anharmonic part of the elastic energy, neglected in the analysis described above, modifies the fluctuation spectrum by coupling Fourier modes at different wavevectors. Upon rescaling all lengths by $\ell^*$, it can be shown that the size of anharmonic contributions to $\langle |f_{\mathbf{q}}|^2 \rangle$ is set by the dimensionless quantities $k_\text{B}T \sqrt{\gamma}/\kappa$ and $p/p_\text{c}$.
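Numerically, these characteristic scales follow directly from the definitions above; a short sketch for the shell of Fig.~\ref{fig_sphfl} (units set by $r_0$ and $\epsilon$; the temperature, area, and pressure are placeholders), together with the harmonic spectrum of Eq.~\ref{eqn_corrfn_gaussian}:
\begin{verbatim}
import numpy as np

kappa, Y, R = 50.0, 577.0, 40.0         # eps, eps/r0^2, r0 (Fig. 2 shell)
kT, A, p = 0.05, 1.0, 0.0               # placeholder values
gamma = Y * R**2 / kappa                # Foppl-von Karman number
qstar = gamma**0.25 / R                 # inverse elastic length 1/l*
p_c = 4.0 * np.sqrt(kappa * Y) / R**2   # classical buckling pressure
q = np.linspace(1.0 / R, 2.0, 400)      # modes restricted to q > ~1/R
spec = A * kT / (kappa * q**4 - 0.5 * p * R * q**2 + Y / R**2)  # harmonic
\end{verbatim}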
The correlation function including the anharmonic terms in Eq.~\ref{eqn_effectivef_pressure} is given by the Dyson equation, \begin{equation} \label{eqn_dyson} \langle |f_\mathbf{q}|^2 \rangle = \frac{1}{\langle |f_\mathbf{q}|^2 \rangle_0^{-1}-\Sigma(\mathbf{q})} \end{equation} where $\Sigma(\mathbf{q})$ is the self-energy, which we evaluate to one-loop order using perturbation theory. While $\langle |f_\mathbf{q}|^2 \rangle$ can be numerically evaluated at any $\mathbf{q}$, an approximate but concise description of the fluctuation spectrum is obtained by expanding the self-energy up to order $q^4$ and defining renormalized values $Y_{\scriptscriptstyle\mathrm{R}}$, $\kappa_{\scriptscriptstyle\mathrm{R}}$ and $p_{\scriptscriptstyle\mathrm{R}}$ of the Young's modulus, bending rigidity and pressure, from the coefficients of the expansion: \begin{equation} Ak_\mathrm{B}T\langle|f_\mathbf{q\rightarrow 0}|^2\rangle^{-1} \equiv \kappa_{\scriptscriptstyle\mathrm{R}} q^4 - \frac{p_{\scriptscriptstyle\mathrm{R}}R}{2} q^2+\frac{Y_{\scriptscriptstyle\mathrm{R}}}{R^2} + O(q^6). \end{equation} To lowest order in $k_\mathrm{B}T/\kappa$ and $p/p_\text{c}$ we obtain the approximate expressions (see {\it Supplementary Information} for details) \begin{equation} \label{eqn_smallpyr} Y_{\scriptscriptstyle\mathrm{R}} \approx Y \left[1 -\frac{3}{256}\frac{k_\mathrm{B}T}{\kappa}\sqrt{\gamma}\left(1+\frac{4}{\pi}\frac{p}{p_{\text{c}}}\right)\right], \end{equation} \begin{equation} \label{eqn_genpressure} p_{\scriptscriptstyle\mathrm{R}} \approx p +\frac{1}{24\pi}\frac{k_\mathrm{B}T}{\kappa}p_\text{c}\sqrt{\gamma}\left(1+\frac{63\pi}{128}\frac{p}{p_{\text{c}}}\right) , \end{equation} and \begin{equation}\label{eqn_smallpkr} \kappa_{\scriptscriptstyle\mathrm{R}} \approx \kappa\left[1 +\frac{61}{4096}\frac{k_\mathrm{B}T}{\kappa}\sqrt{\gamma}\left(1-\frac{1568}{915\pi}\frac{p}{p_{\text{c}}}\right)\right]. \end{equation} (See {\it Supplementary Information} for details of the calculation and the complete dependence on $p/p_\text{c}$.) Thus the long-wavelength deformations of a thermally fluctuating shell are governed by a smaller effective Young's modulus, a larger effective bending rigidity, and a nonzero negative surface tension even when the external pressure is zero. At larger $p/p_\text{c}$, however, both the Young's modulus and the bending modulus fall compared to their zero temperature values, and the negative effective surface tension determined by $p_{\scriptscriptstyle\mathrm{R}}$ gets very large. The complete expressions for the effective elastic parameters, including the full $p/p_\text{c}$-dependence, show that all corrections diverge as $p/p_\text{c} \to 1$. Furthermore, the effective elastic constants are not only temperature-dependent, but also system size-dependent, since $\sqrt\gamma\propto R$. Although the corrections are formally small for $k_{\text{B}}T \ll \kappa$, they nevertheless diverge as $R \to \infty$! The thermally generated surface tension, strong dependence on external pressure, and size dependence of elastic constants are unique to spherical membranes, with no analogue in planar membranes. 
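For reference, a direct transcription of Eqs.~\ref{eqn_smallpyr}--\ref{eqn_smallpkr} (a sketch, valid only to lowest order in $k_\mathrm{B}T/\kappa$ and $p/p_\text{c}$):
\begin{verbatim}
import numpy as np

def renormalized(Y, kappa, R, kT, p):
    """Lowest-order renormalized Young modulus, pressure, and bending
    rigidity for a thermally fluctuating shell under pressure p."""
    gamma = Y * R**2 / kappa
    p_c = 4.0 * np.sqrt(kappa * Y) / R**2
    t = (kT / kappa) * np.sqrt(gamma)
    Y_R = Y * (1 - 3.0/256.0 * t * (1 + 4.0/np.pi * p/p_c))
    p_R = p + t * p_c / (24*np.pi) * (1 + 63*np.pi/128.0 * p/p_c)
    kappa_R = kappa * (1 + 61.0/4096.0 * t
                       * (1 - 1568.0/(915*np.pi) * p/p_c))
    return Y_R, p_R, kappa_R
\end{verbatim}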
\subsection*{Simulations of thermally fluctuating shells} \begin{figure*} \centering{} \includegraphics[width=170mm]{fig3.pdf} \caption{\textbf{Temperature dependence of response to point forces.} (\textbf{a}) Force-compression curves for simulations of indented shells (symbols) with $R=20 r_{0}$, $Y=577\epsilon/r_{0}^{2}$ and $\kappa = 50\epsilon$ at low ($k_\text{B}T/\kappa = 2\times 10^{-7}$) and high ($k_\text{B}T/\kappa = 0.5$) temperature. The lines show the expected linear response at small deformations with the spring constant $k_\text{s}$ measured independently from fluctuations in $z_{0}$ ($k_\text{s} = 29.15\epsilon/r_{0}^{2}$ for $k_\text{B}T/\kappa = 2\times 10^{-7}$, $k_\text{s} = 23.63\epsilon/r_{0}^{2}$ for $k_\text{B}T/\kappa = 0.5$). For indentation depths larger than $1-\langle z \rangle/\langle z_{0}\rangle \approx 0.05$, the regions around the poles become inverted and the response becomes nonlinear. Inset: schematic showing the definition of $z_{0}$ (the pole-to-pole distance in the absence of indentations) and $z$ (pole-to-pole distance following an indentation imposed by harmonic springs whose free ends are brought close together) for a snapshot of the fluctuating shell. (\textbf{b}) Blow-up of the boxed region near the origin in (\textbf{a}), highlighting the linear response regime. (\textbf{c}) Spring constants extracted from fluctuations for shells with three different radii as a function of temperature, rescaled by the classical result for linear response of thin shells at zero temperature. The dashed line shows the perturbation theory prediction, Eq.~\ref{eqn0sprconst}. The low-temperature spring constant deviates from the classical result due to a finite mesh size effect which falls with increasing $R$ (increasing mesh size). } \label{fig_indent} \end{figure*} We complement our theoretical calculations with Monte Carlo simulations of randomly triangulated spherical shells with discretized bending and stretching energies that translate directly into a macroscopic 2D Young's modulus $Y$ and a bending rigidity $\kappa$~\cite{vliegenthart_forced_2006,vliegenthart_compression_2011}. (Details are provided in {\it Materials and Methods}.) Here we study shells with $600 < \gamma < 35000$ and $2\times 10^{-6} <k_\text{B}T/\kappa < 0.5$. The anharmonic effects are negligible at the low end of this temperature range. The fluctuation spectra of the simulated spherical shells are evaluated using an expansion of the radial displacement field in spherical harmonics~\cite{gompper_random_1996}. The radial position of a node $i$ at angles ($\phi,\theta$) can be written as $r_i(\phi,\theta)=\widetilde{R_0}+f(\phi,\theta)$ with $\widetilde{R_0}$ the average radius of the fluctuating vesicle. The function $f(\phi,\theta)$ can be expanded in (real) spherical harmonics \begin{equation} \label{eqn0sphharm} f(\phi,\theta)=R\sum_{l=0}^{l_M}\sum_{m=-l}^{m=l} A_{lm}Y_{lm}(\phi,\theta) \end{equation} where $l_M$ is the large wavenumber cutoff determined by the number of nodes in the lattice $(l_M+1)^2=N$ \cite{gompper_random_1996}. The theoretical prediction for the fluctuation spectrum including anharmonic effects is ({\it Supplementary Information}) \begin{equation} \begin{split} k_\text{B}T\langle |A_{lm}|^{2}\rangle^{-1} \approx &\kappa_{\scriptscriptstyle\mathrm{R}}(l+2)^{2}(l-1)^{2} -p_{\scriptscriptstyle\mathrm{R}}R^{3}\left[1+\frac{l(l+1)}{2}\right] \\ &+Y_{\scriptscriptstyle\mathrm{R}}R^{2}\left[\frac{3(l^{2}+l-2)}{3(l^{2}+l)-2}\right].
\label{eqn-almsurf} \end{split} \end{equation} Fig.~\ref{fig_sphfl} displays our theoretical and simulation results for the fluctuation spectrum. At the lowest temperature (corresponding to $k_\text{B}T\sqrt\gamma/\kappa \approx 0.1 \ll 1$), the spectrum is well-described by the bare elastic parameters $Y$, $\kappa$ and $p$. At the intermediate temperature ($k_\text{B}T\sqrt\gamma/\kappa \approx 10$) anharmonic corrections become significant, enhancing the fluctuation amplitude for some values of $l$ by about 20\%--40\% compared to the purely harmonic contribution. At this temperature, one-loop perturbation theory successfully describes the fluctuation spectrum. However, at the highest temperature simulated ($k_\text{B}T\sqrt\gamma/\kappa \approx 24$), the anharmonic corrections observed in simulations approach 50\% of the harmonic contribution at zero pressure and over 100\% for the pressurized shell. With such large corrections, we expect that higher-order terms in the perturbation expansion contribute significantly to the fluctuation spectrum and the one-loop result overestimates the fluctuation amplitudes. Similarly, thermal fluctuations modify the mechanical response when a shell is deformed by a deliberate point-like indentation. In experiments, such a deformation is accomplished using an atomic force microscope~\cite{ivanovska_bacteriophage_2004,elsner_mechanical_2006}. In our simulations, two harmonic springs are attached to the north and south pole of the shell. By changing the position of the springs the depth of the indentation can be varied (Fig.~\ref{fig_indent}a, inset). The thermally averaged pole-to-pole distance $\langle z \rangle$ is measured and compared to its average value in the absence of a force, $\langle z_{0} \rangle$. For small deformations, the relationship between the force applied at each pole and the corresponding change in pole--pole distance is spring-like with a spring constant $k_{\text{s}}$: $\langle F \rangle \equiv k_{\text{s}} (\langle z_{0} \rangle - \langle z \rangle)$. The spring constant is related to the amplitude of thermal fluctuations in the normal displacement field in the \emph{absence} of forces by (see {\it Supplementary Information} for the detailed derivation) \begin{equation} \label{eqn0ksfluct} k_{\text{s}} = \frac{k_\text{B}T}{2\langle [f(\mathbf{x})]^{2} \rangle} \approx \frac{k_\text{B}T}{\langle z_{0}^{2}\rangle - \langle z_{0}\rangle^{2}}. \end{equation} This fluctuation-response relation is used to measure the temperature dependence of $k_{\text{s}}$ from simulations on fluctuating shells with no indenters. At finite temperature, anharmonic effects computed above make this spring constant both size- and temperature-dependent: \begin{equation} \label{eqn0sprconst} k_{\text{s}} \approx \frac{4\sqrt{\kappa Y}}{R}\left[1-0.0069\frac{k_\text{B}T}{\kappa} \sqrt\gamma\right]. \end{equation} Fig.~\ref{fig_indent}a shows the force-compression relation for a shell with $R = 20 r_0$ and dimensionless temperatures $k_\text{B}T\sqrt\gamma/\kappa = 1.36 \times 10^{-4}$ and $k_\text{B}T\sqrt\gamma/\kappa = 34$. The linear response near the origin (Fig.~\ref{fig_indent}b) is very well described by $k_{\text{s}}$ measured indirectly from the fluctuations in $z_{0}$ at each temperature, Eq.~\ref{eqn0ksfluct}. The thermal fluctuations lead to an appreciable 20\% reduction of the spring constant for this case. 
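A sketch of this fluctuation-response estimate, Eq.~\ref{eqn0ksfluct}, with synthetic $z_0$ samples standing in for simulation output:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
kT = 0.05                                  # placeholder temperature
z0 = 40.0 + 0.1 * rng.normal(size=100000)  # placeholder z0 samples
k_s = kT / np.var(z0)                      # kT / (<z0^2> - <z0>^2)
\end{verbatim}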
Measuring spring constants over a range of temperatures (Fig.~\ref{fig_indent}c) confirms that the shell response softens as the temperature is increased, in agreement with the perturbation theory prediction. We note, however, a small but systematic shift due to the finite mesh size of the shells, an approximately 5\% effect for the largest systems simulated here. At the higher temperatures ($k_\text{B}T\sqrt\gamma/\kappa >20$), the measured spring constants deviate from the perturbation theory prediction, once again, we believe, due to the effect of higher-order terms. \begin{figure}[htp!] \centering{} \includegraphics[width=87mm]{fig4.pdf} \caption{\textbf{Temperature dependence of the buckling pressure.} Buckling pressure for simulated shells at various radii and temperatures, normalized by the \emph{classical} (\emph{i.e.} zero temperature) critical buckling pressure $p_\text{c}$ for perfectly uniform, zero temperature shells with the same parameters. For all shells, $Yr_{0}^{2}/\kappa = 11.54$. In separate sets of symbols, we either vary the shell radius over the range $7.5 \leq R/r_{0} \leq 55$ while keeping the temperature constant ($k_{\text{B}}T = 2\times 10^{-6}\kappa$, blue circles; $k_{\text{B}}T = 0.4\kappa$, yellow squares) or vary the temperature over the range $2\times 10^{-8} \leq k_{\text{B}}T/\kappa \leq 0.4$ while keeping the radius constant at $R=20 r_{0}$ (red triangles). The parameter $k_{\text{B}}T\sqrt\gamma/\kappa$ sets the strength of anharmonic corrections for thermally fluctuating shells. The inset shows the $1/R^{2}$ dependence of the buckling pressure as the radius is varied, for shells at low and high temperature.} \label{fig_critpres} \end{figure} We also simulate the buckling of thermally excited shells under external pressure. When the external pressure increases beyond a certain value (which we identify as the renormalized buckling pressure), the shell collapses from a primarily spherical shape (Fig.~1a) to a shape with one or more large volume-reducing inversions (Fig.~1b). For zero temperature shells, this buckling is associated with the appearance of an unstable deformation mode in the fluctuation spectrum. At finite temperature, the appearance of a mode with energy of order $k_\text{B}T$ is sufficient to drive buckling. Anharmonic contributions, strongly enhanced by an external pressure, also reduce the effective energy associated with modes in the vicinity of $q^{*}$, primarily due to the enhanced negative effective surface tension $p_{\scriptscriptstyle\mathrm{R}}R/2$ (see Eq.~\ref{eqn_genpressure}). As a result, unstable modes arise at lower pressures and we expect thermally fluctuating shells to collapse at pressures below the classical buckling pressure $p_\text{c}$. This is confirmed by simulations of pressurized shells (Fig.~\ref{fig_critpres}). When anharmonic contributions are negligible ($k_\text{B}T\sqrt\gamma/\kappa \ll 1$), the buckling pressure observed in simulations is only $\sim 80\%$ of the theoretical value because the buckling transition is highly sensitive to the disorder introduced by the random mesh. Relative to this low temperature value, the buckling pressure is reduced significantly when $k_\text{B}T\sqrt\gamma/\kappa$ becomes large.
\subsection*{Conclusion and outlook} In summary, we have demonstrated that thermal corrections to the elastic response become significant when $k_\text{B}T\sqrt{\gamma}/\kappa \gg 1$ and that first-order corrections in $k_\text{B}T/\kappa$ already become inaccurate when $k_\text{B}T\sqrt{\gamma}/\kappa \gtrsim 20$. Human red blood cell (RBC) membranes are known examples of curved solid structures that are soft enough to exhibit thermal fluctuations. Typical measured values of the shear and bulk moduli of RBC membranes correspond to $Y\approx 25$ $\mu$N/m~\cite{park_measurement_2010,waugh_thermoelasticity_1979}, while reported values of the bending rigidity $\kappa$ vary widely from 6 $k_\text{B}T$ to 40 $k_\text{B}T$~\cite{park_measurement_2010,evans_bending_1983}. Using an effective radius of curvature $R \approx 7$ $\mu$m~\cite{park_measurement_2010} gives $k_\text{B}T\sqrt{\gamma}/\kappa$ in the range 2--35. Thus, RBCs could be good candidates to observe our predicted thermal effects, provided their bending rigidity is in the lower range of the reported values. For continuum shells fabricated from an elastic material with a 3D Young's modulus $E$, thickness $h$ and typical Poisson ratio $\approx 0.3$, $k_\text{B}T\sqrt\gamma/\kappa \approx 100Rk_\text{B}T/(Eh^4)$. Hence very thin shells with a sufficiently high radius-to-thickness ratio ($R/h$) \emph{must} display significant thermal effects. Polyelectrolyte~\cite{elsner_mechanical_2006} and protein-based~\cite{hermanson_engineered_2007} shells with $R/h \approx 10^3$ have been fabricated, but typical solid shells have a bending rigidity $\kappa$ several orders of magnitude higher than $k_\text{B}T$ unless $h \lesssim 5$ nm. Microcapsules of 6 nm thickness fabricated from reconstituted spider silk~\cite{hermanson_engineered_2007} with $R \approx 30$ $\mu\mathrm{m}$ and $E \approx 1$ GPa have $k_\text{B}T\sqrt{\gamma}/\kappa \approx 3$, and could exhibit measurable anharmonic effects. Thermal effects are particularly pronounced under finite external pressure---an indentation experiment carried out at $p = p_\mathrm{c}/2$ on the aforementioned spider silk capsules would show corrections of 10\% from the classical zero-temperature theory. For similar capsules with half the thickness, perturbative corrections at $p=p_\mathrm{c}/2$ are larger than 100\%, reflecting a drastic breakdown of shell theory because of thermal fluctuations. The breakdown of classical shell theory explored here points to the need for a renormalization analysis, similar to that carried out already for flat plates~\cite{statistical_1988}.
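As a numerical check on the spider-silk estimate quoted above, the sketch below evaluates $k_\text{B}T\sqrt{\gamma}/\kappa$ from the standard thin-shell relations $\kappa = Eh^{3}/[12(1-\nu^{2})]$ and $Y = Eh$, assuming a Poisson ratio of $0.3$:
\begin{verbatim}
import numpy as np

kT = 4.1e-21                          # J, room temperature
E, h, R, nu = 1e9, 6e-9, 30e-6, 0.3   # Pa, m, m, Poisson ratio
kappa = E * h**3 / (12 * (1 - nu**2))
Y = E * h
gamma = Y * R**2 / kappa
print(kT * np.sqrt(gamma) / kappa)    # ~3, as quoted in the text
\end{verbatim}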
\begin{document} \begin{titlepage} \begin{flushright} {\tt hep-th/...} \tt {FileName:....tex} \\ {\tt \today} \end{flushright} \vspace{0.5in} \begin{center} {\large \bf Diffractive and deeply virtual Compton scattering in holographic QCD}\\ \vspace{10mm} Alexander Stoffers and Ismail Zahed\\ \vspace{5mm} {\it \small Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA}\\ \vspace{10mm} {\tt October 13, 2012} \end{center} \begin{abstract} We further analyze the holographic dipole-dipole scattering amplitude developed in \cite{Basar:2012jb,Stoffers:2012zw}. Gribov diffusion at strong coupling yields the scattering amplitude in a confining background. We compare the holographic result for the differential cross section to proton-proton and deeply virtual Compton scattering data. \end{abstract} \end{titlepage} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \section{Introduction} In \cite{Basar:2012jb,Stoffers:2012zw} a holographic version of the dipole-dipole scattering approach \cite{Mueller:1989st, Mueller:1994gb,Mueller:1993rr,Mueller:1994jq,Iancu:2003uh, Nikolaev:1990ja,Nikolaev:1991et,Salam:1995zd,Salam:1995uy,Navelet:1996jx,Navelet:1997tx,GolecBiernat:1998js} in the Regge limit is used to describe high energy hadron-hadron scattering within the context of holographic QCD. In this limit, the scattering amplitude is dominated by pomeron exchange, i.e. the exchange of ordered gluon ladders with vacuum quantum numbers. The holographic pomeron is argued \cite{Basar:2012jb,Stoffers:2012zw} to be the exchange of a non-critical, closed string in transverse AdS$_3$. In the dipole-dipole scattering approach, two Wilson loops are correlated via a minimal surface with string tension $\sigma_T$. In the presence of a large rapidity gap $\chi$ and large impact parameter $b$, the closed string exchange is T-dual to an open string exchange subjected to a longitudinal 'electric' field $E=\sigma_T\,{\rm tanh}(\chi/2)$ that causes the oppositely charged string end-points to accelerate~\cite{Basar:2012jb}. This acceleration induces an Unruh temperature $T_U\approx \chi/(2\pi b)$ in the middle of the string world-sheet. For large impact parameter, the Unruh temperature is low and only the tachyon mode of the non-critical string is excited. This tachyonic string mode is diffusive in curved AdS$_3$, which is reminiscent of Gribov's diffusion in QCD. In particular, the properly normalized diffusion kernel with suitable boundary conditions in the infrared yields a {\it wee-dipole} density that is similar to the QCD one in the conformal limit. The convolution of the two {\it wee-dipole} densities yields the eikonalized scattering amplitude and allows for a 'partonic' picture similar to \cite{Brodsky:2006uqa,Brodsky:2006uq}, albeit at strong coupling. The holographic, strong coupling description gives access to the saturation regime at small Bjorken $x$ and small momentum transfer. In \cite{Stoffers:2012zw} the holographic dipole-dipole cross section was compared to DIS data from HERA. This led to a fit of the 't Hooft coupling $\lambda$ through the slope of the proton structure function $F_2$, while the remaining parameters were adjusted to be in reasonable agreement with QCD expectations. The scattering amplitude was obtained both in a confining background with a hard wall and in the conformal limit.
Both backgrounds yield a cross section that is comparable to the data. Exclusive diffractive processes such as proton-proton ($pp$) diffraction and deeply virtual Compton scattering (DVCS) reveal information about the proton shape in the transverse plane. We will use the holographic dipole-dipole scattering model \cite{Basar:2012jb,Stoffers:2012zw} to describe $pp$ diffraction and DVCS. At large momentum transfer, the holographic differential $pp$ cross section is sensitive to length scales of the order of the typical string length $1/\sqrt{\sigma_T}$, which in the confining background is of the order of the IR cutoff. To better constrain the two parameters controlling the effective size of the proton and the IR cutoff, we compare the holographic result for diffractive proton-proton and DVCS cross sections with the data. The dipole picture has been used to describe exclusive diffractive processes, see for example \cite{Shoshi:2002in}, \cite{Kowalski:2006hc}, \cite{Donnachie:2000px,McDermott:2001pt, Favart:2004uv,Kopeliovich:2008ct}. Within the gauge/gravity duality, hadron-hadron scattering and the holographic pomeron have been discussed in numerous places, see e.g. \cite{Rho:1999jm,Basar:2012jb,Stoffers:2012zw,Janik:1999zk,Janik:2001sc,Janik:2000aj, Janik:2000pp, Polchinski:2001tt, Polchinski:2002jw, Brower:2006ea,Brower:2007xg, Andreev:2004sy,Andreev:2004jm, Hatta:2007cs, Hatta:2007he,Albacete:2008ze, Albacete:2008vv, Cornalba:2008sp,Cornalba:2010vk}. In particular, $pp$ diffraction was studied in \cite{Domokos:2009hm,Domokos:2010ma} and DVCS in \cite{Gao:2009se,Marquet:2010sf,Nishio:2011xz, Costa:2012fw}. Our construction relies on the holographic results established in~\cite{Basar:2012jb,Stoffers:2012zw}, which involve a non-critical string in $D_\perp=3$ dimensions and in which the pomeron is the tachyon mode at large impact parameter. They are rooted in the widely used approach to dipole-dipole scattering at high energy. This paper is structured as follows. We briefly review the main results of \cite{Basar:2012jb,Stoffers:2012zw} in section 2 before comparing the differential $pp$ cross section to CERN ISR and LHC data in section 3. A comparison to DVCS data from HERA is made in section 4, and conclusions are given in section 5. \section{Diffractive scattering as dipole-dipole scattering} In the dipole-dipole scattering approach at high energies ($\chi = \ln (\frac{s}{s_0})\gg 1$), the scattering amplitude for the process $a \ p \rightarrow c \ p$ factorizes and can be written as \begin{eqnarray} {\cal T}(\chi, {\bf b}_\perp) = \int_0^{\infty} du \int_0^{\infty} du' \ \psi^*_{a}(u) \psi^*_p(u') \ {\mathcal T}_{DD}(\chi, {\bf b}_\perp, u, u') \ \psi_{b}(u) \psi_p(u'), \label{amplitude} \end{eqnarray} with transverse impact parameter ${\bf b}_\perp$, rapidity $\chi$ and dipole-dipole amplitude ${\mathcal T}_{DD}$. The wave functions $\psi$ are parametrized by $u=-{\rm ln}(z/z_0)$, with the effective dipole size $z$ and the IR cutoff $z_0$. The dipole-dipole amplitude is evaluated using the gauge/gravity duality and the virtuality of the scatterers is identified with the holographic direction of the curved space \cite{Stoffers:2012zw}, \cite{Polchinski:2001tt,Polchinski:2002jw}, \cite{Brodsky:2006uqa,Brodsky:2006uq}.
In the eikonal approximation the differential cross section reads \begin{eqnarray} \frac{d \sigma_{{\tiny{a p \rightarrow c p}}}}{dt}(\chi, |t|) &=& \frac{1}{16 \pi s^2} |{\cal T} (\chi, |t| )|^2 \nonumber \\ &=&\frac{1}{4 \pi} \left|i \int d{\bf b}_\perp \int du \int du' \ e^{iq_\perp \cdot {\bf b}_\perp} \ |\psi_{ab} (u)|^2 |\psi_{p} (u')|^2 \ (1-e^{{\bf WW}}) \right|^2 \label{dsigmadt}\\ &=& \frac{\pi}{4} \left| i \int \ d|{\bf b}_\perp|^2 \int du \int du' \ J_0(\sqrt{|{\bf b}_\perp|^2|t|}) \ |\psi_{ab} (u)|^2 |\psi_p (u')|^2 \ (1-e^{{\bf WW}}) \right|^2 \nonumber \\ \label{dsigmadtbessel} \end{eqnarray} with $t=- q_\perp^2$. Here, $J_0$ is the Bessel function and the overlap amplitude is defined by $|\psi_{ab} (u)|^2 \equiv \psi^*_a(u)\psi_b(u)$. Note that the scattering amplitude in (\ref{amplitude}) is purely imaginary. In \cite{Basar:2012jb,Stoffers:2012zw} the eikonal ${\bf WW}$, which is the correlator of two Wilson loops, was obtained through closed string exchange in a weakly curved, confining space. The string exchange can be viewed as a funnel connecting the two dipoles at a holographic depth $z$ and $z'$. Identifying these positions $z$, $z'$ of the endpoints of the funnel with the effective size of the dipoles gives rise to a density ${\bf N}$ of {\it wee-dipoles} surrounding each parent dipole. For $D_\perp=3$, we identify \cite{Stoffers:2012zw} \begin{eqnarray} {\bf WW} \approx - \frac{g_s^2}{4} \left(2\pi \alpha' \right)^{3/2} z z' \ {\bf N}(\chi, z,z', {\bf b}_\perp) \ . \label{WW} \end{eqnarray} We consider transverse AdS$_3$ \begin{eqnarray} ds_\perp^2=\frac{1}{z^2}(d{\bf b}_\perp^2+dz^2) \ , \label{metric} \end{eqnarray} with a cutoff (hard wall) imposed at some $z_0$. The {\it effective} string coupling will be defined as \begin{eqnarray} g_s\equiv \kappa \frac{1}{4\pi{\alpha'}^{2} N_c}\equiv\kappa \frac{{\lambda}}{4\pi N_c} \ , \label{GS} \end{eqnarray} with 't Hooft coupling $\lambda$ and Regge slope $\alpha'=1/(2\pi\sigma_T)\equiv 1/\sqrt{\lambda}$ (in units of the AdS radius), where $\sigma_T$ is the string tension. $N_c$ is the number of colors and the parameter $\kappa$ is fixed by the saturation scale, see \cite{Stoffers:2012zw}. The density reads \begin{eqnarray} {\bf N}(\chi, {\bf b}_\perp, z, z')=\frac{1}{zz'}\,\Delta(\chi,\xi)+\frac{z}{z'z_0^2}\,\Delta(\chi,\xi_*) \ , \label{dipoledensity} \end{eqnarray} and the heat kernel $\Delta$ in the background (\ref{metric}) is given by \begin{eqnarray} \Delta(\chi,\xi)=\frac{e^{(\alpha_{\bf P}-1) \chi}}{(4 \pi {\bf D} \chi)^{3/2}} \frac{\xi e^{-\frac{\xi^2}{4{\bf D} \chi}}}{\sinh(\xi)} \ , \end{eqnarray} with the chordal distances ($u=-{\ln}(z/z_0)$) \begin{eqnarray} {\rm cosh}\xi&=&{\rm cosh}(u'-u)+\frac 12 {\bf b}_\perp^2\,e^{u'+u}\, \nonumber \\ {\rm cosh}\xi_*&=&{\rm cosh}(u'+u)+\frac 12 {\bf b}_\perp^2 e^{u'-u}\, \ . \end{eqnarray} To leading order in $1/\sqrt{\lambda}$ the pomeron intercept and the diffusion constant read \begin{eqnarray} \alpha_{\bf P}&=&1+\frac{D_\perp}{12} \nonumber \ , \\ {\bf D}&=& \frac{\alpha'}2=\frac 1{2\sqrt{\lambda}} \ . \label{HOLO2} \end{eqnarray} In order to confront the holographic result for the differential proton-proton cross section with the data, we have to fix the parameters entering the eikonal ${\bf WW}$ in (\ref{WW}); a minimal numerical implementation of these formulas is sketched at the end of this paper. In \cite{Stoffers:2012zw} a comparison of the proton structure function $F_2$ to DIS data determined the following numerical values. $N_c$ was set to $3$ and the onium mass was taken to give $s_0=0.1 \ GeV^2$.
The value of the coupling, $\lambda = 23$, is fitted through the slope of the proton structure function $F_2$ in comparison to the DIS data. Phenomenological considerations on the saturation scale give $\kappa=2.5$. These numerical values will be used in the following analysis. The identification of the radial direction $z$ with the dipole size (inverse virtuality) and a comparison of the scaling of $F_2(x,Q^2)$ with $Q=1/z$ to the data confirms $D_\perp=3$. The effective size of the proton and the position of the IR cutoff were taken as $z_p= 1.8 \ GeV^{-1}$, $z_0= 2 \ GeV^{-1}$. While saturation effects become important at low $Q^2$ and small Bjorken $x$, both the confining result and the conformal limit for the scattering amplitude lead to a cross section that is comparable to the DIS data. \\ \section{Proton-proton scattering} Diffractive proton-proton scattering at small momentum transfer reveals information about the transverse shape of the proton, and the large $|t|$ behavior probes length scales of the order of the typical string length, which in the confining background is of the order of $z_0$. We will fit the effective dipole size of the proton, $z_p$, and the position of the hard wall, $z_0$, to the data. All other numerical values remain the same as in \cite{Stoffers:2012zw}, see also section 2.\\ Instead of using diffractive eigenstates \cite{Good:1960ba}, \cite{Ryskin:2012ry}, perturbative \cite{Shoshi:2002in}, \cite{Wirbel:1985ji} or holographic light-front wave functions \cite{Brodsky:2006uqa,Brodsky:2006uq}, we will fit the data assuming the proton distribution is identified with the {\it wee-dipole} distribution, i.e. the proton is sharply peaked at some scale $1/z_p$, \cite{Brower:2010wf}. More explicitly, the square of the wave function will be approximated by a delta-function, $|\psi_p(u)|^2 =\mathcal{N}_p \ \delta(u-u_p)$. We treat the normalization constant $\mathcal{N}_p$ that carries the dipole distribution to the physical proton distribution as a parameter to be fitted to the data. \subsection{Comparison to data: ISR} A comparison of the differential elastic $pp$ cross section, (\ref{dsigmadt}), to the CERN ISR data \cite{Amaldi:1979kd} is made by fitting the position of the dip and the slope of the shoulder region ($|t| > 1.5 \ GeV^2$). We use the full, unitary amplitude including higher order terms in the eikonal ${\bf WW}$. The importance of higher order terms in the eikonalized amplitude was also noted in \cite{Donnachie:2011aa}. A fit yields $z_0=2 \ GeV^{-1}$, $z_p=3.3 \ GeV^{-1}$ and $\mathcal{N}_p=0.16$, see Figure \ref{sigmadiff}. To leading order, the position of the (first) dip is sensitive to the effective size of the scatterer and the energy of the scattering object and scales with $1/({\bf D} \chi z_p^2)$. The fitted value $z_p > z_0$ lies beyond the cutoff set by the hard wall at $z_0$. This shortfall is readily fixed by considering a smooth wall, which is also more appropriate for describing hadron resonances \cite{Karch:2006pv}. The analysis of the dipole-dipole scattering amplitude in the smooth-wall background will be discussed elsewhere. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{dsigmadtdelta235.eps} \includegraphics[width=8cm]{dsigmadtdelta307.eps} \includegraphics[width=8cm]{dsigmadtdelta447.eps} \includegraphics[width=8cm]{dsigmadtdelta625.eps} \caption{Differential $pp$ cross section. Dots: data from CERN ISR. Solid line: holographic result.
See text.} \label{sigmadiff} \end{center} \end{figure} At high momentum transfer ($|t| > 2 \ GeV^2$), the typical length scales probed are of the size of the fundamental string length, which is of the order of the IR cutoff. Thus, the slope of the shoulder region is fitted by primarily adjusting the value of the confinement scale $z_0$. The result for the cross section in the conformal limit $z_0 \rightarrow \infty$ does not yield a reasonable fit to the data. We note that unlike perturbative QCD reasoning \cite{Dremin:2012ke}, where the partons are resolved at large $|t|$ leading to a power-like decrease, the slope of the cross section at $|t|\ge 2 \ GeV^2$ is essentially not power-like in our holographic model. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{density.eps} \includegraphics[width=8cm]{B.eps} \caption{Left: Transverse distribution of the {\it wee-dipole} density ${\bf N}$, (\ref{dipoledensity}), with $\sqrt{s}=20 \ GeV$. Right: Experimental results for the slope parameter \cite{Amaldi:1971kt,Amaldi:1976yf} in comparison with the slope parameter, (\ref{B}), for $|t|=0 \ GeV^2$ (red, solid) and $|t|=1 \ GeV^2$ (red, dashed). See text.} \label{density} \end{center} \end{figure} At $|t| \sim 0 \ GeV^2$, the slope parameter $B(s,|t|)$ gives the mean square proton radius \begin{eqnarray} B(s,|t|=0) \equiv \left(\frac{d}{dt} \ln(\frac{d \sigma_{pp \rightarrow pp}}{dt}(s,t)) \right)\Big|_{t=0} = \frac{1}{2} \frac{\int d|{\bf b}|^2 \ |{\bf b}|^2 \left( 1-e^{\bf WW} \right) }{\int d|{\bf b}|^2 \left( 1-e^{{\bf WW}} \right) } = \frac{1}{2} \langle |{\bf b}|^2 \rangle .\label{B} \end{eqnarray} The {\it wee-dipole} density ${\bf N}$ is peaked at small $\frac{|{\bf b}|}{z_p}$, see Figure \ref{density}, and expanding the exponential to first order in $g_s^2$ gives \begin{eqnarray} B(s) \sim {\bf D} \chi (z_p^2+z_0^2) \ . \end{eqnarray} Numerically, with the parameters above, ${\bf D} \chi (z_p^2+z_0^2) \approx 0.10 \times 8.3 \times 14.9 \ GeV^{-2} \approx 13 \ GeV^{-2}$ at $\sqrt{s}=20 \ GeV$, comparable to the measured values shown in Figure \ref{density}. The radius of the proton is not only proportional to the effective {\it wee-dipole} size $z_p$ but also receives contributions from the IR cutoff. At strong coupling, the diffusive nature of the eikonalized scattering amplitude is responsible for the scaling of the proton radius with the rapidity, $B(s)\sim {\bf D} \chi \sim \frac{1}{\sqrt{\lambda}} \ln \left(\frac{s}{s_0}\right)$. In the approach taken here, the transverse structure of the proton is modelled by a cloud of {\it wee-dipoles} surrounding a parent dipole. We can easily understand the scaling of the proton size with the coupling. As the coupling increases, the outer part of the cloud becomes more dilute and the proton shrinks. Figure \ref{density} shows the slope parameter for $|t|=0 \ GeV^2$ and $|t|=1 \ GeV^2$. In our setup, the momentum distribution between the two constituents of each dipole is symmetric, resulting in a small-size dipole, whereas asymmetric, large-size pairs dominate the small $|t|$ region, see e.g. \cite{Barone}. Thus, we suspect large-size dipoles to dominate the region $|t| \le 1 \ GeV^2$. The Coulomb contribution to the scattering amplitude can be neglected in the kinematic region $|t| > 0.01 \ GeV^2$ \cite{Amos:1985wx,Bernard:1987vq}. \subsection{Comparison to data: LHC} Elastic $pp$ scattering at LHC energies of $\sqrt{s}= 7 \ TeV$ allows us to test the energy dependence of our model. With the numerical values fitted at energies $\sqrt{s} \sim 20-60 \ GeV$, the fit in Figure \ref{TOTEM} indicates a mismatch in the energy dependence of the holographic model.
In order to get a better fit to the LHC data, the parameters governing the overall strength ($\kappa$), the position of the dip ($z_p$) and the slope of the shoulder ($z_0$) are adjusted. The fit (blue, dashed line) in Figure \ref{TOTEM} is obtained with $\kappa=3.75$, $z_p= 3.1 \ GeV^{-1}$, $z_0=1.5 \ GeV^{-1}$, while the fit (red, dotted) uses the values in section 3.1, $\kappa=2.5$, $z_p= 3.3 \ GeV^{-1}$, $z_0=2 \ GeV^{-1}$. This new parameter set for the LHC data is overall consistent with the set for the ISR data. \begin{figure}[h] \begin{center} \includegraphics[width=12cm]{TOTEM.eps} \caption{Differential $pp$ cross section. Black dots: data from the TOTEM experiment at LHC, \cite{TOTEM}. Dashed blue and dotted red lines: holographic result. See text.} \label{TOTEM} \end{center} \end{figure} \section{Deeply virtual Compton scattering} \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{DVCS40.eps} \includegraphics[width=8cm]{DVCS70.eps} \includegraphics[width=8cm]{DVCS82.eps} \includegraphics[width=8cm]{DVCS100.eps} \caption{Holographic result for the differential DVCS cross section compared to the HERA data, \cite{Aaron:2007ab,Chekanov:2008vy,Aaron:2009ac}. $\sqrt{s}=82 \ GeV$: solid - $Q^2=8 \ GeV^2$, dashed - $Q^2=15.5 \ GeV^2$, dotdashed - $Q^2=25 \ GeV^2$. $\sqrt{s}=40, 70, 100 \ GeV$: solid - $Q^2=8 \ GeV^2$, dashed - $Q^2=10 \ GeV^2$, dotdashed - $Q^2=20 \ GeV^2$. See text. } \label{DVCS} \end{center} \end{figure} At high energies DVCS is dominated by pomeron exchange. In the rest frame of the proton, the virtual photon fluctuates into a quark-antiquark dipole that interacts with the proton. We will now use the dipole-dipole eikonal (\ref{WW}) to access the differential DVCS cross section $\frac{d \sigma_{\gamma^* p \rightarrow \gamma p}}{dt}$, (\ref{dsigmadtbessel}). In section 3.1 we have refined the numerical values governing the transverse shape of the proton ($z_p$) and the IR cutoff scale ($z_0$) for the energy range of $\sqrt{s} \sim 20-60 \ GeV$. We will use these values to analyze the DVCS data in the range $\sqrt{s} \sim 40 - 100 \ GeV$. Now that all parameters of the holographic cross section are fixed, a comparison to the DVCS data serves as an additional test for our model. The $\gamma^* \gamma$ overlap amplitude is approximated by a delta function, $|\psi_{\gamma^* \gamma} (u)|^2=\mathcal{N}_{\gamma^* \gamma} \delta(u-u_{\gamma^* \gamma})$, peaked at some finite virtuality $Q_{\gamma^* \gamma} = 1/z_{\gamma^* \gamma}$. The normalization constant is fitted to $\mathcal{N}_{\gamma^* \gamma}=0.00016$. With the effective size of the proton, $z_p=3.3 \ GeV^{-1}$, and the position of the cutoff, $z_0=2 \ GeV^{-1}$, fixed, we compare our holographic result to the HERA data. Figure \ref{DVCS} shows that the cross section obtained from the holographic dipole-dipole scattering amplitude agrees with the data. \section{Conclusions} High energy hadronic scattering is dominated by pomeron exchange. Due to its non-perturbative nature at strong coupling, the holographic pomeron admits Gribov diffusion in curved space. Within the dipole-dipole scattering approach, the holographic description gives access to the saturation regime at small Bjorken $x$ and small momentum transfer. The parameters of the model developed in \cite{Basar:2012jb} were fitted against DIS data in \cite{Stoffers:2012zw}.
In order to refine the numerical values characterizing the proton shape and the IR cutoff, we have confronted the differential cross section with data on $pp$ scattering and DVCS. We have been able to get a reasonable fit to the $pp$ scattering data and obtained a refinement of the effective dipole size $z_p$ of the proton. The slope of the cross section in the region $|t|>2 \ GeV^2$ is sensitive to the IR cutoff scale, indicating the necessity of a confining background. However, the hard wall seems to be too crude an approximation for an IR cutoff. In order to fit the $pp$ data, we need $z_p \ge z_0$, suggesting that the smooth-wall background \cite{Karch:2006pv} is a more suitable setup. While the hard-wall construct allows for explicit and analytical results, the smooth-wall construct likely requires a numerical treatment and will be addressed elsewhere. The slope parameter $B(s,|t|)$ at small momentum transfer $|t| \sim 1 \ GeV^2$ agrees with the data. As is typical for diffusive processes, the mean square proton radius scales linearly in rapidity. At strong coupling, the proton shrinks with increasing 't Hooft coupling.\\ Having fixed the parameters of the holographic model, we find that the agreement with the DVCS data at small $|t|$ builds further confidence in the holographic approach to hadronic scattering. \vskip0.5cm {\bf Acknowledgements.} A.S. would like to thank Frasher Loshaj for useful discussions. This work was supported by the U.S. Department of Energy under Contract No. DE-FG-88ER40388. \newpage \small
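{\bf Appendix: a numerical sketch.} For orientation, the following minimal script assembles the eikonal (\ref{WW}), the {\it wee-dipole} density (\ref{dipoledensity}) and the intercept and diffusion constant (\ref{HOLO2}) into the differential cross section (\ref{dsigmadtbessel}) for elastic $pp$ scattering with delta-function wave functions. It is our own illustration rather than the code behind the figures; in particular, reading the impact parameter in the chordal distances in units of $z_0$ (consistent with $u=-\ln(z/z_0)$) is our assumption, as is the choice $\sqrt{s}=23.5 \ GeV$.
\begin{verbatim}
# Minimal sketch of Eq. (dsigmadtbessel) for elastic pp scattering with
# |psi_p(u)|^2 = N_p delta(u - u_p).  Units: powers of GeV throughout.
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

lam, kap, Nc, Dperp = 23.0, 2.5, 3, 3  # 't Hooft coupling, kappa, N_c, D_perp
alphap = 1.0 / np.sqrt(lam)            # alpha' in units of the AdS radius
D = alphap / 2.0                       # diffusion constant, Eq. (HOLO2)
aP = 1.0 + Dperp / 12.0                # pomeron intercept, Eq. (HOLO2)
gs = kap * lam / (4.0 * np.pi * Nc)    # effective string coupling, Eq. (GS)
z0, zp, Np = 2.0, 3.3, 0.16            # GeV^-1; hard wall, proton size, norm
s, s0 = 23.5**2, 0.1                   # GeV^2; one of the ISR energies
chi = np.log(s / s0)                   # rapidity
up = -np.log(zp / z0)                  # delta-function position u_p

def Delta(xi):                         # heat kernel on transverse AdS_3
    pref = np.exp((aP - 1.0) * chi) / (4.0 * np.pi * D * chi) ** 1.5
    ratio = xi / np.sinh(xi) if xi > 1e-12 else 1.0
    return pref * np.exp(-xi ** 2 / (4.0 * D * chi)) * ratio

def WW(b):                             # eikonal, Eq. (WW), at u = u' = u_p
    bh = b / z0                        # b in units of z0 (our assumption)
    xi = np.arccosh(1.0 + 0.5 * bh ** 2 * np.exp(2.0 * up))
    xis = np.arccosh(np.cosh(2.0 * up) + 0.5 * bh ** 2)
    N = Delta(xi) / zp ** 2 + Delta(xis) / z0 ** 2  # Eq. (dipoledensity)
    return -0.25 * gs ** 2 * (2.0 * np.pi * alphap) ** 1.5 * zp ** 2 * N

def dsigma_dt(t_abs):                  # GeV^-4; x 0.3894 for mb/GeV^2
    f = lambda b2: j0(np.sqrt(b2 * t_abs)) * (1.0 - np.exp(WW(np.sqrt(b2))))
    I, _ = quad(f, 0.0, 4.0e3, limit=500)
    return 0.25 * np.pi * Np ** 4 * I ** 2   # two deltas give N_p^2 in T

for tt in (0.1, 0.5, 1.0, 1.5, 2.0):
    print(tt, dsigma_dt(tt))
\end{verbatim}
With these conventions the eikonal is of order unity at zero impact parameter, so the higher order terms in ${\bf WW}$, kept here through the full exponential, do matter, as stressed in section 3.1.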
\section{Introduction}\label{S:introduction} Let $k$ be a field, and let ${k^{\operatorname{sep}}}$ be a separable closure. Let $X$ be a smooth projective geometrically integral $k$-variety, and let ${X^{\operatorname{sep}}} \colonequals X \times_k {k^{\operatorname{sep}}}$. If $k=\mathbb{C}$, then the Lefschetz $(1,1)$ theorem identifies the N\'eron--Severi group $\NS X$ (see Section~\ref{S:notation} for definitions) with the subgroup of ${\operatorname{H}}^2(X(\mathbb{C}),\mathbb{Z})$ mapping into the subspace ${\operatorname{H}}^{1,1}(X)$ of ${\operatorname{H}}^2(X(\mathbb{C}),\mathbb{C})$. Analogously, if $k$ is a finitely generated field, then the Tate conjecture describes $(\NS {X^{\operatorname{sep}}}) \tensor \mathbb{Q}_\ell$ in terms of the action of $\Gal({k^{\operatorname{sep}}}/k)$ on ${\operatorname{H}}^2_{{\textup{\'et}}}({X^{\operatorname{sep}}},\mathbb{Q}_\ell(1))$, for any prime $\ell \ne \Char k$. Can such descriptions be transformed into algorithms for computing $\NS {X^{\operatorname{sep}}}$? To make sense of this question, we assume that $k$ is replaced by a finitely generated subfield over which $X$ is defined; then $X$ and $k$ admit a finite description suitable for computer input (see Section~\ref{S:explicit}). Using the Lefschetz $(1,1)$ theorem involves working over the uncountable field $\mathbb{C}$, while using the Tate conjecture involves an action of an uncountable Galois group on a vector space over an uncountable field $\mathbb{Q}_\ell$, so it is not clear a priori that either approach can be made into an algorithm. In this paper, assuming only the ability to compute the finite Galois modules ${\operatorname{H}}^i_{{\textup{\'et}}}({X^{\operatorname{sep}}},\mu_{\ell^n})$ for each $i \le 2$ and $n$, we give an algorithm for computing $\NS {X^{\operatorname{sep}}}$ that terminates if and only if the Tate conjecture holds for $X$ (Remark~\ref{R:NS}). Moreover, if $k$ is finite, then we can even avoid computing the Galois modules ${\operatorname{H}}^i_{{\textup{\'et}}}({X^{\operatorname{sep}}},\mu_{\ell^n})$, by instead using point-counting to compute the zeta function of $X$, as is well known (Theorem~\ref{T:Num variant}\eqref{I:algorithm B}). Combining this with the truth of the Tate conjecture for K3 surfaces $X$ over finitely generated fields of characteristic not~$2$ (\cites{Nygaard1983,Nygaard-Ogus1985,Maulik-preprint,Charles-TC-preprint,Madapusi-preprint}) yields an unconditional algorithm for computing $\NS {X^{\operatorname{sep}}}$ for all such K3 surfaces (Theorem~\ref{T:unconditional NS for K3}). (See \cite{Tate1994}*{Section~5} and \cite{Andre1996b} for some other cases in which the Tate conjecture is known.) We also provide an unconditional algorithm for computing the torsion subgroup $(\NS {X^{\operatorname{sep}}})_{\operatorname{tors}}$ for any $X$ over any finitely generated field~$k$ (Theorem~\ref{T:NS_tors}). Finally, we also prove statements for cycles of higher codimension. In particular, we describe a conditional algorithm that computes the rank of the group $\Num^p {X^{\operatorname{sep}}}$ of codimension~$p$ cycles modulo numerical equivalence (Theorem~\ref{T:Num}). If ${k^{\operatorname{sep}}}$ is replaced by an algebraic closure ${\overline{k}}$ in any of the results above, the resulting analogue holds (Remarks \ref{R:Num Xbar} and~\ref{R:kbar}).
\section{Previous approaches} \label{S:previous approaches} Several techniques exist in the literature for obtaining information on N\'eron--Severi groups: \begin{itemize} \item Lower bounds on the rank are often obtained by exhibiting divisors explicitly. \item An initial upper bound is given by the second Betti number, which is computable (see Proposition~\ref{P:betti}). \item Over $\mathbb{C}$, Hodge theory provides the improved upper bound $h^{1,1}$, which again is computable. (Indeed, software exists for computing all the Hodge numbers $h^{p,q} \colonequals \dim {\operatorname{H}}^q(X,\Omega^p)$, as a special case of computing cohomology of coherent sheaves on projective varieties \cite{Vasconcelos1998}*{Appendix~C.3}.) \item Over a finite field $k$, computation of the zeta function can yield an improved upper bound: see Section~\ref{S:alternative} for details. \item Over finitely generated fields $k$, one can spread out $X$ to a smooth projective scheme $\mathcal{X}$ over a finitely generated $\mathbb{Z}$-algebra and reduce modulo maximal ideals to obtain injective specialization homomorphisms $(\NS {X^{\operatorname{sep}}}) \tensor \mathbb{Q} \to (\NS \mathcal{X}_{\overline{F}}) \tensor \mathbb{Q}$ where $F$ is the finite residue field (see \cite{VanLuijk2007-Heron}*{Proposition~6.2} or \cite{Maulik-Poonen2012}*{Proposition~3.6}, for example). Combining this with the method of the previous item bounds the rank of $\NS {X^{\operatorname{sep}}}$. In some cases, one can prove directly that certain elements of $(\NS \mathcal{X}_{\overline{F}}) \tensor \mathbb{Q}$ are not in the image of the specialization homomorphism, to improve the bound~\cite{Elsenhans-Jahnel2011-oneprime}. \item The previous item can also be improved by using more than one reduction if one takes into account that the specialization homomorphisms preserve additional structure, such as the intersection pairing in the case $\dim X=2$~\cite{VanLuijk2007} or the Galois action~\cite{Elsenhans-Jahnel2011}. In the $\dim X=2$ case, the discriminant of the intersection pairing can be obtained, up to a square factor, either from explicit generators for $(\NS \mathcal{X}_{\overline{F}}) \tensor \mathbb{Q}$~\cite{VanLuijk2007} or from the Artin--Tate conjecture~\cite{Kloosterman2007}. F.~Charles proved that for a K3 surface $X$ over a number field, the information from reductions is sufficient to determine the rank of $\NS {X^{\operatorname{sep}}}$, assuming the Hodge conjecture for $2$-cycles on $X \times X$~\cite{Charles-preprint}. \item If $X$ is a quotient of another variety $Y$ by a finite group $G$, then the natural map $(\NS {X^{\operatorname{sep}}}) \tensor \mathbb{Q} \to ((\NS {Y^{\operatorname{sep}}}) \tensor \mathbb{Q})^G$ is an isomorphism. For instance, this has been applied to \defi{Delsarte surfaces}, i.e., surfaces in $\mathbb{P}^3$ defined by a homogeneous form with four monomials, using that they are quotients of Fermat surfaces~\cite{Shioda1986}. \item When $X$ is an elliptic surface, the rank of $\NS {X^{\operatorname{sep}}}$ is related to the rank of the Mordell--Weil group of the generic fiber \citelist{\cite{Tate1995}*{p.~429}; \cite{Shioda1972}*{Corollary~1.5}; \cite{Shioda1990}*{Corollary~5.3}}. This has been generalized in various ways, for example to fibrations into abelian varieties \citelist{\cite{Kahn2009}; \cite{Oguiso2009}*{Theorem~1.1}}.
\item When $X$ is a K3 surface of degree~$2$ over a number field, the Kuga--Satake construction relates the Hodge classes on $X$ to the Hodge classes on an abelian variety of dimension $2^{19}$. B.~Hassett, A.~Kresch, and Yu.~Tschinkel use this to give an algorithm to compute $\NS {X^{\operatorname{sep}}}$ for such $X$~\cite{Hassett-Kresch-Tschinkel-preprint}*{Proposition~19}. \end{itemize} Also, \cite{Simpson2008} shows that if one assumes the Hodge conjecture, then one can decide, given a nice variety $X$ over ${\overline{\Q}} \subseteq \mathbb{C}$ and a singular homology class $\gamma \in {\operatorname{H}}_{2p}(X(\mathbb{C}),\mathbb{Q})$, whether $\gamma$ is the class of an algebraic cycle. \section{Notation} \label{S:notation} Given a module $A$ over an integral domain $R$, let $A_{{\operatorname{tors}}}$ be its torsion submodule, let $\widetilde{A} \colonequals A/A_{{\operatorname{tors}}}$, and let $\rk A \colonequals \dim_K(A \tensor_R K)$ where $K\colonequals \Frac R$. If $A$ is a submodule of another $R$-module $B$, the \defi{saturation} of $A$ in $B$ is $\{b \in B: nb \in A \textup{ for some nonzero $n \in R$}\}$. If $A$ is a $G$-module for some group $G$, then $A^G$ is the subgroup of invariant elements. We say that a $G$-module $A$ is \defi{finite} (resp. \defi{finitely generated}) if it is so as a set (resp.\ abelian group). Given a field $k$, let ${\overline{k}}$ be an algebraic closure, let ${k^{\operatorname{sep}}}$ be the separable closure inside ${\overline{k}}$, let $G_k \colonequals \Gal({k^{\operatorname{sep}}}/k) \simeq \Aut({\overline{k}}/k)$, and let $\kappa$ be the characteristic of $k$. A \defi{variety} $X$ over a field $k$ is a separated scheme of finite type over $k$. For such $X$, let ${X^{\operatorname{sep}}} \colonequals X \times_k {k^{\operatorname{sep}}}$ and ${\overline{X}} \colonequals X \times_k {\overline{k}}$. Call $X$ \defi{nice} if it is smooth, projective, and geometrically integral. Suppose that $X$ is a nice $k$-variety. Let $\Pic X$ be its \defi{Picard group}. Let $\PIC_{X/k}$ be the \defi{Picard scheme} of $X$ over $k$. There is an injection $\Pic X \to \PIC_{X/k}(k)$, but it is not always surjective. Let $\PIC^0_{X/k}$ be the connected component of the identity in $\PIC_{X/k}$. Let $\Pic^0 X \le \Pic X$ be the group of isomorphism classes of line bundles such that the corresponding $k$-point of $\PIC_{X/k}$ lies in $\PIC^0_{X/k}$; any such line bundle $\mathscr{L}$ (or divisor representing it) is called \defi{algebraically equivalent to $0$}. Equivalently, a line bundle $\mathscr{L}$ is algebraically equivalent to $0$ if there is a connected variety $B$ and a line bundle $\mathscr{M}$ on $X \times B$ such that $\mathscr{M}$ restricts to the trivial line bundle above one point of $B$ and to $\mathscr{L}$ above another (this holds even over the ground field $k$: take $B$ to be a component $H$ of $\EffDiv_X$ lying above a translate of $\PIC^0_{X/k}$ as in Lemma~\ref{L:no functors}(\ref{I:nice bijection},\ref{I:A exists})). Define the \defi{N\'eron--Severi group} $\NS X$ as the quotient $\Pic X/\Pic^0 X$; it can be identified with the set of components of $\PIC_{X/k}$ containing the class of a divisor of $X$ over $k$ (which is stronger than assuming that the component has a $k$-point). Then $\NS X$ is a finitely generated abelian group \cite{Neron1952}*{p.~145,~Th\'{e}or\`{e}me~2} (see \cite{SGA6}*{XIII.5.1} for another proof). 
Let $\PIC^\tau_{X/k}$ be the finite union of connected components of $\PIC_{X/k}$ parametrizing classes of line bundles whose class in $\NS {\overline{X}}$ is torsion. Let $\mathcal{Z}^p(X)$ be the group of codimension~$p$ cycles on $X$. Let $\Num^p X$ be the quotient of $\mathcal{Z}^p(X)$ by the subgroup of cycles numerically equivalent to $0$. Then $\Num^p X$ is a finite-rank free abelian group. Let $\mathcal{Z}^1(X)^\tau$ be the set of divisors $z \in \mathcal{Z}^1(X)$ having a positive multiple that is algebraically equivalent to $0$. Let $(\Pic X)^\tau$ be the image of $\mathcal{Z}^1(X)^\tau$ under $\mathcal{Z}^1(X) \to \Pic X$. If $m \in \mathbb{Z}_{>0}$ and $\kappa \nmid m$, and $i,p \in \mathbb{Z}$, let ${\operatorname{H}}^i({X^{\operatorname{sep}}},(\mathbb{Z}/m\mathbb{Z})(p))$ be the \'etale cohomology group; this is a finite abelian group. For each prime $\ell \ne \kappa$, define ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p))\colonequals \varprojlim_n {\operatorname{H}}^i({X^{\operatorname{sep}}},(\mathbb{Z}/\ell^n\mathbb{Z})(p))$, a finitely generated $\mathbb{Z}_\ell$-module; and define ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Q}_\ell(p)) \colonequals {\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p)) \tensor_{\mathbb{Z}_\ell} \mathbb{Q}_\ell$, a finite-dimensional $\mathbb{Q}_\ell$-vector space; its dimension $b_i(X)$ is independent of $p$, and is called an \defi{$\ell$-adic Betti number}. \section{Group-theoretic lemmas} \label{S:group-theoretic lemmas} Given any prime $\ell$, let $\ell'\colonequals \ell$ if $\ell \ne 2$, and $\ell'\colonequals 4$ if $\ell=2$. \begin{lemma}[cf.~\cite{Minkowski1887}*{\S1}] \label{L:Minkowski} Let $\ell$ be a prime. Let $G$ be a group acting through a finite quotient on a finite-rank free $\mathbb{Z}$-module or $\mathbb{Z}_\ell$-module $\Lambda$. If $G$ acts trivially on $\Lambda/\ell' \Lambda$, then $G$ acts trivially on $\Lambda$. \end{lemma} (For $\ell=2$, the hypothesis modulo $2$ alone would not suffice: the element $-1 \in \operatorname{GL}_1(\mathbb{Z}_2)$ has order $2$ and acts trivially on $\mathbb{Z}_2/2\mathbb{Z}_2$ but not on $\mathbb{Z}_2$; this is why $\ell' = 4$ when $\ell = 2$.) \begin{proof} Let $n \colonequals \rk \Lambda$. Write $\ell' \equalscolon \ell^s$. For $r \ge s$, let $U_r \colonequals 1 + \ell^r M_n(\mathbb{Z}_\ell)$. It suffices to show that there are no non-identity elements of finite order in the kernel $U_s$ of $\operatorname{GL}_n(\mathbb{Z}_\ell) \to \operatorname{GL}_n(\mathbb{Z}_\ell/\ell' \mathbb{Z}_\ell)$. In fact, for $r \ge s$ the binomial theorem shows that $1+A \in U_r \setminus U_{r+1}$ implies $(1+A)^\ell \in U_{r+1} \setminus U_{r+2}$, so by induction any non-identity $1+A \in U_s$ has infinitely many distinct powers, and cannot be of finite order. \end{proof} \begin{lemma} \label{L:cohomology} Let a topological group $G$ act continuously on a finite-rank free $\mathbb{Z}_\ell$-module $\Lambda$. Let $r\colonequals \rk \Lambda^G$. Then the following hold. \begin{enumerate}[\upshape (a)] \item\label{I:H1 torsion is finite} The continuous cohomology group ${\operatorname{H}}^1(G,\Lambda)[\ell^\infty]$ is finite. \item\label{I:growth rate} $\#(\Lambda/\ell^n \Lambda)^G = O(\ell^{rn})$ as $n \to \infty$. \end{enumerate} \end{lemma} \begin{proof} For each $n$, taking continuous group cohomology of $0 \to \Lambda \stackrel{\ell^n}\to \Lambda \to \Lambda/\ell^n \Lambda \to 0$ yields \begin{equation} \label{E:short exact sequence} 0 \to \frac{\Lambda^G}{\ell^n(\Lambda^G)} \to \left( \frac{\Lambda}{\ell^n \Lambda} \right)^G \to {\operatorname{H}}^1(G,\Lambda)[\ell^n] \to 0.
\end{equation} \begin{enumerate}[\upshape (a)] \item By~\eqref{E:short exact sequence} for $n=1$, the group ${\operatorname{H}}^1(G,\Lambda)[\ell]$ is finite. So if ${\operatorname{H}}^1(G,\Lambda)[\ell^\infty]$ is infinite, it contains a copy of $\mathbb{Q}_\ell/\mathbb{Z}_\ell$, contradicting the $Y=0$ case of \cite{Tate1976}*{Proposition~2.1}. \item In~\eqref{E:short exact sequence}, the group on the left has size $\ell^{rn}$, and the group on the right has size $O(1)$ as $n \to \infty$, by~\eqref{I:H1 torsion is finite}. \qedhere \end{enumerate} \end{proof} \section{Upper bound on the rank of the group of Tate classes} \label{S:stuff} \begin{setup} \label{Setup} Let $k$ be a finitely generated field. Let $G\colonequals G_k$. Let $X$ be a nice variety over $k$. Let $d\colonequals \dim X$. Fix $p \in \{0,1,\ldots,d\}$. For each $m \in \mathbb{Z}_{>0}$ with $\kappa \nmid m$, define $T_m \colonequals {\operatorname{H}}^{2p}({X^{\operatorname{sep}}},(\mathbb{Z}/m\mathbb{Z})(p))$. Fix a prime $\ell \ne \kappa$. Define $T \colonequals {\operatorname{H}}^{2p}({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p))$, and $V \colonequals {\operatorname{H}}^{2p}({X^{\operatorname{sep}}},\mathbb{Q}_\ell(p))$. \end{setup} An element of $V$ is called a \defi{Tate class} if it is fixed by a (finite-index) open subgroup of $G$. Let $V^{\Tate} \le V$ be the $\mathbb{Q}_\ell$-subspace of Tate classes. Let $M$ be the $\mathbb{Z}_\ell$-submodule of elements of $T$ mapping to Tate classes in $V$. Let $r \colonequals \rk M = \dim V^{\Tate}$. \begin{lemma} \label{L:Kummer} For each $i,n \in \mathbb{Z}_{\ge 0}$, there is an exact sequence \[ 0 \to \frac{ {\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p)) }{ \ell^n {\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p)) } \to {\operatorname{H}}^i({X^{\operatorname{sep}}},(\mathbb{Z}/\ell^n \mathbb{Z})(p)) \to {\operatorname{H}}^{i+1}({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p))[\ell^n] \to 0. \] \end{lemma} \begin{proof} Use \cite{MilneEtaleCohomology1980}*{Lemma~V.1.11} to take cohomology of \[ 0 \to \mathbb{Z}_\ell(p) \stackrel{\ell^n}\to \mathbb{Z}_\ell(p) \to (\mathbb{Z}/\ell^n \mathbb{Z})(p) \to 0.\qedhere \] \end{proof} \begin{corollary} \label{C:Kummer} For each $n \ge 0$, there is an exact sequence \[ 0 \to \frac{T}{\ell^n {T}} \to T_{\ell^n} \to {\operatorname{H}}^{2p+1}({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p))[\ell^n] \to 0. \] \end{corollary} \begin{proof} Take $i=2p$ in Lemma~\ref{L:Kummer}. \end{proof} \begin{corollary} \label{C:saturated Kummer} For each $n \ge 0$, there is a canonical injection $M/\ell^n M \hookrightarrow T_{\ell^n}$. \end{corollary} \begin{proof} Since $M$ is saturated in $T$, we have an injection $M/\ell^n M \hookrightarrow T/\ell^n T$. Compose with the first map in Corollary~\ref{C:Kummer}. \end{proof} \begin{lemma} \label{L:size of W_n^G} Let $t \in \mathbb{Z}_{\ge 0}$ be such that $\ell^t T_{\operatorname{tors}} =0$. Assume that $G$ acts trivially on $T_{\ell'}$. \begin{enumerate}[\upshape (a)] \item\label{I:size 1} For any $n \geq t$, we have $\# T_{\ell^n}^G \ge \ell^{r(n-t)}$. \item\label{I:size 2} We have $\# T_{\ell^n}^G = O(\ell^{rn})$ as $n \to \infty$. \item\label{I:rank min} We have \[ r = \min \left\{ \left\lfloor \frac{\log \# T_{\ell^n}^G}{\log \ell^{n-t}} \right\rfloor : n > t \right\}. \] \end{enumerate} \end{lemma} \begin{proof} By Corollary~\ref{C:saturated Kummer}, $G$ acts trivially on $M/\ell'M$, and hence also on $M/\ell M$ and $\widetilde{M}/\ell' \widetilde{M}$. 
The $G$-orbit of each element of $\widetilde{M}$ is finite by definition of Tate class, and $\widetilde{M}$ is finitely generated as a $\mathbb{Z}_\ell$-module, so $G$ acts through a finite quotient on $\widetilde{M}$. By Lemma~\ref{L:Minkowski}, $G$ acts trivially on $\widetilde{M}$. \begin{enumerate}[\upshape (a)] \item Multiplication by $\ell^t$ on $M$ kills $M_{{\operatorname{tors}}}$, so it factors as $M \to \widetilde{M} \twoheadrightarrow \ell^t M$. Hence $G$ acts trivially on $\ell^t M$, so for $n\geq t$, the quotient $\ell^t M/\ell^n M$ is contained in $(M/\ell^n M)^G$. By Corollary~\ref{C:saturated Kummer}, we deduce the inequality $\# T_{\ell^n}^G \ge \# (M/\ell^n M)^G \ge \# (\ell^t M/\ell^n M) \ge \ell^{r(n-t)}$. \item By definition of $M$, we have $\widetilde{T}^G \subseteq \widetilde{M} = \widetilde{M}^G \subseteq \widetilde{T}^G$, so $\rk \widetilde{T}^G = r$. Dividing the first two terms in Corollary~\ref{C:Kummer} by the images of $T_{{\operatorname{tors}}}$ yields \[ 0 \to \frac{\widetilde{T}}{\ell^n \widetilde{T}} \to \frac{T_{\ell^n}}{I_n} \to {\operatorname{H}}^{2p+1}({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p))[\ell^n] \to 0, \] where $I_n$ is the image of $T_{{\operatorname{tors}}}$ in $T_{\ell^n}$. This implies the second inequality in \[ \# T_{\ell^n}^G \le \# I_n^G \cdot \# \left( \frac{T_{\ell^n}}{I_n} \right)^G \le \# I_n^G \cdot \# \left(\frac{\widetilde{T}}{\ell^n \widetilde{T}}\right)^G \cdot \# \left({\operatorname{H}}^{2p+1}({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p))[\ell^n] \right)^G. \] Since ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}_\ell(p))$ is a finitely generated $\mathbb{Z}_\ell$-module for each $i$, the first and third factors on the right are $O(1)$. On the other hand, Lemma~\ref{L:cohomology}\eqref{I:growth rate} yields $\#(\widetilde{T}/\ell^n \widetilde{T})^G = O(\ell^{rn})$. Multiplying shows that $\# T_{\ell^n}^G = O(\ell^{rn})$. \item The statement follows by combining the previous items. \qedhere \end{enumerate} \end{proof} \section{Cycles under field extension}\label{S:field extension} In this section, assume Setup~\ref{Setup}. \begin{proposition} \label{P:Num X} \hfill \begin{enumerate}[\upshape (a)] \item\label{I:Num X injects} For any extension $L$ of $k$, the natural map $\Num^p X \to \Num^p X_L$ is injective. \item\label{I:Num X has finite index} The image of $\Num^p X \to \Num^p {\overline{X}}$ is a finite-index subgroup of $(\Num^p {\overline{X}})^G$. \item\label{I:Num X_sep vs Num Xbar} If $\kappa>0$, the index of $\Num^p {X^{\operatorname{sep}}}$ in $\Num^p {\overline{X}}$ is finite and equal to a power of $\kappa$. \end{enumerate} The same three statements hold for $\NS$ instead of $\Num^p$. \end{proposition} \begin{proof}\hfill \begin{enumerate}[\upshape (a)] \item If $z \in \mathcal{Z}^p(X)$ has intersection number $0$ with all $p$-cycles on $X_L$, then in particular it has intersection number $0$ with all $p$-cycles on $X$. \item Suppose that $[z] \in (\Num^p {\overline{X}})^G$, where $z \in \mathcal{Z}^p({\overline{X}})$. Then $z$ comes from some $z_L \in \mathcal{Z}^p(X_L)$ for some finite extension $L$ of $k$. Let $n \colonequals [L:k]$. Then $n[z] = \tr_{L/k} [z]$ comes from $\tr_{L/k} z_L \in \mathcal{Z}^p(X)$. Hence the cokernel of $\Num^p X \to (\Num^p {\overline{X}})^G$ is torsion, but it is also finitely generated, so it is finite. \item We may assume that $k={k^{\operatorname{sep}}}$. 
Then $G=\{1\}$, so~\eqref{I:Num X has finite index} implies that $\Num^p {X^{\operatorname{sep}}}$ is of finite index in $\Num^p {\overline{X}}$. Moreover, in the proof of~\eqref{I:Num X has finite index}, $[L:k]$ is always a power of $\kappa$, so the index is a power of $\kappa$. \end{enumerate} Statement \eqref{I:Num X injects} for $\NS$ follows from the fact that the formation of $\PIC^0_{X/k}$ respects field extension~\cite{Kleiman2005}*{Proposition~9.5.3}. The proofs of \eqref{I:Num X has finite index} and~\eqref{I:Num X_sep vs Num Xbar} for $\NS$ are the same as for $\Num^p$. \end{proof} \begin{proposition} \label{P:NS over finite field} If $k$ is finite, then the natural homomorphisms $\Pic X \to (\Pic {X^{\operatorname{sep}}})^G$ and $\NS X \to (\NS {X^{\operatorname{sep}}})^G$ are isomorphisms. \end{proposition} \begin{proof} That $\Pic X \to (\Pic {X^{\operatorname{sep}}})^G$ is an isomorphism follows from the Hochschild--Serre spectral sequence for \'etale cohomology and the vanishing of the Brauer group of $k$. Lang's theorem~\cite{Lang1956} implies ${\operatorname{H}}^1(k,\Pic^0 {X^{\operatorname{sep}}})=0$, so taking Galois cohomology of \[ 0 \to \Pic^0 {X^{\operatorname{sep}}} \to \Pic {X^{\operatorname{sep}}} \to \NS {X^{\operatorname{sep}}} \to 0 \] shows that the homomorphism $\Pic X = (\Pic {X^{\operatorname{sep}}})^G \to (\NS {X^{\operatorname{sep}}})^G$ is surjective. On the other hand, its image is $\NS X$. \end{proof} \section{Hypotheses and conjectures} \label{S:hypotheses and conjectures} Our computability results rely on the ability to compute \'etale cohomology with finite coefficients. Some of the results are conditional also on the Tate conjecture and related conjectures. We now formulate these hypotheses precisely, so that they can be referred to in our main theorems. \subsection{Explicit representation of objects} \label{S:explicit} To specify an ideal in a polynomial ring over $\mathbb{Z}$ in finitely many indeterminates, we give a finite list of generators. To specify a finitely generated $\mathbb{Z}$-algebra $A$, we give an ideal $I$ in a polynomial ring $R$ as above such that $A$ is isomorphic to $R/I$. To specify a finitely generated field $k$, we give a finitely generated $\mathbb{Z}$-algebra $A$ that is a domain such that $k$ is isomorphic to $\Frac A$. To specify a continuous $G_k$-action on a finitely generated abelian group $A$, we give a finite Galois extension $k'$ of $k$ together with an action of $\Gal(k'/k)$ on $A$ such that there exists a $k$-embedding $k' \hookrightarrow {k^{\operatorname{sep}}}$ such that the original $G_k$-action is the composition $G_k \twoheadrightarrow \Gal(k'/k) \to \Aut A$. To specify a $G_k$-action on finitely many finitely generated abelian groups, we use the same $k'$ for all of them. To specify a projective variety $X$, we give its homogeneous ideal for a particular embedding of $X$ in some projective space. To specify a codimension~$p$ cycle on $X$, we give an explicit integer combination of codimension~$p$ integral subvarieties of $X$. 
\begin{definition} \label{D:map on cycles} Given $k$, $X$, and $p$ as in Setup~\ref{Setup}, to compute a $G_k$-module homomorphism $f$ from $\mathcal{Z}^p({X^{\operatorname{sep}}})$ to an (abstract) finitely generated $G_k$-module $A$ means to compute \begin{itemize} \item a finite Galois extension $k'$ of $k$, \item an explicit finitely generated $\Gal(k'/k)$-module $A'$, and \item an algorithm that takes as input a finite separable extension $L$ of $k'$ and an element of $\mathcal{Z}^p(X_L)$ and returns an element of $A'$, \end{itemize} such that there exists a $k$-embedding $k' \hookrightarrow {k^{\operatorname{sep}}}$ and an isomorphism $A' \overset{\sim}{\rightarrow} A$ such that the composition $\mathcal{Z}^p(X_L) \to A' \overset{\sim}{\rightarrow} A$ factors as $\mathcal{Z}^p(X_L) \to \mathcal{Z}^p({X^{\operatorname{sep}}}) \stackrel{f} \to A$ for some (or equivalently, every) $k'$-embedding $L \hookrightarrow {k^{\operatorname{sep}}}$. \end{definition} \begin{remark} \label{R:Z^tau} A similar definition can be made for $G_k$-module homomorphisms defined only on a $G_k$-submodule of $\mathcal{Z}^p({X^{\operatorname{sep}}})$. \end{remark} \begin{remark} \label{R:k in C} If $k$ is a finitely generated field of characteristic~$0$, we can explicitly identify finite extensions of $k$ with subfields of $\mathbb{C}$ consisting of computable numbers as follows. (To say that $z \in \mathbb{C}$ is computable means that there is an algorithm that given $n \in \mathbb{Z}_{\ge 1}$ returns an element $\alpha \in \mathbb{Q}(i)$ such that $|z-\alpha| < 1/n$.) Let $t_1,\ldots,t_n$ be a transcendence basis for $k$ over $\mathbb{Q}$. Embed $\mathbb{Q}(t_1,\ldots,t_n)$ in $\mathbb{C}$ by mapping $t_j$ to $\exp(2^{1/j})$; these are algebraically independent over $\mathbb{Q}$ by the Lindemann--Weierstrass theorem. As needed, embed finite extensions of $\mathbb{Q}(t_1,\ldots,t_n)$ (starting with $k$) into $\mathbb{C}$ by writing down the minimal polynomial of each new field generator over the subfield generated so far, together with an approximation to an appropriate root in $\mathbb{C}$ good enough to distinguish it from the other roots. \end{remark} Remark~\ref{R:k in C} will be useful in relating \'etale cohomology over ${\overline{k}}$ to singular cohomology over~$\mathbb{C}$. \subsection{Computability of \'etale cohomology} \begin{hypothesis}[Cohomology is computable] \label{H:compute etale 2} There is an algorithm that takes as input $(k,X,\ell)$ as in Setup~\ref{Setup} and $i,n \in \mathbb{Z}_{\ge 0}$, and returns a finite $G_k$-module isomorphic to ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$. \end{hypothesis} \begin{remark} \label{R:compute Tate twist} Hypothesis \ref{H:compute etale 2} implies also that we can compute the Tate twist ${\operatorname{H}}^i({X^{\operatorname{sep}}},(\mathbb{Z}/\ell^n\mathbb{Z})(p))\simeq {\operatorname{H}}^{i}({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})(p)$ for any $p \in \mathbb{Z}$. \end{remark} It seems reasonable to expect that Hypothesis~\ref{H:compute etale 2} is in reach of existing methods, although it has not been fully verified yet. In fact, we will prove it for $k$ of characteristic~$0$ (Theorem~\ref{T:compute etale in char 0}). In arbitrary characteristic, we show only that we can ``approximate ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$ from below'' (Proposition~\ref{P:lower bound on Hyp groups}). Following a suggestion of Lenny Taelman, we use \'etale \v{C}ech\ cocycles. 
By \cite{Artin1971}*{Corollary 4.2}, every element of ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$ can be represented by a \v{C}ech\ cocycle for some \'etale cover. Any \'etale cover $\mathcal{U} = (U_j \to {X^{\operatorname{sep}}})_{j\in J}$ may be refined by one for which $J$ is finite and the morphisms $U_j \to {X^{\operatorname{sep}}}$ are of finite presentation; from now on, we assume that all \'etale covers satisfy these finiteness conditions. Then we can enumerate all \'etale \v{C}ech\ cochains. Fix a projective embedding of $X$. Choose an \'etale \v{C}ech\ cocycle representing the class of $\mathscr{O}_{{X^{\operatorname{sep}}}}(1)$ in ${\operatorname{H}}^1({X^{\operatorname{sep}}},\mathbb{G}_m)$. Using the Kummer sequence \[ 0 \to \mu_{\ell^n} \to \mathbb{G}_m \to \mathbb{G}_m \to 0 \] compute its coboundary: this is a cocycle representing the class of a hyperplane section in ${\operatorname{H}}^2({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$ (we ignore the Tate twist for now). Compute its $d$-fold cup product in ${\operatorname{H}}^{2d}({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z}) \simeq \mathbb{Z}/\ell^n\mathbb{Z}$; this represents $D$ times the class of a point, where $D$ is the degree of $X$. If $\ell \nmid D$, we can multiply by the inverse of $(D \bmod \ell^n)$ to obtain the class of a point. In general, let $\ell^m$ be the highest power of $\ell$ dividing $D$; repeat the construction above to obtain a cocycle $\eta_D$ representing $D$ times the class of a point in ${\operatorname{H}}^{2d}({X^{\operatorname{sep}}},\mathbb{Z}/\ell^{m+n}\mathbb{Z}) \simeq \mathbb{Z}/\ell^{m+n}\mathbb{Z}$. Search for another cocycle $\eta_1$ in the same group such that $D \eta_1 - \eta_D$ is the coboundary of another cochain on some refinement. Eventually $\eta_1$ will be found, and reducing its values modulo $\ell^n$ yields a cocycle representing the class of a point in ${\operatorname{H}}^{2d}({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$. \begin{lemma} \label{L:etale cocycle is 0} There is an algorithm that takes as input $(k,X,\ell)$ as in Setup~\ref{Setup} and $i,n \in \mathbb{Z}_{\ge 0}$ and two \'etale \v{C}ech\ cocycles representing elements of ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$, and decides whether their classes are equal. \end{lemma} \begin{proof} We can subtract the cocycles, so it suffices to test whether a cocycle $\eta$ represents $0$. By day, search for a cochain on some refinement whose coboundary is $\eta$. By night, search for a cocycle $\eta'$ representing a class in ${\operatorname{H}}^{2d-i}({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$, an integer $j \in \{1,2,\ldots,\ell^n-1\}$, and a cochain whose coboundary differs from $\eta \cup \eta'$ by $j$ times the class of a point in ${\operatorname{H}}^{2d}({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$ (see \cite{Liu2002}*{p.~194, Exercise~2.17} for an explicit formula for the cup product). The search by day terminates if the class of $\eta$ is $0$, and the search by night terminates if the class of $\eta$ is nonzero, by Poincar\'e duality \cite{SGA4.5}*{p.~71, Th\'eor\`eme~3.1}.
\end{proof} \begin{proposition} \label{P:lower bound on Hyp groups} There is an algorithm that takes as input $(k,X,\ell)$ as in Setup~\ref{Setup} and $i,n \in \mathbb{Z}_{\ge 0}$ such that, when left running forever, it prints out an infinite sequence $\Lambda_0 \subset \Lambda_1 \subset \ldots$ of finite $G_k$-modules that stabilizes at a $G_k$-module isomorphic to ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$. \end{proposition} \begin{proof} By enumerating \v{C}ech cocycles, we represent more and more classes inside ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$. At any moment, we may construct the $G_k$-module structure of the finite subgroup generated by the classes found so far and their Galois conjugates, by using Lemma~\ref{L:etale cocycle is 0} to test which $\mathbb{Z}/\ell^n\mathbb{Z}$-combinations of them are $0$. Eventually this $G_k$-module is the whole of ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$ (even if we do not yet have a way to detect when this has happened). \end{proof} \begin{proposition} \label{P:singular cohomology} There is an algorithm that takes as input $(k,X,\ell)$ as in Setup~\ref{Setup} and $i,n \in \mathbb{Z}_{\ge 0}$, where $k$ is of characteristic~$0$, and computes a finite abelian group isomorphic to the singular cohomology group ${\operatorname{H}}^i(X(\mathbb{C}),\mathbb{Z}/\ell^n\mathbb{Z})$ for some embedding $k \hookrightarrow \mathbb{C}$ as in Remark~\ref{R:k in C}. Similarly, one can compute a finitely generated abelian group isomorphic to ${\operatorname{H}}^i(X(\mathbb{C}),\mathbb{Z})$. \end{proposition} \begin{proof} One approach is to embed $X$ in some $\mathbb{P}^n_k$ and compose $X(\mathbb{C}) \to \mathbb{P}^n(\mathbb{C})$ with the Mannoury embedding~\cite{Mannoury1900} \begin{align*} \mathbb{P}^n(\mathbb{C}) &\hookrightarrow \mathbb{R}^{(n+1)^2} \\ (z_0 : \cdots : z_n) &\mapsto \left( \frac{z_i \bar{z}_j}{\sum_k z_k \bar{z}_k} : 0 \le i,j \le n \right) \end{align*} to identify $X(\mathbb{C})$ with a semialgebraic subset of Euclidean space, and then to apply \cite{Basu-Pollack-Roy2006}*{Remark~11.19(b) and the results it refers to} to compute a finite triangulation of $X(\mathbb{C})$, which yields the cohomology groups with coefficients in $\mathbb{Z}$ or $\mathbb{Z}/\ell^n\mathbb{Z}$. For an alternative approach, see~\cite{Simpson2008}*{Section~2.5}. \end{proof} \begin{theorem} \label{T:compute etale in char 0} Hypothesis~\ref{H:compute etale 2} restricted to characteristic~$0$ is true. \end{theorem} \begin{proof} Identify $k$ with a subfield of $\mathbb{C}$ as in Remark~\ref{R:k in C}. By the standard comparison theorem, the \'etale cohomology group ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$ is isomorphic to the singular cohomology group ${\operatorname{H}}^i(X(\mathbb{C}),\mathbb{Z}/\ell^n\mathbb{Z})$. Use Proposition~\ref{P:singular cohomology} to compute the size of the latter. Run the algorithm in Proposition~\ref{P:lower bound on Hyp groups} and stop once $\#\Lambda_j$ equals this integer. Then $\Lambda_j \simeq {\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$. \end{proof} \begin{corollary} \label{C:compute etale by lifting} Hypothesis~\ref{H:compute etale 2} restricted to varieties in positive characteristic that lift to characteristic~$0$ is true. 
\end{corollary} \begin{proof} If $X$ lifts to a nice variety $\mathcal{X}$ in characteristic $0$, then we can search for a suitable $\mathcal{X}$ until we find one, and then compute the size of ${\operatorname{H}}^i(\mathcal{X}^{\textup{sep}},\mathbb{Z}/\ell^n\mathbb{Z})$, which is isomorphic to the desired group ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$. Then run the algorithm in Proposition~\ref{P:lower bound on Hyp groups} as before. \end{proof} \begin{remark} \label{R:Taelman} Our approach to Theorem~\ref{T:compute etale in char 0} above was partially inspired by an alternative approach communicated to us by Lenny Taelman. His idea, in place of Proposition~\ref{P:lower bound on Hyp groups}, was to enumerate \'etale \v{C}ech\ cocycles and compute their images under a comparison isomorphism \[ {\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z}) \to {\operatorname{H}}^i(X(\mathbb{C}),\mathbb{Z}/\ell^n\mathbb{Z}) \] explicitly (this assumes that given an \'etale morphism $U \to {X^{\operatorname{sep}}}$ one can compute compatible triangulations of $U(\mathbb{C})$ and $X(\mathbb{C})$). Eventually a set of cocycles mapping bijectively onto ${\operatorname{H}}^i(X(\mathbb{C}),\mathbb{Z}/\ell^n\mathbb{Z})$ will be found. The Galois action could then be computed by searching for coboundaries representing the difference of each Galois conjugate of each cocycle with some other cocycle in the set. \end{remark} \subsection{The Tate conjecture} See~\cite{Tate1994} for a survey of the relationships between the following two conjectures and many others. \begin{conjectureT}[Tate conjecture] \label{C:Tate} Assume Setup~\ref{Setup}. The cycle class homomorphism \[ \mathcal{Z}^p({X^{\operatorname{sep}}}) \tensor \mathbb{Q}_\ell \to V^{\Tate} \] is surjective. \end{conjectureT} \begin{conjectureE}[Numerical equivalence equals homological equivalence] \label{C:Num=Hom} Assume Setup~\ref{Setup}. An element of $\mathcal{Z}^p({X^{\operatorname{sep}}})$ is numerically equivalent to $0$ if and only if its class in $V$ is $0$. \end{conjectureE} \begin{remark} \label{R:E^1} Conjecture $\operatorname{E}^1(X,\ell)$ holds (see \cite{Tate1994}*{p.~78}). \end{remark} Given $(k,X,p,\ell)$ as in Setup~\ref{Setup}, with $k$ finite, let $V_\mu$ be the largest $G$-invariant subspace of $V$ on which all eigenvalues of the Frobenius are roots of unity. We have $V^{\Tate} \le V_\mu$. \begin{proposition} \label{P:consequences} Fix $X$, $p$, and $\ell$, and assume Conjecture $\operatorname{E}^p(X,\ell)$. Then the following integers are equal: \begin{enumerate}[\upshape (a)] \item\label{I:Z-rank of Num} the $\mathbb{Z}$-rank of the $G_k$-module $\Num^p {X^{\operatorname{sep}}}$, \item\label{I:Z-rank of im Z} the $\mathbb{Z}$-rank of the image of $\mathcal{Z}^p({X^{\operatorname{sep}}})$ in $V$, and \item\label{I:Q_l-dim of im Z} the $\mathbb{Q}_\ell$-dimension of the image of $\mathcal{Z}^p({X^{\operatorname{sep}}}) \tensor \mathbb{Q}_\ell$ in $V$. 
\end{enumerate} The integer in \eqref{I:Q_l-dim of im Z} is less than or equal to the following equal integers, \begin{enumerate}[\upshape (a)] \item[\upshape (d)]\label{I:Q_l-dim of V^Tate} the $\mathbb{Q}_\ell$-dimension of $V^{\Tate}$ and \item[\upshape (e)]\label{I:Z_l-rank of M} the $\mathbb{Z}_\ell$-rank of the $G_k$-module $M$ of Section~\ref{S:stuff}, \end{enumerate} which, if $k$ is finite, are less than or equal to \begin{enumerate}[\upshape (a)] \item[\upshape (f)]\label{I:Q_l-dim of V_mu} the $\mathbb{Q}_\ell$-dimension of $V_\mu$. \end{enumerate} If, moreover, $\operatorname{T}^p(X,\ell)$ holds, then all the integers (including~(f) if $k$ is finite) are equal. Conversely, if (c) equals~(d), then $\operatorname{T}^p(X,\ell)$ holds. \end{proposition} \begin{proof} The only nontrivial statements are \begin{itemize} \item the equality of \eqref{I:Z-rank of im Z} and~\eqref{I:Q_l-dim of im Z}, which is~\cite{Tate1994}*{Lemma~2.5}, and \item the fact that $\operatorname{T}^p(X,\ell)$ and $\operatorname{E}^p(X,\ell)$ for $k$ finite together imply the equality of (d) and~(f); this follows from \cite{Tate1994}*{Theorem~2.9, (b)$\Rightarrow$(c)}. \end{itemize} \end{proof} \section{Algorithms}\label{S:algorithms} \subsection{Computing rank and torsion of \'etale cohomology}\label{betti} \begin{proposition}\label{P:zeta} There is an algorithm that takes as input a nice variety $X$ over $\mathbb{F}_q$, and returns its zeta function \[ Z_X(T) \colonequals \exp \left( \sum_{n=1}^\infty \frac{\#X(\mathbb{F}_{q^n})}{n} T^n \right) \in \mathbb{Q}(T). \] \end{proposition} \begin{proof} {}From \cite{Katz2001}*{Corollary of Theorem~3}, we obtain an upper bound $B$ on the sum of the $\ell$-adic Betti numbers $b_i(X)$. Then $Z_X(T)$ is a rational function of degree at most $B$. Compute $\#X(\mathbb{F}_{q^n})$ for $n \in \{1,2,\ldots,2B\}$; this determines the mod $T^{2B+1}$ Taylor expansion of $Z_X(T)$, which is enough to determine $Z_X(T)$. \end{proof} \begin{proposition}\label{P:betti} There is an algorithm that takes as input a finitely generated field $k$ and a nice variety $X$ over $k$, and returns $b_0(X), \dots, b_{2\dim X}(X)$. \end{proposition} \begin{proof} First assume that $k=\mathbb{F}_q$. Using Proposition~\ref{P:zeta}, we compute the zeta function $Z_X(T)$. For each $i$, the Betti number $b_i(X)$ equals the number of complex poles of $Z_X(T)^{(-1)^i}$ with absolute value $q^{-i/2}$, counted with multiplicity. To count them, factor the numerator and denominator of $Z_X(T)$ into irreducible polynomials over $\mathbb{Q}$: by Deligne's proof of the Weil conjectures, all roots of any one irreducible factor $f$ share a common absolute value $q^{-w/2}$ for some integer $w$, and $w$ is determined exactly by the identity $|f(0)/f_{\mathrm{lead}}| = q^{-w \deg(f)/2}$, where $f_{\mathrm{lead}}$ denotes the leading coefficient of $f$. In the general case, we spread out $X$ to a smooth projective scheme $\mathcal{X}$ over a finitely presented $\mathbb{Z}$-algebra $R = \mathbb{Z}[x_1,\ldots,x_n]/(f_1,\ldots,f_m)$. Search for a finite field $\mathbb{F}$ and a point $a \in \mathbb{F}^n$ satisfying $f_1(a)=\cdots=f_m(a)=0$; eventually we will succeed; then $\mathbb{F}$ is an explicit $R$-algebra. Set $\mathcal{X}_\mathbb{F} = \mathcal{X} \times_R \mathbb{F}$. Standard specialization theorems (e.g., \cite{SGA4.5}*{V,~Th\'eor\`eme~3.1}) imply that $b_i(X)=b_i(\mathcal{X}_\mathbb{F})$ for all $i$, so we reduce to the case of the previous paragraph. \end{proof} The following statement and proof were suggested by Olivier Wittenberg. \begin{proposition} \label{P:torsion computable} Assume Hypothesis~\ref{H:compute etale 2}.
There is an algorithm that takes as input $(k,X,\ell)$ as in Setup~\ref{Setup} and an integer $i$ and returns a finite group that is isomorphic to ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}_\ell)_{\operatorname{tors}}$. \end{proposition} \begin{proof} For each $j$, let ${\operatorname{H}}^j \colonequals {\operatorname{H}}^j({X^{\operatorname{sep}}},\mathbb{Z}_\ell)$. For integers $j,n$ with $n \ge 0$, let $a_{j,n} \colonequals \# {\operatorname{H}}^j[\ell^n]$ and $b_j \colonequals b_j(X)=\dim_{\mathbb{Q}_\ell}({\operatorname{H}}^j \otimes_{\mathbb{Z}_\ell} \mathbb{Q}_{\ell})$. Since ${\operatorname{H}}^j_{\operatorname{tors}}$ is finite, $\# {\operatorname{H}}^j_{\operatorname{tors}}/\ell^n {\operatorname{H}}^j_{\operatorname{tors}} = \# {\operatorname{H}}^j_{\operatorname{tors}}[\ell^n] = a_{j,n}$, so $\#{\operatorname{H}}^j/\ell^n{\operatorname{H}}^j = \ell^{nb_j} \cdot a_{j,n}$. From Lemma \ref{L:Kummer} we find \begin{equation}\label{E:ajn} \# {\operatorname{H}}^j({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z}) = \#\left({\operatorname{H}}^j/\ell^n{\operatorname{H}}^j\right) \cdot \#\left( {\operatorname{H}}^{j+1}[\ell^n]\right) = \ell^{nb_j} \cdot a_{j,n} \cdot a_{j+1,n}. \end{equation} The left side is computable by Hypothesis~\ref{H:compute etale 2}, and $b_j$ is computable by Proposition~\ref{P:betti}. Since $a_{j,n}=1$ for $j<0$ and for $j>2\dim X$, for any given $n$, we can use \eqref{E:ajn} to compute $a_{j,n}$ for all $j$, by ascending or descending induction. Compute $$ 1=a_{i,0} \leq a_{i,1} \leq a_{i,2} \leq \dots \leq a_{i,N} \leq a_{i,N+1} $$ until $a_{i,N} = a_{i,N+1}$. Then ${\operatorname{H}}^i_{\operatorname{tors}}$ has exponent $\ell^N$ and ${\operatorname{H}}^i_{\operatorname{tors}}$ is isomorphic to $\Directsum_{n=1}^N (\mathbb{Z}/\ell^n\mathbb{Z})^{r_n}$ with $r_n$ such that $\ell^{r_n} a_{i,n-1}a_{i,n+1}= a_{i,n}^2$. \end{proof} \begin{remark} The proof of Proposition~\ref{P:torsion computable} did not require the full strength of Hypothesis~\ref{H:compute etale 2}: computability of the group ${\operatorname{H}}^j({X^{\operatorname{sep}}},\mathbb{Z}/\ell^n\mathbb{Z})$ for all $j<i$ or for all $j \ge i$ would have sufficed. \end{remark} \begin{remark} \label{R:lifts to char 0} If $k$ is of characteristic~$0$ (or $X$ lifts to characteristic~$0$), then combining Theorem~\ref{T:compute etale in char 0} (or Corollary~\ref{C:compute etale by lifting}) with Proposition~\ref{P:torsion computable} lets us compute the group ${\operatorname{H}}^i({X^{\operatorname{sep}}},\mathbb{Z}_\ell)_{\operatorname{tors}}$ unconditionally. \end{remark} \subsection{Computing \texorpdfstring{$\Num^p {X^{\operatorname{sep}}}$}{Num p Xsep}} Throughout this section, we assume Setup~\ref{Setup}. \begin{lemma} \label{L:computing intersection number} There is an algorithm that takes as input $k$, $p$, $X$, and cycles $z \in \mathcal{Z}^p(X)$ and $y \in \mathcal{Z}^{d-p}(X)$, and returns the intersection number $z.y$. \end{lemma} \begin{proof} First, if $z$ and $y$ are integral cycles intersecting properly, use Gr\"obner bases to compute the degree of their scheme-theoretic intersection. If $z$ and $y$ are arbitrary cycles whose supports intersect properly, use bilinearity to reduce to the previous sentence. In general, search for a rational equivalence between $z$ and another codimension~$p$ cycle $z'$ such that the supports of $z'$ and $y$ intersect properly; eventually $z'$ will be found; then apply the previous sentence to compute $z'.y$.
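For instance, in the simplest case of two affine plane curves meeting properly, the degree of the scheme-theoretic intersection can be computed by eliminating one variable. The following Python sketch (using SymPy) is an illustration only: the curves $f$ and $g$ are hypothetical toy inputs, and the resultant counts the intersection scheme correctly under the stated hypotheses (no common component, and a projection that loses no intersection points); the general case is handled by the Gr\"obner basis computation described above.
\begin{verbatim}
# Toy illustration of the proper-intersection case: eliminate y by a
# resultant; the roots of the resultant, with multiplicity, are the
# x-coordinates of the intersection points of the two curves.
from sympy import symbols, resultant, roots

x, y = symbols('x y')
f = y - x**2        # a parabola (hypothetical example)
g = y**2 - y        # the union of the lines y = 0 and y = 1

r = resultant(f, g, y)
print(r)                          # x**4 - x**2
print(sum(roots(r, x).values()))  # 4 = 2 (tangency at the origin) + 1 + 1
\end{verbatim}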
\end{proof} The following lemma describes a decision problem for which we do not have an algorithm that always terminates, but only a \emph{one-sided} test, i.e., an algorithm that halts if the answer is YES, but runs forever without reaching a conclusion if the answer is NO. \begin{lemma} \label{L:independent in Num} There is an algorithm that takes as input $k$, $p$, $X$, a finite extension $L$ of $k$, and a finite list of cycles $z_1,\ldots,z_s \in \mathcal{Z}^p(X_L)$, and halts if and only if the images of $z_1,\ldots,z_s$ in $\Num^p {\overline{X}}$ are $\mathbb{Z}$-independent. \end{lemma} \begin{proof} Enumerate $s$-tuples $(y_1,\ldots,y_s)$ of elements of $\mathcal{Z}^{d-p}(X_{L'})$ as $L'$ ranges over finite separable extensions of $L$. As each $s$-tuple is computed, compute also the intersection numbers $y_i.z_j \in \mathbb{Z}$ and halt if $\det(y_i.z_j) \ne 0$. \end{proof} \begin{remark} \label{R:independence in Num^p Xsep} In Lemma~\ref{L:independent in Num}, if $L$ is separable over $k$, then it would be the same to ask for independence in $\Num^p {X^{\operatorname{sep}}}$, by Proposition~\ref{P:Num X}\eqref{I:Num X injects}. \end{remark} \begin{corollary} \label{C:lower bounds for Num} There is an algorithm that takes as input $k$, $p$, and $X$, and that when left running forever, prints out an infinite sequence of nonnegative integers whose maximum equals $\rk \Num^p {X^{\operatorname{sep}}}$. \end{corollary} \begin{proof} Enumerate finite $s$-tuples $(z_1,\ldots,z_s)$ of elements of $\mathcal{Z}^p(X_L)$ for all $s \ge 0$ and all finite separable extensions $L$ of $k$, and run the algorithm of Lemma~\ref{L:independent in Num} (using Remark~\ref{R:independence in Num^p Xsep}) on all of them in parallel, devoting a fraction $2^{-i}$ of the algorithm's time to the $i^{{\operatorname{th}}}$ process. Each time one of the processes halts, print its value of $s$. \end{proof} \begin{theorem}[Computing $\Num^p {X^{\operatorname{sep}}}$] \label{T:Num} \hfill \begin{enumerate}[\upshape (a)] \item \label{I:rank of Num} Assume Hypothesis~\ref{H:compute etale 2}. Then there is an algorithm that takes as input $(k,X,p,\ell)$ as in Setup~\ref{Setup} such that, assuming $\operatorname{E}^p(X,\ell)$, \begin{itemize} \item the algorithm terminates if and only if $\operatorname{T}^p(X,\ell)$ holds, and \item if the algorithm terminates, it returns $\rk \Num^p {X^{\operatorname{sep}}}$. \end{itemize} \item\label{I:unconditional Num} There is an unconditional algorithm that takes $k$, $p$, $X$, and a nonnegative integer $\rho$ as input, and computes the following assuming that $\rho = \rk \Num^p {X^{\operatorname{sep}}}$: \begin{enumerate}[\upshape (i)] \item\label{I:Num N} a finitely generated torsion-free $G_k$-module $N$ having a $G_k$-equivariant injection $\Num^p {X^{\operatorname{sep}}} \hookrightarrow N$ with finite cokernel, \item\label{I:map to N} the composition $\mathcal{Z}^p({X^{\operatorname{sep}}}) \to \Num^p {X^{\operatorname{sep}}} \hookrightarrow N$ in the sense of Definition~\ref{D:map on cycles}, and \item the rank of $\Num^p X$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} \hfill \begin{enumerate}[\upshape (a)] \item Let $\ell'$ be as in Section~\ref{S:group-theoretic lemmas}. Use Hypothesis~\ref{H:compute etale 2} to compute $T_{\ell'}$. Replace $k$ by a finite Galois extension to assume that $G_k$ acts trivially on $T_{\ell'}$. Let $M$ and $r$ be as in Section~\ref{S:stuff}. 
Use the algorithm of Proposition~\ref{P:torsion computable} to compute an integer $t$ such that $\ell^t T_{\operatorname{tors}}=0$. By day, use Hypothesis~\ref{H:compute etale 2} to compute the groups $T_{\ell^n}$ for $n=t+1,t+2,\ldots$, and the upper bounds $\lfloor \log \#T_{\ell^n}^G / \log \ell^{n-t} \rfloor$ on $r$ given by Lemma~\ref{L:size of W_n^G}\eqref{I:rank min}. By night, compute lower bounds on $\rk \Num^p {X^{\operatorname{sep}}}$ as in Corollary~\ref{C:lower bounds for Num}. Stop if the bounds ever match, which happens if and only if equality holds in the inequality $\rk \Num^p {X^{\operatorname{sep}}} \le r$, which by Proposition~\ref{P:consequences} happens if and only if $\operatorname{T}^p(X,\ell)$ holds. In this case, we have computed $\rk \Num^p {X^{\operatorname{sep}}}$. \item \begin{enumerate}[\upshape (i)] \item Search for a finite Galois extension $k'$ of $k$, for $p$-cycles $y_1,\ldots,y_s$, and for codimension~$p$ cycles $z_1,\ldots,z_t$ over $k'$ until the intersection matrix $(y_i.z_j)$ has rank $\rho$. The assumption $\rho=\rk \Num^p {X^{\operatorname{sep}}}$ guarantees that such $k'$, $y_i$, $z_j$ will be found eventually. Let $Y$ be the free abelian group with basis equal to the set consisting of the $y_i$ and their Galois conjugates, so $Y$ is a $G_k$-module. The intersection pairing defines a homomorphism $\phi \colon \Num^p {X^{\operatorname{sep}}} \to \Hom_\mathbb{Z}(Y,\mathbb{Z})$ whose image has rank equal to $\rho=\rk \Num^p {X^{\operatorname{sep}}}$. Since $\Num^p {X^{\operatorname{sep}}}$ is torsion-free, $\phi$ is injective. Compute the saturation $N$ of the $\mathbb{Z}$-span of $\phi(z_1),\ldots,\phi(z_t)$ in $\Hom_\mathbb{Z}(Y,\mathbb{Z})$. Because of its rank, $N$ equals the saturation of $\phi(\Num^p {X^{\operatorname{sep}}})$. Thus $N$ is a finitely generated torsion-free $G_k$-module containing a finite-index $G_k$-submodule $\phi(\Num^p {X^{\operatorname{sep}}})$ isomorphic to $\Num^p {X^{\operatorname{sep}}}$. \item Given $z \in \mathcal{Z}^p(X_L)$ for some finite separable extension $L$ of $k'$, computing its intersection number with each basis element of $Y$ yields the image of $z$ in $N$. \item Because of Proposition~\ref{P:Num X}\eqref{I:Num X has finite index}, $\rk \Num^p X = \rk N^{G_k}$, which is computable.\qedhere \end{enumerate} \end{enumerate} \end{proof} \begin{remark} If we can bound the exponent of $T_{{\operatorname{tors}}}={\operatorname{H}}^{2p}({X^{\operatorname{sep}}},\mathbb{Z}_\ell)_{\operatorname{tors}}$ without using Proposition~\ref{P:torsion computable}, then Theorem~\ref{T:Num}\eqref{I:rank of Num} requires Hypothesis~\ref{H:compute etale 2} only for $i=2p$. In particular, this applies if $\Char k=0$ or if $\Char k > 0$ and $X$ lifts to characteristic~$0$, by Remark~\ref{R:lifts to char 0}. Actually, if $\Char k=0$, we do not need Hypothesis~\ref{H:compute etale 2} at all, because Theorem~\ref{T:compute etale in char 0} says that it is true! \end{remark} \begin{remark} \label{R:Num Xbar} The analogue of Theorem~\ref{T:Num} with ${X^{\operatorname{sep}}}$ replaced by ${\overline{X}}$ also holds. (By Proposition~\ref{P:Num X}\eqref{I:Num X_sep vs Num Xbar}, $\Num^p {X^{\operatorname{sep}}}$ is of finite index in $\Num^p {\overline{X}}$, so in the proof of Theorem~\ref{T:Num}\eqref{I:unconditional Num}(i), the homomorphism $\phi$ extends to a $G_k$-equivariant injective homomorphism $\overline{\phi} \colon \Num^p {\overline{X}} \to \Hom_\mathbb{Z}(Y,\mathbb{Z})$.
Because of finite index, the image of $\overline{\phi}$ is contained in $N$. The cokernel of $\Num^p {\overline{X}} \to N$ is finite.) \end{remark} \begin{remark} \label{R:intersection pairing} For each $p \in \{0,1,\ldots,d\}$, let $N_p$ be the $N$ in Theorem~\ref{T:Num}\eqref{I:unconditional Num}(i), and define $Q_p \colonequals N_p \tensor \mathbb{Q}$. Then for any $p,q \in \mathbb{Z}_{\ge 0}$ with $p+q \le d$, we can compute a bilinear pairing $Q_p \times Q_q \to Q_{p+q}$ that corresponds to the intersection pairing: indeed, each $Q_p$ is spanned by classes of cycles, whose intersections in the Chow ring can be computed by an argument similar to that used to prove Lemma~\ref{L:computing intersection number}. \end{remark} \subsection{Checking algebraic equivalence of divisors} \begin{lemma} \label{L:algebraically equivalent to 0} There is an algorithm that takes as input $k$, $X$, a finite extension $L$ of $k$, and an element $z \in \mathcal{Z}^1(X_L)$, and halts if and only if $z$ is algebraically equivalent to $0$. \end{lemma} \begin{proof} Enumerate all possible descriptions of an algebraic family of divisors on $X_L$ with a pair of $L$-points of the base (it is easy to check when such a description is valid), and check for each whether the difference of the cycles corresponding to the two points equals $z$. \end{proof} \begin{lemma} \label{L:algebraically torsion p=1} There is an algorithm that takes as input $k$, $X$, a finite extension $L$ of $k$, and $z \in \mathcal{Z}^1(X_L)$, and decides whether $z$ lies in $\mathcal{Z}^1(X_L)^\tau$, i.e., whether the N\'eron--Severi class of $z$ is torsion, i.e., whether $z$ is numerically equivalent to $0$. \end{lemma} \begin{proof} By day, search for a positive integer $n$ and a family of divisors showing that $nz$ is algebraically equivalent to $0$. By night, run the algorithm of Lemma~\ref{L:independent in Num} for $s=1$, which halts if and only if the image of $z$ in $\Num^1 {\overline{X}}$ is nonzero, i.e., if and only if $z \notin \mathcal{Z}^1(X_L)^\tau$. One of these processes will halt. \end{proof} \subsection{Computing the N\'eron--Severi group} In this section, $k$ is an \emph{arbitrary} field. \begin{lemma} \label{L:Keeler} \hfill \begin{enumerate}[\upshape (a)] \item \label{I:Keeler a} Let $X$ be a nice $k$-variety. There exists a divisor $B \in \mathcal{Z}^1(X)$ such that for any ample divisor $D$, the class of $D+B$ is very ample. \item \label{I:Keeler b} There is an algorithm that takes as input a finitely generated field $k$ and a $k$-variety $X$ and computes a $B$ as in~\eqref{I:Keeler a}. \end{enumerate} \end{lemma} \begin{proof} Let $K$ be a canonical divisor on $X$ (this is computable if $k$ is finitely generated). Let $A$ be a very ample divisor on $X$ (e.g., embed $X$ in some projective space, and choose a hyperplane section). By~\cite{Keeler2008}*{Theorem~1.1(2)}, $B \colonequals K + (\dim X + 1) A$ has the required property. \end{proof} Given an effective Cartier divisor on $X$, we have an associated closed subscheme $Y \subseteq X$. Call a closed subscheme $Y \subseteq X$ a divisor if it arises this way. When we speak of the Hilbert polynomial of an effective Cartier divisor on a closed subscheme $X$ of $\mathbb{P}^n$, we mean the Hilbert polynomial of the associated closed subscheme of $X$.
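The ``by day / by night'' device used in the proof of Lemma~\ref{L:algebraically torsion p=1} above (and again in the proof of Theorem~\ref{T:Num}) is simply the interleaving of two semi-decision procedures, at least one of which is guaranteed to halt. The following Python sketch illustrates only this pattern; the two searches are hypothetical stand-ins for the actual day and night tasks.
\begin{verbatim}
# Interleave two possibly non-terminating searches ("by day, ...;
# by night, ...").  Each search is a generator yielding None while it
# is still working and a non-None answer once it succeeds.
def dovetail(day, night):
    while True:
        for label, proc in (("day", day), ("night", night)):
            result = next(proc)
            if result is not None:
                return label, result

# Hypothetical stand-ins: the day search succeeds at step 17; the
# night search runs forever (as happens for one of the two searches).
def day_search():
    n = 0
    while True:
        n += 1
        yield "success at step {}".format(n) if n == 17 else None

def night_search():
    while True:
        yield None

print(dovetail(day_search(), night_search()))
# -> ('day', 'success at step 17')
\end{verbatim}
Corollary~\ref{C:lower bounds for Num} uses the same idea with infinitely many processes, devoting a fraction $2^{-i}$ of the time to the $i$th one.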
\begin{lemma} \label{L:compute Hilbert} There is an algorithm that takes as input a finitely generated field $k$, a closed subscheme $X \subseteq \mathbb{P}^n_k$, and an effective divisor $D \subset X$, and computes the Hilbert polynomial of $D$. \end{lemma} \begin{proof} This follows already from~\cite{Hermann1926}*{Satz~2}, which can be applied repeatedly to construct a finite free resolution of $\mathscr{O}_D$; the Hilbert polynomial can then be read off from the graded free modules in such a resolution. \end{proof} Let $\Hilb X = \Union_P \Hilb_P X$ denote the Hilbert scheme of $X$, where $P$ ranges over polynomials in $\mathbb{Q}[t]$. \begin{lemma} \label{L:Gotzmann} There is an algorithm that takes as input a finitely generated field $k$, a closed subscheme $X \subseteq \mathbb{P}^n_k$, and a polynomial $P \in \mathbb{Q}[t]$, and computes the universal family $\mathcal{Y} \to \Hilb_P X$. \end{lemma} \begin{proof} This is a consequence of work of Gotzmann. Let $S = \Directsum_{d \ge 0} S_d \colonequals k[x_0,\ldots,x_n]$, so $\Proj S = \mathbb{P}^n_k$. Given $d,r \in \mathbb{Z}_{\ge 0}$, let $\Gr_r(S_d)$ be the Grassmannian parametrizing $r$-dimensional subspaces of the $k$-vector space $S_d$. Then \cite{Gotzmann1978}*{\S3} (see also \cite{Iarrobino-Kanev1999}*{Theorem~C.29 and Corollary~C.30}) specifies $d_0 \in \mathbb{Z}_{\ge 0}$ such that for $d \ge d_0$, one can compute $r \in \mathbb{Z}_{\ge 0}$ and a closed subscheme $W \subseteq \Gr_r(S_d)$ such that $W \simeq \Hilb_P \mathbb{P}^n$; under this isomorphism a subspace $V \subseteq S_d$ corresponds to the subscheme defined by the ideal $I_V$ generated by the polynomials in $V$. Moreover, $I_V$ and its saturation have the same $d^{{\operatorname{th}}}$ graded part (see~\cite{Iarrobino-Kanev1999}*{Corollary~C.18}). Let $f_1,\ldots,f_m$ be generators of a homogeneous ideal defining $X$. Choose $d \in \mathbb{Z}$ such that $d \ge d_0$ and $d \ge \deg f_i$ for all $i$. Let $g_1,\ldots,g_M$ be all the polynomials obtained by multiplying each $f_i$ by all monomials of degree $d-\deg f_i$. By the saturation statement above, $\Proj(S/I_V) \subseteq X$ if and only if $g_j \in V$ for all $j$. This lets us construct $\Hilb_P X$ as an explicit closed subscheme of $\Hilb_P \mathbb{P}^n$. Now $\Hilb_P X$ is known as an explicit subscheme of the Grassmannian, so we have explicit equations also for the universal family over it. \end{proof} \begin{lemma} \label{L:EffDiv} Let $X$ be a nice $k$-variety. There exists an open and closed subscheme $\EffDiv_X \subseteq \Hilb X$ such that for any field extension $L \supseteq k$ and any $s \in (\Hilb X)(L)$, the closed subscheme of $X_L$ corresponding to $s$ is a divisor on $X_L$ if and only if $s \in \EffDiv_X(L)$. \end{lemma} \begin{proof} See \cite{Bosch-Lutkebohmert-Raynaud1990}*{p.~215} for the definition of the functor $\EffDiv_X$ (denoted there by $\Div_{X/S}$ for $S=\Spec k$) and its representability by an open subscheme of $\Hilb X$. To see that it is also closed, we apply the valuative criterion for properness to the inclusion $\EffDiv_X \to \Hilb X$: if a $k$-scheme $S$ is the spectrum of a discrete valuation ring and $Z$ is a closed subscheme of $X \times S$ that is flat over $S$ and the generic fiber $Z_\eta$ of $Z \to S$ is a divisor, then $Z$ equals the closure of $Z_\eta$ in $X \times S$, which is an effective Weil divisor on $X \times S$ and hence a relative effective Cartier divisor since $X \times S$ is regular. \end{proof} The existence of the scheme $\EffDiv_X$ in Lemma~\ref{L:EffDiv} immediately implies Corollary~\ref{C:divisors under field extension} below.
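Before stating it, we pause to illustrate Lemma~\ref{L:compute Hilbert} in its simplest case. For a hypersurface $D \subseteq \mathbb{P}^n$ cut out by a single form $F$ of degree $e$, the resolution is just $0 \to S(-e) \to S \to S/(F) \to 0$, so the Hilbert polynomial of $D$ is $\binom{t+n}{n} - \binom{t-e+n}{n}$. The following Python sketch computes this; it is an illustration of this special case only, with hypothetical inputs, and the general case requires the resolutions indicated in the proof.
\begin{verbatim}
# Hilbert polynomial of a degree-e hypersurface in P^n, read off from
# the free resolution 0 -> S(-e) -> S -> S/(F) -> 0.
from sympy import symbols, expand, factorial

t = symbols('t')

def binom_poly(expr, n):
    """The binomial coefficient C(expr, n) as a polynomial in t."""
    p = 1
    for i in range(n):
        p *= (expr - i)
    return p / factorial(n)

def hilbert_poly_hypersurface(n, e):
    return expand(binom_poly(t + n, n) - binom_poly(t - e + n, n))

print(hilbert_poly_hypersurface(3, 2))  # a quadric in P^3: t**2 + 2*t + 1
\end{verbatim}
For a plane curve of degree $e$ this gives $et - e(e-3)/2$, which for $D$ smooth is the familiar $(\deg D)\,t + 1 - g$.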
\begin{corollary} \label{C:divisors under field extension} Let $X$ be a nice $k$-variety. Let $Y$ be a closed subscheme of $X$. Let $L$ be a field extension of $k$. Then $Y$ is a divisor on $X$ if and only if $Y_L$ is a divisor on $X_L$. \end{corollary} \begin{remark} Corollary~\ref{C:divisors under field extension} holds more generally for any finite-type $k$-scheme $X$, as follows from fpqc descent applied to the ideal sheaf of $Y_L \subseteq X_L$. \end{remark} \begin{lemma} \label{L:test for divisor} There is an algorithm that takes as input a finitely generated field $k$, a smooth $k$-variety $X$, and a closed subscheme $Y \subseteq X$, and decides whether $Y$ is a divisor in $X$. \end{lemma} \begin{proof} By \cite{EGA-IV.IV}*{Proposition~21.7.2} or \cite{Eisenbud1995}*{Theorem~11.8a.}, $Y$ is a divisor if and only if all associated primes of $Y$ are of codimension~$1$ in $X$. So choose an affine cover $(X_i)$ of $X$, compute the associated primes of the ideal of $Y \intersect X_i$ in $X_i$ for each $i$ (the first algorithm was given in~\cite{Hermann1926}), and check whether they all have codimension~$1$ in $X_i$ (a modern method for computing dimension uses that the Hilbert polynomial of an ideal equals the Hilbert polynomial of an associated initial ideal, which can be computed from a Gr\"obner basis). \end{proof} \begin{lemma} \label{L:connected components} Let $\pi \colon H \to P$ be a proper morphism of schemes of finite type over a field $k$. Suppose that the fibers of $\pi$ are connected (in particular, nonempty). Then $\pi$ induces a bijection on connected components. \end{lemma} \begin{proof} Let $H_1,\ldots,H_n$ be the connected components of $H$. Let $P_i\colonequals \pi(H_i)$, so $P_i$ is connected. Since $\pi$ is proper, the $P_i$ are closed. Since the fibers of $\pi$ are connected, the $P_i$ are disjoint. Since the fibers are nonempty, $\Union P_i = P$. Since the $P_i$ are finite in number, they are open too, so they are the connected components of $P$. \end{proof} Let $\pi \colon \EffDiv_X \to \PIC_{X/k}$ be the proper morphism sending a divisor to its class. If $\PIC_{X/k}^c$ is a finite union of connected components of $\PIC_{X/k}$ and $L$ is a field extension of $k$, let $\Pic^c X_L$ be the set of classes in $\Pic X_L$ such that the corresponding point of $\left(\PIC_{X/k}\right)_L$ lies in $\left(\PIC^c_{X/k}\right)_L$, and let $\NS^c X_L$ be the image of $\Pic^c X_L$ in $\NS X_L$. \begin{lemma} \label{L:no functors} \hfill \begin{enumerate}[\upshape (a)] \item \label{I:nice bijection} Let $X$ be a nice $k$-variety. Let $\PIC_{X/k}^c$ be any finite union of connected components of $\PIC_{X/k}$. Assume the following: \begin{equation} \label{E:Keeler condition} \begin{aligned} & \textup{For every field extension $L \supseteq k$, every divisor on $X_L$ with class in $\Pic^c X_L$} \\ & \textup{is linearly equivalent to an effective divisor.} \end{aligned} \end{equation} Let $H \colonequals \pi^{-1}(\PIC_{X/k}^c)$. Then $\pi \colon H(L) \to \Pic X_L$ induces a bijection \begin{equation} \label{E:connected components} \{\textup{connected components of $H_L$ that contain an $L$-point}\} \longrightarrow \NS^c X_L. \end{equation} \item \label{I:A exists} For any $\PIC^c_{X/k}$ as in~\eqref{I:nice bijection}, there is a divisor $F$ on $X$ such that the translate $F + \PIC_{X/k}^c$ satisfies \eqref{E:Keeler condition}. 
\item \label{I:D and e} There is an algorithm that takes as input a finitely generated field $k$, a nice $k$-variety $X$, a divisor $D \in \mathcal{Z}^1(X)$, and a positive integer $e$, and computes the following for $\PIC^c_{X/k}$ defined as the (possibly empty) union of components of $\PIC_{X/k}$ corresponding to classes of divisors $E$ over ${\overline{k}}$ such that $eE$ is numerically equivalent to $D$: \begin{enumerate}[\upshape (i)] \item a divisor $F$ as in~\eqref{I:A exists} for $\PIC^c_{X/k}$, \item the variety $H$ in~\eqref{I:nice bijection} for $F+\PIC^c_{X/k}$, \item the universal family $Y \to H$ of divisors corresponding to points of $H$, \item a finite separable extension $k'$ of $k$ and a finite subset $\mathcal{S} \subseteq \mathcal{Z}^1(X_{k'})$ such that there exists a $k$-homomorphism $k' \hookrightarrow {k^{\operatorname{sep}}}$ such that the composition $\mathcal{Z}^1(X_{k'}) \to \mathcal{Z}^1({X^{\operatorname{sep}}}) \to \NS {X^{\operatorname{sep}}}$ restricts to a bijection $\mathcal{S} \to \NS^c {X^{\operatorname{sep}}}$. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof}\hfill \begin{enumerate}[\upshape (a)] \item Taking $L={\overline{k}}$ in~\eqref{E:Keeler condition} shows that $H \stackrel{\pi}\to \PIC_{X/k}^c$ is surjective. The fibers of $\pi \colon H(L) \to \Pic^c X_L$ are linear systems, and are nonempty by~\eqref{E:Keeler condition}, so the reduced geometric fibers of $\pi \colon H \to \PIC^c_{X/k}$ are projective spaces. In particular, $\pi_L \colon H_L \to \left(\PIC^c_{X/k}\right)_L$ has connected fibers, so by Lemma~\ref{L:connected components}, it induces a bijection on connected components. Under this bijection, the connected components of $H_L$ that contain an $L$-point map to the connected components of $\left(\PIC^c_{X/k}\right)_L$ containing the class of a divisor over $L$. The set of the latter components is $\NS^c X_L$. \item Let $A$ be an ample divisor on $X$. For each of the finitely many geometric components $C$ of $\PIC^c_{X/k}$, choose a divisor $D_C$ on $X_{\overline{k}}$ whose class lies in $C$, and let $n_C \in \mathbb{Z}$ be such that $n_C A + D_C$ is ample. Let $n=\max n_C$, so $n A + D_C$ is ample for all $C$. Let $B$ be as in Lemma~\ref{L:Keeler}\eqref{I:Keeler a}. Let $F=B+nA$. If $L$ is a field extension of $k$ and $E$ is a divisor on $X_L$ with class in $\Pic^c X_L$, let $C$ be the geometric component containing the class of $E_{{\overline{L}}}$ (for some compatible choice of ${\overline{k}} \subseteq {\overline{L}}$); then $E$ is numerically equivalent to $D_C$, so $n A + E$ is ample too, so $F + E = B + (nA + E)$ is very ample by choice of $B$, so $F+E$ is linearly equivalent to an effective divisor. \item Fix a projective embedding of $X$, and let $A$ be a hyperplane section. \begin{enumerate}[\upshape (i)] \item Let $n \in \mathbb{Z}_{>0}$ be such that $nA+D$ is ample. (To compute such an $n$, try $n=1,2,\ldots$ until $|nA+D|$ determines a closed immersion.) Compute $B$ as in Lemma~\ref{L:Keeler}\eqref{I:Keeler b}. Let $F=B+nA$. Suppose that $L$ is an extension of $k$ and $E$ is a divisor on $X_L$ such that $eE$ is numerically equivalent to $D$. Then $e(nA+E)$ is numerically equivalent to $enA+D = (e-1)nA + (nA+D)$, which is a nonnegative combination of the ample divisors $A$ and $nA+D$, so $nA+E$ is ample. By choice of $B$, the divisor $F+E = B+(nA+E)$ is very ample and hence linearly equivalent to an effective divisor.
\item By the Riemann--Roch theorem, the Euler characteristic $\chi(F + sD + tA)$ is a polynomial $f(s,t)$ of total degree at most $d \colonequals \dim X$. For any $s \in \mathbb{Z}$, we can compute $t \in \mathbb{Z}$ such that $F+sD+tA$ is linearly equivalent to an effective divisor, whose Hilbert polynomial can be computed by Lemma~\ref{L:compute Hilbert}, so the polynomial $\chi(F + sD + tA)$ can be found by interpolation. Let $P(t) \colonequals f(1/e,t)$. Compute the universal family $\mathcal{Y} \to \Hilb_P X$ as in Lemma~\ref{L:Gotzmann}. Suppose that $E$ is such that $eE$ is numerically equivalent to $D$. Then the polynomial $\chi(F+sE+tA)$ equals $f(s/e,t)$ since its values match whenever $e|s$. In particular, $\chi(F+E+tA)=P(t)$; i.e., $P(t)$ is the Hilbert polynomial of an effective divisor linearly equivalent to $F+E$. Thus the subscheme $H \subseteq \EffDiv_X \subseteq \Hilb X$ is contained in $\Hilb_P X$, which is a union of connected components of $\Hilb X$. By definition, $H$ is a union of connected components of $\EffDiv_X$, which by Lemma~\ref{L:EffDiv} is a union of connected components of $\Hilb X$, so $H$ is a union of connected components of $\Hilb_P X$. To compute $H$, compute the (finitely many) connected components of $\Hilb_P X$; to check whether a component $C$ belongs to $H$, choose a point $h$ in $C$ over some extension of $k$, apply Lemma~\ref{L:test for divisor} to $\mathcal{Y}_h$ to test whether $\mathcal{Y}_h$ is a divisor, and if so, apply Lemma~\ref{L:algebraically torsion p=1} to $e \mathcal{Y}_h - D$ to check whether $e \mathcal{Y}_h$ is numerically equivalent to $D$. \item Compute $Y \to H$ as the part of $\mathcal{Y} \to \Hilb_P X$ above $H$. \item Compute the connected components of $H_{k^{\operatorname{sep}}}$, which really means computing a finite separable extension $k'$ and the connected components of $H_{k'}$ such that these components are geometrically connected. For each connected component $C$ of $H_{k'}$, use the algorithm of~\cite{Haran1988} to decide whether it has a $k^{\operatorname{sep}}$-point, and, if so, choose a $k'$-point $h$ of $C$, enlarging $k'$ if necessary, and take the fiber $Y_h$. Let $\mathcal{S}$ be the set of such divisors $Y_h$, one for each component $C$ with a ${k^{\operatorname{sep}}}$-point. By~\eqref{I:nice bijection}, the map $\mathcal{S} \to \NS {X^{\operatorname{sep}}}$ is a bijection onto $\NS^c {X^{\operatorname{sep}}}$. \qedhere \end{enumerate} \end{enumerate} \end{proof} \begin{theorem}[Computing $(\NS {X^{\operatorname{sep}}})_{{\operatorname{tors}}}$] \label{T:NS_tors} There is an algorithm that takes as input a finitely generated field $k$ and a nice $k$-variety $X$, and computes the $G_k$-homomorphism $\mathcal{Z}^1({X^{\operatorname{sep}}})^{\tau} \to (\NS {X^{\operatorname{sep}}})_{\operatorname{tors}}$ sending a divisor to its N\'eron--Severi class, in the sense of Definition~\ref{D:map on cycles} and Remark~\ref{R:Z^tau}. \end{theorem} \begin{proof} Apply Lemma~\ref{L:no functors}\eqref{I:D and e} with $D=0$ and $e=1$ to obtain a finite Galois extension $k'$ and a subset $\mathcal{D} \subseteq \mathcal{Z}^1(X_{k'})$ mapping bijectively to $(\NS {X^{\operatorname{sep}}})_{\operatorname{tors}}$. For each pair $D_1,D_2 \in \mathcal{D}$, run Lemma~\ref{L:algebraically equivalent to 0} in parallel on $D_1+D_2-D_3$ for all $D_3 \in \mathcal{D}$ to find the unique $D_3$ algebraically equivalent to $D_1+D_2$; this determines the group law on $\mathcal{D}$. Similarly compute the $G_k$-action. 
Similarly, given a finite separable extension $L$ of $k'$ and $z \in \mathcal{Z}^1(X_L)^\tau$, we can find the unique $D \in \mathcal{D}$ algebraically equivalent to $z$. \end{proof} If $\mathcal{D} \subseteq \mathcal{Z}^1(X_{k^{\operatorname{sep}}})$, let $(\NS {X^{\operatorname{sep}}})^\mathcal{D}$ be the saturation of the $G_k$-submodule generated by the image of $\mathcal{D}$ in $\NS {X^{\operatorname{sep}}}$, and let $\mathcal{Z}^1({X^{\operatorname{sep}}})^\mathcal{D}$ be the set of divisors in $\mathcal{Z}^1({X^{\operatorname{sep}}})$ whose algebraic equivalence class lies in $(\NS {X^{\operatorname{sep}}})^\mathcal{D}$. \begin{theorem}[Computing $\NS {X^{\operatorname{sep}}}$] \label{T:NS} \hfill \begin{enumerate}[\upshape (a)] \item \label{I:saturate D} Given a finitely generated field $k$, a nice $k$-variety $X$, a finite separable extension $L$ of $k$ in ${k^{\operatorname{sep}}}$ and a finite subset $\mathcal{D} \subseteq \mathcal{Z}^1(X_L)$, we can compute the $G_k$-homomorphism $\mathcal{Z}^1({X^{\operatorname{sep}}})^\mathcal{D} \to (\NS {X^{\operatorname{sep}}})^\mathcal{D}$ in the sense of Definition~\ref{D:map on cycles} and Remark~\ref{R:Z^tau}. \item \label{I:compute the whole NS} There is an algorithm that takes as input $k$ and $X$ as above and a nonnegative integer $\rho$, and computes the $G_k$-homomorphism $\mathcal{Z}^1({X^{\operatorname{sep}}}) \to (\NS {X^{\operatorname{sep}}})$ in the sense of Definition~\ref{D:map on cycles} and Remark~\ref{R:Z^tau} assuming that $\rho = \rk \NS {X^{\operatorname{sep}}}$. \end{enumerate} \end{theorem} \begin{remark} \label{R:NS} Assume Hypothesis~\ref{H:compute etale 2} and $\operatorname{T}^1(X,\ell)$. (Conjecture~$\operatorname{E}^1(X,\ell)$ is proved.) Then Theorem~\ref{T:Num}\eqref{I:rank of Num} lets us compute $\rk \NS {X^{\operatorname{sep}}}$, so Theorem~\ref{T:NS}\eqref{I:compute the whole NS} lets us compute $\NS {X^{\operatorname{sep}}}$. Recall also that Hypothesis~\ref{H:compute etale 2} is true when restricted to characteristic~$0$ (Theorem~\ref{T:compute etale in char 0}) or varieties that lift to characteristic~$0$ (Corollary~\ref{C:compute etale by lifting}). \end{remark} \begin{proof}[Proof of Theorem~\ref{T:NS}] \hfill \begin{enumerate}[\upshape (a)] \item Enlarge $L$ to assume that it is Galois over $k$, and replace $\mathcal{D}$ by the union of its $\Gal(L/k)$-conjugates. There exist $D_1,\ldots,D_t \in \mathcal{D}$ whose images in $(\Num^1 {X^{\operatorname{sep}}}) \tensor \mathbb{Q}$ form a $\mathbb{Q}$-basis for the image of the span of $\mathcal{D}$. Then there exist $1$-dimensional cycles $E_1,\ldots,E_t$ on $X_L$ such that $\det (D_i.E_j) \ne 0$ (the $E_i$ exist over a finite extension of $L$, but can be replaced by their traces down to $L$), and each $D \in \mathcal{D}$ has a positive integer multiple numerically equivalent to an element of the $\mathbb{Z}$-span of $D_1,\ldots,D_t$. Search for such $D_1,\ldots,D_t,E_1,\ldots,E_t$ and for numerical relations as above for each $D \in \mathcal{D}$ (use Lemma~\ref{L:algebraically torsion p=1} to verify relations). Let $e \colonequals \left| \det (D_i.E_j) \right|$. Let $\Delta$ be the span of the image of $\mathcal{D}$ in $\Num^1 {X^{\operatorname{sep}}}$. Let $\Delta'$ be the saturation of $\Delta$ in $\Num^1 {X^{\operatorname{sep}}}$. Then $(\Delta':\Delta)$ divides $e$.
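To make the saturation step concrete: the index of the lattice spanned by the rows of an integer matrix inside its saturation equals the product of the nonzero invariant factors of the matrix, so it can be computed by a Smith normal form computation. The following SymPy sketch is an illustration on a hypothetical matrix of intersection numbers $(D_i.E_j)$, not one computed from an actual variety.
\begin{verbatim}
# The row span of A inside its saturation has index equal to the
# product of the nonzero invariant factors of A (Smith normal form);
# for a square nonsingular A this product is |det A|.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 0, 0],           # hypothetical (D_i.E_j)
            [0, 3, 0],
            [0, 0, 1]])
S = smith_normal_form(A, domain=ZZ)
factors = [S[i, i] for i in range(min(S.shape)) if S[i, i] != 0]
index = 1
for d in factors:
    index *= abs(d)
print(factors, index)            # [1, 1, 6] 6, and |det A| = 6
\end{verbatim}
In the proof, the lattice spanned by the images of $D_1,\ldots,D_t$ under pairing with $E_1,\ldots,E_t$ has index $e$ in $\mathbb{Z}^t$, which is why $(\Delta':\Delta)$ divides $e$.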
For each coset of $e \Delta$ in $\Delta$, choose a representative divisor $D$ in the $\mathbb{Z}$-span of $\mathcal{D}$, and check whether the set $\mathcal{S}$ of Lemma~\ref{L:no functors}\eqref{I:D and e} is nonempty to decide whether the numerical equivalence class of $D$ is in $e \Delta'$; if so, choose a divisor in $\mathcal{S}$. The classes of these new divisors, together with those of $D_1,\ldots,D_t$, generate $\Delta'$. Moreover, we know the integer relations between all of these, so we can compute integer combinations $F_1,\ldots,F_t$ whose classes form a \emph{basis} for $\Delta'$. Then \[ (\NS {X^{\operatorname{sep}}})^\mathcal{D} \simeq (\mathbb{Z} F_1 \directsum \cdots \directsum \mathbb{Z} F_t) \directsum (\NS {X^{\operatorname{sep}}})_{{\operatorname{tors}}} \] as abelian groups, and $(\NS {X^{\operatorname{sep}}})_{{\operatorname{tors}}}$ can be computed by Theorem~\ref{T:NS_tors}. The homomorphism $\mathcal{Z}^1({X^{\operatorname{sep}}})^\mathcal{D} \to (\NS {X^{\operatorname{sep}}})^\mathcal{D}$ is computed as follows: given any $z \in \mathcal{Z}^1({X^{\operatorname{sep}}})^\mathcal{D}$ (defined over some finite separable extension $L'$ of $L$ in ${k^{\operatorname{sep}}}$), compute an integer combination $F$ of the $F_i$ such that $F.E_j = z.E_j$ for all $j$, and apply the homomorphism of Theorem~\ref{T:NS_tors} to compute the class of $z-F$ in $(\NS {X^{\operatorname{sep}}})_{{\operatorname{tors}}}$. Applying this to all conjugates of our generators of $(\NS {X^{\operatorname{sep}}})^\mathcal{D}$ lets us compute the $G_k$-action on our model of $(\NS {X^{\operatorname{sep}}})^\mathcal{D}$. \item Assuming that $\rho=\rk \NS {X^{\operatorname{sep}}}$, the algebraic equivalence classes of divisors $D_1,\ldots,D_\rho \in \mathcal{Z}^1({X^{\operatorname{sep}}})$ form a $\mathbb{Z}$-basis for a free subgroup of finite index in $\NS {X^{\operatorname{sep}}}$ if and only if there exist $1$-cycles $E_1,\ldots,E_\rho$ on ${X^{\operatorname{sep}}}$ such that $\det (D_i.E_j) \ne 0$. Search for a finite separable extension $L$ of $k$ in ${k^{\operatorname{sep}}}$, divisors $D_1,\ldots,D_\rho \in \mathcal{Z}^1(X_L)$, and $1$-cycles $E_1,\ldots,E_\rho$ on $X_L$ until such are found with $\det (D_i.E_j) \ne 0$. Then apply \eqref{I:saturate D} to $\mathcal{D} \colonequals \{D_1,\ldots,D_\rho\}$.\qedhere \end{enumerate} \end{proof} \begin{remark} \label{R:kbar} Theorems \ref{T:NS_tors} and~\ref{T:NS} hold for ${\overline{X}}$ instead of ${X^{\operatorname{sep}}}$: the same proofs work, except that we need an algorithm for deciding whether a variety has a ${\overline{k}}$-point; fortunately, this is even easier than deciding whether a variety has a ${k^{\operatorname{sep}}}$-point! \end{remark} \subsection{An alternative approach over finite fields} \label{S:alternative} When $k$ is a finite field, we can compute $\rk \Num^p {X^{\operatorname{sep}}}$ without assuming Hypothesis~\ref{H:compute etale 2}, but still assuming $\operatorname{T}^p(X,\ell)$ and $\operatorname{E}^p(X,\ell)$. The arguments in this section are mostly well-known. The following is a variant of Theorem~\ref{T:Num}\eqref{I:rank of Num}. Recall that for any $(k,X,p,\ell)$ as in Setup~\ref{Setup} with $k$ finite, we let $V_\mu$ denote the largest $G$-invariant subspace of $V={\operatorname{H}}^{2p}({X^{\operatorname{sep}}},\mathbb{Q}_\ell(p))$ on which all eigenvalues of the Frobenius are roots of unity. 
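Before stating the theorem, we illustrate the zeta-function computation that algorithm~A below relies on. The following Python sketch carries out the rational reconstruction from the proof of Proposition~\ref{P:zeta} on a toy input, the projective line over $\mathbb{F}_3$ (so $\#X(\mathbb{F}_{3^n}) = 3^n + 1$ and one may take $B=2$). For simplicity the numerator and denominator degrees are fixed by hand, whereas the actual algorithm tries all pairs of degrees with sum at most $B$ and verifies the congruence modulo $T^{2B+1}$; this is an illustration under those assumptions, not the implementation.
\begin{verbatim}
# Recover Z_X(T) from the counts c_n = #X(F_{q^n}), n = 1,...,2B.
# Step 1: Taylor coefficients z_n of Z_X(T), via the recursion
#   n*z_n = sum_{k=1}^{n} c_k z_{n-k}   (from T*(log Z_X)' = sum c_n T^n).
# Step 2: exact linear algebra finds D(T) with deg D <= m and D(0) = 1
#   such that D*Z_X is congruent to a polynomial N(T) of degree <= l.
from fractions import Fraction

def zeta_taylor(counts):
    z = [Fraction(1)]
    for n in range(1, len(counts) + 1):
        z.append(sum(Fraction(counts[k - 1]) * z[n - k]
                     for k in range(1, n + 1)) / n)
    return z

def solve(M, rhs):                   # Gaussian elimination over Q;
    M = [row[:] + [b] for row, b in zip(M, rhs)]   # assumes M invertible
    n = len(M)
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        M[i] = [a / M[i][i] for a in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0:
                M[r] = [a - M[r][i] * c for a, c in zip(M[r], M[i])]
    return [row[-1] for row in M]

def reconstruct(counts, l, m):
    z = zeta_taylor(counts)
    zc = lambda k: z[k] if k >= 0 else Fraction(0)
    # coefficients of T^(l+1), ..., T^(l+m) in D(T)*Z_X(T) must vanish:
    M = [[zc(l + i - j) for j in range(1, m + 1)] for i in range(1, m + 1)]
    d = [Fraction(1)] + solve(M, [-zc(l + i) for i in range(1, m + 1)])
    N = [sum(d[j] * zc(i - j) for j in range(m + 1)) for i in range(l + 1)]
    return N, d                      # coefficient lists of N(T) and D(T)

q, B = 3, 2
counts = [q**n + 1 for n in range(1, 2 * B + 1)]   # toy input: P^1 / F_3
print(reconstruct(counts, 0, 2))
# -> N = [1], D = [1, -4, 3] (as Fractions): Z_X = 1/((1-T)(1-3T))
\end{verbatim}
With $Z_X(T)$ in hand, $\dim V_\mu$ is read off from the poles as in the proof below.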
\begin{theorem} \label{T:Num variant} \hfill \begin{enumerate}[\upshape (a)] \item \label{I:dim V_mu} There is an algorithm A that takes as input $(k,X,p,\ell)$ as in Setup~\ref{Setup}, with $k$ a finite field $\mathbb{F}_q$, and returns $\dim V_\mu$. \item \label{I:algorithm B} There is an algorithm B that takes as input $(k,X,p,\ell)$ as in Setup~\ref{Setup}, with $k$ a finite field $\mathbb{F}_q$, such that, assuming $\operatorname{E}^p(X,\ell)$, \begin{itemize} \item algorithm B terminates on this input if and only if $\operatorname{T}^p(X,\ell)$ holds, and \item if algorithm B terminates, it returns $\rk \Num^p {X^{\operatorname{sep}}}$. \end{itemize} \end{enumerate} \end{theorem} \begin{proof} \hfill \begin{enumerate}[\upshape (a)] \item By Proposition~\ref{P:zeta} there is an algorithm that computes the zeta function $Z_X(T)\in \mathbb{Q}(T)$ of $X$. Then $\dim V_\mu$ is the number of complex poles $\lambda$ of $Z_X(T)$ such that $\lambda$ is a root of unity times $q^{-p}$, counted with multiplicity. \item Algorithm B first runs algorithm A to compute $v_\mu \colonequals \dim V_\mu$, and then runs the algorithm of Corollary~\ref{C:lower bounds for Num} until it prints $v_\mu$, in which case algorithm B returns $v_\mu$. If $\operatorname{T}^p(X,\ell)$ and $\operatorname{E}^p(X,\ell)$ hold, Proposition~\ref{P:consequences} implies that $v_\mu$ equals $\rk \Num^p {X^{\operatorname{sep}}}$, and the algorithm of Corollary~\ref{C:lower bounds for Num} eventually prints the latter, so algorithm B terminates with the correct output. Assume $\operatorname{E}^p(X,\ell)$. Proposition~\ref{P:consequences} implies that $\rk \Num^p {X^{\operatorname{sep}}} \le v_\mu$ with equality if and only if $\operatorname{T}^p(X,\ell)$ holds. So if algorithm B terminates, then $\operatorname{T}^p(X,\ell)$ holds. \qedhere \end{enumerate} \end{proof} \begin{corollary} \label{C:NS over finite field under Tate} There is an algorithm to compute $\NS {X^{\operatorname{sep}}}$ (in the same sense as Theorem~\ref{T:NS}\eqref{I:compute the whole NS}) and its subgroup $\NS X$ for any nice variety $X$ over a finite field such that $\operatorname{T}^1(X,\ell)$ holds for some $\ell$. \end{corollary} \begin{proof} Apply Theorem~\ref{T:Num variant}\eqref{I:algorithm B}, using that $\operatorname{E}^p(X,\ell)$ holds for $p=1$, to obtain $\rk \NS {X^{\operatorname{sep}}}$. Then Theorem~\ref{T:NS}\eqref{I:compute the whole NS} lets us compute the Galois module $\NS {X^{\operatorname{sep}}}$. By Proposition~\ref{P:NS over finite field}, computing its $G_k$-invariant subgroup yields $\NS X$. \end{proof} \subsection{K3 surfaces} \label{S:K3} We now apply our results to K3 surfaces, to improve upon the results of \cite{Charles-preprint} and \cite{Hassett-Kresch-Tschinkel-preprint} mentioned in Section~\ref{S:previous approaches}. \begin{theorem} \label{T:unconditional NS for K3} There is an unconditional algorithm to compute the $G_k$-module $\NS {X^{\operatorname{sep}}}$ for any K3 surface $X$ over a finitely generated field $k$ of characteristic not~$2$. We can also compute the group $(\NS {X^{\operatorname{sep}}})^{G_k}$, in which $\NS X$ has finite index. If $k$ is finite, we can compute $\NS X$ itself. \end{theorem} \begin{proof} By \cite{Deligne1981}, K3 surfaces lift to characteristic~$0$. By \cite{Madapusi-preprint}*{Theorem~1}, $\operatorname{T}^1(X,\ell)$ holds for any K3 surface $X$ over a finitely generated field $k$ of characteristic not~$2$. 
Hence Remark~\ref{R:NS} lets us compute the $G_k$-module $\NS {X^{\operatorname{sep}}}$. {}From this we obtain $(\NS {X^{\operatorname{sep}}})^{G_k}$. By Proposition~\ref{P:Num X}, $\NS X$ is of finite index in $(\NS {X^{\operatorname{sep}}})^{G_k}$. If $k$ is finite, then $\NS X = (\NS {X^{\operatorname{sep}}})^{G_k}$ by Proposition~\ref{P:NS over finite field}. \end{proof} \begin{remark} For K3 surfaces $X$ over a finite field $k$ of characteristic not~$2$, Corollary~\ref{C:NS over finite field under Tate} yields another way to compute $\NS {X^{\operatorname{sep}}}$, without lifting to characteristic~$0$, but still using \cite{Madapusi-preprint}*{Theorem~1}. \end{remark} \section*{Acknowledgements} We thank Saugata Basu, Fran\c{c}ois Charles, Robin Hartshorne, Moshe Jarden, J\'anos Koll\'ar, Andrew Kresch, Martin Olsson, Lenny Taelman, Burt Totaro, David Vogan, and Olivier Wittenberg for comments. We thank the Banff International Research Station, the American Institute of Mathematics, the Centre Interfacultaire Bernoulli, and the Mathematisches Forschungsinstitut Oberwolfach for their hospitality and support. \begin{bibdiv} \begin{biblist} \bib{Andre1996b}{article}{ author={Andr{\'e}, Yves}, title={On the Shafarevich and Tate conjectures for hyper-K\"ahler varieties}, journal={Math. Ann.}, volume={305}, date={1996}, number={2}, pages={205--248}, issn={0025-5831}, review={\MR {1391213 (97a:14010)}}, doi={10.1007/BF01444219}, } \bib{Artin1971}{article}{ author={Artin, M.}, title={On the joins of Hensel rings}, journal={Advances in Math.}, volume={7}, date={1971}, pages={282--296 (1971)}, issn={0001-8708}, review={\MR {0289501 (44 \#6690)}}, } \bib{Basu-Pollack-Roy2006}{book}{ author={Basu, Saugata}, author={Pollack, Richard}, author={Roy, Marie-Fran{\c {c}}oise}, title={Algorithms in real algebraic geometry}, series={Algorithms and Computation in Mathematics}, volume={10}, edition={2}, publisher={Springer-Verlag}, place={Berlin}, date={2006}, pages={x+662}, isbn={978-3-540-33098-1}, isbn={3-540-33098-4}, review={\MR {2248869 (2007b:14125)}}, } \bib{Bosch-Lutkebohmert-Raynaud1990}{book}{ author={Bosch, Siegfried}, author={L{\"u}tkebohmert, Werner}, author={Raynaud, Michel}, title={N\'eron models}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]}, volume={21}, publisher={Springer-Verlag}, place={Berlin}, date={1990}, pages={x+325}, isbn={3-540-50587-3}, review={\MR {1045822 (91i:14034)}}, } \bib{Charles-preprint}{misc}{ author={Charles, Fran\c {c}ois}, title={On the Picard number of K3 surfaces over number fields}, date={2011-11-17}, note={Preprint, \texttt {arXiv:1111.4117v1}}, } \bib{Charles-TC-preprint}{misc}{ author={Charles, Fran\c {c}ois}, title={The Tate conjecture for K3 surfaces over finite fields}, date={2012-07-07}, note={Preprint, \texttt {arXiv:1206.4002v2}}, } \bib{Deligne1981}{article}{ author={Deligne, P.}, title={Rel\`evement des surfaces $K3$ en caract\'eristique nulle}, language={French}, note={Prepared for publication by Luc Illusie}, conference={ title={Algebraic surfaces}, address={Orsay}, date={1976--78}, }, book={ series={Lecture Notes in Math.}, volume={868}, publisher={Springer}, place={Berlin}, }, date={1981}, pages={58--79}, review={\MR {638598 (83j:14034)}}, } \bib{EGA-IV.IV}{article}{ author={Grothendieck, A.}, title={\'El\'ements de g\'eom\'etrie alg\'ebrique. IV. \'Etude locale des sch\'emas et des morphismes de sch\'emas IV}, language={French}, journal={Inst. Hautes \'Etudes Sci. Publ. 
Math.}, number={32}, date={1967}, pages={361}, issn={0073-8301}, review={\MR {0238860 (39 \#220)}}, label={EGA~$\hbox {IV}_4$}, } \bib{Eisenbud1995}{book}{ author={Eisenbud, David}, title={Commutative algebra}, series={Graduate Texts in Mathematics}, volume={150}, note={With a view toward algebraic geometry}, publisher={Springer-Verlag}, place={New York}, date={1995}, pages={xvi+785}, isbn={0-387-94268-8}, isbn={0-387-94269-6}, review={\MR {1322960 (97a:13001)}}, } \bib{Elsenhans-Jahnel2011}{article}{ author={Elsenhans, Andreas-Stephan}, author={Jahnel, J{\"o}rg}, title={On the computation of the Picard group for $K3$ surfaces}, journal={Math. Proc. Cambridge Philos. Soc.}, volume={151}, date={2011}, number={2}, pages={263--270}, issn={0305-0041}, review={\MR {2823134 (2012i:14015)}}, doi={10.1017/S0305004111000326}, } \bib{Elsenhans-Jahnel2011-oneprime}{article}{ author={Elsenhans, Andreas-Stephan}, author={Jahnel, J{\"o}rg}, title={The Picard group of a $K3$ surface and its reduction modulo $p$}, journal={Algebra Number Theory}, volume={5}, date={2011}, number={8}, pages={1027--1040}, issn={1937-0652}, } \bib{Gotzmann1978}{article}{ author={Gotzmann, Gerd}, title={Eine Bedingung f\"ur die Flachheit und das Hilbertpolynom eines graduierten Ringes}, language={German}, journal={Math. Z.}, volume={158}, date={1978}, number={1}, pages={61--70}, issn={0025-5874}, review={\MR {0480478 (58 \#641)}}, } \bib{Haran1988}{article}{ author={Haran, Dan}, title={Quantifier elimination in separably closed fields of finite imperfectness degree}, journal={J. Symbolic Logic}, volume={53}, date={1988}, number={2}, pages={463--469}, issn={0022-4812}, review={\MR {947853 (89i:03057)}}, doi={10.2307/2274518}, } \bib{Hassett-Kresch-Tschinkel-preprint}{misc}{ author={Hassett, Brendan}, author={Kresch, Andrew}, author={Tschinkel, Yuri}, title={Effective computation of Picard groups and Brauer--Manin obstructions of degree two K3 surfaces over number fields}, date={2012-03-10}, note={Preprint, \texttt {arXiv:1203.2214v1}}, } \bib{Hermann1926}{article}{ author={Hermann, Grete}, title={Die Frage der endlich vielen Schritte in der Theorie der Polynomideale}, language={German}, journal={Math. Ann.}, volume={95}, date={1926}, number={1}, pages={736--788}, issn={0025-5831}, review={\MR {1512302}}, doi={10.1007/BF01206635}, } \bib{Iarrobino-Kanev1999}{book}{ author={Iarrobino, Anthony}, author={Kanev, Vassil}, title={Power sums, Gorenstein algebras, and determinantal loci}, series={Lecture Notes in Mathematics}, volume={1721}, note={Appendix C by Iarrobino and Steven L. Kleiman}, publisher={Springer-Verlag}, place={Berlin}, date={1999}, pages={xxxii+345}, isbn={3-540-66766-0}, review={\MR {1735271 (2001d:14056)}}, } \bib{Kahn2009}{article}{ author={Kahn, Bruno}, title={D\'emonstration g\'eom\'etrique du th\'eor\`eme de Lang-N\'eron et formules de Shioda-Tate}, language={French, with English and French summaries}, conference={ title={Motives and algebraic cycles}, }, book={ series={Fields Inst. Commun.}, volume={56}, publisher={Amer. Math. 
Soc.}, place={Providence, RI}, }, date={2009}, pages={149--155}, review={\MR {2562456 (2010j:14083)}}, } \bib{Katz2001}{article}{ author={Katz, Nicholas M.}, title={Sums of Betti numbers in arbitrary characteristic}, note={Dedicated to Professor Chao Ko on the occasion of his 90th birthday}, journal={Finite Fields Appl.}, volume={7}, date={2001}, number={1}, pages={29--44}, issn={1071-5797}, review={\MR {1803934 (2002d:14028)}}, } \bib{Keeler2008}{article}{ author={Keeler, Dennis S.}, title={Fujita's conjecture and Frobenius amplitude}, journal={Amer. J. Math.}, volume={130}, date={2008}, number={5}, pages={1327--1336}, issn={0002-9327}, review={\MR {2450210 (2009i:14006)}}, doi={10.1353/ajm.0.0015}, } \bib{Kleiman2005}{article}{ author={Kleiman, Steven L.}, title={The Picard scheme}, conference={ title={Fundamental algebraic geometry}, }, book={ series={Math. Surveys Monogr.}, volume={123}, publisher={Amer. Math. Soc.}, place={Providence, RI}, }, date={2005}, pages={235--321}, review={\MR {2223410}}, } \bib{Kloosterman2007}{article}{ author={Kloosterman, Remke}, title={Elliptic $K3$ surfaces with geometric Mordell-Weil rank 15}, journal={Canad. Math. Bull.}, volume={50}, date={2007}, number={2}, pages={215--226}, issn={0008-4395}, review={\MR {2317444 (2008f:14055)}}, doi={10.4153/CMB-2007-023-2}, } \bib{Lang1956}{article}{ author={Lang, Serge}, title={Algebraic groups over finite fields}, journal={Amer. J. Math.}, volume={78}, date={1956}, pages={555--563}, issn={0002-9327}, review={\MR {0086367 (19,174a)}}, } \bib{Liu2002}{book}{ author={Liu, Qing}, title={Algebraic geometry and arithmetic curves}, series={Oxford Graduate Texts in Mathematics}, volume={6}, note={Translated from the French by Reinie Ern\'e; Oxford Science Publications}, publisher={Oxford University Press}, place={Oxford}, date={2002}, pages={xvi+576}, isbn={0-19-850284-2}, review={\MR {1917232 (2003g:14001)}}, } \bib{Madapusi-preprint}{misc}{ author={Madapusi Pera, Keerthi}, title={The Tate conjecture for K3 surfaces in odd characteristic}, date={2013-02-20}, note={Preprint, \texttt {arXiv:1301.6326v2}}, } \bib{Mannoury1900}{article}{ author={Mannoury, G.}, title={Surfaces-images}, journal={Nieuw Arch. Wisk. (2)}, volume={4}, date={1900}, pages={112--129}, } \bib{Maulik-preprint}{misc}{ author={Maulik, Davesh}, title={Supersingular K3 surfaces for large primes}, date={2012-04-07}, note={Preprint, \texttt {arXiv:1203.2889v2}}, } \bib{Maulik-Poonen2012}{article}{ author={Maulik, Davesh}, author={Poonen, Bjorn}, title={N\'eron-Severi groups under specialization}, journal={Duke Math. J.}, volume={161}, date={2012}, number={11}, pages={2167--2206}, issn={0012-7094}, review={\MR {2957700}}, doi={10.1215/00127094-1699490}, } \bib{MilneEtaleCohomology1980}{book}{ author={Milne, J. S.}, title={\'Etale cohomology}, series={Princeton Mathematical Series}, volume={33}, publisher={Princeton University Press}, place={Princeton, N.J.}, date={1980}, pages={xiii+323}, isbn={0-691-08238-3}, review={\MR {559531 (81j:14002)}}, } \bib{Minkowski1887}{article}{ author={Minkowski, H.}, title={Zur Theorie der positiven quadratischen Formen}, journal={J. reine angew. Math.}, volume={101}, date={1887}, pages={196--202}, } \bib{Neron1952}{article}{ author={N{\'e}ron, Andr{\'e}}, title={Probl\`emes arithm\'etiques et g\'eom\'etriques rattach\'es \`a la notion de rang d'une courbe alg\'ebrique dans un corps}, language={French}, journal={Bull. Soc. Math.
France}, volume={80}, date={1952}, pages={101--166}, issn={0037-9484}, review={\MR {0056951 (15,151a)}}, } \bib{Nygaard1983}{article}{ author={Nygaard, N. O.}, title={The Tate conjecture for ordinary $K3$ surfaces over finite fields}, journal={Invent. Math.}, volume={74}, date={1983}, number={2}, pages={213--237}, issn={0020-9910}, review={\MR {723215 (85h:14012)}}, doi={10.1007/BF01394314}, } \bib{Nygaard-Ogus1985}{article}{ author={Nygaard, Niels}, author={Ogus, Arthur}, title={Tate's conjecture for $K3$ surfaces of finite height}, journal={Ann. of Math. (2)}, volume={122}, date={1985}, number={3}, pages={461--507}, issn={0003-486X}, review={\MR {819555 (87h:14014)}}, doi={10.2307/1971327}, } \bib{Oguiso2009}{article}{ author={Oguiso, Keiji}, title={Shioda-Tate formula for an abelian fibered variety and applications}, journal={J. Korean Math. Soc.}, volume={46}, date={2009}, number={2}, pages={237--248}, issn={0304-9914}, review={\MR {2494474 (2009m:14011)}}, doi={10.4134/JKMS.2009.46.2.237}, } \bib{SGA4.5}{book}{ author={Deligne, P.}, title={Cohomologie \'etale}, series={Lecture Notes in Mathematics, Vol. 569}, note={S\'eminaire de G\'eom\'etrie Alg\'ebrique du Bois-Marie SGA $4\frac 12$; Avec la collaboration de J. F. Boutot, A. Grothendieck, L. Illusie et J. L. Verdier}, publisher={Springer-Verlag}, place={Berlin}, date={1977}, pages={iv+312pp}, review={\MR {0463174 (57 \#3132)}}, label={SGA $4\frac 12$}, } \bib{SGA6}{book}{ title={Th\'eorie des intersections et th\'eor\`eme de Riemann-Roch}, language={French}, series={Lecture Notes in Mathematics, Vol. 225}, note={S\'eminaire de G\'eom\'etrie Alg\'ebrique du Bois-Marie 1966--1967 (SGA 6); Dirig\'e par P. Berthelot, A. Grothendieck et L. Illusie. Avec la collaboration de D. Ferrand, J. P. Jouanolou, O. Jussila, S. Kleiman, M. Raynaud et J. P. Serre}, publisher={Springer-Verlag}, place={Berlin}, date={1971}, pages={xii+700}, review={\MR {0354655 (50 \#7133)}}, label={SGA 6}, } \bib{Shioda1972}{article}{ author={Shioda, Tetsuji}, title={On elliptic modular surfaces}, journal={J. Math. Soc. Japan}, volume={24}, date={1972}, pages={20--59}, issn={0025-5645}, review={\MR {0429918 (55 \#2927)}}, } \bib{Shioda1986}{article}{ author={Shioda, Tetsuji}, title={An explicit algorithm for computing the Picard number of certain algebraic surfaces}, journal={Amer. J. Math.}, volume={108}, date={1986}, number={2}, pages={415--432}, issn={0002-9327}, review={\MR {833362 (87g:14033)}}, doi={10.2307/2374678}, } \bib{Shioda1990}{article}{ author={Shioda, Tetsuji}, title={On the Mordell-Weil lattices}, journal={Comment. Math. Univ. St. Paul.}, volume={39}, date={1990}, number={2}, pages={211--240}, issn={0010-258X}, review={\MR {1081832 (91m:14056)}}, } \bib{Simpson2008}{article}{ author={Simpson, Carlos}, title={Algebraic cycles from a computational point of view}, journal={Theoret. Comput. Sci.}, volume={392}, date={2008}, number={1-3}, pages={128--140}, issn={0304-3975}, review={\MR {2394989 (2008m:14021)}}, doi={10.1016/j.tcs.2007.10.008}, } \bib{Tate1976}{article}{ author={Tate, John}, title={Relations between $K_{2}$ and Galois cohomology}, journal={Invent. Math.}, volume={36}, date={1976}, pages={257--274}, issn={0020-9910}, review={\MR {0429837 (55 \#2847)}}, } \bib{Tate1994}{article}{ author={Tate, John}, title={Conjectures on algebraic cycles in $l$-adic cohomology}, conference={ title={Motives}, address={Seattle, WA}, date={1991}, }, book={ series={Proc. Sympos. Pure Math.}, volume={55}, publisher={Amer. Math. 
Soc.}, place={Providence, RI}, }, date={1994}, pages={71--83}, review={\MR {1265523 (95a:14010)}}, } \bib{Tate1995}{article}{ author={Tate, John}, title={On the conjectures of Birch and Swinnerton-Dyer and a geometric analog}, conference={ title={S\'eminaire Bourbaki, Vol.\ 9}, }, book={ publisher={Soc. Math. France}, place={Paris}, }, date={1995}, pages={Exp.\ No.\ 306, 415--440}, review={\MR {1610977}}, } \bib{VanLuijk2007}{article}{ author={van Luijk, Ronald}, title={K3 surfaces with Picard number one and infinitely many rational points}, journal={Algebra Number Theory}, volume={1}, date={2007}, number={1}, pages={1--15}, issn={1937-0652}, review={\MR {2322921 (2008d:14058)}}, } \bib{VanLuijk2007-Heron}{article}{ author={van Luijk, Ronald}, title={An elliptic $K3$ surface associated to Heron triangles}, journal={J. Number Theory}, volume={123}, date={2007}, number={1}, pages={92--119}, issn={0022-314X}, review={\MR {2295433 (2007k:14077)}}, doi={10.1016/j.jnt.2006.06.006}, } \bib{Vasconcelos1998}{book}{ author={Vasconcelos, Wolmer V.}, title={Computational methods in commutative algebra and algebraic geometry}, series={Algorithms and Computation in Mathematics}, volume={2}, note={With chapters by David Eisenbud, Daniel R. Grayson, J\"urgen Herzog and Michael Stillman}, publisher={Springer-Verlag}, place={Berlin}, date={1998}, pages={xii+394}, isbn={3-540-60520-7}, review={\MR {1484973 (99c:13048)}}, doi={10.1007/978-3-642-58951-5}, } \end{biblist} \end{bibdiv} \end{document}
{ "timestamp": "2013-03-29T01:01:04", "yymm": "1210", "arxiv_id": "1210.3720", "language": "en", "url": "https://arxiv.org/abs/1210.3720" }
\section{Introduction} There is recent interest in constructing theories of covering maps in various settings (Berestovskii-Plaut \cite{BP3} and Brodskiy-Dydak-LaBuz-Mitra \cite{BDLM2}--\cite{BDLM3} for uniform spaces, Fischer-Zastrow \cite{FisZas} and Brodskiy-Dydak-LaBuz-Mitra \cite{BDLM} for locally path-connected spaces, and Dydak \cite{Dyd} for general spaces). Those efforts amount to generalizing the concept of coverings. Another way to proceed (in order to get a workable theory) is to narrow down covering projections. This was done by R.H.~Fox \cite{Fox1}, \cite{Fox2}, who created the concept of \textbf{overlays}. It is well-known that the classical theory of covers works best for semi-locally simply connected spaces that are locally path-connected. There is an example of Zeeman \cite[6.6.14 on p.258]{HilWyl} that points out the limits of the classical theory. That example amounts to two non-equivalent coverings of non-locally path-connected spaces with the same image of the fundamental groups. Yet this example is not mentioned in current textbooks on topology. \par The purpose of this note is to outline an ``evolutionary'' way of arriving at examples of anomalous coverings. Our example of non-equivalent coverings of path-connected spaces is stronger than that of Zeeman \cite[6.6.14 on p.258]{HilWyl} in the sense that the total spaces are (naturally) homeomorphic. Also, it captures the essential features of Zeeman's example. \par The note is written in the style of a Moore School textbook (guiding students to results via a sequence of definitions and problems). Hopefully it will be used for student presentations in topology classes. Notice there are no proofs included; a promising student ought to be able to reconstruct them on his/her own. \par Be aware that we are coining a few new terms. While Sine Curve and Warsaw Circle are widely used, Dusty Broom and Zeeman's Palm seem to be brand new. The author is grateful to Greg Conner for pointing out the need to cite overlays of R.H.~Fox \cite{Fox1}, \cite{Fox2}. \section{Anomalous coverings} Throughout this section $X$ is a connected space with two path components, each of them simply connected and locally path-connected. \begin{Example} $X$ is the \textbf{Sine Curve}, the closure in the plane of the graph of the function $f(x)=\sin(\frac{\pi}{x})$, $0 < x \leq 1$. \end{Example} \begin{Example} $X$ is the \textbf{Dusty Broom}, the union of the infinite broom (the union of straight arcs joining $(0,0)$ and $(1,\frac{1}{n})$, $n\ge 1$) and a speck of dust in the form of the point $(1,0)$. \end{Example} The major difference between the Sine Curve and the Dusty Broom is that the Sine Curve is compact. The simplest way to make $X$ path-connected is to add an arc $A$ joining points $x_0$ and $x_1$ from different path-components of $X$ (with the interior of $A$ disjoint from $X$). We shall refer to such an arc as a \textbf{bridge} joining $x_0$ and $x_1$. In the case of the Sine Curve we join $(1,0)$ and $(0,-1)$, resulting in the \textbf{Warsaw Circle}. In the case of the Dusty Broom we join $(0,0)$ with $(1,0)$, resulting in \textbf{Zeeman's Palm} (infinitely many fingers and one thumb). \begin{Exercise} Show $X_1=X\cup A$ is path-connected and simply connected. \end{Exercise} \begin{Proposition}\label{FirstCovering} Consider the space $\tilde X_1$ obtained from $X\times \{0,1\}$ by adding two bridges $A_0$ and $A_1$: one joining $(x_0,0)$ and $(x_1,1)$, the other joining $(x_0,1)$ and $(x_1,0)$.
The natural extension $p:\tilde X_1\to X_1:=X\cup A$ of the projection $X\times \{0,1\}\to X$ is a two-fold covering map. \end{Proposition} \begin{Exercise} Show $\tilde X_1$ is a connected space with two path components, each of them simply connected. \end{Exercise} We are encountering the first anomalies in the theory of coverings: \\ 1. A connected cover of a path-connected space may not be path-connected.\\ 2. A simply connected space $B$ may admit a connected cover $E\to B$ that is not a homeomorphism. What happens if we try to improve $\tilde X_1$ by making it path-connected? Our standard way is to add a bridge $B_0$ joining its two path-components: let's make $B_0$ join $(x_0,0)$ and $(x_1,0)$. Now the image of $\tilde X_1\cup B_0$ is $X_2:=X\cup A\cup B$, where $B$ is another bridge joining $x_0$ and $x_1$. Obviously, we want to extend $p$ to a covering projection, so we need to add one more bridge $B_1$ joining $(x_0,1)$ and $(x_1,1)$, resulting in a path-connected space $\tilde X_2$ and a two-fold covering map $p_1:\tilde X_2\to X_2$. \begin{Exercise} Show that $\pi_1(X_2)=Z$, that $\pi_1(\tilde X_2)=Z$, and that the image of $\pi_1(p_1):\pi_1(\tilde X_2)\to \pi_1(X_2)$ is $2\cdot Z$. \end{Exercise} Notice there is another covering map $p_2: \tilde X_2\to X_2$ (the Evil Twin of $p_1$) extending the projection $X\times \{0,1\}\to X$. Namely, we exchange the bridges: $A_0$ and $A_1$ are sent homeomorphically onto $B$, and the other two bridges $B_0$, $B_1$ are sent homeomorphically onto $A$. Abstractly speaking, it is the same covering map (we simply change the labelling of the bridges in $\tilde X_2$), so the image of $\pi_1(p_2):\pi_1(\tilde X_2)\to \pi_1(X_2)$ is $2\cdot Z$. Yet \begin{Proposition} There is no continuous $f:\tilde X_2\to \tilde X_2$ such that $p_i\circ f= p_j$ for $i\ne j$, $1\leq i,j\leq 2$. \end{Proposition} \begin{Exercise} Zeeman \cite[6.6.14 on p.258]{HilWyl} constructs a pair of two-fold coverings over the wedge $B$ of the unit circle $S^1$ and Zeeman's Palm $ZP$. In the first one the total space $E_1$ is $S^1$ with two copies of $ZP$ attached at $1$ and $-1$, respectively. The projection $p_1:E_1\to B$ is the natural extension of the two-fold covering $z\to z^2$ of $S^1$ over itself. In the second one the total space $E_2$ is $S^1$ with two copies of the Dusty Broom attached at $1$ and $-1$, respectively. One then adds two bridges: each from the base of one broom to the speck of dust of the other. The projection $p_2:E_2\to B$ is the natural extension of the two-fold covering $z\to z^2$ of $S^1$ over itself. Show $E_1$ and $E_2$ are not homeomorphic. \end{Exercise} \section{Reflection} Let's reflect on why we considered spaces $X$ that are connected with two path components, each of them simply connected and locally path-connected. \begin{Exercise} Suppose $p:E\to B$ is a two-fold covering of connected spaces. If $B$ is path-connected and simply connected, show $E$ has exactly two path components, they are homeomorphic, and each of them is simply connected. \end{Exercise} The last exercise is to check if the reader understands the material. \begin{Exercise} Construct a two-fold covering $p:E\to B$ of connected spaces with the following properties:\\ a. $B$ is path-connected and simply connected,\\ b. path-components of $E$ are locally path-connected. \end{Exercise}
{ "timestamp": "2012-10-18T02:03:20", "yymm": "1210", "arxiv_id": "1210.3733", "language": "en", "url": "https://arxiv.org/abs/1210.3733" }
\section{Introduction} NGC\,6822 is an isolated barred dwarf irregular galaxy within the Local Group. It is comparable in size to the SMC but has a slightly higher metallicity (Muschielok et al. 1999; Venn et al. 2001). It contains numerous supergiants and H{\sc ii} regions with obvious signs of star formation and is sometimes referred to as a polar ring galaxy (Demers et al. 2006). Recent HST colour-magnitude studies suggest that over 50 percent of its stars formed in the last 5\,Gyr (Cannon et al. 2012). We present here new multi-epoch $JHK_{S}$ photometry of the central regions of NGC\,6822, which we compare with earlier work and use to identify and characterize large amplitude, Mira, asymptotic giant branch (AGB) variables. Earlier papers used the same photometry to identify the first symbiotic star in NGC\,6822 (Kniazev et al. 2009) and to study the Cepheid variables (Feast et al. 2012). A preliminary analysis of the AGB variables was produced by Nsengiyumva (2010). Earlier studies of red giants and AGB stars in NGC\,6822 have been made by Cioni \& Habing (2005), Kang (2006), Groenewegen et al. (2009), Kacharov, Rejkuba \& Cioni (2012) and by Sibbons et al. (2012), while Battinelli \& Demers (2011) specifically identified AGB variables. This work forms part of a broad study of AGB variables in Local Group galaxies which so far has covered Leo~I (Menzies et al. 2002; 2010), Phoenix (Menzies et al. 2008), Fornax (Whitelock et al. 2009) and Sculptor (Menzies et al. 2011). These new observations provide an opportunity to compare these dwarf spheroidals with a dwarf irregular surveyed in the same way. \section{Observations} Our survey of NGC\,6822 is confined to the optical bar, which is aligned nearly N--S. We used the Japanese-South African IRSF telescope equipped with the SIRIUS camera, which permits simultaneous imaging in the $J$, $H$ and $K_S$ bands (see Nagayama et al. 2003 for details). We defined three overlapping fields, with field 1 centred at $\alpha$(2000.0) = $19^h44^m56^s$ and $\delta$(2000.0) = $-14^{\circ}48'06''$. Fields 2 and 3 are centred 6.7 arcmin N and S, respectively, of field 1. The three fields, each approximately 7.8 arcmin square, were observed in $JHK_S$ at 19, 18 and 16 epochs, respectively, over a period of 3.5 years. \begin{table*} \caption[]{Data for stars with standard errors less than 0.1 mag (note that this selection omits the large amplitude variables, which are listed in various tables below). The full table is available on-line.
The first two columns are the equatorial coordinates in degrees; N is our own identification number; the mean photometry, $JHK_S$, is listed together with its standard deviations, $\delta JHK_S$; NJ, NH and NK are the numbers of observations used to derive the means.} \begin{center} \begin{tabular}{ccccccccccccccc} \hline RA & Dec & N & $J$& $\delta J$& $H$ & $\delta H$& $K_S$ & $\delta K$ & $J-H$ & $H-K_S$ & $J-K_S$ & NJ & NH & NK\\ \multicolumn{2}{c}{(2000.0)}&& \multicolumn{9}{c}{(mag)}\\ \hline 296.28656 &--14.80424 &10001 &12.613 & 0.009 & 12.050 & 0.006 & 11.939 & 0.026 & 0.563 & 0.111 & 0.674 & 18 &14 &18\\ 296.17661 &--14.78323 &10002 &12.704 & 0.014 & 12.303 & 0.016 & 12.217 & 0.030 & 0.401 & 0.086 & 0.487 & 18 &18 &18\\ 296.18298 &--14.83667 &10008 &13.354 & 0.018 & 12.903 & 0.008 & 12.819 & 0.033 & 0.451 & 0.084 & 0.535 & 18 &16 &18\\ 296.20160 &--14.83465 &10009 &13.602 & 0.009 & 13.112 & 0.004 & 13.020 & 0.021 & 0.490 & 0.092 & 0.582 & 18 &12 &18\\ 296.21399 &--14.82684 &10010 &13.161 & 0.009 & 12.751 & 0.009 & 12.673 & 0.012 & 0.410 & 0.078 & 0.488 & 17 &17 &18\\ \hline \end{tabular} \end{center} \label{tab_main} \end{table*} Further details, including those of the photometric calibration, are given by Feast et al. (2012). The basic data for stars with standard errors less than 0.1 mag in each band are provided on-line, and the first few lines of the catalogue are illustrated in Table~\ref{tab_main} (the Mira variables, discussed in section 5, are not in this table). Numbers of observations, NJ, NH etc., larger than 19 are found for some stars in the areas of overlap between fields. NGC\,6822 is at low galactic latitude, $b=-18^{\circ}.4$, so it experiences some interstellar extinction as well as confusion with Galactic sources. For the interstellar extinction we adopt $A_V=0.77$ mag (amounting to $A_J=0.20$, $A_H=0.12$, $A_K=0.07$ mag) from Clementini et al. (2003), using the information from Schlegel, Finkbeiner \& Davis (1998). We note, however, that the extinction across NGC\,6822 is somewhat variable and that significantly higher values are possible for sources associated with star forming regions. Our discussion of the AGB, and of the large amplitude variables in particular, will not be very sensitive to either the reddening or its variability. \section{Colour-Magnitude Diagram} The colour-magnitude and two-colour diagrams for stars from Table~\ref{tab_main}, plus the variables discussed in section 5, are illustrated in Fig.~\ref{fig_cm1} and Fig.~\ref{fig_cc1}. According to the detailed analysis by Sibbons et al. (2012), the tip of the red giant branch (TRGB) is at $K_0=17.42\pm0.11$ mag (2MASS system). So, apart from foreground objects, we are dealing entirely with AGB stars, together with a sprinkling of red supergiants at the highest luminosities. Following Sibbons et al. we identify stars with $(J-H)_0<0.75$ mag as most likely to be foreground dwarfs; they are shown in gray unless they have spectral types. Most, but not all, of the stars populating the left of the colour-magnitude diagram and the bottom of the two-colour diagram are foreground dwarfs. The morphology of these diagrams is more clearly understood when stars of known spectral type are identified, and it is therefore discussed in the next section. \begin{figure*} \includegraphics[width=17.6cm]{fig_cm_sp.ps} \caption{Colour-magnitude diagram showing the stars in our NGC\,6822 catalogue in black and, for those that are probably foreground dwarfs, in gray (unless they have spectral types). The TRGB is at $K_0=17.42$ mag.
Stars of known late spectral-type are shown in colour: those with narrow-band colours of M stars and C stars are shown in blue and red, respectively; S stars are shown in cyan and M supergiants in magenta.} \label{fig_cm1} \end{figure*} \begin{figure*} \includegraphics[width=17.6cm]{fig_cc_sp.ps} \caption{Two-colour diagram with the same stars as in Fig.~\ref{fig_cm1}. The line represents the locus for Galactic carbon Miras (equation 2 from Whitelock et al. 2006), converted to 2MASS as $(H-K_S)_0= 1.003(J-H)_0-0.428$. Note the very clear separation in $(J-H)_0$ of the foreground M-dwarfs (smaller values) and NGC\,6822 M-giants (larger values). The M supergiants have intermediate colours. } \label{fig_cc1} \end{figure*} \section{Spectral Types} Various groups have attempted to separate the AGB stars into M- and C-type on the basis of their $J-K$ colour (Cioni \& Habing 2005; Kang et al. 2006; Sibbons et al. 2012). Kacharov et al. (2012) obtained spectra, including those of 148 stars in common with us, two of which are Mira variables. They concluded that 79 percent of the carbon stars had $(J-K)_0>1.28$ mag (on the 2MASS system). Letarte et al. (2002) obtained narrow-band photometry over a very large field in order to find the extent of the NGC\,6822 C-star population. Over 5000 stars were found in common with our sample, including about 1000 with the colours of M stars ($R-I>1.1$ and $CN-TiO<0$) and 430 with the colours of C stars ($R-I>1.1$ and $CN-TiO>0.3$); this allows a reasonably good division between M- and C-type stars among our variables (section 5). However, the central field is crowded and a few misidentifications are possible, which might explain some outlying points among the C- and M-type stars. Levesque \& Massey (2012) discuss red supergiants (RSGs) in NGC\,6822 and use the $V-R$ and $B-V$ colours to separate RSGs from giants. Table~\ref{tab_super} shows the photometry of the M-type supergiants from Levesque \& Massey's table~2, which are also illustrated in Figs.~\ref{fig_cm1} and \ref{fig_cc1}. N10032 is a small amplitude variable (as the uncertainties on the mean magnitudes listed in Table~\ref{tab_super} show). It is not obviously periodic and the full peak-to-peak amplitude is $\Delta K_S \sim 0.3$ mag. Note also that N10015 falls amongst the dwarfs in both figures, although its status as a supergiant is well established. As others have noted (Cioni \& Habing 2005, see also Nikolaev \& Weinberg 2000), there is no clear distinction in the colour-magnitude diagram between the giants and supergiants (or even between the supergiants and foreground dwarfs). The broad morphology of Fig.~\ref{fig_cm1} is now clear. The vertical strip between $(J-K)_0\sim 0.3$ and 0.6 mag is mostly foreground stars, but will include warm supergiant members of NGC\,6822, e.g., the Cepheids (section 5). The vertical strip around $(J-K_S)_0\sim 0.9$ mag is mostly foreground M-dwarfs. The almost vertical strip at around $(J-K)_0\sim 1.1$ mag starts just above the TRGB as M stars on the AGB; at higher luminosities, $K_{S0}<15.8$ mag, it comprises luminous AGB stars. These stars are discussed further in the context of the variable stars (section 5), but are presumably younger than the bulk of the AGB population that evolves to the right of the diagram as C stars. It is here we would expect to find hot bottom burning stars (e.g. Sackmann \& Boothroyd 1992) and super-AGB stars (e.g. Siess 2008), prior to the onset of heavy mass-loss.
At even higher luminosities, and between the AGB column and the M-dwarf column, fall the M supergiants. The carbon stars concentrate in a diagonal band to the right of the AGB M-type stars. The AGB variables without spectral types\footnote{most of these will be C stars, but this is also the place we expect to find OH/IR stars if such objects exist in NGC\,6822.} extend the C-star sequence to the extreme right of the diagram. It is likely that a small number of the points below the carbon stars ($K_S>16$ mag) in Fig.~\ref{fig_cm1} are actually unresolved galaxies (see e.g. Whitelock et al. 2009). \begin{table*} \caption[]{M Supergiants from Levesque \& Massey (2012).} \begin{center} \begin{tabular}{ccccccccccccccl} \hline LGGS& RA& Dec & N & $J$& $\delta J$& $H$ & $\delta H$& $K_S$ & $\delta K$ & $J-K_S$ & NJ & NH& NK & Sp\\ & \multicolumn{2}{c}{(2000.0)}&& \multicolumn{7}{c}{(mag)}\\ \hline J194445.76-145221.2 & 296.19067& -14.87276& 30016& 13.91& 0.01& 13.09& 0.03& 12.78& 0.02& 1.13& 12& 14& 11 &M1\\ J194447.81-145052.5 & 296.19919& -14.84817& 40115& 14.41& 0.03& 13.57& 0.03& 13.26& 0.02& 1.15& 24& 26& 22 & M1\\ J194450.44-144410.0 & 296.21021& -14.73628& 40177& 15.14& 0.01& 14.33& 0.03& 14.06& 0.02& 1.08& 22& 24& 22 & M2\\ J194453.46-144540.1 & 296.22278& -14.76476& 10089& 15.10& 0.03& 14.29& 0.02& 14.02& 0.02& 1.08& 18& 17& 17 & M4.5\\ J194454.46-144806.2 & 296.22696& -14.80191& 10032& 14.48& 0.05& 13.66& 0.03& 13.34& 0.05& 1.14& 14& 14& 16 &M1\\ J194454.54-145127.1 & 296.22726& -14.85778& 40278& 13.43& 0.03& 12.60& 0.04& 12.33& 0.05& 1.10& 33& 33& 33 &M0\\ J194455.70-145155.4 & 296.23212& -14.86564& 40315& 13.39& 0.06& 12.61& 0.05& 12.33& 0.05& 1.06& 27& 26& 28 &M0\\ J194457.31-144920.2 & 296.23883& -14.82247& 10011& 13.60& 0.04& 12.75& 0.04& 12.46& 0.05& 1.14& 18& 18& 18 &M1\\ J194459.86-144515.4 & 296.24945& -14.75443& 10015& 13.64& 0.00& 12.95& 0.01& 12.70& 0.02& 0.94& 13& 17& 18 &M1\\ J194503.58-144337.6 & 296.26492& -14.72723& 20101& 15.96& 0.02& 15.11& 0.02& 14.81& 0.02& 1.15& 16& 17& 16 &M0\\ \hline \end{tabular} \end{center} \label{tab_super} \end{table*} \begin{table*} \caption[]{Stars with known S spectral-type.} \begin{center} \begin{tabular}{ccccccccccccc} \hline RA & Dec & N & $J$& $\delta J$& $H$ & $\delta H$& $K_S$ & $\delta K$ & $J-K_S$ & NJ & NH& NK \\ \multicolumn{2}{c}{(2000.0)}&& \multicolumn{7}{c}{(mag)}\\ \hline 296.17892 & --14.82286 & 10870 & 17.52& 0.04& 16.53& 0.04& 16.19& 0.05& 1.34& 15& 17& 17\\ 296.21545 & --14.83469 & 10784 & 17.45& 0.03& 16.53& 0.04& 16.20& 0.06& 1.26& 17& 18& 17\\ 296.27341 & --14.80861 & 11004 & 17.60& 0.03& 16.62& 0.03& 16.27& 0.05& 1.33& 15& 16& 15\\ 296.28308 & --14.80497 & 11029 & 17.46& 0.03& 16.55& 0.02& 16.22& 0.05& 1.24& 16& 16& 17\\ 296.25427 & --14.81764 & 12050 & 18.17& 0.07& 17.25& 0.10& 16.70& 0.05& 1.47& 14& 17& 17\\ 296.19156 & --14.89296 & 30528 & 17.76& 0.06& 16.85& 0.04& 16.51& 0.10& 1.26& 12& 12& 14\\ 296.25522 & --14.82579 & 10326 & 16.82& 0.03& 15.86& 0.03& 15.52& 0.02& 1.30& 16& 16& 16\\ \hline \end{tabular} \end{center} \label{tab_s} \end{table*} \subsection {S stars} Six of the nine S stars identified by Kacharov et al. (2012) fall in the area we surveyed and they are listed in Table~\ref{tab_s}. All of these have the $K_S$ magnitudes, and all but one have the colours, anticipated for an evolutionary state between that of the lower luminosity M stars and the C stars (see Fig.~\ref{fig_cm1} and \ref{fig_cc1}). 
The exception, N12050, has a slightly redder $J-K_S$ and therefore falls amongst the C stars (as noted by Kacharov et al.). The S star identified by Aaronson et al. (1984) is our N10326, which is about a magnitude brighter than the Kacharov et al. S stars. If it is an intrinsic S star (i.e. its s-process elements are the consequence of its own evolution and dredge-up and are not from a close companion) then it must be more massive than the others, perhaps comparable to the hot bottom burning Li-rich S stars in the LMC and SMC (Smith et al. 1995; Whitelock et al. 2003). In that case it may be from the same population as the luminous large-amplitude O-rich variables discussed below. \begin{figure*} \includegraphics[width=17.6cm]{fig_cm_var.ps} \caption{Colour-magnitude diagram illustrating the variable stars. Symbols: Cepheids, magenta; M-type Miras and supergiant, blue; C-type Miras, red; other variables, triangles: large amplitude yellow, small amplitude green. All stars to the right of the dotted line were systematically examined for variability.} \label{fig_cm2} \end{figure*} \begin{figure*} \includegraphics[width=17.6cm]{fig_cc_var.ps} \caption{Two-colour diagram showing the same stars as in Fig.~\ref{fig_cm2}.} \label{fig_cc2} \end{figure*} \section{Variables} We examined the light curves of all stars with $K_S<17$ mag for which we had at least 10 observations and which showed a standard deviation in $J$, $H$ or $K_S$ of $>0.2$ mag, going to lower standard deviations for brighter magnitudes. By this approach we found the brightest of the Cepheids, all of the stars listed in Tables~\ref{tab_o}(a) and (b), plus a considerable fraction of those which we list in the later tables and which are discussed below. Given that our primary objective was to find large amplitude AGB variables, we then examined stars with $J-K_S>2.2$ mag, finding them all to be variable at some level. It should be noted that one consequence of the use of a reference frame in the $H$ band to provide positions at which ``fixed-position'' DoPHOT photometry was performed (Feast et al. 2012) is that extremely red stars or stars with a very large amplitude of variation might have been missed. If the star was not measurable on the reference $H$ frame, then it would not have been found in any other band either. This means that there may be red variables that we have not measured. The limiting magnitudes at $J$, $H$ and $K_S$ in our catalogue are approximately 20.3, 18.3 and 18.0 mag, respectively. This means that we would have missed red variables with $H-K_S=2.0$ mag that were fainter than about $K_S=16.3$ mag or $J=18.4$ mag at the time the reference frame was obtained, even though these latter values are significantly above the relevant limiting magnitudes. The various variables are identified in Figs.~\ref{fig_cm2} and \ref{fig_cc2}. The Cepheids were discussed by Feast et al. (2012) and the others are considered below. Periods were determined by Fourier analysis, and Table~\ref{tab_o} lists the Fourier mean $JHK_S$ magnitudes for the large amplitude (see below) variables with measurable periods (Miras), together with the peak-to-peak amplitudes. The table is split into (a) O-rich, M-type, stars and (b) C-stars, on the basis of $J-K_S$ colour (see section 4). Table~\ref{tab_o} is the only one to list Fourier mean magnitudes. The other tables contain simple mean values of all the observations.
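In outline, each such fit is a linear least-squares fit of a first-order sine curve at a trial period, with the Fourier mean and the peak-to-peak amplitude read off from the fitted coefficients. The following sketch assumes uniform weights; the function names and the period grid are illustrative only and are not those of the software actually used for the reductions.

\begin{verbatim}
import numpy as np

def fit_sine(t, mag, period):
    # Fit mag(t) = m0 + a*sin(w*t) + b*cos(w*t), with w = 2*pi/period.
    # Returns the Fourier mean m0, the peak-to-peak amplitude
    # 2*sqrt(a^2 + b^2) of the first-order sine curve, and the
    # residual sum of squares of the fit.
    w = 2.0 * np.pi / period
    A = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    coef, res, _, _ = np.linalg.lstsq(A, mag, rcond=None)
    rss = res[0] if res.size else np.sum((mag - A @ coef) ** 2)
    return coef[0], 2.0 * np.hypot(coef[1], coef[2]), rss

def best_period(t, mag, pmin=100.0, pmax=1100.0, n=5000):
    # Grid search over trial periods (in days); the limits roughly
    # bracket the range of periods found for the Miras here.
    periods = np.linspace(pmin, pmax, n)
    rss = np.array([fit_sine(t, mag, p)[2] for p in periods])
    return periods[np.argmin(rss)]
\end{verbatim}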
Note that the stars in Table~\ref{tab_o} are \underline{not} in the online catalogue, because their mean magnitudes are evaluated differently. Our previous practice has been to define large amplitude, Mira, variables to be those with $\Delta K_S>0.4$ mag (e.g. Whitelock et al. 2006). While the distinction between Miras and SR variables is clear for O-rich stars (Miras were originally defined from observations of Galactic stars, most of which are O-rich), it is not so clear for the C-rich stars (Whitelock 1996), and it is apparent that this cut-off results in far fewer short period ($P<300$ days) Miras than we might expect in NGC\,6822, and presumably in similar galaxies. However, if we are interested in these variables as distance indicators the distinction is important, because low-amplitude variables can fall on any one of several period-luminosity (PL) relations (Wood 2000; Ita et al. 2004), whereas the Miras with periods less than 450\,days fall only on one relation. Nevertheless, we relax the criterion very slightly here to include stars with $0.36<\Delta K_S<0.4$ mag, while noting in Table~\ref{tab_c}(b) where we have done so. The $K_S$ light curves of the Mira variables are illustrated in the appendix. With regard to establishing the O- or C-rich nature of the Miras, we follow Letarte et al. (2002): relaxing their criteria very slightly (as they suggest) to examine stars with $R-I>1.0$ (rather than 1.1), we find the following. Six Miras have M-type narrow-band colours; they are labeled `M' in the last column of Table~\ref{tab_o}(a), and all have $(J-K_S)_0<1.33$ mag. Similarly, 11 stars have C-type narrow-band colours; these are labeled `C' in the last column of Table~\ref{tab_c}(b), and all have $(J-K_S)_0>1.33$ mag. Two of the Miras have spectral types in Kacharov et al. (2012), as indicated in Table~\ref{tab_c}(b). N12790 ($P=182$ days) has spectral type C6.5 and is the bluest ($(J-K_S)_0=1.31$ mag) of the stars that we consider to be C stars in our PL relation analysis (see section 7 below). Table \ref{tab_var1} contains large amplitude (mostly $\Delta K_S>0.4$ mag) but not obviously periodic variables; it includes some Miras for which we could not estimate periods and probably some unrecognized Miras (see below). Table~\ref{tab_var2} contains small amplitude ($\Delta K_S<0.4$ mag) variables, most of which are not unambiguously periodic; these are probably semi-regular (SR) or irregular variables. This group includes a few which have SR-variations superimposed on an apparently secular trend, which can make the overall change more than 0.4 mag. These are candidates for variables with two periods. The distinction between Tables \ref{tab_var1} and \ref{tab_var2} is to some extent subjective and there is undoubtedly overlap between the two groups. Nevertheless, they do show distinctly different colours (see Figs.~\ref{fig_cm2} and \ref{fig_cc2}), with most of the large amplitude variables having larger values of $(J-K_S)$ than those with small amplitudes, suggesting that they have higher mass-loss rates. Given the cadence of our observations and the fact that at least some mass-losing C-rich Miras undergo periods of very erratic variation (e.g. R~For as illustrated in Whitelock et al. (1997) and several LMC C-rich Miras discussed by Whitelock et al. (2003)), we anticipate that a significant fraction of the stars in Table~\ref{tab_var1} will be Miras of this type (see also N21029 in Fig.~A2 in the appendix). There are a few of the low amplitude variables among the supergiants.
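In practice, the colour-based division of the Miras into O- and C-rich described above reduces to a single cut on dereddened colour. A minimal sketch, assuming the extinction values adopted in section 2 and the empirical boundary at $(J-K_S)_0=1.33$ mag (the function name is ours, for illustration only):

\begin{verbatim}
def mira_type(J, K_S, A_J=0.20, A_K=0.07):
    # Deredden, then apply the empirical boundary noted above:
    # the Miras with M-type narrow-band colours all have
    # (J-K_S)_0 < 1.33 mag; those with C-type colours all have
    # (J-K_S)_0 > 1.33 mag.
    jk0 = (J - A_J) - (K_S - A_K)
    return 'O-rich' if jk0 < 1.33 else 'C-rich'
\end{verbatim}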
Note also that many of the AGB stars and supergiants not identified as variables will in fact be low amplitude variables, generally with $\Delta K_S < 0.2$ mag. \begin{table*} \caption[]{(a) Periodic Large Amplitude O-Rich Variables (Miras). Fourier mean magnitudes $\bar{J}$, $\bar{H}$, $\bar{K}$ are listed with the peak-to-peak amplitudes ($\Delta J$, $\Delta H$, $\Delta K_S$) of the best fitting first-order sine curves. The last column shows those with spectral types, or narrow band spectral indices (Letarte et al. 2002).} \begin{center} \begin{tabular}{ccccccccccc} \hline RA & Dec & N & $P$ & $\bar{J}$ & $\bar{H}$ & $\bar{K_S}$ & $\Delta J$ & $\Delta H$ & $\Delta K_S$ & Sp\\ \multicolumn{2}{c}{(2000.0)}&& (days)& \multicolumn{6}{c}{(mag)}\\ \hline 296.18398 & --14.78018 & 12557& 158 & 18.23 & 17.51 & 17.12 & 0.84 & 0.83 & 0.80& M\\ 296.25229 & --14.78475 & 11226& 257 & 17.49 & 16.54 & 16.08 & 0.54 & 0.58 & 0.53&M:\\ 296.20415 & --14.63486 & 20331& 314 & 16.58 & 15.91 & 15.46 & 0.88 & 1.10 & 0.86& M \\ 296.22364 & --14.77473 & 10184& 370 & 16.44 & 15.58 & 15.13 & 0.81 & 0.95 & 0.89& M \\ 296.21816 & --14.88035 & 30133& 401 & 16.30 & 15.53 & 15.09 & 0.85 & 1.02 & 0.94 \\ 296.21322 & --14.68097 & 20134& 402 & 16.35 & 15.55 & 15.07 & 0.89 & 1.00 & 0.92 \\ 296.20428 & --14.74271 & 40139& 545 & 15.25 & 14.32 & 13.92 & 0.62 & 0.72 & 0.66 \\ 296.25088 & --14.76786 & 10198& 602 & 15.50 & 14.68 & 14.19 & 0.75 & 0.93 & 0.80 \\ 296.18801 & --14.87231 & 30292& 637 & 15.88 & 15.19 & 14.68 & 0.96 & 1.00 & 0.97& M \\ 296.26702 & --14.76311 & 10091& 638 & 15.46 & 14.67 & 14.18 & 0.95 & 1.12 & 1.00 &(1)\\ 296.21293 & --14.73224 & 20004& 854 & 13.72 & 12.97 & 12.57 & 0.47 & 0.45 & 0.40& M \\ \hline \end{tabular} \end{center} (1) N10091 was identified as a late-M star with $ \rm H\alpha$ emission by Filippenko \& Chornock (2003). \label{tab_o} \end{table*} \setcounter{table}{3} \begin{table*} \caption[]{(b) Periodic Large Amplitude C-Rich Variables (Miras). 
Columns as given for Table~\ref{tab_o}a.} \begin{center} \begin{tabular}{cccclcccccl} \hline N & RA & Dec &$P$ & $\bar{J}$ & $\bar{H}$ & $\bar{K_S}$ & $\Delta J$ & $\Delta H$ & $\Delta K_S$ & Sp\\ & \multicolumn{2}{c}{(2000.0)}& (days)& \multicolumn{6}{c}{(mag)}\\ \hline 12790 & 296.20389 & -14.75535 & 182 & 18.03 & 17.11 & 16.59 & 1.09 & 0.82 & 0.51 & C6.5(1)\\ 10817 & 296.23639 & -14.83054 & 214 & 17.70 & 16.76 & 16.04 & 0.75 & 0.61 & 0.45 & \\ 20540 & 296.18011 & -14.71067 & 223 & 17.88 & 16.89 & 16.39 & 0.94 & 0.71 & 0.48 & C\\ 40590 & 296.28567 & -14.73952 & 223 & 18.01 & 17.04 & 16.39 & 0.80 & 0.63 & 0.39 & C *\\ 12751 & 296.21027 & -14.75973 & 231 & 17.89 & 16.91 & 16.29 & 1.07 & 0.72 & 0.50 & \\ 11032 & 296.18048 & -14.80431 & 239 & 18.20 & 16.91 & 15.94 & 1.20 & 0.92 & 0.65 & \\ 10748 & 296.17438 & -14.83893 & 243 & 18.58 & 17.16 & 16.11 & 1.06 & 0.77 & 0.48 & \\ 20578 & 296.29065 & -14.69641 & 246 & 17.80 & 16.82 & 16.14 & 0.77 & 0.79 & 0.43 & C\\ 20542 & 296.17703 & -14.71031 & 255 & 17.84 & 16.76 & 16.02 & 0.73 & 0.66 & 0.50 & C\\ 30430 & 296.21802 & -14.92573 & 269 & 17.67 & 16.56 & 15.86 & 0.87 & 0.63 & 0.38 & C *\\ 12208 & 296.23352 & -14.80653 & 278 & 17.87 & 16.61 & 15.62 & 1.34 & 1.07 & 0.76 & \\ 21419 & 296.27563 & -14.74923 & 278 & 19.93 & 18.27 & 16.57 & 1.50 & 1.42 & 1.05 & \\ 13364 & 296.19489 & -14.82267 & 286 & 18.10 & 16.85 & 15.86 & 1.30 & 1.09 & 0.75 & \\ 12400 & 296.17468 & -14.79389 & 301 & 18.00 & 16.75 & 15.88 & 1.11 & 0.84 & 0.60 & C\\ 30583 & 296.30014 & -14.87872 & 302 & 18.37 & 17.07 & 15.88 & 1.2 & 1.1 & 0.5 & C (4)\\ 20239 & 296.22632 & -14.72554 & 304 & 18.04 & 16.71 & 15.66 & 0.86 & 0.67 & 0.48 & \\ 20558 & 296.19647 & -14.70265 & 304 & 18.21 & 16.60 & 15.31 & 0.41 & 0.40 & 0.44 & (2)\\ 20840 & 296.30347 & -14.74166 & 306 & 18.7 & 17.18 & 15.98 & & 1.14 & 0.88 & (3) \\ 12466 & 296.17462 & -14.78854 & 311 & 18.42 & 17.09 & 16.03 & 1.52 & 1.17 & 0.96 & \\ 40114 & 296.19916 & -14.86160 & 312 & 20.21 & 18.61 & 16.99 & 1.9 & 1.35 & 1.09 & \\ 11059 & 296.21259 & -14.80142 & 319 & 18.38 & 16.90 & 15.79 & 1.20 & 1.01 & 0.75 & \\ 20375 & 296.18277 & -14.74815 & 328 & 18.03 & 16.62 & 15.59 & 0.75 & 0.72 & 0.50 & C\\ 11296 & 296.21729 & -14.77647 & 340 & 18.37 & 16.99 & 15.78 & 1.03 & 0.90 & 0.5 & \\ 30928 & 296.22229 & -14.90997 & 342 & 18.95 & 17.42 & 16.28 & 0.57 & 0.62 & 0.48 & \\ 20657 & 296.22934 & -14.65254 & 343 & 17.98 & 16.59 & 15.56 & 0.64 & 0.48 & 0.43 & C8.2(1)\\ 13106 & 296.24048 & -14.84939 & 354 & 19.10 & 17.38 & 15.96 & 1.51 & 1.34 & 0.98 & \\ 20588 & 296.29919 & -14.69143 & 376 & 17.56 & 16.22 & 15.16 & 0.97 & 0.80 & 0.57 & C\\ 11305 & 296.17957 & -14.77527 & 378 & 18.02 & 16.56 & 15.45 & 0.82 & 0.63 & 0.50 & \\ 30920 & 296.24246 & -14.91158 & 384 & 18.95 & 17.21 & 15.86 & 1.75 & 1.46 & 0.98 & \\ 40363 & 296.23649 & -14.85697 & 398 & 19.18 & 17.73 & 16.32 & 1.00 & 0.91 & 0.85 & C\\ 11140 & 296.22876 & -14.79337 & 405 & 18.75 & 17.25 & 15.93 & 0.91 & 0.95 & 0.83 & \\ 20439 & 296.24640 & -14.73473 & 430 & 19.15 & 17.32 & 15.77 & 1.05 & 0.94 & 0.85 & \\ 10753 & 296.24295 & -14.83836 & 432 & 18.40 & 17.06 & 15.88 & 0.66 & 0.75 & 0.76 & \\ 40520 & 296.26687 & -14.74039 & 432 & 18.41 & 16.86 & 15.51 & 1.25 & 0.91 & 0.74 & \\ 31168 & 296.23883 & -14.87026 & 434 & 19.49 & 17.50 & 15.82 & 2.05 & 1.40 & 1.01 & \\ 21671 & 296.26569 & -14.72024 & 436 & 19.78 & 17.78 & 16.16 & 1.59 & 1.22 & 0.95 & \\ 11174 & 296.21774 & -14.78943 & 440 & 19.12 & 17.42 & 15.90 & 1.31 & 1.24 & 0.89 & \\ 20569 & 296.29114 & -14.69770 & 454 & 19.24 & 17.62 & 16.03 & 1.45 & 
1.40 & 2.15 & \\ 12445 & 296.21661 & -14.79057 & 454 & 20.26 & 18.34 & 16.43 & 2.42 & 1.78 & 1.25 & \\ 21141 & 296.29965 & -14.69757 & 456 & 18.16 & 16.62 & 15.42 & 1.82 & 1.66 & 1.23 & \\ 21234 & 296.28125 & -14.67393 & 466 & 20.16 & 18.21 & 16.25 & 1.54 & 1.41 & 1.25 & \\ 12147 & 296.28897 & -14.81087 & 475 & 19.93 & 17.89 & 16.24 & 1.61 & 1.59 & 1.26 & \\ 11299 & 296.25732 & -14.77645 & 494 & 19.06 & 17.30 & 15.73 & 2.16 & 2.07 & 1.67 & \\ 13293 & 296.25696 & -14.83051 & 495 & 20.30 & 18.37 & 16.42 & 1.25 & 1.13 & 0.95 & \\ 21029 & 296.19858 & -14.71837 & 501 & 19.11 & 17.07 & 15.55 & 1.5 & 1.1 & 0.8 &(5) \\ 40102 & 296.19682 & -14.75119 & 526 & 19.69 & 17.43 & 15.69 & 2.12 & 1.31 & 1.05 & \\ 12177 & 296.28424 & -14.80892 & 590 & 20.50 & 18.10 & 16.10 & 1.52 & 1.25 & 1.09 & \\ 10807 & 296.21631 & -14.83196 & 747 & 20.37 & 18.00 & 15.89 & 1.92 & 2.13 & 1.59 & \\ 40623 & 296.29793 & -14.74687 & 897 & 20.40 & 18.08 & 16.11 & 1.76 & 1.77 & 1.45 & \\ 30268 & 296.25891 & -14.88796 & 998 & 16.73 & 15.44 & 14.45 & 1.35 & 1.47 & 1.21 & \\ \hline \end{tabular} \end{center} * Low $K_S$ amplitude ($<0.4$ mag) for a Mira.\\ (1) Spectral type from Kacharov et al. (2012).\\ (2) N20558 was not used in the PL analysis as its image appears blended.\\ (3) N20840 has only 4 good observations at $J$, therefore its mean is uncertain and its amplitude unknown.\\ (4) N30583 has seven observations only; P taken from Battinelli and Demers (2011). \\ (5) N21029 has a long-term trend (see Fig.~A2), mean given here for bright cycle. \label{tab_c} \end{table*} \begin{table*} \caption[]{Large Amplitude Variables.} \begin{center} \begin{tabular}{ccccccccccrrrc} \hline RA & Dec & N & $J$ & $\delta J$& $H$ & $\delta H$& $K_S$ & $\delta K$ & $J-K_S$ & NJ & NH & NK & note\\ \multicolumn{2}{c}{(2000.0)} && \multicolumn{7}{c}{(mag)} \\ \hline 296.29922& -14.83650& 10293& 17.87& 0.88& 16.35& 0.72& 14.98& 0.07& 2.88& 16& 17& 8&\\ 296.19168& -14.82965& 10310& 18.32& 0.31& 16.80& 0.19& 15.67& 0.10& 2.65& 17& 15& 13&\\ 296.28415& -14.81194& 10371& 17.33& 0.16& 15.94& 0.12& 15.04& 0.07& 2.29& 15& 14& 12&\\ 296.23456& -14.77330& 10501& 18.00& 0.37& 16.56& 0.32& 15.48& 0.27& 2.52& 18& 18& 18&\\ 296.20154& -14.81282& 10968& 19.18& 0.60& 17.30& 0.30& 15.91& 0.20& 3.27& 18& 18& 18&\\ 296.18323& -14.76308& 11391& 18.39& 0.32& 16.94& 0.18& 15.82& 0.14& 2.57& 15& 16& 16&\\ 296.27222& -14.76143& 11401& 18.09& 0.24& 16.73& 0.17& 15.62& 0.07& 2.46& 17& 18& 13&\\ 296.22778& -14.76111& 11403& 18.35& 0.34& 17.01& 0.15& 16.18& 0.09& 2.17& 17& 14& 14&\\ 296.27774& -14.82287& 11991& 18.46& 0.18& 17.17& 0.22& 16.13& 0.20& 2.33& 18& 18& 18&\\ 296.24905& -14.81619& 12070& 17.49& 0.09& 16.92& 0.21& 15.98& 0.27& 1.51& 17& 18& 18&1\\ 296.25348& -14.76936& 12660& 18.38& 0.35& 16.83& 0.30& 15.54& 0.27& 2.84& 18& 18& 18&\\ 296.24728& -14.76419& 12711& 18.94& 1.05& 17.79& 0.92& 16.80& 0.63& 2.14& 18& 18& 17&\\ 296.24246& -14.82114& 13390& 19.99& 0.74& 18.40& 0.44& 16.77& 0.21& 3.23& 15& 15& 14&\\ 296.17993& -14.75767& 14105& 19.23& 0.39& 17.44& 0.33& 15.98& 0.26& 3.25& 17& 16& 16&\\ 296.27051& -14.73481& 20438& 19.39& 0.38& 17.70& 0.27& 16.08& 0.21& 3.32& 14& 15& 15&\\ 296.18378& -14.68249& 20608& 19.57& 1.40& 17.64& 0.95& 16.06& 0.67& 3.51& 15& 17& 17&\\ 296.26770& -14.68061& 20614& 19.60& 1.62& 17.60& 0.76& 16.30& 0.53& 3.30& 16& 17& 17&\\ 296.17764& -14.64240& 21316& 18.63& 0.79& 17.47& 0.64& 16.42& 0.45& 2.20& 15& 17& 17&\\ 296.26511& -14.96155& 30767& 18.07& 0.40& 17.09& 0.30& 16.45& 0.20& 1.62& 14& 15& 14&\\ 296.21112& -14.91074& 30924& 19.46& 0.36& 
17.55& 0.20& 16.25& 0.13& 3.21& 13& 14& 12&\\ 296.30179& -14.90356& 30961& 19.75& 0.26& 17.75& 0.18& 15.96& 0.44& 3.78& 6& 6& 6&\\ 296.18011& -14.75091& 40030& 17.02& 0.16& 16.27& 0.20& 15.81& 0.12& 1.21& 23& 30& 29&\\ 296.22681& -14.75065& 40275& 17.13& 0.34& 15.92& 0.29& 15.11& 0.28& 2.01& 34& 34& 34&\\ 296.23346& -14.74996& 40327& 17.71& 0.18& 16.93& 0.23& 16.52& 0.18& 1.19& 31& 35& 33&\\ 296.24622& -14.86655& 40419& 17.98& 0.08& 16.67& 0.06& 15.79& 0.08& 2.19& 20& 18& 20&2\\ 296.26105& -14.73764& 40493& 18.29& 0.15& 16.78& 0.16& 15.38& 0.19& 2.91& 22& 21& 28&\\ 296.27310& -14.75146& 40538& 17.31& 0.16& 16.57& 0.17& 15.96& 0.20& 1.35& 29& 30& 31&\\ \hline \end{tabular} \end{center} \label{tab_var1} (1) N12070 is probably a Mira, with $\Delta K_S < 0.6$ mag, and possible periods of around 545 or 215 days, but its image is confused at shorter wavelengths.\\ (2) N40419 is a Mira with a period of 193 days, but its photometry is contaminated by nearby sources. \end{table*} \begin{center} \onecolumn \begin{longtable}{cccccccccccccc} \caption[Small Amplitude Variables.]{Small Amplitude Variables.} \label{tab_var2} \\ \hline RA & Dec & N & $J$& $\delta J$& $H$ & $\delta H$& $K_S$ & $\delta K$ &$J-K_S$ & NJ & NH & NK&\\ \multicolumn{2}{c}{(2000.0)} && \multicolumn{7}{c}{(mag)} \\ \hline \endfirsthead \hline RA & Dec & N & $J$& $\delta J$& $H$ & $\delta H$& $K_S$ & $\delta K$ &$J-K_S$ & NJ & NH & NK&\\ \multicolumn{2}{c}{(2000.0)} && \multicolumn{7}{c}{(mag)} \\ \hline \endhead \multicolumn{12}{l}{{Continued on Next Page\ldots}} \\ \endfoot \endlastfoot 296.22696& -14.80191& 10032& 14.48& 0.05& 13.66& 0.03& 13.34& 0.05& 1.14& 14& 14& 16\\ 296.24005& -14.80796& 10074& 14.74& 0.09& 13.94& 0.10& 13.66& 0.10& 1.08& 18& 18& 18\\ 296.21835& -14.80118& 10077& 15.94& 0.11& 14.97& 0.12& 14.55& 0.11& 1.40& 18& 18& 18\\ 296.26028& -14.81812& 10152& 16.32& 0.17& 15.26& 0.14& 14.71& 0.11& 1.61& 18& 18& 18\\ 296.22321& -14.76671& 10200& 16.47& 0.08& 15.32& 0.06& 14.61& 0.06& 1.86& 17& 16& 17\\ 296.19437& -14.82407& 10330& 17.33& 0.14& 16.16& 0.13& 15.52& 0.11& 1.82& 18& 18& 18\\ 296.17441& -14.81870& 10343& 17.35& 0.11& 16.12& 0.13& 15.44& 0.11& 1.92& 15& 18& 18\\ 296.23013& -14.81558& 10356& 17.00& 0.16& 15.84& 0.16& 15.20& 0.14& 1.80& 18& 18& 18\\ 296.23392& -14.80649& 10400& 16.97& 0.11& 15.74& 0.09& 15.00& 0.07& 1.97& 15& 16& 18\\ 296.24081& -14.80368& 10408& 16.93& 0.13& 15.77& 0.09& 15.11& 0.07& 1.82& 17& 17& 17\\ 296.19797& -14.80259& 10411& 16.92& 0.18& 15.82& 0.14& 15.23& 0.10& 1.69& 18& 18& 18\\ 296.29706& -14.80268& 10412& 17.34& 0.17& 16.08& 0.14& 15.18& 0.13& 2.17& 17& 16& 18\\ 296.22131& -14.79940& 10425& 17.30& 0.18& 16.11& 0.13& 15.40& 0.10& 1.90& 18& 18& 18\\ 296.28729& -14.79597& 10433& 17.46& 0.18& 16.30& 0.14& 15.55& 0.08& 1.91& 18& 18& 16\\ 296.20374& -14.79388& 10439& 17.33& 0.16& 16.08& 0.11& 15.37& 0.08& 1.96& 18& 17& 17\\ 296.28287& -14.78797& 10460& 17.32& 0.26& 16.24& 0.20& 15.66& 0.15& 1.66& 18& 18& 18\\ 296.22369& -14.83960& 10743& 17.57& 0.29& 16.43& 0.24& 15.67& 0.19& 1.90& 18& 18& 18\\ 296.27734& -14.83831& 10755& 17.42& 0.20& 16.23& 0.16& 15.52& 0.13& 1.91& 18& 18& 18\\ 296.27213& -14.83158& 10809& 18.07& 0.14& 16.90& 0.19& 16.13& 0.19& 1.93& 17& 18& 18\\ 296.24747& -14.82692& 10839& 18.15& 0.09& 16.63& 0.09& 15.41& 0.07& 2.74& 14& 15& 15\\ 296.29504& -14.82570& 10850& 17.89& 0.38& 16.68& 0.30& 15.90& 0.18& 1.99& 18& 18& 18\\ 296.19476& -14.82463& 10859& 17.67& 0.21& 16.57& 0.17& 15.91& 0.14& 1.75& 18& 17& 18\\ 296.24554& -14.82244& 10876& 17.71& 0.26& 16.41& 0.20& 15.50& 0.12& 
2.21& 18& 18& 18\\ 296.20242& -14.81780& 10917& 17.35& 0.08& 16.07& 0.09& 15.24& 0.07& 2.11& 14& 16& 16\\ 296.28149& -14.81671& 10935& 17.58& 0.10& 16.60& 0.11& 16.16& 0.17& 1.42& 14& 15& 18\\ 296.27066& -14.80596& 11021& 17.54& 0.24& 16.41& 0.19& 15.72& 0.12& 1.82& 18& 18& 17\\ 296.24573& -14.79360& 11139& 17.98& 0.31& 16.72& 0.11& 15.67& 0.12& 2.31& 17& 14& 17\\ 296.24271& -14.78809& 11187& 17.90& 0.08& 16.50& 0.06& 15.68& 0.07& 2.22& 16& 17& 18\\ 296.27832& -14.78018& 11271& 17.42& 0.24& 16.25& 0.21& 15.55& 0.15& 1.87& 18& 18& 18\\ 296.17908& -14.77972& 11273& 17.63& 0.21& 16.43& 0.14& 15.71& 0.07& 1.92& 18& 16& 14\\ 296.18372& -14.77252& 11335& 17.51& 0.19& 16.28& 0.18& 15.41& 0.12& 2.10& 18& 18& 18\\ 296.20221& -14.76910& 11364& 17.59& 0.06& 16.26& 0.03& 15.38& 0.03& 2.21& 15& 13& 17\\ 296.22644& -14.76749& 11372& 17.53& 0.31& 16.46& 0.22& 15.70& 0.13& 1.83& 18& 18& 18\\ 296.19781& -14.76344& 11389& 17.58& 0.23& 16.45& 0.18& 15.64& 0.14& 1.93& 18& 18& 17\\ 296.28235& -14.76346& 11392& 18.41& 0.19& 17.01& 0.08& 15.96& 0.07& 2.46& 17& 16& 18\\ 296.24139& -14.84622& 11764& 18.82& 0.23& 17.52& 0.12& 16.51& 0.10& 2.31& 15& 15& 16\\ 296.29608& -14.84267& 11794& 17.73& 0.24& 16.83& 0.16& 16.36& 0.11& 1.37& 17& 18& 16\\ 296.24057& -14.79573& 12373& 18.57& 0.32& 17.52& 0.24& 16.80& 0.09& 1.77& 16& 17& 13\\ 296.18201& -14.78503& 12496& 18.17& 0.28& 16.81& 0.25& 15.74& 0.22& 2.44& 17& 18& 18\\ 296.25473& -14.75607& 12784& 18.51& 0.22& 17.03& 0.17& 15.89& 0.11& 2.62& 18& 18& 17\\ 296.20816& -14.72616& 20022& 14.09& 0.09& 13.21& 0.08& 12.83& 0.08& 1.26& 16& 16& 17\\ 296.26938& -14.66636& 20311& 17.19& 0.12& 16.20& 0.16& 15.49& 0.07& 1.69& 14& 17& 14\\ 296.23389& -14.72910& 20463& 17.86& 0.16& 16.87& 0.09& 16.51& 0.14& 1.36& 11& 13& 13\\ 296.29358& -14.72387& 20496& 17.36& 0.20& 16.28& 0.17& 15.65& 0.13& 1.71& 16& 17& 17\\ 296.17297& -14.71139& 20539& 17.96& 0.22& 16.59& 0.19& 15.50& 0.10& 2.46& 13& 17& 15\\ 296.26337& -14.70901& 20547& 17.34& 0.09& 16.23& 0.09& 15.66& 0.08& 1.68& 15& 16& 17\\ 296.24963& -14.71943& 21021& 18.72& 0.09& 17.38& 0.05& 16.42& 0.06& 2.30& 14& 15& 16\\ 296.28815& -14.67794& 21217& 17.67& 0.16& 16.75& 0.13& 16.20& 0.13& 1.47& 15& 16& 16\\ 296.25790& -14.90380& 30244& 17.38& 0.14& 16.17& 0.12& 15.44& 0.10& 1.95& 14& 15& 14\\ 296.20166& -14.88640& 30271& 16.68& 0.11& 15.57& 0.10& 14.93& 0.08& 1.75& 13& 14& 14\\ 296.28564& -14.87809& 30285& 16.97& 0.16& 15.88& 0.13& 15.18& 0.04& 1.79& 14& 15& 10\\ 296.20511& -14.87565& 30590& 17.68& 0.28& 16.48& 0.05& 16.01& 0.12& 1.67& 14& 10& 14\\ 296.19254& -14.86788& 30611& 18.29& 0.26& 16.99& 0.20& 16.08& 0.13& 2.21& 14& 15& 14\\ 296.29318& -14.90858& 30934& 18.03& 0.09& 16.89& 0.05& 15.53& 0.04& 2.49& 14& 13& 11\\ 296.17282& -14.75015& 40002& 17.59& 0.11& 16.53& 0.06& 15.91& 0.04& 1.68& 26& 26& 24\\ 296.17505& -14.85986& 40007& 17.97& 0.11& 16.70& 0.12& 15.75& 0.04& 2.22& 24& 30& 22\\ 296.17966& -14.73710& 40026& 17.49& 0.19& 16.36& 0.15& 15.59& 0.10& 1.90& 16& 31& 27\\ 296.20724& -14.74512& 40155& 17.08& 0.09& 16.00& 0.06& 15.40& 0.05& 1.68& 34& 35& 29\\ 296.22379& -14.73659& 40257& 16.01& 0.02& 15.20& 0.05& 14.95& 0.03& 1.06& 18& 21& 17\\ 296.22452& -14.86319& 40261& 17.48& 0.17& 16.42& 0.12& 15.89& 0.10& 1.60& 30& 30& 29\\ 296.24301& -14.74652& 40397& 14.10& 0.07& 13.30& 0.06& 12.99& 0.07& 1.11& 34& 36& 36\\ 296.24622& -14.86655& 40419& 17.98& 0.08& 16.67& 0.06& 15.79& 0.08& 2.19& 20& 18& 20\\ 296.24634& -14.86488& 40421& 17.41& 0.06& 16.40& 0.05& 15.82& 0.04& 1.59& 25& 24& 21\\ 296.25229& -14.74336& 40445& 17.70& 0.18& 16.57& 0.18& 
15.80& 0.08& 1.90& 31& 35& 29\\ 296.25253& -14.86060& 40446& 18.61& 0.41& 17.57& 0.43& 16.75& 0.31& 1.86& 33& 33& 29\\ 296.25864& -14.73680& 40476& 17.71& 0.08& 16.44& 0.05& 15.53& 0.03& 2.18& 29& 25& 23\\ 296.25992& -14.74793& 40486& 18.59& 0.17& 17.73& 0.29& 16.70& 0.13& 1.88& 26& 33& 25\\ 296.26254& -14.84794& 40501& 17.26& 0.14& 16.14& 0.09& 15.51& 0.11& 1.76& 29& 25& 28\\ 296.27545& -14.85393& 40547& 17.45& 0.17& 16.30& 0.10& 15.66& 0.10& 1.79& 26& 25& 32\\ \hline \end{longtable} \end{center} \twocolumn \subsection{Comparison with Battinelli and Demers (2011)} Battinelli \& Demers (2011) discuss long period variables discovered in their 32 arcmin square survey of NGC\,6822, using 1.5-m and 1.6-m telescopes, over a period of somewhat more than three years. There is considerable overlap with our work, although their survey extends further to the east (they also have unpublished data extending to the west). Twenty of their variables fall within the area we surveyed. Table \ref{tab_bd} lists the 16 of those 20 variables that we also regard as large amplitude variables and for which we derived periods. It includes the long period Cepheid, our N10170. The other 4 are briefly described below:\\ BD~v13, for which they find a period of 466 days, corresponds to our N20438 (Table~\ref{tab_var1}), and its $K$ light curve is illustrated in Fig.~\ref{fig_long1}. Although it shows large amplitude variations, its behaviour was not sufficiently regular for us to identify it as periodic, and it is therefore included in Table~\ref{tab_var1}. However, knowing the period and examining the light curve, it is possible to see that it underwent two maxima during the time we observed it, the first at $K_S\sim 15.8$ mag and the second at $K_S\sim 15.3$ mag.\\ BD~v14 is N40538 and is also a variable (Table~\ref{tab_var2}), but without a clearly defined period.\\ BD~v9 is N20287, which we do not find to be significantly variable.\\ BD~v21 was very faint on the $H$ frame that we used as a reference, and was therefore not extracted in our survey, although it is clearly seen on the $K$ frames. Battinelli \& Demers determined $P=613$ days and $\Delta K_S=0.7$ mag for this star. All of the other variables from Battinelli \& Demers's table~3 are outside the positional range of our survey. Fig.~\ref{fig_bp} compares their periods and mean $K_S$ magnitudes with ours for the stars in common. We note that while the periods are in reasonable agreement, there does appear to be a systematic difference in the mean $K_S$ magnitude. The mean difference is 0.25 mag, or 0.21 mag if we leave out BD~v19 = N21141, where the mean magnitudes differ by 0.8 mag. It is disturbing to find such a large difference in the magnitudes of potential distance indicators and the matter is worth further investigation. There are no non-variable stars in Battinelli \& Demers to compare with ours, but we have made the comparison with the UKIRT photometry of Sibbons et al. (2012). The difference between their $K_S$ magnitudes and ours, both uncorrected for interstellar reddening, is only 0.05 mag at $K_S=16$ mag and about 0.1 mag at $K_S=17$ mag. At the fainter magnitudes crowding can be a problem for matching objects between the two catalogues, quite apart from possible photometric difficulties. Neither study has taken account of any possible colour equation, both being essentially on the natural system.
Though colour effects are probably not significant if $J-K_S<1.0$ mag, they may need to be considered for the reddest stars discussed here, but it is extremely difficult to do the calibration work required to quantify the effect. Given that our entire field falls within the area discussed by Battinelli \& Demers, it is a little surprising that they do not find more of the Mira variables that we identify. Six more of them are to be found in their table~4, which lists SR and irregular variables; Table~\ref{tab_bd2} cross-references our identification numbers with theirs (note that their BD~v116 has coordinates identical to those of their BD~v12). They do not appear to have found the other 39. Among their SR variables, BD~v109 is not measurable on our image, where it is extended, while BD~v117 is another very red object, visible at $K_S$ but not at shorter wavelengths. The other variables they list are outside of our survey area.\\ \begin{table} \caption[]{Variables in common with Battinelli \& Demers (2011) table 3} \begin{center} \begin{tabular}{cccccc} \hline N & P & $K_S$ & BD & P & $K_S$\\ & (days) & (mag) & & (days) & (mag) \\ \hline 40102 & 526 & 15.69 & 1 & 576 & 15.80\\ 40114 & 312 & 16.99 & 2 & 339 & 17.25\\ 20331 & 314 & 15.46 & 3 & 326 & 15.67\\ 20134 & 402 & 15.07 & 4 & 403 & 15.14\\ 10807 & 747 & 15.89 & 5 & 777 & 16.25\\ 12445 & 454 & 16.43 & 6 & 437 & 16.32\\ 31168 & 434 & 15.82 & 7 & 447 & 16.20\\ 10198 & 602 & 14.19 & 8 & 673 & 14.25\\ 10170 & 123 & 14.92 & 10& 124 & 14.80\\ 30268 & 998 & 14.45 & 11& 992 & 14.85\\ 40520 & 432 & 15.51 & 12& 436 & 15.75\\ 12177 & 590 & 16.10 & 15& 633 & 16.25\\ 40590 & 221 & 16.39 & 16& 223 & 16.80\\ 40623 & 897 & 16.11 & 17& 1100& 16.60\\ 21141 & 456 & 15.42 & 19& 448 & 16.25\\ 30583 & 305: & 15.55 & 20& 302 & 15.85\\ \hline \end{tabular} \end{center} \label{tab_bd} \end{table} \begin{table} \caption[]{Variables in common with Battinelli \& Demers (2011) table 4} \begin{center} \begin{tabular}{ccl} \hline BD & N & remark\\ \hline 100 & 30981\\ 101 & 11173\\ 102 & 11296 & Mira Table~\ref{tab_c}\\ 103 & 11414\\ 104 & 20468\\ 105 &20239 & Mira Table~\ref{tab_c}\\ 106 & 11372 \\ 107 & 10501 & LPV trend Table~\ref{tab_var1}\\ 108 & 30920 & Mira Table~\ref{tab_c}\\ 111 & 20892 \\ 112 & 12660 & LPV trend Table~\ref{tab_var1}\\ 113 & 11299 & Mira Table~\ref{tab_c}\\ 114 & 11362\\ 115 & 40482\\ 116 & 40520 & =ID12 Mira Table~\ref{tab_c}\\ 118 & 20569 & Mira Table~\ref{tab_c}\\ 119 & 20588 & Mira Table~\ref{tab_c}\\ \hline \end{tabular} \end{center} \label{tab_bd2} \end{table} \begin{figure} \includegraphics[width=8.5cm]{fig_bp.ps} \caption{A comparison of the periods and mean $K$ magnitudes derived here and by Battinelli \& Demers (2011).} \label{fig_bp} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6cm]{fig_long1.ps} \includegraphics[width=6cm]{fig_long2.ps} \caption{The light-curves for some of the large amplitude variables without obvious periodicity.} \label{fig_long1} \end{center} \end{figure} \subsection{M-type Miras} The most luminous of the stars in Table~\ref{tab_o}(a), N20004, is a known variable, NGC6822\,V12. Kayser (1967) describes this as a semi-regular variable and writes `Most of the time it varies regularly with a period of 640 days, but every three to six cycles it does something else for 180 days'. Note that 640 days is significantly different from the 854 days that we found. Massey (1998) gives its spectral type as M2.5--3I:, and $JHK$ photometry was published by Elias \& Frogel (1985).
Its amplitude is lower, and luminosity higher, than those of the other variables considered here, and it is possibly the descendant of a more massive star and different from the other O-rich variables, but that is not entirely clear. Its luminosity is comparable to the supergiants discussed by Levesque \& Massey (2012). N10198 was identified as a variable, v198, by Baldacci et al. (2005). N10184 and N11226 were identified as variables, v1838 and v1534, respectively, by Antonello et al. (2002). N40139 appears in the GCVS as v16. The bolometric magnitudes of the presumed O-rich stars were calculated by fitting a blackbody to the $JHK$ fluxes, following the procedure used by Robertson \& Feast (1981) and by Feast et al. (1989). In practice, for these stars with very thin shells, bolometric magnitudes derived in this way differ insignificantly ($<0.03$ mag) from those calculated using the bolometric corrections defined for the C~stars (section 7). The results are listed in Table~\ref{tab_obol} and illustrated in a PL relation (Fig.~\ref{fig_PL}). The two short period stars are presumably similar to the short period O-rich Miras found in globular clusters (Feast et al. 2002; Whitelock et al. 2008), and their luminosities are comparable to those of the short period C-rich Miras. The brighter, longer period stars appear to represent a somewhat younger population. They are probably similar to long-period O-rich Miras that are found in the LMC, many of which have s-process enhancements (Lundgren 1988; Smith et al. 1995). They are considered further in the next section. \begin{table} \caption[]{Bolometric magnitudes of the assumed O-rich Miras.} \begin{center} \begin{tabular}{cccc} \hline N & P & $m_{bol}$ & $(J-K_S)_0$\\ & (days) & \multicolumn{2}{c}{(mag)}\\ \hline 12557 & 158 & 20.13 & 1.05\\ 11226 & 257 & 19.27 & 1.37\\ 20331 & 314 & 18.51 & 1.06\\ 10184 & 370 & 18.27 & 1.25\\ 30133 & 401 & 18.19 & 1.15\\ 20134 & 402 & 18.21 & 1.23\\ 40139 & 545 & 17.05 & 1.28\\ 10198 & 602 & 17.36 & 1.27\\ 30292 & 637 & 17.80 & 1.15\\ 10091 & 638 & 17.34 & 1.24\\ 20004 & 854 & 15.62 & 1.10\\ \hline \end{tabular} \end{center} \label{tab_obol} \end{table} \begin{table} \caption[]{Bolometric magnitudes of the assumed C-rich Miras.} \begin{center} \begin{tabular}{cccc} \hline N & P & $m_{bol}$ & $(J-K_S)_0$\\ & (days) & \multicolumn{2}{c}{(mag)}\\ \hline 12790 & 182 & 19.82 & 1.34\\ 10817 & 214 & 19.41 & 1.56\\ 20540 & 223 & 19.63 & 1.39\\ 40590 & 223 & 19.74 & 1.52\\ 12751 & 231 & 19.62 & 1.50\\ 11032 & 239 & 19.34 & 2.16\\ 10748 & 243 & 19.48 & 2.36\\ 20578 & 246 & 19.50 & 1.56\\ 20542 & 255 & 19.42 & 1.72\\ 11226 & 257 & 19.27 & 1.31\\ 30430 & 269 & 19.25 & 1.71\\ 12208 & 278 & 19.02 & 2.15\\ 21419 & 278 & 19.29 & 3.26\\ 13364 & 286 & 19.26 & 2.14\\ 12400 & 301 & 19.30 & 2.02\\ 30583 & 302 & 19.19 & 2.39\\ 20558 & 304 & 18.47 & 2.80\\ 20239 & 304 & 19.04 & 2.27\\ 20840 & 306 & 19.24 & 2.62\\ 12466 & 311 & 19.40 & 2.29\\ 20331 & 314 & 18.51 & 1.02\\ 40114 & 316 & 19.76 & 3.16\\ 11059 & 319 & 19.11 & 2.49\\ 20375 & 328 & 18.96 & 2.34\\ 11296 & 340 & 19.06 & 2.49\\ 30928 & 342 & 19.58 & 2.57\\ 20657 & 343 & 18.93 & 2.32\\ 13106 & 354 & 18.98 & 3.04\\ 10184 & 370 & 18.27 & 1.20\\ 20588 & 376 & 18.53 & 2.30\\ 11305 & 378 & 18.78 & 2.47\\ 30920 & 384 & 18.94 & 2.99\\ 40363 & 398 & 19.41 & 2.77\\ 30133 & 401 & 18.18 & 1.11\\ 20134 & 402 & 18.22 & 1.18\\ 11140 & 405 & 19.10 & 2.72\\ 20439 & 430 & 18.61 & 3.28\\ 40520 & 432 & 18.64 & 2.80\\ 10753 & 432 & 19.19 & 2.42\\ 31168 & 434 & 18.44 & 3.57\\ 21671 & 436 & 18.85 &
3.52\\ 11174 & 440 & 18.82 & 3.12\\ 12445 & 454 & 18.76 & 3.73\\ 20569 & 454 & 18.89 & 3.11\\ 21141 & 456 & 18.68 & 2.64\\ 21234 & 466 & 18.50 & 3.81\\ 12147 & 475 & 18.88 & 3.59\\ 11299 & 494 & 18.57 & 3.23\\ 13293 & 495 & 18.69 & 3.78\\ 21029 & 501 & 18.34 & 3.46\\ 40102 & 526 & 18.11 & 3.90\\ 40139 & 545 & 17.03 & 1.23\\ 12177 & 590 & 18.08 & 4.30\\ 10198 & 602 & 17.36 & 1.21\\ 30292 & 637 & 17.81 & 1.10\\ 10091 & 638 & 17.34 & 1.19\\ 10807 & 747 & 17.71 & 4.38\\ 20004 & 854 & 15.60 & 1.06\\ 40623 & 897 & 18.07 & 4.39\\ 30268 & 998 & 17.84 & 2.18\\ \hline \end{tabular} \end{center} \label{tab_cbol} \end{table} \subsection{C-type Miras} These are discussed below in section~7. \section{Completeness of large amplitude variable survey} We measured pulsation periods for 61 Mira variables in our survey area, compared to the 20 measured by Battinelli \& Demers (2011) in the same area (only 16 of these are in common). So there is no question that we significantly improved on their count. However, we note that our survey will still be incomplete for the following reasons:\\ (1) We may have missed a very small number of very red, dust enshrouded, large amplitude variables entirely (see section 5.1) and indeed we did miss one of those found by Battinelli \& Demers. Nevertheless it is interesting to see that we have one assumed C-rich Mira, with a period of 998 days (N30268 for which there is no measured spectral type). It has been suggested that at periods longer than about 1000 days the stars are sufficiently massive for hot bottom burning to occur and that will result in O-, rather than C-rich Miras (Feast 2009). The longest period C-rich Miras in the LMC are also just under 1000 days (Whitelock et al. 2003). It would obviously be very interesting to know if we have any OH/IR stars in NGC\,6822 with periods over 1000 days.\\ (2) Some Miras behave erratically and very long-term monitoring is necessary to characterize them. That is one of the reasons for differences with Battinelli \& Demers. Most such stars will appear in Tables~\ref{tab_var1} or \ref{tab_var2}.\\ (3) Confusion, especially in the crowded inner regions, limits our ability to measure the Miras, particularly at $J$. All of these factors are relevant, but the only one that is likely to seriously affect the total count is item (2). \section{Period-Luminosity Relations} There are a variety of ways in which bolometric magnitudes can be measured or estimated, depending on the information available and, to some extent, on the desired objective. Given that one is almost always limited by temporal and/or spectral coverage any chosen approach is a compromise. Kerschbaum, Lebzelter \& Makul (2010) discuss different approaches and show that they can lead to very different results: over 0.5 mag spread at a particular colour. We also note that Kamath et al. (2010) and Groenewegen et al. (2007) derive bolometric magnitudes for several variable stars in the SMC cluster NGC\,419 using the same Spitzer data and slightly different $JHKL$ values. Their bolometric magnitudes differ by amounts that range from --0.1 to 0.4 mag for the same star. These uncertainties present difficulties when attempting to compare bolometric luminosities with theoretical predictions. For instance, in a plot of bolometric magnitude against period (their fig.~7), Kamath et al. place a group of NGC\,419 carbon-rich semi-regular variables on their computed fundamental sequence and a group of brighter shorter period variables on their first overtone sequence. 
This is contrary to the usually accepted model of the overall evolution, in which increasing luminosity and period imply a decreasing pulsation mode. Such a normal evolutionary sequence is supported by the $K-\log P$ plot for these same NGC\,419 variables, which places them all together in a single group on the first overtone sequence in a ``Wood'' PL diagram (for instance the LMC/SMC plots of Ita et al.\ 2004). For the purpose of this paper we follow the same procedure for determining bolometric magnitudes as in our previous papers, as this will give us consistent values that are good, at least, for estimating distances via the PL relation. Bolometric magnitudes for the presumed C stars were calculated in the same way as in our earlier papers (e.g. Whitelock et al. 2009), by applying a colour-dependent bolometric correction to the reddening-corrected $K$ magnitudes on the SAAO system. The magnitudes given in Table~\ref{tab_o}, which are on the 2MASS system, are converted to the SAAO system following Carpenter (2001, and web page update\footnote{http://www.astro.caltech.edu/~jmc/2mass/v3/transformations/}). The resulting bolometric magnitudes are listed in Table~\ref{tab_cbol}. Note that the bolometric magnitudes derived for stars with faint $J$ magnitudes are rather uncertain due to photometric errors and the increased possibility of confusion. \begin{figure} \includegraphics[width=8.5cm]{fig_pl.ps} \caption{Bolometric PL for the large amplitude variables in NGC\,6822; open symbols are those from Table~\ref{tab_o}(a); closed symbols represent those from Table~\ref{tab_c}(b). The solid line is the best fit to the closed circles while the dashed line is the best fitting one with the same slope as the LMC PL relation.} \label{fig_PL} \end{figure} Figure \ref{fig_PL} shows a PL relation. A least squares fit to the 50 presumed C-rich Miras gives the following result: \begin{equation} m_{bol}=19.16(\pm0.04)-2.91(\pm0.22)[\log P -2.5], \end{equation} with a scatter of 0.23 mag. This expression is shown as a solid line in Fig.~\ref{fig_PL}. Alternatively, if we assume that the slope of the PL relation is the same as that found in the LMC, $-3.31\pm0.24$ (Whitelock et al. 2009), then we derive a zero point of $19.18\pm0.03$, with a scatter of 0.24 mag for the same 50 stars. This line is also shown in Fig.~\ref{fig_PL}. If we restrict the stars in NGC\,6822 to cover the same range of periods as used to derive the LMC PL relation, i.e. $220 < P < 500$, then the zero-point is $19.20\pm 0.04$, with a scatter of 0.23 mag for 41 stars. Leaving out the stars with the faintest $J$ magnitudes, $J>20.0$ mag, which have the most uncertain bolometric magnitudes, we find a zero point of $19.16\pm 0.04$, with a scatter of 0.22 mag for 40 stars. Leaving out only star N40114, which is unusually red ($J-K\sim 3.28$) for a $P=313$ day Mira and rather faint with respect to the PL relation, gives a zero point of $19.17\pm 0.03$, with a scatter of 0.22 mag for 49 stars. Thus these results are consistent with the PL relation in NGC\,6822 having the same slope as it does for the LMC. If we assume that the distance of the LMC is $(m-M)_0=18.50$ mag, the PL relation derived from LMC Miras is $$ M_{bol}=-4.38-3.31[\log P -2.5],$$ and using the zero point of $19.18\pm 0.03$, the distance modulus for NGC\,6822 is $(m-M)_0=23.56 (\pm0.03)$ mag (i.e.\ $19.18-(-4.38)=23.56$, comparing the two relations at $\log P=2.5$). The uncertainty does not include the uncertainty on the LMC distance. This can be compared with values of 23.40 and 23.49 mag derived from Cepheids and RR Lyraes, respectively (Feast et al.
2012; Clementini et al. 2003). \begin{figure} \includegraphics[width=8.5cm]{fig_plk.ps} \caption{PL($K$) for the large amplitude variables in NGC\,6822; symbols as in Fig.~\ref{fig_PL}. The solid line is the PL($K$) derived for the Galactic O-rich Miras assuming $(m-M)_{LMC}=18.5$ mag. The absolute $K$ magnitudes shown here assume $(m-M)_{N6822}= 23.38$ mag, as was determined from the 4 faintest O-rich Miras.} \label{fig_PLK} \end{figure} In the PL($K$) diagram (Fig.~\ref{fig_PLK}) many of the C-rich Miras fall below the anticipated PL relation, because circumstellar extinction is sufficiently strong to affect their $K$ magnitudes, in some cases by more than one magnitude. This is illustrated by Nsengiyumva (2010, his fig.~3.9) in a plot of the difference between the observed $K$ and that predicted from the PL relation, as a function of $J-K$ colour. The O-rich Miras fall in approximately the same region as do the C-rich ones at shorter periods, but at $\log P>2.7$ they are consistently brighter than the PL relation. These are the same stars that fall to the upper left of the C stars in the colour-magnitude diagram (Fig.~\ref{fig_cm2}). The same is found for Miras in the LMC as discussed by Whitelock et al. (2003), who suggested that the high luminosity of these O-rich stars was a consequence of hot bottom burning, which is expected for intermediate mass AGB stars. Feast (2009) has suggested that their position in the PL relation is consistent with their being overtone pulsators, which may eventually evolve into long period OH/IR stars. In view of the fact that these O-rich Miras do not have significant circumstellar reddening we can use the PL($K$) relation derived by Whitelock et al. (2008) to derive a distance. For this we use only the four stars with $P<400$ days, as longer period O-rich Miras are usually brighter than the PL($K$) relation would predict. We use the relation derived for Galactic stars by Whitelock et al. (2008): $$M_K=3.51[\log P-2.38]-7.37,$$ which assumes the same LMC distance as above. Transforming the 4 $K$ magnitudes onto the SAAO system as before gives a distance modulus for NGC\,6822 of $23.38\pm0.16$ mag. Given the uncertainties, including the fact that the LMC distance may vary with the sample of LMC stars studied (e.g. Feast et al. 2012), the various estimates of the NGC\,6822 distance modulus are not in conflict. \section{Comparison with the Dwarf Spheroidals} It is instructive to compare what we find here with the period distribution of Miras in other Local Group galaxies and in particular with those in the dwarf spheroidals that were surveyed in the same way. Figure \ref{fig_hist} shows a histogram of the periods for the presumed C-rich Mira variables and compares them with those found in the dwarf spheroidals. The following dwarf spheroidals are involved (Mira periods, in days, given in parentheses after the names): Sculptor (189, 554, Menzies et al. 2011), Fornax (215, 258, 267, 280, 350, 400, 470, Whitelock et al. 2009), Phoenix (425, Menzies et al. 2008), Leo~I (158, 180, 191, 252, 283, 336, 523, Menzies et al. 2010) and Leo~II (183, unpublished). We include the Phoenix dwarf galaxy with this group, but note that it is generally classed as intermediate between dwarf irregular and dwarf spheroidal (e.g. Battaglia et al. 2012). Figure \ref{fig_hist} also shows the period distribution of probable C-rich Miras in the LMC taken from Soszy\'nski et al. (2009), which is based on the OGLE~III catalogue.
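As an aside, the distance moduli quoted in the previous section follow from simple arithmetic on the fitted zero points; the short script below (ours, not part of the original analysis, with all numerical values taken from the text) reproduces them: \begin{verbatim}
# Check of the distance moduli quoted above; all numbers are taken
# from the text of this paper, none are refitted here.
import math

zp_bol   = 19.18   # fixed-slope bolometric PL zero point at log P = 2.5
M_bol_25 = -4.38   # LMC calibration M_bol at log P = 2.5, (m-M)_LMC = 18.50
print(zp_bol - M_bol_25)   # -> 23.56, the adopted modulus of NGC 6822

def M_K(logP):
    # Galactic Mira PL(K) relation of Whitelock et al. (2008)
    return 3.51 * (logP - 2.38) - 7.37

# e.g. a P = 300 d O-rich Mira has M_K ~ -7.03, so at the modulus
# 23.38 derived from the O-rich Miras its apparent K is ~ 16.35:
print(M_K(math.log10(300)) + 23.38)
\end{verbatim}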
Care is required in comparing frequency distributions of AGB variables derived from surveys with different sensitivities and in different wavelength regions (OGLE~III is a $V$ and $I$ survey), because the different selection effects will affect the colour- and period-range of the variables found. In particular, longer wavelength surveys tend to find larger numbers of redder and longer period variables. The distribution of large amplitude carbon variables with period found for NGC\,6822 is similar to that shown by carbon AGB variables in both the LMC and SMC (see Soszy\'nski et al. 2011, fig.~3, for the SMC). It is also clear that there are very few short period carbon variables on the Wood sequence C (the Mira sequence) of any amplitude in either Magellanic Cloud. It is notable that the distribution of LMC and SMC C-rich variables in both the amplitude-period and the PL relations appears to be very similar, though the ratio of O-rich to C-rich variables differs greatly. While the existence of Miras with periods in excess of 600 days in NGC\,6822 is quite striking, they only constitute 6 percent of the total number. Thus we would only expect to see one in all the Local Group dwarf spheroidals if they were present in the same proportions. However, our survey is probably not complete for long-period variables, as our failure to identify BD~v21 (P=613 days) shows; it was too faint at $H$. The smaller fraction of short period Miras, those with periods less than 300 days, in NGC\,6822 and the Magellanic Clouds compared to the dwarf spheroidals is notable. Of course for short period Miras the magnitudes are fainter and the amplitudes are lower (on average) than they are for the longer period stars, so they are more difficult to find. Nevertheless, we have followed the same procedure here as we did in the dwarf spheroidals to identify variables, so we should have found them if they were there, provided that they did not have exceptional dust shells. For example, BD~v16 (our N40590) was not initially identified as a Mira by us, presumably because its amplitude was slightly lower when we observed it than when they did. It seems that the variables in NGC\,6822 with periods less than 300 days have low amplitudes, i.e., there are no stars like L7020 in Leo\,I, which has $\Delta K_S =1.2$ mag and $P=191$ days. However, L2077 in Leo\,I, which has $\Delta K_S =1.2$ mag and $P=283$ days, is much redder, with mean magnitudes of $J=20.9$, $H=19.0$ and $K_S=17.4$. Given that our sensitivity is limited to stars with $H<18.3$ mag, this star would have been missed. Since the period decreases with increasing age for Miras, it seems probable that the different period distributions are due to a larger proportion of an older C-rich population in the dwarf spheroidals. The evolutionary status of this population is somewhat problematic, as discussed by Menzies et al. (2011). Although we have spectral types for only a few of the Miras in any of these galaxies, the dwarf spheroidals do not have any long period M stars that we know of. This of course is to be expected given what we know about the metallicity and star formation history of the dwarf spheroidals. \begin{figure} \begin{center} \includegraphics[width=6cm]{fig_hist.ps} \caption{Histogram of the periods of the C-rich Miras in NGC\,6822 (lower panel, including N40419, P=193 days, and BD~v24, P=670 days) and in the four dwarf spheroidals (central panel) and LMC (top panel from Soszy\'nski et al.
(2009)).} \label{fig_hist} \end{center} \end{figure} \section{Conclusions} The large number of C-rich Miras now found in NGC\,6822 allows us to demonstrate that the slope of the bolometric PL relation in that galaxy is, within the errors, the same as that in the LMC. The distance modulus found from this relation is in satisfactory agreement with that found by other methods and with that derived from the PL($K$) relation for the small number of shorter period O-rich Miras. Whilst there are problems with determining bolometric magnitudes for C-rich AGB stars, these are not important for distance scale studies provided a consistent method is employed for both programme stars and calibrators. The period distribution of high amplitude carbon-rich AGB variables in NGC\,6822 is probably similar to that in the two Magellanic Clouds but differs from that in Local Group dwarf spheroidals, which contain a population of high amplitude, short period C-rich variables. Since these short period stars are believed to be old, this indicates that the dwarf spheroidals contain an old population capable of producing C-rich AGB variables, one that is absent or relatively rare in both NGC\,6822 and the Magellanic Clouds.\\ \section*{Acknowledgements} This publication makes extensive use of the various databases operated by CDS, Strasbourg, France. MWF, JWM and PAW gratefully acknowledge the receipt of research grants from the National Research Foundation (NRF) of South Africa, and FN thanks the National Astrophysics and Space Science Programme (NASSP) of South Africa for financial support. We would also like to thank Serge Demers for sending us data and preprints of his work with Paolo Battinelli in advance of publication, and the referee Jacco van Loon for his comments.
{ "timestamp": "2012-10-16T02:01:43", "yymm": "1210", "arxiv_id": "1210.3695", "language": "en", "url": "https://arxiv.org/abs/1210.3695" }
\section{Introduction} Spinor models play an important role in contemporary theoretical physics. The famous Nambu--Jona-Lasinio--Vaks--Larkin models \cite{NJL,VL} and Gross--Neveu models \cite{GN,NePa} were initially proposed as models for describing the strong interactions. Later, with the development of the inverse scattering method (ISM) \cite{ZMNP,FaTa}, it was proven that the two-dimensional versions of these models are integrable \cite{zm1}. In the same paper Zakharov and Mikhailov proposed a third class of spinor models related to the orthogonal groups. The aim of the present paper is to derive new types of integrable spinor models by applying additional $\bbbz_2$-reductions to their Lax representations. In doing this we will use the reduction group \cite{mik}. Next we describe the spectral properties of the Lax operators. We start in Section II with some preliminaries concerning the spinor models and the reduction group of Mikhailov \cite{mik}. In Section III we outline the spectral theory of the unreduced Lax operators. In Section IV we derive the $\bbbz_2$-reduced spinor models. Section V is devoted to the spectral properties of the $\bbbz_2$-reduced Lax operators. More specifically we treat four different cases, in each of which we specify the continuous spectrum and the symmetries of the discrete eigenvalues. We end with brief conclusions. \section{Preliminaries} The integrability of the 2-dimensional versions of the Nambu--Jona-Lasinio--Vaks--Larkin (NJLVL) and the Gross--Neveu (GN) models was discovered by Zakharov and Mikhailov in \cite{zm2}. They showed that the NJLVL models are related to the $su(N)$ algebras, while the Gross--Neveu models are related to the $sp(N)$ ones. In the same paper an additional type of spinor models related to the algebras $so(N)$ was discovered; we will call them Zakharov--Mikhailov (ZM) models. Let us first outline the Lax representations of these models \cite{zm1}. \begin{equation}\label{eq:U-Y}\begin{aligned} \Psi_\xi & = U(\xi,\eta,\lambda) \Psi(\xi,\eta,\lambda), &\qquad \Psi_\eta & = V(\xi,\eta,\lambda) \Psi(\xi,\eta,\lambda),\\ U(\xi,\eta,\lambda) &= \frac{U_1(\xi,\eta)}{\lambda -a} , &\qquad V(\xi,\eta,\lambda) &= \frac{V_1(\xi,\eta)}{\lambda +a} , \end{aligned}\end{equation} where $\eta =t+x$, $\xi=t-x$ and $a$ is a real number. We also impose the $\bbbz_2$-reduction: \begin{equation}\label{eq:red}\begin{split} U^\dag (x,t,\lambda) &=-U(x,t,\lambda^*) , \qquad V^\dag (x,t,\lambda) =-V(x,t,\lambda^*). \end{split}\end{equation} The compatibility condition of the above linear problems reads: \begin{equation}\label{eq:Lax}\begin{split} U_\eta - V_\xi + [U,V]=0, \end{split}\end{equation} which is equivalent to \begin{equation}\label{eq:UV12}\begin{aligned} U_{1,\eta} + \frac{1}{2a} [U_1, V_1(\xi,\eta)] =0, \qquad V_{1,\xi} - \frac{1}{2a} [V_1, U_1(\xi,\eta)] =0. \end{aligned}\end{equation} From these equations, after an appropriate choice of the gauge (see \cite{zm2}), it follows that \begin{equation}\label{eq:UVphi}\begin{aligned} U_1(\xi,\eta) & = -i\phi J_1^0 \phi^{-1}, \qquad V_1(\xi,\eta) & = i\psi I_1^0 \psi^{-1}, \end{aligned}\end{equation} where $J_{1}^0$ and $I_{1}^0$ are properly chosen constant elements (the choice of the gauge) of the corresponding simple Lie algebra $\mathfrak{g}$. In what follows we fix $J_1^0 =-I_1^0 =J$ and choose $J$ for each of the above-mentioned models accordingly.
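Let us spell out, for completeness, how (\ref{eq:UV12}) follows from (\ref{eq:Lax}); this short residue computation is implicit in the original derivation of \cite{zm1}. Since $U$ has its only pole at $\lambda = a$ and $V$ at $\lambda = -a$, taking the residues of (\ref{eq:Lax}) at these two points gives \begin{equation*} U_{1,\eta} + [U_1, V(\xi,\eta,a)] = U_{1,\eta} + \frac{1}{2a}\,[U_1, V_1] = 0, \qquad -V_{1,\xi} + [U(\xi,\eta,-a), V_1] = -V_{1,\xi} + \frac{1}{2a}\,[V_1, U_1] = 0, \end{equation*} which are precisely the two equations in (\ref{eq:UV12}).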
The matrix valued functions $\phi(\xi,\eta)$ and $\psi(\xi,\eta)$ take values in the corresponding simple Lie group and are fundamental solutions of the following ODEs: \begin{equation}\label{eq:psifi2}\begin{aligned} \psi_{\xi} &\equiv - \frac{U_1(\xi,\eta)}{2a} \psi(\xi,\eta) = \frac{i}{2a} \phi J \hat{\phi} \psi(\xi,\eta), \\ \phi_{\eta} &\equiv \frac{V_1(\xi,\eta)}{2a} \phi(\xi,\eta) = \frac{i}{2a} \psi J \hat{\psi} \phi(\xi,\eta). \end{aligned}\end{equation} Here and below by `hat' we denote the inverse matrix, i.e. $\hat{\psi} \equiv (\psi)^{-1}$. In this way we get three classes of spinor models. Below, following \cite{zm1}, we briefly outline their derivation. \begin{description} \item[i) Nambu--Jona-Lasinio--Vaks--Larkin models.] Here we choose $\mathfrak{g}\simeq su(N)$. Then $\psi(\xi,\eta)$ and $\phi(\xi,\eta)$ are elements of the group $SU(N)$ and by definition $\hat{ \psi}(\xi,\eta) = \psi^\dag (\xi,\eta)$, $\hat{ \phi}(\xi,\eta) = \phi^\dag (\xi,\eta)$. Next we choose $J=\diag(1,0,\dots,0)$ and as a result only the first columns $\phi^{(1)}$, $\psi^{(1)}$ and the first rows $\hat{ \phi}^{(1)}$, $\hat{ \psi}^{(1)}$ enter into the systems (\ref{eq:psifi2}). If we introduce the notation: \begin{equation}\label{eq:not1}\begin{split} \phi_\alpha (\xi,\eta) = \phi^{(1)}_{\alpha,1}, \qquad \psi_\alpha (\xi,\eta) = \psi^{(1)}_{\alpha,1}, \end{split}\end{equation} then the explicit form of the system is: \begin{equation}\label{eq:JLVS}\begin{aligned} \frac{\partial \phi_\alpha }{ \partial \eta } &=\frac{i}{2a} \psi_\alpha \sum_{\beta=1}^{N} \psi^*_{\beta}\phi_\beta , \\ \frac{\partial \psi_\alpha }{ \partial \xi } &=\frac{i}{2a} \phi_\alpha \sum_{\beta=1}^{N} \phi^*_{\beta}\psi_\beta . \end{aligned}\end{equation} The functional of the action is: \begin{equation}\label{eq:Ljl}\begin{split} A_{\rm NJLVL} &= \int_{-\infty}^{\infty} dx\; dt\; \left( i \sum_{ \alpha=1}^{N} \left(\phi^*_{\alpha} \frac{\partial \phi_\alpha}{ \partial \eta } + \psi^*_{\alpha} \frac{\partial \psi_\alpha}{ \partial \xi } \right) - \frac{1}{2a} \left| \sum_{\alpha=1}^{N} ( \psi^*_{\alpha}\phi_\alpha ) \right|^2 \right). \end{split}\end{equation} \item[ii) Gross--Neveu models.] Here we choose $\mathfrak{g}\simeq sp(2N,\bbbr)$; then $\psi(\xi,\eta)$ and $\phi(\xi,\eta)$ are elements of the group $\mathfrak{G}\simeq SP(2N,\bbbr)$. Following \cite{zm1} we use the standard definition of symplectic group elements: \begin{equation}\label{eq:sym}\begin{split} \hat{ \psi}(\xi,\eta) = \mathfrak{J} \psi^T(\xi,\eta) \hat{ \mathfrak{J}}, \qquad \hat{ \phi}(\xi,\eta) = \mathfrak{J} \phi^T(\xi,\eta) \hat{ \mathfrak{J}}, \qquad \mathfrak{J} = \left(\begin{array}{cc} 0 & -\openone \\ \openone & 0 \end{array}\right). \end{split}\end{equation} Then the corresponding Lie algebraic elements acquire the following block-matrix structure: \begin{equation}\label{eq:spN}\begin{split} U_1(\xi, \eta) = \left(\begin{array}{cc} A & B \\ C & -A^T \end{array}\right), \end{split}\end{equation} where $A,B,C$ are arbitrary real $N\times N$ matrices. Next we choose \begin{equation}\label{eq:J0}\begin{split} J = \left(\begin{array}{cc} 0 & B_0 \\ 0 & 0 \end{array}\right), \qquad B_0 = \diag(1,0,\dots,0,0). \end{split}\end{equation} As a consequence again only the first columns $\phi^{(1)}$, $\psi^{(1)}$ and the first rows $\hat{ \phi}^{(1)}$, $\hat{ \psi}^{(1)}$ enter into the systems (\ref{eq:psifi2}).
If we introduce the $N$-component complex vectors: \begin{equation}\label{eq:not1'}\begin{split} \phi_\alpha (\xi,\eta) = \frac{1}{2} (\phi^{(1)}_{\alpha,1} +i \phi^{(1)}_{N+\alpha,1}), \qquad \psi_\alpha (\xi,\eta) =\frac{1}{2} (\psi^{(1)}_{\alpha,1} +i \psi^{(1)}_{N+\alpha,1}) \end{split}\end{equation} then the explicit form of the system is: \begin{equation}\label{eq:GN}\begin{aligned} \frac{\partial \phi_\alpha }{ \partial \eta } &= \frac{i}{a} \psi_\alpha \sum_{\beta=1}^{N} (\psi_{\beta}\phi^*_\beta -\psi^*_{\beta}\phi_\beta), \\ \frac{\partial \psi_\alpha }{ \partial \xi } &=-\frac{i}{a} \phi_\alpha \sum_{\beta=1}^{N} (\phi_{\beta}\psi^*_\beta - \phi^*_{\beta}\psi_\beta). \end{aligned}\end{equation} The functional of the action is: \begin{equation}\label{eq:Lj2}\begin{split} A_{\rm GN} &= \int_{-\infty}^{\infty} dx\; dt\; \left( i \sum_{ \alpha=1}^{N} \left(\phi^*_{\alpha} \frac{\partial \phi_\alpha}{ \partial \eta } + \psi^*_{\alpha} \frac{\partial \psi_\alpha}{ \partial \xi }\right) - \frac{1}{2a} \left(\sum_{\alpha=1}^{N} ( \psi^*_{\alpha}\phi_\alpha -\phi^*_{\alpha} \psi_{\alpha}) \right)^2 \right). \end{split}\end{equation} \item[iii) Zakharov--Mikhailov models.] Now we choose $\mathfrak{g}\simeq so(N,\bbbr)$; then $\psi(\xi,\eta)$ and $\phi(\xi,\eta)$ are elements of the group $\mathfrak{G}\simeq SO(N,\bbbr)$. Following \cite{zm1} we use the standard definition of orthogonal group elements: \begin{equation}\label{eq:ort}\begin{split} \hat{ \psi}(\xi,\eta) = \psi^T(\xi,\eta) , \qquad \hat{ \phi}(\xi,\eta) = \phi^T(\xi,\eta). \end{split}\end{equation} Now we choose \begin{equation}\label{eq:J0'}\begin{split} J = E_{1,N} - E_{N,1}, \end{split}\end{equation} where the $N\times N$ matrices $E_{kp}$ are defined by $(E_{kp})_{nm} =\delta_{kn} \delta_{pm}$. As a consequence now the first and the last columns $\phi^{(1)}, \phi^{(N)}$, $\psi^{(1)}, \psi^{(N)}$ and the first and the last rows $\hat{ \phi}^{(1)}, \hat{ \phi}^{(N)}$, $\hat{ \psi}^{(1)}, \hat{ \psi}^{(N)}$ enter into the systems (\ref{eq:psifi2}). If we introduce the $N$-component complex vectors: \begin{equation}\label{eq:not1''}\begin{split} \phi_\alpha (\xi,\eta) = \frac{1}{2} (\phi^{(1)}_{\alpha,1} +i \phi^{(N)}_{\alpha,N}), \qquad \psi_\alpha (\xi,\eta) =\frac{1}{2} (\psi^{(1)}_{\alpha,1} +i \psi^{(N)}_{\alpha,N}) \end{split}\end{equation} then the explicit form of the system becomes: \begin{equation}\label{eq:ZM}\begin{aligned} i\frac{\partial \psi_\alpha }{ \partial \xi } &=\frac{i}{a} \sum_{\beta=1}^{N} (\phi^*_{\alpha} \phi_{\beta} - \phi_{\alpha} \phi^*_{\beta})\psi_\beta , \\ i\frac{\partial \phi_\alpha }{ \partial \eta } &=\frac{i}{a} \sum_{\beta=1}^{N} (\psi^*_{\alpha} \psi_{\beta} -\psi_{\alpha} \psi^*_{\beta}) \phi_\beta . \end{aligned}\end{equation} The functional of the action is: \begin{equation}\label{eq:Lj3}\begin{split} A_{\rm ZM} &= \int_{-\infty}^{\infty} dx\; dt\; \left( i \sum_{ \alpha=1}^{N} \left(\phi^*_{\alpha} \frac{\partial \phi_\alpha}{ \partial \eta } + \psi^*_{\alpha} \frac{\partial \psi_\alpha}{ \partial \xi }\right) \right. \\ &- \left. \frac{1}{2a} \left(\sum_{\alpha,\beta =1}^{N} ( \phi^*_{\alpha}\phi_\beta -\phi^*_{\beta} \phi_{\alpha}) ( \psi^*_{\alpha}\psi_\beta -\psi^*_{\beta} \psi_{\alpha}) \right) \right). \end{split}\end{equation} \end{description} For more details on the derivation of the models see \cite{zm1}. \section{ Spectral properties of the Lax operator} Here we briefly outline the construction of the fundamental analytic solutions of the Lax operator $L$.
First we fix the class of potentials $U_1(\xi,\eta)$ and $V_1(\xi,\eta)$ by assuming that $U_1(\xi,\eta)+iJ$ and $V_1(\xi,\eta)-iJ$ are Schwartz-type functions of $\xi$ and $\eta$. We also assume that $J\in \mathfrak{h}$ is a real element of the Cartan subalgebra of $\mathfrak{g}$. \begin{remark}\label{rem:1} These conditions are compatible with two of the classes of spinor models listed above. These are, first of all, the NJLVL models, for which $J$, up to a trivial term $1/N \openone$, belongs to the Cartan subalgebra of $su(N)$. For the ZM models $J$ belongs to the Cartan subalgebra, which in that case consists of off-diagonal matrices. However, there is a simple similarity transformation which takes $J$ of eq. (\ref{eq:J0'}) into $J= i {\rm \diag} (1,0,\dots,0 ,-1)$. For the GN models the choice of $J$ is nilpotent. The spectral problem for such Lax operators is singular and will not be discussed here. \end{remark} In what follows we will consider the spectral problem for Lax operators of the type: \begin{equation}\label{eq:lax}\begin{split} L\Psi (\xi,\eta,\lambda) \equiv \frac{\partial \Psi}{ \partial \xi } + i\frac{\phi J \hat{\phi}}{\lambda -a} \Psi(\xi, \eta,\lambda)=0, \end{split}\end{equation} where $J\in \mathfrak{h}$ and $\phi (\xi,\eta) \in \mathfrak{G}$ and $\lim_{\xi\to\pm\infty} \phi (\xi,\eta) =\openone $. The Jost solutions of $L$ are defined by: \begin{equation}\label{eq:Jo}\begin{split} \lim_{\xi\to\infty} \Psi_+(\xi,\eta,\lambda) \hat{\mathcal{E}}(\xi,\lambda) &=\openone, \qquad \lim_{\xi\to -\infty} \Psi_-(\xi,\eta,\lambda) \hat{\mathcal{E}}(\xi,\lambda) =\openone, \end{split}\end{equation} where \begin{equation}\label{eq:E}\begin{split} \mathcal{E}(\xi,\lambda) &=\exp \left( -i \frac{ J\xi}{\lambda -a} \right). \end{split}\end{equation} The scattering matrix is introduced by: \begin{equation}\label{eq:T}\begin{split} T(\lambda,\eta) = \hat{\Psi}_+(\xi,\eta,\lambda) \Psi_-(\xi,\eta,\lambda). \end{split}\end{equation} The continuous spectrum of $L$ is located on the curves of the complex $\lambda$-plane on which $\mathcal{E}(\xi,\lambda)$ oscillates. In our case the continuous spectrum of $L$ fills up the real axis of the complex $\lambda$-plane. The discrete eigenvalues $\lambda_k^\pm \in \bbbc_\pm$ come in pairs, which due to the reduction (\ref{eq:red}) are mutually conjugate, $\lambda_k^+ =(\lambda_k^-)^*$, see fig. \ref{fig:0}. \begin{figure} \includegraphics[width=6cm]{fig0.eps} \caption{The continuous and the discrete spectrum of the operators $L$. }\label{fig:0} \end{figure} The next step is to construct the fundamental analytic solutions (FAS) of $L$. Their construction is done analogously to the case of the generalized Zakharov-Shabat system, see \cite{GKV*09}.
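As a small aside, the location of the continuous spectrum can be checked symbolically. The sketch below (ours, using the sympy library; it is not part of the original exposition) verifies that the exponent of $\mathcal{E}(\xi,\lambda)$ is purely oscillatory in $\xi$ precisely when $\lambda$ is real: \begin{verbatim}
# Symbolic check that E(xi, lambda) oscillates iff Im(lambda) = 0.
import sympy as sp

l0, l1, a, xi = sp.symbols('lambda_0 lambda_1 a xi', real=True)
lam = l0 + sp.I * l1
# the growth rate of |E| in xi is the real part of -i*xi/(lambda - a):
growth = sp.simplify(sp.re(-sp.I * xi / (lam - a)))
print(sp.factor(growth))
# vanishes identically in xi only for lambda_1 = 0, i.e. on the real axis
\end{verbatim}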
To construct the FAS we need the generalized Gauss decomposition of $T(\lambda,\eta)$ compatible with $J$: \begin{equation}\label{eq:gau}\begin{split} T(\lambda,\eta) = T_J^-D_J^+ \hat{S}_J^+, \qquad T(\lambda,\eta) = T_J^+D_J^- \hat{S}_J^-. \end{split}\end{equation} If $J=\diag (1,0,\dots,0)$ then \begin{equation}\label{eq:TSJ0}\begin{aligned} S_J^+(\eta,\lambda) & = \left(\begin{array}{cc}1 & \vec{s}\;^{+,T} \\ 0 & \openone \end{array}\right), &\qquad S_J^-(\eta,\lambda) & = \left(\begin{array}{cc}1 & 0\\ \vec{s}^- & \openone \end{array}\right), \\ T_J^+(\eta,\lambda) & = \left(\begin{array}{cc}1 & \vec{\tau}\;^{+,T} \\ 0 & \openone \end{array}\right), &\qquad T_J^-(\eta,\lambda) & = \left(\begin{array}{cc}1 & 0\\ \vec{\tau}^- & \openone \end{array}\right), \\ D_J^+(\lambda) & = \left(\begin{array}{cc}d_1^+ & 0 \\ 0 & {\bf d}_2^+ \end{array}\right), &\qquad D_J^-(\lambda) & = \left(\begin{array}{cc}d_1^- & 0 \\ 0 & {\bf d}_2^- \end{array}\right). \end{aligned}\end{equation} For $J=\diag (1,0,\dots,0,-1)$ and $\mathfrak{g}\simeq so(N)$ we have: \begin{equation}\label{eq:TSJ1}\begin{aligned} S_J^+(\eta,\lambda) & = \left(\begin{array}{ccc}1 & \vec{s}\;^{+,T} & s^{\prime,+}\\ 0 & \openone & s_0 \vec{s}\;^+ \\ 0 & 0 &1 \end{array}\right), &\qquad S_J^-(\eta,\lambda) & = \left(\begin{array}{ccc}1 & 0 &0 \\ \vec{s}^- & \openone &0 \\ s^{\prime,-} & \vec{s}\;^{-,T}s_0 & 1 \end{array}\right), \\ T_J^+(\eta,\lambda) & = \left(\begin{array}{ccc}1 & \vec{\tau}\;^{+,T} & \tau^{\prime,+} \\ 0 & \openone & s_0 \vec{\tau}\;^+ \\ 0 & 0 & 1 \end{array}\right), &\qquad T_J^-(\eta,\lambda) & = \left(\begin{array}{ccc}1 & 0 &0 \\ \vec{\tau}^- & \openone &0 \\ \tau^{\prime,-} & \vec{\tau}\;^{-,T} s_0 & 1 \end{array}\right), \\ D_J^+(\lambda) & = \left(\begin{array}{ccc}d_1^+ & 0 & 0 \\ 0 & {\bf d}_2^+ &0 \\ 0 & 0 & 1/d_1^+ \end{array}\right), &\qquad D_J^-(\lambda) & = \left(\begin{array}{ccc}d_1^- & 0 & 0 \\ 0 & {\bf d}_2^- &0 \\ 0 & 0 & 1/d_1^- \end{array}\right), \end{aligned}\end{equation} where $s_0 = \sum_{j=1}^{N} E_{j,N+1-j}$. Then the FAS analytic for $\lambda\in \bbbc_\pm$ are related to the Jost solutions by: \begin{equation}\label{eq:fas}\begin{split} \chi^\pm(\xi,\eta,\lambda) = \Psi_-(\xi,\eta,\lambda)S_J^\pm (\eta,\lambda)= \Psi_+(\xi,\eta,\lambda)T_J^\pm (\eta,\lambda) D_J^\pm (\lambda). \end{split}\end{equation} The FAS (\ref{eq:fas}) satisfy a Riemann-Hilbert problem (RHP) with canonical normalization at $\lambda \to a$: \begin{equation}\label{eq:rhp}\begin{split} \chi^+(\xi,\eta,\lambda) &= \chi^-(\xi,\eta,\lambda) G_J(\lambda,\eta), \qquad G_J(\lambda,\eta)= \hat{S}_J^-(\lambda,\eta) S_J^+(\lambda,\eta), \\ \lim_{\lambda\to a} \chi^+(\xi,\eta,\lambda) &=\openone. \end{split}\end{equation} The canonical normalization of the RHP means that the FAS allow asymptotic expansions in powers of $(\lambda -a)^{-1}$: \begin{equation}\label{eq:asXi}\begin{split} \chi^\pm (\xi,\eta,\lambda) = \openone + \sum_{ s=1}^{\infty} X_s^\pm (\xi,\eta) (\lambda -a)^{-s} . \end{split}\end{equation} Therefore, if we are given a solution $\chi^+(\xi,\eta,\lambda)$ of the RHP, then the corresponding potential of $L$ can be recovered from \begin{equation}\label{eq:XiU}\begin{split} U_1(\xi,\eta) \equiv igJg^{-1}(\xi,\eta) = i \frac{\partial X_1^\pm}{ \partial \xi }. \end{split}\end{equation} We finish this Section with the obvious remark that the RHP formulation allows one to derive the $N$-soliton solutions of the corresponding model via the Zakharov-Shabat dressing procedure \cite{zm1}.
\section{ $\bbbz_2$-Reductions of the spinor models} Here we combine the construction of spinor models in two dimensions \cite{zm2} with the idea of the reduction group \cite{mik}. Thus we intend to construct new types of spinor models generalizing the ones in Section II. We start with the Lax representation: \begin{equation}\label{eq:U-YR}\begin{aligned} \Psi_\xi & = U_{\rm R}(\xi,\eta,\lambda) \Psi(\xi,\eta,\lambda), &\qquad \Psi_\eta & = V_{\rm R}(\xi,\eta,\lambda) \Psi(\xi,\eta,\lambda),\\ U_{\rm R}(\xi,\eta,\lambda) &= \frac{U_1(\xi,\eta)}{\lambda -a} + \frac{C U_1(\xi,\eta) C^{-1}}{\epsilon\lambda^{-1} -a} , &\qquad V_{\rm R}(\xi,\eta,\lambda) &= \frac{V_1(\xi,\eta)}{\lambda +a} + \frac{CV_1(\xi,\eta) C^{-1}}{\epsilon\lambda^{-1} +a} , \end{aligned}\end{equation} where $\epsilon =\pm 1$, $a\neq \pm 1$ is a real number and $C$ is an involutive automorphism of $\mathfrak{g}$. Obviously this Lax representation, along with the typical reduction (\ref{eq:red}), also satisfies: \begin{equation}\label{eq:UVR}\begin{aligned} U_{\rm R}(\xi,\eta,\lambda) &= CU_{\rm R}(\xi,\eta,\epsilon\lambda^{-1}) C^{-1}, &\qquad V_{\rm R}(\xi,\eta,\lambda) &= CV_{\rm R}(\xi,\eta,\epsilon\lambda^{-1}) C^{-1}, \end{aligned}\end{equation} which is automatically compatible with the Lax representation \cite{mik}. The new Lax representation is: \begin{equation}\label{eq:LaxR}\begin{split} \frac{\partial U_{\rm R}}{ \partial \eta} - \frac{\partial V_{\rm R}}{ \partial \xi} + [U_{\rm R},V_{\rm R}]=0, \end{split}\end{equation} which is equivalent to \begin{equation}\label{eq:UV12R}\begin{aligned} U_{1,\eta} &+ [U_1, V_{\rm R}(\xi,\eta,a)] =0, &\qquad V_{1,\xi} &+ [V_1, U_{\rm R}(\xi,\eta,-a)] =0. \end{aligned}\end{equation} Next we derive the models in the same way as in Section II; obviously, due to the additional terms in $U_{\rm R}$ and $V_{\rm R}$, we get additional terms in the models. In what follows we also list some typical choices for the automorphism $C$. Skipping the details we get: \begin{description} \item[i) $\bbbz_2$-NJLVL models.] Here $\mathfrak{G}\simeq SU(N)$ and the system takes the form: \begin{equation}\label{eq:phiR}\begin{aligned} i\frac{\partial \vec{\phi}}{ \partial \eta} &+\frac{1}{2a} \vec{\psi} ({\vec{\psi}\;}^\dag \vec{\phi}) +\frac{1}{\epsilon a^{-1}+a} C \vec{\psi} ({\vec{\psi}\;}^\dag \hat{C} \vec{\phi})(\xi,\eta) =0, \\ i\frac{\partial \vec{\psi}}{ \partial \xi} & + \frac{ 1}{2a} \vec{\phi} ({\vec{\phi}\;}^\dag \vec{\psi} ) +\frac{ 1}{\epsilon a^{-1}+a} C\vec{\phi} ({\vec{\phi}\;}^\dag \hat{C} \vec{\psi})(\xi,\eta) =0, \end{aligned}\end{equation} where $\vec{\psi} = (\psi_{1} , \dots ,\psi_{N})^T$ and $\vec{\phi} = (\phi_{1} , \dots ,\phi_{N})^T$. For the automorphism $C$ of the $SU(N)$ group we may have \begin{equation}\label{eq:CN-su}\begin{split} \mbox{a)} \qquad C_N = \diag (\epsilon_1, \epsilon_2, \dots ,\epsilon_N), \qquad \epsilon_j=\pm 1, \qquad \mbox{b)} \qquad C'_N = \left(\begin{array}{cc} 1 & 0 \\ 0 & C_{N-1} \end{array}\right), \end{split}\end{equation} where $C_{N-1}$ belongs to the Weyl group of $SU(N-1)$ and is such that $C_{N-1}^2 =\openone$. These two special choices of $C$ are such that $\lim_{\xi\to\pm\infty} U_R(\xi,\eta) = \lim_{\xi\to\pm\infty} CU_R(\xi,\eta) \hat{C}$. \item[ii) $\bbbz_2$-GN models. ] Here $\mathfrak{G}\simeq SP(2N,\bbbr)$ and the form of the reduced system depends on the choice of the automorphism $C$.
Two typical choices of $C$ are given by: \begin{equation}\label{eq:CN-so}\begin{aligned} \mbox{a)} \qquad C &= \left(\begin{array}{cc} C_1 & 0 \\ 0 & C_1 \end{array}\right), &\qquad \mbox{b)} \qquad C' &= \left(\begin{array}{cc} 0 & C_2 \\ C_{2} & 0 \end{array}\right), \end{aligned}\end{equation} where $C_1^2 =C_2^2 =\openone$. In this way we obtain two different systems of GN-type. Using the $N$-component vectors $\vec{\psi}$ and $\vec{\phi}$ we can write them down in compact form: \begin{equation}\label{eq:GNz2a}\begin{aligned} \frac{\partial \vec{\phi} }{ \partial \eta } &= -\frac{i}{a} \vec{\psi} \left( ( \vec{\psi}\;^\dag ,\vec{\phi}) - (\vec{\phi}\;^\dag ,\vec{\psi} ) \right) - \frac{2i}{a +\epsilon a^{-1}} C_1\vec{\psi} \left( ( \vec{\psi}\;^\dag C_1\vec{\phi}) - (\vec{\phi}\;^\dag C_1 \vec{\psi} ) \right) , \\ \frac{\partial \vec{\psi} }{ \partial \xi } &= \frac{i}{a} \vec{\phi} \left( ( \vec{\psi}\;^\dag ,\vec{\phi}) - (\vec{\phi}\;^\dag ,\vec{\psi} ) \right) +\frac{2i}{a +\epsilon a^{-1}} C_1\vec{\phi} \left( ( \vec{\psi}\;^\dag C_1\vec{\phi}) - (\vec{\phi}\;^\dag C_1 \vec{\psi} ) \right). \end{aligned}\end{equation} The corresponding action can be written as follows: \begin{multline}\label{eq:AGNa} A_{\bbbz_2,\rm GNa} = \int_{-\infty}^{\infty} dx\; dt\; \left( i \left(\vec{\phi}\;^\dag \frac{\partial \vec{\phi}}{ \partial \eta } + \vec{\psi}\;^\dag \frac{\partial \vec{\psi} }{ \partial \xi }\right) - \frac{1}{2a} \left(( \vec{\psi}\;^\dag, \vec{\phi}) -(\vec{\phi}\;^\dag ,\vec{\psi} ) \right)^2 \right. \\ \left. -\frac{1}{\epsilon a^{-1} +a} \left(( \vec{\psi}\;^\dag C_1 \vec{\phi}) -(\vec{\phi}\;^\dag C_1\vec{\psi} ) \right)^2 \right). \end{multline} The second $\bbbz_2$-reduced GN-system is: \begin{equation}\label{eq:GNz2b}\begin{aligned} \frac{\partial \vec{\phi} }{ \partial \eta } &= -\frac{i}{a} \vec{\psi} \left( ( \vec{\psi}\;^\dag ,\vec{\phi}) - (\vec{\phi}\;^\dag ,\vec{\psi} ) \right) + \frac{2i}{a +\epsilon a^{-1}} C_2\vec{\psi}\;^* \left( ( \vec{\psi}^T C_2\vec{\phi}) + (\vec{\psi}\;^\dag C_2 \vec{\phi}\;^* ) \right) , \\ \frac{\partial \vec{\psi} }{ \partial \xi } &= \frac{i}{a} \vec{\phi} \left( ( \vec{\psi}\;^\dag ,\vec{\phi}) - (\vec{\phi}\;^\dag ,\vec{\psi} ) \right) +\frac{2i}{a +\epsilon a^{-1}} C_2\vec{\phi}\;^* \left( ( \vec{\phi}^T C_2\vec{\psi}) + (\vec{\phi}\;^\dag C_2 \vec{\psi}\;^* ) \right). \end{aligned}\end{equation} These equations can be obtained from the action: \begin{multline}\label{eq:AGNb} A_{\bbbz_2,\rm GNb} = \int_{-\infty}^{\infty} dx\; dt\; \left( i \left(\vec{\phi}\;^\dag \frac{\partial \vec{\phi}}{ \partial \eta } + \vec{\psi}\;^\dag \frac{\partial \vec{\psi} }{ \partial \xi }\right) - \frac{1}{2a} \left(( \vec{\psi}\;^\dag, \vec{\phi}) -(\vec{\phi}\;^\dag ,\vec{\psi} ) \right)^2 \right. \\ \left. -\frac{1}{\epsilon a^{-1} +a} \left(( \vec{\phi}\;^\dag C_2 \vec{\psi}\;^*) +(\vec{\phi}^T C_2\vec{\psi} ) \right)^2 \right). \end{multline} \item[iii) $\bbbz_2$-ZM models.] Here $\mathfrak{G}\simeq SO(N,\bbbr)$. 
Again we use $N$-component vectors to cast the $\bbbz_2$-reduced ZM systems in the form: \begin{equation}\label{eq:ZMa}\begin{aligned} \frac{\partial \vec{\psi} }{ \partial \xi } &= \frac{i}{a} \left( \vec{\phi}\;^* ( \vec{\phi}^T ,\vec{\psi}) - \vec{\phi} (\vec{\phi}\;^\dag ,\vec{\psi} ) \right) + \frac{2i}{a +\epsilon a^{-1}} C \left( \vec{\phi}\;^* ( \vec{\phi}^T C\vec{\psi}) - \vec{\phi}(\vec{\phi}\;^\dag C \vec{\psi} ) \right) , \\ \frac{\partial \vec{\phi} }{ \partial \eta } &= \frac{i}{a} \left( \vec{\psi}\;^* ( \vec{\psi}^T ,\vec{\phi}) - \vec{\psi} (\vec{\psi}\;^\dag ,\vec{\phi} ) \right) +\frac{2i}{a +\epsilon a^{-1}} C \left( \vec{\psi}\;^* ( \vec{\psi}^T \hat{ C} \vec{\phi}) - \vec{\psi} (\vec{\psi}\;^\dag \hat{ C}\vec{\phi} ) \right), \end{aligned}\end{equation} where the involutive automorphism $C$ can be chosen as one of the type: \begin{equation}\label{eq:ZMC}\begin{aligned} \mbox{a)} \qquad C &= \diag (\epsilon_1, \epsilon_2, \dots , \epsilon_2,\epsilon_1 ), \qquad \epsilon_j=\pm 1, \qquad \mbox{b)} \qquad C' &= \left(\begin{array}{cc} 1 & 0 \\ 0 & C_3 \end{array}\right), \end{aligned}\end{equation} with $C_3^2=\openone$. For these choices of $C$ we have $\lim_{\xi\to\pm\infty} U_R(\xi,\eta) = \lim_{\xi\to\pm\infty} CU_R(\xi,\eta) \hat{C}$. The action for the reduced ZM models is provided by: \begin{multline}\label{eq:AGNbA} A_{\bbbz_2,\rm ZM} = \int_{-\infty}^{\infty} dx\; dt\; \left( i \left(\vec{\phi}\;^\dag \frac{\partial \vec{\phi}}{ \partial \eta } + \vec{\psi}\;^\dag \frac{\partial \vec{\psi} }{ \partial \xi }\right) +\frac{1}{a} \left(( \vec{\psi}\;^\dag, \vec{\phi}\;^*) (\vec{\phi}^T ,\vec{\psi}) - ( \vec{\phi}\;^\dag, \vec{\psi}) (\vec{\psi}\;^\dag ,\vec{\phi}) \right) \right. \\ \left. +\frac{2}{\epsilon a^{-1} +a} \left(( \vec{\psi}\;^\dag C \vec{\phi}\;^*) (\vec{\phi}^T C\vec{\psi}) - ( \vec{\phi}\;^\dag C \vec{\psi}) (\vec{\psi}\;^\dag C\vec{\phi}) \right) \right). \end{multline} \end{description} \section{ Spectral properties of the reduced Lax operators} Here we briefly outline the construction of the fundamental analytic solutions of the Lax operator $L_{\rm R}$. First we introduce the Jost solutions: \begin{equation}\label{eq:E-R}\begin{split} \lim_{\xi\to\infty} \Psi_{\rm R, +}(x,t,\lambda) \mathcal{E}_{\rm R}^{-1}(x,t,\lambda) &=\openone, \qquad \lim_{\xi\to -\infty} \Psi_{\rm R, -}(x,t,\lambda) \mathcal{E}_{\rm R}^{-1}(x,t,\lambda) =\openone,\\ \mathcal{E}_{\rm R}(x,t,\lambda) &=\exp \left( -i \frac{ J\xi}{\lambda -a} -i \frac{ CJC^{-1}\xi}{\epsilon\lambda^{-1} -a} \right). \end{split}\end{equation} The scattering matrix is defined by: \begin{equation}\label{eq:TR}\begin{split} T_{\rm R}(\lambda,\eta) = \hat{\Psi}_{\rm R,+}(x,t,\lambda) \Psi_{\rm R,-}(x,t,\lambda). \end{split}\end{equation} Again we will need the generalized Gauss decomposition compatible with $J$: \begin{equation}\label{eq:gauR}\begin{split} T_{\rm R}(\lambda,t) = T_{J,\rm R}^-D_{J,\rm R}^+ \hat{S}_{J,\rm R}^+, \qquad T_{\rm R}(\lambda,t) = T_{J,\rm R}^+D_{J,\rm R}^- \hat{S}_{J,\rm R}^-. \end{split}\end{equation} Their block-matrix form is the same as in eq. (\ref{eq:TSJ0}) or (\ref{eq:TSJ1}); the only difference is that they should satisfy the additional symmetry condition with respect to the second involution of $L_{\rm R}$.
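The case analysis of the continuous spectrum carried out next can also be verified symbolically. The following sketch (ours, using sympy; the sample value $a = 1/3$ matches the one used for fig. \ref{fig:1}d)) computes the curves on which the exponent of $\mathcal{E}_{\rm R}$ is purely oscillatory: \begin{verbatim}
# Symbolic check (ours) of the continuous-spectrum curves below:
# Im( 1/(lam - a) + eJ/(eps/lam - a) ) = 0, with eJ = +1 when
# C J C^{-1} = J and eJ = -1 when C J C^{-1} = -J.
import sympy as sp

l0, l1 = sp.symbols('lambda_0 lambda_1', real=True)
a = sp.Rational(1, 3)
lam = l0 + sp.I * l1

for eJ in (1, -1):
    for eps in (1, -1):
        f = sp.together(1/(lam - a) + eJ/(sp.Integer(eps)/lam - a))
        N, D = sp.fraction(f)
        # Im(f) = Im(N*conj(D))/|D|^2, so the curve is Im(N*conj(D)) = 0
        curve = sp.im(sp.expand_complex(N * sp.conjugate(D)))
        print(eJ, eps, sp.factor(sp.simplify(curve)))
\end{verbatim} For $CJC^{-1} = J$ and $\epsilon = 1$ the factored output contains the factors $\lambda_1$ and $\lambda_0^2 + \lambda_1^2 - 1$, which is case a) below; the other three cases come out analogously.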
The continuous spectrum of $L_{\rm R}$ (\ref{eq:U-YR}) fills up the curves on the complex $\lambda$-plane on which \begin{equation}\label{eq:cs}\begin{split} \re \left( -i \frac{ J\xi}{\lambda -a} -i \frac{ CJC^{-1}\xi}{\epsilon\lambda^{-1} -a} \right) = \im \left( \frac{ J\xi}{\lambda -a} + \frac{ CJC^{-1}\xi}{\epsilon\lambda^{-1} -a} \right) =0. \end{split}\end{equation} Below we consider four different cases depending on the choice of $\epsilon$ and $C$. For convenience we denote $\lambda =\lambda_0 +i\lambda_1$, where $\lambda_0$ and $\lambda_1$ are real. \begin{description} \item[Case a): $CJ \hat{C}=J$ and $\epsilon=1$.] Condition (\ref{eq:cs}) becomes: \begin{equation}\label{eq:Sp-a}\begin{split} \lambda_1( \lambda_0^2 +\lambda_1^2 -1) =0. \end{split}\end{equation} Thus the continuous spectrum of $L_{\rm R}$ in this case consists of $\bbbr \cup \bbbs^1$, where $\bbbs^1$ is the unit circle with center at the origin. The discrete spectrum of $L_{\rm R}$ contains quadruplets of discrete eigenvalues. The generic quadruplet of eigenvalues consists of $\lambda_k$, $\lambda_k^*$, $1/\lambda_k$ and $1/\lambda_k^*$, see fig. \ref{fig:1}a). \item[Case b): $CJ \hat{C}=J$ and $\epsilon= -1$.] The analog of eq. (\ref{eq:cs}) is: \begin{equation}\label{eq:cs2}\begin{split} \im \left( \frac{ J\xi}{\lambda -a} + \frac{ CJC^{-1}\xi}{-\lambda^{-1} -a} \right) =0. \end{split}\end{equation} Its solution is \begin{equation}\label{eq:Sp-b}\begin{split} \lambda_1( \lambda_0^2 +\lambda_1^2 +1) =0. \end{split}\end{equation} The second factor $\lambda_0^2 +\lambda_1^2 +1$ is always positive; therefore in this case the continuous spectrum of $L_{\rm R}$ consists of the real axis $\bbbr $ only. The discrete spectrum of $L_{\rm R}$ consists of quadruplets and doublets of discrete eigenvalues. The generic quadruplet of eigenvalues consists of $\lambda_k$, $\lambda_k^*$, $-1/\lambda_k$ and $-1/\lambda_k^*$. These quadruplets do not degenerate even on the unit circle. Doublet eigenvalues occur only at $ i$ and $-i$, see fig. \ref{fig:1}b). \item[Case c): $CJ \hat{C}=-J$ and $\epsilon=1$.] From eq. (\ref{eq:cs}) we get: \begin{equation}\label{eq:Sp-c}\begin{split} \lambda_1 \left( \left( \lambda_0 -\frac{ 2a}{1+a^2}\right) ^2 +\lambda_1^2 +c_0 \right) =0, \qquad c_0 =\frac{ (1-a^2)^2}{(1+a^2)^2}. \end{split}\end{equation} Again the second factor $\left( \lambda_0 - 2a/(1+a^2)\right)^2 +\lambda_1^2 +c_0$ is always positive, and therefore the continuous spectrum of $L_{\rm R}$ consists of the real axis $\bbbr $ only. The discrete spectrum of $L_{\rm R}$ consists of quadruplets and doublets of discrete eigenvalues. The generic quadruplet of eigenvalues consists of $\lambda_k$, $\lambda_k^*$, $1/\lambda_k$ and $1/\lambda_k^*$. The doublet eigenvalues occur if $|\lambda_k|=1$, i.e. they lie on the unit circle, see fig. \ref{fig:1}c). \item[Case d): $CJ \hat{C}=-J$ and $\epsilon= -1$.] From eq. (\ref{eq:cs2}) we find: \begin{equation}\label{eq:Sp-d}\begin{split} \lambda_1 \left( \left( \lambda_0 -\frac{ 2a}{1-a^2}\right) ^2 +\lambda_1^2 -c_1^2 \right) =0, \qquad c_1 = \frac{ a^2+1}{|a^2-1|}. \end{split}\end{equation} Thus the continuous spectrum of $L_{\rm R}$ in this case consists of $\bbbr \cup \bbbs^1$, where $\bbbs^1$ is the circle with center on the real axis at $2a/(1-a^2)$ and radius $c_1$. The discrete spectrum of $L_{\rm R}$ consists of quadruplets. The generic quadruplet of eigenvalues consists of $\lambda_k$, $\lambda_k^*$, $-1/\lambda_k$ and $-1/\lambda_k^*$.
The only possible doublet eigenvalues, at $\pm i$, are ruled out because they lie on the continuous spectrum of $L_{\rm R}$, see fig. \ref{fig:1}d). \end{description} \begin{figure} \includegraphics[width=6cm]{fig1a.eps}\qquad\includegraphics[width=6cm]{fig1b.eps}\\ \includegraphics[width=6cm]{fig1c.eps}\qquad\includegraphics[width=6cm]{fig1d.eps}\\ \caption{The continuous and the discrete spectrum of the operators $L_{\rm R}$ for the four different cases as described in the text. In the last case d) we have chosen $a=1/3$. }\label{fig:1} \end{figure} \begin{remark}\label{rem:2} Note that for the NJLVL models with $\mathfrak{g}\simeq su(N) $ and $J={\rm \diag} (1,0,\dots,0)$ only cases a) and b) are relevant. Indeed, there are no automorphisms of $su(N)$ that transform $J$ into $-J$. \end{remark} In all the cases described above one should avoid discrete eigenvalues lying on the continuous spectrum of $L_{\rm R}$. Now we construct the FAS using the Gauss factors in (\ref{eq:gauR}): \begin{equation}\label{eq:fas2}\begin{split} \chi^\pm(x,t,\lambda) = \Psi_-(x,t,\lambda)S_J^\pm (t,\lambda)= \Psi_+(x,t,\lambda)T_J^\pm (t,\lambda) D_J^\pm (\lambda). \end{split}\end{equation} For the cases b) and c) $\chi^+(x,t,\lambda)$ and $\chi^-(x,t,\lambda)$ are analytic for $\lambda\in \bbbc_+$ and $\lambda\in \bbbc_-$, respectively. For the cases a) and d) $\chi^+(x,t,\lambda)$ is analytic for $\lambda \in \Omega_1\cup \Omega_3$ and $\chi^-(x,t,\lambda)$ is analytic for $\lambda \in \Omega_2\cup \Omega_4$, where $\Omega_1, \dots, \Omega_4$ denote the domains into which the continuous spectrum splits the complex $\lambda$-plane. The FAS (\ref{eq:fas2}) satisfy a RHP on a contour in $\bbbc$ which coincides with the continuous spectrum of $L_{\rm R}$: \begin{equation}\label{eq:rhpR}\begin{split} \chi^+(x,t,\lambda) = \chi^-(x,t,\lambda) G_J(\lambda,t), \qquad G_J(\lambda,t)= \hat{S}_J^-(\lambda,t) S_J^+(\lambda,t), \qquad \lambda \in \mathcal{S}, \end{split}\end{equation} where $\mathcal{S}$ is the continuous spectrum of $L_{\rm R}$, see fig. \ref{fig:1}. This fact allows one to apply the Zakharov-Shabat dressing method for constructing the soliton solutions of the $\bbbz_2$-reduced spinor models, very much along the ideas of \cite{zm1}. Unfortunately, now there is no natural point in $\bbbc$ at which the RHP can be normalized, which presents an additional difficulty in applying the dressing method. \section{Conclusion} We have proposed a new class of $\bbbz_2$-reduced spinor models. The spectral properties and the construction of the FAS for their reduced Lax operators $L_{\rm R}$ are outlined. Other important developments are related to the interpretation of the ISM as a generalized Fourier transform \cite{AKNS,GVY}. This can be done using the Wronskian relations to analyze the mapping between the potential $U_{\rm R}$ and the scattering data. The soliton solutions of these models can be calculated using the method of \cite{zm1} and will be published elsewhere. New classes of generalized GN-type spinor models can be constructed by choosing appropriate rank-2 matrices for $J$ instead of eq. (\ref{eq:J0}). Such models will have $4N$ independent components and the inverse scattering problem for their Lax operators will be regular. One can also consider reductions with automorphisms $C$ such that $CJ \hat{C}\neq \pm J$. Another important problem will be to explore the supersymmetric generalizations of the above models. \section*{Acknowledgements} I am grateful to Professor A. V. Mikhailov and Professor A. S. Sorin for useful suggestions and discussions.
I also acknowledge a grant from JINR, which allowed me to work on topic 01-3-1073-2009/2013 of the Dubna scientific plan and to participate in the XV SYMPHYS conference in Dubna.
{ "timestamp": "2012-10-16T02:02:04", "yymm": "1210", "arxiv_id": "1210.3722", "language": "en", "url": "https://arxiv.org/abs/1210.3722" }
\section{Introduction} Granger causality \citep{granger69} provides a statistical framework for determining whether a time series $X$ is useful in forecasting another one $Y$, through a series of statistical tests. It has found wide applicability in economics, including testing relationships between money and income \citep{sims1972money}, government spending and taxes on economic output \citep{blanchard2002empirical}, stock price and volume \citep{hiemstra1994testing}, etc. Extensions involving multiple time series can be handled through analysis of vector autoregressive processes (VAR) \citep{lutkepohl2005new}, which provide a convenient framework for analysis of relationships amongst multiple variables. As a result, the Granger causality framework has recently found diverse applications in biological sciences including genetics, bioinformatics and neurosciences to understand the structure of gene regulation, protein-protein interactions and brain circuitry, respectively. In these applications, the main goal is to reconstruct a network of interactions amongst the entities involved based on time course data. It should be noted that the concept of Granger causality is based on associations between time series, and only under very stringent conditions can true causal relationships be inferred \citep{pearl2000causality}. Nonetheless, this framework provides a powerful tool for understanding the interactions among random variables based on time course data. Network Granger causality (NGC) extends the notion of Granger causality between two variables to a wider class of $p$ variables. More generally, if $X_1^t, \ldots, X_p^t$ are $p$ stationary time series, with $\mathbf{X^t} = (X_1^t, \ldots, X_p^t)'$, we consider the class of models \begin{equation}\label{NGCdefn} \mathbf{X^T} = A^1 \mathbf{X^{T-1}} + \ldots + A^d \mathbf{X^{T-d}} + \mathbf{\epsilon^T}, \end{equation} where $d$, the order of the VAR model, is allowed to be unknown and the innovation process satisfies $\mathbf{\epsilon^T} \sim N(0, \sigma^2 I)$. We call $A^1, \ldots, A^d$ the adjacency matrices from lags $1, \ldots, d$. In this model, $X^t_j$ is said to be Granger causal for $X^T_i$ if $A^t_{i,j}$ is statistically significant. In this case, there exists an edge $X^t_j \rightarrow X^T_i$ in the underlying network model comprising $T \times p$ nodes (see Figure~\ref{GGCdemo}). \begin{figure}[h] \begin{center} \includegraphics[scale = 0.5, clip = TRUE, trim = 1.3in 2in 2in 1.2in]{single} \caption{An example of a network Granger model with two non-overlapping groups observed over $T = 4$ time points}\label{GGCdemo} \end{center} \end{figure} Note that the presence of an ordering between the variables in this network, due to their temporal structure, significantly simplifies the network estimation problem \citep{alipendag10}. Nevertheless, one still has to deal with estimating a high-dimensional network (e.g. hundreds of genes) from a limited number of samples. Estimation of NGC models often arises in the analysis of large panel data in econometrics, where one is interested in understanding the temporal relationships of several economic variables observed over time across a panel of subjects. Such an example is presented in Section~\ref{banking}, which examines the structure of the balance sheets of the 50 largest US banks by size, over 9 quarterly periods.
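To make the model concrete, the following minimal simulation sketch (ours; the values of $p$, $d$ and the sparsity level are illustrative, and this is not the data-generating code used in our numerical work) draws a sample path from \eqref{NGCdefn}: \begin{verbatim}
# Minimal simulation of model (1): X^t = sum_l A^l X^{t-l} + eps^t.
import numpy as np

rng = np.random.default_rng(0)
p, d, T, sigma = 5, 2, 200, 1.0
# sparse lag matrices; entries kept small so that the VAR(d)
# process is stable in practice
A = 0.25 * rng.binomial(1, 0.2, size=(d, p, p))

X = np.zeros((T, p))
for t in range(d, T):
    X[t] = sum(A[l] @ X[t - 1 - l] for l in range(d)) \
           + sigma * rng.standard_normal(p)

# X_j is Granger causal for X_i at lag l+1 whenever A[l][i, j] != 0,
# i.e. there is an edge into node i of the network.
print(np.transpose(np.nonzero(A[0])))
\end{verbatim}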
High-dimensionality in this problem stems both from the estimation of $p^2$ coefficients for each of the adjacency matrices $A^1, \ldots, A^d$, and from the fact that the order of the time series $d$ is often unknown. Thus, in practice, one must either ``guess'' the order of the time series (oftentimes, it is assumed that the data is generated from a VAR(1) model, which can result in significant loss of information), or include all of the past time points, resulting in a significant increase in the number of variables in cases where $d \ll T$. Thus, efficient estimation of the order of the time series becomes crucial. Recent work of \citet{fujitaetal07} and \citet{lozanoetal09} employed NGC models coupled with penalized $\ell_1$ regression methods to learn gene regulatory mechanisms from time course microarray data. Specifically, \citet{lozanoetal09} proposed to group all the past observations, using a variant of the group lasso penalty, in order to construct a relatively simple Granger network model. This penalty takes into account the average effect of the covariates over different time lags and connects Granger causality to this average effect being significant. However, it suffers from significant loss of information and makes the consistent estimation of the signs of the edges difficult (due to averaging). \citet{alitrunc} proposed a truncating lasso approach by introducing a truncation factor in the penalty term, which strongly penalizes the edges from a particular time lag, if it corresponds to a highly sparse adjacency matrix. Despite recent use of NGC in high dimensional settings, theoretical properties of the resulting estimators have not been fully investigated. For example, \citet{lozanoetal09} and \citet{alitrunc} discuss consistency of the resulting estimators, but neither address in depth selection consistency properties, nor do they examine under what vector autoregressive structures the obtained results hold. Hence, there is significant room for theoretical work in understanding the performance of penalized estimators in NGC models. In addition, in many applications structural information about the variables exists, which could improve the estimation of Granger causal models. For example, genes can be naturally grouped according to their function or chromosomal location, stocks according to their industry sectors, assets/liabilities according to their class, etc. This information can be incorporated into the Granger causality framework through a group lasso penalty. If the group specification is correct, it enables the estimation of denser networks with limited sample sizes \citep{bach08, huangzhang10, lounici2011}. However, the group lasso penalty can achieve model selection consistency only at a group level. In other words, if the groups are misspecified, this procedure cannot perform within-group variable selection \citep{grpbridge09}, an important feature in many applications. To address this issue, we propose a new notion of ``direction consistency'', and use this notion to introduce a thresholded variant of group lasso for NGC models. In this paper, we develop a general framework that accommodates different variants of group lasso penalties for NGC models. It allows for the simultaneous estimation of the order of the time series and the Granger causal effects; further, it allows for variable selection even when the groups are misspecified.
In summary, the key contributions of this work are: (i) to investigate sufficient conditions that explicitly take into consideration the structure of the VAR$(d)$ model to establish norm and variable selection consistency; (ii) to introduce the novel notion of direction consistency, which generalizes the concept of sign consistency, and use it to establish variable selection consistency of group lasso estimates with misspecified group structures; and (iii) to use the latter notion to introduce an easy-to-compute thresholded variant of group lasso, which performs within-group variable selection in addition to group sparsity pattern selection. Application of the proposed framework to data from banks' balance sheets and temporal regulatory mechanisms related to T-cell activation indicates that the resulting estimates provide novel insight into interactions among components of the system, as well as improved prediction of future values of the variables. The rest of the paper is organized as follows. In Section~\ref{secmodel}, we formulate the group NGC estimate and its variants. We explain their major advantages and briefly discuss the implementation procedure. Section~\ref{secassump} describes the notation used, introduces the notion of direction consistency, and discusses different assumptions required for the consistency of NGC estimates. The theoretical properties of group NGC estimates are discussed in Section~\ref{secresults}, where non-asymptotic bounds for their norm and variable selection consistency are established. Section~\ref{secsim} reports the results of numerical experiments under different settings, and Section~\ref{secdata} applies the different NGC methods to two real datasets. \bigskip \section{Model and Framework}\label{secmodel} \subsection{Notation} Consider a VAR model \begin{equation}\label{eqn1:NGCdefn} \underbrace{\mathbf{X}^T}_{p \times 1} = \underbrace{A^1}_{p \times p} \mathbf{X}^{T-1} + \ldots + A^d \mathbf{X}^{T-d} + \mathbf{\epsilon}^T \end{equation} observed over $T$ time points $t = 1, \ldots, T$, with innovation process $\mathbf{\epsilon}^T \sim N(\mathbf{0}, \sigma^2 \mathbf{I}_{p \times p})$. The index set of the variables $\mathbb{N}_p = \{ 1, 2, \ldots, p\}$ can be partitioned into $G$ non-overlapping groups $\mathcal{G}_g$, i.e., $\mathbb{N}_p = \cup_{g=1}^G \mathcal{G}_g$ and $\mathcal{G}_g \cap \mathcal{G}_{g'} = \emptyset$ if $g \ne g'$, where $k_g = |\mathcal{G}_g|$ denotes the size of the $g^{th}$ group and $k_{max} = \displaystyle \max_{1 \le g \le G} k_g$. For any matrix $A$, we denote the $i^{th}$ row by $A_{i:}$, the $j^{th}$ column by $A_{:j}$ and the collection of rows (columns) corresponding to the $g^{th}$ group by $A_{[g]:}$ ($A_{:[g]}$). The transpose of a matrix $A$ is denoted by $A'$ and its Frobenius norm by $|| A ||_{F}$. The symbol $A^{1:T}$ is used to denote the concatenated matrix $\left[A^1: \cdots: A^T \right]$. Further, for notational convenience, we reserve the symbol $\|. \|$ to denote the $\ell_2$ norm of a vector and/or the spectral norm of a matrix. Any other norm will be indexed explicitly (e.g., $\|.\|_1,~\|.\|_{2, 2},~\|.\|_{2, \infty}$) to avoid confusion. Also, for any vector $\beta$, we use $\beta_j$ to denote its $j^{th}$ coordinate and $\beta_{[g]}$ to denote the coordinates corresponding to the $g^{th}$ group.
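In code, this grouping notation amounts to simple index bookkeeping; a small illustration (ours, with a hypothetical partition of $p = 6$ variables into $G = 3$ groups): \begin{verbatim}
# Illustration of the grouping notation with a hypothetical partition.
import numpy as np

groups = [np.array([0, 1]), np.array([2, 3, 4]), np.array([5])]  # 0-based
A = np.arange(36.0).reshape(6, 6)   # stand-in for an adjacency matrix

i, g = 1, 1                         # row i, group g (0-based indices)
A_i_g = A[i, groups[g]]             # the sub-row A_{i:[g]}
print(np.linalg.norm(A_i_g))        # ||A_{i:[g]}||_2, as in the penalty
\end{verbatim}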
\subsection{Network Granger causal (NGC) estimates with group sparsity}\label{modeldescription} Consider $n$ replicates from the NGC model~\eqref{eqn1:NGCdefn}, and denote the $n \times p$ observation matrix at time $t$ by $\mathcal{X}^t$. For example, in a panel-VAR setting, the data on $p$ economic variables on $n$ subjects (firms, households, etc.) can be observed over $T$ time points. The data is high-dimensional if either $T$ or $p$ is large compared to $n$. In such a scenario, we assume the existence of an underlying group sparse structure, i.e., the support of each row of $A^{1:T} = \left[A^1: \cdots: A^T \right]$ in the model \eqref{eqn1:NGCdefn} can be covered by a small number of groups $s$, where $s \ll (T-1)G$. Note that the groups can be misspecified in the sense that the coordinates of a group covering the support need not all be non-zero. Hence, for a properly specified group structure we expect $s \ll \| A^{1:T}_{i:} \|_0$. On the contrary, with many misspecified groups, $s$ can be of the same order as, or even larger than, $\| A^{1:T}_{i:} \|_0$. The group Granger causal estimates of the adjacency matrices $A^1, \ldots, A^{T-1}$ are obtained by solving the following optimization problem \begin{align}\label{eqn:NGCest_original} \hat{A}^{1:T-1} = \displaystyle {\operatornamewithlimits{argmin}}_{A^1, A^2, \ldots, A^{T-1} \in \mathbb{R}^{p \times p}} &~ \frac{1}{2n} \left \| \mathcal{X}^T - \displaystyle \sum_{t=1}^{T-1} \mathcal{X}^{T-t} \left( A^t \right)' \right \|_F^2 \\ &+ \lambda_n \displaystyle \sum_{t=1}^{T-1} \Psi^t \displaystyle \sum_{i=1}^p \displaystyle \sum_{g=1}^G w^{t}_{i, g}\| A^t_{i:[g]} \|_{2}, \nonumber \end{align} where $\mathcal{X}^t$ is the $n \times p$ observation matrix at time $t$, constructed by stacking $n$ i.i.d. replicates from the model \eqref{eqn1:NGCdefn}, $w^t$ is a $p \times G$ matrix of suitably chosen weights, and $\Psi^t$ is a truncating or thresholding factor, for every $t$. This optimization problem can be separated into the following $p$ penalized regression problems: for $i =1, \ldots, p$, \begin{align}\label{eqn:NGCest} \hat{A}^{1:T-1}_{i:} = \displaystyle \operatornamewithlimits{argmin}_{\theta^1, \theta^2, \ldots, \theta^{T-1} \in \mathbb{R}^{p }} &~ \frac{1}{2n} \| \mathcal{X}^T_{:i} - \displaystyle \sum_{t=1}^{T-1} \mathcal{X}^{T-t} \theta^t \|_2^2 \\ &+ \lambda_n \displaystyle \sum_{t=1}^{T-1} \Psi^t \displaystyle \sum_{g=1}^G w^{t}_{i, g}\| \theta^t_{[g]} \|_{2}. \nonumber \end{align} The order $d$ of the VAR model is estimated as $\hat{d} = \displaystyle \max_{1 \le t \le T-1} \{t: \hat{A}^t \neq \mathbf{0} \} $. Different choices of the weights $w^t_{i, g}$ and the truncating/thresholding factor $\Psi^t$ lead to different variants of NGC estimates: \begin{enumerate} \item \textbf{Regular:} The regular NGC estimates correspond to the choices $\Psi^t = 1$, $w^t_{i, g} =1$ or $\sqrt{k_g}$. The estimation procedure requires solving $p$ group lasso penalized regression problems, as described in Section \ref{secassump}. Estimation and selection properties of the estimates are discussed in Section~\ref{secl2} under different choices of the tuning parameter $\lambda_n$ and weights $w^t$. In practice, $\lambda_n$ can be tuned through cross-validation, which showed promising results in our numerical work. \item \textbf{Adaptive:} The adaptive version of NGC estimates corresponds to the choices $w^t_{i,g} = \min\{1, \| \tilde{A}^t_{i:[g]} \|_2^{-1}\}$, where $\tilde{A}^t$ are the estimates from regular NGC.
This variant of NGC involves a two-stage estimation procedure. In the first stage, only estimates of the adjacency matrices $A^t$ are obtained, but not of the order $d$. The second stage uses the first-stage estimates to select the weights $w^t_{i, g}$ and yields an improved rate of false positives. The algorithm requires solving $p$ adaptive group lasso problems. The adaptive NGC estimation procedure requires a single tuning parameter $\lambda_n$, which is selected in the same way as in regular NGC. Consistency of adaptive group NGC estimates relies on the consistency of adaptive group lasso estimates \citep[cf.][]{adaptivegrplassofei}. \item \textbf{Thresholded: } Thresholded NGC estimates are also calculated by a two-stage procedure. The first stage involves a regular NGC estimation procedure, while at the second stage, bi-level thresholding is used. First, the estimated groups with $\ell_2$ norm less than a threshold ($\delta_{grp} = t \lambda, \, t>0$) are set to zero. The second thresholding (within groups) is applied if the \textit{a priori} available grouping information is not reliable. The members within each estimated parent group are thresholded using $\delta_{misspec} = \delta_n$ for some $\delta_n \in (0, 1)$. Mathematically, for every $t=1, \ldots, T-1$, if $j \in \mathcal{G}_g$,\\ $\hat{A}^t_{ij} = \tilde{A}_{ij}^t I\left\{\left|\tilde{A}^t_{ij}\right| \ge \delta_{misspec} \left\| \tilde{A}^t_{i:[g]} \right\|_2 \right\}$ $ I \left\{ \left\| \tilde{A}^t_{i:[g]} \right\|_2 \ge \delta_{grp} \right\}$ \label{thres_defn} \item \textbf{Truncating:} A truncating variant of NGC estimates encourages accurate order selection in NGC problems, if the Granger causal effects decay over time. Truncating NGC estimates are obtained by solving a non-convex optimization problem via an iterative procedure based on a block-relaxation algorithm suggested in \citet{alitrunc}. This variant corresponds to the choices \\ $\Psi^1 = 1$, $\Psi^t = \mbox{exp}{[ \Delta n I\{ \sum_{g=1}^G I_{\{ \| A^{t-1}_{:[g]} \|_0 > 0\}} < G^2 \beta / (T-t)\} ]}$, $t \ge 2$. Consistent estimation and selection properties of truncating NGC estimates (without any group structure) were discussed in \citet{alitrunc} under a decay assumption on the Granger causal effects. Similar properties can be established using the consistency of regular group NGC estimates discussed in Section~\ref{secresults}, but are not pursued in this paper. \end{enumerate} \section{Assumptions and Conditions}\label{secassump} Note that to obtain the solution of the NGC problem, one needs to solve for each $i=1, \ldots, p$ a generic group lasso problem of the form \vspace{-0.2in} \begin{center} \begin{eqnarray}\label{genericgrplasso} \mathbf{Y}_{n \times 1} &=& \mathbf{X}_{n \times \bar{p}} \mathbf{\beta}^0_{\bar{p} \times 1} + \mathbf{\epsilon},~~~~~~\mathbf{\epsilon} \sim N(\mathbf{0}, \sigma^2 \mathbf{I}_{n \times n}) \nonumber \\ \{ 1, \ldots, \bar{p}\} &=& \displaystyle \cup_{g=1}^{\bar{G}} \mathcal{G}_g,~~~~~~|\mathcal{G}_g| = k_g \nonumber\\ \hat{\beta} &=& \displaystyle \operatornamewithlimits{argmin}_{\beta \in \mathbb{R}^p} \frac{1}{2n} \| \mathbf{Y} - \mathbf{X} \mathbf{\beta} \|^2_2 + \displaystyle \sum_{g=1}^{\bar{G}} \lambda_g \| \mathbf{\beta}_{[g]}\|_2 \label{eqn:grplasso} \end{eqnarray} \end{center} with $\mathbf{Y} = \mathcal{X}^T_{:i}$, $\mathbf{X} = [ \mathcal{X}^1: \cdots: \mathcal{X}^{T-1}]$, $\mathbf{\beta}^0 = vec(A^{1 :(T-1)}_{i:})$, $\bar{p} = (T-1)p$, $\bar{G} = (T-1)G$ and $\lambda_g = \lambda_n w_{i, g}$.
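Each of these $p$ subproblems is a standard group lasso problem and can be solved, for instance, by proximal gradient descent with block soft-thresholding. The sketch below (ours; it is not the implementation used for the numerical work in this paper) illustrates the idea: \begin{verbatim}
# Proximal gradient sketch for the generic group lasso problem:
# minimize (1/2n)||Y - X b||_2^2 + sum_g lambda_g ||b_[g]||_2.
import numpy as np

def group_lasso(X, Y, groups, lams, n_iter=500):
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = -X.T @ (Y - X @ beta) / n
        z = beta - step * grad
        for g, lam in zip(groups, lams):   # block soft-thresholding
            nrm = np.linalg.norm(z[g])
            z[g] = max(0.0, 1.0 - step * lam / nrm) * z[g] if nrm > 0 else 0.0
        beta = z
    return beta
\end{verbatim} In the NGC problem this routine would be called once per row $i$, with the design matrix $[\mathcal{X}^1: \cdots: \mathcal{X}^{T-1}]$ and weights $\lambda_g = \lambda_n w_{i,g}$.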
For ease of presentation, in the remainder we use $p$ instead of $\bar{p}$ and $G$ instead of $\bar{G}$ when examining the properties of the above problem. Next, we introduce the assumptions needed for establishing norm and variable selection consistency for estimators of \eqref{genericgrplasso}. Specifically, for norm consistency, group variants of compatibility and restricted eigenvalue conditions are used, while selection consistency relies on group irrepresentable ones. Further, for the problem at hand, we establish a connection between group irrepresentable and group compatibility conditions (Appendix~\ref{sec:supp}). Note that selection consistency of group lasso estimators involves group-level, as well as within-group, selection consistency. Furthermore, due to its inability to perform within group variable selection, group lasso estimates are not sign consistent whenever the groups are misspecified. To address this, the notion of ``direction consistency'' is introduced (Section~\ref{dircont}) and the necessity of group (weak) irrepresentable conditions is established (Appendix~\ref{sec:supp}). \subsection{Direction Consistency and Irrepresentable Conditions}\label{sec:dir} \subsubsection{Direction Consistency}\label{dircont} As discussed in the introductory section, lasso estimates recover the correct sparsity pattern and the corresponding signs of the support variables with high probability. However, group lasso achieves sparsity at the group level \citep{grpbridge09}, but not necessarily within the group itself. Hence, within-group selection consistency remains unclear, and several alternative penalized regression procedures have been proposed to overcome this shortcoming \citep{grpreg09,grpbridge09,zhao09cap}. We formulate a generalized notion of sign consistency, henceforth referred to as ``direction consistency'', that provides insight into the properties of group lasso estimates within a single group. Subsequently, these properties are used in a simple thresholding variant of the group lasso estimates that achieves within group variable selection consistency. Consider a generic group lasso estimate as in \eqref{genericgrplasso}. Without loss of generality, let $S = \{ 1, \ldots, s\}$ denote the set of group indices in $support(\mathbf{\beta}^0)$, i.e., \begin{equation*} \mathbf{\beta}^0 = [\mathbf{\beta}^0_{[1]}, \ldots, \mathbf{\beta}^0_{[s]}, \mathbf{0}, \ldots, \mathbf{0}],~\mathbf{\beta}^0_{[g]} \neq \mathbf{0} ~\forall~ g \in S = \{1, \ldots, s \},~\sum_{g \in S} k_g = q. \end{equation*} For a vector $\mathbf{\tau} \in \mathbb{R}^m \backslash \{\mathbf{0} \}$ we define $D(\tau) = \frac{\mathbf{\tau}}{\| \mathbf{\tau} \|_2}$, and set $D(\mathbf{0}) = \mathbf{0}$. In general, the function $D(\cdot)$ indicates the direction of the vector $\mathbf{\tau}$ in $\mathbb{R}^m$. Specifically, for the problem at hand, for a group $g \in S$ of size $m$, $D(\mathbf{\beta}^0_{[g]})$ indicates the direction of influence of $\mathbf{\beta}^0_{[g]}$ at a group level, as it reflects the relative importance of the influential group members. Note that for $m=1$ the function $D(\cdot)$ simplifies to the usual $sgn(\cdot)$ function.
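In code, the direction function is a one-liner; the following minimal sketch (function name ours) illustrates its behavior.
\begin{verbatim}
import numpy as np

def D(tau):
    """Direction of a vector: D(tau) = tau / ||tau||_2, with D(0) = 0."""
    nrm = np.linalg.norm(tau)
    return tau / nrm if nrm > 0 else np.zeros_like(tau)

print(D(np.array([-3.0])))            # [-1.], the usual sign for m = 1
print(D(np.array([3.0, 0.0, -4.0])))  # [ 0.6  0.  -0.8], relative importance
\end{verbatim}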
We define an estimate $\mathbf{\hat{\beta}}$ to be \textbf{\textit{direction consistent}} at rate $\delta_n$ if there exists a sequence of positive real numbers $\delta_n \rightarrow 0$ such that \begin{equation} \mathbb{P} \left(\| D(\mathbf{\hat{\beta}}_{[g]}) - D(\mathbf{\beta}^0_{[g]}) \|_2 < \delta_n,~\forall g \in S\right) \rightarrow 1 \mbox{ as } n, p \rightarrow \infty. \end{equation} It readily follows from the definition that if $\mathbf{\hat{\beta}}$ is direction consistent and \\ $\tilde{S}^n_{g} = \{ j \in \mathcal{G}_g: \frac{| \mathbf{\hat{\beta}}_j |}{\|\mathbf{\hat{\beta}}_{[g]} \|_2} > \delta_n \}$ denotes the collection of influential members of group $\mathcal{G}_g$ that are detectable with a sample size of $n$, then \begin{equation}\label{sgnwithingrp} \mathbb{P}(sgn(\mathbf{\hat{\beta}}_j) = sgn(\mathbf{\beta}^0_j), ~\forall j \in \tilde{S}^n_g, \forall g \in \{ 1, \ldots, s\}) \rightarrow 1 \mbox{ as } n, p \rightarrow \infty. \end{equation} \begin{remark} The latter observation connects the precision of group lasso estimates to the accuracy of the \textit{a priori} available grouping information. In particular, if the pre-specified grouping structure is correct, i.e., all the members within a group have non-zero effect, then for a sufficiently large sample size we have $\tilde{S}_g^n = \mathcal{G}_g$ and group lasso correctly estimates the sign of all the coordinates. On the other hand, in the case of a misspecified \textit{a priori} grouping structure, with numerous zero coordinates in $\mathbf{\beta}^0_{[g]}$, group lasso correctly estimates only the signs of the strongly influential group members detectable with sample size $n$. \end{remark} \subsubsection{Group Irrepresentable Conditions} Irrepresentable conditions are common in the literature on high-dimensional regression problems \citep{Zhaoyu06, vandegeerconditions} and are shown to be sufficient (and essentially necessary) for selection consistency of lasso estimates. Further, these conditions are known to be satisfied with high probability if the population analogue of the Gram matrix belongs to the Toeplitz family. Specifically, if the predictor variables in a group lasso regression problem are generated from an AR process, the design matrix satisfies irrepresentable conditions with high probability. Since we are working with vector AR processes and the population analogue of the Gram matrix $var(\mathbf{X}^{1:T})$ is block Toeplitz, the irrepresentable assumptions are natural candidates for studying selection consistency of the estimates. Next, we formulate group analogues of these conditions. Consider the setup of a group lasso penalized linear model in \eqref{genericgrplasso} with $p$ regressors partitioned into $G$ groups, of which only the first $s$ groups (of total size $q$) exert non-zero signal (influence) on the response.
We partition the design matrix and the coefficient vector into signal and non-signal parts \vspace{-0.25in} \begin{center} \begin{eqnarray} \underbrace{\mathbf{X}}_{n \times p} = [\underbrace{\mathbf{X}_{(1)}}_{n \times q} : \underbrace{\mathbf{X}_{(2)}}_{n \times (p-q)}] \\ \underbrace{\mathbf{\beta}^0}_{p \times 1} = [\underbrace{\mathbf{\beta}^0_{[1]}, \ldots, \mathbf{\beta}^0_{[s]}}_{k_1 + \ldots+k_s = q}, \underbrace{\mathbf{0}, \ldots, \mathbf{0}}_{p-q}] = [\mathbf{\beta}^0_{(1)}: \mathbf{\beta}^0_{(2)}]\\ C = \frac{1}{n} \mathbf{X}' \mathbf{X} = \left[ \begin{array} {cc}C_{11} & C_{12} \\ C_{21} & C_{22} \\ \end{array} \right]\label{signonsigpartn} \end{eqnarray} \end{center} Also, for a $q$-dimensional vector $\mathbf{\tau}$, define the stacked direction vector $\tilde{D}(\mathbf{\tau})$ and the block-diagonal matrix $K$ of penalty levels \begin{eqnarray} \underbrace{\tilde{D}(\mathbf{\tau})}_{q \times 1} = \left[ \begin{array}{c} \underbrace{D({\mathbf{\tau}}_{[1]})}_{k_1 \times 1} \\ \vdots \\ \underbrace{D({\mathbf{\tau}}_{[s]})}_{k_s \times 1} \end{array} \right], ~ K = \left[ \begin{array} {cccc} \lambda_1 \mathbf{I}_{k_1} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \lambda_2 \mathbf{I}_{k_2} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots& \lambda_s \mathbf{I}_{k_s} \end{array}\right] \end{eqnarray} The \textbf{Uniform Irrepresentable Condition} is satisfied if there exists $0 < \eta < 1$ such that for all $\tau \in \mathbb{R}^q $ with $\| \tau \|_{2, \infty} = \displaystyle \max_{1 \le g \le s} \| \tau_{[g]}\|_2 \le 1 $ \begin{equation} \label{unifirrep} \frac{1}{\lambda_g}\left \| \left[ C_{21} {\left(C_{11}\right)}^{-1} K \tau \right]_{[g]} \right \| < 1-\eta, ~ \forall g \notin S = \{ 1, \ldots, s\}% \end{equation} The \textbf{Weak Irrepresentable Condition} is satisfied if \begin{equation} \label{weakirrep} \frac{1}{\lambda_g}\left \| \left[ C_{21} {\left(C_{11}\right)}^{-1} K \tilde{D}(\beta^0_{(1)}) \right]_{[g]} \right \| \le 1, ~ \forall g \notin S = \{ 1, \ldots, s\} \end{equation} Note that these definitions reduce to the usual irrepresentable conditions for lasso estimates when all groups are singletons. \subsection{Group Restricted Eigenvalue Condition and Group Compatibility Condition} Restricted eigenvalue conditions \citep{bickel2009simultaneous} ensure minimax optimal $\ell_2$ estimation error in several penalized regression problems \citep{vandegeerconditions}, while their analogue for group lasso problems was introduced in \citet{lounici2011}. In the regression framework of \eqref{eqn:grplasso}, RE(s, L) is satisfied if there exists a positive number $\phi_{RE} = \phi_{RE}(s) > 0$ such that \begin{equation}\label{RElounici} \hspace{-0.1in}\min_{\begin{smallmatrix}J \subset \mathbb{N}_G,\, |J| \le s\\ \Delta \in \mathbb{R}^p \backslash\{\mathbf{0}\} \end{smallmatrix}} \left \{ \frac{\| X \Delta\| }{\sqrt{n} \| \Delta_{[J]} \| } : \displaystyle \sum_{g \in J^c} \lambda_g \| \Delta_{[g]}\| \le L \displaystyle \sum_{g \in J} \lambda_g \| \Delta_{[g]} \| \right \} \ge \phi_{RE} \end{equation} Oracle inequalities for consistency of group lasso estimators in the $\ell_{2, 1}$ norm under an RE(s, 3) assumption, and consistency in the $\ell_2$ norm under an RE(2s, 3) assumption, are discussed in \citet{lounici2011}. Following \citet{vandegeerconditions}, we introduce a slightly weaker notion called \textbf{Group Compatibility} (GC).
For a constant $L>0$, we say that the GC(S, L) condition holds if there exists a constant\\ $\phi_{compatible} = \phi_{compatible}(S, L) > 0$ such that \begin{equation}\label{compatible} \min_{\Delta \in \mathbb{R}^p \backslash \{\mathbf{0}\}} \left \{ \frac{\left( \sum_{g \in S} \lambda_g^2 \right)^{1/2} \| X \Delta\| }{\sqrt{n} \displaystyle \sum_{g \in S} \lambda_g \| \Delta_{[g]} \| } : \displaystyle \sum_{g \notin S} \lambda_g \| \Delta_{[g]}\| \le L \displaystyle \sum_{g \in S} \lambda_g \| \Delta_{[g]} \| \right \} \ge \phi_{compatible} \end{equation} This notion is used to connect the irrepresentable conditions to the consistency results of group lasso estimators in the $\ell_{2, 1}$ norm. The fact that GC(S, L) holds whenever RE(s, L) is satisfied follows directly from the Cauchy--Schwarz inequality. \section{Main Results}\label{secresults} As discussed earlier, a number of authors have investigated the norm consistency of generic group lasso estimates under different assumptions and asymptotic regimes \citep{bach08, nardi08, adaptivegrplassofei, lounici2011}. In particular, \citet{lounici2011} establish the norm consistency of group lasso estimates under restricted eigenvalue assumptions. Our main interest is to derive conditions that establish the validity of these assumptions in the context of NGC models. This issue is addressed in Sections~\ref{secl2} and \ref{sec:l2NGC}. Subsequently, employing the notion of direction consistency introduced in Section~\ref{sec:dir}, we establish selection consistency of the generic group lasso estimate, and investigate both the group-level and within group consistency of thresholded group lasso estimates for NGC. \subsection{Norm consistency of generic group lasso estimates}\label{secl2} We start by presenting independent derivations, adapted to the NGC framework, of the results established in \citet{lounici2011}, under slightly different choices of tuning parameters and assumptions. Asymptotically, both sets of estimates share the same convergence rate. However, we use a compatibility condition analogous to the one in \citet{vandegeerconditions}, instead of the $RE(s,3)$ assumption of \citet{lounici2011}, to derive finite sample estimation error bounds in the $\ell_{2,1}$ norm. \begin{prop}\label{compatible2consist} Suppose the GC condition \eqref{compatible} holds with $L=3$. Choose $\alpha > 0$ and denote $\lambda_{min} = \min_{1 \le g \le G} \lambda_g$. If $$ \lambda_g \ge \frac{2 \sigma}{\sqrt{n}} \sqrt{\left\| C_{[g] [g]} \right\|} \left( \sqrt{k_g} + \frac{\pi}{\sqrt{2}} \sqrt{\alpha \, \log\,G} \right) $$ for every $g \in \mathbb{N}_{G}$, then the following statements hold with probability at least $1 - 2G^{1-\alpha}$: \begin{eqnarray} &~& \frac{1}{n} \left\| X \left(\hat{\beta} - \beta^0 \right) \right\|^2 \le \frac{16}{\phi^2_{compatible}} \sum_{g=1}^s \lambda^2_g\label{eqpred}\\ &~& \| \hat{\beta} - \beta^0 \|_{2, 1} \le \frac{16}{\phi_{compatible}^2} \, \frac{\sum_{g=1}^s \lambda_g ^2}{\lambda_{min}}. \label{eql21} \end{eqnarray} If, in addition, RE(2s, 3) holds, then, with the same probability we get \begin{equation}\label{re2stol2consist} \| \hat{\beta} - \beta^0 \| \le \frac{4\sqrt{10}}{\phi_{RE}^2 (2s)} \, \frac{\sum_{g=1}^s \lambda_g^2}{\lambda_{min} \, \sqrt{s }} \, . \end{equation} \end{prop} The result shows that group lasso achieves a faster convergence rate than lasso if the groups are appropriately specified.
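The per-group penalty levels required by Proposition~\ref{compatible2consist} are directly computable from the design matrix. The following is a minimal sketch (in Python, with a hypothetical function name), assuming the noise level $\sigma$ is known or pre-estimated and taking $\left\| C_{[g][g]} \right\|$ to be the spectral norm of the diagonal block, consistent with the norms used in Theorem~\ref{selectconsist}.
\begin{verbatim}
import numpy as np

def lambda_bounds(X, groups, sigma, alpha=2.0):
    """Lower bounds on lam_g from the proposition:
    (2 sigma / sqrt(n)) * sqrt(||C_[g][g]||) *
    (sqrt(k_g) + (pi / sqrt(2)) * sqrt(alpha * log G))."""
    n = X.shape[0]
    G = len(groups)
    C = X.T @ X / n
    bounds = []
    for g in groups:
        spec = np.linalg.norm(C[np.ix_(g, g)], 2)  # spectral norm of C_[g][g]
        bounds.append(2.0 * sigma / np.sqrt(n) * np.sqrt(spec)
                      * (np.sqrt(len(g))
                         + np.pi / np.sqrt(2.0) * np.sqrt(alpha * np.log(G))))
    return np.array(bounds)
\end{verbatim}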
Note that if all groups are of equal size $k$ and $\lambda_g = \lambda$ for all $g$, then group lasso has an $\ell_2$ estimation error of order $O\left(\sqrt{s}(\sqrt{k} + \sqrt{\log\,G})/\sqrt{n}\right)$. In contrast, lasso's error is of order $\sqrt{\| \beta^0 \|_0 \, \log\,p/n}$, which establishes that group lasso has a lower error bound if $s \ll \| \beta^0 \|_0$. On the other hand, lasso will have a lower error bound if $s \asymp \| \beta^0 \|_0$, i.e., if the groups are highly misspecified. Next, we investigate when the restricted eigenvalue and compatibility conditions hold. \citet{raskutti2010REcorrgauss} and \citet{rudelsonzhou2012} discuss the RE assumption for lasso for different families of random design matrices and error distributions. In particular, \citet{raskutti2010REcorrgauss} show that the restricted eigenvalue condition for lasso holds with high probability if the sample size is large enough ($n \asymp q\, \log\,p $) and the minimum eigenvalue of the covariance matrix of each row of the design matrix (i.e. $\Lambda_{min}(\Sigma)$) is bounded away from $0$. The following is an adaptation of that result, tailored to group lasso regression. \begin{prop}\label{repop2resamp} Consider a generic group lasso regression \eqref{genericgrplasso} with a Gaussian random design matrix $X \in \mathbb{R}^{n \times p}$ whose rows are i.i.d. $N(\mathbf{0}, \Sigma)$. If $\Sigma^{1/2}$ satisfies $RE(s, 3)$ with a constant $\phi_{RE}$ (which holds trivially if $\Lambda_{min}(\Sigma) > 0$), then there exist universal positive constants $c, c', c''$, such that if the sample size $n$ satisfies \begin{align*} n > c'' \frac{16 \rho^2(\Sigma)}{\phi^2_{RE}} \left( \frac{s (\sqrt{\log\,G} + \sqrt{k_{max}})^2}{\lambda_{min}/\lambda_{max}} \right), \mbox{~~ where ~} \rho^2(\Sigma) = \displaystyle \max_{1 \le g \le G} \left\| \Sigma_{[g][g]} \right\| \end{align*} and $k_{max} = \max_{1 \le g \le G} k_g$, then $X$ also satisfies $RE(s, 3)$ with constant $\phi_{RE} / 8$ with probability at least $1 - c' \mbox{exp}(-cn)$. \end{prop} \subsection{Norm Consistency of Group NGC estimates}\label{sec:l2NGC} In view of the above results, norm consistency of the regular group NGC estimates holds under an appropriate asymptotic regime if both the restricted eigenvalue and group compatibility conditions are satisfied with high probability. The following result, together with Proposition \ref{repop2resamp}, achieves this objective. Specifically, it shows that for a regular NGC estimation problem \eqref{eqn:NGCest}, $\Lambda_{min}(\Sigma)$ is bounded away from $0$, as long as the underlying VAR model is stable \citep[cf. ][]{lutkepohl2005new}, with its cross spectral density and the true adjacency matrices bounded above in spectral norm. \begin{prop}\label{spectralresult} Consider a stable, stationary VAR(d) model of the form \eqref{eqn1:NGCdefn}. Let $\Sigma = Var(\mathbf{X}^{1:T})$ and let $f(\theta),~ \theta \in [-\pi, \pi]$, denote its cross spectral density. Suppose the spectral norm of the characteristic polynomial $A(z) = I - A^1z - A^2 z^2 - \ldots - A^d z^d$ evaluated on the circle $|z| = 1$ is bounded above, i.e., $\exists M > 0$ such that $\| A(e^{-i \theta}) \| < M,~ \theta \in [-\pi, \pi]$. Then $\Lambda_{min}(\Sigma) > \frac{1}{M}$. In particular, this is satisfied when $m := \displaystyle \max_{1 \le t \le d} \| A^t \| < \infty$.
\end{prop} \begin{cor} If the maximum incoming and outgoing effects at every node are bounded above, i.e., if \begin{equation} \mathbf{v}_{in} = \displaystyle \max _{1 \le i \le p} \displaystyle \sum_{t=1}^d \displaystyle \sum_{j=1}^p |A^t_{ij}| < \infty, ~~~~~~ \mathbf{v}_{out} = \displaystyle \max _{1 \le j \le p} \displaystyle \sum_{t=1}^d \displaystyle \sum_{i=1}^p |A^t_{ij}| < \infty \end{equation} then $\Lambda_{min}(\Sigma)$ is bounded away from $0$. \end{cor} \begin{proof} This corollary is a simple consequence of the above proposition together with the following result relating different norms of a matrix \citep[see e.g.][Cor.~$2.3.2$]{matrixcomputations}, \begin{eqnarray*} \| A^t \|_2 \le \sqrt{\|A^t\|_1 \|A^t\|_{\infty}} \le \frac{\|A^t\|_1 + \|A^t\|_{\infty}}{2},~~t=1, \ldots, d, \end{eqnarray*} and the definitions \begin{eqnarray*} \|A^t\|_1 = \displaystyle \max_{1 \le i \le p } \displaystyle \sum_{j=1}^p |A^t_{ij}|,~~ \|A^t\|_{\infty} = \displaystyle \max_{1 \le j \le p } \displaystyle \sum_{i=1}^p |A^t_{ij}|. \end{eqnarray*} \end{proof} The following theorem is an immediate corollary of the above results. \begin{thm}\label{thm:l2consistency} Consider an NGC estimation problem \eqref{eqn:NGCest}. Suppose the common design matrix $\mathbf{X}^n = [ \mathcal{X}^1: \cdots: \mathcal{X}^{T-1}]$ in the $p$ regression problems satisfies $RE(2s, 3)$ with $s = \max_{i} \left| pa_i \right|$, where $pa_i$ denotes the set of parent nodes of $X^T_i$ in the network. Consider the asymptotic regimes $G \asymp n^a, \, a>0$ and $s = O(n^{c_1}), k_{max} = O(n^{c_2}), \, 0 < c_1, c_2 < 1$ such that $\sqrt{s}(\sqrt{k_{max}}+\sqrt{\log \, G})/\sqrt{n} = o(1)$. Then for a suitably chosen sequence of $\lambda_n$ we have $\left\| \tilde{A}^{1:\hat{d}} - A^{1:d} \right\|_F \rightarrow 0$ in probability, as $n, p \rightarrow \infty$. \end{thm} \subsection{Selection consistency for generic group lasso estimates}\label{sec:selcon} Next, we discuss the selection consistency properties of a generic group lasso regression problem with a common tuning parameter across groups, i.e., $\lambda_g = \lambda$ for every $g \in \mathbb{N}_G$. Similar results can be obtained for more general choices of the tuning parameters. \begin{thm}\label{selectconsist} Assume that the group uniform irrepresentable condition holds with $1 - \eta$ for some $\eta > 0$. Then, for any choice of \begin{eqnarray*} \lambda &\ge& \displaystyle \max_{g \notin S} \frac{1}{\eta} \frac{\sigma}{\sqrt{n}} \sqrt{\left\| \left( C_{22}\right)_{[g][g]} \right\|} \left( \sqrt{k_g} + \frac{\pi}{\sqrt{2}} \sqrt{\alpha \, \log\,G} \right) \\ \delta_n &\ge& \displaystyle \max_{g \in S} \frac{1}{\left\| \beta^0_{[g]} \right\|} \left( \lambda \sqrt{s} \left\| (C_{11})^{-1} \right\| + \sigma \sqrt{\left\| (C_{11})^{-1}_{[g][g]} \right\| }\frac{(\sqrt{k_g} + \sqrt{\alpha \log\,G})}{\sqrt{n}} \right), \end{eqnarray*} with probability greater than $1 -4 G^{1-\alpha}$, there exists a solution $\hat{\beta}$ satisfying \begin{enumerate} \item $\hat{\beta}_{[g]} = 0$ for all $g \notin S$, \item $\left\| \hat{\beta}_{[g]} - \beta^0_{[g]} \right\| < \delta_n \left\| \beta^0_{[g]} \right\|$, and hence $\left\| D(\hat{\beta}_{[g]}) - D(\beta^0_{[g]}) \right\| < 2\delta_n$, for all $g \in S$. If $\delta_n < 1$, then $\hat{\beta}_{[g]} \neq 0$ for all $g \in S$.
\end{enumerate} \end{thm} \begin{remark} The tuning parameter $\lambda$ can be chosen of the same order as required for $\ell_2$ consistency in order to achieve selection consistency within groups in the sense of \eqref{sgnwithingrp}. Further, with the above choice of $\lambda$, $\delta_n$ can be chosen of the order $O(\sqrt{s}(\sqrt{k_{max}} + \sqrt{\log\,G})/\sqrt{n})$. Thus, group lasso correctly identifies the group sparsity pattern if $\sqrt{s}(\sqrt{k_{max}} + \sqrt{\log\,G})/\sqrt{n} \rightarrow 0$, the same scaling required for $\ell_2$ consistency. \end{remark} Note that the second part of Theorem~\ref{selectconsist} also shows that group lasso estimates are direction consistent under the same scaling, and hence a thresholded version of the estimates selects all important variables with high probability, as discussed in Section~\ref{sec:thresh}. It can be shown that the weak irrepresentable condition is necessary for direction consistency of the group lasso estimates under mild regularity conditions on the design matrices. In addition, analogously to the result in \citet{vandegeerconditions}, it can be shown that a slightly stronger version of the uniform irrepresentable condition implies the group compatibility condition for group lasso estimates. We refer to Appendix~\ref{sec:supp} for a detailed discussion of these connections. \subsection{Thresholding in Group NGC estimators}\label{sec:thresh} As described in Section \ref{modeldescription}, regular group NGC estimates can be thresholded both at the group and coordinate levels. The first level of thresholding is motivated by the fact that lasso can select too many false positives [cf. \citet{vandegeerthreshadaptejs2011}, \citet{zhou2010thresholded} and the references therein]. We propose a hard-thresholding of regular group NGC estimates using a threshold $\delta_{grp} = C \lambda$ for some suitably chosen constant $C$. The second level of thresholding employs the direction consistency of regular group NGC estimates to perform within group variable selection with high probability. At this level, we hard-threshold a coordinate $j \in {\cal G}_g$ to zero if the corresponding coordinate of $D(\hat{\beta}_{[g]})$ is lower than a threshold $\delta_n \in (0, 1)$ in absolute value. In view of Theorem~\ref{selectconsist}, the within group thresholding selects the group members with strong enough signal relative to the other members of that group. The following result demonstrates the benefit of these two types of thresholding. Note that the thresholding at the group level relies only on the weaker GC(S, 3) condition, while the within group thresholding requires a stronger irrepresentable condition. \begin{thm}\label{propthres} Consider a generic group lasso regression problem \eqref{genericgrplasso} with common tuning parameter $\lambda_g = \lambda$. \begin{enumerate} \item[i)] Assume the GC(S, 3) condition of \eqref{compatible} holds with a constant $\phi = \phi_{compatible}$ and define \[ \hat{\beta}^{thgrp}_{[g]} = \hat{\beta}_{[g]} {\bf 1}_{\| \hat{\beta}_{[g]} \| > 4 \lambda}. \] If $\hat{S} = \{ g \in {\mathbb N}_G: \hat{\beta}^{thgrp}_{[g]} \ne \mathbf{0}\}$, then $|\hat{S} \backslash S| \le \frac{12\, s}{\phi^2}$, with probability at least $1 - 2G^{1-\alpha}$. \item[ii)] Assume that the uniform irrepresentable condition holds with $1-\eta$ for some $\eta > 0$.
Choose $\lambda$ and $\delta_n$ as in Theorem~\ref{selectconsist} and define \[ \hat{\beta}^{thgrp}_{j} = \hat{\beta}_j {\bf 1} \{|\hat{\beta}_j|/ \|\hat{\beta}_{[g]} \| > 2 \, \delta_n \} \mbox{ for all $j \in \mathcal{G}_g$. } \] Then, we have $supp(\beta^0) = supp(\hat{\beta}^{thgrp})$ with probability at least $1-4G^{1-\alpha}$ if $\min_{j \in supp(\beta^0)} |\beta^0_j| > 2 \delta_n \, \| \beta^0_{[g]} \|$ for all $j \in {\cal G}_g$, i.e., if the effect of every non-zero member in a group is ``visible" relative to the total effect from the group. \end{enumerate} \end{thm} In NGC settings where information about the temporal decay of the edge density of the network is available, a third level of thresholding is useful. Specifically, one can shrink to zero all the coefficients for each time lag where the total number of edges does not exceed a prespecified threshold that takes into account a predefined probability for false negatives. In this work, we do not further pursue such estimators. \section{Performance Evaluation}\label{secsim} \begin{figure}[t!] \includegraphics[width = \textwidth, height = 0.15\textwidth]{true1.pdf} \includegraphics[width = \textwidth, height = 0.15\textwidth]{lasso1.pdf} \includegraphics[width = \textwidth, height = 0.15\textwidth]{grp1.pdf} \includegraphics[width = \textwidth, height = 0.15\textwidth]{thgrp1.pdf} \caption{Estimated adjacency matrices of a misspecified NGC model: (a) True, (b) Lasso, (c) Group Lasso, (d) Thresholded Group Lasso} \label{fig_adj} \end{figure} We evaluate the performance of the regular, adaptive and thresholded variants of the group NGC estimators through an extensive simulation study, and compare the results to those obtained from lasso estimates. A standard R package (\texttt{grpreg} \citep{grpreg09}) was used to obtain the estimates. \input{tbl1.tex} \input{tbl2.tex} \input{tbl3.tex} The settings considered are: \begin{enumerate} \item \textit{Balanced groups of equal size}: The parameters are as follows: i.i.d.\ samples of size $n=60,110,160$ are generated from lag-2 ($d=2$) VAR models on $T=5$ time points, comprising $p=60,120,200$ nodes partitioned into groups of equal size in the range 3--5. \item \textit{Unbalanced groups:} In this case, the corresponding node set is partitioned into one larger group of size 10 and many groups of size 5. \item \textit{Misspecified balanced groups:} The parameters are as follows: i.i.d.\ samples of size $n=60,110,160$ are generated from lag-2 ($d=2$) VAR models on $T=10$ time points, comprising $p=60,120$ nodes partitioned into groups of equal size 6. Further, for each group there is a 30\% misspecification rate, namely, for every parent group of a downstream node, 30\% of the group members do not exert any effect on it. \end{enumerate} The best tuning parameter $\lambda$ is chosen by a grid search in the interval $[C_1 \lambda_e, C_2 \lambda_e]$, where $\lambda_e = \sqrt{2 \,\log\,p/n}$ for lasso and $\sqrt{2 \,\log\,G/n}$ for group lasso, using a $19:1$ sample-splitting. The thresholding parameters are selected as $ \delta_{grp} = 0.7 \lambda \sigma$ at the group level and $\delta_{misspec} = n^{-0.2}$ within groups. Finally, within group thresholding is applied only when the group structure is misspecified.
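With the choices above, the bi-level thresholding rule of Section~\ref{modeldescription} amounts to the following minimal sketch (function name ours):
\begin{verbatim}
import numpy as np

def bilevel_threshold(beta, groups, delta_grp, delta_misspec):
    """Bi-level thresholding of a group lasso estimate:
    (1) zero out groups with ||beta_[g]||_2 below delta_grp;
    (2) within surviving groups, zero out coordinates j with
        |beta_j| below delta_misspec * ||beta_[g]||_2."""
    out = beta.copy()
    for g in groups:
        nrm = np.linalg.norm(out[g])
        if nrm < delta_grp:
            out[g] = 0.0   # group-level thresholding
        else:              # within-group thresholding
            out[g] = np.where(np.abs(out[g]) >= delta_misspec * nrm,
                              out[g], 0.0)
    return out
\end{verbatim}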
The following performance metrics were used for comparison purposes: (i) $Precision= TP/(TP+FP)$, (ii) $Recall = TP/(TP+FN)$ and (iii) Matthews correlation coefficient (MCC), defined as \begin{eqnarray*} \frac{(TP \times TN) - (FP \times FN)}{((TP + FP)\times (TP + FN) \times(TN + FP) \times(TN + FN))^{1/2}}, \end{eqnarray*} where $TP$, $TN$, $FP$ and $FN$ correspond to true positives, true negatives, false positives and false negatives in the estimated network, respectively. The results for the balanced settings are given in Table~\ref{tbl:bal}. The average and standard deviations (in parentheses) of the performance metrics are presented for each setup. The Recall for $p=60$ shows that even for a network with $60 \times (5-1) = 240$ nodes and $|E| = 351$ true edges, the group NGC estimators recover about $71\%$ of the true edges with a sample size as low as $n = 60$, while lasso-based NGC estimates recover only $31\%$ of the true edges. The three group NGC estimates have comparable performances in all the cases. However, the thresholded group NGC variant shows slightly higher precision than the other group variants for smaller sample sizes (e.g., $n=60, p = 200$). The results for $p = 60, n = 110$ also show that the lower precision of lasso is partially caused by its inability to estimate the order of the VAR model correctly, as measured by ERR LAG, the number of falsely connected edges from lags beyond the true order of the VAR model, divided by the number of edges in the network ($|E|$). This finding is nicely illustrated in Figure~\ref{fig_adj} and Table~\ref{tbl:bal}. The group penalty encourages edges from the nodes of the same group to be picked up together. Since the nodes of the same group are also from the same time lag, the group variants have substantially lower ERR LAG. For example, the average ERR LAG of lasso for $p=200, ~n=160$ is $19.79\%$, while the average ERR LAGs for the group lasso variants are in the range $3.06\% - 4.21\%$. The results for the unbalanced networks are given in Table \ref{tbl:unbal}. As in the balanced group setup, in almost all the simulation settings the group NGC variants outperform the lasso estimates with respect to all three performance metrics. However, the performances of the different variants of group NGC are comparable and tend to have higher standard deviations than the lasso estimates. Also, the average ERR LAGs for the group NGC variants are substantially lower than the average ERR LAG for lasso, demonstrating the advantage of the group penalty. Although the conclusions regarding the comparisons of lasso and group NGC estimates remain unchanged, it is evident that the performances of all the estimators are affected by the presence of one large group, skewing the uniform nature of the network. For example, the MCC measures of group NGC estimates in a balanced network with $p=60$ and $|E| = 351$ vary around $97\% - 98\%$, which drops to $89\% - 90\%$ when the groups are unbalanced. The results for misspecified groups are given in Table \ref{tbl:mis}. Note that for larger sample sizes $n$, the MCC of lasso and regular group lasso are comparable. However, the thresholded version of group lasso ($\delta_{misspec} = n^{-0.2}$ used for within group selection) achieves significantly higher MCC than the rest. This demonstrates the advantage of using the direction consistency of group lasso estimators to perform within group variable selection.
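For reference, these metrics can be computed from the supports of the estimated and true adjacency matrices as in the following minimal sketch (function name ours):
\begin{verbatim}
import numpy as np

def edge_metrics(A_hat, A_true):
    """Precision, recall and MCC of an estimated edge set; A_hat and
    A_true are stacked adjacency matrices of the same shape."""
    est, tru = (A_hat != 0), (A_true != 0)
    TP = float(np.sum(est & tru)); FP = float(np.sum(est & ~tru))
    FN = float(np.sum(~est & tru)); TN = float(np.sum(~est & ~tru))
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)
    mcc = (TP * TN - FP * FN) / np.sqrt(
        (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    return precision, recall, mcc
\end{verbatim}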
We note that a careful choice of the thresholding parameters $\delta_{grp}$ and $\delta_{misspec}$ via cross-validation or other model selection criteria indicates further improvement in the performance of thresholded group lasso; however, we do not pursue these methods here, as they require a grid search over many tuning parameters or an efficient estimator of the degrees of freedom of group lasso. In summary, the results clearly show that all variants of group lasso NGC outperform the lasso-based ones whenever the grouping structure of the variables is known and correctly specified. Further, their performance depends on the composition of group sizes. On the other hand, if the a priori known group structure is moderately misspecified, lasso estimates produce comparable results to regular and adaptive group NGC ones, while thresholded group estimates outperform all other methods, as expected. \section{Application}\label{secdata} \subsection{Example: Banking balance sheets application}\label{banking} \begin{figure}[t!] \centering {\includegraphics[angle = 90, scale = 0.85, trim= 0in 1in 0.5in 0.5in, clip=true ]{graph_l_large}} \caption{Estimated Networks of banking balance sheet variables using lasso. The network represents the aggregated network over 5 time points.} \label{fig_bank_l} \end{figure} \begin{figure}[t!] \centering {\includegraphics[angle = 90, scale = 0.85, trim= 0in 1in 0.5in 0.5in, clip=true ]{graph_g_large}} \caption{Estimated Networks of banking balance sheet variables using group lasso. The network represents the aggregated network over 5 time points.} \label{fig_bank_g} \end{figure} \begin{table} \caption{Mean and standard deviation (in parentheses) of PMSE (MSE in case of Dec 2010) for prediction of banking balance sheet variables.} \label{MSE_bank} \centering \begin{tabular}{|r|cccc|} \hline Quarter & Lasso & Grp & Agrp & Thgrp \\ \hline Dec 2010 & 1.59 (0.29) & 0.36 (0.05) & 0.36 (0.05) & 0.37 (0.05) \\ Mar 2011 & 1.46 (0.30) & 0.47 (0.23) & 0.47 (0.23) & 0.46 (0.22) \\ Jun 2011 & 1.33 (0.26) & 0.36 (0.11) & 0.36 (0.11) & 0.35 (0.11) \\ Sep 2011 & 1.72 (0.32) & 0.50 (0.18)& 0.50 (0.18) & 0.47 (0.16) \\ \hline \end{tabular} \end{table} In this application, we examine the structure of the balance sheets, in terms of assets and liabilities, of the $n=50$ largest (in terms of total balance sheet size) US banking corporations. The data cover $9$ quarters (September 2009--September 2011) and were directly obtained from the Federal Deposit Insurance Corporation (FDIC) database (available at \texttt{www.fdic.gov}). The $p=21$ variables correspond to different assets (US and foreign government debt securities, equities, loans (commercial, mortgages), leases, etc.) and liabilities (domestic and foreign deposits from households and businesses, deposits from the Federal Reserve Board, deposits of other financial institutions, non-interest bearing liabilities, etc.). We have organized them into four categories: two for the assets (loans and securities) and two for the liabilities (Balances Due and Deposits, based on a \$250K reporting FDIC threshold). Amongst the 50 banks examined, one discerns large integrated ones with significant retail, commercial and investment activities (e.g. Citibank, JP Morgan, Bank of America, Wells Fargo), banks primarily focused on investment business (e.g. Goldman Sachs, Morgan Stanley, American Express, E-Trade, Charles Schwab), and regional banks (e.g. Banco Popular de Puerto Rico, Comerica Bank, Bank of the West).
The raw data are reported in thousands of dollars. The few missing values were imputed using a nearest neighbor method with $k=5$: banks were matched according to their total assets in the most recent quarter (September 2011), and every missing observation for a particular bank was subsequently imputed by the median observation of its five nearest neighbors. The data were log-transformed to reduce non-stationarity issues. The dataset was restructured as a panel with $p=21$ variables and $n=50$ replicates observed over $T=9$ time points. Every column of replicates was scaled to have unit variance. We applied the proposed variants of NGC estimates to the first $T=6$ time points (Sep 2009 - Dec 2010) of the above panel dataset. The parameters $\lambda$ and $\delta_{grp}$ were chosen using a $19:1$ sample-splitting method, and the misspecification threshold $\delta_{misspec}$ was set to zero, as the grouping structure was reliable. We calculated the MSE of the fitted model in predicting the outcomes in the four quarters (December 2010 - September 2011). The predicted MSEs (MSE for Dec 2010) are listed in Table~\ref{MSE_bank}. The estimated network structures are shown in Figures~\ref{fig_bank_l} and \ref{fig_bank_g}. It can be seen that the lasso estimates recover a very simple temporal structure amongst the variables; namely, that past values (in this case lag-1) influence present ones. Given the structure of the balance sheet of large banks, this is an anticipated result, since it cannot be radically altered over a short time period due to business relationships and past commitments to customers of the bank. However, the (adaptive) group lasso estimates reveal a richer and more nuanced structure. Examining the fitted values of the adjacency matrices $A^t$, we notice that the dominant effects remain those discovered by the lasso estimates. However, fairly strong effects are also estimated within each group, as well as between the two asset groups (loans and securities) on the balance sheet. This suggests rebalancing of the balance sheet for risk management purposes between relatively low risk securities and potentially more risky loans. Given the period covered by the data (post financial crisis starting in September 2009), when credit risk management became of paramount importance, the analysis picks up interesting patterns. On the other hand, significantly fewer associations are discovered on the liabilities side of the balance sheet. Finally, there exist relationships between deposits and securities such as US Treasuries and other domestic ones (primarily municipal bonds); the latter indicates an effort by the banks to manage the credit risk of their balance sheets, namely by allocating to low risk assets as opposed to more risky loans. It is also worth noting that the group lasso model exhibits superior predictive performance over the lasso estimates, even 4 quarters into the future. Finally, in this case the thresholded estimates did not provide any additional benefits over the regular and adaptive variants, given that the specification of the groups was based on accounting principles and hence correctly structured. \subsection{Example: T-cell activation}\label{tcell} Estimation of gene regulatory networks from expression data is a fundamental problem in functional genomics \citep{friedman2004inferring}. Time course data coupled with NGC models are informationally rich enough for the task at hand.
The data for this application come from \citet{rangel2004modeling}, where expression patterns of genes involved in T-cell activation were studied with the goal of discovering regulatory mechanisms that govern them in response to external stimuli. Activated T-cells are involved in the regulation of effector cells (e.g. B-cells) and play a central role in mediating immune response. The available data, comprising $n=44$ samples of $p=58$ genes, measure the cells' response at 10 time points, $t = 0, 2, 4, 6, 8, 18, 24, 32, 48, 72$ hours after their stimulation with a T-cell receptor independent activation mechanism. We concentrate on data from the first 5 time points, which correspond to early response mechanisms in the cells. Genes are often grouped based on their function and activity patterns into biological pathways. Thus, the knowledge of gene functions and their membership in biological pathways can be used as inherent grouping structures in the proposed group lasso estimates of NGC. To this end, we used available biological knowledge to define groups of genes based on their biological function. Reliable information on biological function was found in the literature for 38 genes, which were retained for further analysis. These 38 genes were grouped into 13 groups, with the number of genes in different groups ranging from 1 to 5. \begin{figure}[t!] \centering {\includegraphics[width = \textwidth]{C_LASSO}} {\includegraphics[width = \textwidth]{C_THRESHOLDED}} \caption{Estimated Gene Regulatory Networks of T-cell activation. Edge widths represent the number of effects between two groups, and the networks represent the aggregated regulatory network over 3 time points.} \label{fig_gene} \end{figure} In \citet{shojaie2011thresholding}, we analyzed these data and showed that the decay condition for the truncating lasso penalty seems to be violated in this case, and considered instead estimation of regulatory effects using an adaptive thresholding penalty. Hence, we consider here only the application of the adaptive and thresholded variants of the proposed group lasso estimator for NGC. Figure~\ref{fig_gene} shows the estimated networks based on lasso and thresholded group lasso estimates, where for ease of representation the nodes of the network represent groups of genes. \begin{table}[t!] \caption{Mean and standard deviation of MSE for different NGC estimates}\label{tbl_gene} \centering \begin{tabular}{|c|c c c c|} \hline \, & Lasso & Grp & Agrp & Thgrp \\ \hline mean & 0.649 & 0.456 & 0.457 & 0.456 \\ stdev & 0.340 & 0.252 & 0.251 & 0.252 \\ \hline \end{tabular} \end{table} In this case, the estimates from the variants of the group NGC estimator were all similar, and included a number of known regulatory mechanisms in T-cell activation not present in the regular lasso estimate. For instance, \citet{waterman1990purification} suggest that TCF plays a significant role in the activation of T-cells, which may explain the dominant role of this group of genes in the activation mechanism. On the other hand, \citet{kim2005nuclear} suggest that activated T-cells exhibit high levels of osteoclast-associated receptor activity, which may explain the large number of associations between members of the osteoclast differentiation group and other groups.
Finally, the estimated networks based on variants of the group lasso estimator also offer improved estimation accuracy in terms of mean squared error (MSE), despite having comparable complexities to their regular lasso counterpart (Table~\ref{tbl_gene}), which further confirms the findings of the other numerical studies in the paper. \section{Discussion}\label{disc} In this paper, the problem of estimating Network Granger Causal (NGC) models with inherent grouping structure is studied when replicates are available. Norm consistency, as well as group-level and within-group variable selection consistency, are established under fairly mild assumptions on the structure of the underlying time series. To achieve the latter objective, the novel concept of direction consistency is introduced. The type of NGC models discussed in this study has wide applicability in different areas, including genomics and economics. However, in many contexts the availability of replicates at each time point is not feasible (e.g. for rates of return of stocks or other macroeconomic variables), while grouping structure is still present (e.g. grouping of stocks according to industry sector). Hence, it is of interest to study the behavior of group lasso estimates in such a setting and address the technical challenges emanating from such a pure time series (dependent) data structure.
{ "timestamp": "2012-11-05T02:01:07", "yymm": "1210", "arxiv_id": "1210.3711", "language": "en", "url": "https://arxiv.org/abs/1210.3711" }
\section{Introduction} Consider the steady axisymmetric Euler equations for a fluid (incompressible and with zero vorticity) with a free surface acted on only by gravity. Using cylindrical coordinates and the Stokes stream function $\psi$ (see for example \cite[Exercise 4.18 (ii)]{fraenkel}), we obtain the free boundary problem \begin{align} \label{intro_sol} &\div \left(\frac{1}{x_1} \nabla \psi(x_1,x_2)\right)=0 \textrm{ in the water phase } \{ \psi>0\}\\ &\frac{1}{x_1^2}{\vert \nabla \psi(x_1,x_2)\vert}^2 = - x_2 \text{ on the free surface } \partial \{\psi >0\};\non\end{align} here the original velocity field is $$V(X,Y,Z) = \left(-{1\over {x_1}} \partial_2 \psi \cos \vartheta, -{1\over {x_1}} \partial_2 \psi \sin \vartheta , {1\over {x_1}} \partial_1 \psi\right),$$ where $(X,Y,Z)=(x_1\cos\vartheta, x_1\sin\vartheta, x_2)$. Observe that the positive sign of $\psi$ is chosen just for convenience and that, replacing $\psi$ by $-\psi$, our analysis covers the case of negative $\psi$ as well. Note also that, apart from a model where the fluid is pumped in or sucked out at a fixed boundary, the equations above also describe the case of a traveling wave moving in the direction of the axis of symmetry; in that case they describe the steady flow in the moving frame, so that the original velocity field is $$V(X,Y,Z, t) =\widetilde V(X,Y,Z-c_0t)+(0,0,c_0),$$ where $c_0$ is the speed of the traveling wave and $$\widetilde V(X,Y,Z)=\left(-{1\over {x_1}}\partial_2 \psi \cos \vartheta, -{1\over {x_1}} \partial_2 \psi\sin \vartheta, {1\over {x_1}} \partial_1 \psi\right).$$ \cite{S-rev}, \cite{Sw1} and \cite{Sw2} are excellent reviews of two-dimensional water waves. The free boundary problem (\ref{intro_sol}) has been studied in \cite{acf}, where regularity away from the degenerate sets $\{x_1=0\}$ (the axis of symmetry) and $\{ x_2=0\}$ (containing all stagnation points) has been shown for minimizers of a certain energy. In the present paper we focus on precisely those two sets and analyze the profile of the velocity vector field close to points in those sets. Due to the degeneracy of the free boundary condition $\vert \nabla \psi(x_1,x_2)\vert^2 = -{x_1}^2 x_2$ at points $x^0=(x_1^0,x_2^0)$ with $x_1^0 x_2^0=0$, we obtain {\em four} invariant scalings \begin{align*} &{\psi(x^0+rx)\over r} \textrm{ in the case } x_1^0\ne 0 \textrm{ and } x_2^0\ne 0,\\ &{\psi(x^0+rx)\over {r^{3\over 2}}} \textrm{ in the case } x_1^0\ne 0 \textrm{ and } x_2^0=0,\\ &{\psi(x^0+rx)\over {r^{2}}} \textrm{ in the case } x_1^0=0 \textrm{ and } x_2^0\ne 0,\\ &{\psi(x^0+rx)\over {r^{5 \over 2}}} \textrm{ in the case } x_1^0=x_2^0=0. \end{align*} Note that the velocity (in the moving frame) would scale like $1,|x|^{{1\over 2}},1,|x|^{1\over 2}$ in the respective cases. In a first main result we determine the profile of the scaled solution as $r\to 0$ (Proposition \ref{2dim}): in the case $x_1^0\ne 0$ and $x_2^0\ne 0$ the only asymptotics possible is constant velocity flow parallel to the free surface. In the case $x_1^0\ne 0$ and $x_2^0=0$ the only asymptotics possible is the well-known Stokes corner flow (see \cite{toland}, \cite{plotnikov}, \cite{english}, \cite{VW}). Due to the perturbed equation, the situation is actually not unlike the two-dimensional problem in the presence of vorticity (see \cite{VW2}, \cite{CS}, \cite{CS2}, \cite{CS3} for two-dimensional results in the presence of vorticity). In the case $x_1^0=0$ and $x_2^0\ne 0$ the only asymptotics possible is constant velocity flow in the gravity direction.
This suggests the {\em possibility of air cusps} pointing in the gravity direction (Figure \ref{fig9}). \begin{minipage}{\textwidth} \begin{center} \input{vort10.pdf_t} \end{center} \captionof{figure}{Dynamics suggested by our analysis}\label{fig9} \end{minipage} In the case $x_1^0=x_2^0=0$ the only asymptotics possible is the {\em Garabedian pointed bubble solution} with water above air (cf. \cite{garabedian}, Figure \ref{fig11}). This comes as a surprise at first, as it means that there is no nontrivial asymptotic profile at all with air above water and with the invariant scaling. However, there remains at this stage the possibility that the solution has a higher growth than that suggested by the invariant scaling. \begin{figure} \begin{center} \input{vort11.pdf_t} \end{center} \captionof{figure}{Garabedian pointed bubble asymptotics}\label{fig11} \end{figure} In Theorem \ref{curve} we first analyze the possible shapes of the surface close to stagnation points and close to points on the axis of symmetry. Assuming that the surface is given by an injective curve, and assuming also a strict Bernstein inequality (corresponding to a Rayleigh-Taylor condition), we obtain the following result: in the case $x_1^0\ne 0$ and $x_2^0=0$ the only asymptotics possible are the well-known Stokes corner (an angle of opening $120^\circ$ in the direction of the axis of symmetry) and a horizontal point. In the case $x_1^0=0$ and $x_2^0<0$ the only asymptotics possible are cusps in the direction of the axis of symmetry. In the case $x_1^0=x_2^0=0$ the only asymptotics possible are the {\em Garabedian pointed bubble asymptotics} (an angle of opening $\approx 114.799^\circ$ with water above air) and a {\em horizontal point}. A fine analysis of the velocity profile in the last case ($x_1^0=x_2^0=0$ and a horizontal point) is no mean feat, and we confine ourselves to the case of air above water. Here we prove (Theorem \ref{deg2d}) that the velocity scales almost like $\sqrt{X^2+Y^2+Z^2}$ and is asymptotically given by the velocity field $$V(\sqrt{X^2+Y^2},Z)=c (-\sqrt{X^2+Y^2}, 2Z),$$ where $c$ is a nonzero constant (Figure \ref{fig10}). \begin{minipage}{\textwidth} \begin{center} \input{vort9.pdf_t} \end{center} \captionof{figure}{Dynamics suggested by our analysis}\label{fig10} \end{minipage} The proofs rely on a monotonicity formula as well as a {\em frequency formula} for the axisymmetric problem; as remarked in \cite{VW}, it is possible for certain semilinear problems to derive {\em on the set of highest density} not a perturbation of Almgren's frequency formula (see \cite{almgren}, \cite{lin2}, \cite{lin}, \cite{arshak}), but a true nonlinear frequency formula. Here we extend the formula of \cite{VW} to the axisymmetric case. In combination with a concentration compactness result for the axially symmetric Euler equations by J.-M. Delort \cite{delort}, this leads to the already mentioned profile for the velocity vector field. Note that, while the concentration compactness result alone does {\em not} lead to strong convergence in general, we prove that the convergence to the limiting velocity vector field is strong in our application. \section{Notation} \label{notation} We will use coordinates $(X,Y,Z)$ in the physical space $\R^3$ together with partial derivatives $\partial_X,\partial_Y,\partial_Z$, as well as two-dimensional coordinates $x=(x_1,x_2)$ with partial derivatives $\partial_1,\partial_2$.
Sometimes we are going to use cylindrical coordinates $(X,Y,Z)=(x_1 \cos \vartheta, x_1 \sin \vartheta, x_2)$. We denote by $x\cdot y$ the Euclidean inner product in $\R^n \times \R^n$, by $\vert x \vert$ the Euclidean norm in $\R^n$, by $B_r(x^0):= \{x \in \R^n : \vert x-x^0 \vert < r\}$ the ball of center $x^0$ and radius $r$, and by $B^+_r(x^0):= \{x \in \R^n : x_1>0 \textrm{ and }\vert x-x^0 \vert < r\}$, $\partial B^+_r(x^0):= \{x \in \R^n : x_1>0 \textrm{ and }\vert x-x^0 \vert =r\}$ and $\R^n_+ := \{ (x_1,\dots,x_n) : x_1>0\}$ the positive parts. \emph{Note that $\partial B^+_r(x^0)$ is not the topological boundary of $B^+_r(x^0)$ and that $B^+_r(x^0)$ is not necessarily a half ball.} We will use the notation $B_r$ for $B_r(0)$ as well as $B^+_r$ for $B^+_r(0)$, and denote by $\omega_2$ the $2$-dimensional volume of $B_1$. We will use the weighted $L^p$ space $$ L^p_w(\R^2_+):= \{ v \textrm{ measurable} : \int_{\R^2_+} \we |v|^p\,dx <+\infty\}$$ with norm $\Vert f\Vert_{L^p_w(\R^2_+)} = \left(\int_{\R^2_+} \we |v|^p\,dx\right)^{1\over p}$, the weighted Sobolev space \begin{align*}W^{1,p}_w(\R^2_+):= &\{ v \in L^p_w(\R^2_+) : \textrm{ all weak partial derivatives of } v \\ &\textrm{ are contained in } L^p_w(\R^2_+)\}\end{align*} as well as the local spaces $$L^p_{w,loc}(\R^2_+) := \{ v \textrm{ measurable} : \int_{B_R^+} \we |v|^p\,dx <+\infty \textrm{ for each } R\in (0,+\infty)\}$$ and \begin{align*}W^{1,p}_{w,loc}(\R^2_+) := & \{ v \textrm{ measurable} : v \in L^p_{w,loc}(\R^2_+) \textrm{ and all weak partial derivatives}\\ &\textrm{of } v \textrm{ are contained in } L^p_w(B_R^+) \textrm{ for each } R\in (0,+\infty)\}.\end{align*} We denote by $\chi_A$ the characteristic function of a set $A$. For any real number $a$, the notation $a^+$ stands for $\max(a, 0)$ and $a^-$ stands for $\min(a, 0)$. Also, ${\mathcal L}^n$ shall denote the $n$-dimensional Lebesgue measure and ${\mathcal H}^s$ the $s$-dimensional Hausdorff measure. By $\nu$ we will always refer to the outer normal on a given surface. We will use functions of bounded variation $BV(U)$, i.e. functions $f\in L^1(U)$ for which the distributional derivative is a vector-valued Radon measure. Here $|\nabla f|$ denotes the total variation measure. Note that for a smooth open set $E\subset \R^2$, $|\nabla \chi_E|$ coincides with the surface measure on $\partial E$. We will also use the reduced boundary $\partial_{red} E$. \section{Notion of solution and monotonicity formula}\label{notion} Let $\Omega$ be a bounded domain contained in $\{ (x_1,x_2): x_1\ge 0\}$, in which we consider the combined problem for fluid and air. We study solutions $u$, in a sense to be specified, of the problem \begin{align} \label{strongp} \div (\frac{1}{x_1} \nabla u)= \frac{\partial}{\partial x_1}\left(\we \frac{\partial u}{\partial x_1}\right)+\frac{\partial}{\partial x_2}\left(\we \frac{\partial u}{\partial x_2}\right) &=0 \quad\text{in } \Omega \cap \{u>0\},\\ \frac{1}{x_1^2}{\vert \nabla u\vert}^2 &= x_2 \quad\text{on } \Omega\cap\partial \{u>0\}.\non\end{align} Note that, compared to the Introduction, we have switched notation from $\psi$ to $u$, and we have ``reflected'' the problem at the hyperplane $\{x_2=0\}$. Since our results are completely local, we do not specify boundary conditions on $\partial\Omega$. We begin by introducing our notion of a {\em variational solution} of problem (\ref{strongp}).
\begin{definition}[Variational Solution] \label{vardef} We define $u\in W^{1,2}_{w,\textnormal{loc}}(\Omega)$ to be a {\em variational solution} of (\ref{strongp}) if $u\in C^0(\Omega)\cap C^2(\Omega \cap \{ u>0\})$, $\nabla u/x_1\in C^1(\Omega \cap \{ u>0\})$, $u=0$ on $\{ x_1=0\}$ (motivated by the fact that the velocity component orthogonal to the axis of symmetry should vanish on the axis), $u\geq 0$ in $\Omega$, and the first variation with respect to domain variations of the functional \[ J(v) := \int_\Omega \left(\we {\vert \nabla v \vert}^2 + x_1 x_2 \chi_{\{ v>0\}} \right)\,dx\] vanishes at $v=u$, i.e. \begin{align} 0 &= -\frac{d}{d\epsilon} J(u(x+\epsilon \phi(x)))|_{\epsilon=0} \non\\&= \int_\Omega \Bigg[ \left(\we{\vert \nabla u \vert}^2 +x_1x_2 \chi_{\{ u>0\}}\right)\div\phi-2\we \nabla u D\phi\nabla u \non\\&\qquad+ \left( -\frac{1}{x_1^2}\vert\nabla u\vert^2+ x_2\chi_{\{ u>0\}}\right)\phi_1 +x_1\chi_{\{ u>0\}} \phi_2\Bigg]\,dx\non\end{align} for any $\phi=(\phi_1,\phi_2) \in C^1_0(\Omega;\R^2) \textrm{ such that } \phi_1=0 \textrm{ on } \{ x_1=0\}.$ \end{definition} A proof of the first variation formula just mentioned can be found in \cite[Section 3.2]{GH}. An integration by parts shows that $u$ satisfies on smooth parts of the free boundary $\partial\{u>0\}$ in $\{ x_1x_2\ne 0\}$ the free boundary condition $$\frac{1}{x_1}{\vert \nabla u\vert}^2 = x_1x_2.$$ \begin{theorem}[Monotonicity Formula]\label{elmon2} Let $u$ be a variational solution of (\ref{strongp}), let $x^0\in\Om$ and let $\delta:=\textnormal{dist}(x^0,\partial\Om)/2$. Let, for any $r\in (0,\delta)$, \begin{align} &I(r)=\int_{B^+_r(x^0)}\left(\we|\nabla u|^2+x_1x_2\chi_{\{u>0\}}\right)\,dx,\label{I}\\ &J(r)=\int_{\partial B^+_r(x^0)}\we u^2\dh,\label{J}\\ &M^{int}(r)=r^{-2}I(r)- r^{-3}J(r),\label{Mint}\\ &M^{x_2}(r)=r^{-3}I(r)- \frac{3}{2} r^{-4}J(r),\label{Mx2}\\ &M^{x_1}(r)=r^{-3}I(r)- 2 r^{-4}J(r),\label{Mx1}\\ &M^{x_1x_2}(r)=r^{-4}I(r)- \frac{5}{2} r^{-5}J(r).\label{Mx1x2} \end{align} Then, for a.e. $r\in (0,\delta)$, \begin{align} \label{Mprint}&(M^{int}(r))' = 2r^{-2}\int_{\partial B^+_r(x^0)}\we\left(\nabla u\cdot \nu-\frac{u}{r}\right)^2\dh\\ &\quad\quad + r^{-3}\int_{B^+_r(x^0)}\left( -\frac{x_1-x^0_1}{x_1^2}|\nabla u|^2 + \left[ (x_1-x^0_1)x_2 + (x_2-x^0_2)x_1\right]\chi_{\{ u>0\}}\right) \,dx\non \\ &\quad\quad + r^{-4} \int_{\partial B^+_r(x^0)} \frac{x_1-x^0_1}{(x_1)^2} u^2 \dh.\non \intertext{In the case $x^0_2=0$,} \label{Mprx2}&(M^{x_2}(r))' = 2r^{-3}\int_{\partial B^+_r(x^0)}\we\left(\nabla u\cdot \nu-\frac{3}{2}\frac{u}{r}\right)^2\dh\\ &\quad\quad\non + r^{-4} \int_{B^+_r(x^0)}\left( -\frac{x_1-x^0_1}{x_1^2}|\nabla u|^2 + (x_1-x^0_1)x_2 \chi_{\{ u>0\}}\right) \,dx\non\\ & \quad\quad + \frac{3}{2}r^{-5} \int_{\partial B^+_r(x^0)} \frac{x_1-x^0_1}{(x_1)^2} u^2 \dh\non. \intertext{In the case $x^0_1=0$,} \label{Mprx1}&(M^{x_1}(r))' = 2r^{-3}\int_{\partial B^+_r(x^0)}\we\left(\nabla u\cdot \nu-2\frac{u}{r}\right)^2\dh\\ &\quad\quad\non + r^{-4}\int_{B^+_r(x^0)} (x_2-x^0_2)x_1 \chi_{\{ u>0\}} \,dx. \intertext{Last, in the case $x^0_1=x^0_2=0$,} \label{Mprx1x2}&(M^{x_1x_2}(r))' =2r^{-4}\ipbrx\we\left(\nabla u\cdot \nu-\frac{5}{2}\frac{u}{r}\right)^2\dh. \end{align} \end{theorem} \begin{remark} \label{elrem} (i) The integrand in the first integral on the right-hand side of (\ref{Mprint}) is a scalar multiple of $ (\nabla u(x) \cdot (x-x^0) - u(x))^2$, and therefore vanishes if and only if $u$ is a homogeneous function of degree $1$ with respect to $x^0$.
\\ (ii) The integrand in the first integral on the right-hand side of (\ref{Mprx2}) is a scalar multiple of $ (\nabla u(x) \cdot (x-x^0) - \frac{3}{2}u(x))^2$, and therefore vanishes if and only if $u$ is a homogeneous function of degree $3/2$ with respect to $x^0$. \\ (iii) The integrand in the first integral on the right-hand side of (\ref{Mprx1}) is a scalar multiple of $ (\nabla u(x) \cdot (x-x^0) - 2u(x))^2$, and therefore vanishes if and only if $u$ is a homogeneous function of degree $2$ with respect to $x^0$. \\ (iv) The integrand in the first integral on the right-hand side of (\ref{Mprx1x2}) is a scalar multiple of $ (\nabla u(x) \cdot x - \frac{5}{2}u(x))^2$, and therefore vanishes if and only if $u$ is a homogeneous function of degree $5/2$. \end{remark} \begin{proof} First, for each $u\in W^{1,2}_{w,loc}(\R^2_+)$, each $\alpha \in\R$ and a.e. $r\in(0,\delta)$ we obtain, setting $w_r(x):= u(x^0+rx)$, \begin{align} & \frac{d}{dr} \left(r^\alpha \int_{\partial B^+_r(x^0)} \we u^2 \dh\right) = \frac{d}{{dr}} \left(r^{\alpha+n-1} \int_{\partial B^+_1} \frac{1}{x^0_1+rx_1}w_r^2 \dh\right)\label{easy}\\ \non &= (\alpha+n-1) r^{\alpha-1} \int_{\partial B^+_r(x^0)} \we u^2 \dh - r^{\alpha+n-1} \int_{\partial B^+_1} \frac{x_1}{(x^0_1+rx_1)^2}w_r^2 \dh\\ \non &\quad + r^{\alpha+n-1} \int_{\partial B^+_1} \frac{2}{x^0_1+rx_1}w_r\nabla u(x^0+rx)\cdot x \dh\\ \non &= (\alpha+n-1) r^{\alpha-1} \int_{\partial B^+_r(x^0)} \we u^2 \dh - r^{\alpha-1} \int_{\partial B^+_r(x^0)} \frac{x_1-x^0_1}{(x_1)^2} u^2 \dh\\ &\quad +r^{\alpha} \int_{\partial B^+_r(x^0)} \frac{2}{x_1}u\nabla u\cdot \nu \dh.\non \end{align} Suppose now that $u$ is a variational solution of (\ref{strongp}). For small positive $\kappa$ and $\eta_\kappa(t) := \max(0,\min(1,\frac{r- t}{ \kappa}))$, we take after approximation $\phi_\kappa(x) := \eta_\kappa(|x-x^0|)(x-x^0) $ as a test function in the definition of a variational solution, obtaining \begin{align}\non 0 =& \int_{\Om} \left(\we |\nabla u |^2+x_1x_2\chi_{\{ u>0\}}\right)\left(2\eta_\kappa(|x-x^0|)+\eta_\kappa'(|x-x^0|)|x-x^0|\right)\,dx\\ \non &-\int_{\Om}\frac{2}{x_1}\bigg[(\partial_1 u)^2\left(\eta_\kappa(|x-x^0|)+\eta_\kappa'(|x-x^0|)\frac{(x_1-x^0_1)^2}{|x-x^0|}\right)\\ \non &+(\partial_2 u)^2\left(\eta_\kappa(|x-x^0|)+\eta_\kappa'(|x-x^0|)\frac{(x_2-x^0_2)^2}{|x-x^0|}\right)\non\\&\qquad\qquad\qquad +(\partial_1 u)(\partial_2 u)2\eta_\kappa'(|x-x^0|)\frac{(x_1-x^0_1)(x_2-x^0_2)}{|x-x^0|}\bigg]\,dx\non\\& \non +\int_{\Om}\bigg(-\frac{x_1-x^0_1}{x_1^2}|\nabla u|^2\eta_\kappa(|x-x^0|)+\big[ (x_1-x^0_1)x_2 \\ &\qquad\qquad\qquad + (x_2-x^0_2)x_1\big]\chi_{\{ u>0\}}\eta_\kappa(|x-x^0|)\bigg)\,dx.\non\end{align} Passing to the limit as $\kappa\to 0$, we obtain for a.e. $r \in (0,\delta)$, \begin{align}\label{mff2} 0=& 2\int_{B^+_r(x^0)} x_1x_2 \chi_{\{ u>0\}} \,dx - r \int_{\partial B^+_r(x^0)} \left(\we {\vert \nabla u \vert}^2 + x_1x_2 \chi_{\{ u>0\}} \right)\dh\\ &\quad + 2r\int_{\partial B^+_r(x^0)} \we (\nabla u \cdot \nu)^2\,\dh\non \\ & + \int_{B^+_r(x^0)}\bigg( -\frac{x_1-x^0_1}{x_1^2}|\nabla u|^2 + \big[ (x_1-x^0_1)x_2 + (x_2-x^0_2)x_1\big]\chi_{\{ u>0\}}\bigg) \,dx.\non \end{align} Observe that letting $\epsilon\to 0$ in \[\int_{B^+_r(x^0)} \we \nabla u \cdot \nabla \max(u-\epsilon,0)^{1+\epsilon}\,dx =\int_{\partial B^+_r(x^0)} \we \max(u-\epsilon,0)^{1+\epsilon} \nabla u \cdot \nu\, \dh\] for a.e.
$r \in (0,\delta)$, we obtain the integration by parts formula \begin{equation}\label{part2} \int_{B^+_r(x^0)} \we {\vert \nabla u \vert}^2\,dx =\int_{\partial B^+_r(x^0)} \we u \nabla u \cdot \nu\, \dh \end{equation} for a.e. $r \in (0,\delta).$ Note that \begin{align*} (r^{-2}I(r))'=& -2 r^{-3} \int_{B^+_r(x^0)} \left(\we|\nabla u|^2 + x_1 x_2 \chi_{\{ u>0\}}\right) \,dx\non\\ &\qquad+r^{-2} \int_{\partial B^+_r(x^0)}\left(\we|\nabla u|^2 + x_1x_2 \chi_{\{ u>0\}}\right)\dh, \end{align*} so that by (\ref{mff2}) and (\ref{part2}), \begin{align}(r^{-2}I(r))'& = r^{-3}\Bigg(2r\int_{\partial B^+_r(x^0)}\we(\nabla u\cdot \nu)^2\dh- 2\int_{\partial B^+_r(x^0)}\we u \nabla u\cdot \nu\dh\label{Iprint}\\ & + \int_{B^+_r(x^0)}\left( -\frac{x_1-x^0_1}{x_1^2}|\nabla u|^2 + \left[ (x_1-x^0_1)x_2 + (x_2-x^0_2)x_1\right]\chi_{\{ u>0\}}\right) \,dx \Bigg),\non \end{align} Combining (\ref{Iprint}) and (\ref{easy}) with $\alpha=-3$ yields (\ref{Mprint}). Moreover, \begin{align} (r^{-3}I(r))'& =-3 r^{-4} \int_{B^+_r(x^0)} \left(\we|\nabla u|^2 + x_1 x_2 \chi_{\{ u>0\}}\right) \,dx\label{oldidx1}\\ & \qquad+r^{-3} \int_{\partial B^+_r(x^0)}\left(\we|\nabla u|^2 + x_1x_2 \chi_{\{ u>0\}}\right)\dh.\non \end{align} In the case $x^0_2=0$ we obtain from (\ref{oldidx1}), using (\ref{mff2}) and (\ref{part2}), that \begin{align} (r^{-3}I(r))'&=r^{-4}\Bigg(2r\int_{\partial B^+_r(x^0)}\we(\nabla u\cdot \nu)^2\dh- 3\int_{\partial B^+_r(x^0)}\we u \nabla u\cdot \nu\dh\label{Iprx2}\\ & + \int_{B^+_r(x^0)}\left( -\frac{x_1-x^0_1}{x_1^2}|\nabla u|^2 + (x_1-x^0_1)x_2 \chi_{\{ u>0\}}\right) \,dx \Bigg),\non \end{align} Combining (\ref{Iprx2}) and (\ref{easy}) with $\alpha=-4$ yields (\ref{Mprx2}). On the other hand, in the case $x^0_1=0$ we obtain from (\ref{oldidx1}), using (\ref{mff2}) and (\ref{part2}), that \begin{align} (r^{-3}I(r))'&=r^{-4}\Bigg(2r\int_{\partial B^+_r(x^0)}\we(\nabla u\cdot \nu)^2\dh- 4\int_{\partial B^+_r(x^0)}\we u \nabla u\cdot \nu\dh\label{Iprx1}\\ & + \int_{B^+_r(x^0)} (x_2-x^0_2)x_1 \chi_{\{ u>0\}} \,dx \Bigg),\non \end{align} Combining (\ref{Iprx1}) and (\ref{easy}) with $\alpha=-4$ yields (\ref{Mprx1}). Last, in the case $x^0_1=x^0_2=0$, since \begin{align*} (r^{-4}I(r))' &=-4 r^{-5} \int_{B^+_r(0)} \left(\we|\nabla u|^2 + x_1 x_2 \chi_{\{ u>0\}}\right) \,dx\non\\&\qquad+r^{-4} \int_{\partial B^+_r(0)}\left(\we|\nabla u|^2 + x_1x_2 \chi_{\{ u>0\}}\right)\dh,\end{align*} we obtain from (\ref{mff2}) and (\ref{part2}) that \be(r^{-4}I(r))'= r^{-5}\left(2r\ipbrx\we(\nabla u\cdot \nu)^2\dh- 5\ipbrx\we u (\nabla u\cdot \nu)\dh \right).\label{Iprx1x2}\ee Combining (\ref{Iprx1x2}) and (\ref{easy}) with $\alpha=-5$ yields (\ref{Mprx1x2}). \end{proof} \begin{lemma}[Bernstein estimate] In $\{ u>0\}$, the solution satisfies $$\Delta\left(\frac{|\nabla u|^2}{x_1}- x_1 x_2\right)= 2\sum_{i,j=1}^2 \frac{(\partial_{ij} u)^2}{x_1}.$$ \end{lemma} \begin{proof} Direct calculation. \end{proof} \begin{remark} Constructing barrier solutions it is therefore possible to verify $\frac{|\nabla u|^2}{x_1}- x_1 x_2\le 0$ for certain domains $\subset \{ x_2>0\}$, certain Dirichlet boundary data and the {\em minimal solution} $u$ (cf. \cite{ejde}). \end{remark} \begin{definition}[Weak Solution]\label{weak} We define $u\in W^{1,2}_{w,\textnormal{loc}}(\Omega)$ to be a {\em weak solution} of (\ref{strongp}) if the following are satisfied: $u$ is a {\em variational solution} of (\ref{strongp}) and the topological free boundary $\partial \{ u>0\} \cap \Omega^\circ \cap \{ x_2 \ne 0\}$ is locally a $C^{2,\alpha}$-surface. 
\end{definition} \begin{remark} (i) It follows that in $\Omega^\circ \cap \{ x_2\ne 0\}$ the solution is a classical solution of (\ref{strongp}). It follows also that $\partial\{ u>0\}\subset \{ x_2\ge 0\}$. (ii) For any weak solution $u$ of {\rm (\ref{strongp})} such that \[\frac{|\nabla u|^2}{x_1}\le Cx_1 |x_2|\quad\text{locally in }\Omega,\] $u$ is a variational solution of {\rm (\ref{strongp})}, $ \chi_{\{ u>0\}}$ is locally in $\Omega^\circ\cap \{ x_2> 0\}$ a function of bounded variation, and the total variation measure $|\nabla \chi_{\{ u>0\}}|$ satisfies \[\int_{B^+_r(x^0)} \sqrt{x^+_2}\,d|\nabla \chi_{\{ u>0\}}|\le C_1 \left\{\begin{array}{l} r^{3\over 2}, x^0_2=0\\ r \sqrt{x^0_2}, x^0_2> 0\end{array}\right.\] for all $B^+_r(x^0)\subset\subset \Omega$. The reason is that, integrating by parts, \begin{align*}0 & = \int_{B^+_r(x^0)\cap \{ u>0\}} \div (\we \nabla u) \le \int_{\partial B^+_r(x^0)\cap \{ u>0\}}\frac{|\nabla u|}{x_1}\dh\\&\quad + \int_{B_r(x^0)\cap \{ x_1=0\}} \frac{|\nabla u|}{x_1}\dh -\int_{B^+_r(x^0)\cap \partial_{\rm red} \{ u>0\}} \frac{|\nabla u|}{x_1}\dh \\& \le C_1 \left(\int_{\partial B^+_r(x^0)}\sqrt{x^+_2}\dh + \int_{B_r(x^0)\cap \{ x_1=0\}}\sqrt{x^+_2}\dh\right) \\&\quad -\int_{B^+_r(x^0)\cap \partial_{\rm red} \{ u>0\}}\sqrt{x^+_2}\dh. \end{align*} \end{remark} \begin{lemma}\label{density_1} Let $u$ be a variational solution of {\rm (\ref{strongp})} and suppose that \[\frac{|\nabla u|^2}{x_1}\le C x_1 |x_2|\quad\text{locally in }\Omega.\] Then: (i) The limit $M^{int}(0+)=\lim_{r\to 0+} M^{int}(r)$ exists and is finite. If $x^0_2=0$, then the limit $M^{x_2}(0+)=\lim_{r\to 0+} M^{x_2}(r)$ exists and is finite. If $x^0_1=0$, then the limit $M^{x_1}(0+)=\lim_{r\to 0+} M^{x_1}(r)$ exists and is finite. If $x^0_1=x^0_2=0$, then the limit $M^{x_1x_2}(0+)=\lim_{r\to 0+} M^{x_1x_2}(r)$ exists and is finite. (iii) Let $x^0_1>0$, $x^0_2>0$ and $0<r_m\to 0+$ as $m\to \infty$ be a sequence such that the {\em blow-up} sequence \begin{equation}\label{blo1} u_m(x) := {u(x^0+{r_m}x)/ {r_m}}\end{equation} converges weakly in $W^{1,2}_{\textnormal{loc}}(\R^2)$ to a blow-up limit $u_0$. Then $u_0$ is a homogeneous function of degree $1$, i.e. $u_0(\lambda x) = \lambda u_0(x)$. Let $x^0_2=0$ and let $0<r_m\to 0+$ as $m\to \infty$ be a sequence such that the {\em blow-up} sequence \begin{equation}\label{blo2}u_m(x) := {u(x^0+{r_m}x)/ {r_m}^{3\over 2}}\end{equation} converges weakly in $W^{1,2}_{\textnormal{loc}}(\R^2)$ to a blow-up limit $u_0$. Then $u_0$ is a homogeneous function of degree $3/2$. Let $x^0_1=0$ and let $0<r_m\to 0+$ as $m\to \infty$ be a sequence such that the {\em blow-up} sequence \begin{equation}\label{blo3}u_m(x) := {u(x^0+{r_m}x)/ {r_m}^2}\end{equation} converges weakly in $W^{1,2}_{w,\textnormal{loc}}(\R^2_+)$ to a blow-up limit $u_0$. Then $u_0$ is a homogeneous function of degree $2$. Let $x^0_1=x^0_2=0$ and let $0<r_m\to 0+$ as $m\to \infty$ be a sequence such that the {\em blow-up} sequence \begin{equation}\label{blo4}u_m(x) := {u(x^0+{r_m}x)/ {r_m}^{5\over 2}}\end{equation} converges weakly in $W^{1,2}_{w,\textnormal{loc}}(\R^2_+)$ to a blow-up limit $u_0$. Then $u_0$ is a homogeneous function of degree $5/2$. (iii) Let $u_m$ be one of the converging sequences in (ii). Then $u_m$ converges {\em strongly} in $W^{1,2}_{w,\textnormal{loc}}(\R^2_+)$ ({\em strongly} in $W^{1,2}_{\textnormal{loc}}(\R^2)$ in the cases where $x^0_1>0$). 
(iv) If $x^0_1>0$ and $x^0_2\ne 0$, then $$M^{int}(0+)=x^0_1 x^0_2 \lim_{r\to 0+} r^{-2} \int_{B^+_r(x^0)}\chi_{\{ u>0\}}\,dx.$$ Moreover, $M^{int}(0+)=0$ implies that $u_0=0$ in $\R^2$ for each blow-up limit $u_0$ of $u_m(x) = u(x^0+{r_m}x)/r_m$. If $x^0_1>0$ and $x^0_2=0$, then $$M^{x_2}(0+)=x^0_1 \lim_{r\to 0+} r^{-3} \int_{B^+_r(x^0)}x_2 \chi_{\{ u>0\}}\,dx.$$ If $x^0_1=0$ and $x^0_2\ne 0$, then $$M^{x_1}(0+)=x^0_2 \lim_{r\to 0+} r^{-3} \int_{B^+_r(x^0)}x_1 \chi_{\{ u>0\}}\,dx.$$ Moreover, $M^{x_1}(0+)=0$ implies that $u_0=0$ in $\R^2_+$ for each blow-up limit $u_0$ of $u_m(x) = u(x^0+{r_m}x)/{r_m}^2$. If $x^0_1=x^0_2=0$, then $$M^{x_1x_2}(0+)=\lim_{r\to 0+} r^{-4} \int_{B^+_r(x^0)}x_1 x_2 \chi_{\{ u>0\}}\,dx.$$ \end{lemma} \begin{proof} (i) follows from the assumption \[\frac{|\nabla u|^2}{x_1}\le C x_1 |x_2|\quad\text{locally in }\Omega\] together with Theorem \ref{elmon2}. (ii): For each $0<\sigma<\infty$ the sequence $u_m$ is in each case by assumption bounded in $C^{0,1}(B^+_\sigma)$ (bounded in $C^{0,1}(B_\sigma)$ in the case that $x^0_1>0$). For any $0<\tau<\sigma<\infty$, we write the identities (\ref{Mprint}), (\ref{Mprx2}), (\ref{Mprint}), (\ref{Mprx1x2}) in integral form as \begin{align} &2\int_\tau^\sigma r^{-2} \int_{\partial B^+_r(x^0)}{1\over {x_1}} \left(\nabla u\cdot \nu -\frac{u}{r}\right)^2\dh dr\non \\ &=M^{int}(\sigma)-M^{int}(\tau)-\int_\tau^\sigma K^{int}(r)\,dr\label{jhmint} \textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2> 0,\\ &2\int_\tau^\sigma r^{-3} \int_{\partial B^+_r(x^0)}{1\over {x_1}} \left(\nabla u\cdot \nu -\frac{3}{2}\frac{u}{r}\right)^2\dh dr\non \\ &=M^{x_2}(\sigma)-M^{x_2}(\tau)-\int_\tau^\sigma K^{x_2}(r)\,dr\label{jhmx2} \textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2=0, \end{align} \begin{align} &2\int_\tau^\sigma r^{-3} \int_{\partial B^+_r(x^0)}{1\over {x_1}} \left(\nabla u\cdot \nu -2\frac{u}{r}\right)^2\dh dr\non \\ &=M^{x_1}(\sigma)-M^{x_1}(\tau)-\int_\tau^\sigma K^{x_1}(r)\,dr\label{jhmx1} \textrm{ in the case } x^0_1=0 \textrm{ and } x^0_2> 0,\\ &2\int_\tau^\sigma r^{-4} \int_{\partial B^+_r(x^0)}{1\over {x_1}} \left(\nabla u\cdot \nu -\frac{5}{2}\frac{u}{r}\right)^2\dh dr\non \\ &=M^{x_1x_2}(\sigma)-M^{x_1x_2}(\tau)\,dr\label{jhmx1x2} \textrm{ in the case } x^0_1=x^0_2=0;\end{align} here $K^{int}, K^{x_2}$ and $K^{x_1}$ are defined by (\ref{jhmint}), (\ref{jhmx2}) and (\ref{jhmx1}), and they are all integrable. 
It follows by rescaling in (\ref{jhmint})-(\ref{jhmx1x2}) that \begin{align*} 2\int_{B_\sigma(0)\setminus B_\tau(0)}&|x|^{-3}{1\over {x_1}}\left(\nabla u_m(x) \cdot x - u_m(x)\right)^2\,dx\\&\leq M^{int}(r_m\sigma)-M^{int}(r_m\tau)+\int_{r_m\tau}^{r_m\sigma} |K^{int}(r)|\,dr \to 0\quad\text{as }m\to\infty,\\ &\textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2> 0,\\ 2\int_{B_\sigma(0)\setminus B_\tau(0)}&|x|^{-5}{1\over {x_1}}\left(\nabla u_m(x) \cdot x - \frac{3}{2} u_m(x)\right)^2\,dx\\&\leq M^{x_2}(r_m\sigma)-M^{x_2}(r_m\tau)+\int_{r_m\tau}^{r_m\sigma} |K^{x_2}(r)|\,dr \to 0\quad\text{as }m\to\infty,\\ &\textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2=0,\\ 2\int_{B^+_\sigma(0)\setminus B^+_\tau(0)}&|x|^{-5}{1\over {x_1}}\left(\nabla u_m(x) \cdot x - 2 u_m(x)\right)^2\,dx\\&\leq M^{x_1}(r_m\sigma)-M^{x_1}(r_m\tau)+\int_{r_m\tau}^{r_m\sigma} |K^{x_1}(r)|\,dr \to 0\quad\text{as }m\to\infty,\\ &\textrm{ in the case } x^0_1=0 \textrm{ and } x^0_2> 0,\\ 2\int_{B^+_\sigma(0)\setminus B^+_\tau(0)}&|x|^{-6}{1\over {x_1}}\left(\nabla u_m(x) \cdot x - \frac{5}{2} u_m(x)\right)^2\,dx\\&\leq M^{x_1x_2}(r_m\sigma)-M^{x_1x_2}(r_m\tau)\to 0\quad\text{as }m\to\infty\\ &\textrm{ in the case } x^0_1=x^0_2=0, \end{align*} which yields the desired homogeneity of $u_0$. (iii): In order to show strong convergence of $\nabla u_m$, it is in view of the weak $L^2_w$-convergence of $\nabla u_m$ sufficient to prove convergence of the $L^2_w$-norm. Let $\delta:=\text{dist}(x^0,\partial\Om)/2$. Then, for each $m$, $u_m$ is a variational solution of \begin{align} \label{strongpm} \div\left( \frac{\nabla u_m(x)}{(x^0+r_mx)_1}\right) &=0\quad\left\{ \begin{array}{l}\textrm{in } B_{\delta/r_m} \cap \{u_m>0\} \textrm{ in the case } x^0_1>0,\\ \textrm{in } B^+_{\delta/r_m} \cap \{u_m>0\} \textrm{ in the case } x^0_1=0. \end{array}\right. \end{align} Since $u_m$ converges to $u_0$ locally uniformly, it follows from (\ref{strongpm}) that $u_0$ is harmonic in $\{ u_0>0\}$ in the case $x^0_1>0$ and a solution of the equation $$\div\left({1\over {x_1}}\nabla u_0\right)=0$$ in the case $x^0_1=0$. Also, using the uniform convergence, the continuity of $u_0$ and its solution property in $\{ u_0>0\}$ we obtain as in the proof of (\ref{part2}) that \begin{align*}&o(1)+\int_{\R^2} \frac{1}{x^0_1}|\nabla u_m|^2\eta\,dx \\&=\int_{\R^2} \frac{1}{(x^0+r_mx)_1}|\nabla u_m|^2\eta\,dx = -\int_{\R^2} u_m \frac{1}{(x^0+r_mx)_1} \nabla u_m\cdot \nabla \eta \,dx \\&\to -\int_{\R^2} u_0 \frac{1}{x^0_1}\nabla u_0\cdot \nabla \eta\,dx =\frac{1}{x^0_1} \int_{\R^2} |\nabla u_0|^2\eta\,dx\textrm{ in the case } x^0_1>0 \textrm{ and that} \\ &\int_{\R^2_+} \frac{1}{x_1}|\nabla u_m|^2\eta\,dx = -\int_{\R^2_+} u_m \frac{1}{x_1} \nabla u_m\cdot \nabla \eta \,dx \\&\to -\int_{\R^2_+} u_0 \frac{1}{x_1}\nabla u_0\cdot \nabla \eta\,dx = \int_{\R^2_+} \we |\nabla u_0|^2\eta\,dx \textrm{ in the case } x^0_1=0 \end{align*} as $m\to \infty$. It therefore follows that $\nabla u_m$ converges strongly in $L^2_w$ (and in $L^2$ if $x^0_1>0$) to $\nabla u_0$ as $m\to\infty$. (iv): Let us take a sequence $r_m\to 0+$ such that $u_m$ defined in (\ref{blo1})-(\ref{blo4}) converges weakly in $W^{1,2}_{w,\textnormal{loc}}(\R^2_+)$ (weakly in $W^{1,2}_{\textnormal{loc}}(\R^2)$ in the case $x^0_1>0$) to a function $u_0$. 
Using (iii) and the homogeneity of $u_0$, we obtain that \begin{align*} &\lim_{m\to \infty} M^{int}(r_m) = {1\over {x^0_1}}\left(\int_{B_1} |\nabla u_0|^2\,dx - \int_{\partial B_1} u_0^2 \dh \right)\non \\ \non &\qquad + \lim_{r\to 0+} r^{-2} \int_{B^+_r(x^0)} x_1x_2 \chi_{\{ u>0\}}\,dx\\ & = x^0_1 x^0_2 \lim_{r\to 0+} r^{-2} \int_{B^+_r(x^0)} \chi_{\{ u>0\}}\,dx,\\& \lim_{m\to \infty} M^{x_2}(r_m) = {1\over {x^0_1}}\left(\int_{B_1} |\nabla u_0|^2\,dx - \frac{3}{2}\int_{\partial B_1} u_0^2 \dh \right)\non \\ \non &\qquad + \lim_{r\to 0+} r^{-3} \int_{B^+_r(x^0)} x_1x_2 \chi_{\{ u>0\}}\,dx\\ & = x^0_1 \lim_{r\to 0+} r^{-3} \int_{B^+_r(x^0)} x_2 \chi_{\{ u>0\}}\,dx,\\& \lim_{m\to \infty} M^{x_1}(r_m) = \int_{B_1^+} \we |\nabla u_0|^2\,dx - 2\int_{\partial B_1^+} \we u_0^2 \dh \non \\ \non &\qquad + \lim_{r\to 0+} r^{-3} \int_{B^+_r(x^0)} x_1x_2 \chi_{\{ u>0\}}\,dx \end{align*} \begin{align*} & = x^0_2 \lim_{r\to 0+} r^{-3} \int_{B^+_r(x^0)} x_1\chi_{\{ u>0\}}\,dx,\\& \lim_{m\to \infty} M^{x_1 x_2}(r_m) = \int_{B_1^+} \we |\nabla u_0|^2\,dx - \frac{5}{2}\int_{\partial B_1^+} \we u_0^2 \dh \non \\ \non &\qquad + \lim_{r\to 0+} r^{-4} \int_{B^+_r(x^0)} x_1x_2 \chi_{\{ u>0\}}\,dx\\ & = \lim_{r\to 0+} r^{-4} \int_{B^+_r(x^0)} x_1x_2 \chi_{\{ u>0\}}\,dx.\end{align*} In the case $x^0_2>0$, $M^{int}(0+)\ge 0$ and $M^{x_1}(0+)\ge 0$, and equality implies that $u_m$ converges to $0$ in measure in $\R^2_+$. \end{proof} The next lemma will be useful in the characterization of blow-up limits in Proposition \ref{2dim}. \begin{lemma}\label{legendre} The Legendre function $y=P_{3/2}$ satisfies $$x\mapsto\frac{y'(x)}{y'(-x)} \textrm{ is strictly increasing on } (-1,1).$$ \end{lemma} \begin{proof} It suffices to prove that $$y''(x)y'(-x)+y''(-x)y'(x)>0 \textrm{ in } (-1,1).$$ Using the differential equation $$(1-x^2)y''(x)-2xy'(x)+ \frac{3}{2}\frac{5}{2}y(x)=0,$$ we obtain $$y''(x)y'(-x)+y''(-x)y'(x)=-\frac{15}{4}\frac{1}{1-x^2} (y(x)y'(-x)+y(-x)y'(x)).$$ Therefore it is sufficient to prove that $f(x)=y(x)y'(-x)+y(-x)y'(x)<0$ in $(-1,1)$. As $f(x)\to -\infty$ for $|x|\to 1$, must have a maximum point in $(-1,1)$. At the maximum point, $$0=f'(x)=\frac{2x}{1-x^2} (y(x)y'(-x)+y(-x)y'(x)),$$ implying that $x=0$ and that $$\max f = 2 y(0)y'(0)=3\frac{\sqrt{\pi}}{\Gamma(-{1\over 2})\Gamma({7\over 4})}P_{1\over 2}(0)$$ $$= 3\frac{\sqrt{\pi}}{\Gamma(-{1\over 4})\Gamma({7\over 4})} \frac{\sqrt{\pi}}{\Gamma({1\over 4})\Gamma({5\over 4})}<0$$ (see http://functions.wolfram.com/07.07.20.0006.01,\\ http://functions.wolfram.com/07.07.03.0001.01). \end{proof} \begin{proposition}[Characterization of blow-up Limits]\label{2dim} Let $u$ be a variational solution of {\rm (\ref{strongp})}, and suppose that \[\frac{|\nabla u|^2}{x_1}\le C x_1 |x_2|\quad\text{locally in }\Omega,\] and that \[\int_{B^+_r(x^0)} \sqrt{x_2^+}\,d|\nabla \chi_{\{ u>0\}}|\le C_1 \left\{\begin{array}{l} r^{3\over 2}, x^0_2=0\\ r \sqrt{x^0_2}, x^0_2> 0\end{array}\right.\] for all sufficiently small $r>0$. \\ Then the following hold: \\ (i) In the case $x^0_1>0$ and $x^0_2>0$, the only possible blow-up limits of $u_m(x) = u(x^0+{r_m}x)/r_m$ are \[u_0(x)= x^0_1\sqrt{x^0_2}\max(x\cdot e,0)\qquad\text{and}\qquad u_0(x)=\gamma |x\cdot e|,\] where $e$ is a unit vector and $\gamma$ is a nonnegative constant. 
In the case $u_0(x)= x^0_1\sqrt{x^0_2}\max(x\cdot e,0)$, the corresponding density is $M^{int}(0+)=x^0_1x^0_2 \omega_2/2$, in the case $u_0(x)=\gamma |x\cdot e|$ with $\gamma>0$ the density is $M^{int}(0+)=x^0_1x^0_2\omega_2$, while in the case $u_0=0$ the density has possible values $M^{int}(0+)\in\{0,x^0_1x^0_2\omega_2\}$. \\ (ii) In the case $x^0_1>0$ and $x^0_2=0$, the only possible blow-up limits are \[u_0(\rho \sin \theta,\rho \cos \theta)= \frac{\sqrt{2}x^0_1}{3}\rho^{3/2}\cos({3\over 2}\theta)\chi_{\{(\rho\sin\theta,\rho\cos\theta): -\pi/3<\theta<\pi/3\}},\] with corresponding density \[ M^{x_2}(0+)=x^0_1 \int_{B_1} x_2\chi_{\{(\rho\sin\theta,\rho\cos\theta): -\pi/3<\theta<\pi/3\}}\,dx,\] and $u_0(x)=0$, with possible values of the density \[ M^{x_2}(0+)\in \left\{x^0_1 \int_{B_1} x_2^+\,dx, x^0_1 \int_{B_1} x_2^-\,dx, 0\right\}.\] \\ (iii) In the case $x^0_1=0$ and $x^0_2>0$, the only possible blow-up limits are \[u_0(x) = \gamma x_1^2\] with $\gamma$ a nonnegative constant and corresponding density $$M^{x_1}(0+)=x^0_2 \int_{B^+_1} x_1\,dx,$$ and $u_0(x)=0$, with possible values of the density \[M^{x_1}(0+)\in\left\{x^0_2 \int_{B^+_1} x_1\,dx, 0\right\}.\] \\ (iv) In the case $x^0_1=x^0_2=0$, the only possible blow-up limits are \[u_0(\rho\sin\theta,\rho\cos\theta)= r^{5\over 2}\> U_\ell(\theta)\] with corresponding density \[ M^{x_1x_2}(0+)=\int_{B^+_1\cap \{(\rho\sin\theta,\rho\cos\theta): P'_{3/2}(-\cos \theta)<0\}} x_1 x_2\,dx ,\] where $P_{3/2}$ is the Legendre function and $U_\ell$ is a unique function which is positive in $B^+_1\cap \{P'_{3/2}(-\cos \theta)<0\}$ (an angle of $\approx 114.799^\circ$ in the positive $x_2$-direction) and zero else, and $u_0(x)=0$, with possible values of the density \[ M^{x_1x_2}(0+)\in\left\{\int_{B^+_1} x_1 x_2^+\,dx, \int_{B^+_1} x_1 x_2^-\,dx,0\right\}.\] For $U_\ell$ we have the relations $$\frac{5}{2} U_\ell(\theta)=c_0 \sin^2 \theta P_{3/2}'(\cos\theta), U_\ell'(\theta) = c_0 \frac{3}{2} \sin\theta P_{3/2}(\cos \theta)$$ with a unique positive constant $c_0$. \end{proposition} \begin{proof} Consider a blow-up sequence $u_m$ as in Lemma \ref{density_1}, where $r_m\to 0+$, with blow-up limit $u_0$. 
Because of the strong convergence of $\nabla u_m$ to $\nabla u_0$ in $L^2$ and the compact embedding from $BV$ into $L^1$, $u_0$ is a homogeneous solution of \begin{align} \label{dmvint} 0 = & \int_{\R^2} \frac{1}{x^0_1}\Big( {\vert \nabla u_0 \vert}^2 \div\phi - 2 \nabla u_0 D\phi \nabla u_0 \Big)\,dx +x^0_1 x^0_2 \int_{\R^2} \chi_0 \div\phi\,dx\\ \non & \textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2> 0,\\ \label{dmvx2}0 = &\int_{\R^2} \frac{1}{x^0_1} \Big( {\vert \nabla u_0 \vert}^2 \div\phi - 2 \nabla u_0 D\phi \nabla u_0 \Big)\,dx + \int_{\R^2} \Big(x^0_1 x_2 \chi_0 \div\phi + x^0_1\chi_0 \phi_2\Big)\,dx\\ \non &\textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2=0,\\ \label{dmvx1}0=& \int_{\R^2_+} \frac{1}{x_1} \Big( {\vert \nabla u_0 \vert}^2 \div\phi -\frac{1}{x_1}\vert\nabla u_0\vert^2 \phi_1 - 2 \nabla u_0 D\phi \nabla u_0 \Big)\,dx\\ \non & + \int_{\R^2_+} \Big(x_1 x^0_2 \chi_0 \div\phi\,dx + x^0_2 \chi_0 \phi_1 \Big)\,dx\\ \non &\textrm{ in the case } x^0_1=0 \textrm{ and }x^0_2> 0;\\ \label{dmvx1x2}0=& \int_{\R^2_+} \frac{1}{x_1} \Big({\vert \nabla u_0 \vert}^2 \div\phi -\frac{1}{x_1}\vert\nabla u_0\vert^2 \phi_1 - 2 \nabla u_0 D\phi \nabla u_0 \Big)\,dx\\ \non & + \int_{\R^2_+} \Big(x_1 x_2 \chi_0 \div\phi + x_2 \chi_0 \phi_1 + x_1\chi_0 \phi_2\Big)\,dx\\ \non &\textrm{ in the case } x^0_1=x^0_2=0; \end{align} the formulas are valid for every $\phi=(\phi_1,\phi_2) \in C^1_0(\R^2;\R^2)$ in the case $x^0_1>0$ and for every $\phi=(\phi_1,\phi_2) \in C^1_0(\R^2;\R^2)$ such that $\phi_1=0$ on $\{ x_1=0\}$ in the case $x^0_1=0$. Moreover $\chi_0$ is the strong $L^1_{\textnormal{loc}}$-limit of $\chi_{\{u_m>0\}}$ along a subsequence. The values of the function $\chi_0$ are almost everywhere in $\{0, 1\}$, and the locally uniform convergence of $u_m$ to $u_0$ implies that $\chi_0=1$ in $\{ u_0>0\}$. Moreover $\chi_0$ is constant in each connected component of $\{ u_0=0\}^\circ\setminus \{ x_2=0\}$. In the case $u_0= 0$, (\ref{dmvint})-(\ref{dmvx1x2}) show that $\chi_0$ is constant in $\{ x_2\ne 0\}$ in the cases (\ref{dmvx2}) and (\ref{dmvx1x2}) and that $\chi_0$ is constant in the cases (\ref{dmvint}) and (\ref{dmvx1}). Its value may be either $0$ or $1$. Let $z$ be an arbitrary point in $ \partial\{ u_0=0\} \setminus \{ 0\}$. Consider first the case when $B_\delta(z)\cap\{u_0>0\}$ has exactly one connected component. Note that the normal to $\partial \{ u_0=0\}$ has the constant value $\nu(z)$ in $B_\delta(z)$ for some $\delta>0$. Plugging in $\phi(x):= \eta(x)\nu(z)$ into (\ref{dmvint})-(\ref{dmvx1x2}), where $\eta \in C^1_0(B_\delta(z))$ is arbitrary, and integrating by parts, it follows that \begin{align} \label{dmvint2}0= & \int_{\partial\{ u_0>0\}} \left( -\frac{1}{x^0_1}|\nabla u_0|^2 + x^0_1x^0_2 (1-\bar\chi_0)\right)\eta \,d\mathcal{H}^1\\ \non & \textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2> 0,\\ \label{dmvx22}0= &\int_{\partial\{ u_0>0\}} \left( -\frac{1}{x^0_1}|\nabla u_0|^2 + x^0_1x_2 (1-\bar\chi_0)\right)\eta \,d\mathcal{H}^1\\ \non &\textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2=0,\\ \label{dmvx12}0= & \int_{\partial\{ u_0>0\}} \left( -\frac{1}{x_1}|\nabla u_0|^2 + x_1x^0_2 (1-\bar\chi_0)\right)\eta \,d\mathcal{H}^1\\ \non & \textrm{ in the case } x^0_1=0 \textrm{ and } x^0_2> 0,\\ \label{dmvx1x22}0= &\int_{\partial\{ u_0>0\}} \left( -\frac{1}{x_1}|\nabla u_0|^2 + x_1x_2 (1-\bar\chi_0)\right)\eta \,d\mathcal{H}^1\\ \non &\textrm{ in the case } x^0_1=x^0_2=0. \end{align} Here $\bar\chi_0$ denotes the constant value of $\chi_0$ in $\{ u_0=0\}^\circ$. 
Note that by Hopf's principle, $\nabla u_0\cdot \nu\ne 0$ on $B_\delta(z)\cap \partial\{ u_0>0\}$. In all cases it follows therefore that $\bar\chi_0\neq 1$, and hence necessarily $\bar\chi_0=0$. We deduce from (\ref{dmvint2})-(\ref{dmvx1x22}) that \begin{align*} |\nabla u_0|^2=&(x^0_1)^2x^0_2 \textrm{ on } \partial \{ u_0>0\} \\ \non & \textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2> 0,\\ |\nabla u_0|^2=&(x^0_1)^2x_2 \textrm{ on } \partial \{ u_0>0\} \\ \non & \textrm{ in the case } x^0_1>0 \textrm{ and } x^0_2=0,\\ |\nabla u_0|^2=&x_1^2x^0_2 \textrm{ on } \partial \{ u_0>0\} \\ \non & \textrm{ in the case } x^0_1=0 \textrm{ and } x^0_2> 0,\\ |\nabla u_0|^2=&x_1^2x_2 \textrm{ on } \partial \{ u_0>0\} \\ \non & \textrm{ in the case } x^0_1=x^0_2=0. \end{align*} Next, let us try to compute $u_0$: In the cases where $x^0_1>0$, the homogeneity of $u_0$ and its harmonicity in $\{u_0>0\}$ imply the following: if $x^0_2>0$, then each connected component of $\{u_0>0\}$ is a half-plane passing trough the origin. If $x^0_2=0$, then the fact that $u_0$ must be harmonic in $\{ x_2<0\}$, implies that $\{u_0>0\}$ is a cone with vertex at the origin and of opening angle $120^\circ$ symmetric with respect to and containing $\{ (0, t): t>0\}$. In the cases where $x^0_1=0$, solving the resulting ODE leads to hypergeometric functions and is slightly awkward, so we will instead use, in each section of the unit disk where $u_0>0$, the velocity potential $\phi$ defined by \begin{align*} \partial_1 \phi = {1\over x_1}\partial_2 u, \partial_2 \phi = -{1\over x_1}\partial_1 u. \end{align*} In the case $x^0_2>0$ we obtain that $\phi(\rho\sin\theta,\rho\cos\theta)$ is homogeneous of degree $1$ and is on the unit circle given by a linear combination of $P_1(\cos \theta)$ and $\Re(Q_1(\cos \theta))$, where $P_1$ and $Q_1$ are the Legendre functions. Now $P_1(x)=x$ and $\Re Q_1$ is a strictly convex function with singularities at $-1$ and $1$, so that it is not possible that $$\alpha P_1'(x)+ (\Re Q_1)'(x)=\alpha P_1'(y)+ (\Re Q_1)'(y) \textrm{ for } x\ne y \in (-1,1).$$ It follows that there can be at most one free surface point of the solution $\alpha P_1(\cos\theta)+ \Re Q_1(\cos\theta)$ in $(0,\pi)$, but then the solution would have at least one singularity in the interval $[0,\pi]$. Thus the only solution possible is $\sigma P_1(\cos \theta)= \sigma \cos \theta$, so that $\phi(x)=\sigma x_2$ and $u_0(x)=c x_1^2$, where $c$ and $\sigma $ are non-negative constants. The statement about the density follows as $\chi_0=1$ in $\{ u_0>0\}$. In the case $x^0_2=0$ we obtain that $\phi(\rho\sin\theta,\rho\cos\theta)$ is homogeneous of degree $3/2$ and is on the unit circle given by a liner combination of $P_{3/2}(\cos \theta)$ and $P_{3/2}(-\cos \theta)$, where $P_{3/2}$ is the Legendre function. It is well known that $P_{3/2}$ has only one singularity at $-1$ and that $P'_{3/2}$ has in $(-1,1)$ a unique zero $z_0\in (-1,0)$. By Lemma \ref{legendre} we obtain as in the last case that $\alpha P_{3/2}(\cos \theta)+\beta P_{3/2}(-\cos \theta)$ can have at most one free surface point in $(0,\pi)$. but then the solution would have at least one singularity in the interval $[0,\pi]$ unless $\beta=0$. 
The fact that the singularity and the unique zero are both contained in $[-1,0)$ implies therefore that either \begin{align*} \phi(\rho\sin\theta,\rho\cos\theta)= \sigma\rho^{3/2} P_{3/2}(\cos \theta) \textrm{ in } \{0< \theta < \arccos(z_0)\}\\ \intertext{or}\\ \phi(\rho\sin\theta,\rho\cos\theta)= \sigma \rho^{3/2} P_{3/2}(-\cos \theta) \textrm{ in } \{\arccos(-z_0)<\theta<\pi\}. \end{align*} However the free surface must not intersect $\{x_2<0\}$, so that we obtain that the only admissible solution is $$\phi(\rho\sin\theta,\rho\cos\theta)= \sigma \rho^{3/2} P_{3/2}(-\cos \theta) \textrm{ in } \{\arccos(-z_0)<\theta<\pi\}$$ for some nonzero constant $\sigma $. Switching from the velocity potential back to $u_0$ we obtain the statement about $u_0$ as well as the density. Last, consider the situation when the set $B_\delta(z)\cap\{u_0>0\}$ has two connected components. The computations of $u_0$ in the respective cases show that this is only possible for $x^0_1>0$ and $x^0_2>0$. The argument for (\ref{dmvint2}) yields in this case that the constant values of $|\nabla u_0|^2$ on either side of $\partial \{u_0>0\}$ are equal. This concludes the proof. \end{proof} \begin{lemma}\label{zero} Let $u$ be a weak solution of {\rm(\ref{strongp})} such that $u=0$ in $\{x_2\le 0\}$ and suppose that \[\frac{|\nabla u|^2}{x_1} \leq x_1 x_2^+\quad\text{in }\Omega.\] Then $x^0_2=0$, $x^0_1>0$ and $M^{x_2}(0+)=0$ imply that $u\equiv 0$ in some open $2$-dimensional ball containing $x^0$, while $x^0_1=x^0_2=M^{x_1x_2}(0+)=0$ implies that $u\equiv 0$ in $B_\delta^+$ for some $\delta>0$. \end{lemma} \begin{proof} Suppose towards a contradiction that $x^0\in \partial\{ u>0\}$, and let us take a blow-up sequence \[u_m(x) := {u(x^0+{r_m}x)/{r_m^{3/2}}}\] converging weakly in $W^{1,2}_{\textnormal{loc}}(\R^2)$ to a blow-up limit $u_0$ in the case that $x^0_1>0$, and a blow-up sequence \[u_m(x) := {u(x^0+{r_m}x)/{r_m^{5/2}}}\] converging weakly in $W^{1,2}_{w,\textnormal{loc}}(\R^2_+)$ to a blow-up limit $u_0$ in the case that $x^0_1=0$. Proposition \ref{2dim} shows that $u_0=0$ in $\R^2$. Consequently, \begin{align}\label{meas0} &0\gets \div ({1\over {x^0_1+r_m x_1}} \nabla u_m) (B_2)\ge \int_{B_2\cap \partial_{\textnormal{red}} \{ u_m>0\}} \sqrt{x_2} \,d\mathcal{H}^1 \textrm{ in the case } x^0_1>0,\\ \non &0\gets \div ({1\over {x_1}} \nabla u_m) (B_2^+)\ge \int_{B_2^+\cap \partial_{\textnormal{red}} \{ u_m>0\}} \sqrt{x_2} \,d\mathcal{H}^1 \textrm{ in the case } x^0_1=0 \end{align} as $m\to\infty$. (Recall that $\div ({1\over {x_1}} \nabla u)$ is a nonnegative Radon measure in $\Omega$.) On the other hand, there is at least one connected component $V_m$ of $\{ u_m>0\}$ touching the origin and containing by the maximum principle a point $x^m\in \partial A$, where $A = (-1,1)\times (0,1)$ in the case $x^0_1>0$ and $A= (0,1)\times (0,1)$ in the case $x^0_1=0$. If $\max\{x_2: x \in V_m\cap \partial A\}\not\to 0$ as $m\to\infty$, we immediately obtain a contradiction to (\ref{meas0}). 
If $\max\{ x_2: x \in V_m\cap \partial A\}\to 0$, we use the free-boundary condition as well as $|\nabla u|^2/x_1^2 \leq x_2^+$ to obtain \begin{align*} &0=\div ({1\over {x^0_1+r_mx_1}} \nabla u_m) (V_m\cap A) \le \int_{V_m\cap \partial A} \sqrt{x_2} \,d\mathcal{H}^1 - \int_{A\cap \partial_{\textnormal{red}} V_m} \sqrt{x_2} \,d\mathcal{H}^1\\&\qquad\qquad\qquad \qquad\qquad\qquad\textrm{ in the case } x^0_1>0,\\ &0=\div ({1\over {x_1}} \nabla u_m) (V_m\cap A) \le \int_{V_m\cap \partial A} \sqrt{x_2} \,d\mathcal{H}^1 - \int_{A\cap \partial_{\textnormal{red}} V_m} \sqrt{x_2} \,d\mathcal{H}^1\\&\qquad\qquad\qquad\qquad\qquad\qquad\textrm{ in the case } x^0_1=0. \end{align*} However $\int_{V_m\cap \partial A} \sqrt{x_2} \,d\mathcal{H}^1$ is the unique minimiser of $\int_{\partial D} \sqrt{x_2} \,d\mathcal{H}^1$ with respect to all open sets $D$ with $D=V_m$ on $\partial A$. So $V_m$ cannot touch the origin, a contradiction.\end{proof} \begin{theorem}[Curve Case]\label{curve} Let $u$ be a weak solution of (\ref{strongp}) satisfying \[\frac{|\nabla u|^2}{x_1}\le C x_1 |x_2|\quad\text{locally in }\Omega,\] and let $x^0\in\Om$ be such that $x^0_1x^0_2=0$. Suppose in addition that $\partial\{ u>0\}\cap B_1^+(x^0)$ is in a neighborhood of $x^0$ a continuous injective curve $\sigma:I\to \R^2$ such that $\sigma=(\sigma_1,\sigma_2)$ and $\sigma(0)=x^0$. Then the following hold: $(i_1)$ {\em Stokes corner:} If $x^0_1> 0$, $x^0_2=0$ and $$M^{x_2}(0+)= x^0_1 \int_{B^+_1} x_2\chi_{\{(\rho\sin\theta, \rho\cos\theta): -\pi/3<\theta<\pi/3\}}\,dx,$$ then (cf. Figure \ref{fig1}) $\sigma_1(t)\ne x^0_1$ in $(-t_1,t_1)\setminus \{ 0\}$ and, depending on the parametrization, either $$ \lim_{t\to 0+} \frac{\sigma_2(t)}{\sigma_1(t)-x^0_1} = \frac{1}{\sqrt{3}} \textrm{ and } \lim_{t\to 0-} \frac{\sigma_2(t)}{\sigma_1(t)-x^0_1} = -\frac{1}{\sqrt{3}},$$ or $$ \lim_{t\to 0+} \frac{\sigma_2(t)}{\sigma_1(t)-x^0_1} = -\frac{1}{\sqrt{3}} \textrm{ and } \lim_{t\to 0-} \frac{\sigma_2(t)}{\sigma_1(t)-x^0_1} = \frac{1}{\sqrt{3}}.$$ \begin{minipage}{\textwidth} \begin{center} \input{vort1.pdf_t} \end{center} \captionof{figure}{Stokes corner ($x^0_1>0, x^0_2=0$)}\label{fig1} \end{minipage} $(i_2)$ If $x^0_1> 0$, $x^0_2=0$ and $M^{x_2}(0+)=x^0_1\int_{B^+_1} x^+_2\,dx$ or $M^{x_2}(0+)=x^0_1\int_{B^+_1} x^-_2\,dx$, then (cf. Figure \ref{fig6}) $\sigma_1(t)\ne x^0_1$ in $(-t_1,t_1)\setminus \{ 0\}$, $\sigma_1-x^0_1$ changes sign at $t=0$ and $$ \lim_{t\to 0} \frac{\sigma_2(t)}{\sigma_1(t)-x^0_1} = 0.$$ \begin{minipage}{\textwidth} \begin{center} \input{vort2.pdf_t} \end{center} \captionof{figure}{Horizontal point ($x^0_1>0, x^0_2=0$)}\label{fig6} \end{minipage} $(i_3)$ In the case $x^0_1>0$, $x^0_2=0$ and $M^{x_2}(0+)=0$ ---which is according to Lemma \ref{zero} not possible at all provided that $u=0$ in $\{x_2\le 0\}$ and the sharp Bernstein inequality holds---, then $\sigma_1(t)\ne x^0_1$ in $(-t_1,t_1)\setminus \{ 0\}$, $\sigma_1-x^0_1$ does not change its sign at $t=0$, and $$ \lim_{t\to 0} \frac{\sigma_2(t)}{\sigma_1(t)-x^0_1} = 0.$$ $(ii_1)$ If $x^0_2>0$, $x^0_1=0$ and $M^{x_1}(0+)=x^0_2\int_{B^+_1} x_1\,dx$, then (cf. 
Figures \ref{fig3}-\ref{fig5}) $\sigma_2(t)\ne x^0_2$ in $(0,t_1)$ and $$ \lim_{t\to 0} \frac{\sigma_1(t)}{\sigma_2(t)-x^0_2} = 0,$$ \begin{figure} \begin{center} \input{vort3.pdf_t} \end{center} \captionof{figure}{Downward Vertical Cusp ($x^0_1=0, x^0_2>0$)}\label{fig3} \end{figure} \begin{figure} \begin{center} \input{vort4.pdf_t} \end{center} \captionof{figure}{Upward Vertical Cusp ($x^0_1=0, x^0_2>0$)}\label{fig4} \end{figure} or $\sigma_2(t)\ne x^0_1$ in $(-t_1,t_1)\setminus \{ 0\}$, $\sigma_2-x^0_2$ changes sign at $t=0$ and $$ \lim_{t\to 0} \frac{\sigma_1(t)}{\sigma_2(t)-x^0_2} = 0.$$ \begin{figure} \begin{center} \input{vort5.pdf_t} \end{center} \captionof{figure}{Double Vertical Cusp ($x^0_1=0, x^0_2>0$)}\label{fig5} \end{figure} $(ii_2)$ The case $x^0_2>0$, $x^0_1=0$ and $M^{x_1}(0+)=0$ is not possible. $(iii_1)$ {\em Garabedian corner:} If $x^0_1=x^0_2=0$ and $$M^{x_1x_2}(0+)= \int_{B^+_1\cap \{P'_{3/2}(-\cos \theta)<0\}} x_1 x_2\,dx,$$ then (cf. Figure \ref{fig2}) $\sigma_1(t)\ne 0$ in $(0,t_1)$ and, $$ \lim_{t\to 0+} \frac{\sigma_2(t)}{\sigma_1(t)} = \tan(\pi/2-\arccos(-z_0)).$$ \begin{figure} \begin{center} \input{vort8.pdf_t} \end{center} \captionof{figure}{Garabedian corner ($x^0_1=x^0_2=0$)}\label{fig2} \end{figure} $(iii_2)$ If $x^0_1=x^0_2=0$ and $$M^{x_1x_2}(0+)=\int_{B^+_1} x_1x^+_2\,dx \textrm{ or }M^{x_1x_2}(0+)=\int_{B^+_1} x_1x^-_2\,dx,$$ then (cf. Figure \ref{fig7}) $\sigma_1(t)\ne 0$ in $(0,t_1)$ and $$ \lim_{t\to 0} \frac{\sigma_2(t)}{\sigma_1(t)} = 0.$$ (In the subsequent sections of the present paper we will analyze the precise asymptotics of the velocity field in the case $M^{x_1x_2}(0+)=\int_{B^+_1} x_1x^+_2\,dx$.) \begin{figure} \begin{center} \input{vort6.pdf_t} \end{center} \captionof{figure}{Horizontal point ($x^0_1=x^0_2=0$)}\label{fig7} \end{figure} $(iii_3)$ If $x^0_1=x^0_2=0$ and $M^{x_1x_2}(0+)=0$ ---which is according to Lemma \ref{zero} not possible at all provided that $u=0$ in $\{x_2\le 0\}$ and the sharp Bernstein inequality holds---, then $\sigma_1(t)\ne 0$ in $(-t_1,t_1)\setminus \{ 0\}$, and $$ \lim_{t\to 0} \frac{\sigma_2(t)}{\sigma_1(t)} = 0.$$ \end{theorem} \begin{remark} Although we omit a proof in the present paper, a perturbation of the frequency formula in \cite{VW} (see \cite{VW2}) can be used to prove that, if $x_1^0>$ and $x_2^0=0$, then case $M^{x_2}(0+)=x^0_1\int_{B^+_1} x^+_2\,dx$ is not possible. Case $(ii_1)$ seems possible as we have a nontrivial homogeneous solution. We do at present not have an existence proof for the cusps suggested here. \end{remark} \begin{proof} We prove the claimed results only case (iii), when $x^0_1=x^0_2=0$, the analysis in the other cases being similar. For each $y=(y_1,y_2)\in \R^2$ with $y_1\geq 0$ and $(y_1,y_2)\neq (0,0)$, we define $\theta(y)\in [0,\pi]$ by the relation \[(y_1,y_2)=(\rho(y)\sin\theta(y), \rho(y)\cos\theta(y)).\] We now consider the set $${\mathcal L}= \{ \theta_0\in [0,\pi] \> : \> \textrm{ there is } t_m \to 0\textrm{ such that }\theta(\sigma(t_m)) \to \theta_0 \textrm{ as } m\to\infty\}.$$ Note that in fact ${\mathcal L}\subset[0,\pi/2]$, since the free boundary $\partial\{u>0\}$ is contained in $\{x_2\geq 0\}$. 
We now claim that: {\em The set ${\mathcal L}$ is a subset of $\{ 0, \theta^*,\pi/2\}$, where $\theta^*=\arccos(-z_0)$ is the angle corresponding to the Garabedian cone.}\\ Indeed, suppose towards a contradiction that a sequence $0\ne t_m\to 0+, m\to\infty$ exists such that $\theta(\sigma(t_m)) \to \theta_0\in {\mathcal L} \setminus \{ 0, \theta^*,\pi/2\}$, let $r_m := |\sigma(t_m)|$ and let $$u_m(x) := \frac{u({r_m}x)}{r_m^{5/2}}.$$ For each $\rho>0$ such that $\tilde B := B_\rho(\sin\theta_0,\cos \theta_0)$ satisfies $$\emptyset = \tilde B \cap \left(\{ (\alpha,0)\> : \alpha\in \R_+\}\cup \{ (0,\alpha)\> : \alpha\in \R_+\} \cup \{ (\alpha\sin\theta^*,\alpha\cos\theta^*)\> : \alpha\in\R_+\}\right),$$ we infer from the formula for the unique blow-up limit $u_0$ (see Theorem \ref{2dim}) that the convergence of measures $$(\div (\we \nabla u_m))(\tilde B)\to (\div (\we \nabla u_0))(\tilde B)=0\textrm{ as } m\to\infty.$$ On the other hand, $$\div (\we \nabla u_m)= \sqrt{x_2} {\mathcal H}^1 \lfloor \partial{\{ u_m>0\}},$$ which implies, since $\tilde B\cap \partial{\{ u_m>0\}}$ contains a curve of length at least $2\rho-o(1)$, that $$0 \gets (\div (\we \nabla u_m))(\tilde B) \ge c(\theta_0,\rho)\textrm{ as } m\to\infty,$$ where $c(\theta_0,\rho)>0$, a contradiction. This proves the property claimed. Now, a continuity argument yields that ${\mathcal L}$ is a connected set. Consequently the limit $$\ell = \lim_{t\to 0+} \theta(\sigma(t))$$ exists and is contained in the set $\{ 0, \theta^*,\pi/2\}$. In what follows, we identify the value of $\ell$ in terms of the value of $M^{x_1x_2}(0+)$. Suppose first that $M^{x_1x_2}(0+)= \int_{B^+_1\cap \{P'_{3/2}(-\cos \theta)<0\}} x_1 x_2\,dx$. Then, by Proposition \ref{2dim}, the blow-up limit is \[u_0(\rho\sin\theta,\rho\cos\theta)= r^{5\over 2}\> U_\ell(\theta).\] Since $ (\div (\we \nabla u_0))(B_{1/100}(\sin\theta^*,\cos\theta^*))>0$, it follows that we cannot have $\ell\in\{0,\pi/2\}$, and therefore we must have $\ell=\theta^*$. This proves case $(iii_1)$ of the Theorem. Suppose now that $M^{x_1x_2}(0+)\in \left\{\int_{B^+_1} x_1 x_2^+\,dx, \int_{B^+_1} x_1 x_2^-\,dx,0\right\}$. Then the blow-up limit is $u_0(x)=0$. The same argument given earlier in the proof shows that $\ell\neq\theta^*$, so that necessarily $\ell\in\{0,\pi/2\}$. But then the formula in Lemma \ref{density_1} that $$M^{x_1x_2}(0+)=\lim_{r\to 0+} r^{-4} \int_{B^+_r(x^0)}x_1 x_2 \chi_{\{ u>0\}}\,dx,$$ shows that $\ell=0$ implies $M^{x_1x_2}(0+)=0$, while $\ell=\pi/2$ implies that $M^{x_1x_2}(0+)\in \left\{\int_{B^+_1} x_1 x_2^+\,dx, \int_{B^+_1} x_1 x_2^-\,dx, 0\right\}$. However, the possibility that $M^{x_1x_2}(0+)=0$ and $\ell=0$ is ruled out by the argument in the proof of Lemma \ref{zero}, even in the absence of the strict Bernstein condition. This proves the cases $(iii_2)$ and $(iii_3)$ of the Theorem. \end{proof} \section{Frequency formula} From now on we will focus on the case $x^0_1=x^0_2=0$, $u=0$ in $\{ x_2\le 0\}$ and $M^{x_1x_2}(0+)=\int_{B^+_1} x_1x^+_2\,dx$, in which we will derive a precise asymptotic profile of the velocity. \begin{theorem}[Frequency Formula]\label{freq2} Let $u$ be a variational solution of (\ref{strongp}), and let $\delta:=\dist(0,\partial\Omega)/2$. 
Let, for any $r\in (0,\delta)$, \[D(r)= \frac{r\int_{B^+_r(0)}\we|\nabla u|^2\,dx}{\int_{\partial B^+_r(0)}\we u^2\dh}\] and \[V(r)= \frac{r\int_{B^+_r(0)} x_1x_2(1-\chi_{\{ u>0\}})\,dx}{\int_{\partial B^+_r(0)}\we u^2\dh}.\] Then the ``frequency'' \begin{align*}H(r)&=D(r)-V(r)\\ &= \frac{r \int_{B^+_r(0)} \Big(\we |\nabla u|^2+x_1x_2 (\chi_{\{ u>0\}}-1)\Big)\,dx}{\int_{\partial B^+_r(0)}\we u^2\dh} \end{align*} satisfies for a.e. $r\in (0,\delta)$ the identities \begin{align} & H'(r)\non\\ &= \frac{2}{r} \int_{\partial B^+_r(0)}\we \left[ \frac{r (\nabla u\cdot\nu)}{\left(\int_{\partial B^+_r(0)}\we u^2\dh\right)^{1/2}} -D(r)\frac{u}{\left(\int_{\partial B^+_r(0)}\we u^2\dh\right)^{1/2}}\right]^2\dh\non\\&\qquad\qquad +\frac{2}{r} V^2(r)+\frac{2}{r} V(r)\left(H(r)-\frac{5}{2}\right)\label{ff}\end{align} and \begin{align} & H'(r)\non\\ &= \frac{2}{r} \int_{\partial B^+_r(0)}\we\left[ \frac{r (\nabla u\cdot\nu)}{\left(\int_{\partial B^+_r(0)}\we u^2\dh\right)^{1/2}} -H(r)\frac{u}{\left(\int_{\partial B^+_r(0)}\we u^2\dh\right)^{1/2}}\right]^2\dh\non\\&\qquad\qquad+ \frac{2}{r} V(r)\left(H(r)-\frac{5}{2}\right).\label{newff}\end{align} \end{theorem} \begin{proof} Note that, for all $r\in (0,\delta)$, \be H(r)= \frac{r^{-4}I(r)-\int_{B^+_1} x_1x_2\,dx}{r^{-5}J(r)}.\label{fgs}\ee Hence, for a.e. $r\in (0,\delta)$, \[ H'(r)= \frac{(r^{-4}I(r))'}{r^{-5}J(r)}-\frac{(r^{-4}I(r)-\int_{B^+_1} x_1x_2\,dx)}{r^{-5}J(r)}\frac{(r^{-5}J(r))'}{r^{-5}J(r)},\non\] Using the identities (\ref{Iprx1x2}) and (\ref{easy}) with $\alpha=-5$, we therefore obtain that, for a.e. $r\in (0,\delta)$, \begin{align} H'(r)&=\frac{\left(2r\ipbrx\we(\nabla u\cdot \nu)^2\dh- 5\ipbrx\we u (\nabla u\cdot \nu)\dh \right)}{\ipbrx \we u^2\dh}\non\\ &\qquad -(D(r)-V(r))\frac{1}{r}\frac{\left(2r\ipbrx\we u (\nabla u\cdot \nu)\dh-5\ipbrx \we u^2\dh\right)}{\ipbrx \we u^2\dh}\non\\ &=\frac{2}{r}\left(\frac{r^2\ipbrx\we (\nabla u\cdot \nu)^2\dh}{\ipbrx\we u^2\dh}-\frac{5}{2}D(r)\right)\non\\ &\qquad-\frac{2}{r}(D(r)-V(r))\left(D(r)-\frac{5}{2}\right),\label{wq} \end{align} where we have also used the fact, which follows from (\ref{part2}), that \be D(r)=\frac{r\ipbrx \we u (\nabla u\cdot \nu)\dh}{\ipbrx \we u^2\dh}\label{hfvv}.\ee Identity (\ref{ff}) now follows by merely rearranging (\ref{wq}), making use again of (\ref{hfvv}) and the fact that $D(r)=V(r)+H(r)$. Since (\ref{ff}) holds, it follows by inspection that (\ref{newff}) holds if and only if \begin{align} \int_{\partial B^+_r(0)}&\we\left[{r (\nabla u\cdot\nu)} -D(r){u}\right]^2\dh + V^2(r)\int_{\partial B^+_r(0)}\we u^2\dh\non\\& =\int_{\partial B^+_r(0)}\we \left[{r (\nabla u\cdot\nu)} -H(r){u}\right]^2\dh\label{ttr} .\end{align} However, (\ref{ttr}) is easily verified as a consequence of (\ref{hfvv}) and the fact that $D(r)=H(r)+V(r)$. In conclusion, identity (\ref{newff}) also holds. \end{proof} \begin{theorem}\label{mouthm} Let $u$ be a variational solution of {\r (\ref{strongp})} such that $u=0$ in $\{ x_2\le 0\}$, let $x^0=0$, suppose that $M^{x_1x_2}(0+)=\int_{B^+_1} x_1x^+_2\,dx$, and let $\delta:=\dist(0,\partial\Omega)/2$. Then the following hold: (i) $H(r)\geq\frac{5}{2}\quad\text{for all }r\in (0,\delta)$. (ii) The function $r\mapsto r^{-5}J(r)$ is nondecreasing on $(0,\delta)$. (iii) The function $H$ is nondecreasing on $(0,\delta)$, and has a right limit $H(0+)$, where $H(0+)\ge 5/2$. (iv) $r\mapsto \frac{1}{r}V^2(r)\in L^1(0, \delta)$. 
\end{theorem} \begin{proof} (i) The monotonicity, which follows from Theorem \ref{elmon2}, of the function $M^{x_1x_2}$ ensures that, for all $r\in (0,\delta)$, \be r^{-4}I(r)-\frac{5}{2}r^{-5}J(r)\geq \int_{B^+_1} x_1x^+_2\,dx.\label{lala}\ee Using (\ref{fgs}), the above inequality may be rearranged in the form of the claimed result. (ii) Plugging $\alpha=-5$ into (\ref{easy}), using also (\ref{part2}), and then (\ref{lala}), we obtain, for a.e. $r\in (0,\delta)$, \begin{align*}(r^{-5}J(r))'&=\frac{2}{r}\left( r^{-4}\int_{B^+_r(0)} \we {\vert \nabla u \vert}^2\,dx-\frac{5}{2}r^{-5}\int_{\partial B^+_r(0)}\we u^2\dh\right)\\ &\geq 2r^{-5}\int_{B^+_r(0)} x_1x_2(1-\chi_{\{ u>0\}})\,dx\geq 0, \end{align*} which implies the claimed result. (iii) The monotonicity of $H$ on $(0,\delta)$ is a consequence of (\ref{ff}) and (i). The remaining part of the claim is immediate. (iv) The claimed result follows from (\ref{ff}) and (iii). \end{proof} \section{Blow-up limits} The Frequency Formula allows passing to blow-up limits. \begin{proposition}\label{blowup} Let $u$ be a variational solution of {\rm (\ref{strongp})} such that $u=0$ in $\{x_2\le 0\}$, let $x^0=0$, and suppose that $M^{x_1x_2}(0+)=\int_{B^+_1} x_1x^+_2\,dx$. Then: (i) There exist $\lim_{r\to 0+}V(r)= 0$ and $\lim_{r\to 0+}D(r)= H(0+)$. (ii) For any sequence $r_m\to 0+$ as $m\to\infty$, the sequence \be v_m(x) := \frac{u(r_m x)}{\sqrt{r_m^{-1}\int_{\partial B_{r_m}^+} \we u^2 \dh}}\label{vm}\ee is bounded in $W^{1,2}_w(B_1^+)$. (iii) For any sequence $r_m\to 0+$ as $m\to\infty$ such that the sequence $v_m$ in {\rm (\ref{vm})} converges weakly in $W^{1,2}_w(B_1^+)$ to a blow-up limit $v_0$, the function $v_0$ is homogeneous of degree $H(0+)$ in $B_1^+$, and satisfies \[\text{$v_0\geq 0$ in $B_1$, $v_0\equiv 0$ in $B_1^+\cap\{x_2\leq 0\}$ and $\int_{\partial B_1^+}\we v_0^2\dh=1$.}\] \end{proposition} \begin{proof} We first prove that, for any sequence $r_m\to 0+$, the sequence $v_m$ defined in (\ref{vm}) satisfies, for every $0<\tau<\sigma<1$, \be\int_{B_\sigma^+\setminus B_\tau^+} \we|x|^{-5}\left[\nabla v_m(x) \cdot x-H(0+)v_m(x)\right]^2\,dx\to 0\quad\text{as }m\to\infty.\label{vol}\ee Indeed, for any such $\tau$ and $\sigma$, it follows by scaling from (\ref{newff}) that, for every $m$ such that $r_m<\delta$, \begin{align}\int_\tau^\sigma\frac{2}{r} &\int_{\partial B_r^+}\we\left[ \frac{r (\nabla v_m \cdot\nu)}{\left(\int_{\partial B_r^+}\we v_m^2\dh\right)^{1/2}}-H(r_mr)\frac{v_m}{\left(\int_{\partial B_r^+}\we v_m^2\dh\right)^{1/2}}\right]^2\dh\,dr\non\\&\leq H(r_m\sigma)-H(r_m \tau)\to 0\quad\text{as }m\to\infty,\non\end{align} as a consequence of Theorem \ref{mouthm} (iii). The above implies that \begin{align}\int_\tau^\sigma\frac{2}{r} &\int_{\partial B_r^+}\we\left[ \frac{r (\nabla v_m \cdot\nu)}{\left(\int_{\partial B_r^+}\we v_m^2\dh\right)^{1/2}}-H(0+)\frac{v_m}{\left(\int_{\partial B_r^+}\we v_m^2\dh\right)^{1/2}}\right]^2\dh\,dr\non\\&\to 0 \quad\text{as }m\to\infty.\label{coo}\end{align} Now note that, for every $r\in (\tau,\sigma)\subset (0,1)$ and all $m$ as before, it follows by using Theorem \ref{mouthm} (ii), that \[\int_{\partial B_r^+}\we v_m^2\dh=\frac{\int_{\partial B_{r_m r}^+} \we u^2\dh}{\int_{\partial B_{r_m}^+}\we u^2\dh}\leq r^{5}\leq 1.\] Therefore (\ref{vol}) follows from (\ref{coo}), which proves our claim. Let us also recall (\ref{hfvv}). We can now prove all parts of the Proposition. (i) Suppose towards a contradiction that (i) is not true. 
Let $s_m\to 0$ be such that the sequence $V(s_m)$ is bounded away from $0$. It is a consequence of Theorem \ref{mouthm}(iv) that \[ \min_{r\in [s_m, 2s_m]} V(r)\to 0\quad\text{as }m\to\infty.\] Let $t_m\in [s_m, 2s_m]$ be such that $V(t_m)\to 0$ as $m\to\infty$. For the choice $r_m:=t_m$ for every $m$, the sequence $v_m$ given by (\ref{vm}) satisfies (\ref{vol}). The fact that $V(r_m)\to 0$ implies that $D(r_m)$ is bounded, and hence that $v_m$ is bounded in $W^{1,2}_w(B_1^+)$. Let $v_0$ be any weak limit of $v_m$ along a subsequence. Note that by the compact embedding $W^{1,2}_w(B_1^+)\hookrightarrow L^2(\partial B_1^+)$, $v_0$ has norm $1$ on $L^2_w(\partial B_1^+)$, since this is true for $v_m$ for all $m$. It follows from (\ref{vol}) that $v_0$ is homogeneous of degree $H(0+)$. Note that, by using Theorem \ref{mouthm} (ii), \begin{align} V(s_m)&=\frac{s_m^{-4}\int_{B_{s_m}^+}x_1x_2(1-\chi_{\{u>0\}})\,dx} {s_m^{-5}\int_{\partial B_{s_m}^+}\we u^2\dh}\non\\&\leq \frac{s_m^{-4}\int_{B_{r_m}^+}x_1x_2(1-\chi_{\{u>0\}})\,dx} {(r_m/2)^{-5}\int_{\partial B_{r_m/2}^+}\we u^2\dh}\non\\ &\leq\frac{1}{2}\frac{\int_{\partial B_{r_m}^+}\we u^2\dh}{\int_{\partial B_{r_m/2}^+}\we u^2\dh}V(r_m)\non\\ &=\frac{1}{2\int_{\partial B_{1/2}^+}\we v_m^2\dh}V(r_m).\label{smrm}\end{align} Since, at least along a subsequence, \[\int_{\partial B_{1/2}^+}\we v_m^2\dh\to\int_{\partial B_{1/2}^+}\we v_0^2\dh>0,\] (\ref{smrm}) leads to a contradiction. It follows that indeed $V(r)\to 0$ as $r\to 0+$. This implies that $D(r)\to H(0+)$. (ii) Let $r_m$ be an arbitrary sequence with $r_m\to 0+$. The boundedness of the sequence $v_m$ in $W^{1,2}_w(B_1)$ is equivalent to the boundedness of $D(r_m)$, which is true by (i). (iii) Let $r_m\to 0+$ be an arbitrary sequence such that $v_m$ converges weakly to $v_0$. The homogeneity degree $H(0+)$ of $v_0$ follows directly from (\ref{vol}). The fact that $\int_{\partial B_1^+}\we v_0^2\dh=1$ is a consequence of $\int_{\partial B_1^+}\we v_m^2\dh=1$ for all $m$, and the remaining claims of the Proposition are obvious. The homogeneity of $v_0$, together with the fact that $v_0$ belongs to $W^{1,2}_w(B_1^+)$, imply (in two dimensions) that $v_0$ is continuous. \end{proof} \section{Concentration compactness} In the present section we will prove a concentration compactness result which allows us to preserve variational solutions in the blow-up limit at degenerate points and excludes concentration. In order to do so we combine the concentration compactness result of J.-M. Delort \cite{delort} with information gained by our Frequency Formula. In addition, we obtain strong convergence of our blow-up sequence which is necessary in order to prove our main theorems. \begin{theorem}\label{comp} Let $u$ be a variational solution of {\rm (\ref{strongp})} such that $u=0$ in $x_2\le 0$ and $M^{x_1x_2}(0+)=\int_{B^+_1} x_1x^+_2\,dx$. Let $r_m\to 0+$ be such that the sequence $v_m$ given by {\rm(\ref{vm})} converges weakly to $v_0$ in $W_w^{1,2}(B^+_1)$. Then $v_m$ converges to $v_0$ strongly in $W^{1,2}_{w,\textnormal{loc}}(B^+_1\setminus \{ 0\})$, $v_0$ is continuous on $B^+_1$ and $\div (\frac{1}{x_1} \nabla v_0)$ is a nonnegative Radon measure satisfying $v_0 \div (\frac{1}{x_1} \nabla v_0)=0$ in the sense of Radon measures in $B^+_1$. \end{theorem} \begin{proof} Note first that the homogeneity of $v_0$ given by Proposition \ref{blowup}, together with the fact that $v_0$ belongs to $W^{1,2}_w(B^+_1)$, imply that $v_0$ is continuous. Let $\sigma$ and $\tau$ with $0<\tau<\sigma<1$ be arbitrary. 
We know that $\div (\frac{1}{x_1} \nabla v_m)\ge 0$ and $\div (\frac{1}{x_1} \nabla v_m)(B^+_{(\sigma+1)/2})\leq C_1$ for all $m$. We regularize each $v_m$ to \[\tilde v_m := v_m*\phi_m \in C^\infty(B^+_1),\] where $\phi_m$ is a standard mollifier such that \[ \div (\frac{1}{x_1} \nabla \tilde v_m)\ge 0, \int_{B^+_\sigma} \div (\frac{1}{x_1} \nabla \tilde v_m)\le C_2<+\infty \quad\text{for all } m,\] and \[\left\Vert \frac{1}{x_1}(\nabla v_m-\nabla \tilde v_m)\right\Vert_{L^2(B^+_\sigma)} + \left\Vert v_m-\tilde v_m\right\Vert_{L^2(B^+_\sigma)} \to 0\quad\text{as }m\to\infty.\] Let us now consider the velocity field in three dimensions $$V^m(X,Y,Z):= \left(-{1\over x_1} \partial_2 \tilde v_m \cos \vartheta , -{1\over x_1} \partial_2 \tilde v_m \sin \vartheta, {1\over x_1} \partial_1 \tilde v_m\right),$$ where $(X,Y,Z)=(x_1\cos\vartheta, x_1\sin\vartheta, x_2)$, as well as their weak limit $$V(X,Y,Z):= \left(-{1\over x_1} \partial_2 \tilde v \cos \vartheta, -{1\over x_1} \partial_2 \tilde v \sin \vartheta , {1\over x_1} \partial_1 \tilde v\right).$$ We have that $V^m$ is divergence free and satisfies $$\curl V^m = \omega^m = (-\sin \vartheta,\cos \vartheta,0) \alpha^m \textrm{ in }B_2(0)$$ with a non-negative function $\alpha^m$ that is bounded in $L^1(B_\sigma)$. It follows that \begin{align*} V^m_1& = \Delta_{m1}^{-1} \partial_Z \omega^m_2\\ V^m_2 &= -\Delta_{m2}^{-1} \partial_Z \omega^m_1\\ V^m_3 &= \Delta_{m3}^{-1} (\partial_Y \omega^m_1-\partial_X \omega^m_2), \end{align*} where $\Delta_{mi}^{-1}$ is the inverse of the three dimensional Laplace operator with averaged Dirichlet boundary data $V^m_i$, more precisely $$\Delta_{mi}^{-1} f = {2\over {1-\sigma}} \int_\sigma^{1+\sigma\over 2} \int_{B_R} G_R f\> dx \> dR + {2\over {1-\sigma}} \int_\sigma^{1+\sigma\over 2} \int_{\partial B_R} V^m_i \nabla G_R\cdot \nu \> d\mathcal{H}^2,$$ where $G_R$ is Green's function with respect to the Laplace operator in $B_R$. From the proof of \cite[Proposition 3.2]{delort}, where \cite[(3.6)]{delort} holds with $v^\varepsilon_i$ replaced by $V^m_i$ and $\omega^\varepsilon_i$ replaced by $\omega^m_i$ but the remainder terms $w^\varepsilon_i$ given by Greens formula in $B_\sigma$, we infer that \begin{align*} V^m_1 V^m_3 \rightharpoonup V_1 V_3 \textrm{ weakly in } L^2_{\textnormal{loc}}(B_\sigma),\\ V^m_2 V^m_3 \rightharpoonup V_2 V_3 \textrm{ weakly in } L^2_{\textnormal{loc}}(B_\sigma),\\ (V^m_1)^2 + (V^m_2)^2 - (V^m_3)^2 \rightharpoonup (V_1)^2 + (V_2)^2 - (V_3)^2 \textrm{ weakly in } L^2_{\textnormal{loc}}(B_\sigma); \end{align*} note that as in \cite{delort} the remainder terms converge strongly in $L^2_{\textnormal{loc}}(B_\sigma)$. It follows that \be\label{evm}{1\over {x_1}}\partial_1 v_m \partial_2 v_m \to {1\over {x_1}}\partial_1 v_0 \partial_2 v_0\ee and \[{1\over {x_1}}\left((\partial_1 v_m)^2- (\partial_2 v_m)^2 \right)\to {1\over {x_1}}\left((\partial_1 v_0)^2- (\partial_2 v_0)^2\right)\] in the sense of distributions on $B^+_\sigma$ as $m\to\infty$. Let us remark that in contrast to the true two-dimensional problem, this alone would {\em not} allow us to pass to the limit in the domain variation formula for $v_m$! Observe now that (\ref{vol}) shows that \[\nabla v_m(x)\cdot x - H(0+) v_m(x)\to 0\] strongly in $L^2_w(B^+_\sigma\setminus B^+_\tau)$ as $m\to\infty$. It follows that \[\partial_1 v_m x_1 + \partial_2 v_m x_2\to \partial_1 v_0 x_1 + \partial_2 v_0 x_2\] strongly in $L^2_w(B^+_\sigma\setminus B^+_\tau)$ as $m\to\infty$. 
But then \begin{align*} &\int_{B^+_\sigma\setminus B^+_\tau} {1\over {x_1}}(\partial_1 v_m \partial_1 v_m x_1 + \partial_1 v_m \partial_2 v_m x_2)\eta \,dx \\&\to \int_{B^+_\sigma\setminus B^+_\tau} {1\over {x_1}}(\partial_1 v_0 \partial_1 v_0 x_1 + \partial_1 v_0 \partial_2 v_0 x_2)\eta \,dx\end{align*} for each $\eta \in C^0_0(B^+_\sigma\setminus \overline B^+_\tau)$ as $m\to\infty$. Using (\ref{evm}), we obtain that \[\int_{B^+_\sigma\setminus B^+_\tau} (\partial_1 v_m)^2 \eta \,dx \to \int_{B^+_\sigma\setminus B^+_\tau} (\partial_1 v_0)^2 \eta \,dx\] for each $0\le \eta \in C^0_0(B^+_\sigma\setminus \overline B^+_\tau)$ as $m\to\infty$. Using once more (\ref{evm}) yields that $\nabla v_m$ converges strongly in $L^2_{w,\textnormal{loc}}(B^+_\sigma\setminus \overline B^+_\tau)$. Since $\sigma$ and $\tau$ with $0<\tau<\sigma<1$ were arbitrary, it follows that $\nabla v_m$ converges to $\nabla v_0$ strongly in $L^2_{w,\textnormal{loc}}(B^+_1\setminus \{ 0\})$. As a consequence of the strong convergence, we see that \[\int_{B^+_1}{1\over {x_1}} \nabla (\eta v_0)\cdot \nabla v_0 =0 \quad\text{ for all } \eta \in C^1_0(B^+_1\setminus\{0\}).\] Combined with the fact that $v_0=0$ in $B^+_1\cap\{x_2\leq 0\}$, this proves that $v_0\Delta v_0=0$ in the sense of Radon measures on $B^+_1$. \end{proof} \section{Degenerate points}\label{twodimensions} \begin{theorem}\label{deg2d} Let $u$ be a weak solution of {\rm (\ref{strongp})} such that $u=0$ in $x_2\le 0$ and $M^{x_1x_2}(0+)=\int_{B^+_1} x_1x^+_2\,dx$, let the free boundary $\partial\{u>0\}\cap B_1^+$ be a continuous injective curve $\sigma=(\sigma_1,\sigma_2)$ such that $\sigma(0)=0$. Then $\sigma_1(t)\ne 0$ in $[0,t_1)\setminus \{ 0\}$, $$ \lim_{t\to 0} \frac{\sigma_2(t)}{\sigma_1(t)} = 0$$ and \[\frac{u(rx)}{\sqrt{r^{-1}\int_{\partial B^+_{r}(0)} u^2 \dhone}}\to \frac{x_1^2 x_2}{\sqrt{\int_{\partial B^+_{1}(0)} x_1^4 x_2^2 \dhone}}\quad\text{as }r\to 0+,\] strongly in $W^{1,2}_{w,\textnormal{loc}}(B^+_1\setminus\{0\})$ and weakly in $W^{1,2}(B^+_1)$. Moreover, \begin{align*} &\frac{u(rx)}{r^\alpha} \to 0 \textrm{ in } L^2_w(B_1^+) \textrm{ for } \alpha \in (0,2) \textrm{ and}\\ &\frac{u(rx)}{r^\alpha}\textrm{ is unbounded in } L^2_w(B_1^+) \textrm{ for } \alpha>2. \end{align*} \end{theorem} \begin{proof} Let $r_m\to 0+$ be an arbitrary sequence such that the sequence $v_m$ given by (\ref{vm}) converges weakly in $W^{1,2}_w(B^+_1)$ to a limit $v_0$. By Proposition \ref{blowup} (iii) and Theorem \ref{comp}, $v_0\not\equiv 0$, $v_0$ is homogeneous of degree $H(0+)\ge 5/2$, $v_0$ is continuous, $v_0\ge 0$ and $v_0 \equiv 0$ on $\{ x_1=0\}$ and in $\{ x_2\leq 0\}$, $v_0 \div (\frac{1}{x_1} \nabla v_0)=0$ in $B_1^+$ as a Radon measure, and the convergence of $v_m$ to $v_0$ is strong in $W^{1,2}_{w,\textnormal{loc}}(B_1^+\setminus\{0\})$. Moreover, the strong convergence of $v_m$ and the fact proved in Proposition \ref{blowup} (i) that $V(r_m)\to 0$ as $m\to \infty$ imply that $$0=\int_{\R^2} \Big( \frac{1}{x_1} {\vert \nabla v_0 \vert}^2 \div\phi - 2 \nabla u_0 D\phi \nabla v_0 \Big)$$ for every $\phi\in C^1_0(\{ x_1>0\}\cap \{ x_2 >0\};\R^2),$ so that even an analysis in the case of $\{ u=0\}$ consisting of infinitely many disconnected components (similar to that in \cite{VW}) would be possible in principle. However the structure here is more complicated. For that reason we confine ourselves to the assumed injective curve case. 
As in the proof of Proposition \ref{2dim}, we will use in each section of the unit disk where $v_0>0$ the velocity potential $\phi$ defined by \begin{align*} \partial_1 \phi = {1\over x_1}\partial_2 v_0, \partial_2 \phi = -{1\over x_1}\partial_1 v_0. \end{align*} We obtain that $\phi(\rho\sin\theta,\rho\cos\theta)$ is homogeneous of degree $m=H(0+)\ge 5/2$ and is on the unit circle given by a linear combination $f(\cos\theta)=\alpha P_m(\cos \theta)+\beta P_m(-\cos \theta)$, in the case that the Legendre function $P_m$ and the function $P_m(-x)$ are linearly independent, and $f(\cos\theta)=\alpha P_m(\cos \theta)+\beta \Re(Q_m(\cos \theta))$ in the case the Legendre function $P_m$ and the function $P_m(-x)$ are linearly dependent. Moreover $(1,0)$ is a free boundary point of $v_0$ so that $f'(0)=0$, which implies $\alpha=\beta$ in the case of linear independence. On the other hand, Theorem \ref{curve} (ii) implies that for any ball $\tilde B\subset\subset B^+_1\cap \{ x_2>0\}$, $v_r=\frac{u(rx)}{\sqrt{r^{-1}\int_{\partial B^+_r(0)} u^2 \dhone}}>0$ in $\tilde B$. Consequently $\div (\frac{1}{x_1} \nabla v_0)=0$ in $\{ x_1>0\}\cap \{ x_2>0\}$. However, if there is a free boundary point $x$ in $(0,1)\times (0,1)$ then by homogeneity the half line connecting that point to the origin consists of free boundary points, so that $(\div (\frac{1}{x_1} \nabla v_0))(B_\delta(x))>0$ for each $\delta>0$, a contradiction. Thus $\alpha P_m'+\beta Q_m'$ must be either strictly positive or strictly negative in $(0,1)$. In the case $f(\cos\theta)=\alpha (P_m(\cos \theta)+P_m(-\cos \theta))$ we obtain now a contradiction to the fact that $P_m$ is bounded at $1$ and has a singularity at $-1$. In the case that $P_m$ is an even function, we obtain from $P_m'(0) = m P_{m-1}(0)=\frac{m\sqrt{\pi}}{\Gamma({2-m\over 2})\Gamma({m-1\over 2}+1)}$ and $Q_m'(0) = m Q_{m-1}(0)=-\frac{m\pi^{3/2}\tan(\pi (m-1)/2)}{(m-1)\Gamma({2-m\over 2})\Gamma({m-1\over 2})}$ \\(see http://functions.wolfram.com/07.07.20.0006.01,\\ http://functions.wolfram.com/07.07.03.0001.01, \\http://functions.wolfram.com/07.10.20.0003.01,\\ http://functions.wolfram.com/07.10.03.0001.01),\\ that $m$ is an even integer $\ge 2$ and that $\beta=0$ so that $f$ is up to a nonzero multiplicative constant the Legendre polynomial $P_m$. But, using \cite[Corollary on p. 114]{arnold} there is only one even integer $\ge 2$ such that $P_m$ has no critical point in $(0,1)$, namely $m=2$. We obtain $f(x)=c_2 P_2(x) = c_2 {1\over 2} (3x^2-1)$. In order to obtain the claimed growth we calculate for $u_r(x) = u(rx)/r^{\alpha}$ and a.e. $r\in (0,\delta)$, using (\ref{part2}), \begin{align*}&\left(\int_{\partial B^+_1(0)}\we u_r^2 \dh\right)' =\frac{2}{r}\left(\int_{B^+_1(0)} \we {\vert \nabla u_r\vert}^2\,dx-\alpha \int_{\partial B^+_1(0)}\we u_r^2\dh\right) \\& \left\{\begin{array}{ll} \ge \frac{\kappa}{r} \int_{\partial B^+_1(0)}\we u_r^2\dh, &\alpha\in (0,2),\\ \le - \frac{\kappa}{r} \int_{\partial B^+_1(0)}\we u_r^2\dh, &\alpha>2. \end{array}\right. \end{align*} Integrating we obtain the result. \end{proof}
{ "timestamp": "2012-10-16T02:01:21", "yymm": "1210", "arxiv_id": "1210.3682", "language": "en", "url": "https://arxiv.org/abs/1210.3682" }
\begin{abstract} For a $C^{1}$ degree two latitude preserving endomorphism $f$ of the $2$-sphere, we show that $f^{n}$ has at least $2^{n}$ fixed points for every $n$. \end{abstract} \section{Introduction} The relationship between the long term dynamics of an endomorphism of a manifold and its long term effect on the algebraic topology of the manifold can depend on the smoothness of the endomorphism. See \cite{Spain} for a discussion of this. Here we deal with a particular case of Problem~3 of that paper. Let $S$ be the $2$-sphere, oriented in the standard fashion. Fix a continuous map $f : S \rightarrow S$ of global degree $2$. That is, the map $f_{_{\displaystyle *}} : H_{2}(S) \rightarrow H_{2}(S)$ is multiplication by $2$. Problem~3 asks: Is the Growth Rate Inequality $$ \limsup_{n\rightarrow \infty} \frac{1}{n}\ln N_{n}(f) \geq \ln(2) $$ true? Here $N_{n}(f)$ is the number of distinct periodic points of $f$ having period $n$, i.e., the number of fixed points of $f^{n}$. If $f$ is merely continuous the answer is ``not necessarily.'' For, as observed in \cite{SS}, a Lipschitz counterexample is $z \mapsto 2z^{2}/\abs{z}$ where $S$ is the Riemann sphere. In polar coordinates $f$ sends $(r, \theta )$ to $(2r, 2\theta )$. The only periodic points are the North and South poles. On the other hand, if $f$ is a rational map, then the answer is ``yes.'' See Proposition~1 of \cite{Asterisque}. So the question becomes: Does there exist an $r \geq 1$ such that if $f$ is $C^{r}$ then the Growth Rate Inequality holds? Perhaps $r=2$ will suffice, or even $r=1$. From \cite{SS} it follows that if $f$ is $C^{1}$ then it must have infinitely many distinct periodic points, but their growth rate remains unknown. As shown in \cite{MP}, the topological entropy of $f$ is better understood: If $f$ is $C^{1}$ then it is at least $\ln 2$. This implies there are invariant probability measures with measure theoretic entropy at least $\ln 2 - \epsilon $. Thus, if $f$ is $C^{1+ \alpha }$ then the sum of the Lyapunov exponents is at least $\ln 2 - \epsilon $. We expect a version of Katok's Theorem \cite{Katok} about diffeomorphisms to be true for endomorphisms in the case that all the Lyapunov exponents are different from zero. In fact this is already proved in the case that all the Lyapunov exponents are positive. See \cite{GW}. The remaining case will be where one of the Lyapunov exponents is zero. At the end of Section~\ref{s.examples} we give three examples of smooth endomorphisms of the $2$-sphere, two with topological entropy $\ln 2$, the other with topological entropy $\ln 3$. All have one Lyapunov exponent zero and are essentially $2:1$. The first is of degree zero and has only one periodic point. The second and third have degree two, are unlike the map $z \mapsto z^{2}$, but satisfy the Growth Rate Inequality. All three examples preserve the latitude foliation. We hope this elementary result might give a clue as to how homology assumptions can intervene when there are zero Lyapunov exponents and also families of invariant center manifolds replacing the circles of our foliation, but that will surely go way beyond what we can accomplish here. \section{Invariant latitudes} We say that $f : S \rightarrow S$ preserves the latitude foliation if it carries each latitude into another latitude or to one of the poles. It need not be a homeomorphism from one latitude to another. We assume throughout that $f : S \rightarrow S$ is a continuous, latitude preserving endomorphism of degree $2$.
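Before proceeding, it may help to see the Lipschitz counterexample from the Introduction in coordinates. A toy iteration (our illustration only) shows that every orbit off the poles escapes to the Northpole of the Riemann sphere, so only the two poles are periodic:
\begin{verbatim}
# The map z -> 2 z^2 / |z| of [SS] in polar coordinates: (r, t) -> (2r, 2t).
import math

r, t = 0.3, 1.0                      # any starting point off the poles
for _ in range(10):
    r, t = 2.0 * r, (2.0 * t) % (2.0 * math.pi)
print(r)                             # 307.2: the modulus doubles each step,
                                     # so the orbit tends to the Northpole
\end{verbatim}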
\begin{Thm} \label{t.fixedpoints} If $f$ is $C^{1}$ then $f^{n}$ has at least $2^{n} $ fixed points. \end{Thm} \begin{Cor} \label{c.GRF} If $f$ is $C^{1}$ and $N_{n}$ is the number of fixed points of $f^{n}$ then the Growth Rate Inequality $$ \limsup_{n \rightarrow \infty} \frac{1}{n} \ln (N_{n}) \geq \ln(2). $$ holds. \end{Cor} \begin{proof}[\bf Proof] This is immediate from the theorem, but see the remark at the end of Section~\ref{s.counting} for a shorter proof of the corollary. \end{proof} \begin{Rmk} $f^{n}$ is the $n^{\textrm{th}}$ iterate of $f$ and the fixed points are geometrically distinct. The $C^{1}$ assumption is used rarely in the proof, but without it the theorem fails: As noted above, the Lipschitz endomorphism $z \mapsto 2z^{2}/\abs{z}$ from \cite{SS} has degree $2$ but only two periodic points. \end{Rmk} \begin{Lemm} \label{l.poletopole} A latitude preserving endomorphism sends a pole to a pole, not a latitude. \end{Lemm} \begin{proof}[\bf Proof] Obvious from continuity of the endomorphism. \end{proof} Parametrize the latitudes by their height $h$, $0 \leq h \leq 1$. (The sphere $S$ rests on the $xy$-plane and has center $(0,0,1/2)$.) The Southpole corresponds to $h=0$ and the Northpole to $h=1$. If $L(h)$ denotes the latitude of height $h$ then we define the \textbf{latitude map} $\varphi : [0,1] \rightarrow [0,1]$ by $$ \varphi (h) = \textrm{ height of } f(L(h)). $$ It is continuous and Lemma~\ref{l.poletopole} implies that $\varphi \{0,1\} \subset \{0,1\}$. The map $f$ fibers over $\varphi $. Orient each latitude circle in $S$ in a counterclockwise fashion as viewed from above the Northpole of $S$. If $0 < h, \varphi (h) < 1$ then the \textbf{latitude degree} $d(h)$ is the degree of the map $f : L(h) \rightarrow L(\varphi (h))$. Since these latitude circles are oriented, $d(h)$ is well defined and locally constant as a function of $h \in ((0,1) \cap \varphi ^{-1}(0,1))$. Corresponding to the maximal open intervals $I$ on which $d(h)$ is well defined and constant are open bands $B = B(I) \subset S$, $$ B = \bigcup_{h \in I} L(h). $$ We denote by $d(B)$ the common latitude degree of $f$ on latitudes in $B$. Lemma~\ref{l.poletopole} implies that the value of $\varphi $ at the endpoints of $I$ is $0$ or $1$. \begin{Propn} \label{p.globaldegree} The global degree of $f$ is $$ d(f) = \sum \Delta _{I}\varphi \cdot d(B) $$ where $\Delta _{I}\varphi = \varphi (b) - \varphi (a)$ and the sum is taken over the bands and their band intervals $I = (a,b)$. \end{Propn} \begin{proof}[\bf Proof] By continuity of $\varphi $ there are at most finitely many band intervals $I = (a, b)$ with $\varphi (a) \not= \varphi (b)$, so the sum makes sense. The degree of $f$ is independent of homotopy, so we can deform $f$ on each band $B$ in order that $\varphi $ becomes linear on the band interval $I$. Then we can homotop $f$ further to make the intervals on which $\varphi $ is constant become points. Finally we can homotop $f$ on each latitude so that up to homothety, it sends $z$ to $z^{d}$ or $\overline{z}^{d}$. (In the case $d=0$ we homotop $f$ so that on each latitude, up to homothety it is the constant map $z \mapsto i$.) The net effect is that by homotopy, we can assume the latitude map of $f$ is a sawtooth as shown in Figure~\ref{f.sawtooth}, and the map on each latitude is the simplest possible. \begin{figure}[htbp] \centering \includegraphics[scale=.65]{sawtooth} \caption{The graph of a latitude map with five bands. Each pole is fixed by $f$. 
We imagine $f$ sending the first, second, fourth, and fifth band to $S$ with latitude degree $d_{1}, d_{2}, d_{4}, d_{5}$, and sending the third band to $S$ with latitude degree zero.} \label{f.sawtooth} \end{figure} Take a regular value $v \in S$ of $f$ near $ (1/2, 0, 1/2)$. The global degree of $f$ is the number of pre-images of $v$, counted with multiplicity. There are no pre-images in bands with latitude degree zero, because those bands are sent to the half-longitude through $(0, 1/2, 1/2)$. In a band $B$ with $\Delta _{I}\varphi = 1$ and latitude degree $d > 0$ there are $d$ pre-images. The same is true if $\Delta _{I}\varphi = -1$ and $B$ has latitude degree $-d < 0$. The other bands give pre-images with corresponding negative multiplicity, so the total number of pre-images, counted with multiplicity, is the sum $\sum \Delta _{I}\varphi \cdot d(B)$ as claimed. \end{proof} If $\Delta _{I} \varphi = 1$ we say that the corresponding band is \textbf{directed upward} or \textbf{ascends}, while if $\Delta _{I}\varphi = -1$ it is \textbf{directed downward} or \textbf{descends}. If $\Delta _{I}\varphi = 0$, the band is \textbf{neutral}. \begin{Lemm} \label{l.directedbands} There exist directed bands. \end{Lemm} \begin{proof}[\bf Proof] Since $f$ is surjective, so is $\varphi $, and $\varphi $ carries some minimal interval $[a,b] \subset [0,1]$ onto $[0,1]$ with $\varphi (a) = 0$ and $\varphi (b) =1$ or vice versa. The interval $(a,b)$ corresponds to a directed band. \end{proof} \begin{Lemm} \label{l.invariantlatitude} If the band $B$ is directed and $\abs{d(B)} \geq 2$ then $B$ contains an $f$-invariant latitude. \end{Lemm} \begin{proof}[\bf Proof] \underline{Case 1.} $\partial B$ is the pair of poles, each being fixed by $f$. Since $f$ has latitude degree $2$ or better and is $C^{1}$ at the poles, its derivative there is zero. For, let $p$ be a pole. The derivative of $f$ at $p$ is a linear map of the tangent space $T_{p}S$ to itself. Infinitesimally it preserves the latitude foliation, so it is a scalar multiple of a rotation or reflection, $T_{p}f = cR$. But if $c \not= 0$ then $f$ has latitude degree $\pm 1$ on $B$, contrary to the hypothesis that $\abs{d(B)} \geq 2$. Thus, $c=0$ and the poles are sinks for $f$, so the latitude map $\varphi : [0, 1] \rightarrow [0,1]$ has \begin{equation*} \begin{split} \varphi (0) & = 0 \quad \varphi ^{\prime}(0) = 0 \\ \varphi (1) &= 1 \quad \varphi ^{\prime}(1) = 0. \end{split} \end{equation*} This gives a fixed point $h$ of $\varphi $ with $0 < h < 1$, and $L(h)$ is invariant under $f$. See Figure~\ref{f.phifixespoles}. \underline{Case 2.} $\partial B$ is the pair of poles, and $f$ interchanges them. Differentiability of $f$ is irrelevant. See Figure~\ref{f.phifixespoles}. \begin{figure}[htbp] \centering \includegraphics[scale=1.2]{phifixespoles} \caption{The graph of $\varphi $. The first corresponds to Case 1 and the second to Case~2.} \label{f.phifixespoles} \end{figure} \underline{Case 3.} $\partial B$ is a pole, fixed by $f$, and a latitude. The pole is a sink and the latitude is sent to the other pole. This gives a fixed point of $\varphi $ as in Case~1. \underline{Case 4.} $\partial B$ is a pole and a latitude, and $f$ sends the pole to the opposite pole. This gives a fixed point as in Case~2. \underline{Case 5.} $\partial B$ is two latitudes, say $L_{1}, L_{2}$ with heights $0 < h_{1} < h_{2} < 1$. Then $\varphi : [h_{1}, h_{2}] \rightarrow [0,1]$ sends $[h_{1}, h_{2}]$ over itself, so it has a fixed point $h \in (h_{1}, h_{2})$.
The latitude $L(h)$ is invariant by $f$. \end{proof} \begin{Rmk} The hypothesis that $f$ is $C^{1}$ is used only in Cases 1 and 3. \end{Rmk} \begin{Propn} \label{p.2tothen} If the band $B$ is directed and $\abs{d(B)} \geq 2$ then $f^{n}$ has at least $2^{n}$ fixed points. \end{Propn} \begin{proof}[\bf Proof] By Lemma~\ref{l.invariantlatitude} there is an invariant latitude on which $f$ has degree $k$ with $\abs{k} \geq 2$. The $n^{\textrm{th}}$ iterate of such a map of the circle has at least $\abs{k}^{n}$ fixed points. \end{proof} \section{Counting Fixed Points} \label{s.counting} \begin{Lemm} \label{l.degreeproduct} If $f$ and $g$ preserve the latitude foliation then the latitude degree of $f \circ g$ is at most the product of their latitude degrees. \end{Lemm} \begin{proof}[\bf Proof] This is a standard fact about maps of the circle. \end{proof} \begin{Lemm} \label{l.numberoffixedpoints} Let $f : S \rightarrow S$ be a continuous latitude preserving surjection, not necessarily of degree $2$. If each of its directed bands $B$ has $\abs{d(B)} \leq 1$ then $f$ has more than $d(f)$ fixed points. \end{Lemm} \begin{proof}[\bf Proof] We count the directed bands as follows. \begin{itemize} \item[(a)] $a$ is the number of ascending bands with $d(B) = 1$. \item[(b)] $b$ is the number of ascending bands with $d(B) = -1$. \item[(c)] $c$ is the number of descending bands with $d(B) = 1$. \item[(d)] $d$ is the number of descending bands with $d(B) = -1$. \item[(e)] $e$ is the number of directed bands with $d(B) = 0$. \end{itemize} We think of the graph of $\varphi $ as composed of ``legs'' that join $[0,1] \times 0$ to $[0,1] \times 1$. Formally, they are arcs in the open square $(0,1) \times (0,1)$, and they correspond to the bands $B_{1}, \dots , B_{N}$. See Figures~\ref{f.polesswitch}, \ref{f.polestoSouthpole}, and \ref{f.polesfixed}. Ascending legs correspond to ascending bands, descending to descending. We call a leg corresponding to a band with $d(B) = -1$ a \textbf{reverse-leg}, and we call a leg corresponding to a band with $d(B) = 0$ a \textbf{zero-leg}. There are $b+d$ reverse-legs and $e$ zero-legs. Each intersection of the diagonal $\Delta $ with a reverse leg produces two fixed points of $f$, since such an intersection gives an $f$-invariant latitude $L$, and $f : L \rightarrow L$ reverses orientation. Each intersection of $\Delta $ with a zero-leg produces at least one fixed point of $f$, since such an intersection gives an $f$-invariant latitude $L$, and $f : L \rightarrow L$ has degree zero. Intersections of $\Delta $ with other legs need not produce fixed points. The rest of the proof is a counting argument in which there are three cases to consider, concerning how $f$ affects the poles, and twelve subcases concerning which legs $\Delta $ crosses. Let $p$ be the number of fixed poles, and $P = N_{1}(f)$ the number of fixed points of $f$ (including the fixed poles). According to Proposition~\ref{p.globaldegree} the degree of $f$ is $d(f) = (a+d)-(b+c)$. We are trying to show that $P > d(f)$. Naively, we imagine $\Delta $ crossing all the legs, and hence generating corresponding fixed points -- two for each (b)-crossing, two for each (d)-crossing, and one for each (e)-crossing. Thus we would hope $$ P \;\; \geq \;\; p + e + 2(b+d) \;\; >\;\; (a+d) - (b+c), $$ which leads us to ask whether $p + N > 2a -2b$ since $ a+b+c+d + e = N$. 
This inequality is in fact true, but we need a stronger one because if $\Delta $ fails to cross some legs of type (b), (d), or (e) then we would have over-estimated the fixed points. We write \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + e + 2(b+ d) - (a+d) + (b+c) -r \\ &= p + N -2a + 2b -r \end{split} \end{equation*} where $r$ is the correction term due to $\Delta $ missing legs of type (b), (d), (e). Thus, $$ r \, = \, \begin{cases} 0 & \textrm{ if } \Delta \textrm{ crosses all legs,} \\ 1 & \textrm{ if } \Delta \textrm{ crosses all legs except one of type (e),} \\ 2 & \textrm{ if } \Delta \textrm{ crosses all legs except two of type (e),} \\ 2 & \textrm{ if } \Delta \textrm{ crosses all legs except one of type (b) or (d),} \\ 3 & \textrm{ if } \Delta \textrm{ crosses all legs except one of type (b) or (d)} \\ &\textrm{ and one of type (e),} \\ 4 & \textrm{ if } \Delta \textrm{ crosses all legs except two of type (b) or (d).} \end{cases} $$ \underline{Case 1.} $\varphi $ switches the poles. Then $p=0$, $\varphi (0) = 1$, $\varphi (1) = 0$, $N$ is odd, and there are $(N-1)/2$ ascending legs. In particular $a$ is at most the number of ascending legs, so $2a \leq N-1$. The diagonal meets all the legs so we have $r=0$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\;0 + N -(N-1) + 2b - 0 \;\; \geq \;\; 1 \end{split} \end{equation*} since $2b \geq 0$. Thus $P> d(f)$ in Case 1. See Figure~\ref{f.polesswitch}. \begin{figure}[htbp] \centering \includegraphics[scale=.55]{polesswitch} \caption{The graph of a latitude map $\varphi $ when $f$ interchanges the poles. The diagonal $\Delta $ meets each leg. The graph is simplified -- each leg can actually intersect $\Delta $ several times; in the case shown, the graph must intersect $\Delta $ five times. Also unshown are possible arcs in the graph that join the top or bottom of the square to itself. They correspond to neutral bands} \label{f.polesswitch} \end{figure} \underline{Case 2.} $f$ sends both poles to the same pole, say the Southpole. Then $p=1$, $\varphi (0) = 0 = \varphi (1)$, $N$ is even, and there are $N/2$ ascending legs. In particular, $2a \leq N$. The diagonal crosses all the legs except possibly the first. See Figure~\ref{f.polestoSouthpole} \begin{figure}[htbp] \centering \includegraphics[scale=.55]{polestoSouthpole} \caption{The graph of a latitude map when $f$ sends both poles to the Southpole. It is also simplified. The diagonal meets each leg except perhaps the first.} \label{f.polestoSouthpole} \end{figure} \underline{Case 2a.} The first leg is of type (a). Then $\Delta $ crosses all the legs of type (b), (d), (e), which implies $r=0$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\;1 + N -N + 2b -0 \;\; \geq \;\; 1 \end{split} \end{equation*} since $2b \geq 0$. \underline{Case 2e.} The first leg is of type (e). Then $r=1$. Also, $2a \leq N-2$ since there are $N/2$ ascending legs, one of which (the first one) is of type (e), not of type (a). This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\;1 + N -(N-2) + 2b -1 \;\; \geq \;\; 2 \end{split} \end{equation*} since $2b \geq 0$. \underline{Case 2b.} The first leg is of type (b). Then $r=2$, and as in the previous case, $2a \leq N-2$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\;1 + N -(N-2) + 2b -2 \;\; \geq \;\; 3 \end{split} \end{equation*} since $2b \geq 2$. 
Thus $P > d(f)$ in Case 2. \underline{Case 3.} $f$ fixes both poles. Then $p=2$, $\varphi (0) = 0$, $\varphi (1) = 1$, $N$ is odd, and there are $(N+1)/2$ ascending legs. In particular, $2a \leq N+1$. The diagonal crosses all legs except possibly the first and last. See Figure~\ref{f.polesfixed}. \begin{figure}[htbp] \centering \includegraphics[scale=.55]{polesfixed} \caption{The graph of a latitude map when $f$ leaves the poles fixed. It is also simplified. The diagonal meets each leg except perhaps the first and last.} \label{f.polesfixed} \end{figure} \underline{Case 3aa.} The first and last legs are of type (a). Then $\Delta $ crosses all the legs of type (b), (d), (e), so $r=0$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\; 2 + N - (N+1) + 2b - 0 \;\; \geq \;\; 1 \end{split} \end{equation*} since $2b \geq 0$. \underline{Case 3ae.} The first leg is of type (a) and the last of type (e). Then $r=1$. Also, since the first and last legs ascend, and since one of them (the last one) is of type (e), not of type (a), we have $2a \leq N-1$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\;2 + N -(N-1) + 2b -1 \;\; \geq \;\; 2 \end{split} \end{equation*} since $2b \geq 0$. \underline{Case 3ab.} The first leg is of type (a) and the last of type (b). Then $r=2$ and, as in the previous case, $2a \leq N-1$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\;2 + N -(N-1) + 2b -2 \;\; \geq \;\; 3 \end{split} \end{equation*} since $2b \geq 2$. \underline{Case 3ea.} The first leg is of type (e) and the last of type (a). This is symmetric to Case~3.ae. \underline{Case 3ee.} The first and last legs are of type (e). If $N \geq 3$ then $r=2$ and $2a \leq N-3$ since two ascending legs are not of type (a). This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\;2 + N -(N-3) + 2b -2 \;\; \geq \;\; 3 \end{split} \end{equation*} since $2b \geq 0$. On the other hand, if $N = 1$ then there is just one ascending leg, and it is of type (e). Then $r=1$ and $a=0=b$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &= \;\;2 + 1 -0 + 0 -1 \;\; = \;\; 2. \end{split} \end{equation*} \underline{Case 3eb.} The first leg is of type (e) and the last of type (b). Then $N \geq 3$, $r=3$, and $2a \leq N-3$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\;2 + N -(N-3) + 2b -3 \;\; \geq \;\; 4 \end{split} \end{equation*} since $2b \geq 2$. \underline{Case 3ba.} The first leg is of type (b) and the last of type (a). This is symmetric to Case~3.ab. \underline{Case 3be.} The first leg is of type (b) and the last of type (e). This is symmetric to Case~3.eb. \underline{Case 3bb.} The first and last legs are of type (b). If $N \geq 3$ then $r=4$ and $2a \leq N-3$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &\geq \;\;2 + N -(N-3) + 2b -4 \;\; \geq \;\; 5 \end{split} \end{equation*} since $2b \geq 4$. On the other hand, if $N=1$ then there is just one ascending leg and it is of type (b). Then $r=2$, $a=0$, and $b =1$. This gives \begin{equation*} \begin{split} P - d(f) \;\; &\geq \;\; p + N -2a + 2b - r \\ &= \;\;2 + 1 -0 + 2 -2 \;\; = \;\; 3. \end{split} \end{equation*} Thus $P > d(f)$ in Case 3, which completes the proof of Lemma~\ref{l.numberoffixedpoints}. 
\end{proof} \begin{Rmk} The degree of $f$ can be negative, in which case Lemma~\ref{l.numberoffixedpoints} says nothing. One can re-work the preceding estimates to show that $$ P - \abs{d(f)} \geq -1, $$ but we do not need this. \end{Rmk} \begin{proof}[\bf Proof of Theorem~\ref{t.fixedpoints}] We assume that the $C^{1}$ latitude preserving map $f : S \rightarrow S$ has degree $2$ and claim that $f^{n}$ has at least $2^{n}$ fixed points. By Lemma~\ref{l.directedbands} there exist directed bands. If there is a directed band $B$ with $\abs{d(B)} \geq 2$ then the result follows from Proposition~\ref{p.2tothen}. If all the directed bands have $\abs{d(B)} \leq 1$ then by Lemma~\ref{l.degreeproduct} the same is true of $f^{n}$. Applying Lemma~\ref{l.numberoffixedpoints} to $f^{n}$, we conclude that $f^{n}$ has more than $d(f^{n}) = 2^{n}$ fixed points. \end{proof} \begin{Rmk} The proof of Corollary~\ref{c.GRF} does not require the full force of Lemma~\ref{l.numberoffixedpoints}. It suffices that $P = N_{1}(f) \geq d(f) - k$ where $k$ is an absolute constant. For, as just observed, if all the bands for $f$ have $\abs{d(B)} \leq 1$ then the same is true for $f^{n}$. Thus, $N_{1}(f^{n}) \geq d(f^{n}) - k$. Since $N_{1}(f^{n}) = N_{n}(f)$, we get $$ N_{n} \geq 2^{n} - k $$ from which Corollary~\ref{c.GRF} is immediate. Here is the caseless proof of this weaker inequality. Since all the bands of $f$ have $\abs{d(B)} \leq 1$, the number of fixed points of $f$ is at least $ e-2 + 2(b+d-2) $. For the graph of the latitude map has at least $e-2$ legs of type (e) that cross the diagonal, and at least $b+d-2$ legs of type (b) or (d) that cross the diagonal. The former produce one fixed point each, and the latter two fixed points each. For it is only the first and last legs of the latitude map that can fail to intersect the diagonal. This quantity minus the degree of $f$ is at least $ N -2a + 2b - 7$ where $N$ is the number of legs. That is, $$ N_{1}(f) - d(f) \geq N-2a + 2b - 7 \geq -7 $$ since $2a \leq N + 1$ and $2b \geq 0$. \end{Rmk} \section{Three Examples} \label{s.examples} It is possible to code a latitude preserving map $f$ as follows. Take any finite sequence of integers $(n_{0}; d_{1}, \dots , d_{N})$ such that $n_{0}$ is $0$ or $1$, and make the following interpretation. If $n_{0}= 0$ and $N$ is even then consider directed bands $B_{1}, \dots , B_{N}$ such that \begin{equation*} \begin{split} B_{1} & \textrm{ ascends and has latitude degree $d_{1}$} \\ B_{2} & \textrm{ descends and has latitude degree $d_{2}$} \\ B_{3} & \textrm{ ascends and has latitude degree $d_{3}$} \\ \dots & \dots \\ B_{N} & \textrm{ descends and has latitude degree $d_{N}$.} \end{split} \end{equation*} On the other hand, if $N$ is odd then the last band ascends and has latitude degree $d_{N}$. Similarly, if $n_{0} = 1$ then every ascending band becomes descending and vice versa. The latitude degrees remain the same. The choice $n_{0}= 0$ indicates that $f$ fixes the Southpole, while $n_{0} = 1$ indicates that $f$ sends the Southpole to the Northpole. Since ascending and descending bands alternate, such a code is well defined for any latitude preserving map $f : S \rightarrow S$, and it describes $f$ well up to non-monotonicity of the latitude maps $\varphi : I \rightarrow (0,1)$, the presence of neutral bands, and latitude rotation. When $n_{0} = 0$ the global degree of $f$ is the alternating sum $d_{1} - d_{2} + \dots + (-1)^{N+1}d_{N}$; when $n_{0} = 1$ it is the negative of this sum. Assume that $\abs{d_{i}} \leq 1$ for $1 \leq i \leq N$.
It is easy to see that to each code $(n_{0}; d_{1}, \dots , d_{N})$ there corresponds a smooth endomorphism $f$ with the following properties. \begin{itemize} \item The map $f$ preserves the latitude foliation, and up to homothety it is $z \mapsto z$ or $z \mapsto \overline{z}$ on each latitude. \item $S$ has $N$ bands $B_{i}$ with latitude degrees $d_{i}$. \item If $d_{i} \not= 0$ then $f$ sends $B_{i}$ diffeomorphically to the sphere minus the poles. \item If $d_{i} = 0$ then $f$ sends $B_{i}$ to the prime meridian $M$. ($M$ is the longitude arc that joins the poles and contains the point $(1/2, 0, 1/2)$.) \end{itemize} We refer to such an $f$ as a good representative of the code. It is not unique. Here are three examples. \begin{enumerate} \item The code is $(0;1,1)$. Let $f$ be a good representative of the code. Then $S$ has two bands, the Southern and Northern hemispheres, minus the poles and equator. \begin{itemize} \item $f$ wraps the Southern hemisphere upward over $S$, pinching the equator to the Northpole, and preserving the latitude orientation. It fixes the Southpole. \item $f$ wraps the Northern hemisphere downward over $S$, pinching the equator to the Northpole, and preserving the latitude orientation. It sends the Northpole to the Southpole. \end{itemize} The map $f$ has degree zero, preserves the latitude foliation, and fixes the Southpole. Its latitude map $\varphi $ is unimodal, so its entropy is $\ln 2$, and since $f$ fibers over $\varphi $ with diffeomorphisms in the circle fibers, it too has entropy $\ln 2$. Now take a latitude-preserving rotation $R$ of the sphere by an angle $\theta $ where $\theta /2\pi $ is irrational. Set $F = f \circ R$. The entropy is unaffected and $F$ preserves the latitude foliation. The only fixed point of $F^{n}$ is the Southpole, for the effect of $F^{n}$ on any invariant latitude is an irrational rotation. Thus, the Growth Rate Inequality holds for $F$, in the form $0 \geq 0$. \item The code is $(0;1,-1)$. Let $g$ be a good representative of the code. Again, $S$ has the hemisphere bands. \begin{itemize} \item $g$ wraps the Southern hemisphere upward over $S$, pinching the equator to the Northpole, and preserving the latitude orientation. It fixes the Southpole. \item $g$ wraps the Northern hemisphere downward over $S$, pinching the equator to the Northpole, and reversing the latitude orientation. It sends the Northpole to the Southpole. \end{itemize} The map $g$ has degree $2$ and fixes the Southpole. Again, let $R$ be an irrational rotation of the sphere and set $G = g \circ R$. The map $G$ preserves the latitude foliation. Its latitude map $\varphi $ is unimodal, so its entropy is $\ln 2$, and since $G$ fibers over $\varphi $ with diffeomorphisms in the circle fibers, it too has entropy $\ln 2$. The map $\varphi ^{n}$ has $2^{n}$ fixed points. Each corresponds to a $G^{n}$-invariant latitude $L$. On half of them, $G^{n}$ preserves the latitude orientation, and on half of them $G^{n}$ reverses it. On the latitudes with preserved orientation, $G^{n}$ is an irrational rotation and has no periodic points. On the latitudes with reversed orientation $G^{n}$ has two fixed points. Altogether, $G^{n}$ has $2^{n}$ fixed points, so the Growth Rate Inequality holds for $G$, in the form $\ln 2 \geq \ln 2$. \item The code is $(0;1,0,1)$. Let $h$ be a good representative of the code. Then $S$ has three bands $B_{1}, B_{2}, B_{3}$.
\begin{itemize} \item $h$ wraps $B_{1}$ upward over $S$, pinching its boundary latitude to the Northpole, and preserving the latitude orientation. It fixes the Southpole. \item $h$ wraps $B_{2}$ downward along the prime meridian $M$, pinching its lower boundary latitude to the Northpole and its upper boundary latitude to the Southpole. The $h$-image of $B_{2}$ equals $M$. \item $h$ wraps $B_{3}$ upward over $S$, pinching its boundary latitude to the Southpole, and preserving the latitude orientation. It fixes the Northpole. \end{itemize} The map $h$ has degree $2$ and fixes both poles. Its latitude map $\varphi $ is bimodal, so its entropy is $\ln 3$, and since $h$ fibers over $\varphi $ with diffeomorphisms in the circle fibers, it too has entropy $\ln 3$. Take again an irrational rotation of the sphere, $R$, and set $H = h \circ R$. The map $\varphi ^{n}$ has $3^{n}$ fixed points. For the majority of them, their $\varphi $-orbits contain points in $I_{2}$. (That is, the proportion of the $\varphi ^{n}$-fixed-points whose orbit includes points of $I_{2}$ tends to $1$ as $ n\rightarrow \infty $. The other orbits lie in a zero measure Cantor set.) For each such fixed point of $\varphi ^{n}$ we have an $H^{n}$-invariant latitude $L$, at least one of whose $H$-iterates lies in $B_{2}$, and therefore $H^{n}(L) $ is the single point $L \cap M$. The other fixed points of $\varphi ^{n}$ correspond to $H^{n}$-invariant latitudes whose $H$-orbits avoid $B_{2}$. On these latitudes, $H^{n}$ is an irrational rotation, so they give no fixed points. Altogether, nearly every one of the $3^{n}$ fixed points of $\varphi ^{n}$ contributes one fixed point of $H^{n}$, so the Growth Rate Inequality holds for $H$, in the form $\ln 3 \geq \ln 2$. \end{enumerate} \begin{Rmk} It is possible that Lemma~\ref{l.numberoffixedpoints} and our theorem have proofs using a coding like this. We would need to know how the coding is affected when we take a banding $B_{1},\dots , B_{N}$ of $S$ and consider a sub-banding of each $B_{i}$ as $$ B_{ij} = B_{i} \cap f^{-1}(B_{j}). $$ \end{Rmk}
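The alternating-sum formula for the global degree of a coded map is mechanical to evaluate. The following sketch (our own illustration, not part of the paper) reproduces the degrees of the three examples above:
\begin{verbatim}
# Global degree of a latitude preserving map from its code (n0; d_1,...,d_N),
# via d(f) = sum over directed bands of Delta_I(phi) * d(B).
def global_degree(n0, ds):
    sign = -1 if n0 == 1 else 1          # n0 = 1 swaps ascending/descending
    return sign * sum((-1) ** i * d for i, d in enumerate(ds))

print(global_degree(0, [1, 1]))          # example 1: degree 0
print(global_degree(0, [1, -1]))         # example 2: degree 2
print(global_degree(0, [1, 0, 1]))       # example 3: degree 2
\end{verbatim}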
{ "timestamp": "2012-10-16T02:02:01", "yymm": "1210", "arxiv_id": "1210.3717", "language": "en", "url": "https://arxiv.org/abs/1210.3717" }
\subsection{Discrete Log based Equivocal Commitment Scheme \comdl} \label{commitdl} The committer and the receiver are given a group $\mathbb{G}$ of prime order $q$, a generator $g$ of $\mathbb{G}$, and an element $B \in \mathbb{G}$ such that $q$ is an $n$-bit prime. To commit to $x \in \mathbb{Z}_q$, choose $r \xleftarrow{\$} \mathbb{Z}_q$ and send $Z = g^x\cdot B^r$. To open, the sender sends $(x,r)$. This commitment scheme is perfectly hiding, i.e., $\comdl(x)$ and $\comdl(x')$ are identically distributed. If the committer does not know the discrete log of $B$, then $\comdl$ is computationally binding under the discrete log assumption. We assume that the discrete log assumption holds in all the groups we consider. Also, if $Z$ is a commitment under $\comdl$, then given two distinct openings of $Z$ to $(x,r)$ and $(x',r')$ such that $x\ne x'$, one can easily solve for the discrete log of $B$, say $b$, as follows: $b = (x-x')\cdot(r'-r)^{-1}$. Also, if the simulator knows the discrete log of $B$, say $b$, it can open $Z = \comdl(x;r)$ as being a commitment to any $x' \in \mathbb{Z}_q$ by sending $r' = \comopendl(x, x', r, b) = (x+r\cdot b - x')\cdot b^{-1}$. \section{Discussion regarding use of auxiliary inputs for concurrent simulation} \label{app:KEAUX} A potentially promising idea for using \ka s for concurrent simulation is the following: Formulate a \ka\ that holds for all auxiliary inputs for the adversary, and then invoke the knowledge extractor provided by the \ka\ with different auxiliary inputs corresponding to the extraction history. In other words, one could attempt to apply a single extractor iteratively for different concurrent sessions, passing along all the information extracted so far as auxiliary input to the extractor. However, similar to the example discussed in the Introduction concerning a potential ``interactive'' \ka, a problem may arise if the auxiliary input contains ``external knowledge'' and thereby prevents extraction. We stress that there is an important distinction between why this fails and why the interactive knowledge assumption fails. Here we are not saying that a \ka\ which holds with regard to all auxiliary inputs must be false. Rather we are saying that any natural application of such an assumption to the concurrent setting would fail. This is because it would cause us to invoke the extractor with auxiliary inputs that impermissibly correlate with messages received by the adversary in earlier executions of \kcp. By the definition of auxiliary input, an extractor would not be required to function in such a case. To make the intuition precise, consider an example of such an iterative application of a knowledge assumption in the concurrent setting. Suppose the adversary schedules the messages of the malicious committer (MC) as follows: First, MC asks for the random first message of the Receiver (R) in the \kcp\ for all the sessions ($r_1,r_2,\ldots,r_m$). Now, MC chooses a function $f$ and completes the first \kcp\ by committing to $f(r_1,r_2,\ldots,r_m)$. We apply the knowledge assumption to recover $f(r_1,r_2,\ldots,r_m)$. Next, the MC completes another \kcp. Now, in order to extract, we need to provide the extractor with one of the random $r_i$'s as input and $f(r_1,r_2,\ldots,r_m)$ as auxiliary input. But here, depending on the function $f$, this auxiliary input may be highly correlated with the input $r_i$. In this case, the extractor is \emph{allowed} to fail with high probability. This is because the extractor is only required to work for the fixed auxiliary input $aux = f(r_1, r_2, \dots, r_m)$, when $r_i$ is chosen at random independently of $aux$. However, the actual simulation would use an $aux$ that correlates with the input $r_i$.
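For concreteness, the following is a minimal sketch of \comdl\ over a toy Schnorr group (our own illustration; the parameters are small demonstration values, not a secure instantiation), showing commitment, opening, trapdoor equivocation, and how two distinct openings reveal the trapdoor:
\begin{verbatim}
# Toy sketch of the equivocal commitment Com_DL (illustrative parameters).
import secrets

q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # 2039 is prime, so Z_p^* has a subgroup of order q
g = 4                    # a square mod p, hence a generator of that subgroup
b = secrets.randbelow(q - 1) + 1     # trapdoor: discrete log of B
B = pow(g, b, p)                     # the committer is given B, not b

def commit(x):
    """Commit to x in Z_q; returns (commitment Z, opening r)."""
    r = secrets.randbelow(q)
    return pow(g, x, p) * pow(B, r, p) % p, r

def verify(Z, x, r):
    return Z == pow(g, x, p) * pow(B, r, p) % p

def equivocate(x, r, x_new):
    """Trapdoor opening r' = (x + r*b - x') * b^{-1} mod q."""
    return (x + r * b - x_new) * pow(b, -1, q) % q

Z, r = commit(7)
assert verify(Z, 7, r)
r2 = equivocate(7, r, 11)
assert verify(Z, 11, r2)             # the same Z opens to a different value
# Two distinct openings reveal the trapdoor: b = (x - x')(r' - r)^{-1} mod q.
assert (7 - 11) * pow(r2 - r, -1, q) % q == b
\end{verbatim}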
\section{Related Work} \label{Sec:RW} \textbf{Knowledge Assumptions:} Knowledge or extractability assumptions capture our belief that certain computational tasks can be done efficiently only by going through certain specific intermediate stages and generating some specific kinds of intermediate values. One such class of assumptions is that of Knowledge of Exponent Assumptions, which were first introduced by Damgard~\cite{Dam92} to construct a CCA secure encryption scheme. Though these assumptions do not fall into the class of falsifiable assumptions~\cite{Naor03}, they have been proven secure against generic algorithms~\cite{Nechaev94,Shoup97,Dent06}, thus offering some evidence of validity. Hada and Tanaka~\cite{HT98} gave a three round zero-knowledge protocol using two knowledge of exponent assumptions. Later, Bellare and Palacio~\cite{BP04} proved that the assumption used for proving the soundness of the protocol was false, proposed a modified assumption, and recovered the earlier result. We stress that in our protocol, we are able to argue soundness directly without the use of any knowledge assumption. Extending the assumption of~\cite{BP04}, Abe and Fehr~\cite{AF07} constructed the first perfect NIZK for NP with full adaptive soundness. Under a knowledge of exponent assumption, Prabhakaran and Xue~\cite{PX09} constructed statistically hiding sets based on trapdoor DDH groups~\cite{DG06}. Gennaro et al.~\cite{GKR10} modify the Okamoto-Tanaka key agreement protocol to get perfect forward secrecy. Recently, Groth~\cite{Groth10} generalized the assumption of~\cite{AF07} to obtain short non-interactive perfect zero-knowledge arguments for circuit satisfiability. Another set of knowledge assumptions used recently is that of extractable functions~\cite{CD08,CD09}. Each of~\cite{BCCT12,DFH12,GLR11} gives a construction of Extractable Collision Resistant Hash functions (ECRH) using Knowledge of Exponent Assumptions. Then, assuming the existence of ECRH, Bitansky et al.~\cite{BCCT12} modify the construction of~\cite{DCL08} and prove that the modified construction is a succinct non-interactive adaptive argument of knowledge (SNARK). They also show that the existence of SNARKs implies the existence of (their notion of) ECRH. In the CRS model, they combined NIZK and SNARKs to give zero-knowledge non-interactive arguments. On the other hand, Damgard et al.~\cite{DFH12} also use ECRH to construct succinct non-interactive arguments in the CRS model. Using these, they give a two-message protocol for two-party computation which is UC-secure. \textbf{Concurrent Zero-Knowledge:} The difficulty in constructing a round-efficient $c\mathcal{ZK}$ was first observed by Dwork et al.~\cite{DNA98}. Following this, rigorous lower bounds on the round complexity of $c\mathcal{ZK}$ for NP with a black-box simulator have been proven in~\cite{KPR98,Rosen00,CKPR01}; the best lower bound being $\Omega(\log n/\log \log n)$ rounds given by Canetti et al.~\cite{CKPR01}. \\ Barak~\cite{Barak01} gave a constant round protocol for all NP with a non-black-box simulator for zero-knowledge. Also, for any predetermined polynomial $p(\cdot)$, this constant round protocol is zero-knowledge even when $p(n)$ sessions are concurrently executed. But it has a major drawback.
The polynomial $p(\cdot)$ has to be fixed at the beginning of the protocol, and the message lengths grow linearly in $p(n)$. Kilian and Petrank~\cite{KP01} gave a poly-logarithmic round protocol which is zero-knowledge even when it is executed concurrently for any (not predetermined) polynomial number of times. The gap between the upper and lower bounds on the round complexity of black-box $c\mathcal{ZK}$ was closed by Prabhakaran, Rosen, and Sahai~\cite{PRS02}, who gave an $\tilde{O}(\log n)$ round protocol. Since then, improving the round complexity of concurrent zero-knowledge has been an open problem.
{ "timestamp": "2012-10-16T02:02:03", "yymm": "1210", "arxiv_id": "1210.3719", "language": "en", "url": "https://arxiv.org/abs/1210.3719" }
\section*{The Pierre Auger Collaboration} P.~Abreu$^{63}$, M.~Aglietta$^{51}$, M.~Ahlers$^{94}$, E.J.~Ahn$^{81}$, I.F.M.~Albuquerque$^{15}$, D.~Allard$^{29}$, I.~Allekotte$^{1}$, J.~Allen$^{85}$, P.~Allison$^{87}$, A.~Almela$^{11,\: 7}$, J.~Alvarez Castillo$^{56}$, J.~Alvarez-Mu\~{n}iz$^{73}$, R.~Alves Batista$^{16}$, M.~Ambrosio$^{45}$, A.~Aminaei$^{57}$, L.~Anchordoqui$^{95}$, S.~Andringa$^{63}$, T.~Anti\v{c}i'{c}$^{23}$, C.~Aramo$^{45}$, E.~Arganda$^{4,\: 70}$, F.~Arqueros$^{70}$, H.~Asorey$^{1}$, P.~Assis$^{63}$, J.~Aublin$^{31}$, M.~Ave$^{37}$, M.~Avenier$^{32}$, G.~Avila$^{10}$, A.M.~Badescu$^{66}$, M.~Balzer$^{36}$, K.B.~Barber$^{12}$, A.F.~Barbosa$^{13~\ddag}$, R.~Bardenet$^{30}$, S.L.C.~Barroso$^{18}$, B.~Baughman$^{87~f}$, J.~B\"{a}uml$^{35}$, C.~Baus$^{37}$, J.J.~Beatty$^{87}$, K.H.~Becker$^{34}$, A.~Bell\'{e}toile$^{33}$, J.A.~Bellido$^{12}$, S.~BenZvi$^{94}$, C.~Berat$^{32}$, X.~Bertou$^{1}$, P.L.~Biermann$^{38}$, P.~Billoir$^{31}$, F.~Blanco$^{70}$, M.~Blanco$^{31,\: 71}$, C.~Bleve$^{34}$, H.~Bl\"{u}mer$^{37,\: 35}$, M.~Boh\'{a}\v{c}ov\'{a}$^{25}$, D.~Boncioli$^{46}$, C.~Bonifazi$^{21,\: 31}$, R.~Bonino$^{51}$, N.~Borodai$^{61}$, J.~Brack$^{79}$, I.~Brancus$^{64}$, P.~Brogueira$^{63}$, W.C.~Brown$^{80}$, R.~Bruijn$^{75~i}$, P.~Buchholz$^{41}$, A.~Bueno$^{72}$, L.~Buroker$^{95}$, R.E.~Burton$^{77}$, K.S.~Caballero-Mora$^{88}$, B.~Caccianiga$^{44}$, L.~Caramete$^{38}$, R.~Caruso$^{47}$, A.~Castellina$^{51}$, O.~Catalano$^{50}$, G.~Cataldi$^{49}$, L.~Cazon$^{63}$, R.~Cester$^{48}$, J.~Chauvin$^{32}$, S.H.~Cheng$^{88}$, A.~Chiavassa$^{51}$, J.A.~Chinellato$^{16}$, J.~Chirinos Diaz$^{84}$, J.~Chudoba$^{25}$, M.~Cilmo$^{45}$, R.W.~Clay$^{12}$, G.~Cocciolo$^{49}$, L.~Collica$^{44}$, M.R.~Coluccia$^{49}$, R.~Concei\c{c}\~{a}o$^{63}$, F.~Contreras$^{9}$, H.~Cook$^{75}$, M.J.~Cooper$^{12}$, J.~Coppens$^{57,\: 59}$, A.~Cordier$^{30}$, S.~Coutu$^{88}$, C.E.~Covault$^{77}$, A.~Creusot$^{29}$, A.~Criss$^{88}$, J.~Cronin$^{90}$, A.~Curutiu$^{38}$, S.~Dagoret-Campagne$^{30}$, R.~Dallier$^{33}$, B.~Daniel$^{16}$, S.~Dasso$^{5,\: 3}$, K.~Daumiller$^{35}$, B.R.~Dawson$^{12}$, R.M.~de Almeida$^{22}$, M.~De Domenico$^{47}$, C.~De Donato$^{56}$, S.J.~de Jong$^{57,\: 59}$, G.~De La Vega$^{8}$, W.J.M.~de Mello Junior$^{16}$, J.R.T.~de Mello Neto$^{21}$, I.~De Mitri$^{49}$, V.~de Souza$^{14}$, K.D.~de Vries$^{58}$, L.~del Peral$^{71}$, M.~del R\'{\i}o$^{46,\: 9}$, O.~Deligny$^{28}$, H.~Dembinski$^{37}$, N.~Dhital$^{84}$, C.~Di Giulio$^{46,\: 43}$, M.L.~D\'{\i}az Castro$^{13}$, P.N.~Diep$^{96}$, F.~Diogo$^{63}$, C.~Dobrigkeit $^{16}$, W.~Docters$^{58}$, J.C.~D'Olivo$^{56}$, P.N.~Dong$^{96,\: 28}$, A.~Dorofeev$^{79}$, J.C.~dos Anjos$^{13}$, M.T.~Dova$^{4}$, D.~D'Urso$^{45}$, I.~Dutan$^{38}$, J.~Ebr$^{25}$, R.~Engel$^{35}$, M.~Erdmann$^{39}$, C.O.~Escobar$^{81,\: 16}$, J.~Espadanal$^{63}$, A.~Etchegoyen$^{7,\: 11}$, P.~Facal San Luis$^{90}$, H.~Falcke$^{57,\: 60,\: 59}$, K.~Fang$^{90}$, G.~Farrar$^{85}$, A.C.~Fauth$^{16}$, N.~Fazzini$^{81}$, A.P.~Ferguson$^{77}$, B.~Fick$^{84}$, J.M.~Figueira$^{7}$, A.~Filevich$^{7}$, A.~Filip\v{c}i\v{c}$^{67,\: 68}$, S.~Fliescher$^{39}$, C.E.~Fracchiolla$^{79}$, E.D.~Fraenkel$^{58}$, O.~Fratu$^{66}$, U.~Fr\"{o}hlich$^{41}$, B.~Fuchs$^{37}$, R.~Gaior$^{31}$, R.F.~Gamarra$^{7}$, S.~Gambetta$^{42}$, B.~Garc\'{\i}a$^{8}$, S.T.~Garcia Roca$^{73}$, D.~Garcia-Gamez$^{30}$, D.~Garcia-Pinto$^{70}$, G.~Garilli$^{47}$, A.~Gascon Bravo$^{72}$, H.~Gemmeke$^{36}$, P.L.~Ghia$^{31}$, M.~Giller$^{62}$, J.~Gitto$^{8}$, H.~Glass$^{81}$, M.S.~Gold$^{93}$, G.~Golup$^{1}$, F.~Gomez 
Albarracin$^{4}$, M.~G\'{o}mez Berisso$^{1}$, P.F.~G\'{o}mez Vitale$^{10}$, P.~Gon\c{c}alves$^{63}$, J.G.~Gonzalez$^{35}$, B.~Gookin$^{79}$, A.~Gorgi$^{51}$, P.~Gouffon$^{15}$, E.~Grashorn$^{87}$, S.~Grebe$^{57,\: 59}$, N.~Griffith$^{87}$, A.F.~Grillo$^{52}$, Y.~Guardincerri$^{3}$, F.~Guarino$^{45}$, G.P.~Guedes$^{17}$, P.~Hansen$^{4}$, D.~Harari$^{1}$, T.A.~Harrison$^{12}$, J.L.~Harton$^{79}$, A.~Haungs$^{35}$, T.~Hebbeker$^{39}$, D.~Heck$^{35}$, A.E.~Herve$^{12}$, G.C.~Hill$^{12}$, C.~Hojvat$^{81}$, N.~Hollon$^{90}$, V.C.~Holmes$^{12}$, P.~Homola$^{61}$, J.R.~H\"{o}randel$^{57,\: 59}$, P.~Horvath$^{26}$, M.~Hrabovsk\'{y}$^{26,\: 25}$, D.~Huber$^{37}$, T.~Huege$^{35}$, A.~Insolia$^{47}$, F.~Ionita$^{90}$, A.~Italiano$^{47}$, S.~Jansen$^{57,\: 59}$, C.~Jarne$^{4}$, S.~Jiraskova$^{57}$, M.~Josebachuili$^{7}$, K.~Kadija$^{23}$, K.H.~Kampert$^{34}$, P.~Karhan$^{24}$, P.~Kasper$^{81}$, I.~Katkov$^{37}$, B.~K\'{e}gl$^{30}$, B.~Keilhauer$^{35}$, A.~Keivani$^{83}$, J.L.~Kelley$^{57}$, E.~Kemp$^{16}$, R.M.~Kieckhafer$^{84}$, H.O.~Klages$^{35}$, M.~Kleifges$^{36}$, J.~Kleinfeller$^{9,\: 35}$, J.~Knapp$^{75}$, D.-H.~Koang$^{32}$, K.~Kotera$^{90}$, N.~Krohm$^{34}$, O.~Kr\"{o}mer$^{36}$, D.~Kruppke-Hansen$^{34}$, D.~Kuempel$^{39,\: 41}$, J.K.~Kulbartz$^{40}$, N.~Kunka$^{36}$, G.~La Rosa$^{50}$, C.~Lachaud$^{29}$, D.~LaHurd$^{77}$, L.~Latronico$^{51}$, R.~Lauer$^{93}$, P.~Lautridou$^{33}$, S.~Le Coz$^{32}$, M.S.A.B.~Le\~{a}o$^{20}$, D.~Lebrun$^{32}$, P.~Lebrun$^{81}$, M.A.~Leigui de Oliveira$^{20}$, A.~Letessier-Selvon$^{31}$, I.~Lhenry-Yvon$^{28}$, K.~Link$^{37}$, R.~L\'{o}pez$^{53}$, A.~Lopez Ag\"{u}era$^{73}$, K.~Louedec$^{32,\: 30}$, J.~Lozano Bahilo$^{72}$, L.~Lu$^{75}$, A.~Lucero$^{7}$, M.~Ludwig$^{37}$, H.~Lyberis$^{21,\: 28}$, M.C.~Maccarone$^{50}$, C.~Macolino$^{31}$, S.~Maldera$^{51}$, J.~Maller$^{33}$, D.~Mandat$^{25}$, P.~Mantsch$^{81}$, A.G.~Mariazzi$^{4}$, J.~Marin$^{9,\: 51}$, V.~Marin$^{33}$, I.C.~Maris$^{31}$, H.R.~Marquez Falcon$^{55}$, G.~Marsella$^{49}$, D.~Martello$^{49}$, L.~Martin$^{33}$, H.~Martinez$^{54}$, O.~Mart\'{\i}nez Bravo$^{53}$, D.~Martraire$^{28}$, J.J.~Mas\'{\i}as Meza$^{3}$, H.J.~Mathes$^{35}$, J.~Matthews$^{83}$, J.A.J.~Matthews$^{93}$, G.~Matthiae$^{46}$, D.~Maurel$^{35}$, D.~Maurizio$^{13,\: 48}$, P.O.~Mazur$^{81}$, G.~Medina-Tanco$^{56}$, M.~Melissas$^{37}$, D.~Melo$^{7}$, E.~Menichetti$^{48}$, A.~Menshikov$^{36}$, P.~Mertsch$^{74}$, S.~Messina$^{58}$, C.~Meurer$^{39}$, R.~Meyhandan$^{91}$, S.~Mi'{c}anovi'{c}$^{23}$, M.I.~Micheletti$^{6}$, I.A.~Minaya$^{70}$, L.~Miramonti$^{44}$, L.~Molina-Bueno$^{72}$, S.~Mollerach$^{1}$, M.~Monasor$^{90}$, D.~Monnier Ragaigne$^{30}$, F.~Montanet$^{32}$, B.~Morales$^{56}$, C.~Morello$^{51}$, E.~Moreno$^{53}$, J.C.~Moreno$^{4}$, M.~Mostaf\'{a}$^{79}$, C.A.~Moura$^{20}$, M.A.~Muller$^{16}$, G.~M\"{u}ller$^{39}$, M.~M\"{u}nchmeyer$^{31}$, R.~Mussa$^{48}$, G.~Navarra$^{51~\ddag}$, J.L.~Navarro$^{72}$, S.~Navas$^{72}$, P.~Necesal$^{25}$, L.~Nellen$^{56}$, A.~Nelles$^{57,\: 59}$, J.~Neuser$^{34}$, P.T.~Nhung$^{96}$, M.~Niechciol$^{41}$, L.~Niemietz$^{34}$, N.~Nierstenhoefer$^{34}$, D.~Nitz$^{84}$, D.~Nosek$^{24}$, L.~No\v{z}ka$^{25}$, J.~Oehlschl\"{a}ger$^{35}$, A.~Olinto$^{90}$, M.~Ortiz$^{70}$, N.~Pacheco$^{71}$, D.~Pakk Selmi-Dei$^{16}$, M.~Palatka$^{25}$, J.~Pallotta$^{2}$, N.~Palmieri$^{37}$, G.~Parente$^{73}$, E.~Parizot$^{29}$, A.~Parra$^{73}$, S.~Pastor$^{69}$, T.~Paul$^{86}$, M.~Pech$^{25}$, J.~P\c{e}kala$^{61}$, R.~Pelayo$^{53,\: 73}$, I.M.~Pepe$^{19}$, L.~Perrone$^{49}$, R.~Pesce$^{42}$, E.~Petermann$^{92}$, 
S.~Petrera$^{43}$, A.~Petrolini$^{42}$, Y.~Petrov$^{79}$, C.~Pfendner$^{94}$, R.~Piegaia$^{3}$, T.~Pierog$^{35}$, P.~Pieroni$^{3}$, M.~Pimenta$^{63}$, V.~Pirronello$^{47}$, M.~Platino$^{7}$, M.~Plum$^{39}$, V.H.~Ponce$^{1}$, M.~Pontz$^{41}$, A.~Porcelli$^{35}$, P.~Privitera$^{90}$, M.~Prouza$^{25}$, E.J.~Quel$^{2}$, S.~Querchfeld$^{34}$, J.~Rautenberg$^{34}$, O.~Ravel$^{33}$, D.~Ravignani$^{7}$, B.~Revenu$^{33}$, J.~Ridky$^{25}$, S.~Riggi$^{73}$, M.~Risse$^{41}$, P.~Ristori$^{2}$, H.~Rivera$^{44}$, V.~Rizi$^{43}$, J.~Roberts$^{85}$, W.~Rodrigues de Carvalho$^{73}$, G.~Rodriguez$^{73}$, I.~Rodriguez Cabo$^{73}$, J.~Rodriguez Martino$^{9}$, J.~Rodriguez Rojo$^{9}$, M.D.~Rodr\'{\i}guez-Fr\'{\i}as$^{71}$, G.~Ros$^{71}$, J.~Rosado$^{70}$, T.~Rossler$^{26}$, M.~Roth$^{35}$, B.~Rouill\'{e}-d'Orfeuil$^{90}$, E.~Roulet$^{1}$, A.C.~Rovero$^{5}$, C.~R\"{u}hle$^{36}$, A.~Saftoiu$^{64}$, F.~Salamida$^{28}$, H.~Salazar$^{53}$, F.~Salesa Greus$^{79}$, G.~Salina$^{46}$, F.~S\'{a}nchez$^{7}$, C.E.~Santo$^{63}$, E.~Santos$^{63}$, E.M.~Santos$^{21}$, F.~Sarazin$^{78}$, B.~Sarkar$^{34}$, S.~Sarkar$^{74}$, R.~Sato$^{9}$, N.~Scharf$^{39}$, V.~Scherini$^{44}$, H.~Schieler$^{35}$, P.~Schiffer$^{40,\: 39}$, A.~Schmidt$^{36}$, O.~Scholten$^{58}$, H.~Schoorlemmer$^{57,\: 59}$, J.~Schovancova$^{25}$, P.~Schov\'{a}nek$^{25}$, F.~Schr\"{o}der$^{35}$, D.~Schuster$^{78}$, S.J.~Sciutto$^{4}$, M.~Scuderi$^{47}$, A.~Segreto$^{50}$, M.~Settimo$^{41}$, A.~Shadkam$^{83}$, R.C.~Shellard$^{13}$, I.~Sidelnik$^{7}$, G.~Sigl$^{40}$, H.H.~Silva Lopez$^{56}$, O.~Sima$^{65}$, A.~'{S}mia\l kowski$^{62}$, R.~\v{S}m\'{\i}da$^{35}$, G.R.~Snow$^{92}$, P.~Sommers$^{88}$, J.~Sorokin$^{12}$, H.~Spinka$^{76,\: 81}$, R.~Squartini$^{9}$, Y.N.~Srivastava$^{86}$, S.~Stanic$^{68}$, J.~Stapleton$^{87}$, J.~Stasielak$^{61}$, M.~Stephan$^{39}$, A.~Stutz$^{32}$, F.~Suarez$^{7}$, T.~Suomij\"{a}rvi$^{28}$, A.D.~Supanitsky$^{5}$, T.~\v{S}u\v{s}a$^{23}$, M.S.~Sutherland$^{83}$, J.~Swain$^{86}$, Z.~Szadkowski$^{62}$, M.~Szuba$^{35}$, A.~Tapia$^{7}$, M.~Tartare$^{32}$, O.~Ta\c{s}c\u{a}u$^{34}$, R.~Tcaciuc$^{41}$, N.T.~Thao$^{96}$, D.~Thomas$^{79}$, J.~Tiffenberg$^{3}$, C.~Timmermans$^{59,\: 57}$, W.~Tkaczyk$^{62~\ddag}$, C.J.~Todero Peixoto$^{14}$, G.~Toma$^{64}$, L.~Tomankova$^{25}$, B.~Tom\'{e}$^{63}$, A.~Tonachini$^{48}$, G.~Torralba Elipe$^{73}$, P.~Travnicek$^{25}$, D.B.~Tridapalli$^{15}$, G.~Tristram$^{29}$, E.~Trovato$^{47}$, M.~Tueros$^{73}$, R.~Ulrich$^{35}$, M.~Unger$^{35}$, M.~Urban$^{30}$, J.F.~Vald\'{e}s Galicia$^{56}$, I.~Vali\~{n}o$^{73}$, L.~Valore$^{45}$, G.~van Aar$^{57}$, A.M.~van den Berg$^{58}$, S.~van Velzen$^{57}$, A.~van Vliet$^{40}$, E.~Varela$^{53}$, B.~Vargas C\'{a}rdenas$^{56}$, J.R.~V\'{a}zquez$^{70}$, R.A.~V\'{a}zquez$^{73}$, D.~Veberi\v{c}$^{68,\: 67}$, V.~Verzi$^{46}$, J.~Vicha$^{25}$, M.~Videla$^{8}$, L.~Villase\~{n}or$^{55}$, H.~Wahlberg$^{4}$, P.~Wahrlich$^{12}$, O.~Wainberg$^{7,\: 11}$, D.~Walz$^{39}$, A.A.~Watson$^{75}$, M.~Weber$^{36}$, K.~Weidenhaupt$^{39}$, A.~Weindl$^{35}$, F.~Werner$^{35}$, S.~Westerhoff$^{94}$, B.J.~Whelan$^{88,\: 12}$, A.~Widom$^{86}$, G.~Wieczorek$^{62}$, L.~Wiencke$^{78}$, B.~Wilczy\'{n}ska$^{61}$, H.~Wilczy\'{n}ski$^{61}$, M.~Will$^{35}$, C.~Williams$^{90}$, T.~Winchen$^{39}$, M.~Wommer$^{35}$, B.~Wundheiler$^{7}$, T.~Yamamoto$^{90~a}$, T.~Yapici$^{84}$, P.~Younk$^{41,\: 82}$, G.~Yuan$^{83}$, A.~Yushkov$^{73}$, B.~Zamorano Garcia$^{72}$, E.~Zas$^{73}$, D.~Zavrtanik$^{68,\: 67}$, M.~Zavrtanik$^{67,\: 68}$, I.~Zaw$^{85~h}$, A.~Zepeda$^{54~b}$, J.~Zhou$^{90}$, Y.~Zhu$^{36}$, M.~Zimbres 
Silva$^{34,\: 16}$, M.~Ziolkowski$^{41}$ \\ } \affil{$^{\dag}$ Av. San Mart\'in Norte 306, 5613 Malarg\"ue, Mendoza, Argentina; www.auger.org \\ $^{1}$ Centro At\'{o}mico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET), San Carlos de Bariloche, Argentina \\ $^{2}$ Centro de Investigaciones en L\'{a}seres y Aplicaciones, CITEDEF and CONICET, Argentina \\ $^{3}$ Departamento de F\'{\i}sica, FCEyN, Universidad de Buenos Aires y CONICET, Argentina \\ $^{4}$ IFLP, Universidad Nacional de La Plata and CONICET, La Plata, Argentina \\ $^{5}$ Instituto de Astronom\'{\i}a y F\'{\i}sica del Espacio (CONICET-UBA), Buenos Aires, Argentina \\ $^{6}$ Instituto de F\'{\i}sica de Rosario (IFIR) - CONICET/U.N.R. and Facultad de Ciencias Bioqu\'{\i}micas y Farmac\'{e}uticas U.N.R., Rosario, Argentina \\ $^{7}$ Instituto de Tecnolog\'{\i}as en Detecci\'{o}n y Astropart\'{\i}culas (CNEA, CONICET, UNSAM), Buenos Aires, Argentina \\ $^{8}$ National Technological University, Faculty Mendoza (CONICET/CNEA), Mendoza, Argentina \\ $^{9}$ Observatorio Pierre Auger, Malarg\"{u}e, Argentina \\ $^{10}$ Observatorio Pierre Auger and Comisi\'{o}n Nacional de Energ\'{\i}a At\'{o}mica, Malarg\"{u}e, Argentina \\ $^{11}$ Universidad Tecnol\'{o}gica Nacional - Facultad Regional Buenos Aires, Buenos Aires, Argentina \\ $^{12}$ University of Adelaide, Adelaide, S.A., Australia \\ $^{13}$ Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro, RJ, Brazil \\ $^{14}$ Universidade de S\~{a}o Paulo, Instituto de F\'{\i}sica, S\~{a}o Carlos, SP, Brazil \\ $^{15}$ Universidade de S\~{a}o Paulo, Instituto de F\'{\i}sica, S\~{a}o Paulo, SP, Brazil \\ $^{16}$ Universidade Estadual de Campinas, IFGW, Campinas, SP, Brazil \\ $^{17}$ Universidade Estadual de Feira de Santana, Brazil \\ $^{18}$ Universidade Estadual do Sudoeste da Bahia, Vitoria da Conquista, BA, Brazil \\ $^{19}$ Universidade Federal da Bahia, Salvador, BA, Brazil \\ $^{20}$ Universidade Federal do ABC, Santo Andr\'{e}, SP, Brazil \\ $^{21}$ Universidade Federal do Rio de Janeiro, Instituto de F\'{\i}sica, Rio de Janeiro, RJ, Brazil \\ $^{22}$ Universidade Federal Fluminense, EEIMVR, Volta Redonda, RJ, Brazil \\ $^{23}$ Rudjer Bo\v{s}kovi'{c} Institute, 10000 Zagreb, Croatia \\ $^{24}$ Charles University, Faculty of Mathematics and Physics, Institute of Particle and Nuclear Physics, Prague, Czech Republic \\ $^{25}$ Institute of Physics of the Academy of Sciences of the Czech Republic, Prague, Czech Republic \\ $^{26}$ Palacky University, RCPTM, Olomouc, Czech Republic \\ $^{28}$ Institut de Physique Nucl\'{e}aire d'Orsay (IPNO), Universit\'{e} Paris 11, CNRS-IN2P3, Orsay, France \\ $^{29}$ Laboratoire AstroParticule et Cosmologie (APC), Universit\'{e} Paris 7, CNRS-IN2P3, Paris, France \\ $^{30}$ Laboratoire de l'Acc\'{e}l\'{e}rateur Lin\'{e}aire (LAL), Universit\'{e} Paris 11, CNRS-IN2P3, France \\ $^{31}$ Laboratoire de Physique Nucl\'{e}aire et de Hautes Energies (LPNHE), Universit\'{e}s Paris 6 et Paris 7, CNRS-IN2P3, Paris, France \\ $^{32}$ Laboratoire de Physique Subatomique et de Cosmologie (LPSC), Universit\'{e} Joseph Fourier Grenoble, CNRS-IN2P3, Grenoble INP, France \\ $^{33}$ SUBATECH, \'{E}cole des Mines de Nantes, CNRS-IN2P3, Universit\'{e} de Nantes, France \\ $^{34}$ Bergische Universit\"{a}t Wuppertal, Wuppertal, Germany \\ $^{35}$ Karlsruhe Institute of Technology - Campus North - Institut f\"{u}r Kernphysik, Karlsruhe, Germany \\ $^{36}$ Karlsruhe Institute of Technology - Campus North - Institut f\"{u}r Prozessdatenverarbeitung und Elektronik, 
Karlsruhe, Germany \\ $^{37}$ Karlsruhe Institute of Technology - Campus South - Institut f\"{u}r Experimentelle Kernphysik (IEKP), Karlsruhe, Germany \\ $^{38}$ Max-Planck-Institut f\"{u}r Radioastronomie, Bonn, Germany \\ $^{39}$ RWTH Aachen University, III. Physikalisches Institut A, Aachen, Germany \\ $^{40}$ Universit\"{a}t Hamburg, Hamburg, Germany \\ $^{41}$ Universit\"{a}t Siegen, Siegen, Germany \\ $^{42}$ Dipartimento di Fisica dell'Universit\`{a} and INFN, Genova, Italy \\ $^{43}$ Universit\`{a} dell'Aquila and INFN, L'Aquila, Italy \\ $^{44}$ Universit\`{a} di Milano and Sezione INFN, Milan, Italy \\ $^{45}$ Universit\`{a} di Napoli "Federico II" and Sezione INFN, Napoli, Italy \\ $^{46}$ Universit\`{a} di Roma II "Tor Vergata" and Sezione INFN, Roma, Italy \\ $^{47}$ Universit\`{a} di Catania and Sezione INFN, Catania, Italy \\ $^{48}$ Universit\`{a} di Torino and Sezione INFN, Torino, Italy \\ $^{49}$ Dipartimento di Matematica e Fisica "E. De Giorgi" dell'Universit\`{a} del Salento and Sezione INFN, Lecce, Italy \\ $^{50}$ Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo (INAF), Palermo, Italy \\ $^{51}$ Istituto di Fisica dello Spazio Interplanetario (INAF), Universit\`{a} di Torino and Sezione INFN, Torino, Italy \\ $^{52}$ INFN, Laboratori Nazionali del Gran Sasso, Assergi (L'Aquila), Italy \\ $^{53}$ Benem\'{e}rita Universidad Aut\'{o}noma de Puebla, Puebla, Mexico \\ $^{54}$ Centro de Investigaci\'{o}n y de Estudios Avanzados del IPN (CINVESTAV), M\'{e}xico, Mexico \\ $^{55}$ Universidad Michoacana de San Nicolas de Hidalgo, Morelia, Michoacan, Mexico \\ $^{56}$ Universidad Nacional Autonoma de Mexico, Mexico, D.F., Mexico \\ $^{57}$ IMAPP, Radboud University Nijmegen, Netherlands \\ $^{58}$ Kernfysisch Versneller Instituut, University of Groningen, Groningen, Netherlands \\ $^{59}$ Nikhef, Science Park, Amsterdam, Netherlands \\ $^{60}$ ASTRON, Dwingeloo, Netherlands \\ $^{61}$ Institute of Nuclear Physics PAN, Krakow, Poland \\ $^{62}$ University of \L \'{o}d\'{z}, \L \'{o}d\'{z}, Poland \\ $^{63}$ LIP and Instituto Superior T\'{e}cnico, Technical University of Lisbon, Portugal \\ $^{64}$ 'Horia Hulubei' National Institute for Physics and Nuclear Engineering, Bucharest- Magurele, Romania \\ $^{65}$ University of Bucharest, Physics Department, Romania \\ $^{66}$ University Politehnica of Bucharest, Romania \\ $^{67}$ J. 
Stefan Institute, Ljubljana, Slovenia \\ $^{68}$ Laboratory for Astroparticle Physics, University of Nova Gorica, Slovenia \\ $^{69}$ Instituto de F\'{\i}sica Corpuscular, CSIC-Universitat de Val\`{e}ncia, Valencia, Spain \\ $^{70}$ Universidad Complutense de Madrid, Madrid, Spain \\ $^{71}$ Universidad de Alcal\'{a}, Alcal\'{a} de Henares (Madrid), Spain \\ $^{72}$ Universidad de Granada \& C.A.F.P.E., Granada, Spain \\ $^{73}$ Universidad de Santiago de Compostela, Spain \\ $^{74}$ Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Oxford, United Kingdom \\ $^{75}$ School of Physics and Astronomy, University of Leeds, United Kingdom \\ $^{76}$ Argonne National Laboratory, Argonne, IL, USA \\ $^{77}$ Case Western Reserve University, Cleveland, OH, USA \\ $^{78}$ Colorado School of Mines, Golden, CO, USA \\ $^{79}$ Colorado State University, Fort Collins, CO, USA \\ $^{80}$ Colorado State University, Pueblo, CO, USA \\ $^{81}$ Fermilab, Batavia, IL, USA \\ $^{82}$ Los Alamos National Laboratory, Los Alamos, NM, USA \\ $^{83}$ Louisiana State University, Baton Rouge, LA, USA \\ $^{84}$ Michigan Technological University, Houghton, MI, USA \\ $^{85}$ New York University, New York, NY, USA \\ $^{86}$ Northeastern University, Boston, MA, USA \\ $^{87}$ Ohio State University, Columbus, OH, USA \\ $^{88}$ Pennsylvania State University, University Park, PA, USA \\ $^{90}$ University of Chicago, Enrico Fermi Institute, Chicago, IL, USA \\ $^{91}$ University of Hawaii, Honolulu, HI, USA \\ $^{92}$ University of Nebraska, Lincoln, NE, USA \\ $^{93}$ University of New Mexico, Albuquerque, NM, USA \\ $^{94}$ University of Wisconsin, Madison, WI, USA \\ $^{95}$ University of Wisconsin, Milwaukee, WI, USA \\ $^{96}$ Institute for Nuclear Science and Technology (INST), Hanoi, Vietnam \\ (\ddag) Deceased \\ (a) at Konan University, Kobe, Japan \\ (b) now at the Universidad Autonoma de Chiapas on leave of absence from Cinvestav \\ (f) now at University of Maryland \\ (h) now at NYU Abu Dhabi \\ (i) now at Universit\'{e} de Lausanne \\ } \begin{abstract} A thorough search for large scale anisotropies in the distribution of arrival directions of cosmic rays detected above $10^{18}$~eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above $10^{18}$~eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to challenge an origin of cosmic rays above $10^{18}$~eV from stationary galactic sources densely distributed in the galactic disk and emitting predominantly light particles in all directions. \end{abstract} \keywords{astroparticle physics; cosmic rays} \section{Introduction} Establishing at which energy the intensity of extragalactic cosmic rays starts to dominate the intensity of galactic ones would constitute an important step forward to provide further understanding on the origin of Ultra-High Energy Cosmic Rays (UHECRs). 
A time-honored picture is that the \textit{ankle}, a hardening of the energy spectrum located at $\simeq 4$~EeV~\citep{Linsley1963,Lawrence1991,Nagano1992,Bird1993,AugerPLB2010} (where 1~EeV~$\equiv 10^{18}~$eV), marks the transition between galactic and extragalactic UHECRs~\citep{Linsley1963}. As a natural signature of the escape of cosmic rays from the Galaxy, large scale anisotropies in the distribution of arrival directions could be detected at energies below this spectral feature. Both the amplitude and the shape of such patterns are uncertain, as they depend on the model adopted to describe the regular and turbulent components of the galactic magnetic field, the charges of the cosmic rays, and the assumed distribution of sources in space and time. For cosmic rays mostly heavy and originating from stationary sources located in the galactic disk, some estimates based on diffusion and drift motions~\citep{Ptuskin1993,Candia2003} as well as direct integration of trajectories~\citep{Ptuskin1998,Giacinti2011} show that dipolar anisotropies at the level of a few percent could be imprinted in the energy range just below the ankle energy. Even larger amplitudes could result in the case of light primaries, unless sources are strongly intermittent and pure diffusion motions hold up to EeV energies~\citep{Calvez2010,Pohl2011}. If UHECRs above 1~EeV have a predominant extragalactic origin~\citep{Hillas1967,Blumenthal1970,Berezinsky2006,Berezinsky2004}, their angular distribution is expected to be isotropic to a high degree. But, even for isotropic extragalactic cosmic rays, the translational motion of the Galaxy relative to a possibly stationary extragalactic cosmic ray rest frame can produce a dipole in a similar way to the \textit{Compton-Getting effect}~\citep{Compton}, which has been measured with cosmic rays of much lower energy at the solar time scale~\citep{Groom,Tibet,Milagro,EASTOP,IceCube} as a result of the Earth's motion relative to the frame in which the cosmic rays have no bulk motion. Moreover, the rotation of the Galaxy can also produce anisotropy by virtue of moving magnetic fields, as cosmic rays travelling through faraway regions of the Galaxy experience an electric force due to the relative motion of the system in which the field is purely magnetic~\citep{Harari2010}. The large scale structure of the galactic magnetic field is expected to transform even a simple Compton-Getting dipole into a more complex anisotropy at Earth, described by higher order multipoles~\citep{Harari2010}. A quantitative estimate of the imprinted pattern would require knowledge of the global structure of the galactic magnetic field and the charges of the particles, as well as the frame in which extragalactic cosmic rays have no bulk motion. If, for instance, the frame in which the UHECR distribution is isotropic coincides with the cosmic microwave background rest frame, the amplitude of the simple Compton-Getting dipole would be about 0.6\%~\citep{Kachelriess2006}. The same order of magnitude is expected if UHECRs have no bulk motion with respect to the local group of galaxies. The large scale distribution of arrival directions of UHECRs as a function of the energy is thus one important observable to provide key elements for understanding their origin in the EeV energy range.
Using the large amount of data collected by the Surface Detector (SD) array of the Pierre Auger Observatory, results of first harmonic analyses of the right ascension distribution performed in different energy ranges above $0.25~$EeV were recently reported~\citep{AugerAPP2011}. Upper limits on the dipole component in the equatorial plane were derived, being below 2\% at 99\% $C.L.$ for EeV energies and providing the most stringent bounds ever obtained. These analyses benefit from the almost uniform directional exposure in right ascension of the SD array of the Pierre Auger Observatory, which is due to the Earth's rotation, and they constitute a powerful tool for picking up any dipolar modulation in this coordinate. However, since this technique is not sensitive to a dipolar component along the Earth's rotation axis, we aim in the present report at estimating not only the dipole component in the right ascension distribution but also the component along the Earth's rotation axis. More generally, we present a comprehensive search in all directions for any dipole or quadrupole patterns significantly standing out above the background noise. Searching for anisotropies with relative amplitudes down to the percent level requires control of the exposure of the experiment to even greater accuracy. Spurious modulations in the right ascension distribution are induced by the variations of the effective size of the SD array with time and by the variations of the counting rate of events due to the changes of atmospheric conditions. In Ref.~\citep{AugerAPP2011}, we showed in a quantitative way that such effects can be properly accounted for by making use of the instantaneous status of the SD array provided each second by the monitoring system, and by converting the signals observed in actual atmospheric conditions into the ones that would have been measured at some given reference atmospheric conditions. Searching for anisotropies explicitly in declination requires the control of additional systematic errors affecting both the directional exposure of the Observatory and the counting rate of events in local angles. Each of these additional effects is carefully presented in sections~\ref{s1000} and~\ref{exposure}. After correcting for the experimental effects, searches for large scale patterns above 1~EeV are presented in section~\ref{analysis}. Additional cross-checks against possible systematic errors affecting the results obtained in section~\ref{analysis} are presented in section~\ref{systematics}. Resulting upper limits on dipole and quadrupole amplitudes are presented and discussed in section~\ref{discussion}, while a final summary is given in section~\ref{summary}. Some further technical aspects are detailed in the appendices. \section{The Pierre Auger Observatory and the data set} \label{pao} The Pierre Auger Observatory~\citep{AugerNIM2004}, located in Malarg\"{u}e, Argentina, at mean latitude 35.2$^\circ\,$S, mean longitude 69.5$^\circ\,$W and mean altitude 1400 meters above sea level, has been designed to collect UHECRs with unprecedented statistics. It exploits two available techniques to detect extensive air showers initiated by cosmic ray interactions in the atmosphere~: a \textit{surface detector array} and a \textit{fluorescence detector}. The SD array consists of 1660 water-Cherenkov detectors laid out over about 3000~km$^2$ on a triangular grid with 1.5~km spacing.
These water-Cherenkov detectors are sensitive to the light emitted in their volume by the secondary particles of the showers, and provide a lateral sampling of the showers reaching the ground level. At the perimeter of this array, the atmosphere is overlooked on dark nights by 27 optical telescopes grouped in 5 buildings. These telescopes record the number of secondary charged particles in the air shower as a function of depth in the atmosphere by measuring the amount of nitrogen fluorescence caused by those particles along the track of the shower. The analyses presented in this report make use of events recorded by the SD array from 1 January 2004 to 31 December 2011, with zenith angles less than 55$^\circ$. To ensure good angle and energy reconstructions, each event must satisfy a fiducial cut requiring that the \emph{elemental cell} of the event (that is, all six neighbours of the water-Cherenkov detector with the highest signal) was active when the event was recorded~\citep{AugerNIM2010}. Based on this fiducial cut, and accounting for unavoidable periods of array instability reducing slightly the duty cycle, the total geometric exposure corresponding to the data set considered in this report is 23,520~km$^2$~yr~sr. This geometric exposure applies to energies at which the SD array operates with full detection efficiency, that is, to energies above 3~EeV~\citep{AugerNIM2010}. The event direction is determined following the procedure described in Ref.~\citep{ang-res}. At the lowest energies observed, the angular resolution of the SD is about $2.2^\circ$, and reaches $\sim 1^\circ$ at the highest energies~\citep{ang-res2}. This is sufficient to perform searches for large-scale anisotropies. The energy estimation of each event is primarily based on the measurement of the signal at a reference distance of $1000\,$m, $S(1000)$, referred to as the \textit{shower size}. For a given energy, the shower size is a function of the zenith angle due to the rapid increase of the slant depth, which induces an attenuation of the electromagnetic component of the showers. To account for this attenuation, the relationship between the observed $S(1000)$ and the one that would have been measured had the shower arrived at a zenith angle of 38$^\circ$ is derived in an empirical way, using the constant intensity cut method~\citep{Hersil1961}. To convert $S_{38^\circ}$ into energy, a calibration curve is used, based on events measured simultaneously by the SD array and the fluorescence telescopes~\citep{AugerPRL2008}, since these telescopes provide a calorimetric measurement of the energy. The statistical uncertainty of this energy estimation amounts to about 15\%, while the absolute energy scale has a systematic uncertainty of 22\%~\citep{AugerPRL2008}. \section{Control of the event counting rate} \label{s1000} The control of the event counting rate is critical in searches for large scale anisotropies. Due to the steepness of the energy spectrum, any mild bias in the estimate of the shower energy with time or incident angles can lead to significant distortions of the event counting rate. The procedure followed to obtain an unbiased estimate of the shower energy is described in this section. This procedure consists of correcting measurements of shower sizes, $S(1000)$, for the influences of weather effects and the geomagnetic field \emph{before} the conversion to $S_{38^\circ}$ using the constant intensity method. Then, the conversion to energy is applied.
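For concreteness, the chain of operations just described can be sketched in a few lines of Python. This is only an illustration under simplifying assumptions, not the reconstruction code of the Observatory: the weather coefficients are taken from the first $\sec\theta$ bin of Table~\ref{tab:weather} instead of being selected per bin, the two multiplicative corrections are composed sequentially (the difference with respect to applying both factors to the raw $S(1000)$ is of second order), and the parameter values are those quoted in the following subsections.
\begin{verbatim}
import numpy as np

ALPHA_P, ALPHA_RHO, BETA_RHO = -4.4e-4, -9.7e-1, -2.6e-1  # sec(theta) in [1.0-1.2]
RHO0, P0 = 1.06, 862.0        # reference density [kg m^-3] and pressure [hPa]
G1, G2 = 4.2e-3, 2.8          # geomagnetic coefficients of section 3.2
A_CIC, B_CIC = 0.94, -0.95    # attenuation curve fitted at S38 = 22 VEM
A_CAL, B_CAL = 0.168, 1.030   # calibration E = A * S38**B, in EeV

def s_weather(s1000, P, rho, rho_daily):
    """Refer S(1000) to the reference atmospheric conditions."""
    return s1000 * (1.0 - ALPHA_P * (P - P0)
                    - ALPHA_RHO * (rho_daily - RHO0)
                    - BETA_RHO * (rho - rho_daily))

def s_geom(s1000, theta, sin2_ub):
    """Remove the geomagnetic distortion; sin2_ub = sin^2(angle(u, b))."""
    return s1000 * (1.0 - G1 * np.cos(theta) ** (-G2) * sin2_ub)

def energy(s1000, theta, P, rho, rho_daily, sin2_ub):
    """Corrected shower size -> S38 (constant intensity cut) -> energy [EeV]."""
    s = s_geom(s_weather(s1000, P, rho, rho_daily), theta, sin2_ub)
    x = np.cos(theta) ** 2 - np.cos(np.radians(38.0)) ** 2
    s38 = s / (1.0 + A_CIC * x + B_CIC * x ** 2)
    return A_CAL * s38 ** B_CAL
\end{verbatim}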
\subsection{Influence of atmospheric conditions on shower size} \label{s1000-w} \begin{table*}[h] \begin{center} \begin{tabular}{c|c|c|c} $\sec{\theta}$ & $\alpha_\rho$ [kg$^{-1}$m$^3$] & $\beta_\rho$ [kg$^{-1}$m$^3$] & $\alpha_P$ [hPa$^{-1}$] \\ \hline \hline $[1.0 - 1.2]$ & $-9.7\times10^{-1}$ & $-2.6\times10^{-1}$ & $-4.4\times10^{-4}$ \\ $[1.2 - 1.4]$ & $-7.2\times10^{-1}$ & $-2.2\times10^{-1}$ & $-1.6\times10^{-3}$ \\ $[1.4 - 1.6]$ & $-5.4\times10^{-1}$ & $-2.0\times10^{-1}$ & $-2.3\times10^{-3}$ \\ $[1.6 - 1.8]$ & $-4.0\times10^{-1}$ & $-4.3\times10^{-2}$ & $-1.9\times10^{-3}$ \\ $[1.8 - 2.0]$ & $-1.5\times10^{-1}$ & $-2.3\times10^{-2}$ & $-2.8\times10^{-3}$ \end{tabular} \caption{\small{Coefficients $\alpha_\rho$, $\beta_\rho$ and $\alpha_P$ used to correct shower sizes for atmospheric effects on shower development, in bins of $\sec{\theta}$. From Ref.~\citep{AugerAPP2009}.}} \label{tab:weather} \end{center} \end{table*}% The energy estimator of the showers recorded by the SD array is provided by the signal at 1000~m from the shower core, $S(1000)$. For any fixed energy, since the development of extensive air showers depends on the atmospheric pressure $P$ and air density $\rho$, the corresponding $S(1000)$ is sensitive to variations in pressure and air density. Systematic variations with time of $S(1000)$ induce variations of the event rate that may distort the real dependence of the cosmic ray intensity on right ascension. To cope with this experimental effect, the observed shower size $S(1000)$, measured at the actual density $\rho$ and pressure $P$, is related to the one $S_{atm}(1000)$ that would have been measured at reference values $\rho_0$ and $P_0$~\citep{AugerAPP2009}~: \begin{equation} S_{atm}(1000)=\left[1-\alpha_P(\theta)(P-P_0)-\alpha_\rho(\theta)(\rho_d-\rho_0)-\beta_\rho(\theta)(\rho-\rho_d)\right]S(1000). \label{sweather} \end{equation} The reference values are chosen as the average values at Malarg\"ue (\textit{i.e.} $\rho_0=1.06$ kg~m$^{-3}$ and $P_0=862$~hPa). $\rho_d$ denotes here the average daily density at the time the event was recorded. The measured coefficients $\alpha_\rho$ and $\beta_\rho$, given in Table~\ref{tab:weather}, quantify the influence of the air density (and thus of the temperature) at long and short time scales on the Moli\`ere radius, and hence on the lateral profiles of the showers; the coefficient $\alpha_P$ quantifies the influence of the pressure on the longitudinal development of the showers. Applying these corrections to the energy assignments of showers allows us to cancel spurious variations of the event rate in right ascension, whose typical amplitudes amount to a few per thousand when considering data sets collected over full years. \subsection{Influence of the geomagnetic field on shower size} \label{s1000-g} The trajectories of charged particles in extensive air showers are curved in the Earth's magnetic field, resulting in a broadening of the spatial distribution of particles in the direction of the Lorentz force. As the strength of the geomagnetic field component perpendicular to any arrival direction depends on both the zenith and azimuthal angles, the small changes of the density of particles at ground induced by the field break the circular symmetry of the lateral spread of the particles and thus induce a dependence of the shower size $S(1000)$, at fixed energy, on the azimuthal angle. Due to the steepness of the energy spectrum, such an azimuthal dependence translates into azimuthal modulations of the estimated cosmic ray event rate at a given $S(1000)$.
To eliminate these effects, the observed shower size $S(1000)$ is related to the one that would have been observed in the absence of geomagnetic field $S_{geom}(1000)$~\citep{AugerJCAP2011}~: \begin{eqnarray} S_{geom}(1000)=\left[1-g_1\cos^{-g_2}{(\theta)}\sin^2{(\widehat{\textbf{u},\textbf{b}})}\right]S(1000), \label{sgeom} \end{eqnarray} where $g_1=(4.2\pm1.0)\times10^{-3}$, $g_2=2.8\pm0.3$, and $\textbf{u}$ and $\textbf{b}=\textbf{B}/\Vert \textbf{B} \Vert$ denote the unit vectors in the shower direction and the geomagnetic field direction, respectively. At a zenith angle $\theta=55^\circ$, the amplitude of the asymmetry in azimuth already amounts to $\simeq2\%$, which is why we restrict the present analysis to zenith angles smaller than this value. Carrying out these corrections is thus critical for performing large scale anisotropy measurements in declination. \subsection{From shower size to energy} \label{s1000-cic} Once the influence on $S(1000)$ of weather and geomagnetic effects is accounted for, the dependence of $S(1000)$ on zenith angle due to the attenuation of the shower and geometrical effects is extracted from the data using the constant intensity cut method~\citep{AugerPRL2008}. The attenuation curve $CIC(\theta)$ is fitted with a second-order polynomial in $x=\cos^2{(\theta)}-\cos^2{(38^\circ)}$~: $CIC(\theta)=1+ax+bx^2$. The angle $38^\circ$ is chosen as a reference to convert $S(1000)$ to $S_{38^\circ}=S(1000)/CIC(\theta)$. $S_{38^\circ}$ may be regarded as the signal that would have been expected had the shower arrived at $38^\circ$. The values of the parameters $a=0.94\pm 0.03$ and $b=-0.95\pm 0.05$ are deduced for $S_{38^\circ}=22~$VEM\footnote{A vertical equivalent muon, or VEM, is the expected signal in a surface detector crossed by a muon traveling vertically and centrally to it.}, which corresponds to an energy of about 4~EeV - just above the threshold energy for full efficiency. The differences between these parameters and those of previous reports will be discussed in section~\ref{systematics}. Finally, the sub-sample of events recorded by both the fluorescence telescopes and the SD array is used to establish the relationship between the energy reconstructed with the fluorescence telescopes $E_{FD}$ and $S_{38^\circ}$~: $E_{FD}=AS_{38^\circ}^B$. The parameters resulting from the data fit are $A=(1.68\pm0.05)\times10^{-1}~$EeV and $B=1.030\pm0.009$, in good agreement with the recent report given in Ref.~\citep{AugerPesce}. The energy scale inferred from this data sample is applied to all showers detected by the SD array. \section{Directional exposure of the Surface Detector array above 1~EeV} \label{exposure} The \textit{directional exposure} $\omega$ of the Observatory provides the effective time-integrated collecting area for a flux from each direction of the sky~\footnote{In other contexts such as the determination of the energy spectrum for instance, the term ``exposure'' refers to the \emph{total} exposure integrated over the celestial sphere, in units of km$^2$~yr~sr.}, in units of km$^2$~yr. For energies below 3~EeV, it is controlled by the detection efficiency $\epsilon$ for triggering. This efficiency depends on the energy $E$, the zenith angle $\theta$, and the azimuth angle $\varphi$. Consequently, the directional exposure of the Observatory is maximal above 3~EeV, and it is smaller at lower energies where the detection efficiency is less than unity.
In this section we show in a comprehensive way how the directional exposure of the SD array is obtained as a function of the energy. We first explain how the slightly non-uniform exposure of the sky in sidereal time can be accounted for in the search for anisotropies (section~\ref{sub:exposure}). In section~\ref{sub:efficiency} we empirically calculate the detection efficiency as a function of the zenith angle and deduce the exposure below the full efficiency energy (3~EeV). In section~\ref{sub:geomagnetic} we discuss the azimuthal dependence of the efficiency due to geomagnetic effects; we then introduce the corrections due to the tilt of the array in section~\ref{sub:tilt} and those due to the spatial extension of the array in section~\ref{sub:extension}, and we show in section~\ref{sub:other} that the influence of weather effects on the detection efficiency between 1 and 3~EeV is negligible. Finally, we give in section~\ref{sub:examples} some examples of our fully corrected exposure at several energies. \subsection{From local to celestial directional exposure} \label{sub:exposure} The choice of the fiducial cut to select high quality events allows the precise determination of the geometric directional aperture per cell as $a_{\mathrm{cell}}(\theta)=1.95~\cos{\theta}~$km$^2$~\citep{AugerNIM2010}. It also allows us to exploit the regularity of the array for obtaining its geometric directional aperture as a simple multiple of $a_{\mathrm{cell}}(\theta)$~\citep{AugerNIM2010}. The number of elemental cells $n_{\mathrm{cell}}(t)$ is accurately monitored every second at the Observatory. To search for celestial large scale anisotropies, it is mandatory to account for the modulation imprinted by the variations of $n_{\mathrm{cell}}(t)$ in the expected number of events at the \emph{sidereal periodicity} $T_{sid}$. Within each sidereal day, and in the same way as in Ref.~\citep{AugerAPP2011}, we denote by $\alpha^0$ the local sidereal time and express it in hours or in radians, as appropriate. For practical reasons, $\alpha^0$ is chosen so that it is always equal to the right ascension of the zenith at the centre of the array. As a function of $\alpha^0$, the total number of elemental cells $N_{\mathrm{cell}}(\alpha^0)$ and its associated relative variations $\Delta N_{\mathrm{cell}}(\alpha^0)$ are then obtained from~: \begin{equation} \label{Ncell} N_{\mathrm{cell}}(\alpha^0)=\sum_{j}n_{\mathrm{cell}}(\alpha^0+jT_{sid}), \hspace{1cm} \Delta N_{\mathrm{cell}}(\alpha^0)=\frac{N_{\mathrm{cell}}(\alpha^0)}{\left<N_{\mathrm{cell}}\right>_{\alpha^0}}, \end{equation} with $\left<N_{\mathrm{cell}}\right>_{\alpha^0}=1/T_{sid}\int_0^{T_{sid}}\mathrm{d}\alpha^0N_{\mathrm{cell}}(\alpha^0)$. In the same way as in Ref.~\citep{AugerAPP2011}, the small modulation of the expected number of events in right ascension induced by those variations will be accounted for by weighting each event $k$ with a factor inversely proportional to $\Delta N_{\mathrm{cell}}(\alpha^0_k)$ when estimating the anisotropy parameters in section~\ref{analysis}. Placing such time dependences in the event weights allows us to remove the modulations in time imprinted by the growth of the array and the dead times for each detector. At any time, the \textit{effective} directional aperture of the SD array is controlled by the geometric one \textit{and} by the detection efficiency function $\epsilon(\theta,\varphi,E)$.
For each elemental cell, the directional exposure in celestial coordinates is then simply obtained through the integration over local sidereal time of $x^{(i)}(\alpha^0)\times a_{\mathrm{cell}}{(\theta)}\times\epsilon(\theta,\varphi,E)$, where $x^{(i)}(\alpha^0)$ is the operational time of the cell $(i)$. Actually, since the small modulations in time imprinted in the event counting rate by experimental effects will be accounted for by means of the weighting procedure just described when searching for anisotropies, the small variations in local sidereal time for each $x^{(i)}(\alpha^0)$ can be neglected in calculating $\omega$. The zenith and azimuth angles are related to the declination and the right ascension through~: \begin{eqnarray} \label{eqn:theta-phi} \cos{\theta}&=&\sin{\delta}\sin{\ell_\mathrm{site}}+\cos{\delta}\cos{\ell_\mathrm{site}}\cos{(\alpha-\alpha^0)},\nonumber\\ \tan{\varphi}&=&\frac{\cos{\delta}\sin{\ell_\mathrm{site}}\cos{(\alpha-\alpha^0)}-\sin{\delta}\cos{\ell_\mathrm{site}}}{\cos{\delta}\sin{(\alpha-\alpha^0)}}, \end{eqnarray} with $\ell_{\mathrm{site}}$ the mean latitude of the Observatory. Since both $\theta$ and $\varphi$ depend only on the difference $\alpha-\alpha^0$, the integration over $\alpha^0$ can then be replaced by an integration over the hour angle $\alpha^\prime=\alpha-\alpha^0$, so that the directional exposure actually does not depend on right ascension when the $x^{(i)}$ are assumed to be independent of local sidereal time~: \begin{equation} \label{eqn:omega} \omega(\delta,E) = \sum_{i=1}^{n_{\mathrm{cell}}}x^{(i)}\int_0^{24h}~\mathrm{d}\alpha^\prime\,a_{\mathrm{cell}}{(\theta(\alpha^\prime,\delta))}~\epsilon(\theta(\alpha^\prime,\delta),\varphi(\alpha^\prime,\delta),E). \end{equation} Above 3~EeV, this integration can be performed analytically~\citep{Sommers2001}. Below 3~EeV, the non-saturation of the detection efficiency makes the directional exposure lower. The next sections are dedicated to the determination of $\epsilon(\theta,\varphi,E)$. \subsection{Detection efficiency} \label{sub:efficiency} To determine the detection efficiency function, a natural method would be to generate showers by means of Monte-Carlo simulations and to calculate the ratio of the number of triggered events to the total number simulated. However, there are discrepancies between the number of muons predicted by hadronic interaction models in shower simulations and the number found in our data~\citep{Engel2007}. This prevents us from relying on this method for obtaining the detection efficiency to the required accuracy. We adopt here instead an empirical approach, based on the quasi-invariance of the zenithal distribution to large scale anisotropies for zenith angles less than $\simeq 60^\circ$ and for any Observatory whose latitude is far from the poles of the Earth. For full efficiency, the distribution in zenith angles $\mathrm{d}N/\mathrm{d}\theta$ is proportional to $\sin{\theta}\cos{\theta}$ for solid angle and geometry reasons, so that the distribution $\mathrm{d}N/\mathrm{d}\sin^2{\theta}$ is uniform. Consequently, below full efficiency, \emph{any significant deviation from a uniform behaviour in the $\mathrm{d}N/\mathrm{d}\sin^2{\theta}$ distribution provides an empirical measurement of the zenithal dependence of the detection efficiency}. The quasi-invariance of $\mathrm{d}N/\mathrm{d}\sin^2{\theta}$ to large scale anisotropies is demonstrated in Appendix A.
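As a concrete sketch of this empirical approach (an illustration, not the production analysis; the helper name and binning are ours, and the spectrum weighting anticipates the footnote in the next paragraph), the zenithal dependence of the efficiency near an energy $E$ can be estimated as follows:
\begin{verbatim}
import numpy as np

GAMMA = 3.27                 # spectral index measured between 1 and 4 EeV

def efficiency_estimate(E_events, theta_events, E, dE, n_expected, nbins=25):
    """Efficiency vs sin^2(theta) from the events in a narrow bin around E.

    n_expected is the total number of events expected in the energy bin
    under full efficiency, for the measured energy spectrum.
    """
    sel = np.abs(E_events - E) < 0.5 * dE       # narrow energy bin
    s2 = np.sin(theta_events[sel]) ** 2
    w = (E_events[sel] / E) ** GAMMA            # flatten the steep spectrum
    s2max = np.sin(np.radians(55.0)) ** 2       # zenith-angle cut
    hist, edges = np.histogram(s2, bins=nbins, range=(0.0, s2max), weights=w)
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, hist / (n_expected / nbins)  # efficiency per bin
\end{verbatim}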
\begin{figure}[!t] \centering \includegraphics[width=10cm]{DetEff.1-1.25-2-4-nocut.eps} \caption{\small{Detection efficiency averaged over the azimuth as a function of $\sin^2{\theta}$ at different energies, empirically measured from the data.}} \label{fig:deteff-th} \end{figure} Based on this quasi-invariance, the detection efficiency averaged over the azimuth can be estimated from~: \begin{equation} \label{eqn:eps1} \left<\epsilon(\theta,\varphi,E)\right>_{\varphi} = \frac{1}{\mathcal{N}}\frac{\mathrm{d}N(\sin^2{\theta},E)}{\mathrm{d}\sin^2{\theta}}, \end{equation} where the notation $\left<\cdot\right>_{\varphi}$ stands for the average over $\varphi$ and the constant $\mathcal{N}$ is the number of events that would have been observed at energy $E$ and for any $\sin^2{\theta}$ value in case of full efficiency for an energy spectrum $\mathrm{d}N/\mathrm{d}E=40~(E/\mathrm{EeV})^{-3.27}~$km$^{-2}$yr$^{-1}$sr$^{-1}$EeV$^{-1}$ - as measured between 1 and 4~EeV~\citep{AugerPLB2010}. Consequently, for each zenith angle, this empirical measurement of the efficiency provides an estimate \textit{relative} to the overall spectrum of cosmic rays. In particular, since it is applied to \textit{all} events detected at energy $E$ without distinction based on the primary mass of cosmic rays, this technique does not provide the mass dependence of the detection efficiency. For that reason, the anisotropy searches reported in section~\ref{analysis} pertain to the whole population of cosmic rays, whether this population consists of a single primary mass or a mixture of several elements. Results are shown in Fig.~\ref{fig:deteff-th} for four different energies\footnote{To get the detection efficiency at a single energy $E$, events are actually selected in narrow energy bins around $E$. In addition, to account for the energy spectrum in $E^{-3.27}$ in this energy range, each event is weighted by a factor $E^{3.27}$.}. At 4~EeV, a uniform behaviour around 1 is observed, though quite noisy due to the limited statistics. This uniform behaviour is consistent with full efficiency at this energy, as expected. Note that some values are greater than 1 for energies close to or higher than 3~EeV, because of the empirical way of measuring the efficiency relative to the overall spectrum of cosmic rays. At 2~EeV, a loss of efficiency is observed for vertical showers due to the attenuation of the electromagnetic component of the showers. Up to $\simeq 40^\circ$, the detection efficiency steadily increases because the projected area of showers at the ground gets larger with zenith angle. Above $\simeq 40^\circ$, the rapid increase of the slant depth then makes the attenuation of the electromagnetic component stronger, but the muonic component of showers becomes dominant and ensures a high detection efficiency. At lower energies, the number of muons is, in contrast, too low to significantly affect the detection efficiency above $\simeq 40^\circ-45^\circ$, so that a clear decrease is observed at high zenith angles. In the following, we use parameterisations obtained by fitting each distribution with a fourth-order polynomial function in $\sin^2{\theta}$, which is sufficient to reproduce the main features, as illustrated in Fig.~\ref{fig:deteff-th}.
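The parameterisation step is equally simple to sketch (again an illustration; the fit degree follows the text, while the floor at zero is our own safeguard):
\begin{verbatim}
import numpy as np

def fit_efficiency(centres, eff):
    """Fourth-order polynomial in sin^2(theta); returns a callable in theta.

    No upper clipping is applied: values slightly above 1 can occur, as in
    the figure above, because the measurement is relative to the overall
    spectrum of cosmic rays.
    """
    coeffs = np.polyfit(centres, eff, deg=4)
    return lambda theta: np.maximum(np.polyval(coeffs, np.sin(theta) ** 2),
                                    0.0)
\end{verbatim}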
\subsection{Geomagnetic effects below full efficiency} \label{sub:geomagnetic} \begin{figure}[!t] \centering \includegraphics[width=7.5cm]{ModulationGMFvsPhi_55deg.eps} \includegraphics[width=7.5cm]{AmplitudeGMFvsTheta_55deg.eps} \caption{\small{Left~: Dependence of the detection efficiency on azimuth for $\theta=55^\circ$ and $E=1~$EeV, due to geomagnetic effects. Right~: Maximal contrast of the azimuthal modulation of the detection efficiency induced by geomagnetic effects as a function of the zenith angle.}} \label{fig:amp-gmf-vs-th} \end{figure} In addition to the effects on the energy determination presented in section~\ref{s1000-g}, geomagnetic effects also affect the detection efficiency for showers with energies below 3~EeV. This is because, for incident angles $(\theta,\varphi)$, a shower with an energy $E$ triggers the SD array with a probability associated with its size, which, because of the geomagnetic effects, corresponds to the effective energy $E\times(1+\Delta(\theta,\varphi))^B$~\footnote{Here, the shorthand notation $\Delta(\theta,\varphi)$ stands for $g_1\cos^{-g_2}{(\theta)}\left[\sin^2{(\widehat{\textbf{u},\textbf{b}})}-\left<\sin^2{(\widehat{\textbf{u},\textbf{b}})}\right>_{\varphi}\right]$. The energy $E\times(1+\Delta(\theta,\varphi))^B$ is actually the one that would have been obtained without correcting for geomagnetic effects.}. Above 1~EeV, this effect is in fact the main source of azimuthal dependence of the detection efficiency, so that to first order in $\Delta(\theta,\varphi)$, $\epsilon(\theta,\varphi,E)$ can be estimated as~: \begin{eqnarray} \label{eqn:eps2} \epsilon(\theta,\varphi,E) &=& \frac{1}{\mathcal{N}}\frac{\mathrm{d}N(\sin^2{\theta},E(1+\Delta(\theta,\varphi))^B)}{\mathrm{d}\sin^2{\theta}} \nonumber \\ &\simeq&\left<\epsilon(\theta,\varphi,E)\right>_{\varphi}+\frac{BE\Delta(\theta,\varphi)}{\mathcal{N}}\frac{\partial}{\partial E}\frac{\mathrm{d}N(\sin^2{\theta},E)}{\mathrm{d}\sin^2{\theta}}. \end{eqnarray} The correction to the detection efficiency induced by geomagnetic effects, and in particular the azimuthal dependence, is thus straightforward to implement from the knowledge of $\left<\epsilon(\theta,\varphi,E)\right>_{\varphi}$. An example of such an azimuthal dependence is shown in the left panel of Fig.~\ref{fig:amp-gmf-vs-th}, for $E=1~$EeV and $\theta=55^\circ$. The modulation reflects the one due to the energy determination~: the detection efficiency is lowered in the directions where the uncorrected energies are under-estimated due to geomagnetic effects, and the efficiency is higher where energies are over-estimated. The maximal contrast of such azimuthal modulations is displayed in the right panel as a function of the zenith angle, for three different energies. At 2~EeV, the amplitude slightly increases up to $\simeq 35^\circ$, staying below $\simeq 0.1\%$, and then decreases and even vanishes owing to the saturation of the detection efficiency. In contrast, when going down in energy, the relative amplitude increases strongly with the zenith angle due to the increase of the derivative term, reaching $\simeq 1.7\%$ for $\theta=55^\circ$ and $E=1~$EeV. \subsection{Tilt of the array} \label{sub:tilt} \begin{figure}[!t] \centering \includegraphics[height=8cm,width=8cm]{array_colorz.eps} ~ \raisebox{0.8cm}{ \includegraphics[height=7.2cm]{array_colorz_legend.eps} } \caption{\small{Colour-coded altitude (a.s.l.)
of the water-Cherenkov detectors.}} \label{fig:sdarray} \end{figure} The altitudes above sea level of the water-Cherenkov detectors are displayed in Fig.~\ref{fig:sdarray} in colour coding. The coordinates are in a Cartesian system whose origin is defined at the ``centre'' of the Observatory site. The Andes ridge rising in the western and north-western directions can be seen. A slightly tilted SD array gives rise to a small azimuthal asymmetry, and consequently slightly modifies the directional exposure with respect to Eqn.~\ref{eqn:omega} through small changes of the geometric directional aperture. This modification is twofold~: the tilt changes the geometric factor ($\cos{\theta}$) of the projected surface under incidence angles $(\theta,\varphi)$; it also induces a compensating effect below full efficiency by slightly varying the detection efficiency with the azimuth angle $\varphi$. Denoting by $\mathbf{n_\perp^{(i)}}$ the normal vector to each elemental cell, the geometric directional aperture per cell is no longer simply given by $\cos{\theta}$ but now depends on both $\theta$ and $\varphi$~: \begin{equation} \label{eqn:tilt1} a_{\mathrm{cell}}^{(i)}(\theta,\varphi)=1.95~ \mathbf{n}\cdot\mathbf{n_\perp^{(i)}} \simeq 1.95~[1+\zeta^{(i)}\tan{\theta}\cos{(\varphi-\varphi_0^{(i)})}]~\cos{\theta}, \end{equation} where $\mathbf{n}$ denotes the unit vector of the arrival direction, and $\zeta^{(i)}$ and $\varphi_0^{(i)}$ are the zenith and azimuth angles of $\mathbf{n_\perp^{(i)}}$. It is actually this latter expression $a_{\mathrm{cell}}$ which has to be inserted into Eqn.~\ref{eqn:omega} to calculate the directional exposure. Overall, the average tilt of the SD array is $\zeta^{\mathrm{eff}}\simeq 0.2^\circ$, and induces a dipolar asymmetry in azimuth with a maximum in the downhill direction $\varphi_0^{\mathrm{eff}}\simeq0^\circ$ and with an amplitude increasing with the zenith angle as $\simeq0.3\%\tan{\theta}$. Below 3~EeV, the tilt of the array induces an additional variation of the detection efficiency with azimuth. This is because the effective separation between detectors for a given zenith angle now depends on the azimuth. Since, for a given zenith angle, the SD array seen by showers coming from the uphill direction is denser than that seen by showers coming from the downhill direction, the detection efficiency is higher in the uphill direction. Parameterising the energy dependence of $\epsilon$ as $E^3/(E^3+E_{0.5}^3)$, we show in Appendix~B that the change in the detection efficiency can be estimated as~: \begin{equation} \label{eqn:tilt2} \Delta\epsilon_{\mathrm{tilt}}(\theta,\varphi,E) = \frac{E^3(E_{0.5}^3-{E_{0.5}^{\mathrm{tilt}}}^3(\theta,\varphi))}{(E^3+E_{0.5}^3)(E^3+{E_{0.5}^{\mathrm{tilt}}}^3(\theta,\varphi))}, \end{equation} where $E_{0.5}^{\mathrm{tilt}}(\theta,\varphi)$ is related to $E_{0.5}$ through~: \begin{equation} \label{eqn:E_0.5} E_{0.5}^{\mathrm{tilt}}(\theta,\varphi)\simeq E_{0.5}\times[1+\zeta^{\mathrm{eff}}\tan{\theta}\cos{(\varphi-\varphi_0^{\mathrm{eff}})}]^{3/2}. \end{equation} Around 1~EeV, this correction tends to compensate for the purely geometric effect described above, and even overcompensates it at lower energies. \subsection{Spatial extension of the array} \label{sub:extension} The spatial extension of the SD array is such that the range of latitudes covered by all cells reaches $\simeq 0.5^\circ$. This induces a slightly different directional exposure between the cells located at the northern part of the array and the ones located at the southern part.
This spatial extension can be accounted for in the calculation of the overall directional exposure by using the cell latitudes $\ell_{\mathrm{cell}}^{(i)}$ instead of the mean site latitude in the transformations from local to celestial angles in Eqn.~\ref{eqn:theta-phi}. \subsection{Weather effects below full efficiency} \label{sub:other} In the same way as geomagnetic effects, weather effects can also affect the detection efficiency for showers with energies below 3~EeV. However, \emph{above} 1~EeV, we have shown in~\citep{AugerAPP2011} that as long as the analysis covers an integer number of years with almost equal exposure in every season, the amplitude of the spurious modulation in right ascension induced by this effect is small enough to be neglected when performing anisotropy analyses at the present level of sensitivity. \subsection{Final estimation of the directional exposure - Examples at some energies} \label{sub:examples} \begin{figure}[!t] \centering \includegraphics[width=10cm]{DirectionalAcceptance.eps} \caption{\small{Directional exposure $\omega(\delta,E)$ as a function of the declination $\delta$, for three different energies.}} \label{fig:expo} \end{figure} Accounting for all effects, the final expression to calculate the directional exposure is slightly modified with respect to Eqn.~\ref{eqn:omega}~: \begin{equation} \label{eqn:omega2} \omega(\delta,E) = \sum_{i=1}^{n_{\mathrm{cell}}}x^{(i)}\int_0^{24h}~\mathrm{d}\alpha^\prime\,a_{\mathrm{cell}}^{(i)}{(\theta,\varphi)}~\left[\epsilon(\theta,\varphi,E)+\Delta\epsilon_{\mathrm{tilt}}(\theta,\varphi,E)\right], \end{equation} where both $\theta$ and $\varphi$ depend on $\alpha^\prime$, $\delta$ and $\ell_{\mathrm{cell}}^{(i)}$. The resulting dependence on declination is displayed in Fig.~\ref{fig:expo} for three different energies. Down to 1~EeV, the detection efficiency at high zenith angles is high enough that the equatorial south pole is visible at any time and hence constitutes the direction of maximum exposure. For a wide range of declinations between $\simeq -89^\circ$ and $\simeq -20^\circ$, the directional exposure is $\simeq 2,500~$km$^2$~yr at 1~EeV, and $\simeq 3,500~$km$^2$~yr for any energy above full efficiency. Then, at higher declinations, it smoothly falls to zero, with no exposure above $\simeq 20^\circ$ declination. The average expected number of events within any solid angle and any energy range can be recovered by integrating the directional exposure over the solid angle considered and the cosmic ray energy spectrum in the corresponding energy range. Note that the rapid variation of the exposure close to the South pole on an angular scale of the order of the angular resolution has no influence on the event counting rate, due to the quasi-zero solid angle in that particular direction. Consequently, though the exposure around the South pole could be affected by small changes of the detection efficiency around $\theta=55^\circ$, the results presented in the next sections are on the other hand \textit{not} affected by the exact value of the exposure for declinations a few degrees away from the South pole.
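Numerically, Eqn.~\ref{eqn:omega2} amounts to a one-dimensional integration over the hour angle for each cell. The following sketch puts the pieces of this section together, under simplifying assumptions: the callable \texttt{eps} stands for the efficiency parameterisation of sections~\ref{sub:efficiency} and~\ref{sub:geomagnetic}, each cell is reduced to an operational time, a latitude and a tilt, and the hour-angle integral is done with a simple midpoint rule.
\begin{verbatim}
import numpy as np

def local_angles(alpha_p, delta, lat):
    """Zenith/azimuth from hour angle, declination and cell latitude [rad]."""
    ct = np.sin(delta) * np.sin(lat) \
         + np.cos(delta) * np.cos(lat) * np.cos(alpha_p)
    theta = np.arccos(np.clip(ct, -1.0, 1.0))
    phi = np.arctan2(np.cos(delta) * np.sin(lat) * np.cos(alpha_p)
                     - np.sin(delta) * np.cos(lat),
                     np.cos(delta) * np.sin(alpha_p))
    return theta, phi

def a_cell_tilt(theta, phi, zeta, phi0):
    """Geometric aperture of one tilted cell [km^2]."""
    return 1.95 * (1.0 + zeta * np.tan(theta) * np.cos(phi - phi0)) \
                * np.cos(theta)

def delta_eps_tilt(theta, phi, E, E05, zeta, phi0):
    """Change of the detection efficiency induced by the tilt."""
    E05t = E05 * (1.0 + zeta * np.tan(theta) * np.cos(phi - phi0)) ** 1.5
    return E**3 * (E05**3 - E05t**3) / ((E**3 + E05**3) * (E**3 + E05t**3))

def omega(delta, E, cells, eps, E05, n_steps=1440):
    """cells: iterable of (x_op [yr], lat, zeta, phi0) per elemental cell."""
    alpha_p = (np.arange(n_steps) + 0.5) * 2.0 * np.pi / n_steps
    total = 0.0
    for x_op, lat, zeta, phi0 in cells:
        theta, phi = local_angles(alpha_p, delta, lat)
        inside = theta < np.radians(55.0)          # zenith-angle cut
        integrand = np.where(inside,
                             a_cell_tilt(theta, phi, zeta, phi0)
                             * (eps(theta, phi, E)
                                + delta_eps_tilt(theta, phi, E,
                                                 E05, zeta, phi0)),
                             0.0)
        total += x_op * integrand.mean() * 2.0 * np.pi
    return total          # km^2 yr, with the hour angle in radian measure
\end{verbatim}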
\section{Searches for large scale patterns} \label{analysis} \subsection{Estimates of spherical harmonic coefficients} Any angular distribution over the sphere $\Phi(\mathbf{n})$ can be decomposed in terms of a multipolar expansion~: \begin{equation} \label{eqn:ylm} \Phi(\mathbf{n})=\sum_{\ell\geq0}\sum_{m=-\ell}^{\ell}~a_{\ell m}Y_{\ell m}(\mathbf{n}), \end{equation} where $\mathbf{n}$ denotes a unit vector taken in equatorial coordinates. The customary recipe to extract each multipolar coefficient makes use of the orthonormality of the spherical harmonics~: \begin{equation} \label{eqn:alm} a_{\ell m}=\int_{4\pi} \mathrm{d}\Omega~\Phi(\mathbf{n})Y_{\ell m}(\mathbf{n}), \end{equation} where the integration is over the entire sphere of directions $\mathbf{n}$. Any anisotropy fingerprint is encoded in the $a_{\ell m}$ spherical harmonic coefficients. Variations on an angular scale of $\Theta$ radians contribute amplitude in the $\ell\simeq1/\Theta$ modes. However, in case of partial sky coverage, the solid angle in the sky where the exposure is zero makes it impossible to estimate the multipolar coefficients $a_{\ell m}$ in this way. This is because the unseen solid angle prevents one from making use of the orthonormality of the spherical harmonics~\citep{Sommers2001}. Since the observed arrival direction distribution is in this case the \textit{combination} of the angular distribution $\Phi(\mathbf{n})$ and of the directional exposure function $\omega(\mathbf{n})$, the integration performed in Eqn.~\ref{eqn:alm} no longer yields the multipolar coefficients of $\Phi(\mathbf{n})$, but only those of $\omega(\mathbf{n})~\Phi(\mathbf{n})$~\citep{alm}~\footnote{To cope with the unseen solid angle, another approach makes use of orthogonal functions of increasing multipolarity, tailored to the exposure $\omega$ itself~\citep{alm}. This method would yield similar accuracies.}: \begin{eqnarray} \label{eqn:blm} b_{\ell m}&=&\int_{\Delta\Omega} \mathrm{d}\Omega~\omega(\mathbf{n})\Phi(\mathbf{n})Y_{\ell m}(\mathbf{n}) \nonumber \\ &=&\sum_{\ell^\prime\geq0}\sum_{m^\prime=-\ell^\prime}^{\ell^\prime} a_{\ell^\prime m^\prime}\int_{\Delta\Omega} \mathrm{d}\Omega~\omega(\mathbf{n})Y_{\ell^\prime m^\prime}(\mathbf{n})Y_{\ell m}(\mathbf{n}). \end{eqnarray} Formally, the $a_{\ell m}$ coefficients are related to the $b_{\ell m}$ ones through a convolution such that $b_{\ell m}=\sum_{\ell^\prime\geq0}\sum_{m^\prime=-\ell^\prime}^{\ell^\prime}[K]_{\ell m}^{\ell^\prime m^\prime}~a_{\ell^\prime m^\prime}$. The matrix $K$, which imprints the interferences between modes induced by the non-uniform and partial coverage of the sky, is entirely determined by the directional exposure. The relationship established in Eqn.~\ref{eqn:blm} is valid for \textit{any} exposure function $\omega(\mathbf{n})$. Meanwhile, the observed arrival direction distribution, $\overline{\mathrm{d}N}(\mathbf{n})/\mathrm{d}\Omega$, provides a direct estimation of the $b_{\ell m}$ coefficients through (hereafter, we use an over-line to indicate the \emph{estimator} of any quantity)~: \begin{equation} \label{eqn:blm-est-1} \overline{b}_{\ell m}=\int_{\Delta\Omega} \mathrm{d}\Omega~\frac{\overline{\mathrm{d}N}(\mathbf{n})}{\mathrm{d}\Omega}~Y_{\ell m}(\mathbf{n}), \end{equation} where the distribution $\overline{\mathrm{d}N}(\mathbf{n})/\mathrm{d}\Omega$ of any set of $N$ arrival directions $\{\mathbf{n}_1, ..., \mathbf{n}_N\}$ can be modelled as a sum of Dirac functions on the sphere.
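With the Dirac modelling, the estimator of Eqn.~\ref{eqn:blm-est-1} reduces to a weighted sum of real spherical harmonics over the events, the weights being those of section~\ref{sub:exposure} (this anticipates Eqn.~\ref{eqn:blm-est} below). A sketch, using one standard convention for building real harmonics from the complex ones available in \texttt{scipy}:
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def real_ylm(l, m, alpha, delta):
    """Real Y_lm at right ascension alpha and declination delta [rad]."""
    colat = np.pi / 2.0 - delta                 # polar angle
    y = sph_harm(abs(m), l, alpha, colat)       # complex Y_l^|m|
    if m > 0:
        return np.sqrt(2.0) * (-1.0) ** m * y.real
    if m < 0:
        return np.sqrt(2.0) * (-1.0) ** m * y.imag
    return y.real

def b_lm(l, m, alpha_events, delta_events, weights):
    """Weighted estimator of b_lm over the observed arrival directions."""
    return np.sum(weights * real_ylm(l, m, alpha_events, delta_events))
\end{verbatim}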
Then, if the multipolar expansion of the angular distribution $\Phi(\mathbf{n})$ is \textit{bounded} to $\ell_{\mathrm{max}}$, that is, if $\Phi(\mathbf{n})$ has no moments higher than $\ell_{\mathrm{max}}$, the first $b_{\ell m}$ coefficients with $\ell\leq\ell_{\mathrm{max}}$ are related to the non-vanishing $a_{\ell m}$ by the square matrix $K_{\ell_{\mathrm{max}}}$ \textit{truncated} to $\ell_{\mathrm{max}}$. Inverting this truncated matrix allows us to recover the underlying $a_{\ell m}$ from the measured $b_{\ell m}$ (with $\ell\leq\ell_{\mathrm{max}}$)~: \begin{equation} \label{eqn:alm-est} \overline{a}_{\ell m}=\sum_{\ell^\prime=0}^{\ell_{\mathrm{max}}}\sum_{m^\prime=-\ell^\prime}^{\ell^\prime} [K^{-1}_{\ell_{\mathrm{max}}}]_{\ell m}^{\ell^\prime m^\prime} \overline{b}_{\ell^\prime m^\prime}. \end{equation} In the case of small anisotropies $(|a_{\ell m}|/a_{00}\ll1)$, the resolution on each recovered $\overline{a}_{\ell m}$ coefficient is proportional to $\bigg([K^{-1}_{\ell_{\mathrm{max}}}]_{\ell m}^{\ell m}\bigg)^{0.5}$~\citep{alm}~: \begin{equation} \label{eqn:rms-alm} \sigma_{\ell m}=\bigg([K^{-1}_{\ell_{\mathrm{max}}}]_{\ell m}^{\ell m}~\overline{a}_{00}\bigg)^{0.5}. \end{equation} The dependence on $\ell_{\mathrm{max}}$ of the coefficients of $K^{-1}_{\ell_{\mathrm{max}}}$ induces an intrinsic indeterminacy of each recovered coefficient $\overline{a}_{\ell m}$ as $\ell_{\mathrm{max}}$ increases. This is simply the mathematical expression of the impossibility of knowing the angular distribution of cosmic rays in the uncovered region of the sky. In the following, we adapt this general formalism to the search for anisotropies in Auger data in different energy intervals. We assume that the energy dependence of the angular distribution of cosmic rays is smooth enough that the multipolar coefficients can be considered constant for any energy $E$ within a narrow interval $\Delta E$. The directional exposure is hereafter considered as independent of the right ascension, as defined in section~\ref{exposure}. Within an energy interval $\Delta E$, the expected arrival direction distribution thus reads~: \begin{equation} \label{eqn:dNdn} \frac{\mathrm{d}N(\mathbf{n})}{\mathrm{d}\Omega}\propto \tilde{\omega}(\delta)~\sum_{\ell\geq0}\sum_{m=-\ell}^{\ell}~a_{\ell m}Y_{\ell m}(\mathbf{n}), \end{equation} where $\tilde{\omega}(\delta)$ is the effective directional exposure for the energy interval $\Delta E$. For convenience, this latter function is normalised such that~: \begin{equation} \tilde{\omega}(\delta)= \frac{\displaystyle\int_{\Delta E} \mathrm{d}E~E^{-\gamma}\omega(\delta,E)}{\displaystyle \max_\delta\bigg[\int_{\Delta E} \mathrm{d}E~E^{-\gamma}\omega(\delta,E)\bigg]}, \end{equation} with $\gamma$ the spectral index in the considered energy range. This dimensionless function provides, for any direction on the sky, the effective directional exposure in the energy range $\Delta E$ at that direction, \textit{relative} to the largest directional exposure on the sky. This is actually the relevant quantity which enters into Eqn.~\ref{eqn:blm} for the analyses presented below. Note that for a directional exposure independent of the right ascension, the coefficients $[K]_{\ell m}^{\ell^\prime m^\prime}$ are proportional to $\delta_m^{m^\prime}$ - \textit{i.e.} different values of $m$ are not mixed in the matrix.
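For an exposure depending only on declination, the truncated matrix $K_{\ell_{\mathrm{max}}}$ can be tabulated once by numerical integration and then inverted. A brute-force sketch, reusing the \texttt{real\_ylm} helper above (a realistic implementation would exploit the block structure in $m$ and perform the right ascension integral analytically):
\begin{verbatim}
import numpy as np

def k_matrix(lmax, omega_tilde, n_dec=400, n_ra=360):
    """[K]_{lm}^{l'm'} for an exposure omega_tilde(delta), midpoint rule."""
    idx = [(l, m) for l in range(lmax + 1) for m in range(-l, l + 1)]
    dec = -np.pi / 2.0 + (np.arange(n_dec) + 0.5) * np.pi / n_dec
    ra = (np.arange(n_ra) + 0.5) * 2.0 * np.pi / n_ra
    RA, DEC = np.meshgrid(ra, dec)
    dOmega = np.cos(DEC) * (np.pi / n_dec) * (2.0 * np.pi / n_ra)
    w = omega_tilde(DEC)
    K = np.zeros((len(idx), len(idx)))
    for i, (l, m) in enumerate(idx):
        yi = real_ylm(l, m, RA, DEC)
        for j, (lp, mp) in enumerate(idx):
            if mp != m:                  # different m values do not mix
                continue
            K[i, j] = np.sum(w * yi * real_ylm(lp, mp, RA, DEC) * dOmega)
    return idx, K

def recover_alm(b_vec, K):
    """Invert the truncated convolution: a_lm = K^{-1} b_lm."""
    return np.linalg.solve(K, np.asarray(b_vec))
\end{verbatim}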
The observed arrival direction distribution, $\overline{\mathrm{d}N}(\mathbf{n})/\mathrm{d}\Omega$, is here modelled as a sum of Dirac functions on the sphere weighted by the factor $\Delta N_{\mathrm{cell}}^{-1}(\alpha^0_{k})$ for each event recorded at local sidereal time $\alpha^0_{k}$, as described in section~\ref{sub:exposure} to correct for the slightly non-uniform directional exposure in right ascension. In this way, the integration in Eqn.~\ref{eqn:blm} yields~: \begin{equation} \label{eqn:blm-est} \overline{b}_{\ell m}=\sum_{k=1}^{N} \frac{Y_{\ell m}(\mathbf{n}_k)}{\Delta N_{\mathrm{cell}}(\alpha^0_{k})}. \end{equation} The multipolar coefficients $\overline{a}_{\ell m}$ are then recovered by means of Eqn.~\ref{eqn:alm-est}. Given the exposure functions described in section~\ref{exposure}, the resolution on each recovered coefficient, encoded in Eqn.~\ref{eqn:rms-alm}, is degraded by a factor larger than 2 each time $\ell_{\mathrm{max}}$ is incremented by 1. This prevents the recovery of each coefficient with good accuracy as soon as $\ell_{\mathrm{max}}\geq3$, since, for $\ell_{\mathrm{max}}=3$ for instance, our current statistics would only allow us to probe dipole amplitudes at the 10\% level. Consequently, in the following, we restrict ourselves to reporting results on individual coefficients obtained when assuming a dipolar distribution $(\ell_{\mathrm{max}}=1)$ and a quadrupolar distribution $(\ell_{\mathrm{max}}=2)$. Meanwhile, due to the interferences between modes induced by the non-uniform and partial sky coverage, it is important to stress again that each multipolar coefficient recovered under the assumption of a particular bound $\ell_{\mathrm{max}}$ might be biased if the underlying angular distribution of cosmic rays is not bounded to $\ell_{\mathrm{max}}$. Given the directional exposure functions considered in this study, this effect can be important only if the angular distribution has in fact significant moments of order $\ell_{\mathrm{max}}+1$. \subsection{Searches for dipolar patterns} \label{dipole} As outlined in the introduction, a measurable dipole is regarded as a likely possibility in many scenarios for the origin of cosmic rays at EeV energies. Assuming that the angular distribution of cosmic rays is modulated by a \emph{pure} dipole, the intensity $\Phi(\mathbf{n})$ can be parameterised in any direction $\mathbf{n}$ as~: \begin{equation} \label{eqn:phi-dip} \Phi(\mathbf{n})=\frac{\Phi_0}{4\pi}~\bigg(1+r~\mathbf{d}\cdot\mathbf{n} \bigg), \end{equation} where $\mathbf{d}$ denotes the dipole unit vector. The dipole pattern is here fully characterised by a declination $\delta_d$, a right ascension $\alpha_d$, and an \emph{amplitude} $r$ corresponding to the maximal anisotropy contrast~: \begin{equation} r=\frac{\Phi_{\mathrm{max}}-\Phi_{\mathrm{min}}}{\Phi_{\mathrm{max}}+\Phi_{\mathrm{min}}}. \end{equation} The estimation of these three coefficients is straightforward from the estimated spherical harmonic coefficients $\overline{a}_{1m}$~: $\overline{r}=[3(\overline{a}_{10}^2+\overline{a}_{11}^2+\overline{a}_{1-1}^2)]^{0.5}/\overline{a}_{00}$, $\overline{\delta}=\arcsin{\big(\sqrt{3}\,\overline{a}_{10}/(\overline{a}_{00}\overline{r})\big)}$, and $\overline{\alpha}=\arctan{(\overline{a}_{1-1}/\overline{a}_{11})}$. Uncertainties on $\overline{r}$, $\overline{\delta}$ and $\overline{\alpha}$ are obtained from the propagation of uncertainties on each recovered $\overline{a}_{1m}$ coefficient (\textit{cf} Eqn.~\ref{eqn:rms-alm}).
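In code, the conversion from the recovered coefficients to the dipole observables is immediate (a sketch; \texttt{arctan2} is used to resolve the quadrant left ambiguous by the $\arctan$ above):
\begin{verbatim}
import numpy as np

def dipole_from_alm(a00, a10, a11, a1m1):
    """Dipole amplitude, declination and right ascension from the a_1m."""
    r = np.sqrt(3.0 * (a10**2 + a11**2 + a1m1**2)) / a00
    dec = np.arcsin(np.sqrt(3.0) * a10 / (a00 * r))
    ra = np.arctan2(a1m1, a11) % (2.0 * np.pi)
    return r, dec, ra
\end{verbatim}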
For an underlying isotropic distribution, and for an axisymmetric directional exposure around the axis defined by the North and South equatorial poles, the probability density function of $\overline{r}$ is given by~\citep{AugerJCAP2011}~: \begin{equation} p_R(\overline{r})=\frac{\overline{r}}{\sigma\sqrt{\sigma_z^2-\sigma^2}}~\mathrm{erfi}\bigg(\frac{\sqrt{\sigma_z^2-\sigma^2}}{\sigma\sigma_z}\frac{\overline{r}}{\sqrt{2}}\bigg)~\exp{\bigg(-\frac{\overline{r}^2}{2\sigma^2}\bigg)}, \end{equation} where $\mathrm{erfi}(z)=\mathrm{erf}(iz)/i$, $\sigma=\sqrt{3}\sigma_{11}/\overline{a}_{00}$, and $\sigma_z=\sqrt{3}\sigma_{10}/\overline{a}_{00}$. The probability $P_R(>\overline{r})$ that an amplitude equal to or larger than $\overline{r}$ arises from a statistical fluctuation of an isotropic distribution is then obtained by integrating $p_R$ above $\overline{r}$~: \begin{equation} P_R(>\overline{r})=\mathrm{erfc}\bigg(\frac{\overline{r}}{\sqrt{2}\sigma_z}\bigg)+\mathrm{erfi}\bigg(\frac{\sqrt{\sigma_z^2-\sigma^2}}{\sigma\sigma_z}\frac{\overline{r}}{\sqrt{2}}\bigg)~\exp{\bigg(-\frac{\overline{r}^2}{2\sigma^2}\bigg)}. \end{equation} \begin{figure}[!t] \centering \includegraphics[width=8.cm]{rDipVsE.eps} \includegraphics[width=7.cm]{DirectionsDip.eps} \caption{\small{Left~: Reconstructed amplitude of the dipole as a function of energy. The dotted line stands for the 99\% $C.L.$ upper bounds on the amplitudes that would result from fluctuations of an isotropic distribution. Right~: Reconstructed declination and right ascension of the dipole with corresponding uncertainties, as a function of energy, in azimuthal projection.}} \label{fig:dip} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=7.5cm]{decDipVsE.eps} \includegraphics[width=7.5cm]{raDipVsE.eps} \caption{\small{Reconstructed declination (left) and right ascension (right) of the dipole as a function of energy. The smooth fit to the data of~\citep{AugerAPP2011} is shown as the dashed line in the right panel~: a consistent smooth behaviour is observed using the analysis presented here and applied to a data set containing two additional years of data.}} \label{fig:directiondip} \end{figure} The reconstructed amplitudes $\overline{r}(E)$ and corresponding directions are shown in Fig.~\ref{fig:dip} with the associated uncertainties, as a function of the energy. The directions are drawn in azimuthal projection, with the equatorial South pole located at the centre and the right ascension going from 0 to 360$^\circ$ clockwise. In the left panel, the 99\% $C.L.$ upper bounds on the amplitudes that would result from fluctuations of an isotropic distribution are indicated by the dotted line (\textit{i.e.} the amplitudes $\overline{r}_{99}(E)$ such that $P_R(>\overline{r}_{99}(E))=0.01$). One can see that within the statistical uncertainties, there is no strong evidence of any significant signal. \begin{figure}[!t] \centering \includegraphics[width=7.5cm]{Map.1-2.57deg.eps} \includegraphics[width=7.5cm]{Map.2-4.57deg.eps} \includegraphics[width=7.5cm]{Map.4-8.57deg.eps} \includegraphics[width=7.5cm]{Map.8-1000.57deg.eps} \caption{\small{Significance sky maps in four independent energy bins. The maps are smoothed using an angular window with radius $\Theta=1~$radian, to exhibit any dipolar-like structures. The directions of the reconstructed dipoles are shown with the associated uncertainties.
The galactic plane and galactic centre are also depicted as the dotted line and the star.}} \label{fig:maps} \end{figure} The reconstructed declinations $\overline{\delta}$ and right ascensions $\overline{\alpha}$ are shown separately in Fig.~\ref{fig:directiondip}. Both quantities are expected to be randomly distributed in case of independent samples whose parent distribution is isotropic. In our previous report on first harmonic analysis in right ascension~\citep{AugerAPP2011}, we pointed out the intriguing smooth alignment of the phases in right ascension as a function of the energy, and noted that, in the case of a real underlying anisotropy, such a consistency of phases in adjacent energy intervals is expected to become manifest with a smaller number of events than that required for the detection of amplitudes standing out significantly above the background noise. This motivated us to design a \textit{prescription} aimed at establishing at 99\% $C.L.$ whether this consistency in phases is real, using exactly the same analysis as the one reported in Ref.~\citep{AugerAPP2011}. The prescribed test will end once the total exposure since 25 June 2011 reaches 21,000~km$^2$~yr~sr. The smooth fit to the data of Ref.~\citep{AugerAPP2011} is shown as a dashed line in the right panel of Fig.~\ref{fig:directiondip}, restricted to the energy range considered here. Though the phase between 4 and 8~EeV is poorly determined due to the corresponding direction in declination pointing close to the equatorial south pole, it is noteworthy that a consistent smooth behaviour is observed using the analysis presented here and applied to a data set containing two additional years of data. It is also interesting to see in the left panel that all reconstructed declinations are in the equatorial southern hemisphere. \begin{figure}[!t] \centering \includegraphics[width=7.5cm]{rDipVsE-largebins.eps} \includegraphics[width=7.5cm]{rDipVsEth.eps} \caption{\small{Left~: Amplitude of the dipole for two energy intervals~: $1<E/[\mathrm{EeV}]<4$ and $E>4~$EeV. Right~: Amplitude of the dipole as a function of energy thresholds. The dotted lines stand for the 99\% $C.L.$ upper bounds on the amplitudes that could result from fluctuations of an isotropic distribution.}} \label{fig:rdip2} \end{figure} For completeness, significance sky maps are displayed in Fig.~\ref{fig:maps} in equatorial coordinates and using a Mollweide projection, for the four energy ranges. The galactic plane and galactic centre are also depicted as the dotted line and the star. Significances are calculated using the Li and Ma estimator~\citep{LiMa1983}. This widely used estimator of significance, $S$, properly accounts for the fluctuations of the background and of a possible signal in any angular region searched~\footnote{The parameter $\alpha_{LM}$ in the expression of the Li \& Ma significance, expressing the expected ratio of the count numbers between the angular region searched (the \textit{on-region}) and any background region if there is no signal in the on-region, is here taken as the ratio between the expected number of events in the on-region and the total number of events in the energy range considered.}. If no signal is present, the variable $S$ is nearly normally distributed even for small count numbers, so that positive values of $S$ can be interpreted as the number of standard deviations of any excess in the sky. Likewise, for negative values of $S$, $-S$ can be interpreted as the number of standard deviations of any deficit in the sky.
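For reference, the Li \& Ma significance (their Eq.~17) may be sketched as follows, signed here by the sign of the excess so that deficits come out negative, as described above:
\begin{verbatim}
import numpy as np

def li_ma_significance(n_on, n_off, alpha_lm):
    """Signed Li & Ma (1983, Eq. 17) significance of an excess or deficit."""
    n_tot = n_on + n_off
    term_on = n_on * np.log((1.0 + alpha_lm) / alpha_lm * n_on / n_tot)
    term_off = n_off * np.log((1.0 + alpha_lm) * n_off / n_tot)
    s = np.sqrt(2.0 * (term_on + term_off))
    return np.sign(n_on - alpha_lm * n_off) * s
\end{verbatim}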
The maps show the overdensities obtained in circular windows of radius $\Theta=1~$radian, to better exhibit possible dipolar-like structures. The directions of the reconstructed dipoles are also shown, with their associated uncertainties (thick circles). Finally, since some consistency is observed both in declination and right ascension as a function of energy, the use of larger energy intervals and/or energy thresholds may help to pick up a significant signal above the background level. The amplitudes of the dipole are shown in Fig.~\ref{fig:rdip2} for two energy intervals ($1<E/[\mathrm{EeV}]<4$ and $E>4~$EeV) and as a function of energy thresholds. This does not provide any further evidence for significant anisotropies. \subsection{Searches for quadrupolar patterns} \label{quadrupole} Any excesses along a plane would show up as a prominent quadrupole moment. Such excesses are plausible, for instance, at EeV energies in the case of an emission of light cosmic rays from sources preferentially located in the galactic disk, or at higher energies from sources preferentially located in the super-galactic plane. Consequently, a measurable quadrupole may be regarded as an interesting outcome of an anisotropy search at ultra-high energies. \begin{figure}[!t] \centering \includegraphics[width=7.5cm]{rDipVsE.2.eps} \includegraphics[width=7.5cm]{rDipVsE.2-largebins.eps} \includegraphics[width=7.5cm]{lambdaQuadVsE.eps} \includegraphics[width=7.5cm]{lambdaQuadVsE-largebins.eps} \includegraphics[width=7.5cm]{betaQuadVsE.eps} \includegraphics[width=7.5cm]{betaQuadVsE-largebins.eps} \caption{\small{Amplitudes of the dipolar (top) and quadrupolar moments (middle and bottom) as a function of energy using a multipolar reconstruction up to $\ell_{\mathrm{max}}=2$, for two different binnings (left and right). In each panel, the dotted lines stand for the 99\% $C.L.$ upper bounds on the amplitudes that could result from fluctuations of an isotropic distribution.}} \label{fig:ampquad} \end{figure} Assuming now that the angular distribution of cosmic rays is modulated by a dipole \emph{and} a quadrupole, the intensity $\Phi(\mathbf{n})$ can be parameterised in any direction $\mathbf{n}$ as~: \begin{equation} \label{eqn:phi-quad} \Phi(\mathbf{n})=\frac{\Phi_0}{4\pi}~\bigg(1+r~\mathbf{d}\cdot\mathbf{n} +\frac{1}{2}\sum_{i,j}Q_{ij}n_in_j \bigg), \end{equation} where $\mathbf{Q}$ is a traceless and symmetric second-order tensor. Its five independent components are determined in a straightforward way from the $\ell=2$ spherical harmonic coefficients $a_{2m}$. Denoting by $\lambda_+,\lambda_0,\lambda_-$ the three eigenvalues of $\mathbf{Q}/2$ ($\lambda_+$ being the highest one and $\lambda_-$ the lowest one) and by $\mathbf{q_+},\mathbf{q_0},\mathbf{q_-}$ the three corresponding unit eigenvectors, the intensity can be parameterised in a more intuitive way as~: \begin{equation} \Phi(\mathbf{n})=\frac{\Phi_0}{4\pi}~\bigg(1+r~\mathbf{d}\cdot\mathbf{n} +\lambda_+(\mathbf{q_+}\cdot\mathbf{n})^2 +\lambda_0(\mathbf{q_0}\cdot\mathbf{n})^2 +\lambda_-(\mathbf{q_-}\cdot\mathbf{n})^2 \bigg). \end{equation} It is then convenient to define the quadrupole amplitude $\beta$ as~: \begin{equation} \beta\equiv\frac{\lambda_+-\lambda_-}{2+\lambda_++\lambda_-}.
\end{equation} In the case of a pure quadrupolar distribution (\textit{i.e.} in the absence of a dipole), $\beta$ is nothing but the customary measure of the maximal anisotropy contrast~: \begin{equation} r=0\Rightarrow \beta=\frac{\lambda_+-\lambda_-}{2+\lambda_++\lambda_-}=\frac{\Phi_{\mathrm{max}}-\Phi_{\mathrm{min}}}{\Phi_{\mathrm{max}}+\Phi_{\mathrm{min}}}. \end{equation} Hence, any quadrupolar pattern can be fully described by two amplitudes $(\beta,\lambda_+)$ and three angles~: $(\delta_+,\alpha_+)$, which define the orientation of $\mathbf{q_+}$, and $\alpha_-$, which defines the direction of $\mathbf{q_-}$ in the plane orthogonal to $\mathbf{q_+}$. The third eigenvector $\mathbf{q_0}$ is orthogonal to $\mathbf{q_+}$ and $\mathbf{q_-}$, and its corresponding eigenvalue $\lambda_0$ is such that the traceless condition is satisfied~: $\lambda_++\lambda_-+\lambda_0=0$. Though the probability density functions of the estimated quadrupole amplitudes $(\overline{\beta},\overline{\lambda}_+)$ can in principle be calculated in the same way as in the case of the estimated dipole amplitude $\overline{r}$, the expressions are much more complicated to obtain even semi-analytically, and we rely hereafter on Monte-Carlo simulations to tabulate the distributions. The amplitudes $\overline{r}(E)$, $\overline{\lambda}_+(E)$ and $\overline{\beta}(E)$ are shown in Fig.~\ref{fig:ampquad} as functions of energy. Dipole amplitudes are compatible with expectations from isotropy. Compared to the results on the dipole obtained in the previous section for $\ell_{\mathrm{max}}=1$, the sensitivity is now degraded by a factor larger than 2, as expected from the dependence of the resolution $\sigma_{\ell m}$ on $\ell_{\mathrm{max}}$ (\textit{cf.}~Eqn.~\ref{eqn:rms-alm}). In the same way as for the dipole amplitudes, the 99\% $C.L.$ upper bounds on the quadrupole amplitudes that could result from fluctuations of an isotropic distribution are indicated by the dotted lines. They correspond to the amplitudes $\overline{\lambda}_{+,99}(E)$ and $\overline{\beta}_{99}(E)$ such that the probabilities $P_{\Lambda_+}(>\overline{\lambda}_{+,99}(E))$ and $P_{B}(>\overline{\beta}_{99}(E))$ arising from statistical fluctuations of isotropy are equal to 0.01. Here, both distributions $P_{\Lambda_+}$ and $P_{B}$ are sampled from Monte-Carlo simulations. Throughout the energy scan, there is no evidence for anisotropy. The largest deviation from isotropic expectations occurs between 2 and 4~EeV, where both amplitudes $\overline{\lambda}_+$ and $\overline{\beta}$ lie just above $\overline{\lambda}_{+,99}$ and $\overline{\beta}_{99}$. \section{Additional cross-checks against experimental effects} \label{systematics} \subsection{More on the influence of shower size corrections for geomagnetic effects} Understanding the influence of the shower size corrections for geomagnetic effects is critical to get unbiased estimates of the anisotropy parameters. Without accounting for these effects, an increase of the event rate would be observed close to the equatorial south pole with respect to expectations for isotropy, while a decrease would be observed close to the edge of the directional exposure in the equatorial northern hemisphere. This would result in the observation of a fake dipole. A convenient way to exhibit this effect is to separate the dipole into two components~: the component of the dipole in the equatorial plane, $r_\perp$, and the component along the Earth rotation axis, $r_\parallel$.
While $r_\perp$ is expected to be affected only by time-dependent effects, $r_\parallel$ is, on the other hand, the relevant quantity sensitive to time-independent effects such as the geomagnetic one. \begin{table*}[h] \begin{center} \begin{tabular}{c|c|c|c|c} $\Delta E$ [EeV] & $\overline{r}_\perp^{uncorr} [\%]$ & $\overline{r}_\perp [\%]$ & $\overline{r}_\parallel^{uncorr} [\%]$ & $\overline{r}_\parallel [\%]$ \\ \hline \hline $1 - 4$ & $0.9\pm0.3$ & $0.9\pm0.3$ & $-2.2\pm0.4$ & $-1.0\pm0.4$ \\ $>4$ & $1.8\pm1.0$ & $2.1\pm1.0$ & $-4.1\pm1.7$ & $-3.0\pm1.7$ \end{tabular} \caption{\small{Influence of shower size corrections for geomagnetic effects on the component of the dipole in the equatorial plane and on the one along the Earth rotation axis.}} \label{tab:geom} \end{center} \end{table*}% Estimates of $r_\perp$ and $r_\parallel$ obtained with and without accounting for geomagnetic effects are given in Table~\ref{tab:geom}, in two different energy ranges. These estimates are obtained from the recovered $\overline{a}_{1m}$ coefficients~: $\overline{r}_\parallel=\sqrt{3}\overline{a}_{10}/\overline{a}_{00}$, and $\overline{r}_\perp=[3(\overline{a}_{11}^2+\overline{a}_{1-1}^2)]^{0.5}/\overline{a}_{00}$. It can be seen that the main effect of the geomagnetic corrections is a shift in $\overline{r}_\parallel$ of about 1.2\%. In the energy range $1\leq E/[\mathrm{EeV}]\leq4$, this shift is significant, $\overline{r}_\parallel$ changing from $-2.2\%$ to $-1.0\%$ with an uncertainty amounting to 0.4\%. Above 4~EeV, the net correction is of the same order, though the statistical uncertainties are larger. In contrast, $\overline{r}_\perp$ remains almost unchanged in both cases, as expected. \subsection{Possible energy dependence of the attenuation curve} In this section, we study to what extent the procedure used to obtain the attenuation curve in section~\ref{s1000-cic} might influence the determination of the anisotropy parameters. To convert the shower size into energy, we explained and applied in section~\ref{s1000-cic} the constant intensity cut method for showers with $S_{38^\circ}\geq 22~$VEM, that is, just above the threshold energy for full efficiency. The value of the parameter $a$ obtained in these conditions is consistent within the statistical uncertainties with the one previously reported when applying the same constant intensity cut method for showers with $S_{38^\circ}\geq 47~$VEM. In contrast, the value obtained for the coefficient $b$ differs by more than 3 standard deviations. Such a difference might be expected from both the evolution of the maximum of the showers and from a possible change in composition with energy, but it may also be due to energy- and angle-dependent resolution effects mimicking a real evolution with energy. With a different attenuation curve, some events would be reconstructed in the adjacent energy intervals, to an extent which depends on the change of the attenuation curve with zenith angle. For that reason, the determination of the anisotropy parameters might be altered by this effect. Disentangling a real evolution of the attenuation curve with energy from resolution effects is beyond the scope of this paper and will be addressed elsewhere. Here, we restrict ourselves to probing the effect that a real energy dependence would have on the determination of the anisotropy parameters. To do so, we choose to fit the values of the coefficient $b$ obtained for $S_{38^\circ}=22~$VEM and $S_{38^\circ}=47~$VEM through a linear dependence on the logarithm of $S_{38^\circ}$.
Below and above these values, the behaviour of $b(E)$ is obtained by \textit{extrapolating} this energy dependence. In this way, the changes in the anisotropy parameters are probed under \textit{extreme} conditions. Repeating the whole chain of the analysis with this new attenuation curve, we find that the reconstructed dipole parameters are only marginally affected by this change, as illustrated in the top and middle panels of Fig.~\ref{fig:syst}. Meanwhile, both reconstructed quadrupole amplitudes in the energy interval $2\leq E\mathrm{/EeV}\leq 4$ are reduced in such a way that they now lie just below the 99\% upper bounds for isotropy. Conversely, the amplitudes in the energy interval $1\leq E\mathrm{/EeV}\leq 2$ are slightly increased. Below 4~EeV, the determination of the attenuation curve thus appears to introduce some systematic uncertainty in the determination of the quadrupole amplitudes. The two extreme extrapolations performed in this analysis (\textit{i.e.} $b$ constant with the energy or linearly dependent on the logarithm of the energy) allow us to bracket the possible values. \begin{figure}[!t] \centering \includegraphics[width=9cm]{rDipVsE_syst.eps} \includegraphics[width=7.5cm]{decDipVsE_syst.eps} \includegraphics[width=7.5cm]{raDipVsE_syst.eps} \includegraphics[width=7.5cm]{lambdaQuadVsE_syst.eps} \includegraphics[width=7.5cm]{betaQuadVsE_syst.eps} \caption{\small{Impact of different sources of systematic uncertainties on the dipole amplitudes (top) and the dipole directions and phases (middle) obtained under the assumption $\ell_{\mathrm{max}}=1$, and quadrupole amplitudes (bottom) obtained with $\ell_{\mathrm{max}}=2$, as a function of the energy. The blue bands correspond to the results presented in Fig.~\ref{fig:dip} and Fig.~\ref{fig:ampquad}. }} \label{fig:syst} \end{figure} \subsection{Systematic uncertainties associated with corrections for weather and geomagnetic effects} In section~\ref{s1000}, we presented the procedure adopted to account for the changes in shower size due to weather and geomagnetic effects. Since the coefficients $\alpha_P$, $\alpha_\rho$ and $\beta_\rho$ in Eqn.~\ref{sweather} were extracted from real data, they suffer from statistical uncertainties which may systematically affect the corrections made on $S(1000)$, and consequently may also affect the anisotropy parameters derived from the data set. In addition, the determination of $g_1$ and $g_2$ in Eqn.~\ref{sgeom} is based on the simulation of showers. Both the systematic uncertainties associated with the different interaction models and primary masses and the statistical uncertainties related to the procedure used to extract $g_1$ and $g_2$ constitute a source of systematic uncertainties on the anisotropy parameters. To quantify these systematic uncertainties, we repeated the whole chain of the analysis on a large number of modified data sets. Each modified data set is built by randomly sampling the coefficients $\alpha_P$, $\alpha_\rho$ and $\beta_\rho$ (or $g_1$ and $g_2$ when dealing with geomagnetic effects) according to the corresponding uncertainties and correlations between parameters, through the use of a Gaussian probability distribution function. For each new set of correction coefficients, new sets of anisotropy parameters are then obtained. The RMS of the resulting distribution of each anisotropy parameter is the systematic uncertainty that we assign. Results are shown in Fig.~\ref{fig:syst}, in terms of the dipole and quadrupole amplitudes as a function of the energy.
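As a rough illustration of this resampling procedure, the following sketch of ours propagates Gaussian fluctuations of the correction coefficients to an anisotropy parameter and assigns its RMS as the systematic uncertainty; the coefficient values, their covariance, and the stand-in for the full reconstruction chain are placeholders, not the values used in the analysis~:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Placeholder central values and covariance of (alpha_P, alpha_rho, beta_rho);
# in the real analysis these come from the fits to the data, including the
# correlations between the parameters.
theta0 = np.array([5.0e-3, -1.0e-2, 2.0e-3])
cov = np.diag([1.0e-3, 2.0e-3, 5.0e-4]) ** 2

def anisotropy_parameter(theta):
    """Stand-in for the full chain (correct S(1000), rebuild the energies,
    redo the multipolar reconstruction): a toy linear response is used
    here only to make the example self-contained."""
    return 0.01 + 0.5 * theta[0] - 0.3 * theta[1]

amplitudes = [anisotropy_parameter(rng.multivariate_normal(theta0, cov))
              for _ in range(1000)]          # modified data sets
print(np.std(amplitudes))                    # RMS assigned as systematic
\end{verbatim}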
Compared to the statistical uncertainties in the original analysis (shown by the bands), it is apparent that both sources of systematic uncertainties have a negligible impact on each reconstructed anisotropy amplitude. \section{Upper limits and discussion} \label{discussion} From the analyses reported in section~\ref{analysis}, upper limits on the dipole and quadrupole amplitudes can be derived at 99\% $C.L.$ (see appendices C and D). All relevant results are summarised in Table~\ref{tab:dip} and Table~\ref{tab:quad}. The upper limits are also shown in Fig.~\ref{fig:UL}, accounting for the systematic uncertainties discussed in the previous section~: in the last two energy bins, the upper limits are quite insensitive to the systematic uncertainties because all amplitudes lie well within the background noise. We illustrate below the astrophysical interest of these upper limits by calculating the anisotropy amplitudes expected in a toy scenario in which sources of EeV cosmic rays are stationary, densely and uniformly distributed in the galactic disk, and emit particles in all directions. \begin{table*}[h] \begin{center} \begin{tabular}{c|c|c|c|c|c} $\Delta E$ [EeV] & $N$ & $\overline{r}$ [\%] & $\overline{\delta} [^\circ]$ & $\overline{\alpha} [^\circ]$ & UL [\%] \\ \hline \hline $1 - 2$ & 360132 & $1.0 \pm 0.4$ & $-15 \pm 32$ & $342\pm 20$ & 1.5 \\ $2 - 4$ & 88042 & $1.6 \pm 0.8$ & $-46\pm 28$ & $35\pm 30$ & 2.8 \\ $4 - 8$ & 19794 & $2.7\pm 2.0$ & $-69\pm 30$ & $25\pm 74$ & 5.8 \\ $>8$ & 8364 & $7.5\pm 2.5$ & $-37\pm 21$ & $96\pm 18$ & 11.4 \end{tabular} \caption{\small{Summary of the dipolar analysis ($\ell_{\mathrm{max}}=1$) reported in section~\ref{dipole}, together with the derived 99\% $C.L.$ upper limits (UL) on the amplitudes.}} \label{tab:dip} \end{center} \end{table*}% \begin{table*}[h] \begin{center} \begin{tabular}{c|c|c|c|c} $\Delta E$ [EeV] & $\overline{\lambda}_+$ [\%] & $\overline{\beta}$ [\%] & UL ($\lambda_+$) [\%] & UL ($\beta$) [\%] \\ \hline \hline $1 - 2$ & $2.0 \pm 0.7$ & $1.7 \pm 0.6$ & $3.0$ & $2.9$ \\ $2 - 4$ & $5.0 \pm 1.7$ & $4.2 \pm 1.3$ & $6.3$ & $6.1$ \\ $4 - 8$ & $1.6 \pm 2.0$ & $1.9\pm 1.8$ & $10.0$ & $9.4$ \\ $>8$ & $4.0 \pm 3.4$ & $3.9\pm 2.7$ & $14.5$ & $13.8$ \end{tabular} \caption{\small{Summary of the quadrupolar analysis ($\ell_{\mathrm{max}}=2$) reported in section~\ref{quadrupole}, together with the derived 99\% $C.L.$ upper limits (UL) on the amplitudes.}} \label{tab:quad} \end{center} \end{table*}% Both the strength and the structure of the magnetic field in the Galaxy, known only approximately, play a crucial role in the propagation of cosmic rays. The field is thought to contain a large scale regular component and a small scale turbulent one, both having a local strength of a few microgauss (see \textit{e.g.}~\citep{Beck2001}). While the turbulent component dominates in strength by a factor of a few, the regular component imprints dominant drift motions as soon as the Larmor radius of the cosmic rays is larger than the maximal scale of the turbulences (thought to be in the range 10-100~pc). We adopt in the following a recent parameterisation of the regular component obtained by fitting model field geometries to Faraday rotation measures of extragalactic radio sources and polarised synchrotron emission~\citep{Pshirkov2011}. It consists of two different components~: a disk field and a halo field.
The disk field is symmetric with respect to the galactic plane, and is described by the widely used logarithmic spiral model with reversals of the field direction in two different arms (the so-called \textit{BSS-model}). The halo field is anti-symmetric with respect to the galactic plane and purely toroidal. The detailed parameterisation is given in Ref.~\citep{Pshirkov2011} (with the set of parameters reported in Table~3). In addition to the regular component, a turbulent field is generated according to a Kolmogorov power spectrum and is pre-computed on a three-dimensional grid periodically repeated in space. The size of the grid is taken as 100~pc, as is the maximal scale of the turbulences, and the strength of the turbulent component is taken as three times the strength of the regular one. To describe the propagation of cosmic rays with energies $E\geq 1$~EeV in such a magnetic field, the direct integration of trajectories is the most appropriate tool. Performing the forward tracking of particles from galactic sources and recording those particles which cross the Earth is, however, not feasible within a reasonable computing time. So, to obtain the anisotropy of cosmic rays emitted from sources uniformly distributed in a disk with a radius of 20~kpc from the galactic centre and with a height of $\pm$100~pc, we adopt a method first proposed in Ref.~\citep{Thielheim1968} and since widely used in the literature. It consists in back-tracking anti-particles with random directions from the Earth to outside the Galaxy (a minimal sketch of such an integration is given below). Each test particle \textit{probes} the total luminosity along the path of propagation from each direction as seen from the Earth. For \textit{stationary sources emitting cosmic rays in all directions}, the flux expected in a given sampled direction is then proportional to the time spent in the source region by the test particles arriving from that direction. The amplitudes of the anisotropy obviously depend on the rigidity $E/Z$ of the cosmic rays, with $Z$ the electric charge of the particles. Since we only aim at illustrating the upper limits, we consider two extreme single primaries~: protons and iron nuclei. In the energy range $1\leq E/\mathrm{EeV}\leq 20$, it is unlikely that our measurements of the average position in the atmosphere of the shower maximum and of the corresponding RMS can be reproduced with a single primary~\citep{AugerPRL2010}. Moreover, in the scenario explored here and for a single primary, the energy spectrum is expected to reveal a \textit{hardening} in this energy range, whose origin is different from the one expected if the ankle marks the cross-over between galactic and extragalactic cosmic rays~\citep{Linsley1963} or if it marks the distortion of a proton-dominated extragalactic spectrum due to $e^+/e^-$ pair production of protons with the photons of the cosmic microwave background~\citep{Hillas1967,Blumenthal1970,Berezinsky2006,Berezinsky2004}. For a given configuration of the magnetic field, the exact energy at which this hardening occurs depends on the electric charge of the cosmic rays. This is because the average time spent in the source region first decreases as $\simeq E^{-1}$ and then tends to the constant free escape time, as a consequence of the direct escape from the Galaxy. The hardening with $\Delta\gamma\simeq0.6$ observed at 4~EeV in our measurements of the energy spectrum is not compatible with the one expected in this scenario ($\Delta\gamma\simeq1$).
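The back-tracking integration mentioned above can be sketched as follows; this is a toy illustration of ours, in which the field model, the step size, and the source-region geometry are simplified stand-ins for the actual simulation~:
\begin{verbatim}
import numpy as np

R_LARMOR_KPC = 1.1   # Larmor radius [kpc] ~ 1.1 * E[EeV] / (Z * B[muG])

def toy_field(x):
    """Toy azimuthal disk field [muG] decaying away from the plane;
    a stand-in for the regular + turbulent model used in the text."""
    bx, by = -x[1], x[0]
    norm = np.hypot(bx, by) + 1e-12
    return 2.0 * np.exp(-abs(x[2])) * np.array([bx / norm, by / norm, 0.0])

def backtrack(x0, n0, e_eev, z, ds=0.05, n_steps=4000):
    """RK4 back-tracking of an anti-particle from the Earth: x in kpc,
    n the unit direction of propagation along the trajectory."""
    def deriv(state):
        x, n = state[:3], state[3:]
        b = toy_field(x)
        bnorm = np.linalg.norm(b)
        if bnorm == 0.0:
            return np.concatenate([n, np.zeros(3)])
        r_l = R_LARMOR_KPC * e_eev / (z * bnorm)  # gyroradius in this field
        return np.concatenate([n, np.cross(n, b / bnorm) / r_l])
    state, traj = np.concatenate([x0, n0]), [np.array(x0)]
    for _ in range(n_steps):
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * ds * k1)
        k3 = deriv(state + 0.5 * ds * k2)
        k4 = deriv(state + ds * k3)
        state = state + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        state[3:] /= np.linalg.norm(state[3:])    # keep the direction unitary
        traj.append(state[:3].copy())
    return np.array(traj)

# One arrival direction seen from the Earth (~8.5 kpc from the centre); the
# flux is taken proportional to the path length spent in the source region.
path = backtrack(np.array([8.5, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 1.0, 1)
in_src = (np.abs(path[:, 2]) < 0.1) & (np.linalg.norm(path[:, :2], axis=1) < 20.0)
print(in_src.sum() * 0.05, "kpc spent in the source region")
\end{verbatim}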
Nevertheless, the calculation of the dipole and quadrupole amplitudes for single primaries is useful to probe the allowed contribution of each primary as a function of the energy. \begin{figure}[!t] \centering \includegraphics[width=7.5cm]{UpperLimitsDipole.eps} \includegraphics[width=7.5cm]{UpperLimitsQuadrupole.eps} \caption{\small{99\% $C.L.$ upper limits on dipole and quadrupole amplitudes as a function of the energy. Some generic anisotropy expectations from stationary galactic sources distributed in the disk are also shown, for various assumptions on the cosmic ray composition. The fluctuations of the amplitudes due to the stochastic nature of the turbulent component of the magnetic field are sampled from different simulation data sets and are shown by the bands (see text).}} \label{fig:UL} \end{figure} The dipole $r$ and quadrupole $\lambda_+$ amplitudes obtained for several energy values covering the range $1\leq E/\mathrm{EeV}\leq 20$ are shown in Fig.~\ref{fig:UL}. To probe unambiguously amplitudes down to the percent level, it is necessary to generate simulated event sets with $\simeq 5\times10^5$ test particles. Such a number of simulated events allows us to shrink the statistical uncertainties on the amplitudes to the $0.5$\% level. Meanwhile, there is an intrinsic variance in the model for each anisotropy parameter, due to the stochastic nature of the turbulent component of the magnetic field. This variance is estimated through the simulation of 20 sets of $5\times10^5$ test particles, where the configuration of the turbulent component is frozen in each set. The RMS of the amplitudes sampled in this way is shown by the bands in Fig.~\ref{fig:UL}. While the dipole amplitude steadily increases for iron nuclei, this is no longer the case for protons around the ankle energy. This is because we explore a source region uniformly distributed in the disk. Consequently, the image of the galactic plane appears less and less distorted by the magnetic field with increasing energy. This gives rise to a large quadrupole moment, which actually turns out to be the main feature of the anisotropy at large scales~\footnote{This feature would remain in the case of a radial distribution of sources following the matter in the Galaxy, though the dipole amplitude would steadily increase above the ankle energy.}. The dipole and quadrupole $\lambda_+$ amplitudes obtained here depend on the model used to describe the galactic magnetic field. We note that recently a new model was presented in Ref.~\citep{Farrar2012}, providing improved fits to Faraday rotation measures of extragalactic radio sources and to polarised synchrotron emission observations. We have checked at a few energies that the results obtained are qualitatively in agreement with the ones presented in Fig.~\ref{fig:UL}. Similar conclusions were reached in Ref.~\citep{Giacinti2011}, where more systematic studies can be found in terms of the field strength and geometry. Around $1~$EeV, various measurements of the depth of shower maximum $X_{max}$~\citep{AugerPRL2010,HiResPRL2010,TAAPS2011} indicate that the cosmic ray composition includes a significant light component. It is apparent that the amplitudes derived for protons largely stand above the allowed limits.
Consequently, unless the strength of the magnetic field is much higher than in the picture used here, the upper limits derived in this analysis exclude the possibility that the light component of cosmic rays comes from stationary galactic sources densely distributed in the galactic disk and emitting in all directions. This is in agreement with the absence of any detectable point-like sources above 1~EeV that would be indicative of a flux of neutrons produced by EeV protons, mainly through pion-producing interactions in the source environments~\citep{Auger2012submit}. On the other hand, if the cosmic ray composition around $1~$EeV results from a mixture containing a large fraction of iron nuclei of galactic origin, the upper limits can still be respected; alternatively, a light component of extragalactic origin would be allowed. Future measurements of the composition below $1~$EeV will come from the low energy extension HEAT now available at the Pierre Auger Observatory~\citep{AugerHEAT}. Combining these measurements with large scale anisotropy ones will then allow us to further understand the origin of cosmic rays at energies less than 4~EeV. \section{Summary} \label{summary} For the first time, a thorough search for large scale anisotropies as a function of both the declination and the right ascension in the distribution of the arrival directions of cosmic rays detected above $1$~EeV at the Pierre Auger Observatory has been presented. Compared to the traditional search in right ascension only, this search requires the control of additional systematic effects affecting both the exposure of the sky and the counting rate of events in local angles. All these effects were carefully accounted for and presented in sections~\ref{s1000} and~\ref{exposure}. No significant deviation from isotropy is revealed within the systematic uncertainties, although the consistency of the dipole phases may be indicative of a genuine signal whose amplitude is at the level of the statistical noise. The sensitivity to dipole and quadrupole amplitudes accumulated so far allows us to challenge an origin of cosmic rays from stationary galactic sources densely distributed in the galactic disk and emitting predominantly light particles in all directions. Future work will profit from both the increased statistics and the lower energy threshold that is now available at the Pierre Auger Observatory~\citep{AugerHEAT,AugerAmiga}. This will provide further constraints helping to understand the origin of cosmic rays in the energy range $0.1<E/\mathrm{EeV}<10$. \section*{Acknowledgements} The successful installation, commissioning, and operation of the Pierre Auger Observatory would not have been possible without the strong commitment and effort from the technical and administrative staff in Malarg\"ue.
We are very grateful to the following agencies and organizations for financial support: Comisi\'on Nacional de Energ\'ia At\'omica, Fundaci\'on Antorchas, Gobierno De La Provincia de Mendoza, Municipalidad de Malarg\"ue, NDM Holdings and Valle Las Le\~nas, in gratitude for their continuing cooperation over land access, Argentina; the Australian Research Council; Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico \linebreak (CNPq), Financiadora de Estudos e Projetos (FINEP), Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de Rio de Janeiro (FAPERJ), Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP), Minist\'erio de Ci\^{e}ncia e Tecnologia (MCT), Brazil; AVCR AV0Z10100502 and AV0Z10100522, GAAV KJB100100904, MSMT-CR LA08016, LG11044, MEB111003, MSM0021620859, LA08015 and TACR TA01010517, Czech Republic; Centre de Calcul IN2P3/CNRS, Centre National de la Recherche Scientifique (CNRS), Conseil R\'egional Ile-de-France, D\'epartement Physique Nucl\'eaire et Corpusculaire (PNC-IN2P3/CNRS), D\'epartement Sciences de l'Univers (SDU-INSU/CNRS), France; Bundesministerium f\"ur Bildung und Forschung (BMBF), Deutsche Forschungsgemeinschaft (DFG), Finanzministerium Baden-W\"urttemberg, Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF), Ministerium f\"ur Wissenschaft und Forschung, Nordrhein-Westfalen, Ministerium f\"ur Wissenschaft, Forschung und Kunst, Baden-W\"urttemberg, Germany; Istituto \linebreak Nazionale di Fisica Nucleare (INFN), Ministero dell'Istruzione, dell'Universit\`a e della Ricerca (MIUR), Italy; Consejo Nacional de Ciencia y Tecnolog\'ia (CONACYT), Mexico; Ministerie van Onderwijs, Cultuur en Wetenschap, Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), Stichting voor Fundamenteel Onderzoek der Materie (FOM), Netherlands; Ministry of Science and Higher Education, Grant Nos. N N202 200239 and N N202 207238, Poland; Portuguese national funds and FEDER funds within COMPETE - Programa Operacional Factores de Competitividade through Funda\c{c}\~ao para a Ci\^{e}ncia e a Tecnologia, Portugal; Romanian Authority for Scientific Research, UEFICDI, Ctr.Nr.1/ASPERA2 ERA-NET, Romania; Ministry for Higher Education, Science, and Technology, Slovenian Research Agency, Slovenia; Comunidad de Madrid, FEDER funds, Ministerio de Ciencia e Innovaci\'on and Consolider-Ingenio 2010 (CPAN), Xunta de Galicia, Spain; Science and Technology Facilities Council, United Kingdom; Department of Energy, Contract Nos. DE-AC02-07CH11359, DE-FR02-04ER41300, National Science Foundation, Grant No. 0450696, The Grainger Foundation, USA; NAFOSTED, Vietnam; Marie Curie-IRSES/EPLANET, European Particle Physics Latin American Network, European Union 7th Framework Program, Grant No. PIRSES-2009-GA-246806; and UNESCO.
\section*{Appendix A~: Large scale anisotropies in local coordinates} \label{appendixA} \begin{figure}[!h] \centering \includegraphics[width=7.5cm]{DipoleLocalTheta.eps} \includegraphics[width=7.5cm]{DipoleLocalPhi.eps} \caption{\small{Effect of large scale anisotropies in local coordinates (left~: as a function of $\sin^2{\theta}$, right~: as a function of $\varphi$) for an observer located at the Earth latitude $\ell_{\mathrm{site}}=-35.2^\circ$ of the Pierre Auger Observatory.}} \label{fig:localangles} \end{figure} To study the angular distribution in local coordinates for different anisotropic angular distributions $\Phi(\alpha,\delta)$ in celestial coordinates, we restrict ourselves, without loss of generality, to the case of full detection efficiency ($\epsilon(\theta,\varphi,E)=1$). Then, the instantaneous arrival direction distribution in local coordinates reads~: \begin{equation} \label{eqn:d3N} \frac{\mathrm{d}^3N}{\mathrm{d}\theta \mathrm{d}\varphi \mathrm{d}\alpha^0} \propto \sin{\theta}~\cos{\theta}~\Phi(\theta,\varphi,\alpha^0). \end{equation} $\Phi(\theta,\varphi,\alpha^0)$ is the underlying angular distribution of cosmic rays, expressed in local coordinates. In the case of isotropy, $\Phi$ is constant, so that once integrated over $\varphi$ and $\alpha^0$, the arrival direction distribution is such that $\mathrm{d}N/\mathrm{d}\sin^2{\theta}$ is also constant. On the other hand, in the case of a dipolar distribution for instance, $\Phi$ is proportional to $1+r\mathbf{d}(\theta,\varphi,\alpha^0)\cdot \mathbf{n}(\theta,\varphi)$, where $\mathbf{n}$ is here a unit vector in local coordinates, and $\mathbf{d}$ the dipole unit vector pointing towards $(\alpha_d,\delta_d)$ and expressed in local coordinates by means of Eqn.~\ref{eqn:theta-phi}. To quantify the distortions induced by a dipole in the $\mathrm{d}N/\mathrm{d}\sin^2{\theta}$ distribution, we define $\Delta(\mathrm{d}N/\mathrm{d}\sin^2{\theta})$ such that~: \begin{equation} \label{eqn:DeltadNdsin2th-1} \Delta(\mathrm{d}N/\mathrm{d}\sin^2{\theta}) = \frac{1}{r}~\bigg(\frac{\mathrm{d}N_{dipole}/\mathrm{d}\sin^2{\theta}-\mathrm{d}N_{iso}/\mathrm{d}\sin^2{\theta}}{\mathrm{d}N_{iso}/\mathrm{d}\sin^2{\theta}}\bigg). \end{equation} Once multiplied by the dipole amplitude $r$, $\Delta(\mathrm{d}N/\mathrm{d}\sin^2{\theta})$ directly gives the relative changes in the $\mathrm{d}N/\mathrm{d}\sin^2{\theta}$ distribution with respect to isotropy. Carrying out the integrations over $\varphi$ and $\alpha^0$ yields~: \begin{equation} \label{eqn:DeltadNdsin2th-2} \Delta(\mathrm{d}N/\mathrm{d}\sin^2{\theta}) = \frac{N_{0,dipole}}{N_{0,iso}}~\sin{\ell_\mathrm{site}}\sin{\delta_d}\cos{\theta}, \end{equation} where both intensity normalisations $N_{0,iso}$ and $N_{0,dipole}$ are tuned to guarantee the same number of events observed in the covered region of the sky for each underlying angular distribution. This result is shown in the left panel of Fig.~\ref{fig:localangles}, for the latitude $\ell_{\mathrm{site}}=-35.2^\circ$ of the Pierre Auger Observatory and for different dipole directions. Within the zenithal range $[0^\circ,55^\circ]$ considered in this article, the relative changes, maximal for $\delta_d=\pm90^\circ$, amount at most to $\simeq\pm15\%$. So, even for an amplitude $r$ as large as 10\%, the relative changes in $\mathrm{d}N/\mathrm{d}\sin^2{\theta}$ would be within $\simeq\pm1.5\%$, a variation which, given the available statistics, is sufficiently small to be considered negligible.
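This bound can be checked numerically from the definition in Eqn.~\ref{eqn:DeltadNdsin2th-1}; the sketch below is ours, and it assumes that the $\varphi$- and $\alpha^0$-averaged dipole projection reduces to $\sin{\delta_d}\sin{\ell_\mathrm{site}}\cos{\theta}$, with the normalisations tuned to equal event counts as in the text~:
\begin{verbatim}
import numpy as np

ELL_SITE = np.radians(-35.2)        # latitude of the Pierre Auger Observatory
THETA_MAX = np.radians(55.0)
R = 0.01                            # small dipole amplitude

def delta_dn_dsin2theta(delta_d_deg, n=2000):
    """Relative change of dN/dsin^2(theta) per unit dipole amplitude,
    evaluated numerically with matched total numbers of events."""
    theta = np.linspace(0.0, THETA_MAX, n)
    proj = np.sin(np.radians(delta_d_deg)) * np.sin(ELL_SITE) * np.cos(theta)
    w = np.sin(theta) * np.cos(theta)              # d(sin^2 theta) measure
    dn_iso = np.ones_like(theta)
    dn_dip = 1.0 + R * proj
    dn_dip = dn_dip * (dn_iso * w).sum() / (dn_dip * w).sum()  # equal totals
    return (dn_dip - dn_iso) / (R * dn_iso)

for delta_d in (90.0, -90.0):
    d = delta_dn_dsin2theta(delta_d)
    print(delta_d, round(d.min(), 3), round(d.max(), 3))  # within ~ +/-0.15
\end{verbatim}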
Besides, the same calculation applied to the case of a symmetric quadrupolar anisotropy shows that the variation of $\Delta(\mathrm{d}N/\mathrm{d}\sin^2{\theta})$ is less than $\simeq 0.1\%$, and is thus negligible. Consequently, the distribution $\mathrm{d}N/\mathrm{d}\sin^2{\theta}$ can be considered at first order as \emph{insensitive} to large scale anisotropies, so that any significant deviation from a uniform distribution provides an empirical measurement of the zenithal dependence of the detection efficiency. It is worth noting that the azimuthal distribution averaged over time is, on the other hand, sensitive to large scale anisotropies. Repeating the same calculation and integrating now over $\theta$ (in this example between 0 and 60$^\circ$) and $\alpha^0$ yields the $\Delta(\mathrm{d}N/\mathrm{d}\varphi)$ relative changes~: \begin{equation} \label{eqn:DeltadNdphi} \Delta(\mathrm{d}N/\mathrm{d}\varphi) = \frac{N_{0,dipole}}{N_{0,iso}}~\frac{\sin{\delta_d}\cos{\ell_\mathrm{site}}}{24}~\bigg(7\tan{\ell_\mathrm{site}}+3\sqrt{3}\sin{\varphi}\bigg). \end{equation} This function is shown in the right panel of Fig.~\ref{fig:localangles}, for $\delta_d=90^\circ$ (dashed line) and $\delta_d=-90^\circ$ (dotted line). The amplitude of the dipole wave is now $\simeq 0.5$. As well, the influence of a quadrupole on $\Delta(\mathrm{d}N/\mathrm{d}\varphi)$ is illustrated by the dashed-dotted line (an oblate symmetric quadrupole in this example). Since, at the Earth latitude of the Pierre Auger Observatory, any genuine large scale pattern which depends on the declination translates into azimuthal modulations of the event rate \emph{similar} to the ones induced by experimental effects, it is mandatory to model accurately the dependence on azimuth of the detection efficiency in order to disentangle local from celestial effects. \section*{Appendix B~: Modulation of the detection efficiency induced by a tilted array} \label{appendixB} To estimate the modulation of the detection efficiency induced by a tilted array, we consider here that in the absence of tilt, the corresponding detection efficiency function $\epsilon_{\mathrm{notilt}}$ depends \textit{only} on the energy and the zenith angle and can be parameterised to a good approximation as~: \begin{equation} \epsilon_{\mathrm{notilt}}(E,\theta)=\frac{E^3}{E^3+E_{0.5}^3(\theta)}. \end{equation} $E_{0.5}(\theta)$ is the zenith-dependent energy at which $\epsilon_{\mathrm{notilt}}(E,\theta)=0.5$. In the case of a tilted array, this parameter depends also on the azimuth angle, which is then the source of the azimuthal modulation of the detection efficiency. To understand this, it is useful to consider, for any given shower with parameters $(E,\theta,\varphi)$, the circle in the shower plane corresponding to the region in which a signal $S$ larger than some specified threshold value $S_0$ is expected. Let $r_0(\zeta)$ denote the radius of this circle, $\zeta$ being the tilt angle of the SD array. The detection efficiency, and hence also the parameter $E_{0.5}$, is ultimately a function of the average number of detectors contained in the projection of this circle onto the ground, given by~: \begin{equation} \left<n_{\mathrm{det}}\right>(S>S_0)\propto\frac{r_0^2}{h^2 |\mathbf{n_\perp} \cdot \mathbf{n} |}, \end{equation} where $h=1.5~$km is the nominal separation between surface detectors.
The radius $r_0(\zeta)$ obtained with the tilted array and leading to the same value of $\left<n_\mathrm{det}\right>$ can be related to $r_0(\zeta=0)$ through~: \begin{equation} r_0^2(\zeta)=r_0^2(\zeta=0)\frac{| \mathbf{n_\perp}\cdot \mathbf{n} |}{\cos{\theta}}. \end{equation} Hence, we can obtain the relation between the energies $E_{0.5}$ with tilt $(E_{0.5}^{\mathrm{tilt}})$ and without tilt $(E_{0.5})$ by comparing the cosmic ray energies required to get the value $S_0$ at radius $r_0(\zeta)$ and at radius $r_0(\zeta=0)$. Approximating the lateral distribution function of the signal near the radius $r_0$ as a power law $S(r)\propto Er^{-3}$, we obtain the following relation~: \begin{equation} E_{0.5}^{\mathrm{tilt}}(\theta,\varphi)=E_{0.5}(\theta)\bigg(\frac{r_0(\zeta)}{r_0(\zeta=0)}\bigg)^3\simeq E_{0.5}(\theta)[1+\zeta\tan{\theta}\cos{(\varphi-\varphi_0)}]^3. \end{equation} Then, subtracting $\epsilon_{\mathrm{notilt}}$ from $\epsilon_{\mathrm{tilt}}$ leads to Eqn.~\ref{eqn:tilt2}. \section*{Appendix C~: Determination of upper limits on dipole amplitudes} \label{appendixC} The procedure to determine upper limits on amplitudes was described by Linsley in the case of the first harmonic analysis in right ascension~\citep{Linsley1975}. We adapt this procedure here to the case of the dipolar reconstruction adopted in section~\ref{dipole}. The data set is supposed to have been drawn at random from an underlying dipolar distribution characterised by $\mathbf{d}$, whose value is unknown. In the limit of a large number of events, the joint p.d.f. $p_{D_X,D_Y,D_Z}(\overline{d}_x,\overline{d}_y,\overline{d}_z)$ can be factorised in terms of three Gaussian distributions $N(\overline{d}_i-d_i,\sigma_i)$~: \begin{equation} \label{eqn:pdfdipole} p_{D_X,D_Y,D_Z}(\overline{d}_x,\overline{d}_y,\overline{d}_z;d_x,d_y,d_z) = N(\overline{d}_x-d_x,\sigma) N(\overline{d}_y-d_y,\sigma) N(\overline{d}_z-d_z,\sigma_z). \end{equation} The joint p.d.f. $p_{R,\Delta,A}(\overline{r},\overline{\delta},\overline{\alpha})$ expressing the dipole components in spherical coordinates is then obtained by performing the Jacobian transformation~: \begin{eqnarray} \label{eqn:pdfdipole2} p_{R,\Delta,A}(\overline{r},\overline{\delta},\overline{\alpha};d,\delta_d,\alpha_d) &=& \bigg| \frac{\partial(\overline{d}_x,\overline{d}_y,\overline{d}_z)}{\partial(\overline{r},\overline{\delta},\overline{\alpha})}\bigg|p_{D_X,D_Y,D_Z}(\overline{d}_x(\overline{r},\overline{\delta},\overline{\alpha}),\overline{d}_y(\overline{r},\overline{\delta},\overline{\alpha}),\overline{d}_z(\overline{r},\overline{\delta},\overline{\alpha}))\nonumber \\ &=&\frac{\overline{r}^2\cos{\overline{\delta}}}{(2\pi)^{3/2}\sigma^2\sigma_z}\exp{\bigg[-\frac{(\overline{r}\sin{\overline{\delta}}-d\sin{\delta_d})^2}{2\sigma_z^2}\bigg]}\nonumber\\ &~~~~~~\times&\exp{\bigg[-\frac{(\overline{r}\cos{\overline{\delta}}\cos{\overline{\alpha}}-d\cos{\delta_d}\cos{\alpha_d})^2}{2\sigma^2}\bigg]}\nonumber \\ &~~~~~~\times &\exp{\bigg[-\frac{(\overline{r}\cos{\overline{\delta}}\sin{\overline{\alpha}}-d\cos{\delta_d}\sin{\alpha_d})^2}{2\sigma^2}\bigg]}. \end{eqnarray} Each analysed data set having been selected at random from an ensemble in which all possible values of $\mathbf{d}$ are equally represented, the various $d$, $\delta_d$ and $\alpha_d$ combinations have relative probability $p_{R,\Delta,A}(\overline{r},\overline{\delta},\overline{\alpha};d,\delta_d,\alpha_d)/p_{R,\Delta,A}(\overline{r},\overline{\delta},\overline{\alpha};d=0)$.
This allows us to define the joint p.d.f. $\tilde{p}_{R,\Delta,A}$ by requiring this ratio to be normalised to unity~: \begin{eqnarray} \label{eqn:pdful1} \tilde{p}_{R,\Delta,A}(\overline{r},\overline{\delta},\overline{\alpha};d,\delta_d,\alpha_d) &=& K(\overline{r},\overline{\delta})~\exp{\bigg[\frac{\overline{r}d\cos{\overline{\delta}}\cos{\delta_d}\cos{(\overline{\alpha}-\alpha_d)}}{\sigma^2}\bigg]} \nonumber \\ &~~~~~~\times&\exp{\bigg[\frac{\overline{r}d\sin{\overline{\delta}}\sin{\delta_d}}{\sigma^2_z}-\frac{d^2\cos^2{\delta_d}}{2\sigma^2}-\frac{d^2\sin^2{\delta_d}}{2\sigma_z^2}\bigg]}, \end{eqnarray} where the normalisation reads~: \begin{eqnarray} \label{eqn:pdful-norm} K^{-1}(\overline{r},\overline{\delta})&=&2\pi\int~\mathrm{d}d~\mathrm{d}\delta_d~I_0\bigg(\frac{\overline{r}d\cos{\overline{\delta}}\cos{\delta_d}}{\sigma^2}\bigg)\nonumber \\ &~~~~~~\times&\exp{\bigg[-\frac{d^2\cos^2{\delta_d}}{2\sigma^2}-\frac{d^2\sin^2{\delta_d}}{2\sigma_z^2}+\frac{\overline{r}d\sin{\overline{\delta}}\sin{\delta_d}}{\sigma^2_z}\bigg]}. \end{eqnarray} $I_0$ is here the modified Bessel function of the first kind of order 0; it arises from the integration over $\alpha_d$. Integration of $\tilde{p}_{R,\Delta,A}$ over $\delta_d$ and $\alpha_d$ yields the p.d.f. $\tilde{p}_R$, from which upper limits on $d$ can be obtained within a confidence level $C.L.$ by inverting the relation~: \begin{eqnarray} \label{eqn:pdful2} \int_{\overline{r}_{data}}^1 \mathrm{d}\overline{r}~\tilde{p}_{R}(\overline{r},\overline{\delta};d^{UL}) =C.L. \end{eqnarray} Due to the non-uniform directional exposure in declination, the resulting upper limits actually depend on the declination through the dependence of $\tilde{p}_R$ on $\overline{\delta}$. In practice, this dependence is small, which is why we presented in section~\ref{discussion} upper limits \textit{averaged} over the declination. \section*{Appendix D~: Determination of upper limits on quadrupole amplitudes} \label{appendixD} To determine upper limits on the quadrupole amplitudes, we rely on Monte-Carlo simulations. For each possible amplitude $\lambda_+$ ($\beta$), we estimate the p.d.f. $p_{\Lambda_+}(\overline{\lambda}_+;\lambda_+)$ ($p_{B}(\overline{\beta};\beta)$) with a given number of events $N$ and a given exposure $\tilde{\omega}$. The amplitude $\lambda_+^{UL}$ such that $\int_{\overline{\lambda}_{+,data}}^\infty \mathrm{d}\overline{\lambda}_+~p_{\Lambda_+}(\overline{\lambda}_+;\lambda_+^{UL}) =C.L.$ is a relevant upper limit (and correspondingly for $\beta^{UL}$). Unlike the previous procedure used to derive upper limits on the dipole amplitudes, this procedure can lead to upper limits tighter than the upper bounds for isotropy $\overline{\lambda}_{+,99}$ when the measured values $\overline{\lambda}_{+,data}$ are smaller than the expected average for isotropy. To cope with this undesired behaviour, the upper limits presented in section~\ref{discussion} are defined as $\mathrm{max}(\overline{\lambda}_{+,99},\lambda_+^{UL})$.
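This Monte-Carlo construction can be illustrated schematically as follows; in this sketch of ours, the isotropic-sky simulation and the amplitude estimator are replaced by a toy Gaussian stand-in for the full reconstruction chain, and all numbers are placeholders~:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_lambda_plus(lam_true, n_events, n_trials=4000):
    """Placeholder for the full chain: simulate skies with quadrupole
    amplitude lam_true, reconstruct, and return estimated amplitudes.
    A toy noisy estimator stands in for the reconstruction."""
    noise = 1.5 / np.sqrt(n_events / 1e4)          # toy resolution [%]
    return np.abs(lam_true + noise * rng.standard_normal(n_trials))

def upper_limit(lam_data, n_events, cl=0.99):
    """Smallest trial amplitude whose estimator distribution exceeds the
    measured value with probability >= cl (the prescription above)."""
    for lam in np.linspace(0.0, 20.0, 401):
        if np.mean(simulate_lambda_plus(lam, n_events) >= lam_data) >= cl:
            return lam
    return np.inf

lam_data, n_events = 4.0, 8364                     # toy measured amplitude [%]
lam_99 = np.quantile(simulate_lambda_plus(0.0, n_events), 0.99)  # isotropy
lam_ul = upper_limit(lam_data, n_events)
print(max(lam_99, lam_ul))    # reported limit: max(lambda_{+,99}, lambda_+^UL)
\end{verbatim}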
{ "timestamp": "2012-12-05T02:01:09", "yymm": "1210", "arxiv_id": "1210.3736", "language": "en", "url": "https://arxiv.org/abs/1210.3736" }
\section*{Abstract} We demonstrate by mathematical analysis and systematic computer simulations that redistribution can lead to sustainable growth in a society. In accordance with economic models of risky human capital, we model the dynamics of human capital as a multiplicative stochastic process which, in the long run, leads to the destruction of individual human capital. When agents are linked by fully-redistributive taxation, the situation can turn into individual growth in the long run. We consider that a government collects a proportion of income and reduces it by a fraction as costs for administration (efficiency losses). The remaining public good is equally redistributed to all agents. Sustainable growth is induced by redistribution despite the losses from the random growth process and despite the administrative costs. Growth results from a portfolio effect. The findings are verified for three different tax schemes: a proportional tax, a tax taking proportionally more from the rich, and one taking proportionally more from the poor. We discuss which of these tax schemes performs best with respect to maximizing growth under a fixed rate of administrative costs, and with respect to maximizing the governmental income. This leads us to some general conclusions about governmental decisions, the relation to public good games with free-riding, and the function of taxation in a risk-taking society. \section*{Introduction} This paper shows how redistribution of income spurs growth of human capital in a society purely because of a portfolio effect. Our model captures the concept of ``risky human capital'' from the recent economic literature (see \cite{Grochulski.Piskorski2010Riskyhumancapital}), where human capital is described by a multiplicative stochastic process which, in the long run, leads to the destruction of individual human capital. We model the random growth or decline of human capital as proportional to the individual endowment of income. We couple agents by fully-redistributive taxation \cite{Meltzer.Richard1981RationalTheoryof} (meaning that the collected taxes are equally redistributed), which is associated with efficiency losses \cite{Persson.Tabellini1994IsInequalityHarmful}. Redistribution re-balances the gains and losses of individuals and works as a portfolio effect which injects growth into the individually lossy stochastic processes. The economic literature does not explicitly point out the portfolio effect through redistribution in the relationship between inequality and growth of human capital. So far, in the politico-economic literature, three basic reasons are discussed for why redistribution is beneficial for society. The first branch of the literature stresses the insurance aspect of redistribution (see \cite{Mirrlees1971ExplorationinTheory}). Mirrlees highlights that, from a welfare maximizing point of view, redistribution should be at a level at which the poor do not suffer and both the poor and the rich have an incentive to improve their situation \cite{Mirrlees1971ExplorationinTheory}.\footnote{Mirrlees motivates the insurance aspect with fairness considerations. Therefore, this literature is also partly linked to the literature about social preferences \cite{Fehr.Schmidt1999TheoryOfFairness, Bolton.Ockenfels2000ERCTheoryof}. In a series of experiments, it is shown that subjects have a preference for avoiding high degrees of inequality even if they have to forgo payoff.}
More recently, the literature on socio-political unrest \cite{Barro2000InequalityandGrowth,Forbes2000ReassessmentofRelationship} points out that redistribution guarantees social stability and reduces the effort the society has to make when inequality is high. In the second branch, redistribution reduces the incentive for the poor to take excessively high risks \cite{Aghion.Bolton1997TheoryofTrickle-Down}. Redistribution increases the endowment of the poor. The poor reduce their demand for loans and invest more efficiently, which means that they take less risk. The efficiency of the economy is improved, and growth will be higher. Redistribution thus spurs growth because of improved incentives in society. The third branch describes the transmission channel between inequality and growth via the median-voter approach \cite{Persson.Tabellini1994IsInequalityHarmful}. The utility-maximizing calculus of the median voter determines the level of redistribution. The median voter proposes the median level of redistribution. This level is unbeatable in pairwise majority decisions. If the inequality is high, the median voter enforces a high level of redistribution. Redistribution is assumed to induce efficiency losses, as in our model. High levels of inequality are associated with high efficiency losses and therefore low growth rates. For instance, in \cite{Perotti1996Growthincomedistribution} it is shown that high degrees of inequality are not associated with high levels of social spending. This motivated the modification and extension of this approach by the consideration of institutions \cite{Acemoglu.Johnson.ea2005InstitutionsasFundamental} and elites \cite{Glaeser.Scheinkman.ea2003injusticeofinequality}. In contrast to the above-mentioned work on the relationship between inequality and growth, we argue that neither socio-political nor incentive considerations have to be taken into account to show that redistribution spurs growth. Our model excludes incentive incompatibilities, voting approaches, and normative concepts like insurance or fairness considerations, in order to point out the effectiveness of the pure portfolio effect through redistribution. As we will see, the effect of combining lossy proportional stochastic growth and linear lossy redistribution is non-trivial, as both processes lead to the destruction of human capital when they run independently, while their combination can enable survival. This ``magic'' effect of inducing growth from two lossy processes is based on the portfolio effect known from investment science \cite{Luenberger1998InvestmentScience}: gains and losses are re-balanced by redistributing income into human capital ``assets'', which ensures optimal growth of the portfolio. The effect has been discussed before under different names, such as repeated Kelly games, the Kelly optimal portfolio, and re-balancing of asset allocations, and seems to be rediscovered from time to time in new contexts \cite{Kelly1956newinterpretationof, Bouchaud.Mezard2000Wealthcondensationin, Malcai.Biham.ea2002Theoreticalanalysisand, Marsili.Maslov.ea1998Dynamicaloptimizationtheory, Medo2009Breakdownofmean-field, Medo.Pismak.ea2008Diversificationandlimited, Slanina1999possibilityofoptimal, Slanina2004Inelasticallyscatteringparticles, Yaari.Solomon2010Cooperationevolutionin}. Its applicability in various fields is laid out in \cite{Yaari.Stauffer.ea2009IntermittencyandLocalization}. We will call the phenomenon the \emph{portfolio re-balancing effect} in the following.
This interprets each subject with its human capital endowment as an asset, and the society as the portfolio. Asset values change stochastically, and taxation and redistribution of income re-balance the values of the different assets. The econophysics literature has studied extensively the statistical mechanics of money based on its conservation \cite{Dragulescu.Yakovenko2000Statisticalmechanicsof,Chakraborti.Chakrabarti2000Statisticalmechanicsof,Patriarca.Chakraborti.ea2006Influenceofsaving} and within a kinetic exchange model \cite{Chatterjee.Chakrabarti2007Kineticexchangemodels} (see also the paper and the literature review in \cite{Yakovenko.Rosser2009ColloquiumStatisticalmechanics}). The topic of taxation and redistribution has been introduced in these models \cite{Guala2009Taxesinsimple}. With respect to this stream of literature, we focus on the growth-enhancing effect of redistribution, neglecting the conservation of money. The purpose of this paper is to discuss the role of the portfolio re-balancing effect of fully-redistributive taxation for the growth of society's human capital. By systematic computer simulation, we quantify the conditions on the stochastic growth process and on the taxation schemes that prevent the destruction of the accumulated human capital of the society. Furthermore, we quantify the optimal tax rates which maximize the growth of society's human capital, and we determine how a selfish government optimizes its income. \section*{Methods} We describe our method by the following steps: \begin{compactitem} \item Specification of the outlined economic model on the endogenous variables human capital $h$ and income $y$, in terms of a straightforward transition of human capital to income, redistribution of income between agents, and independent random multiplicative production of human capital proportional to income. Exogenous variables are: the tax rate $a$, the rate of administrative cost $b$, the taxation scheme, the distribution of random growth factors, and the number of agents $N$. \item Theoretical demonstration of which conditions are most interesting for studying the effect of portfolio re-balancing on growth. \item Theoretical demonstration of the dynamics at border cases. \item Presentation of some example runs of the process. \item Description of the setup of a systematic simulation and of the extraction of the average growth factor from the simulation data. \end{compactitem} \subsection*{Specification of the economic model} Let us consider a society of $N$ agents, each of which is characterized at time $t$ by its \emph{human capital} $h_{i}(t)$, which is a positive scalar value. Consequently, $h(t) \in \mathbb{R}^N$ is the \emph{human capital vector} of all $h_{i}(t)$. Let us denote the \emph{total human capital} at time $t$ by $H(t)=\sum_{i=1}^N h_i(t)$. The production process is as follows: human capital is used to produce \emph{income} $y_i$, income is taxed and fully redistributed, and income is directly invested in human capital. This is formalized by the equations: \begin{align} y_i(t) &= \mathrm{prod}_i(h_i(t)) \label{eq:system1}, \\ y_i(t+1) &= \mathrm{redis}_i(y(t)) \label{eq:system2}, \\ h_i(t+1) &= \mathrm{HCprod}_i(y_i(t+1)) \label{eq:system3}. \end{align} Notice that $\mathrm{prod}_i$ and $\mathrm{HCprod}_i$ are functions operating on individual values, while $\mathrm{redis}_i$ is scalar-valued but takes the whole income vector as input. These scalar-valued functions serve as component functions for the vector-valued functions $\mathrm{prod}$, $\mathrm{HCprod}$, and $\mathrm{redis}$, which are self-maps on $\mathbb{R}^N$.
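To fix ideas, the following minimal sketch of ours (anticipating the composed update and the proportional tax scheme specified below; all parameter values are illustrative) implements one time step of these three maps and iterates it~:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def prod(h):                       # income equals human capital (wage = 1)
    return h

def redis(y, a=0.2, b=0.1):        # proportional tax scheme as an example
    tax = a * y                    # everyone pays the fraction a
    pg = (1.0 - b) * tax.sum()     # public good after administrative cost b
    return y - tax + pg / len(y)

def hcprod(y, mu=-0.1, sigma=0.8): # individually lossy multiplicative shocks
    eta = rng.lognormal(mean=mu, sigma=sigma, size=len(y))
    return eta * y

h = np.ones(1000)                  # N agents, initial human capital 1
for t in range(100):
    h = hcprod(redis(prod(h)))
print(h.sum())                     # total human capital H(t)
\end{verbatim}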
Given an initial human capital vector $h(0)$, the evolution of human capital in the system of individuals is described by the equation \begin{equation} h(t+1) = \mathrm{HCprod}(\mathrm{redis}(\mathrm{prod}(h(t)))). \label{eq:system_h} \end{equation} Consequently, the evolution of income is given by \begin{equation} y(t+1) = \mathrm{redis}(\mathrm{prod}(\mathrm{HCprod}(y(t)))). \label{eq:system_y} \end{equation} We call $Y(t) = \sum_{i=1}^N y_i(t)$ the \emph{total income}. Growth (positive or negative) of this aggregated variable is analyzed in the following. The evolution of the total income is of course tightly linked to the evolution of the total human capital $H(t)$, because of Eq.~(\ref{eq:system3}). \paragraph{Production of income and human capital} We assume that production is directly transferred into income \begin{equation} y_i = \mathrm{prod}_i(h_i) = h_i, \label{eq:production} \end{equation} which means that the wage is equal to one. The production of human capital is assumed to be based on an individual multiplicative stochastic event \begin{equation} h_i = \mathrm{HCprod}_i(y_i) = \eta_i(t) y_i \label{eq:HCproduction} \end{equation} where $\eta_i(t)$ is a realization of the positive random variable $\eta$. If agent $i$ at time $t$ has income $y_i(t)$, then after production its human capital is $\eta_i(t)y_i(t)$. When $\eta_i(t)<1$ human capital declines, otherwise it grows. Thus, we assume that human capital for the next round of production is built from the current income times a random factor. This can be interpreted as a generation model where each generation lives for one period and invests its income in the human capital of its successor. In an innovative economy the value of old human capital quickly dissolves, and new human capital must constantly be produced by investing income. Thus, the human capital production function is also reasonable on shorter time scales than generations. Without the subscript $i$, $\mathrm{prod}$ and $\eta(t)$ are meant as vectors. Thus, the growth dynamics of the human capital vector $h$ reads $\mathrm{HCprod}(y) = \eta(t) y$. The product $\eta(t) y$ is meant component-wise, $\eta(t)$ being an equally sized vector of independent realizations of $\eta$. Our production function has human capital as its only input factor and is therefore simplified. The simplification is motivated by the idea of human-capital-intensive production in modern economies, which is in line with endogenous growth theory (see \cite{LucasJr.1988mechanicsofeconomic}). For reasons of comparability, we present in the following some feasible extensions of our model. A standard Cobb-Douglas production function also includes the inputs of \emph{capital} $k$ and \emph{labour} $l$ (accompanied by their exponents $0 < \alpha,\beta < 1$ with $\alpha+\beta=1$). Moreover, the transfer of income into human capital is accompanied by \emph{consumption} $c$ and \emph{saving} rates $s$. This would change Equation (\ref{eq:production}) to \begin{equation} y_i = \mathrm{prod}_i(h_i) = h_i k^\alpha l^\beta \nonumber \end{equation} and Equation (\ref{eq:HCproduction}) to \begin{equation} h_i = \mathrm{HCprod}_i(y_i) = \eta_i(t)(1-s)(1-c)y_i. \nonumber \end{equation} Equation (\ref{eq:system_y}) would read \begin{equation} y(t+1) = \mathrm{redis}(\;\eta_i(t)(1-s)(1-c)y_i k^\alpha l^\beta \;).
\end{equation} When $s,c,k,l,\alpha,\beta$ are all constants, the term $\eta_i(t)(1-s)(1-c) k^\alpha l^\beta$ is a draw from a random variable with the same distribution as $\eta$ but scaled by the constant factor $(1-s)(1-c) k^\alpha l^\beta$. Modifications of these factors thus have the same effect as a multiplicative rescaling of the random variable $\eta$. By holding $k$ constant, we implicitly assume that savings are equal to the depreciation of capital. Our model is thus based on a standard economic growth model and is simplified to focus on the \emph{portfolio re-balancing effect}. \paragraph{Redistribution} We quantify the redistribution function with three different taxation schemes: proportional taxation, a progressive scheme where agents have to pay everything above a dynamically chosen maximal tax-free income, and a regressive scheme where agents have to pay either a dynamically chosen fee -- like a per capita premium -- or all their income if they cannot afford the full fee. (Such agents still get back some income because of the redistribution.) We specify all three schemes by the same two independent parameters: the \emph{tax rate} $a$, which determines the fraction withdrawn from the total income of all agents, and the \emph{rate of administrative cost} $b$, which determines the fraction withdrawn by the government from the raised taxes before redistribution to the agents. Let us denote the amount of taxes collected from agent $i$ by $\mathrm{tax}_i(y)$. Notice that it depends on the whole income vector. This enables us to define dynamically adjusted taxation schemes which take the distribution of income into account. Naturally, $\mathrm{tax}_i(y)$ should be confined between zero (no tax) and $y_i$ (tax equals income). The tax revenue is collected by a government at a central place, which involves administrative costs (efficiency losses, cf.~\cite{Persson.Tabellini1994IsInequalityHarmful}), which are assumed to be proportional to the amount of taxes raised, i.e.~$b\in[0, 1]$ denotes the \emph{rate of administrative cost}. Consequently, the \emph{public good} for redistribution is the raised taxes minus the cost: \begin{equation} \mathrm{pg}(y) = (1-b)\sum_{i=1}^N\mathrm{tax}_i(y), \label{eq:pg} \end{equation} while the \emph{government income} is \begin{equation} \mathrm{gi}(y) = b\sum_{i=1}^N\mathrm{tax}_i(y). \label{eq:govincome} \end{equation} Because of the fully-redistributive tax, $\mathrm{pg}(y)$ is divided in equal shares among all agents, i.e.~every agent's income increases by the amount $\mathrm{pg}(y)/N$. Other mechanisms of redistribution in a related model are analyzed in \cite{Guala2009Taxesinsimple}. The redistribution function (net income) for agent $i$ is thus \begin{equation} \mathrm{redis}_i(y) = y_i - \mathrm{tax}_i(y) + \frac{\mathrm{pg}(y)}{N}. \label{eq:redis} \end{equation} Depending on its position within society, an agent can be a net tax payer or a net transfer recipient. We specify the tax function $\mathrm{tax}_i(y)$ with respect to the tax rate $a\in[0, 1]$ such that \begin{equation} \sum_i \mathrm{tax}_i(y) = a \sum_i y_i = aY. \label{eq:atax} \end{equation} We distinguish three schemes of taxation, which differ in from whom the fraction $a$ of the total income is raised: (i) proportionally from everyone, (ii) more than proportionally from the poor (regressive), or (iii) more than proportionally from the rich (progressive). In progressive and regressive taxation schemes, tax rates differ in given income brackets.
We consider extremal cases to accentuate the differences. \begin{compactenum}[(i)] \item \emph{Proportional tax} is the classical taxation scheme where each agent has to pay a fraction $a$ of its individual income \begin{equation} \mathrm{tax}_i(y) = ay_i. \label{tax:prop} \end{equation} \item \emph{Regressive tax} charges a fixed fee $c_\mathrm{fee}(y)>0$ from everyone if possible; otherwise all income is charged: \begin{equation} \mathrm{tax}_i(y) = \min\{y_i, c_\mathrm{fee}(y)\}. \label{eq:dynfee} \end{equation} In the latter case, the agent still receives its share of the public good, so it will not be without income after redistribution. The fee has to be such that (\ref{eq:atax}) is fulfilled for the current income vector. (It is easy to see that such a fee exists and is unique.) \item \emph{Progressive tax} charges all income exceeding a threshold of tax-free income \mbox{$c_{\max}(y)>0$}. Every agent with income below $c_{\max}(y)$ pays no taxes: \begin{equation}\mathrm{tax}_i(y) = \max\{y_i-c_{\max}(y), 0\}.\label{eq:dynmax} \end{equation} The threshold has to be determined such that (\ref{eq:atax}) is met. (Again, such a threshold exists and is unique.) Notice that the income of an agent who has to pay taxes is larger than $c_{\max}(y)$ after redistribution because of its share of the public good. \end{compactenum} The three schemes are comparable in that the total amount of raised taxes is always a fraction $a$ of the total income, regardless of the shape of the income distribution. Thus, they all deliver a public good of $\mathrm{pg}(y) = (1-b)a Y$. Note that in the related model of \cite{Guala2009Taxesinsimple} taxes are raised when agents trade, not in every time period from everyone. The regressive and the progressive tax assign different tax rates in the two tax brackets $[0,c(y)]$ and $(c(y),\infty)$. The regressive tax scheme (where $c=c_\mathrm{fee}$) taxes 100\% in the lower tax bracket and 0\% in the upper. The progressive tax scheme (where $c=c_{\max}$) taxes 100\% in the upper tax bracket and 0\% in the lower. Figure \ref{fig:1} demonstrates an example with six agents of different income. It shows what is charged from each agent and how income looks after redistribution under each of the three schemes. Note that the dynamic fee $c_{\mathrm{fee}}(y)$ and the dynamic maximum $c_{\max}(y)$ are implicitly defined by the condition of Eq.~(\ref{eq:atax}). In realistic taxation systems it might seem impractical to determine the fee and the maximum only after the current income of all agents is known; in reality, one would adjust the thresholds only for the next period. We omitted that modification to prevent delay effects. It would presumably cause only minor changes.
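The implicit determination of $c_\mathrm{fee}(y)$ and $c_{\max}(y)$ from Eq.~(\ref{eq:atax}) is numerically unproblematic because the raised revenue is monotone in the threshold; the uniqueness argument above guarantees that a simple bisection converges. The following sketch (again Python under our own naming, not the \texttt{matlab} code of the supporting material) implements the three tax functions used by \texttt{redis} above.
\begin{verbatim}
import numpy as np

def tax_proportional(y, a):
    """Proportional tax, Eq. (tax:prop)."""
    return a * y

def tax_regressive(y, a):
    """Regressive tax, Eq. (eq:dynfee): fee capped at each income;
    the fee is tuned by bisection until the total tax equals a*Y."""
    target = a * y.sum()
    lo, hi = 0.0, y.max()  # revenue rises monotonically with the fee
    for _ in range(100):
        fee = 0.5 * (lo + hi)
        lo, hi = (fee, hi) if np.minimum(y, fee).sum() < target else (lo, fee)
    return np.minimum(y, fee)

def tax_progressive(y, a):
    """Progressive tax, Eq. (eq:dynmax): all income above c_max is
    charged; c_max is tuned by bisection until the total tax is a*Y."""
    target = a * y.sum()
    lo, hi = 0.0, y.max()  # revenue falls monotonically with c_max
    for _ in range(100):
        c = 0.5 * (lo + hi)
        lo, hi = (lo, c) if np.maximum(y - c, 0.0).sum() < target else (c, hi)
    return np.maximum(y - c, 0.0)
\end{verbatim}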
\subsection*{On the distribution of random human capital growth factors} The analysis of human capital production without redistribution ($a=0$ in Eq.~(\ref{eq:system_y})) does not involve interaction. Therefore, it is enough to focus on a single agent, and Eq.~(\ref{eq:system_y}) collapses to $y(t+1) = \eta(t)y(t)$. Let $\eta$ have finite variance. With $y(0)=1$ it holds that \begin{equation} y(t+1) = \eta(t)y(t) = \eta(t)\eta(t-1)\cdots\eta(1)\eta(0) = \prod_{s=0}^t\eta(s). \label{eq:prod} \end{equation} This resembles the human capital life cycle model \cite{Grochulski.Piskorski2010Riskyhumancapital}, where human capital $h_t$ at time $t$ is determined as the result of a stochastic process $h_t = \sigma_{t-1}\cdots\sigma_1\theta i$, where $i$ is the initial investment in human capital, $\theta$ is a stochastic shock to the human capital investment and the $\sigma_s$ are stochastic human capital depreciation shocks. Eq.~(\ref{eq:prod}) is equivalent to \begin{equation} \label{eq:1b} \log y(t+1) = \log\eta(t) + \log y(t) = \sum_{s=0}^t\log\eta(s). \end{equation} The central limit theorem applied to (\ref{eq:1b}) implies that the distribution of the random variable $\log y(t)$ gets closer and closer to a normal distribution $\mathcal{N}(\mu_t, \sigma_t)$ with mean and variance parameters \begin{equation} \mu_t = t\mu_{\log\eta}\;;\quad \sigma_t^2 = t\sigma_{\log\eta}^2 \label{eq:mean-t} \end{equation} with $\mu_{\log\eta} = \mean{\log\eta}$ and $\sigma_{\log\eta}^2 = \mean{(\log\eta)^2} - \mean{\log\eta}^2$. Consequently, for $t\to\infty$ the distribution of $y(t)$ approaches the log-normal distribution $\log\text{-}\mathcal{N}(t\mu_{\log\eta}, \sqrt{t}\sigma_{\log\eta})$. Based on that fact, we choose the log-normal distribution with its two characterizing parameters as the distribution of $\eta$ in our simulation setup. The expected value of income might grow, while every individual trajectory of $y(t)$ dies out. The condition for this seemingly contradictory situation is \begin{equation} \mu_{\log\eta}<0<\log\mu_\eta, \label{eq:effectcondition} \end{equation} which is equivalent to $\mean{\eta}_\mathrm{geo}=\exp\mu_{\log\eta}<1<\mu_\eta = \mean{\eta}$, i.e.~the arithmetic mean of $\eta$ being larger than one while its geometric mean is less than one. Elementary explanations of this effect are given in \cite{Luenberger1998InvestmentScience, Yaari.Solomon2010Cooperationevolutionin, Redner1990RandomMultiplicativeProcesses}. It can be shown that over a sufficiently long time span any single trajectory grows only with the geometric mean $\mean{\eta}_\mathrm{geo}$. For log-normal distributions of $\eta$ the two inequalities in (\ref{eq:effectcondition}) are equivalent to $\mu_{\log\eta}$ being negative while $\sigma^2_{\log\eta}$ is sufficiently large, $\sigma^2_{\log\eta} > -2\mu_{\log\eta}$. This situation forms the basis for the effect of growth which is induced by coupling lossy multiplicative stochastic growth with lossy redistribution in a finite population. Redistribution helps the system to realize a growth rate somewhere in between the geometric and the arithmetic mean of $\eta$. To reduce the number of independent parameters in the simulation, we choose log-normal distributions where $\mean{\eta}\cdot\mean{\eta}_\mathrm{geo} = 1$ holds. Since $\mean{\eta}_\mathrm{geo}=e^\mu$ and $\mean{\eta}=e^{\mu+\sigma^2/2}$, this condition is equivalent to $\mu = -\sigma^2/4$. Under this condition, the two parameters $\mu$ and $\sigma$ of the log-normal distribution (mean and standard deviation of the underlying normal distribution) are represented by one free parameter, which allows for different skewness but keeps the balance between the expected value $\mean{\eta}$ and the realized growth rate $\mean{\eta}_\mathrm{geo}$. This condition ensures that both destruction and growth are theoretically possible for distributions of this class.
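The one-parameter family can be checked directly: given a pair $(\mean{\eta}, \mean{\eta}_\mathrm{geo})$ with $\mean{\eta}\cdot\mean{\eta}_\mathrm{geo}=1$, one recovers $\mu=\log\mean{\eta}_\mathrm{geo}$ and $\sigma^2 = 2(\log\mean{\eta}-\mu)$. The following short Python check (purely illustrative, using the intermediate-risk values of the example in the next subsection) also verifies empirically that the arithmetic mean exceeds one while the geometric mean stays below one.
\begin{verbatim}
import numpy as np

mean_eta, geo_eta = 1.5, 2.0 / 3.0                # intermediate-risk case
mu = np.log(geo_eta)                              # = -0.405...
sigma = np.sqrt(2.0 * (np.log(mean_eta) - mu))    # =  1.274...
assert abs(mean_eta * geo_eta - 1.0) < 1e-12      # balancing condition
assert abs(mu + sigma**2 / 4.0) < 1e-12           # equivalent: mu = -sigma^2/4

eta = np.random.default_rng(0).lognormal(mu, sigma, size=1_000_000)
print(eta.mean())                  # ~1.5  : arithmetic mean above one
print(np.exp(np.log(eta).mean()))  # ~0.667: geometric mean below one
print((eta < 1.0).mean())          # ~0.625: probability that income declines
\end{verbatim}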
\subsection*{Theoretical analysis of border cases and an example} We are interested in the average realized growth rate and its dependence on the independent parameters. For some border cases we can theoretically derive that average growth is exponential, $Y(t+1) = gY(t)$, and also quantify the magnitude of the growth rate $g$. \textbf{Case 1:} Only redistribution, no stochastic production of human capital ($\mean{\eta}_\mathrm{geo}=1=\mean{\eta}$). For any taxation scheme and any $N$ it holds that $g=1-ab$. Thus, there is never growth of human capital. \textbf{Case 2:} Only stochastic production of human capital but no redistribution ($a=0$). For any taxation scheme, any $b$ and any $N$ it holds that $g=\mean{\eta}_\mathrm{geo}$. \textbf{Case 3:} 100\% tax and an infinite number of agents ($a=1$, $N=\infty$). All trajectories act as one, and for any taxation scheme $g=(1-ab)\mean{\eta}$. Growth with the arithmetic mean can be realized because in an infinite society no rare but large growth event is ``missing''. Based on the last two cases we argue that for intermediate $a$ and finite $N$ the average growth rate lies somewhere in between, but we do not have an analytic expression for it. Figure~\ref{fig:2} gives an example, where we fix $\mean{\eta}=3/2$ and $\mean{\eta}_\mathrm{geo}=2/3$. This implies the log-normal parameters $\mu=-0.405$ and $\sigma=1.274$, which we use as the distribution of intermediate risk in the simulations. Under this distribution income declines with a probability of 62.5\%, it at least doubles with probability 19.4\%, and it becomes more than ten times larger with probability 1.7\%. Tax and admin rates are set at the intermediate levels $a=0.3$, $b=0.2$. Trajectories are computed according to \eqref{eq:system_y} for a society of $N=10$ agents, each starting with human capital equal to one. Trajectories are shown for all three tax schemes. Each trajectory is computed with the same random draws from the random variables $\eta_i(t)$ for each $i$ and $t$, to allow for a direct comparison of the different taxation schemes. In Figure~\ref{fig:2} progressive taxation (iii) leads to the largest growth of total income, proportional taxation (i) also results in a growing society, while regressive taxation (ii) fluctuates between growth and decline with no clear trend visible. Without any redistribution there would be decline (with a growth factor $\mean{\eta}_\mathrm{geo}=0.667$). Pure redistribution without production of human capital would also imply decline (with a growth factor of $1-ab=0.94$). The performance ranking (progressive better than proportional better than regressive) holds even if we vary the essential parameters admin rate $b$, tax rate $a$ and population size $N$, as shown in the lower part of Figure~\ref{fig:2}. Regarding their impact on the dynamical behavior, we see that for a higher admin rate ($b=0.6$) growth might turn into decline. A lower tax rate ($a=0.01$) can also imply destruction of income and human capital for all three taxation schemes. In this case the portfolio re-balancing effect of redistribution is not well exploited, and this is not compensated by the savings from the lower redistribution loss ($1-ab=0.998$). Finally, in a larger society with $N=100$ all schemes achieve larger growth factors. Progressive taxation may create disincentives for the decision to invest income into the production of human capital, especially in our extremal case, where income above the maximal tax-free income is taxed at 100\%. To address this, let us assume that every agent can decide what fraction of its income to invest, while the remaining income keeps its value. Taxation and redistribution remain obligatory.
If the agent is to maximize the value of its income after human capital production, production and redistribution, the rational decision is to invest everything, as long as the expected value of the produced human capital is larger than the invested capital. This holds also for progressive taxation, as any increase of the expected value before redistribution increases the expected value of the public good and thus the agent's own expected value after redistribution. In our model, progressive taxation does not remove the rationality of investing everything into the production of human capital, but it makes the difference to not investing smaller. We do not touch the issue of free-riding (which would be to avoid paying taxes) in the above argument. If free-riding were possible, not paying taxes while still receiving a share of the public good would of course be rational under every taxation scheme. \subsection*{Simulation setup, independent and dependent variables} For each of the three taxation schemes we aim to get an overview of the dependence of the average growth factor on the tax rate and the rate of administrative cost. Furthermore, we want to control for the effect of the society's size and of log-normal distributions which are more or less risky (in the sense of higher or lower right-skewness). Consequently, we set up a systematic computer simulation to estimate the \emph{average growth factors} $g$. Table \ref{tab:simsetup} lists the independent variables and their ranges in the simulation setup. Growth factors are estimated on the basis of stochastic trajectories of the total income $Y(t)$ over 500 time steps by regressing $\log(g)$ in $\log Y(t) = \log N + t\cdot\log(g)$. The intercept in the regression is naturally fixed at $\log N$. From 100 estimated growth rates of such stochastic trajectories we compute the average growth factor as the geometric mean. The geometric mean is used because it better fits the central tendency of the distribution: a growth rate is naturally a parameter larger than zero and thus typically log-normally distributed. Changing to the arithmetic mean would shift the results a bit, as the arithmetic mean is always larger than the geometric mean. To get an overview of the parameter space we cover the $(b,a)$-plane with a fine grid, while the number of different society sizes and distributions was kept low to make the computations finish within less than two weeks on a laptop. See the \texttt{matlab} code in the supporting material which produces the simulation data (see function \texttt{dataMSPgrowthrates}). As our focus is on maximizing growth factors, optimal tax rates, and government income, let us define further variables which depend on the average growth factor as a function of $a$ and $b$, $g(b,a)$: For a fixed admin rate $b$ we define the \emph{maximal growth factor} $g_{\max}(b) = \max_a g(b,a)$ and the \emph{optimal tax rate} $a_\mathrm{opt}(b) = \mathrm{argmax}_a g(b, a)$. (Note that $\mathrm{argmax}$ is not necessarily unique, but the empirical results support the conjecture that there is only one local maximum, and consequently $a_\mathrm{opt}(b)$ is unique.) The \emph{rate of government income} at the current total income is $\mathrm{gov}(b,a) = b\,a\,Y(t+1)/Y(t) = b\,a\,g(b,a)$. The \emph{rate of government income under the optimal tax rate} as a function of $b$ is defined by $\mathrm{gov}_\mathrm{opttax}(b) = b\,a_\mathrm{opt}(b)\,g_{\max}(b)$.
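The estimation of $g(b,a)$ itself can be condensed into a short script. The following Python fragment is an illustrative re-implementation (not the \texttt{matlab} function \texttt{dataMSPgrowthrates} of the supporting material) and reuses \texttt{redis} and the tax functions sketched above; the defaults correspond to the setup described in the text.
\begin{verbatim}
import numpy as np

def growth_factor(tax_fn, a, b, N=10, T=500, runs=100,
                  mu=-0.405, sigma=1.274, seed=1):
    """Estimate g(b,a): iterate Eq. (eq:system_y), fit the slope of
    log Y(t) = log N + t*log(g) with the intercept fixed at log N,
    and return the geometric mean of g over the runs."""
    rng = np.random.default_rng(seed)
    t = np.arange(1.0, T + 1.0)
    log_g = []
    for _ in range(runs):
        y = np.ones(N)                 # every agent starts with income 1
        logY = np.empty(T)
        for step in range(T):
            h = rng.lognormal(mu, sigma, N) * y  # HCprod: h = eta * y
            y = redis(h, tax_fn, a, b)           # prod is the identity
            logY[step] = np.log(y.sum())
        # least-squares slope with the intercept fixed at log N:
        log_g.append(np.dot(t, logY - np.log(N)) / np.dot(t, t))
    return np.exp(np.mean(log_g))

# Example: the setting of Figure 2 under proportional taxation.
print(growth_factor(tax_proportional, a=0.3, b=0.2))
\end{verbatim}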
\section*{Results} \subsection*{Description of figures} Figures \ref{fig:1}--\ref{fig:4} are the core figures for understanding the message of our paper. Figures \ref{fig:1} and \ref{fig:2} explain and demonstrate our economic model, and Figures \ref{fig:3} and \ref{fig:4} show the main results. We visualize our simulation results exemplarily for $N=10$ and the distribution of intermediate risk $(\mean{\eta},\mean{\eta}_\mathrm{geo})=(1.5,0.667)$ in Figure \ref{fig:3}. Figure \ref{fig:3}\textbf{A} shows the average growth factor $g(b,a)$ color-coded in the $(b,a)$ parameter plane for each of the three tax schemes. In each plot, the solid line divides the ``zone of sustainable growth of income'' (yellow to red) from the ``zone of income destruction'' (yellow to blue). The dashed line shows the growth-maximizing optimal tax rate $a_\mathrm{opt}(b)$ for a given admin rate. In Figure \ref{fig:3}\textbf{B}, we show the critical lines dividing the zones of growth and destruction and the optimal tax rates in one plot to compare the three taxation schemes. (In all plots the black dotted line shows the maximally possible size of the zone of growth, where $(1-ba)\mean{\eta} = 1$. Above this line it is trivial that income cannot grow.) Figure \ref{fig:4}\textbf{A} shows the optimal tax rate $a_\mathrm{opt}(b)$, \ref{fig:4}\textbf{B} the maximal growth factor $g_{\max}(b)$, and \ref{fig:4}\textbf{C} the rate of government income at the optimal tax rate $\mathrm{gov}_\mathrm{opttax}(b)$. All are functions of the admin rate $b$, and they are shown for all three taxation schemes in our standard color-code. Figures \ref{fig:5}--\ref{fig:8} extend Figures \ref{fig:3} and \ref{fig:4} by also showing the data for $N=100$ and for a less risky and a more risky distribution of $\eta$. Figures \ref{fig:5}, \ref{fig:6}, and \ref{fig:8} are direct extensions of Figures \ref{fig:3}\textbf{A}, \ref{fig:3}\textbf{B}, and \ref{fig:4}, while Figure \ref{fig:7} regroups lines into different subplots, focusing on the comparison of $N=10$ and $N=100$. \subsection*{On growth and destruction of income and human capital} Figure \ref{fig:3} and the related Figures \ref{fig:5}, \ref{fig:6}, and \ref{fig:7} summarize which combinations of tax rate and admin rate allow society's income and human capital to grow sustainably. It is instructive to focus on situations where either the tax rate $a$ or the admin rate $b$ is fixed: \textbf{Constant tax rate $a$:} Raising the admin rate $b$ always lowers the growth factor $g(a,b)$ and at some point turns the growth regime into the destruction regime. \textbf{Constant admin rate $b$:} The average growth factor $g(a,b)$ is not monotonic in $a$. For very high and very low tax rates the growth factor is lowest and can lead to income destruction; only intermediate tax rates prevent this. High tax rates tend to lower the growth factor because a larger fraction of the total income is reduced by the admin rate (see the definition of the public good (\ref{eq:pg})). On the other hand, very low tax rates lower the growth factor because the portfolio re-balancing effect is not used optimally, so part of the human capital ``gambles itself away''. This characterization holds for all taxation schemes, both population sizes, and the riskier as well as the less risky distribution of $\eta$.
It holds throughout that the zone of sustainable growth under regressive taxation is contained in the zone of growth under proportional taxation, which is in turn contained in the zone of growth under progressive taxation, when $N$ and the distribution of $\eta$ are kept constant. When the taxation scheme and the distribution of $\eta$ are kept constant, the zone of growth of the smaller society ($N=10$) is always contained in the zone of growth of the larger society ($N=100$). These findings suggest that the portfolio re-balancing effect is used more effectively when the society is large and when proportionally more is taken from the rich (progressive tax) than from the poor (regressive tax). Simple inclusions of zones of growth do not hold for comparisons of the low-risk, intermediate, and risky distributions of $\eta$. We refrained from comparing them in detail, because our balancing condition $\mean{\eta}\cdot\mean{\eta}_\mathrm{geo}=1$ is somewhat arbitrary. In the following we answer five questions about optimal choices of tax and admin rates from different perspectives. \subsection*{On growth maximizing tax rates and taxation schemes} \textbf{(a)} \emph{What is the optimal tax rate $a_{\mathrm{opt}}(b)$ and how does it differ between the three taxation schemes?} The optimal tax rate is 100\% at admin rate $b=0$ for any tax system, but it declines fast with rising admin rate, as can be seen in Figure~\ref{fig:4}\textbf{A}. Within the range $0<b<0.35$ the progressive tax scheme reaches the lowest optimal tax rate, regressive taxation the highest. For a realistic admin rate of about 20\% the optimal tax rate in the progressive and the proportional tax scheme is less than 30\%, but it is larger than 50\% under the regressive tax scheme. The ranking is inverted for large admin rates. From Figure \ref{fig:8} it can be seen that these rankings and the switch of the ranking also hold for larger societies, being more drastic for low admin rates and less drastic for high admin rates. For riskier societies the optimal tax rates are larger in general. \textbf{(b)} \emph{Which taxation scheme reaches the largest average growth factor for a given admin rate and an optimal choice of the tax rate?} As can be seen in Fig.~\ref{fig:4}\textbf{B}, the progressive tax scheme achieves the highest maximal growth factors for all admin rates. The proportional tax scheme is always second and the regressive tax scheme ranks last. Hence, the largest growth factor is reached with the progressive tax scheme, which takes more than proportionally from the rich. \subsection*{On income maximizing governments} Let us assume that governments are forced to choose tax rates close to the optimal tax rate. A government might be forced to do so in an informed and democratic society, when the impact of the tax rate on average growth is known and when voters wish the growth rates of total income or human capital to be as large as possible. Under this assumption the rate of government income under the optimal tax rate, $\mathrm{gov}_\mathrm{opttax}$, is of interest, because we can ask what admin rate a government might choose to maximize its income. The rate of administrative cost is usually also under the control of the government, but we assume that a government is not forced to optimize it for growth, although zero administrative cost would of course be optimal. One reason is that it might be easier for a government to argue that the administrative cost cannot be lowered because of fixed contracts.
Another argument is that an alternative party which could take over government possibly has the same interest in increasing its income, so democratic competition does not work as easily here. Based on these two assumptions and the simulation results we can answer three further questions: \textbf{(c)} \emph{Which admin rate $b$ would a self-interested government choose?} Figure~\ref{fig:4}\textbf{C} shows that the rate of government income under optimal tax rates is not monotonic in $b$. In particular, for higher values of $b$ the growth of total income becomes smaller; hence even a government maximizing its income has no incentive to raise the admin rate as far as possible, because large admin rates reduce growth. The admin rate where the government income is maximal is marked by ``$\ast$'' in all three panels. It varies with the taxation scheme: the lowest admin rates for the regressive tax, the highest admin rates for the progressive tax. This ranking only changes for the riskier distribution of $\eta$ (see Figure \ref{fig:8}), where the proportional tax has the lowest admin rates. \textbf{(d)} \emph{Which taxation scheme would a self-interested government choose?} Looking at the absolute values of $\mathrm{gov}_\mathrm{opttax}(b)$, a self-interested government would choose the regressive tax because it gives the maximum income of all schemes, even at moderate admin rates. Thus, the largest government income is reached with the scheme that takes more than proportionally from the poor. This is caused mainly by the fact that the optimal tax rate is much larger under the regressive tax scheme; consequently, the share raised via the admin rate is also larger than under the other schemes. This result thus crucially depends on the assumption that a government is forced to implement the optimal tax rate, while the taxation scheme is given. \textbf{(e)} \emph{Which taxation scheme delivers the largest average growth factors under a self-interested government?} Looking at the ``$\ast$'' symbols in Figure \ref{fig:4}\textbf{B}, which come from the income-maximizing admin rates with respect to $\mathrm{gov}_\mathrm{opttax}(b)$ in Figure \ref{fig:4}\textbf{C}, we find that the proportional tax delivers the highest average growth factors. This holds although regressive taxation can deliver the highest income for the government, as seen in \textbf{(d)}; the reason is that the regressive tax has much lower growth rates than the other schemes in general. It also holds although progressive taxation always delivers higher growth rates than the other schemes for a fixed admin rate, as seen in \textbf{(b)}; the reason here is that the progressive tax tempts the government to raise the admin rate in order to optimize its income. \section*{Discussion} In our view, redistribution enhances the dynamic potential for human capital production of an economy. The enhancement can be explained by the effectiveness of the portfolio effect, and it seems possible for proportional, regressive and progressive taxation schemes alike. The answers to questions \textbf{(b)}, \textbf{(d)}, and \textbf{(e)} deliver three different choices among the three taxation schemes. \textbf{(b)} suggests that the progressive taxation scheme should be chosen because it is always superior when the rate of administrative cost is fixed. Consequently, taking proportionally more from the rich is socially optimal when we can assume that the rate of administrative cost is an externally fixed parameter.
\textbf{(d)} instead suggests that a selfish government would decide for the regressive taxation scheme, because under this scheme the growth-optimizing tax rates are much higher, which leads to higher government income. Finally, \textbf{(e)} shows that of the three taxation schemes proportional taxation achieves the highest average growth factor under an income-maximizing government which can freely adjust the rate of administrative cost. The regressive scheme turns out to be too inefficient in activating the portfolio re-balancing effect to enhance growth, while progressive taxation turns out to give incentives to raise the admin rate to such an extent that the resulting loss outweighs its efficiency in using the portfolio re-balancing effect. By focusing on the portfolio re-balancing effect we have proposed a new approach to thinking about the link between inequality, redistributive taxation and growth. With our simulations, we have shown that taxation and redistribution can be a crucial ingredient to ensure the survival and development of a society relying on risky multiplicative stochastic growth of human capital. Our approach gives another explanation of why redistribution can be beneficial for growth. Together with the other approaches mentioned in the introduction, it shows that the interplay between inequality, redistribution and growth depends on preferences (fairness and insurance considerations), socio-political unrest (social stability), incentives and disincentives (effort), the calculus of the median voter (voting system), and on the \emph{portfolio re-balancing effect}. If paying taxes were voluntary, payment could be seen as an act of cooperation, which seems irrational but ensures the sustainable growth of human capital. As in the related public goods game, such a society would be vulnerable to free-riders. In the public goods game the free-rider problem is often solved by social norms or by governmental force to pay taxes. In contrast to the classical public goods game, where the public good is multiplied by an efficiency factor larger than one, our model does not have such an amplification. On the contrary, a fraction of the collected public good is subtracted for administrative costs, which is equivalent to an efficiency factor less than one. Consequently, the emergence of cooperation, i.e.~the sharing of income in order to sustain long-term growth, is even more subtle in our redistribution model, because even a normative call ``Pay your part and it will be immediately increased by an efficiency factor!'' does not work as easily. If we assume that different societies compete, evolution would promote those societies with higher overall growth factors of their total income. Thus, there should be an evolutionary adaptation towards optimal tax systems without assuming other forces. Such an idea is closely related to group selection as a mechanism to promote the evolution of cooperative behavior \cite{Nowak2006FiveRulesEvolution}. Perhaps the portfolio re-balancing effect is also a reason for the evolutionary success of religions which prescribe something like the tithe. Tithing 10\% of income to charity in a religious society ensures better growth of the society's human capital, which might be an evolutionary advantage over other societies because of the portfolio re-balancing effect.
The re-balancing effect might also be of relevance in other areas, such as biodiversity \cite{Schindler.Hilborn.ea2010Populationdiversityand} or knowledge sharing, to enhance innovativeness in social and economic systems. Can we draw conclusions for large societies of several million agents, as in the real world? We speculate that sizes larger than $N=1000$ imply even lower optimal tax rates, because the portfolio re-balancing effect then works even with very low tax rates, and administrative costs can be saved by lowering them. On the other hand, we speculate that the riskiness of individual stochastic growth also rises in larger societies, which consequently implies higher optimal tax rates (see Figure \ref{fig:8}). It is much less likely to acquire the best-fitting human capital in a large society because there are more competitors; on the other hand, having acquired the right human capital might lead to large benefits because there are many customers who could profit from it. In conclusion, we speculate that our results probably remain valid for societies of real-world size. \section*{Acknowledgments} JL thanks Wolfgang Breymann for pointing to literature and for helpful discussions.
{ "timestamp": "2012-10-16T02:02:00", "yymm": "1210", "arxiv_id": "1210.3716", "language": "en", "url": "https://arxiv.org/abs/1210.3716" }
\section*{Acknowledgment} We would like to thank F. Arroja, S. Dodelson, J-O. Gong, L. Senatore, D. Wands and J. White for useful discussions and comments. This work is supported in part by the JSPS Grant-in-Aid for Scientific Research (A) No.~21244033, by the YITP Overseas Exchange Program for Young Researchers, and by the MEXT Grant-in-Aid for the global COE program at Kyoto University, ``The Next Generation of Physics, Spun from Universality.'' M. H. N. would like to thank Iran's Ministry of Science and Technology for the financial support during his visit to YITP.
{ "timestamp": "2013-03-19T02:15:19", "yymm": "1210", "arxiv_id": "1210.3692", "language": "en", "url": "https://arxiv.org/abs/1210.3692" }
\section{INTRODUCTION} The study of hadronic transitions between charmonium states has been an active field of both experimental and theoretical research. The decays $\psip\to\eta\jp$ and $\pi^0\jp$ were first observed thirty years ago, and improved measurements of the corresponding branching fractions were performed by the BESII \cite{bes2} and CLEO \cite{cleo1} collaborations. These decays are important probes of $\psip$ decay mechanisms that are characterized by the emission of a soft hadron. The QCD multipole-expansion (QCDME) technique was developed for such processes in heavy-quarkonium systems. Within this framework, the measured branching fraction for $\psip\to\eta\jp$ can be used to predict the $\eta$ transition rate between $\Upsilon$ states~\cite{kuangyp}. The branching-fraction ratio, $R={\mathcal{B}(\psip\to\pi^0\jp)\over \mathcal{B}(\psip\to\eta\jp)}$, with $\mathcal{B}$ denoting the individual branching fraction, was suggested as a reliable way to measure the light-quark mass ratio $m_u/m_d$~\cite{getqarkmass}. Based on QCDME and the axial anomaly, the ratio is calculated to be $R=0.016$ with the conventionally accepted values of the quark masses, $m_s=150\mmev$, $m_d=7.5\mmev$ and $m_u=4.2\mmev$~\cite{miller}. Previously published measurements of this ratio give a significantly larger value of $R=0.040\pm0.004$ \cite{pdg}. Recently, using chiral-perturbation theory, the J\"ulich group investigated charmed-meson loops in these decays as a possible explanation for this discrepancy~\cite{Feng-kunGuo:2009}. Under the assumption that the charmed-meson loop mechanism saturates the $\psip\to\pi^{0}(\eta)J/\psi$ decay widths, they obtained a value $R=0.11\pm0.06$, which indicates that the charmed-meson loop mechanism can play an important role in explaining the data. With the parameters of the charmed-meson loop fixed using $\mathcal{B}(\psip\to\eta\jp)$ as input, the hadron-loop contribution to the isospin-violating decay $\psip\to\pi^0\jp$ can be evaluated~\cite{Guo:2012tj,Guo:2010ak}. Measurements of these branching fractions can provide experimental evidence for hadron-loop contributions in charmonium decays and impose more stringent constraints on charmed-meson loop contributions. They will also help clarify the influence of long-distance effects in other charmonium decays, {\it e.g.} $\psi(3770)\to\pi^0(\eta)\jp$~\cite{Zhang:2009kr,Guo:2012tj},~$\psip\to\gamma\eta_c,$ and $~\jp\to\gamma\eta_c$~\cite{Li:2011ssa}. This paper presents the most precise measurements of the ratio $R$ and of the related branching fractions for $\psip\to\pi^0J/\psi$ and $\eta\jp$. \section{BESIII EXPERIMENT AND DATA SET} The BESIII experiment at the BEPCII \cite{NIM1} electron-positron collider is an upgrade of BESII/BEPC \cite{besii}. The BESIII detector is designed to study hadron spectroscopy and $\tau$-charm physics \cite{besphysics}. The cylindrical BESIII spectrometer is composed of a helium-gas-based drift chamber (MDC), a Time-Of-Flight (TOF) system, a CsI(Tl) Electromagnetic Calorimeter (EMC) and an RPC-based muon identifier, with a super-conducting magnet that provides a 1.0 T magnetic field. The nominal detector acceptance is 93\% of $4\pi$. The expected charged-particle momentum resolution and photon energy resolution are 0.5\% and 2.5\% at 1 GeV, respectively. The photon energy resolution at BESIII is much better than that of BESII and comparable to that achieved by CLEO~\cite{cleo} and the Crystal Ball~\cite{crysball}.
An accurate measurement of photon energies enables the BESIII experiment to study physics involving photons, $\pi^0$ and $\eta$ mesons with high precision. We use a data sample of (106.41$\pm$0.86)$\times$10$^6$ $\psip$ decays \cite{npsip}, corresponding to an integrated luminosity of 156.4 pb$^{-1}$. In addition, a 43 pb$^{-1}$ data sample collected at 3.65 GeV is used for QED background studies. To optimize the event selection criteria and to estimate the background, a {\sc geant4}-based simulation \cite{boost} is used that includes the geometries and material of the BESIII detector components. An inclusive $\psip$ decay Monte Carlo (MC) sample is generated to study backgrounds. The generation of $\psip$ resonance production is simulated with the MC event generator {\sc kkmc} \cite{kkmc}, while $\psip$ decays are generated with {\sc besevtgen} \cite{evtgen} for known decay modes with branching fractions set to the world average values \cite{pdg}, and with {\sc lundcharm} \cite{lundcharm} for the remaining unknown decays. The analysis is performed in the framework of the BESIII offline software system \cite{boss}, which handles the detector calibration, event reconstruction and data storage. \section{EVENT SELECTION} The selection criteria described below are similar to those used in previous BES analyses \cite{chicj2vv,chicj2gv}. Candidate $\pi^0$ and $\eta$ mesons are reconstructed from two photons $\gg$, and the $\jp$ is reconstructed from lepton pairs $\ll$ ($l=e$ or $\mu$). Photon candidates are reconstructed by clustering EMC crystal energies. The energy deposited in nearby TOF counters is included to improve the reconstruction efficiency and the energy resolution. Showers identified as photon candidates must satisfy fiducial and shower-quality requirements. A minimum energy of 25~MeV is required for barrel showers ($|\cos\theta| < 0.8$) and 50 MeV for endcap showers ($0.86 < | \cos\theta| < 0.92$). Showers in the angular range between the barrel and endcap are poorly reconstructed and are excluded from the analysis. To exclude showers generated by charged particles, a photon is required to be separated by at least $10^\circ$ from the nearest charged track. EMC-cluster timing requirements are used to suppress electronic noise and energy deposits unrelated to the event. The number of photons, $N_{\gamma}$, is required to satisfy $N_{\gamma}\ge 2$. Charged tracks are reconstructed from hit patterns in the MDC. The number of charged tracks is required to be two with zero net charge. For each track, the polar angle $\theta$ must satisfy $|\cos\theta| < 0.93$, and the track is required to originate from within $\pm 10$~cm of the interaction point in the beam direction and within $\pm 1$~cm of the beam line in the plane perpendicular to the beam. The $J/\psi \rightarrow l^{+}l^{-}$ candidates are reconstructed from pairs of oppositely charged tracks. Tracks are identified as muons (electrons) if their $E/p$ ratios satisfy $0.08~c< E/p < 0.22~c$ ($E/p > 0.8~c$), where $E$ and $p$ are the deposited energy in the EMC and the momentum of the charged track, respectively. To reduce the combinatorial background from uncorrelated $\gamma\gamma$ combinations and to improve the mass resolution, a four-constraint kinematic fit (4C-fit) is applied with the hypothesis $\psip\to\gamma\gamma l^+l^-$, constraining the total four-momentum to that of the initial $e^{+}e^{-}$ beams. For events with more than two photon candidates, the combination with the smallest $\chi^{2}$ is retained.
The invariant-mass distribution for lepton pairs ($M_{ll}$) is shown in Fig.~\ref{mll_pi0}, where the $\jp$ signal is clearly seen with a high signal-to-background ratio. For the further analysis, events are kept for which the reconstructed $\jp$ mass falls within the window $M_{ll}\in(3.05,~3.15)\mgev$, a mass window that is significantly larger than the mass resolution of about 8 MeV$/c^2$. Figure~\ref{fig:scatter_plot} shows a Dalitz plot of the invariant-mass squared $M^{2}_{\gamma_h\jp}$ for the reconstructed $\jp$ and the energetic photon versus the two-photon invariant-mass squared $M^{2}_{\gamma_h\gamma}$, where $\gamma_h$ denotes the photon with the higher energy $E_{\gamma_h}>E_\gamma$. Bands of the $\pi^0,~\eta$, and $\chicj~(J=0,1,2)$ are clearly visible. To suppress the dominant source of background, which is from $\chi_{cJ}$ decays, the mass of the $\gamma_h\jp$ system is required to satisfy $M_{\gamma_h J/\psi}\notin (3.50,~3.57)\mgev$ and $M_{\gamma_h\jp}<3.5\mgev$ for $\psip\to\pi^0\jp$ and $\eta\jp$, respectively. The 4C-fit $\chi^{2}$ is required to be less than $(100,~60,~70,~50)$ for the final states $(\pi^0\ee,~\pi^0\uu,~\eta\ee,~\eta\uu)$, respectively, where the values are determined by optimizing the statistical significance $S/\sqrt{S+B}$, with $S(B)$ the number of signal (background) events. The background event levels are determined from the $\psip$ inclusive MC sample. \begin{figure}[htbp] \includegraphics[width=12cm,angle=0]{jpsi.eps} \caption{The invariant-mass distributions for (a) electron-positron and (b) di-muon pairs in the selected $\gg\ll$ events in the data. \label{mll_pi0}} \end{figure} \begin{figure}[h] \includegraphics[width=12cm,height=8cm]{dalitz_plot.eps} \caption{\label{fig:scatter_plot}Dalitz plot of $M^2_{\gamma_h\gamma}$ (vertical) versus $M^2_{\gamma_h \jp}$ (horizontal) for data, where $\gamma_h$ denotes the energetic photon. The horizontal bands around M$^{2}_{\gamma_{h}\gamma}$=0.02~(0.30) ~(GeV/c$^{2}$)$^{2}$ are due to $\psi' \rightarrow \pi^{0}(\eta) J/\psi$ transitions. The vertical bands around $M^{2}_{\gamma_{h}J/\psi}$=11.65 (12.30, 12.70) (GeV/$c^{2}$)$^{2}$ are due to the transitions $\psi' \rightarrow \gamma \chi_{c0(c1,c2)}$; the arrows denote the requirements to remove backgrounds from the $\chi_{c1,c2}$ states as described in the text.} \end{figure} \section{Data analysis} \label{sec:analysis} Background events from $\psip$ decays are studied using the inclusive MC sample. The background is dominated by $\psip\to\gamma\chicj$, $\chicj\to\gamma\jp\to\gamma\ll$ decays. In addition, there are a few events from direct $\psip\to\gg\jp$, $\jp\to\ll$ decays \cite{ggjsi}. The shape of the $M_{\gamma\gamma}$ distribution of direct $\gamma\gamma\jp$ decays is smooth within both the $\pi^0$ and the $\eta$ mass regions. The non-resonant background from $\psip\to\gg\ll$ is studied using $\jp$-mass sidebands in the data. For $\psip\to\eta\jp$, there is an additional background from $\psip\to\pi^0\pi^0\jp$, which has a smooth shape within the $\eta$-mass region. The background contribution from QED processes is studied using the continuum data taken at $\sqrt s=3.65$ GeV, and it is found to be negligible. The sum of all the MC-determined backgrounds in the $M_{\gamma\gamma}$ distribution is shown in Fig.~\ref{mggfit} and is found to be in reasonable agreement with the data.
To determine the detection efficiency, the angular distributions are properly modeled in the event generator, which accounts for the polarization in the $\psip$ and $\jp$ decays. These decays are dominated by transverse polarization: longitudinal polarization of the $\psip$ is negligible due to its production from $\ee$ annihilation, and, since the $\jp$ is produced via $\psip\to\pi^0(\eta)\jp$ transitions, its longitudinal polarization vanishes because of parity conservation. Thus, the polar-angle distributions take the form $dN/ d\cos\theta\propto (1+\cos^2\theta)$, where $\theta$ is the polar angle of the $\jp$ in the $\psip$ rest frame for $\psip\to\eta(\pi^0)\jp$ decays, or the angle between the lepton momentum in the $\jp$ rest frame and the $\jp$ momentum in the $\psip$ rest frame for $\jp\to\ll$ decays. As an example, Fig.~\ref{angdis} shows the angular distributions for the $\jp$ and the $\mu^-$ in $\psip\to\eta\jp\to\eta\mu^+\mu^-$ decays, where the angular distributions obtained from MC simulations (histograms) are observed to be in excellent agreement with the data (points with error bars). Similarly, we have verified that the angular distributions in the $\psip\to\pi^0\jp$ decay are well described by the MC simulations. The detection efficiencies are determined using these MC event samples, and the values are listed in Table~\ref{fit_results}. The efficiencies for the $\gg\ee$ final states are lower than those for the $\gg\uu$ final states because the $e^+/e^-$ tracks suffer from stronger bremsstrahlung effects. The signal yields are obtained from fits to the observed two-photon invariant-mass M$_{\gamma\gamma}$ distributions. The observed line shapes are described with modified $\pi^0/\eta$ line shapes plus backgrounds. The $\pi^0$ and $\eta$ line shapes are taken from MC simulation including the detector resolution; the $\pi^0$ and $\eta$ are described with relativistic Breit-Wigners in the event generation, and their masses and widths are fixed at their nominal values \cite{pdg}. To account for possible differences in mass resolution between data and MC simulation, the $\pi^0$/$\eta$ line shapes [LS$(\pi^0/\eta)$] are modified by convolution with a Gaussian function G($M_{\gamma\gamma}-\delta m,\sigma$). This mass-resolution smearing technique was used in a previous publication~\cite{chicj2vv}. The probability distribution function (PDF) for the signal is taken as LS$(\pi^0/\eta)\otimes$G($M_{\gamma\gamma}-\delta m,\sigma$), where $\delta m$ and $\sigma$ correct the $\pi^0/\eta$ mass and mass resolution, respectively. The PDF for the dominant background contribution is obtained from MC simulation, and the residual background contribution is modeled as a first-order and a second-order polynomial function for the $\pi^0$ and $\eta$ channels, respectively. The polynomial coefficients are free parameters with values determined from the data. The fit results are shown in Fig.~\ref{mggfit}. For $\psip\to\pi^0 J/\psi$, the fit yields 1823$\pm$49 events for the $J/\psi\to\ee$ sample with a goodness of fit of $\chi^2/ndf=0.85$, and 2268$\pm$55 events for $J/\psi\to\uu$ with $\chi^2/ndf=0.86$, where $ndf$ denotes the number of degrees of freedom in the fit. For $\psip\to\eta J/\psi$, the fit yields 29598$\pm$202 events for $J/\psi\to\ee$ with $\chi^2/ndf=1.33$ and 38572$\pm$280 events for $J/\psi\to\uu$ with $\chi^2/ndf=0.96$. The resulting values of $\delta m$ and $\sigma$ satisfy $|\delta m|<1 \mmev$ and $\sigma<3\mmev$ for all modes. The signal yields are listed in Table \ref{fit_results}.
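The construction of the signal PDF, LS$\,\otimes\,$G($M_{\gamma\gamma}-\delta m,\sigma$), amounts to a discrete convolution on the binned mass axis. The following Python sketch illustrates the smearing with a hypothetical stand-in template; it is not the collaboration's fitting code, and all numbers are for illustration only.
\begin{verbatim}
import numpy as np

bw = 0.001                                   # bin width in GeV/c^2
m = np.arange(0.10, 0.17, bw)                # M_gg axis around the pi0
template = np.exp(-0.5 * ((m - 0.135) / 0.004) ** 2)  # stand-in line shape
template /= template.sum()

dm, sigma = 0.0005, 0.002                    # mass shift and extra smearing
km = np.arange(-0.02, 0.02 + bw, bw)
kernel = np.exp(-0.5 * ((km - dm) / sigma) ** 2)
kernel /= kernel.sum()

signal_pdf = np.convolve(template, kernel, mode="same")
print(m[signal_pdf.argmax()])                # peak shifted by about dm
\end{verbatim}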
The branching fractions are calculated from the expression \begin{equation} \mathcal{B}(\psip\to X J/\psi) ={N^{sig} \over N_{\psip}\varepsilon \mathcal{B}(X\to\gamma\gamma) \mathcal{B}(J/\psi\to l^+l^-)}, \label{br_equation} \end{equation} where $X$ represents $\pi^0$ or $\eta$, and $N^{sig}$ and $N_{\psip}$ are the signal yield and the number of $\psip$ events, $N_{\psip}=106.41\times 10^6$. $\mathcal{B}(X\to\gamma\gamma)$ and $\mathcal{B}(J/\psi\to l^+l^-)$ denote the branching fractions of $\pi^0/\eta\to\gamma\gamma$ and $J/\psi\to\ee(\uu)$ \cite{pdg}. The variable $\varepsilon$ represents the MC-determined detection efficiency. The measured branching fractions for each final state are listed in Table \ref{ratio results}. To validate the event selection criteria and the fitting procedure, we perform a study using a MC sample of 106$\times$10$^6$ inclusive $\psip$ events, with the known branching fractions as input. The same analysis procedure as used for the real data is applied to this MC sample, and the obtained branching fractions for the $\psip\to\pi^0(\eta)\jp$ channels are found to be consistent with the input values within the statistical accuracy. \begin{figure}[hbtp] \includegraphics[width=1\textwidth]{fit_all.eps} \caption{(Color online) $M_{\gg}$ distributions and fit results. (a)~$\psip\to\pi^0 J/\psi,\jp\to \ee$, (b)~$\psip\to\pi^0 J/\psi,\jp\to \uu$,~(c)~$\psip\to\eta J/\psi,\jp\to \ee$, (d)~$\psip\to\eta J/\psi,\jp\to \uu$, where the points with error bars are data, the solid (red) curves are the total fit results, and the dashed curves are the fitted background shapes. The hatched histograms represent the dominant background events obtained from MC simulation and $\jp$ mass sidebands. \label{mggfit}} \end{figure} \begin{figure}[hbtp] \vspace{1cm} \includegraphics[width=0.8\textwidth]{angle.eps} \caption{(Color online) Angular distributions for (a) the $\jp$ in the $\psip$ rest frame and (b) the $\mu^-$ in the $\jp$ helicity system, where $\theta(\mu^-,\jp)$ is the angle between the $\mu^-$ momentum in the $\jp$ rest frame and the $\jp$ momentum in the $\psip$ rest frame. Points with error bars are data, and histograms are MC simulations as described in the text.} \label{angdis} \end{figure} \subsection{SYSTEMATIC ERRORS} The main sources of systematic uncertainty originate from the number of $\psip$ events, the trigger efficiency, the lepton tracking, the photon reconstruction, the kinematic fitting, the uncertainties of the branching fractions for $\pi^0(\eta)\to\gamma\gamma$ and $J/\psi\to\ee(\uu)$, and the selection and fitting procedures. The uncertainty on the number of $\psip$ events is 0.81$\%$, as reported in Ref.~\cite{npsip}. The trigger efficiency uncertainty is 0.15$\%$, as reported in Ref.~\cite{trigger}. The photon reconstruction uncertainty is determined to be 1\% per photon in Ref.~\cite{chicj2gv}, and, thus, the two-photon final state is assigned an uncertainty of 2$\%$. The tracking efficiency of the hard leptons is studied using a control sample of $\psip\to\pi^+\pi^-J/\psi$, $J/\psi\to\ee(\uu)$ decays. The tracking efficiency $\epsilon$ is calculated as $\epsilon = N_{full}/N_{all}$, where $N_{full}$ indicates the number of $\pi^{+}\pi^{-}l^{+}l^{-}$ events with all final tracks reconstructed successfully, and $N_{all}$ indicates the number of events with one or both charged lepton tracks successfully reconstructed in addition to the pion pair.
The difference in tracking efficiency between data and MC is calculated bin-by-bin over the distribution of transverse momentum versus the polar angle of the lepton tracks. By this method, tracking uncertainties are determined to be 0.14$\%~$(0.20$\%$) and 0.16$\%~$(0.19$\%$) for $\psip\to\pi^0 J/\psi$, $J/\psi\to\ee$($\uu$) and $\psip\to\eta J/\psi$, $J/\psi\to\ee$($\uu$), respectively. Some differences are observed between the data and MC $\chi^2$ distributions from the kinematic fit. These differences are mainly due to inconsistencies in the lepton track parameters between MC and data. We apply correction factors to the various $\mu^\pm~(e^\pm)$ track parameters that are obtained from control $\psip\to\pi^+\pi^-J/\psi$ data samples, where $J/\psi\to\ee(\uu)$. The correction factors are found by smearing the MC simulation output so that the pull distributions properly describe those of the experimental data. Half of the differences between the detection efficiencies, obtained using MC simulations with and without applying these correction factors, are taken as systematic errors. These are 0.15$\%~$(0.19$\%$) and 0.20$\%~$(0.28$\%$) for $\psip\to\pi^0 J/\psi$, $J/\psi\to\ee$($\uu$) and $\psip\to\eta J/\psi$, $J/\psi\to\ee$($\uu$), respectively. Requirements on the $E/p$ ratio and the invariant mass $M_{l^+l^-}$ have been applied in the event selection. Uncertainties associated with these requirements are determined using the same control sample described above. Differences in the detection efficiency between the control data sample and the MC due to the $E/p$ ratio requirement are 0.06\% and 0.05\% for $J/\psi\to\ee$ and $J/\psi\to\uu$, respectively. Uncertainties caused by the mass window selection are 0.06\% for both the $\ee$ and $\uu$ channels. An uncertainty due to the $M_{\gamma l^+l^-}$ requirement arises from a difference, $\delta(\chi_{c1,2})$, in the $\chi_{c1,2}$ mass resolution between the data and MC simulation, and is estimated by changing the MC-optimized requirement to one optimized using the data. Uncertainties caused by this $M_{\gamma l^+l^-}$ requirement are determined in this way to be 0.55\%~(0.16\%) and 0.11\%~(0.82\%) for $\psip\to\pi^0 J/\psi$, $J/\psi\to\ee(\uu)$ and $\psip\to\eta J/\psi$, $J/\psi\to\ee(\uu)$, respectively. Systematic errors due to the background shape are estimated by varying the function used to describe the nondominant backgrounds from a 1st- (2nd-) order polynomial to a 2nd- (3rd-) order polynomial for $\psip\to\pi^0(\eta)\jp$. The observed difference in the signal yields is taken as a systematic error. The uncertainty due to the choice of the fitting range is estimated by repeating the fits using a fitting range that is 80\% as wide as that used in the original fit. The difference in the signal yields is taken as a systematic error. Table~\ref{sys tot err} summarizes all the sources of systematic uncertainty. \subsection{RESULTS AND DISCUSSION} Branching fractions for the decays $\psip\to\pi^0 J/\psi$ and $\eta J/\psi$ with $\jp\to\ee,~\uu$ are calculated with Eq.~(\ref{br_equation}) using the fit results and the detection efficiencies as inputs. The branching fractions measured in the $\jp\to\ee$ and $\uu$ final states are combined using the weighted-average method described in Ref.~\cite{pdg}; common systematic uncertainties are counted only once.
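Before quoting the combined numbers, a compact numerical cross-check of Eq.~(\ref{br_equation}) and of the ratio may be useful. In the sketch below the secondary branching fractions are PDG-like values inserted by us (they should be taken from the PDG edition used in the analysis), and only statistical errors are propagated into the ratio.
\begin{verbatim}
import numpy as np

N_psip = 106.41e6
B_pi0_gg = 0.98823       # pi0 -> gamma gamma (PDG-like value)
B_jpsi_ee = 0.0594       # J/psi -> e+e-      (PDG-like value)

# Eq. (br_equation) for psi' -> pi0 J/psi, J/psi -> e+e-:
N_sig, eff = 1823, 0.2305
print(N_sig / (N_psip * eff * B_pi0_gg * B_jpsi_ee))   # ~1.27e-3

# Ratio R from the combined branching fractions (statistical errors
# only; common systematic uncertainties largely cancel in the ratio):
B_pi0, dB_pi0 = 1.26e-3, 0.02e-3
B_eta, dB_eta = 33.75e-3, 0.17e-3
R = B_pi0 / B_eta
dR = R * np.hypot(dB_pi0 / B_pi0, dB_eta / B_eta)
print(R, dR)                                 # ~3.74e-2 +- 0.06e-2
\end{verbatim}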
The combined branching fractions are $\mathcal{B}(\psip\to\pi^0\jp)=(1.26\pm0.02\pm0.03)\times 10^{-3}$ and $\mathcal{B}(\psip\to\eta\jp)=(33.75\pm0.17\pm0.86)\times10^{-3}$ (see Table \ref{ratio results}). Using the measured branching fractions, the ratio $R=\mathcal{B}(\psip\to\pi^0\jp)/\mathcal{B}(\psip\to\eta\jp)$ is calculated to be $R=(3.74\pm0.06\pm0.04)\times10^{-2}$ (see Table \ref{ratio results}). Note that systematic uncertainties that are common to both channels cancel in the ratio. Our combined result on the $R$-ratio is consistent with previous world average values and improves the precision by about a factor of five. These precise measurements of the $\psip\to\pi^0\jp$ and $\eta\jp$ branching fractions permit the study of isospin-violation mechanisms in the $\psip\to\pi^0\jp$ transition. As shown in \cite{Feng-kunGuo:2009,gfk_zq}, the axial anomaly does not adequately explain the observed isospin violation, while contributions from charmed-meson loops are a possible mechanism for additional isospin violation. Confirmation of sizeable contributions from charmed-meson loops would be an indication that non-perturbative effects play an important role in the charmonium energy region. \begin{table}[htbp] \setlength{\tabcolsep}{0.5pc} \caption{Summary of signal yields and detection efficiencies for each final state.\label{fit_results}} \begin{tabular}{lllll} \hline\hline Mode &\multicolumn{2}{c}{$\psip\to\pi^0 J/\psi$}&\multicolumn{2}{c}{$\psip\to\eta J/\psi$}\\\hline Final state &$\gamma\gamma\ee$&$\gamma\gamma\uu$&$\gamma\gamma\ee$&$\gamma\gamma\uu$\\\hline $\varepsilon$(\%)&23.05&29.11&35.41&46.28\\ $N^{sig}$ &1823$\pm$49&2268$\pm$55&29598$\pm$202&38572$\pm$280\\\hline\hline \end{tabular} \end{table} \begin{table}[htbp] \setlength{\tabcolsep}{0.5pc} \caption{Summary of the measured branching fractions ($\mathcal{B}$) and the ratio $R={\mathcal{B}(\psip\to\pi^0\jp)\over \mathcal{B}(\psip\to\eta\jp)}$, compared to the world average values.\label{ratio results}} \begin{center} \begin{tabular}{lcccc} \hline\hline $\mathcal{B}$ or $R$ & Final state & This work & Combined & PDG\cite{pdg}\\\hline $\mathcal{B}(\psip\to\pi^0\jp)$ & $\gg\ee$ & $1.27\pm0.03\pm0.03$&---&--- \\ $(\times10^{-3})$ & $\gg\uu$ & $1.25\pm0.03\pm0.03$& $1.26\pm0.02\pm0.03$&$1.30\pm0.10$ \\\hline $\mathcal{B}(\psip\to\eta\jp)$ & $\gg\ee$ &$33.77\pm0.23\pm0.93$ &---&--- \\ $(\times10^{-3})$ & $\gg\uu$ &$33.73\pm0.24\pm0.90$ & $33.75\pm0.17\pm0.86$&$32.8\pm0.7$ \\\hline $R={\mathcal{B}(\psip\to\pi^0\jp)\over \mathcal{B}(\psip\to\eta\jp)}$&$\gg\ee$&$3.76\pm0.09\pm0.06$&---&---\\ $(\times10^{-2})$ &$\gg\uu$&$3.71\pm0.09\pm0.05$&$3.74\pm0.06\pm0.04$&$3.96\pm0.42$\\\hline\hline \end{tabular} \end{center} \end{table} \begin{table}[htbp] \caption{Summary of all systematic errors (\%) considered in this analysis.
\label{sys tot err}} \begin{center} \begin{tabular}{lcccc} \hline\hline Sources&$\pi^{0} J/\psi(\ee)$&$\pi^{0} J/\psi(\uu)$&$\eta J/\psi(\ee)$&$\eta J/\psi(\uu)$\\\hline $N_{\psip}$&0.81&0.81&0.81&0.81\\ Trigger&0.15&0.15&0.15&0.15\\ Tracking&0.14&0.20&0.16&0.19\\ Photon&2.00&2.00&2.00&2.00\\ 4-C Fit&0.15&0.19&0.20&0.28\\ $B_{r}(J/\psi\to l^{+}l^{-})$&1.01&1.01&1.01&1.01\\ $B_{r}(\pi^0/\eta\to\gamma\gamma)$&0.03&0.03&0.51&0.51\\ M($l^+l^-$) &0.06&0.06&0.06&0.06\\ M($\gamma l^+l^-$)&0.55&0.16&0.11&0.82\\ E/p&0.06&0.05&0.06&0.05\\ Background shape&0.24&0.24&1.14&0.10\\ Fitting range&0.63&0.80&0.55&0.58\\ \hline Total&2.55 &2.55 &2.77 &2.66 \\\hline\hline \end{tabular} \end{center} \end{table} \newpage {\bf Acknowledgement:}\\ The BESIII collaboration thanks the staff of BEPCII and the computing center for their hard efforts. This work is supported in part by the Ministry of Science and Technology of China under Contract No. 2009CB825200; National Natural Science Foundation of China (NSFC) under Contracts Nos. 10625524, 10821063, 10825524, 10835001, 10935007, 11125525; Joint Funds of the National Natural Science Foundation of China under Contracts Nos. 11079008, 11179007; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; CAS under Contracts Nos. KJCX2-YW-N29, KJCX2-YW-N45; 100 Talents Program of CAS; Istituto Nazionale di Fisica Nucleare, Italy; Ministry of Development of Turkey under Contract No. DPT2006K-120470; U. S. Department of Energy under Contracts Nos. DE-FG02-04ER41291, DE-FG02-91ER40682, DE-FG02-94ER40823; U.S. National Science Foundation; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt; and the WCU Program of the National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.
{ "timestamp": "2012-10-16T02:02:20", "yymm": "1210", "arxiv_id": "1210.3746", "language": "en", "url": "https://arxiv.org/abs/1210.3746" }
\section{Introduction} The spontaneous breaking of chiral symmetry (SBCS) in the QCD vacuum is believed to be a consequence of the condensation of scalar quark-antiquark pairs. It manifests itself in the generation of mass in strong interactions and, more precisely, in lifting the degeneracies within chiral multiplets ($\pi$-$\sigma$, $\rho$-$a_1$, $N$-$N^*(1535)$, ...). The search for (partial) restoration of SBCS in hot and dense matter as a fundamental phenomenon is one of the core missions of the ultrarelativistic heavy-ion programs at laboratories around the world. The most promising observables in this regard are invariant-mass spectra of dileptons ($e^+e^-$ or $\mu^+\mu^-$), which open a direct window on the in-medium spectral function of the electromagnetic (EM) current, cf.~Refs.~\cite{Rapp:2009yu,Tserruya:2009zt,Specht:2010xu,Rapp:2011is,Gale:2012xq} for recent reviews. In the vacuum, and at low mass ($M\le 1$\,GeV), the EM spectral function reflects the mass distribution of the light vector mesons $\rho$, $\omega$ and $\phi$, and thus the dynamical generation of mass in QCD. Thermal radiation of low-mass dileptons is ideally suited to illuminate the changes in the vector-meson mass distributions as the (pseudo-) critical temperature for chiral restoration, $T_{\rm pc}^\chi\simeq160$~MeV, is approached and surpassed. However, robust conclusions from measurements of dilepton spectra in heavy-ion collisions require a number of rather challenging steps. First, one needs sufficiently accurate measurements of so-called ``excess'' radiation (beyond final-state decays) to extract its spectral shape. Second, such measurements need to cover a large range of collision energies to establish systematic trends, enabling the extraction of genuine features in the radiation. In particular, one needs to correlate the observed spectral shapes with the thermodynamic properties of the emission sources, most notably their temperature(s). Third, model calculations and predictions need to be tested against the data, which is critical for deducing the mechanisms underlying the observed spectral modifications. Such comparisons not only require calculations of the in-medium EM spectral functions but also good control over the space-time evolution of the fireball at each collision energy. Finally, the model calculations have to be rooted in the bigger picture of QCD thermodynamics, and specifically in the context of chiral restoration, by utilizing relations between the EM spectral functions and chiral order parameters, where the latter can be tested (or extracted) from thermal lattice QCD. It is the purpose of this proceedings article to give an update on carrying out these steps. The remainder of this article is organized into two main sections. The first one (Sec.~\ref{sec_exp}) contains a phenomenological assessment of the current experimental situation regarding the extraction of the in-medium EM spectral function and the pertinent regimes in temperature and baryon density that it corresponds to. The second one (Sec.~\ref{sec_theo}) summarizes how the phenomenological findings relate to chiral symmetry restoration, invoking both rigorous and more heuristic arguments. Neither section is divided up any further, to emphasize the comprehensive nature of the discussion. We briefly conclude in Sec.~\ref{sec_concl}.
\section{Electromagnetic Emission Spectra from Experiment} \label{sec_exp} The first unambiguous detection of a low-mass dilepton excess in ultrarelativistic heavy-ion collisions (URHICs) was achieved by the CERES/NA45 collaboration, first in the S-Au system~\cite{Agakishiev:1995xb} and later with improved precision in Pb-Au collisions~\cite{Agakichiev:2005ai,Adamova:2006nu}. Early theoretical analyses successfully described these data utilizing the conjecture of a dropping $\rho$-meson mass in hot and dense matter~\cite{Li:1995qm}, as a direct consequence of the reduction of the quark condensate (later, this connection was found to be problematic and revisited~\cite{Brown:2009az}). Shortly thereafter, several groups started to evaluate ``more mundane'' medium modifications, by performing ``standard'' hadronic many-body (or thermal-field theory) calculations of the $\rho$ spectral function using {\it known} (or at least well-constrained) hadronic interactions in vacuum as an input. These calculations, performed for cold nuclear matter~\cite{Chanfray:1993ue,Herrmann:1993za,Friman:1997tc,Klingl:1997kf,Peters:1997va,Urban:1998eg,Cabrera:2000dx}, a hot meson gas~\cite{Haglin:1994xu,Pisarski:1995xu,Song:1996dg,Rapp:1999qu,Ayala:2003yp}, or hot and dense hadronic matter~\cite{Rapp:1995zy,Rapp:1999us,Eletsky:2001bb,Ghosh:2011gs}, generically produce a rather strong broadening of the in-medium $\rho$ spectral function with little mass shift, especially in nuclear media. What was originally intended to provide a baseline for disentangling more ``exotic'' medium effects subsequently turned into a fair description of the CERES dilepton enhancement all by itself, see the left panel of Fig.~\ref{fig_sps}. Does this imply that dilepton data are not sensitive (or even unrelated) to chiral restoration? Our answer is no, as will be discussed in Sec.~\ref{sec_theo} below. \begin{figure}[!t] \begin{minipage}{18pc} \includegraphics[width=18pc]{dndm-ceres.eps} \end{minipage}\hspace{1pc} \begin{minipage}{19pc} \vspace{0.6pc} \includegraphics[width=19pc]{dndm-na60.eps} \end{minipage} \caption{Dilepton spectra measured at SPS energies ($\sqrt{s}$=17.3\,AGeV) by the CERES/NA45 collaboration in Pb-Au collisions (left panel)~\cite{Agakichiev:2005ai}, and by the NA60 collaboration in In-In (right panel)~\cite{Arnaldi:2008fw}, compared to the same theoretical approach~\cite{vanHees:2007th}. The CERES data are taken in the $e^+e^-$ channel and include acceptance cuts in the (di-) electron tracks, while the NA60 data are for $\mu^+\mu^-$, subtracted of the cocktail and fully corrected for (di-) muon acceptance cuts.} \label{fig_sps} \end{figure} A quantitative test of the ``$\rho$-broadening'' became possible with the NA60 dimuon spectra in In-In($\sqrt{s}$=17.3\,AGeV) collisions~\cite{Arnaldi:2006jq,Arnaldi:2008fw}, which have set a new standard for precision in dilepton spectroscopy in URHICs to date, see right panel of Fig.~\ref{fig_sps}. These data have been fully corrected for the detector acceptance, which, for the first time in URHIC dilepton spectroscopy, renders the mass spectra (Lorentz-) {\em invariant}. This means that their shape is {\it unaffected} by the blue shift due to the collective flow of the expanding fireball, and thus they directly reflect the spectral distribution of the medium's emission rate (in addition, the excellent mass resolution and statistics allowed for a subtraction of the hadronic decay cocktail, defining the notion of ``excess'' spectra). 
If the emission emanates from a locally thermalized medium, it takes the well-known form \begin{equation} \frac{dN_{ll}}{d^4xd^4q} = -\frac{\alpha_{\rm em}^2}{\pi^3 M^2} \ f^B(q_0;T)~{\rm Im}\Pi_{\rm em}(M,q;\mu_B,T) \ , \label{rate} \end{equation} where ${\rm Im}\Pi_{\rm em}$ is the in-medium EM spectral function, $f^B$ the thermal Bose distribution and the factor $1/M^2$ is a remnant of the intermediate propagator of the virtual photon. The predictions of hadronic many-body theory (evolved over a thermal fireball evolution constrained by hadron spectra)~\cite{vanHees:2006ng,vanHees:2007th} turn out to agree well with these data, cf.~right panel of Fig.~\ref{fig_sps} (see also Refs.~\cite{Dusling:2006yv,Ruppert:2007cr,Santini:2011zw,Linnyk:2011hz}). This actually provides non-trivial information beyond the realm of strict reliability of hadronic theory, which we estimate to be up to $T\simeq150$\,MeV, where the total hadron density reaches about 2$\varrho_0$ ($\varrho_0$=0.16\,fm$^{-3}$ denotes nuclear matter saturation density). A good portion of the low-mass dilepton enhancement is radiated from temperatures around and somewhat below $T_{\rm pc}$ in the fireball evolution. In other words, the NA60 data quantitatively support a smooth extrapolation of the hadronic $\rho$ broadening into the temperature region of the chiral transition. More explicitly, the observed spectra follow from a convolution of the above rate, Eq.~(\ref{rate}), over 3-momentum and 4-volume of the expanding medium, \begin{eqnarray} \frac{dN_{ll}}{dMdy} &=& \frac{1}{\Delta y} \int\limits_{\tau_0}^{\tau_{\mathrm{fo}}} d\tau \int\limits_{V_{\mathrm{FB}}} d^3x \int \frac{Md^3 q}{q_0} \ \frac{dN_{ll}}{d^4 xd^4q}(M,q;T(\tau),\mu_B(\tau)) \label{int} \\ &\simeq& \frac{V_4}{\Delta y} \int \frac{d^3q}{Mq_0} \frac{\alpha_{\rm em}^2}{\pi^3}f^B(q_0;\bar{T}) \ (-{\rm Im}\Pi_{\rm em}(M,q;\bar{T}, \bar{\mu}_B)) \ . \label{spec} \end{eqnarray} In the second line we have condensed the space-time integration into a 4-volume, $V_4$, at the expense of replacing the time-dependent temperature and baryo-chemical potential (or baryon density) in the dilepton rate by average values, $\bar{T}$ and $\bar{\mu}_B$, respectively. The latter can be extracted from the model calculations that describe the data, yielding $\bar{T}\simeq$~150-160\,MeV and $\bar{\mu}_B^{\rm tot}\simeq$~250-300\,MeV (or $\bar{\varrho}_B^{\rm tot} \simeq$~0.7-1\,$\varrho_0$). The remaining 3-momentum integral in Eq.~(\ref{spec}) is a Lorentz scalar and can thus be evaluated in the local rest frame. This is routinely done in model calculations of the spectral function, as shown in the left panel of Fig.~\ref{fig_rate}, including the dimuon phase-space factor and mass threshold, $M_{\rm thr}=2m_\mu=211$\,MeV. One immediately recognizes a remarkable resemblance of the theoretical rates, calculated a decade ago~\cite{Rapp:1999us}, to the NA60 data. This corroborates the possibility of extracting an ``average'' $\rho$-meson width which turns out to be $\bar\Gamma_\rho^{\rm med}\simeq$~350-400\,MeV (which is not far from the value found for cold nuclear matter at saturation density). This necessarily implies larger widths in the earlier stages of the fireball evolution (later and earlier contributions are, in fact, required to properly account for the total excess yield), which reach around $\Gamma_\rho(T_{\rm pc})\simeq$~600\,MeV, before a transition to QGP rates is performed. 
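To make the use of Eqs.~(\ref{rate}) and (\ref{spec}) concrete, the remaining 3-momentum integral can be evaluated for a schematic Breit-Wigner stand-in of the $\rho$ contribution to $-{\rm Im}\Pi_{\rm em}$. The following minimal Python sketch is purely illustrative: the temperature and the widths are assumed values, and the ansatz is not the many-body spectral function underlying the figures.

\begin{verbatim}
# Toy evaluation of Eq. (spec): thermal dilepton mass spectrum from a
# schematic Breit-Wigner rho ansatz (NOT the full many-body spectral
# function; all parameter values are illustrative assumptions).
import numpy as np
from scipy.integrate import quad

T     = 0.160   # average temperature T-bar [GeV], assumed
m_rho = 0.775   # rho mass [GeV]
G_med = 0.40    # assumed in-medium rho width [GeV] (vs ~0.15 in vacuum)

def bose(q0):
    return 1.0 / np.expm1(q0 / T)

def im_pi(M, Gamma):
    """Schematic Breit-Wigner stand-in for -Im Pi_em (arb. norm.)."""
    return M**2 * Gamma * m_rho / ((M**2 - m_rho**2)**2 + (m_rho * Gamma)**2)

def dNdM(M, Gamma):
    """3-momentum integral of Eq. (spec) in the local rest frame."""
    integrand = lambda q: q**2 / (M * np.hypot(M, q)) \
                          * bose(np.hypot(M, q)) * im_pi(M, Gamma)
    return quad(integrand, 0.0, 3.0)[0]

for M in np.linspace(0.3, 1.2, 10):
    print(f"M = {M:.2f} GeV:  vacuum-like {dNdM(M, 0.15):.3e}   "
          f"broadened {dNdM(M, G_med):.3e}")
# The broadened case redistributes strength from the rho peak toward
# low masses -- the qualitative origin of the observed excess shape.
\end{verbatim}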
At this point, the QGP and hadronic rates are very similar (see left panel of Fig.~\ref{fig_rate}), which has been interpreted as quark-hadron duality in dilepton rates across $T_{\rm pc}$~\cite{Rapp:1999if}. A similar feature was found for photon rates in Ref.~\cite{Kapusta:1991qp}, and later again in Ref.~\cite{Turbide:2003si}. \begin{figure}[!t] \begin{minipage}{19pc} \includegraphics[width=15.5pc,height=22pc,angle=-90]{drdmIny-na60-1.ps} \end{minipage} \hspace{0.5pc} \begin{minipage}{19pc} \vspace{0.3pc} \includegraphics[width=18pc]{Arho-na60.eps} \end{minipage} \caption{Left panel: Comparison of the thermal dilepton emission rates at fixed temperature (as figuring into the full calculation of the spectra shown in Fig.~\ref{fig_sps}) with the acceptance corrected NA60 spectra. The rates are a combination of the low-mass $\rho$ contribution (solid lines) and a continuum ``4$\pi$'' part (dashed lines) which for simplicity has been approximated by a perturbative continuum limited to $M$$>$0.9\,GeV. Both contributions carry the relative normalization as used in the fireball evolution (including pion fugacity factors), but are scaled by an overall constant to match the data at $M$$\simeq$0.5\,GeV. Isoscalar contributions ($\omega$ and $\phi$) to the rates are not included. Right panel: In-medium $\rho$ spectral function based on Ref.~\cite{Rapp:1999us} as figuring into the calculations of the dilepton spectra in Fig.~\ref{fig_sps}~\cite{vanHees:2007th} and underlying the low-mass in-medium hadronic rates in the left panel.} \label{fig_rate} \end{figure} An important question is whether one can obtain independent information on the medium's temperature from which the observed spectral shape originates. As stated above, invariant-mass spectra are an ideal tool to evaluate emission temperatures due to the absence of blue shifts induced by the collective flow (which are present in transverse-momentum ($q_T$) spectra). However, as evident from Eq.~(\ref{spec}), the thermal slope associated with the Bose distribution is modulated by the medium effects in the EM spectral function. Nevertheless, when using the in-medium hadronic rates underlying the description of the NA60 data in Fig.~\ref{fig_sps} (right) as a ``thermometer'', one finds that the slope in the data over the range $M=$~0.3-1.5\,GeV is reasonably well reproduced with $T\simeq$~150-170\,MeV, cf.~left panel of Fig.~\ref{fig_rate}. Closer inspection of this comparison reveals that the enhancement at the very low-mass end, close to the dimuon threshold, is slightly overestimated, suggesting that in this regime emission from later stages prevails, where the medium effects are smaller, cf.~the in-medium $\rho$ spectral function in the right panel of Fig.~\ref{fig_rate}. Alternatively, carrying out the slope analysis for masses above $M=$~1\,GeV, where the medium effects on the rate are less pronounced (resembling a structureless continuum), one deduces a slope of around $T\simeq$~170\,MeV up to $M\simeq 1.5$\,GeV, with a tendency to further increase at still higher mass. This is consistent with the well-known feature that at higher mass (or higher $q_T$, as governed by the energy, $q_0$, figuring into the thermal distribution function), the temperature sensitivity of the thermal exponential increasingly biases the contributions to the spectra toward earlier phases (i.e., higher $T$)~\cite{Rapp:2011is}. 
The NA60 collaboration has also conducted a systematic investigation of slope parameters, $T_{\rm eff}$, of $q_T$ spectra in various mass bins~\cite{Arnaldi:2007ru}. Here, $T_{\rm eff}$ contains the radial-flow blue shift, schematically as $T_{\rm eff} \simeq T + M \beta_{\rm av}^2$, where $\beta_{\rm av}$ denotes the average expansion velocity of the fireball at a given moment. The extracted values for $T_{\rm eff}$ gradually increase from about 170\,MeV at threshold up to ca.~260\,MeV at the $\rho$ mass, decreasing thereafter and leveling off at ca. 200-220\,MeV for $M>1$\,GeV. This is remarkably consistent with the information from $M$-spectra, i.e., an emission source of hot hadronic matter at and below the $\rho$-mass with a mass-dependent slope increase, and emission from around $T_c$ and higher (with a possibly large QGP component) above the $\rho$ mass. An intriguing aspect of the $\rho$-broadening as obtained from many-body theory in hot and dense hadronic matter is the prevalence of baryon-induced medium effects~\cite{Rapp:1995zy,Rapp:1999us}. Theoretically, this is a consequence of quantitative constraints on the $\rho$-meson coupling to nucleons as deduced, e.g., from nuclear photoabsorption data on the proton and on nuclei~\cite{Rapp:1997ei}. This feature raised some doubts in the early stages of interpreting the CERES data, as the experimental pion-to-baryon ratio at full SPS energy is about 5:1. However, many of the pions observed in the final state originate from resonance decays; including these in a thermally equilibrated ``hadron resonance gas'' gives meson-to-baryon ratios closer to 2:1. Nonetheless, the question arises of how the properties of the excess radiation develop with the nuclear collision energy, i.e., with the relative baryon content in the system. The first step in this direction was a CERES measurement at a lower SPS bombarding energy of $E_{\rm lab}$=40\,AGeV ($\sqrt{s}$=8.7\,AGeV)~\cite{Adamova:2003kf}. While the pion multiplicity decreases by ca.~40\%, the low-mass dilepton enhancement over the cocktail indicates an increase over the result at 158\,AGeV, consistent with the anticipated prevalence of baryon-driven medium effects. The predictions from the in-medium broadened $\rho$ spectral function describe the data well. This trend continues all the way down to relativistic energies in the BEVALAC/SIS regime ($E_{\rm lab}$=1-2\,AGeV), where a large excess radiation has also been reported~\cite{Porter:1997rc,Agakishiev:2011vf}. The next step is to go to higher energies as available at the colliders RHIC and LHC. Here, the net baryon density at mid-rapidity becomes small. It was pointed out, however, that at temperatures close to $T_{\rm pc}$, the {\em sum} of the densities of baryons ($B$) and anti-baryons ($\bar{B}$) is appreciable, about 0.7$\varrho_0$, and thus critical in producing a large broadening of the $\rho$, since it interacts equally with baryons and anti-baryons due to $CP$ invariance~\cite{Rapp:2000pe}. Since the concept of chemical freeze-out continues to hold in nuclear collisions at RHIC energy, the total number of baryons plus anti-baryons does not change appreciably until thermal freeze-out at $T_{\rm fo}$$\simeq$100\,MeV, and thus their density drops much slower than would be the case in chemical equilibrium (where it would decrease dramatically due to the large mass penalty on baryons). 
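For orientation, the schematic blue-shift relation $T_{\rm eff}\simeq T + M\beta_{\rm av}^2$ quoted at the beginning of this discussion can be evaluated numerically; the values of $T$ and $\beta_{\rm av}$ in the sketch below are assumptions chosen for illustration, not fit results.

\begin{verbatim}
# Numerical illustration of T_eff ~ T + M * beta_av^2 (all in GeV);
# T and beta_av are assumed representative values, not fits.
M_thr, M_rho = 0.211, 0.775          # dimuon threshold and rho mass
for T, beta in [(0.160, 0.20), (0.160, 0.35)]:
    for M in (M_thr, M_rho):
        T_eff = T + M * beta**2
        print(f"T={T*1e3:.0f} MeV, beta={beta}: "
              f"M={M*1e3:.0f} MeV -> T_eff ~ {T_eff*1e3:.0f} MeV")
# With beta_av ~ 0.35 one finds T_eff ~ 255 MeV at the rho mass,
# in the ballpark of the ~260 MeV quoted from the NA60 q_T slopes;
# a smaller beta_av ~ 0.2 gives ~170 MeV near threshold.
\end{verbatim}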
The upshot of this discussion is that the hadronic in-medium effects at collider energies were predicted to be comparable to those at SPS energies, and thus the low-mass dilepton enhancement at RHIC and LHC is expected to be quite similar in magnitude and shape to what has been observed at SPS. The PHENIX data of Ref.~\cite{Adare:2009qk} do not support this expectation: a large enhancement in central Au-Au($\sqrt{s}$=200\,AGeV) has been reported which cannot be described by the in-medium hadronic effects as is the case at the SPS. One should note, however, that the enhancement in non-central collisions is significantly smaller. A new mechanism should thus be operative in central Au-Au at RHIC, which does not prominently figure at SPS, nor in more peripheral collisions at RHIC. On the other hand, the STAR collaboration has reported preliminary data for dielectrons in central Au-Au($\sqrt{s}$=200\,AGeV)~\cite{Zhao:2011wa}, which indicate a much smaller enhancement, not incompatible with the effects expected from the in-medium $\rho$ broadening. At the recent Quark Matter 2012 meeting, the STAR collaboration went another step further, presenting a systematic dielectron measurement from the RHIC beam-energy scan program~\cite{Geurts:2012}. A persistent low-mass enhancement was found in Au-Au at collision energies of $\sqrt{s}$=19.6, 39, 62 and 200\,AGeV. The invariant-mass spectra at the lowest energy (19.6\,GeV) exhibit the largest excess over the cocktail, and agree very well with the CERES data in Pb-Au($\sqrt{s}$=17.3\,AGeV). The predictions from hadronic many-body theory (with a moderate QGP portion in the low-mass regime) show good agreement with this excitation function. The STAR measurements (together with the CERES and NA60 data) thus suggest a universal origin of the low-mass dilepton enhancement in URHICs from $\sqrt{s}\simeq$~10-200\,GeV. New data reported by the PHENIX collaboration for peripheral and semi-central Au-Au, taking advantage of the hadron blind detector (HBD), also support this scenario~\cite{Tserruya:2012}. The STAR collaboration furthermore presented first measurements of the dielectron elliptic flow, $v_2$. Within the currently rather large uncertainties of this very challenging measurement, the $v_2$ of the dielectrons in the low-mass region, divided up into several mass bins, was found to be compatible with the simulated $v_2$ of the decay cocktail (mostly due to the long-lived $\pi$, $\eta$, $\omega$ and $\phi$ contributions). At face value, this result implies that the excess radiation carries a $v_2$ which is as large as that of the hadrons decaying after thermal freeze-out. This assertion is, in fact, very consistent with the PHENIX measurement of direct photon $v_2$ in Au-Au collisions~\cite{Adare:2011zr}, which was found to be compatible with that of pions for $q_T\le$3\,GeV. Such an observation is difficult to account for through radiation which is dominated by early QGP emission~\cite{Liu:2009kta,Holopainen:2011pd,Dion:2011pp}. This discrepancy can be noticeably reduced in a rather straightforward fireball scenario where most of the bulk $v_2$ is built up by the time the system reaches the phase transition region, in connection with photon emission rates for QGP and hadronic matter which continuously evolve across $T_{\rm pc}$~\cite{vanHees:2011vb}. The latter feature is indeed satisfied when merging hadronic many-body calculations for photon production~\cite{Turbide:2003si} with QGP rates in a complete leading-order perturbative evaluation~\cite{Arnold:2001ms}. 
At the same time, a fireball evolution with fully built-up elliptic flow and fairly large radial flow at $\sim$$T_{\rm pc}$ is supported empirically by the systematics of the measured spectra and $v_2$ of multistrange particles ($\Omega^-$, $\phi$ and $\Xi$), as well as by the constituent quark-number scaling of light and strange hadrons; it is possible to realize these features in explicit hydrodynamic simulations~\cite{He:2011zx}. Furthermore, the PHENIX collaboration extracted an inverse slope of their excess direct-photon spectra (defined by subtracting the primordial contribution from hard $NN$ collisions; late decays, such as $\pi^0,\eta\to\gamma\gamma$, are already taken out in the definition of ``direct'' photons). It was found to be $T_{\rm eff}\simeq(221\pm19^{\rm stat}\pm19^{\rm sys})$\,MeV. This is a rather soft slope given its approximate decomposition into a true medium temperature and the blue-shift effect on massless particles due to an average radial flow velocity, $T_{\rm eff}\simeq T\sqrt{(1+\beta_{\rm av})/(1-\beta_{\rm av})}$. For example, for an average flow velocity of $\beta_{\rm av}=0.3(0.4)$, the blue-shift correction amounts to $\sim$35(50)\%. This further corroborates that the prevalent emission of the photons measured by PHENIX should be around $T_{\rm pc}$. To summarize this section, the excess of EM radiation observed in URHICs to date is remarkably consistent with a thermal source of dileptons and photons from a hydrodynamically evolving medium, and thus naturally fits into the current ``standard model'' of these reactions. Employing state-of-the-art emission rates allows for an accurate description of precision dilepton data at the SPS, and accounts for the data available at lower SPS energy as well as the most recent spectra obtained in a first systematic energy scan in the collider regime of RHIC. Slope analyses of both mass and $q_T$ spectra, as well as their large $v_2$, give strong indications for this excess radiation to emanate from around the phase transition temperature predicted by thermal lattice QCD. The similarity in magnitude and spectral shape of the excess radiation over the now available formidable range in collision energy suggests a universal origin of the observations (and further points to emission around $T_{\rm pc}$). In the following section we will revisit how the microscopic mechanisms of in-medium $\rho$ broadening, which is consistent with the data, relate to chiral symmetry restoration in the medium. \section{Implications for Chiral Restoration} \label{sec_theo} \begin{figure}[!t] \begin{minipage}{19pc} \includegraphics[width=19pc]{Vec-vac.eps} \end{minipage}\hspace{1pc} \begin{minipage}{19pc} \includegraphics[width=19pc]{AxV-vac.eps} \end{minipage} \caption{Vector (left panel) and axialvector (right panel) spectral functions in vacuum~\cite{Hohler:2012xd}. Notable features are the inclusion of excited states $\rho'$ and $a_1'$ (long-dashed lines) and a universal perturbative continuum (dash-dotted lines) which starts at significantly higher energies than in most previous sum-rule analyses. } \label{fig_vac} \end{figure} Rigorous connections between chiral order parameters and the $\rho$ spectral function (or more precisely: vector-isovector spectral function) can be made via well-known sum rule techniques. These are usually divided into two classes, namely the QCD sum rules (QCDSRs)~\cite{Shifman:1978bx} and Weinberg (or chiral) sum rules (WSRs)~\cite{Weinberg:1967,Das:1967ek}. 
The former are formulated in a given hadronic channel and utilize a dispersion integral to relate the physical spectral function to an expansion in spacelike momentum transfer with coefficients governed by quark and gluon condensates (operator-product expansion). For the vector channel, one has \begin{equation} \frac{1}{M^2}\!\int_0^\infty \!ds \frac{\rho_V(s)}{s} e^{-s/M^2} = \frac{1}{8\pi^2} \left(1+\frac{\alpha_s}{\pi}\right) +\frac{m_q \langle\bar{q}q\rangle}{M^4} +\frac{1}{24 M^4}\langle\frac{\alpha_s}{\pi} G_{\mu\nu}^2\rangle - \frac{56 \pi \alpha_s}{81 M^6} \langle \mathcal{O}_4^V \rangle \ldots \, , \label{qcdsr} \end{equation} where $\rho_V=-{\rm Im \Pi_V} / \pi$ is the spectral function, $\langle\bar{q}q\rangle$ the chiral condensate, $\langle\frac{\alpha_s}{\pi} G_{\mu\nu}^2\rangle$ the gluon condensate and $\langle \mathcal{O}_4^V \rangle$ the vector 4-quark condensate. The WSRs are, in a sense, more closely related to chiral symmetry (breaking). They involve energy-weighted integrals (or moments) over the difference of vector and axialvector spectral functions, which directly result in order parameters of chiral symmetry breaking, such as the quark condensate, pion decay constant or chirally-breaking part of the 4-quark condensate. They read \begin{eqnarray} f_n = \int\limits_0^\infty ds \ s^n \ \left[\rho_V(s) - \rho_A(s) \right] \ , \qquad \qquad \qquad \label{wsr} \\ f_{-2} = f_\pi^2 \frac{\langle r_\pi^2 \rangle}{3} - F_A \ , \quad f_{-1} = f_\pi^2 \ , \quad f_0 = f_\pi^2 m_\pi^2 \ , \quad f_1 = -2\pi \alpha_s \langle {\cal O}_4^\chi \rangle \ \label{fn} \end{eqnarray} ($r_\pi$: pion charge radius, $F_A$: coupling constant for the radiative pion decay, $\pi^\pm\to\mu^\pm \nu_\mu \gamma$, $\langle {\cal O}_4^\chi \rangle$: chirally breaking 4-quark condensate), and are referred to as WSR-0 through -3. They have been shown to remain valid in the medium~\cite{Kapusta:1993hq}. This is quite a fortunate situation as the in-medium vector spectral function is the only one readily available from experiment. A quantitative use of these sum rules in medium requires first establishing their accuracy and sensitivity in vacuum. This has recently been revisited by simultaneously analyzing both sum-rule types in connection with $\tau$-decay data~\cite{Barate:1998uf,Ackerstaff:1998yj} which give accurate information on the vector and axialvector spectral functions in vacuum up to energies of the $\tau$ mass, $\sqrt{s} < m_\tau \simeq1.78$\,GeV. The low-energy part of the vector spectral function has been taken from the microscopic model for the $\rho$ meson that figures in the discussion of the dilepton data of the previous section, while the $a_1$ has been fit to the data with a Breit-Wigner ansatz. A novel element in this analysis is the postulate of a universal perturbative continuum in both vector and axialvector channels. Besides its underlying physical motivation of degeneracy in the perturbative domain, it also allows for an improved description of the intermediate-energy vector data through the introduction of a $\rho'$ resonance. It then turns out that a quantitative agreement with the right-hand-side ({\em rhs}) of WSRs 0-2 unequivocally requires the presence of an excited axialvector resonance, at a mass of about $m_{a_1'}\simeq1.8$\,GeV (WSR-3 also shows a large improvement)~\cite{Hohler:2012xd}. This indicates a rather high sensitivity of the WSRs to chiral breaking effects. 
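As a minimal illustration of how the moments in Eq.~(\ref{wsr}) constrain spectral functions, one may check WSR-1 and WSR-2 in the classic narrow-resonance saturation with KSRF couplings; this is a textbook exercise, not the spectral-function set of Ref.~\cite{Hohler:2012xd}.

\begin{verbatim}
# Toy check of WSR-1 and WSR-2 (chiral limit) in narrow-resonance
# saturation: rho_V = F_rho^2 delta(s - m_rho^2), rho_A = F_a1^2
# delta(s - m_a1^2), with KSRF F_rho^2 = 2 f_pi^2 m_rho^2 and
# m_a1^2 = 2 m_rho^2.  Textbook inputs, assumed for illustration.
f_pi, m_rho = 0.0924, 0.775             # [GeV]
m_a1   = 2**0.5 * m_rho
F_rho2 = 2.0 * f_pi**2 * m_rho**2
F_a12  = F_rho2                          # WSR-2: f_0 -> 0 as m_pi -> 0

# moments f_n = int ds s^n [rho_V - rho_A]; deltas done by hand
f_m1 = F_rho2 / m_rho**2 - F_a12 / m_a1**2   # should equal f_pi^2
f_0  = F_rho2 - F_a12                        # should vanish

print(f"f_-1 = {f_m1:.6f} GeV^2  vs  f_pi^2 = {f_pi**2:.6f} GeV^2")
print(f"f_0  = {f_0:.1e}   (chiral limit: f_pi^2 m_pi^2 -> 0)")
\end{verbatim}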
Another way of looking at this is to examine the values of the integrals on the {\it rhs} of the WSRs, Eq.~(\ref{wsr}), as a function of the upper integration limit, $I_n(s_{\rm up})$. One finds oscillations with an amplitude much larger than the asymptotic values as given by the left-hand side ({\it lhs}), i.e., the latter are a result of formidable cancellations and therefore are to be considered as ``small''. Finally, the thus constructed vacuum spectral functions have been tested with their respective QCDSRs~\cite{Hohler:2012xd}. Within the current theoretical uncertainties of the additionally involved chirally blind operators, such as the gluon condensate, these are also reasonably well satisfied, within a typical margin of $\sim$0.5\%. Let us now turn to the evaluation of the sum rules in medium (see, e.g., Ref.~\cite{Friman:2011zz} for a recent survey). As a first step, it is instructive to examine the consequences of model-independent low-temperature expansions. As shown in Ref.~\cite{Dey:1990ba}, the leading $T$-dependence in the vector and axial-vector channels in the chiral limit amounts to a mutual mixing of their vacuum form, $\Pi_{V,A}^{0}$, as \begin{equation} \Pi_{V,A}(q;T) = (1-\varepsilon(T))~\Pi_{V,A}^{0}(q) + \varepsilon(T)~\Pi_{A,V}^{0}(q) \ , \label{chi-mix} \end{equation} with the mixing parameter $\varepsilon(T)= T^2/6f_\pi^2$ (which is proportional to the scalar pion density). The physical process realizing the mixing is a resonant interaction of the $\rho$-meson with pions into an $a_1$, $\rho+\pi\to a_1$. It turns out that the chiral mixing straightforwardly satisfies the in-medium WSR-1 and -2, provided they are fulfilled in vacuum~\cite{Kapusta:1993hq}. Complications can arise, however, at the level of the spectral functions, especially in set-ups with a rather small vector continuum threshold, where the latter does not separate from the nonperturbative resonance region of the axialvector, i.e., the $a_1$ mass. This problem does not occur in the implementations of degenerate continua with higher threshold energy~\cite{Marco:2001dh,Hohler:2012xd}; here, the continua stay invariant and the mixing only operates on the nonperturbative part of both spectral functions. One can furthermore evaluate the QCD sum rules in the mixing scenario~\cite{Holt:2012,Kwon:2010fw}. Employing the model-independent $T$-dependencies of quark and gluon condensates, one finds both vector and axialvector QCDSRs to be satisfied within 0.7\% or so up to temperatures of $T$=150\,MeV~\cite{Holt:2012}, where $\varepsilon\simeq0.2$ so that the expansion starts to become unreliable (note that $\varepsilon\simeq0.5$, which is reached at $T\simeq 225$\,MeV (160\,MeV in the chiral limit), corresponds to full mixing, i.e., degeneracy of $V$ and $A$ correlators in Eq.~(\ref{chi-mix}); thermal excitations other than the pion are expected to take over well before that). We recall that a mixing scenario can also be formulated in cold nuclear matter, through a coupling to the nuclear pion cloud~\cite{Krippa:1997ss,Chanfray:1999me}. The coupling of these pions proceeds through the pion cloud of the $\rho$, thereby creating axial currents~\cite{Chanfray:1999me}. 
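The size of the mixing is readily quantified. In the sketch below, the chiral-limit form $\varepsilon = T^2/6f_\pi^2$ is taken from the text, while the massive-pion expression via the scalar pion density is one common way to write $\varepsilon$ away from the chiral limit and should be regarded as an assumption for illustration.

\begin{verbatim}
# Mixing parameter of Eq. (chi-mix): chiral-limit form and a
# scalar-pion-density estimate for physical m_pi (an assumption).
import numpy as np
from scipy.integrate import quad

f_pi, m_pi = 0.0924, 0.140   # [GeV]

def eps_chiral(T):
    return T**2 / (6.0 * f_pi**2)

def eps_massive(T):
    # eps = (2/f_pi^2) int d^3p/(2pi)^3 f_B(E)/E, E = sqrt(p^2+m_pi^2);
    # reduces to T^2/(6 f_pi^2) as m_pi -> 0
    integrand = lambda p: p**2 / np.hypot(p, m_pi) \
                          / np.expm1(np.hypot(p, m_pi) / T)
    return 2.0 / f_pi**2 * quad(integrand, 0, 2.0)[0] / (2.0 * np.pi**2)

for T in (0.100, 0.150, 0.160):
    print(f"T={T*1e3:.0f} MeV: eps(chiral)={eps_chiral(T):.2f}, "
          f"eps(m_pi=140 MeV)={eps_massive(T):.2f}")
# Full mixing eps = 1/2 in the chiral limit at T = sqrt(3) f_pi:
print(f"T(eps=1/2, chiral limit) = {3**0.5 * f_pi * 1e3:.0f} MeV")
\end{verbatim}

With the physical pion mass one finds $\varepsilon\simeq0.2$ at $T$=150\,MeV, and the chiral-limit condition $\varepsilon=1/2$ is met at $T\simeq160$\,MeV, consistent with the numbers quoted above.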
\begin{figure}[!t] \begin{minipage}{19pc} \includegraphics[width=19pc]{q2cond.eps} \end{minipage}\hspace{1pc} \begin{minipage}{19pc} \vspace{0.6pc} \includegraphics[width=18pc]{VecSFTtd.eps} \end{minipage} \caption{Left panel: Temperature dependence of the chiral quark condensate, normalized to its vacuum value, as obtained from thermal lattice QCD~\cite{Borsanyi:2010bp} for different temporal lattice sizes (indicated by different symbols); the dashed line is the hadron-resonance gas result where the condensate is diminished by the quark content of the thermal excitations; the solid line includes a $T^{10}$ correction which improves the agreement with lattice data at $T>$~140\,MeV. Right panel: In-medium vector spectral functions in the isovector channel including the calculated in-medium $\rho$ contributions at vanishing chemical potential at low mass~\cite{Rapp:1999us}, supplemented with an in-medium $\rho'$ contribution and a fixed perturbative continuum as deduced from in-medium QCD sum rules~\cite{Hohler:2012fj}.} \label{fig_med} \end{figure} While conceptually attractive (and rigorous), the $V$-$A$ mixing mechanism alone is insufficient to account for the medium effects necessary to understand dilepton data. Most importantly, it lacks the broadening of the $\rho$ spectral shape that is pivotal in the description of the low-mass excess observed in experiment. We recall that this broadening is essentially due to two mechanisms~\cite{Rapp:1999us}: one is the medium modification of the $\rho$-meson's pion cloud (which includes virtual pion-exchange processes, i.e., chiral mixing), and the other is due to direct resonance interactions of the $\rho$ with heat bath particles $h$, $\rho+h \to R$, leading to the excitation of further resonances, $R$. How are these processes related to chiral symmetry restoration? To elaborate on this question, let us recall the model-independent, low-temperature and low-density result for the chiral condensate in a dilute hadronic medium (e.g., pion or nucleon gas) which reads~\cite{Gerber:1988tt,Drukarev:1991fs,Cohen:1991nk} \begin{equation} \frac{\langle\bar qq\rangle (T,\mu_B)}{\langle \bar qq\rangle_0} \ = \ 1-\sum\limits_h \frac{\varrho_h^s \Sigma_h}{m_\pi^2 f_\pi^2} \ \simeq \ 1 - \frac{T^2}{8f_\pi^2} - \frac{1}{3} \frac{\varrho_N}{\varrho_0} - \cdots \label{qqbar-med} \end{equation} with $\varrho_h^s$: scalar density of hadron $h$. The sigma term, $\Sigma_h = m_q \langle h|\bar qq|h\rangle$, can be defined as the expectation value of the chiral-symmetry breaking term of the QCD Lagrangian inside a hadron $h$. For the pion and nucleon they have been evaluated from both chiral perturbation theory~\cite{Gerber:1988tt,Drukarev:1991fs,Cohen:1991nk} and lattice QCD~\cite{MartinCamalich:2010fp}. The second equality in Eq.~(\ref{qqbar-med}) follows from the chiral limit for the pion and a value of $\Sigma_N$=45\,MeV for the nucleon. More recent evaluations suggest the latter to be significantly larger, around 60\,MeV~\cite{MartinCamalich:2010fp}. The above expression has been generalized to a resonance gas of hadrons to leading order in their densities. Note that, although there are formally no interactions included, the excited hadronic states may be thought of as being built up by resonance interactions of the stable pions and nucleons. In the nonrelativistic limit, the pertinent sigma terms may be estimated using $\bar qq\simeq q^\dagger q$, so that $\Sigma_h/ m_q$ simply counts the number of light quarks in $h$. 
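The numerical size of the terms in Eq.~(\ref{qqbar-med}) is easily verified from the inputs quoted above; a short sketch:

\begin{verbatim}
# Check of the coefficients in Eq. (qqbar-med): the nucleon term
# Sigma_N rho_N/(m_pi^2 f_pi^2) with Sigma_N = 45 MeV gives
# ~ (1/3) rho_N/rho_0; the pion term is T^2/(8 f_pi^2) in the
# chiral limit.  Inputs are the values quoted in the text.
hbarc = 0.19733                               # GeV fm
f_pi, m_pi, Sigma_N = 0.0924, 0.140, 0.045    # [GeV]
rho_0 = 0.16 * hbarc**3                       # 0.16 fm^-3 in GeV^3

coeff_N = Sigma_N * rho_0 / (m_pi**2 * f_pi**2)
print(f"nucleon coefficient: {coeff_N:.3f}   (text: ~1/3)")
print(f"with Sigma_N = 60 MeV instead: {coeff_N * 60 / 45:.3f}")

T = 0.150
print(f"pion term at T=150 MeV: {T**2 / (8 * f_pi**2):.2f}")
# i.e. the condensate is already reduced by ~33% from the pion
# gas alone at T = 150 MeV (chiral limit).
\end{verbatim}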
With $\Sigma_h>0$, one obtains the well-known result that the mere presence of hadrons diminishes the (negative) chiral condensate of the vacuum, $\langle 0|\bar qq|0\rangle\equiv\langle\bar qq\rangle_0\simeq-2$\,fm$^{-3}$ per light-quark flavor, i.e., the ``vacuum cleaner'' effect. The very existence of the resonance excitations in the hadron gas, which is one of the components in the $\rho$-meson broadening, is thus intimately related to the reduction of the chiral condensate. This notion can be carried further by realizing that the sigma term can be decomposed into a short-distance part associated with the hadron's quark core and a long-distance part associated with its pion cloud, $\Sigma_h = \Sigma_h^{\rm core} + \Sigma_h^{\pi}$, see, e.g., Refs.~\cite{Jameson:1992,Birse:1992} for the nucleon case. These two terms naturally find their counterparts in the medium effects of the $\rho$ spectral function, namely the ones induced by direct resonance excitations as well as through the coupling of its pion cloud to the medium, respectively. This puts the medium effects due to chiral mixing through (virtual) pions and due to the hadron-resonance gas excitations on equal footing. Also recall that the hadron-resonance gas appears to be a good approximation to the QCD partition function for temperatures up to close to (and even slightly above) $T_{\rm pc}$~\cite{Karsch:2003vd}. The next question is how this connection works out more quantitatively when analyzing the in-medium sum rules. This has been revisited very recently~\cite{Hohler:2012fj} (see also Ref.~\cite{Ayala:2012ch}), by adopting the newly suggested description of the vector correlator with ground and excited states and a universal high-energy continuum (recall Fig.~\ref{fig_vac}). For the ground state the in-medium $\rho$ spectral function as used in dilepton calculations has been employed, while the perturbative continuum remains unchanged. This leaves the Breit-Wigner parameters of the in-medium $\rho'$ to be adjusted (in lieu of the continuum threshold in previous QCDSR analyses). For the temperature dependence of the condensates, the ``non-interacting'' hadron-resonance gas expression, Eq.~(\ref{qqbar-med}), has been used for the 2-quark condensate, which turns out to agree rather well with lattice data, cf.~left panel of Fig.~\ref{fig_med} (a small correction has been introduced to better reproduce the lattice data in the vicinity of $T_{\rm pc}^\chi$). The 4-quark and gluon condensates are treated analogously, where the former includes correction terms such that it vanishes at the same temperature as the 2-quark condensate. It turns out that the QCDSRs can be rather well satisfied even until close to the vanishing of the quark condensates, provided that the $\rho'$ also melts, cf.~right panel of Fig.~\ref{fig_med}. Of course, one should keep in mind that the vanishing of the quark condensates in the hadron-resonance gas is unrelated to a real phase transition, although it may still indicate that this model captures basic aspects of the medium when extrapolated close to $T_{\rm pc}$. This argument also applies to the in-medium calculations of the $\rho$ spectral function which should be rather reliable up to temperatures of about $T\simeq150$\,MeV, where the total hadron density has reached about 2$\varrho_0$. As discussed in the previous section, it is quite intriguing that dilepton data support a smooth extrapolation of the medium effects into the transition region. 
The analysis of the WSRs in this framework requires the knowledge of the in-medium axialvector spectral function, which is not available (yet). Preliminary studies using ans\"atze for the in-medium Breit-Wigner shape for $a_1$ and $a_1'$ that satisfy the axialvector QCDSR indicate agreement with the in-medium WSRs, with a tendency of the vector and axialvector to degenerate into rather structureless spectral functions. \begin{figure}[!t] \begin{minipage}{17pc} \includegraphics[width=17pc]{Pii180.eps} \vspace{-1.5pc} \end{minipage} \hspace{1pc} \begin{minipage}{17pc} \vspace{-1.5pc} \includegraphics[width=16.3pc,angle=-90]{rho-em-4.ps} \end{minipage} \caption{Left panel: Euclidean correlators at vanishing 3-momentum and normalized to the free $q \bar q$ continuum (for $N_f$=2 light quarks), as a function of imaginary time, $\tau$, in units of inverse temperature; the thermal lattice QCD results in quenched approximation at 1.45\,$T_c$ (black squares)~\cite{Ding:2010ga} are compared to effective hadronic model evaluations of Eq.~(\ref{Pi-tau2})~\cite{Rapp:2002pn} using either vacuum $\rho$ and $\omega$ spectral functions plus continuum (dashed line), or in-medium $\rho$ and $\omega$~\cite{Rapp:1999us} at $T$=180\,MeV with either vacuum or in-medium reduced continuum (lower and upper solid line, respectively). Right panel: Spectral functions (normalized by isospin degeneracy, temperature and energy) corresponding to the correlators in the left panel; the lQCD result (black solid line) is extracted from the data points in the left panel by a 3-parameter fit ansatz~\cite{Ding:2010ga}; for the hadronic spectral functions only the isovector part is shown ($\rho$ meson plus continuum), where the in-medium one (red line) uses a vacuum continuum and thus corresponds to the lower red line in the left panel.} \label{fig_lat} \end{figure} Another test of the in-medium vector spectral function, and of the associated chiral-restoration mechanism, can be provided by thermal lattice QCD. In the latter, the information on the correlation function in a specific hadronic channel, $\alpha$, is routinely computed in Euclidean time, $\Pi_\alpha(\tau,\vec r)$. After a Fourier transform in the spatial coordinates (from $\vec r$ to $\vec q$), the relation to the spectral function in the physical (timelike) regime takes the form \begin{equation} \Pi_\alpha(\tau,q;T)= \int\limits_0^\infty \frac{dq_0}{2\pi} \ \rho_\alpha(q_0,q;T) \ \frac{\cosh[q_0(\tau-1/2T)]}{\sinh[q_0/2T]} \ . \label{Pi-tau2} \end{equation} Thus, via a simple integration over a model spectral function one can directly compare to ``lattice data''. The computational effort to evaluate the Euclidean correlators in full QCD with dynamical quarks is formidable and no results for light vector mesons are currently available. However, in the quenched approximation, these calculations have achieved good accuracy in a gluon plasma above $T_c$~\cite{Ding:2010ga}. It is instructive to compare the trends in these computations to what one obtains from the spectral functions used in the interpretation of dilepton data. This is shown in the left panel of Fig.~\ref{fig_lat}, depicting the Euclidean correlators normalized to the perturbative (non-interacting) $q\bar q$ continuum. The hadronic spectral function, extrapolated to a temperature of $T$=180\,MeV, shows a significant enhancement at large $\tau$ which is a direct manifestation of the low-mass enhancement generated by medium effects (critical for the description of dilepton data). 
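The folding of Eq.~(\ref{Pi-tau2}) is simple enough to reproduce with a toy model. The sketch below compares a flat perturbative continuum with a resonance-plus-continuum ansatz; the Breit-Wigner parameters and the overall resonance strength are schematic assumptions, not the in-medium $\rho/\omega$ spectral functions of the figure.

\begin{verbatim}
# Fold a model spectral function with the thermal kernel of
# Eq. (Pi-tau2) and compare to the free continuum (toy ansatz).
import numpy as np
from scipy.integrate import quad

T = 0.180                            # [GeV], temperature used above

def kernel(q0, tau):
    return np.cosh(q0 * (tau - 1.0 / (2 * T))) / np.sinh(q0 / (2 * T))

def rho_free(q0):                    # perturbative continuum ~ q0^2
    return q0**2 / (4 * np.pi**2)

def rho_res(q0, m=0.775, G=0.40):    # Breit-Wigner + continuum (toy)
    bw = q0 * G * m / ((q0**2 - m**2)**2 + (m * G)**2)
    return 0.02 * bw + rho_free(q0)

def Pi(rho, tau):
    return quad(lambda q0: rho(q0) * kernel(q0, tau) / (2 * np.pi),
                1e-4, 5.0, limit=200)[0]

for tau_T in (0.2, 0.3, 0.4, 0.5):   # tau in units of 1/T
    tau = tau_T / T
    print(f"tau*T = {tau_T}: Pi/Pi_free = "
          f"{Pi(rho_res, tau) / Pi(rho_free, tau):.3f}")
# The ratio grows toward tau*T = 1/2, since the kernel weights low
# energies there -- the large-tau enhancement seen in the left panel.
\end{verbatim}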
In fact, even the vacuum spectral function shows this effect, caused by the free $\rho$ and $\omega$ resonances; the in-medium broadening roughly doubles this enhancement. The quenched lattice data~\cite{Ding:2010ga} show a surprisingly similar trend, given that they are computed at 1.45\,$T_c$. Note that chiral symmetry is restored under these conditions~\cite{Boyd:1995cw}. To extract the spectral function from the lattice data is more involved. In Ref.~\cite{Ding:2010ga} a physically motivated 3-parameter ansatz has been fit to the correlators resulting in a conductivity maximum at low energy, followed by a smooth transition into the perturbative continuum at high energy (see black solid line in the right panel of Fig.~\ref{fig_lat}). Comparing this to the hadronic spectral functions, one observes a trend suggestive of approaching the lattice data via a melting of the vacuum $\rho$ resonance structure. \section{Conclusions} \label{sec_concl} Low-mass dilepton data in ultrarelativistic heavy-ion collisions provide a unique glimpse at the vector spectral function inside the produced hot and dense medium. After the initial discovery of large medium effects in the early and mid-1990s, recent data are now allowing for quantitative tests of theoretical calculations. At SPS energies, a strongly broadened $\rho$-meson spectral function, due to many-body effects in hot and dense hadronic matter, accounts well for both CERES and NA60 data. Slope analyses of invariant-mass and transverse-momentum spectra corroborate that the observed spectral modifications at low mass originate from temperatures in the vicinity of the QCD transition region, $T_{\rm pc}$$\simeq$160-170\,MeV. If this picture is correct, very similar effects are expected for the low-mass dilepton spectra at collider energies. Very recent STAR data have given first evidence for the universality of the low-mass excess with collision energy, but are hopefully only the beginning of a systematic multi-differential investigation of dilepton observables. The transition from baryon-rich to net-baryon-free matter is a critical test of the current understanding of medium effects driven by baryon plus anti-baryon densities in the vector spectral function. At the same time, the large PHENIX photon-$v_2$ is most naturally associated with radiation from around $T_{\rm pc}$, even at full RHIC energy. The large PHENIX dilepton excess in central Au-Au remains a puzzle which most likely requires a new type of radiation source. We have then argued that the mechanisms underlying these medium effects, namely resonance excitations and (virtual) pion-cloud modifications, find their direct counterpart in the sigma-terms of the heat-bath particles. This is significant as the sigma-terms are at the origin of the reduction of the chiral quark condensate. Even though this mechanism only captures the leading dependence in the scalar density of the medium particles, it is remarkable that this ansatz describes the lattice data for the quark condensate rather well until close to the (pseudo-) transition temperature. Although the resonance gas does not incorporate criticality, the resonance excitations represent interaction contributions which amount to higher orders in the density of the stable pions and nucleons. When evaluating the relation between the in-medium vector spectral function and the decreasing condensate(s) more quantitatively using QCD sum rules, a reasonable consistency is found; studies of the Weinberg sum rules are ongoing. 
Finally, the calculations of Euclidean correlators and their comparison to current lattice data also reveal a common trend toward a rather structureless spectral function. All this suggests that the $\rho$-meson melting scenario is quite consistent with different angles on chiral symmetry restoration. To develop these indications into a quantitative proof, and/or unravel hitherto unknown aspects of chiral restoration, remains a challenge. Experimental information from the collider experiments will be critical in guiding further theoretical efforts, and vice versa. \vspace{1pc} \noindent {\bf Acknowledgment} I thank H.~van Hees, C.~Gale and J.~Wambach for fruitful collaboration, and P.~Hohler and N.~Holt for their recent contributions to the sum rule analyses. This work is supported by the US National Science Foundation under grant no.~PHY-0969394 and by the A.-v.-Humboldt foundation. \section*{References}
{ "timestamp": "2012-10-16T02:01:01", "yymm": "1210", "arxiv_id": "1210.3660", "language": "en", "url": "https://arxiv.org/abs/1210.3660" }
\section{Introduction: basic concepts and notation} Let $S$ denote a closed marginally outer trapped surface (MOTS) in the spacetime $({\cal V},g)$. This means that the outer null expansion vanishes, $\theta_{\vec k}=0$, where here the two future-pointing null vector fields orthogonal to $S$ are denoted by $\vec\ell$ and $\vec k$, the latter is declared to be outer, and we set $\ell^\mu k_{\mu}=-1$ as a convenient normalization. If in addition the other null expansion is non-positive ($\theta_{\vec\ell} \leq 0$), then $S$ is called a marginally trapped surface (MTS). I will also use the concept of outer trapped surface (OTS) when just $\theta_{\vec k} <0$ and of future trapped surface (TS) if both expansions are negative: $\theta_{\vec k}<0$ and $\theta_{\vec\ell} <0$. A hypersurface foliated by M(O)TS is called a marginally (outer) trapped tube, abbreviated to M(O)TT. For further explanations check \cite{AG,AK,S,S1,Wald}. \subsection{Stability operator for MOTS} As proven in \cite{AMS,AMS1}, the variation of the vanishing expansion $\delta_{f\vec n} \theta_{\vec k}$ along any normal direction $f\vec n$ such that $k_\mu n^\mu=1$ reads \begin{equation} \delta_{f\vec n} \theta_{\vec k}=-\Delta_{S}f+2s^B\overline\nabla_{B}f+ f\left(K_{S}-s^B s_{B}+\overline\nabla_{B}s^B-\left.G_{\mu\nu}k^\mu \ell^{\nu}\right|_S -\frac{n^\rho n_{\rho}}{2}\, W\right) \label{deltatheta} \end{equation} where $K_{S}$ is the Gaussian curvature on $S$, $\Delta_{S}$ its Laplacian, $G_{\mu\nu}$ the Einstein tensor, $\overline\nabla$ the covariant derivative on $S$, $s_{B}=k_{\mu}e^\sigma_{B}\nabla_{\sigma}\ell^\mu$ (with $\vec e_{B}$ the tangent vector fields on $S$), and \begin{equation} W\equiv \left.G_{\mu\nu}k^\mu k^{\nu}\right|_S +\sigma^2 \label{W} \end{equation} with $\sigma^2$ the shear scalar of $\vec k$ at $S$. Obviously $W\geq 0$ whenever $\left.G_{\mu\nu}k^\mu k^{\nu}\right|_S\geq 0$ (for instance if the null convergence condition holds \cite{HE}). Under this hypothesis, $W=0$ can only happen if $\left.G_{\mu\nu}k^\mu k^{\nu}\right|_S=\sigma^2=0$. This leads to Isolated Horizons \cite{AK}, and I shall assume $W>0$ throughout. Note that the direction $\vec n$ is selected by fixing its norm: \begin{equation} \vec n =-\vec\ell +\frac{n_{\mu}n^{\mu}}{2}\vec k \label{n} \end{equation} and observe also that the causal character of $\vec n$ is totally unrestricted. The right-hand side in formula (\ref{deltatheta}) defines a differential operator $L_{\vec n}$ acting (linearly) on the function $f$: $\delta_{f\vec n} \theta_{\vec k}\equiv L_{\vec n} f$. $L_{\vec n}$ is an elliptic operator on $S$, called \underline{the stability operator} for the MOTS $S$ in the normal direction $\vec n$. $L_{\vec n}$ is not self-adjoint in general; however, it has a real principal eigenvalue $\lambda_{\vec n}$, and the corresponding (real) eigenfunction $\phi_{\vec n}$ can be chosen to be positive on $S$ \cite{AMS,AMS1}. The (strict) stability of the MOTS $S$ is ruled by the (positivity) non-negativity of the principal eigenvalue $\lambda_{\vec n}$ \cite{AMS,AMS1}. \section{Spherically symmetric spacetimes} In advanced coordinates, spherically symmetric spacetimes have the line-element $$ ds^2=-e^{2\alpha}\left(1-\frac{2m}{r}\right)dv^2+2e^\alpha dvdr+r^2d\Omega^2 \, , $$ where $\alpha$ and $m$ are functions of $v$ and $r$. 
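From this line-element one can read off the null normals and expansions of the round spheres quoted in the next paragraph. The following minimal sympy sketch (an illustration, not part of the original analysis) cross-checks them, using the standard fact that for a round sphere of areal radius $r$ the expansion along a normal $\vec n$ is $\theta_{\vec n}=(2/r)\,n(r)$:

\begin{verbatim}
# Symbolic cross-check of k.l = -1 and of the round-sphere
# expansions for the advanced-coordinates line-element above.
import sympy as sp

v, r = sp.symbols('v r')
alpha = sp.Function('alpha')(v, r)
m = sp.Function('m')(v, r)

# (v,r) block of the metric and the null normals in (v,r) components
g = sp.Matrix([[-sp.exp(2*alpha)*(1 - 2*m/r), sp.exp(alpha)],
               [sp.exp(alpha), 0]])
ell = sp.Matrix([0, -sp.exp(-alpha)])                  # -e^{-a} d_r
k   = sp.Matrix([1, sp.Rational(1, 2)*(1 - 2*m/r)*sp.exp(alpha)])

print(sp.simplify((ell.T * g * k)[0]))   # -> -1  (normalization)
print(sp.simplify(2/r * ell[1]))         # -> -2*exp(-alpha)/r
print(sp.simplify(2/r * k[1]))           # -> exp(alpha)*(1-2m/r)/r
\end{verbatim}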
For each round sphere defined by $ \{r,v\}=$consts., its future null normals are $$ \vec\ell= -e^{-\alpha}\partial_r , \hspace{1cm} \vec k=\partial_v +\frac{1}{2}\left(1-\frac{2m}{r}\right)e^{\alpha}\partial_r $$ so that their null expansions are: $$ \theta^{sph}_{\vec k} =\frac{e^{\alpha}}{r}\left(1-\frac{2m}{r}\right), \hspace{1cm} \theta^{sph}_{\vec\ell} =-\frac{2e^{-\alpha}}{r} $$ The set $\mbox{A3H} : \hspace{1mm} r-2m(r,v)=0\hspace{2mm} (\Leftrightarrow \theta^{sph}_{\vec k}=0)$ is an MTT. A3H is actually the only {\em spherically symmetric} MTT: the only spherically symmetric hypersurface foliated by MTSs ---be they round spheres or not \cite{BS}. The round spheres are untrapped if $r>2m$, and trapped if $r<2m$. One can further prove \cite{BS} that any closed trapped surface cannot be fully contained in a region with $r\geq 2m$, so that all of them must intersect the region $\{r<2m\}$. However, how much must a TS penetrate into $\{r<2m\}$? Let $\varsigma\subset$ A3H be any MT round sphere (i.e., $\theta^{sph}_{\vec k}=0$) defined by $r=r_\varsigma=$const. The variation $\delta_{f \vec n} \theta^{sph}_{\vec k}$ along normal directions simplifies drastically in this case, because $\sigma^2=0$ ($\vec k$ is shear-free) and $s_B=0$. In other words, most of the terms in the variation formula vanish and the variation simplifies to $$ \delta_{f\vec n} \theta^{sph}_{\vec k}=-\Delta_{\varsigma}f+f\left(\frac{1}{r_{\varsigma}^2}-G_{\mu\nu}k^\mu \ell^{\nu}-\frac{1}{2} n_\rho n^\rho \, G_{\mu\nu}k^\mu k^\nu\right) $$ Selecting $f=$constant, the vector $\vec n$ such that the expression enclosed in brackets vanishes produces no variation on $\theta^{sph}_{\vec k}$, meaning that $\vec n$ is tangent to A3H, simply leading to other marginally trapped round spheres on A3H. Let us call such a vector field $\vec m$, so that $\vec m =-\vec\ell +\frac{m_{\mu}m^{\mu}}{2}\vec k $ with \begin{equation} \frac{1}{r_{\varsigma}^2}-\left.G_{\mu\nu}k^\mu \ell^{\nu}\right|_\varsigma - \left.\frac{m_\rho m^\rho}{2}G_{\mu\nu}k^\mu k^{\nu}\right|_\varsigma =0 \label{m} \end{equation} which characterizes A3H. Consider now the parts of $\mbox{A3H}$ with $G_{\mu\nu}k^\mu k^{\nu}> 0$ (i.e., $W>0$). From the properties of $\vec m$ one deduces that the perturbation along $f \vec n$ will enter into the region with trapped round spheres (that is, $\{r<2m\}$) at points with $f(n_{\mu}n^{\mu} - m_{\mu}m^{\mu}) >0$. Note that \begin{equation} (G_{\rho\sigma}k^\rho k^{\sigma}|_{\varsigma})\, \, f(n_{\mu}n^{\mu} - m_{\mu}m^{\mu}) =-2(\Delta_{\varsigma}f+\delta_{f\vec n}\theta^{sph}_{\vec k}). \label{deltatheta1} \end{equation} In order to construct examples of TSs which lie partly in $\{r>2m\}$, consider the case $n_{\mu}n^{\mu} - m_{\mu}m^{\mu} >0$. For this choice the deformed surface enters the region $\{r<2m\}$ at points with $f>0$. Setting $f\equiv a_{0}+\tilde f $ for some as yet undetermined function $\tilde f$ and a constant $a_0$, Eq.(\ref{deltatheta1}) can be split into two parts \begin{eqnarray*} (G_{\rho\sigma}k^\rho k^{\sigma}|_{\varsigma})\, a_{0}(n_{\mu}n^{\mu}- m_{\mu}m^{\mu}) +2 \delta_{f\vec n}\theta^{sph}_{\vec k} = 0, \\ \frac{1}{2}(G_{\rho\sigma}k^\rho k^{\sigma}|_{\varsigma})(n_{\mu}n^{\mu} - m_{\mu}m^{\mu}) = -\frac{\Delta_{\varsigma}\tilde f}{\tilde f} > 0 . \end{eqnarray*} By our assumptions the first of these implies that $\delta_{f \vec n} \theta^{sph}_{\vec k} < 0$ if $a_{0}>0$, so that the deformed surface will be trapped. The second, in turn, is a mild restriction on the function $\tilde f$. 
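This restriction is satisfied, in particular, by Laplacian eigenfunctions, the route taken in the next paragraph; a quick symbolic check (sympy, for illustration only) on the unit round sphere:

\begin{verbatim}
# sympy check: Delta P_l(cos th) = -l(l+1) P_l(cos th) on the unit
# sphere, so -Delta f/f = l(l+1)/r_s^2 > 0 on a sphere of radius
# r_s, while P_l changes sign -- meeting the 'mild restriction'.
import sympy as sp

th = sp.symbols('theta')
for l in (1, 2, 3):
    f = sp.legendre(l, sp.cos(th))              # axially symmetric
    lap = sp.diff(sp.sin(th)*sp.diff(f, th), th)/sp.sin(th)
    print(l, sp.simplify(-lap/f))               # -> 2, 6, 12
\end{verbatim}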
A simple solution is to choose $\tilde f$ to be an eigenfunction of the Laplacian $\Delta_{\varsigma}$, say $\tilde f =c_l P_{l}$ for a fixed $l\in \mathbb{N}$ and constant $c_{l}$, where $P_{l}$ are the Legendre polynomials. Even more interestingly, we are ready to answer the question of how small the fraction of any closed TS that penetrates into $\{r<2m\}$ can be made. The aim is to produce a $C^2$ function $\tilde f$ defined on the sphere (i) obeying the inequality $\displaystyle{-\frac{\Delta_{\varsigma}\tilde f}{\tilde f} > 0}$, and (ii) positive only in a region that we can make arbitrarily small. By choosing a sufficiently small constant $a_0$, requirement (ii) implies that the part of the surface extending outside $\{r>2m\}$ can be made arbitrarily small. To find $\tilde f$ explicitly, introduce stereographic coordinates $\{\rho, \varphi\}$ on the sphere, so that the Laplacian takes the form $ \Delta_{\varsigma} = \Omega^{-1} \left( \partial_\rho^2 + \frac{1}{\rho}\partial_\rho + \frac{1}{\rho^2}\partial_\varphi^2 \right) \ , \hspace{2mm} \Omega = \frac{4r_{\varsigma}^2} {(1+\rho^2)^2} $. Then, a solution for $\tilde f$ is the axially symmetric function \begin{equation} \tilde f (\rho ) = \left\{ \begin{array}{lll} c_1 \left( e^{\frac{1}{2a}(2a-\rho^2)} - 1\right) & & \rho^2 < 4a \\ \\ \frac{8c_1a}{e}\frac{1}{\rho^2} -c_1(1+e^{-1}) & & \rho^2 > 4a \ . \end{array} \right. \end{equation} This function is $C^2$ (and can be further smoothed if necessary), and it is positive only if $\rho^2 < 2a$, that is, on a disk surrounding the origin (the pole) whose size can be chosen at will. It obeys $$ - \frac{\Delta_{\varsigma} \tilde f}{\tilde f} = \left\{ \begin{array}{lll} \frac{\Omega^{-1}}{a^2}\frac{2a-\rho^2} {1-e^{-\frac{1}{2a}(2a-\rho^2)}} & & \rho^2 < 4a \\ \\ \frac{32a\Omega^{-1}}{\rho^4}\frac{\rho^2}{(e+1)\rho^2-8a} \ , & & \rho^2 > 4a \ . \end{array} \right. $$ which is always larger than zero. Thus we have proven the following important and perhaps surprising result \cite{BS}. \begin{theorem}[Bengtsson \& JMMS 2011] In spherically symmetric spacetimes, there are closed f-trapped surfaces (topological spheres) penetrating both sides of the (non-isolated part of the) apparent 3-horizon $\mbox{A3H}\backslash\mbox{A3H}^{iso}$ {\em with arbitrarily small portions} outside the region $\{r>2m\}$. \label{th} \end{theorem} \section{Cores} The (future)-trapped region $\mathscr{T}$ of a spacetime is defined as the set of points $x\in {\cal V}$ such that $x$ lies on a closed (future) TS \cite{BS}. This is a space-time concept, not to be confused with the outer trapped region within spacelike hypersurfaces, which is defined as the union of the interiors of all (bounding) OTS in the given hypersurface \cite{AMS,AM}. I denote by $\mathscr{B}$ the boundary of the future trapped region $\mathscr{T}$: $\mathscr{B} \equiv \partial \mathscr{T}$. Closed TSs are clairvoyant, highly non-local objects \cite{AK,BS}. They cross MTTs and even enter flat portions of the space-time \cite{ABS,BS0,BS}. In conjunction with the non-uniqueness of MTTs \cite{AG,BS}, this poses a fundamental puzzle for the physics of black holes. Although several solutions can be pursued, a popular one is trying to define a preferred MTT. Hitherto, though, there has been no good definition for that. We have put forward a novel strategy \cite{BS}. 
The idea is based on the simple question: {\em what part of the spacetime is absolutely indispensable for the existence of the black hole?} \begin{defi}[Cores of Black Holes] A region $\mathscr{Z}$ is called the {\em core} of the f-trapped region $\mathscr{T}$ if it is a minimal closed connected set that needs to be removed from the spacetime in order to get rid of all closed f-trapped surfaces in $\mathscr{T}$, and such that any point on the boundary $\partial\mathscr{Z}$ is connected to $\mathscr{B}=\partial \mathscr{T}$ in the closure of the remainder. \end{defi} \begin{itemize} \item Here, ``minimal'' means that there is no other set $\mathscr{Z}'$ with the same properties and properly contained in $\mathscr{Z}$. \item The final technical condition states that the excised space-time $({\cal V}\backslash \mathscr{Z},g)$ has the property that $\forall x\in {\cal V}\backslash \mathscr{Z}\cup \partial \mathscr{Z}$ there is a continuous curve $\gamma\subset {\cal V}\backslash \mathscr{Z}\cup \partial \mathscr{Z}$ joining $x$ and $\mathscr{B}$ ($\gamma$ can have zero length if $\mathscr{B}\cap \partial \mathscr{Z}\neq \emptyset$). The reasons why this is needed are explained in \cite{BS}. \end{itemize} In spherically symmetric spacetimes one can prove that the region $\mathscr{Z}\equiv \{r\leq 2m\}$ is a core \cite{BS}. The proof is founded on the previous Theorem \ref{th}. It should be observed that this is an interesting and maybe deep result, for the concept of core is global and requires full knowledge of the future while A3H is quasi-local. It is thus surprising that $\mbox{A3H} = \partial \mathscr{Z}$. Actually, one can further prove that in spherically symmetric spacetimes, $\mathscr{Z}=\{r\leq 2m\}$ is the only spherically symmetric core of $\mathscr{T}$. Therefore, $\partial\mathscr{Z}=\mbox{A3H}$ is the only spherically symmetric boundary of a core. Nevertheless, there exist non-spherically symmetric cores of the f-trapped region in spherically symmetric spacetimes. This implies the non-uniqueness of cores, and of their boundaries \cite{BS}. Still, the identified core $\mathscr{Z}=\{r\leq 2m\}$ might be unique in the sense that its boundary $\partial\mathscr{Z}=\mbox{A3H}$ is an MTT: we do not know whether other cores share this property or not \cite{BS}. To study whether or not Theorem \ref{th} can be generalized to general situations, thereby providing the possibility of selecting a unique MTT as the boundary of a selected core, consider the family of operators, parameterized by a function $z\in C^\infty (S)$, with a structure similar to that of $L_{\vec n}$: $L_{z}f=- \Delta_{S}f+2s^B\overline\nabla_{B}f + z f$. Each $L_{z}$ has a principal {\em real} eigenvalue $\lambda_{z}$ ---which depends on $z$--- and the corresponding eigenfunction $\phi_{z}>0$. For any given $z$ one easily gets $$ \oint_S L_z f=\oint_S \left(2s^B\overline\nabla_B f + zf \right)=\oint_S \left(z-2\overline\nabla_B s^B \right)f $$ in particular for the principal eigenfunction $$ \lambda_z \oint_S \phi_z =\oint_S \left(z-2\overline\nabla_B s^B \right) \phi_z \, . $$ This provides \begin{enumerate} \item a formula for the principal eigenvalue \begin{equation} \lambda_z =\frac{\oint_S \left(z-2\overline\nabla_{B}s^B \right) \phi_z}{\oint_S \phi_z } \, .\label{lambdaz} \end{equation} \item bounds for $\lambda_z$ \begin{equation} \min_{S} \left(z-2\overline\nabla_{B}s^B\right) \leq \lambda_z \leq \max_S \left(z-2\overline\nabla_{B}s^B \right) \, . 
\label{lambdazbounds} \end{equation} \item and that $\lambda_z - \left(z-2\overline\nabla_{B}s^B\right)$ must vanish somewhere on $S$ for all $z$. \end{enumerate} On any MOTS, varying $\theta_{\vec k}=0$ along the direction $\phi_{z}\vec n$ one derives $$ \frac{L_{\vec n} \phi_z}{\phi_{z }} =\lambda_{z}-z+K_S-s^B s_{B}+\overline\nabla_{B}s^B -\left.G_{\mu\nu}k^\mu \ell^{\nu}\right|_S -\frac{n^\rho n_{\rho}}{2}W \, . $$ Thus, {\em whenever} $W\neq 0$ on $S$, one can choose for any $z$ a variation vector $\vec m_{z}=-\vec\ell +M_{z}\vec k$ such that the right-hand side vanishes \begin{equation} M_{z}=\frac{m^\rho_{z}m_{z\rho}}{2}=\frac{1}{W}\left(\lambda_{z}-z+K_{S}-s^B s_{B}+\overline\nabla_{B}s^B -\left.G_{\mu\nu}k^\mu \ell^{\nu}\right|_S\right) \label{M} \end{equation} hence $\delta_{\phi_{z}\vec m_{z}}\, \theta_{\vec k}=0$. Observe that this $\vec m_{z}$ depends on the chosen function $z$. The general variation of $\theta_{\vec k}$ along $\vec m_{z}$ reads \begin{equation} \delta_{f\vec m_{z}}\, \theta_{\vec k}=-\Delta_{S}f+2s^B\overline\nabla_{B}f+f(z-\lambda_{z})=(L_{z}-\lambda_{z})f \label{deltamzeta} \end{equation} so that the stability operator $L_{\vec m_{z}}$ of $S$ along $\vec m_{z}$ is simply $L_{z}-\lambda_{z}$ which obviously has a vanishing principal eigenvalue. The directions $\vec{m}_z$ locally define MOTTs containing any given {\em stable} MOTS $S$ \cite{AMS,AMS1}. These MOTTs will generically be different for different $z$. In fact, given that $\forall z_1,z_2\in C^\infty(S)$, $\vec{m}_{z_1}-\vec{m}_{z_2}=\frac{1}{W}\left(\lambda_{z_1}-z_1-\lambda_{z_2}+z_2\right) \vec k$ one can easily prove that $$ \vec{m}_{z_1}=\vec{m}_{z_2} \Longleftrightarrow z_1 -z_2 =\mbox{const.} $$ Now, for any given $z$ rewrite $\delta_{f\vec n}\theta_{\vec k} =L_{\vec n} f$ using (\ref{M}) so that \begin{equation} \frac{W}{2}f\left(n^\rho n_{\rho}-m^\rho_{z}m_{z\rho}\right)= (L_{z}-\lambda_{z})f -\delta_{f\vec n} \theta_{\vec k} \label{n-m} \end{equation} Consider the particular function $z=2\overline\nabla_{B}s^B$. This may be the natural generalization of the spherically symmetric MTT shown above. Observe that, for such a choice of $z$, and letting $L\equiv L_{2\overline\nabla_{B}s^B}$, its principal eigenvalue (say $\mu$) vanishes, as follows immediately from either (\ref{lambdaz}) or (\ref{lambdazbounds}). Moreover, $$ L f =-\Delta_{S}f +2\overline\nabla_{B}(f s^B) =-\overline\nabla_{B}\left(\overline\nabla^B f-2fs^B \right) $$ so that $L$ is a divergence and thus $\oint_{S}Lf =0, \hspace{3mm} \forall f $. Furthermore, (\ref{n-m}) reduces to \begin{equation} \frac{W}{2}f\left(n^\rho n_{\rho}-m^\rho m_{\rho}\right)= Lf -\delta_{f\vec n}\theta_{\vec k} \label{n-m2} \end{equation} where now the vector $\vec m =-\vec \ell +\frac{m^\rho m_{\rho}}{2}\vec k$ is defined by $$ \frac{m^\rho m_{\rho}}{2}=\frac{1}{W}\left(K_{S}-\overline\nabla_{B}s^B -s^B s_{B}-\left.G_{\mu\nu}k^\mu \ell^{\nu}\right|_S\right) $$ as follows from (\ref{M}). For any other direction $\vec m_z$ defining a local M(O)TT $$ \frac{W}{2}\left(m_z^\rho m_{z\rho}-m^\rho m_{\rho}\right)=\lambda_z -(z-2\overline\nabla_Bs^B) $$ and therefore point (iii) above leads to \begin{result} The local M(O)TT defined by the direction $\vec m$ is such that any other nearby local M(O)TT must interweave it with non-trivial intersections to both of its sides, that is to say, the vector $\vec{m}_z -\vec m$ changes causal character on any of its M(O)TSs. 
\end{result} Concerning cores, I try to follow the same steps as in spherical symmetry, and thus I start with a function $f=a_{0}\phi +\tilde f$, where $a_{0}>0$ is a constant and $\phi >0$ is the principal eigenfunction of $L$. Then (\ref{n-m2}) becomes $$ \frac{W}{2}(a_{0}\phi +\tilde f) \left(n^\rho n_{\rho}-m^\rho m_{\rho}\right)=L\tilde f-\delta_{f\vec n}\theta_{\vec k} $$ which can be split into two parts: \begin{eqnarray} \frac{W}{2}a_{0}\phi \left(n^\rho n_{\rho}-m^\rho m_{\rho}\right)=-\delta_{f\vec n}\theta_{\vec k} \label{first}\\ \frac{W}{2}\tilde f \left(n^\rho n_{\rho}-m^\rho m_{\rho}\right)=L\tilde f \label{second} \end{eqnarray} Eq.\ (\ref{first}) tells us that, since $a_{0}>0$, $\delta_{f\vec n}\theta_{\vec k}<0$ whenever $\vec n$ points ``above'' $\vec m$. Therefore, using (\ref{second}), the problem one needs to solve can be reformulated as follows: {\em Is there a function $\tilde f$ on $S$ such that (i) $L \tilde f/\tilde f \geq \epsilon >0$, (ii) $\tilde f$ changes sign on $S$, (iii) $\tilde f$ is positive in a region as small as desired? } To prove that there are OTSs penetrating both sides of the MOTT it is enough to comply with points (i) and (ii) only. This does happen if $L$ has further real eigenvalues: any such eigenvalue is strictly positive (as $\mu =0$), hence the corresponding eigenfunction must change sign on $S$, because integrating $L\psi = \lambda \psi$ over $S$ gives $\oint_S \psi =0$. However, even if there are no other real eigenvalues the result might still hold in general. In any case, the above leads to the analysis of the condition $L\tilde f/\tilde f >0$ for functions $\tilde f$. \section*{Acknowledgments} Supported by grants FIS2010-15492 (MICINN), GIU06/37 (UPV/EHU), P09-FQM-4496 (J. Andaluc\'{\i}a--FEDER) and UFI 11/55 (UPV/EHU).
{ "timestamp": "2012-10-16T02:02:09", "yymm": "1210", "arxiv_id": "1210.3731", "language": "en", "url": "https://arxiv.org/abs/1210.3731" }
\section{Introduction} Low-rank matrix completion refers to the recovery of an unknown low-rank matrix, exactly or approximately, from under-sampled observations with or without noise. This problem is of considerable interest in many application areas, from machine learning to quantum state tomography. A basic idea for addressing a low-rank matrix completion problem is to minimize the rank of a matrix subject to certain constraints involving the observations. Given that the direct minimization of the rank function is generally NP-hard, a widely-used convex relaxation approach is to replace the rank function with the nuclear norm, as the latter is the convex envelope of the rank function over a unit ball of the spectral norm \cite{Faz02}. \medskip Nuclear norm minimization (NNM) has long been observed to provide a low-rank solution in practice (see, e.g., \cite{MesP97,Mes98,Faz02}). The first theoretical characterization of the minimum rank solution of the NNM was given by Recht, Fazel and Parrilo \cite{RecFP10}, with the help of the concept of the Restricted Isometry Property (RIP). Recognizing that the matrix completion problem does not obey the RIP, Cand{\`e}s and Recht \cite{CanR09} introduced the concept of the incoherence property and proved that most low-rank matrices can be exactly recovered from a surprisingly small number of noiseless observations of randomly sampled entries via the NNM. The bound on the number of sampled entries was later improved to be near-optimal by Cand{\`e}s and Tao \cite{CanT10} through a counting argument. Such a bound was also obtained by Keshavan et al. \cite{KesMO10} for their proposed OptSpace algorithm. Later, Gross \cite{Gro11} sharpened the bound by employing a novel technique from quantum information theory developed in \cite{GroLFBE10}, extending noiseless observations of entries to coefficients relative to any basis. This technique was also adapted by Recht \cite{Rec11}. All the above results focus on noiseless matrix completion. Matrix completion with noise was first addressed by Cand{\`e}s and Plan \cite{CanP10}. More recently, nuclear norm penalized estimators for matrix completion with noise have been well studied by Koltchinskii, Lounici and Tsybakov \cite{KolLT11}, Negahban and Wainwright \cite{NegW12}, and Klopp \cite{Klo12}. Besides the nuclear norm, several other penalties for matrix completion have also been studied in \cite{RohT11,Klo11,Kol12,SreRJ05,FoyS11}. \medskip The NNM has been demonstrated to be a successful approach for encouraging a low-rank solution in many situations. However, the efficiency of the NNM may be challenged under general sampling schemes. For example, the conditions characterized by Bach \cite{Bac08} for rank consistency of the nuclear norm penalized least squares estimator may not be satisfied. In particular, for matrix completion problems, Salakhutdinov and Srebro \cite{SalS10} showed that when certain rows and/or columns are sampled with high probability, the NNM may fail in the sense that the number of observations required for recovery is much larger than in the standard setting of matrix completion. Negahban and Wainwright \cite{NegW12} also pointed out the impact of such heavy sampling schemes on the recovery error bound. As a remedy, a weighted nuclear norm (trace norm), based on row- and column-marginals of the sampling distribution, was suggested in \cite{NegW12, SalS10, FoySSS11} when prior information on the sampling distribution is available.
\medskip When the true matrix possesses a symmetric/Hermitian positive semidefinite structure, the impact of general sampling schemes on the recoverability of the NNM is even more pronounced. In this situation, the nuclear norm reduces to the trace and thus depends only on the diagonal entries, rather than on all entries as the rank function does. As a result, if diagonal entries are heavily sampled, the ability of the NNM to promote a low-rank solution, as well as the recoverability, is severely weakened. This phenomenon is fully reflected in the widely-used correlation matrix completion problem, for which the nuclear norm becomes a constant and completely loses effectiveness for matrix recovery. Another example of particular interest in quantum state tomography is to recover a density matrix of a quantum system from Pauli measurements (see, e.g., \cite{GroLFBE10, FlaGLE12, Wan12}). A density matrix is a Hermitian positive semidefinite matrix of trace one. Obviously, if the constraints of positive semidefiniteness and trace one are simultaneously imposed on the NNM, the nuclear norm completely fails to promote a low-rank solution. Thus, one of the two constraints has to be abandoned in the NNM and then restored in a post-processing stage. In fact, this idea has been explored in \cite{GroLFBE10,FlaGLE12}, and the numerical results there indicated its relative efficiency, though it is at best sub-optimal. \medskip In this paper, with a strong motivation to optimally address the difficulties in correlation and density matrix completion problems, we propose a low-rank matrix completion model with fixed basis coefficients. In our setting, for any given basis of the matrix space, a few basis coefficients of the true matrix are assumed to be fixed due to a certain structure or some prior information, and the rest are allowed to be observed with noise under general sampling schemes. Certainly, one can apply the nuclear norm penalized technique to our model. The challenge is that, as argued earlier, this may not yield the desired low-rank solution with small estimation errors. Here, we introduce a rank-correction step to address this critical issue, provided that a reasonable initial estimator is available. A satisfactory choice of the initial estimator is the nuclear norm penalized estimator or one of its analogues. The rank-correction step solves a convex ``nuclear norm $-$ rank-correction term $+$ proximal term'' regularized least squares problem with fixed basis coefficients (and the possible positive semidefinite constraint). The rank-correction term is a linear term constructed from the initial estimator, and the proximal term is a quadratic term added to ensure the boundedness of the solution to the convex problem. The resulting convex matrix optimization problem can be solved by the efficient algorithms recently developed in \cite{JiaST12, JiaST12_1, JiaST12_2}, even for large-scale cases. \medskip The idea of using a two-stage or even multi-stage procedure is not new for dealing with sparse recovery in the statistical and machine learning literature. The $l_1$-norm penalized least squares method, also known as the Lasso \cite{Tib96}, is very attractive and popular for variable selection in statistics, thanks to the invention of the fast and efficient LARS algorithm \cite{EfrHJT04}. On the other hand, the $l_1$-norm penalty has long been known by statisticians to yield biased estimators that cannot attain estimation optimality \cite{FanL01, FanP04}.
The issue of bias can be overcome by nonconvex penalization methods; see, e.g., \cite{LenLW06,Fan97,Zha10}. A multi-stage procedure naturally arises if the resulting nonconvex problem is solved by an iterative algorithm \cite{ZouL08}. In particular, once a good initial estimator is used, a two-stage estimator is enough to achieve the desired asymptotic efficiency, e.g., the adaptive Lasso proposed by Zou \cite{Zou06}. There are also a number of important papers in this line on variable selection, including \cite{LenLW06,MeiB06,ZhaY07,HuaMZ10,ZhoVB09, Mei07,FanL08}, to name only a few. For a broad overview, the interested reader is referred to the recent survey papers \cite{FanL10, FanLQ11}. It is natural to extend these ideas from the vector case to the matrix case. Recently, Bach \cite{Bac08} made an important step in extending the adaptive Lasso of Zou \cite{Zou06} to the matrix case for seeking rank consistency under general sampling schemes. However, it is not clear how to apply Bach's idea to our matrix completion model with fixed basis coefficients since, as far as we can see, the rate of convergence of the initial estimator required for achieving the asymptotic properties is no longer valid. More critically, there are numerical difficulties in efficiently solving the resulting optimization problems. Such difficulties also occur when the reweighted nuclear norm proposed by Mohan and Fazel \cite{MohF10} is applied to rectangular matrix completion problems. \medskip The rank-correction step proposed in this paper is designed to overcome the above difficulties. This approach is inspired by the majorized penalty method recently proposed by Gao and Sun \cite{GaoS10} for solving structured matrix optimization problems with a low-rank constraint. For our proposed rank-correction step, we provide a non-asymptotic recovery error bound in the Frobenius norm, following an argument similar to that adopted by Klopp in \cite{Klo12}. The obtained error bound indicates that adding the rank-correction term could help to substantially improve the recoverability. As the estimator is expected to be of low rank, we also study an asymptotic property, namely rank consistency in the sense of Bach \cite{Bac08}, under the setting that the matrix size is assumed to be fixed. This setting may not be ideal for analyzing asymptotic properties for matrix completion, but it does allow us to take the crucial first step to gain insights into the limitation of the nuclear norm penalization. Among others, the concept of constraint nondegeneracy for conic optimization problems plays a key role in our analysis. Interestingly, our results on the recovery error bound and rank consistency suggest a consistent criterion for constructing a suitable rank-correction function. In particular, for the correlation and density matrix completion problems, we prove that the rank consistency automatically holds for a broad selection of rank-correction functions. To achieve better recovery performance, the rank-correction step may be applied iteratively several times, especially when the sample ratio is relatively low. Finally, we remark that our results can also be used to provide a theoretical foundation for the majorized penalty method of Gao and Sun \cite{GaoS10} and Gao \cite{Gao10} for structured low-rank matrix optimization problems. \medskip This paper is organized as follows.
In Section \ref{section2}, we introduce the observation model of matrix completion with fixed basis coefficients and the formulation of the rank-correction step. In Section \ref{section3}, we establish a non-asymptotic recovery error bound and discuss the impact of the rank-correction term on recovery. Section \ref{section4} provides necessary and sufficient conditions for rank consistency. Section \ref{section5} is devoted to the construction of the rank-correction function. In Section \ref{section6}, we report numerical results to validate the efficiency of our proposed rank-correction procedure. We conclude this paper in Section \ref{section7}. All proofs are given in the Appendix. \medskip \noindent {\bf Notation.} Here we provide a brief summary of the notation used in this paper. \begin{itemize} \item[$\bullet$\ ] Let $\mathbb{R}^{n_1\times n_2}$ and $\mathbb{C}^{n_1\times n_2}$ denote the space of all $n_1\times n_2$ real and complex matrices, respectively. Let $\mathcal{S}^n(\mathcal{S}_{+}^n,\,\mathcal{S}_{++}^n)$ denote the set of all $n\times n$ real symmetric (positive semidefinite, positive definite) matrices and $\mathcal{H}^n(\mathcal{H}_{+}^n,\,\mathcal{H}_{++}^n)$ denote the set of all $n\times n$ Hermitian (positive semidefinite, positive definite) matrices. Let $\mathbb{S}^n\,(\mathbb{S}^n_+,\,\mathbb{S}^n_{++})$ represent $\mathcal{S}^n\,(\mathcal{S}^n_+,\,\mathcal{S}^n_{++})$ for the real case and $\mathcal{H}^n\,(\mathcal{H}^n_+,\,\mathcal{H}^n_{++})$ for the complex case. \item[$\bullet$\ ] Let $\mathbb{V}^{n_1\times n_2}$ represent $\mathbb{R}^{n_1\times n_2}$, $\mathbb{C}^{n_1\times n_2}$, $\mathcal{S}^n$ or $\mathcal{H}^n$. We define $n:=\min(n_1,n_2)$ for the previous two cases and stipulate $n_1=n_2=n$ for the latter two cases. Let $\mathbb{V}^{n_1\times n_2}$ be endowed with the trace inner product $\langle \cdot, \cdot \rangle$ and its induced norm $\|\cdot\|_{F}$, i.e., $\langle X, Y \rangle:= \text{Re}\big(\text{Tr}(X^{\mathbb{T}}Y)\big)$ for $X, Y\in\mathbb{V}^{n_1\times n_2}$, where ``$\text{Tr}$'' stands for the trace of a matrix and ``$\text{Re}$'' denotes the real part of a complex number. \item[$\bullet$\ ] For the real case, $\mathbb{O}^{n\times k}$ denotes the set of all $n\times k$ real matrices with orthonormal columns, and for the complex case, $\mathbb{O}^{n\times k}$ denotes the set of all $n\times k$ complex matrices with orthonormal columns. When $k=n$, we write $\mathbb{O}^{n\times k}$ as $\mathbb{O}^n$ for short. \item[$\bullet$\ ] The notation $^{\mathbb{T}}$ denotes the transpose for the real case and the conjugate transpose for the complex case. The notation $^\ast$ means the adjoint of an operator. \item[$\bullet$\ ] For any given vector $x$, $\text{Diag}(x)$ denotes a rectangular diagonal matrix of suitable size with the $i$-th diagonal entry being $x_i$. \item[$\bullet$\ ] For any $x \in \mathbb{R}^n$, let $\|x\|_2$ and $\|x\|_\infty$ denote the Euclidean norm and the maximum norm, respectively. For any $X\in \mathbb{V}^{n_1\times n_2}$, let $\|X\|$ and $\|X\|_*$ denote the spectral norm and the nuclear norm, respectively. \item[$\bullet$\ ] The notations $\stackrel{a.s.}{\rightarrow}$, $\stackrel{p}{\rightarrow}$ and $\stackrel{d}{\rightarrow}$ mean almost sure convergence, convergence in probability and convergence in distribution, respectively. We write $x_m = O_p(1)$ if $x_m$ is bounded in probability.
\item[$\bullet$\ ] For any set $K$, let $|K|$ denote the cardinality of $K$ and let $\delta_{K}(x)$ denote the indicator function of $K$, i.e., $\delta_{K}(x)=0$ if $x\in K$, and $\delta_{K}(x)=+\infty$ otherwise. Let $I_n$ denote the $n\times n$ identity matrix. \end{itemize} \medskip \section{Problem formulation}\label{section2} In this section, we formulate the model of the matrix completion problem with fixed basis coefficients and then propose a rank-correction step for solving this class of problems. \subsection{The observation model} \label{subsecobs} Let $\{\Theta_1,\ldots,\Theta_d\}$ be a given orthonormal basis of the real inner product space $\mathbb{V}^{n_1\times n_2}$. Then, any matrix $X\in\mathbb{V}^{n_1\times n_2}$ can be uniquely expressed in the form $X=\sum_{k=1}^d \langle \Theta_k, X \rangle\Theta_k$, where $\langle \Theta_k, X\rangle$ is called the basis coefficient of $X$ relative to $\Theta_k$. Let $\overline{X} \in \mathbb{V}^{n_1\times n_2}$ be the unknown low-rank matrix to be recovered. In some practical applications, for example the correlation and density matrix completion, a few basis coefficients of the unknown matrix $\overline{X}$ are fixed (or assumed to be fixed) due to a certain structure or reliable prior information. Throughout this paper, we let $\alpha\subseteq\{1,2,\ldots,d\}$ denote the set of indices relative to which the basis coefficients are fixed, and $\beta$ denote the complement of $\alpha$ in $\{1,2,\ldots,d\}$, i.e., $\alpha\cap\beta=\emptyset$ and $\alpha\cup\beta = \{1,\ldots, d\}$. We define $d_1: = |\alpha|$ and $d_2 := |\beta|$. \medskip When a few basis coefficients are fixed, one only needs to observe the rest in order to recover the unknown matrix $\overline{X}$. Assume that we are given a collection of $m$ noisy observations of the basis coefficients relative to $\{\Theta_{k}:k\in\beta\}$ in the following form \begin{equation}\label{eqnobsori} y_i = \left\langle \Theta_{\omega_i}, \overline{X}\right\rangle + \nu \xi_i, \quad i = 1, \ldots, m, \end{equation} where $\omega_i$ are the indices randomly sampled from the index set $\beta$, $\xi_i$ are the independent and identically distributed (i.i.d.) noise variables with $\mathbb{E}(\xi_i)=0$ and $\mathbb{E}(\xi^2_i)=1$, and $\nu>0$ controls the magnitude of the noise. Unless otherwise stated, we assume a general weighted sampling (with replacement) scheme with the sampling distribution of the $\omega_i$ as follows. \begin{assumption}\label{asmpprob} The indices $\omega_1,\ldots, \omega_m$ are i.i.d. copies of a random variable $\omega$ that has a probability distribution $\Pi$ over $\{1,\ldots, d\}$ defined by \[ {\rm Pr}(\omega = k) =\left\{\begin{array}{ll} 0 & {\rm if}\ k\in \alpha,\\ p_k>0 & {\rm if}\ k\in \beta. \end{array}\right. \] \end{assumption} Note that each $\Theta_k$, $k\in\beta$, is assumed to be sampled with a positive probability in this sampling scheme. In particular, when the sampling probabilities of all $k\in \beta$ are equal, i.e., $p_k =1/d_2 \ \forall\, k\in \beta$, we say that the observations are sampled uniformly at random. \medskip Next, we present some examples of low-rank matrix completion problems in the above setting. \begin{description} \item[(1)] {\bf Correlation matrix completion.} A correlation matrix is an $n\times n$ real symmetric or Hermitian positive semidefinite matrix with all diagonal entries equal to one. Let $e_i$ be the vector with the $i$-th entry equal to one and all other entries zero.
Then, $\langle e_ie_i^{\mathbb{T}}, \overline{X}\rangle =\overline{X}_{ii}=1 \ \forall\, 1\leq i\leq n$. The recovery of a correlation matrix is based on observations of its entries. For the real case, $\mathbb{V}^{n_1\times n_2} = \mathcal{S}^n$, $d = n(n+1)/2$, $d_1=n$, $$\Theta_{\alpha} = \big\{e_ie_i^{\mathbb{T}} \ | \ 1\leq i \leq n\big\} \quad \text{and} \quad \Theta_\beta = \left\{\frac{1}{\sqrt{2}}(e_ie_j^{\mathbb{T}}+e_je_i^{\mathbb{T}}) \ \Big| \ 1\leq i < j \leq n\right\};$$ and for the complex case, $\mathbb{V}^{n_1\times n_2} = \mathcal{H}^n$, $d=n^2$, $d_1 =n$, $$\Theta_\alpha =\!\big\{e_ie_i^{\mathbb{T}} \ | \ 1\leq i \leq n\big\} \ \ \text{and} \ \ \Theta_\beta\!=\!\left\{\!\frac{1}{\sqrt{2}}(e_ie_j^{\mathbb{T}}\!+e_je_i^{\mathbb{T}}),\frac{\sqrt{-1}}{\sqrt{2}}(e_ie_j^{\mathbb{T}}\!-e_je_i^{\mathbb{T}}) \ \Big| \ i<j\!\right\}.$$ Here, $\sqrt{-1}$ represents the imaginary unit. Of course, one may fix some off-diagonal entries in specific applications. \item[(2)] {\bf Density matrix completion.} A density matrix of dimension $n=2^l$ for some positive integer $l$ is an $n\times n$ Hermitian positive semidefinite matrix with trace one. In quantum state tomography, one aims to recover a density matrix from Pauli measurements (observations of the coefficients relative to the Pauli basis) \cite{GroLFBE10, FlaGLE12}, given by $$\Theta_\alpha = \left\{\frac{1}{\sqrt{n}}I_n\right\} \ \text{and} \ \Theta_\beta = \left\{\frac{1}{\sqrt{n}} (\sigma_{s_1}\otimes \cdots \otimes \sigma_{s_l}) \ \Big | \ (s_1,\ldots,s_l) \in \{0,1,2,3\}^l\right\}\Big \backslash \Theta_\alpha,$$ where ``$\otimes$'' means the Kronecker product of two matrices and $$\sigma_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\ \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \ \sigma_2 = \begin{pmatrix} 0 & -\sqrt{-1} \\ \sqrt{-1} & 0 \end{pmatrix}, \ \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$ are the Pauli matrices. In this setting, $\mathbb{V}^{n_1\times n_2} = \mathcal{H}^{n}$, $\text{Tr}(\overline{X}) = \langle I_n, \overline{X} \rangle = 1$, $d = n^2$, and $d_1 = 1$. (A numerical sketch of this basis is given right after this list.) \item[(3)] {\bf Rectangular matrix completion.} Assume that a few entries of a rectangular matrix are known and let $\mathcal{I}$ be the index set of these entries. One aims to recover this rectangular matrix from observations of the remaining entries. In the real case, $\mathbb{V}^{n_1\times n_2} = \mathbb{R}^{n_1\times n_2}$, $d= n_1n_2$, $d_1 = |\mathcal{I}|$, $$\Theta_\alpha = \big\{e_ie_j^{\mathbb{T}} \ | \ (i,j) \in \mathcal{I}\big\} \quad \text{and} \quad \Theta_\beta = \big\{e_ie_j^{\mathbb{T}} \ | \ (i,j) \notin \mathcal{I}\big\};$$ and in the complex case, $\mathbb{V}^{n_1\times n_2} = \mathbb{C}^{n_1\times n_2}$, $d= 2n_1n_2$, $d_1 = 2|\mathcal{I}|$, $$\Theta_\alpha = \big\{e_ie_j^{\mathbb{T}}, \sqrt{-1}e_ie_j^{\mathbb{T}} \ | \ (i,j) \in \mathcal{I}\big\} \quad \text{and} \quad \Theta_\beta = \big\{e_ie_j^{\mathbb{T}}, \sqrt{-1}e_ie_j^{\mathbb{T}} \ | \ (i,j) \notin \mathcal{I}\big\}.$$ \end{description}
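As a concrete illustration of the density matrix setting in example (2), the following minimal sketch (our addition, not part of the formal development; it assumes Python with NumPy, and all names in it are ours) constructs the normalized Pauli basis for $l$ qubits and verifies its orthonormality with respect to the trace inner product.

\begin{verbatim}
import itertools
import numpy as np

# The four Pauli matrices sigma_0, sigma_1, sigma_2, sigma_3.
sigma = [
    np.array([[1, 0], [0, 1]], dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def pauli_basis(l):
    """Normalized Pauli basis of H^n for n = 2**l: the n**2 matrices
    (1/sqrt(n)) * sigma_{s_1} (x) ... (x) sigma_{s_l}."""
    n = 2 ** l
    basis = []
    for s in itertools.product(range(4), repeat=l):
        Theta = np.array([[1.0 + 0.0j]])
        for k in s:
            Theta = np.kron(Theta, sigma[k])
        basis.append(Theta / np.sqrt(n))
    return basis

l = 2
n = 2 ** l
basis = pauli_basis(l)               # d = n**2 = 16 matrices for l = 2
# Orthonormality under <X, Y> = Re Tr(X^* Y): the Gram matrix is I.
G = np.array([[np.trace(A.conj().T @ B).real for B in basis]
              for A in basis])
assert np.allclose(G, np.eye(n ** 2))
\end{verbatim}

The first basis element produced is $\Theta_\alpha = I_n/\sqrt{n}$, so fixing the coefficient $\langle \Theta_\alpha, \overline{X}\rangle = 1/\sqrt{n}$ is exactly the trace-one constraint. Now we introduce some linear operators that are frequently used in the subsequent sections.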
For any given index set $\pi \subseteq \{1,\ldots,d\}$, say $\alpha$ and $\beta$, we define the linear operators $\mathcal{R}_\pi$: $\mathbb{V}^{n_1\times n_2} \rightarrow \mathbb{R}^{|\pi|}$ and $\mathcal{P}_\pi$: $\mathbb{V}^{n_1\times n_2} \rightarrow \mathbb{V}^{n_1\times n_2}$, respectively, by \begin{equation}\label{operator-R-P} \mathcal{R}_\pi(X):= \big(\langle \Theta_k, X \rangle \big)^{\mathbb{T}}_{k\in\pi}\ \ {\rm and}\ \ \mathcal{P}_\pi(X) := \sum_{k\in \pi} \langle \Theta_k, X \rangle \Theta_k, \quad\ X \in \mathbb{V}^{n_1\times n_2}. \end{equation} It is easy to see that $\mathcal{P}_\pi=\mathcal{R}_\pi^*\mathcal{R}_\pi$. Define the self-adjoint operators $\mathcal{Q}_\beta: \mathbb{V}^{n_1\times n_2} \rightarrow \mathbb{V}^{n_1\times n_2}$ and $\mathcal{Q}_\beta^\dag: \mathbb{V}^{n_1\times n_2} \rightarrow \mathbb{V}^{n_1\times n_2}$ associated with the sampling probabilities, respectively, by \begin{equation}\label{operator-Q} \mathcal{Q}_\beta(X): = \sum_{k\in \beta} p_k \langle \Theta_k, X \rangle \Theta_k \quad \text{and} \quad \mathcal{Q}_\beta^\dag (X): = \sum_{k\in \beta} \frac{1}{p_k} \langle \Theta_k, X \rangle \Theta_k, \quad X \in \mathbb{V}^{n_1\times n_2}. \end{equation} One may easily verify that the operators $\mathcal{Q}_\beta$, $\mathcal{Q}_\beta^\dag$ and $\mathcal{P}_\beta$ satisfy the following relations: \begin{equation}\label{relation-operator} \mathcal{Q}_\beta\mathcal{Q}_\beta^\dag = \mathcal{Q}_\beta^\dag \mathcal{Q}_\beta = \mathcal{P}_\beta,\ \ \mathcal{P}_\beta\mathcal{Q}_\beta =\mathcal{Q}_\beta\mathcal{P}_\beta= \mathcal{Q}_\beta,\ \ \mathcal{Q}_\beta^\dag\mathcal{R}_{\alpha}^* = 0. \end{equation} Let $\Omega$ be the multiset of all the sampled indices from the index set $\beta$, i.e., $\Omega =\{\omega_1, \ldots, \omega_m\}$. With a slight abuse of notation, we define the sampling operator $\mathcal{R}_\Omega$: $\mathbb{V}^{n_1\times n_2} \rightarrow \mathbb{R}^m$ associated with $\Omega$ by $$\mathcal{R}_\Omega(X) := \big(\langle \Theta_{\omega_1}, X \rangle, \ldots, \langle \Theta_{\omega_m}, X \rangle \big)^{\mathbb{T}}, \quad X \in \mathbb{V}^{n_1\times n_2}.$$ Then, the observation model (\ref{eqnobsori}) can be expressed in the following vector form \begin{equation}\label{eqnobs} y = \mathcal{R}_\Omega(\overline{X})+\nu\xi, \end{equation} where $y = (y_1,\ldots,y_m)^{\mathbb{T}} \in \mathbb{R}^m$ and $\xi =\!(\xi_1,\ldots, \xi_m)^{\mathbb{T}} \in \mathbb{R}^m$ denote the observation vector and the noise vector, respectively. \medskip \subsection{The rank-correction step} In many situations, the nuclear norm is able to encourage a low-rank solution for matrix recovery, but its efficiency may be challenged if the observations are sampled at random according to a general distribution such as the one considered in \cite{SalS10}. The setting of fixed basis coefficients in our matrix completion model can be regarded as an extreme sampling scheme. In particular, for the correlation and density matrix completion, the nuclear norm completely loses its efficiency since in this case it reduces to a constant. In order to overcome the shortcomings of the nuclear norm penalization, we propose a rank-correction step to generate an estimator in pursuit of a better recovery performance.
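Before turning to the rank-correction step itself, we pause to make the observation model concrete. The following minimal sketch (ours; it assumes Python with NumPy, and the small real symmetric instance with uniform sampling is a simplifying choice, not prescribed by the model) implements $\mathcal{R}_\Omega$, its adjoint $\mathcal{R}_\Omega^*$, and the vector-form model (\ref{eqnobs}).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def inner(A, B):
    """Trace inner product <A, B> for real symmetric matrices."""
    return np.sum(A * B)

def R_Omega(X, omega, Theta_beta):
    """Sampling operator R_Omega(X) = (<Theta_{omega_1}, X>, ...)."""
    return np.array([inner(Theta_beta[k], X) for k in omega])

def R_Omega_adj(z, omega, Theta_beta, shape):
    """Adjoint R_Omega^*(z) = sum_i z_i * Theta_{omega_i}."""
    X = np.zeros(shape)
    for zi, k in zip(z, omega):
        X = X + zi * Theta_beta[k]
    return X

# Toy instance: rank-1 symmetric 4x4 true matrix; the diagonal plays
# the role of the fixed coefficients (alpha), off-diagonals are beta.
n = 4
u = rng.standard_normal(n)
Xbar = np.outer(u, u)
Theta_beta = []
for i in range(n):
    for j in range(i + 1, n):
        T = np.zeros((n, n))
        T[i, j] = T[j, i] = 1.0 / np.sqrt(2.0)
        Theta_beta.append(T)
d2 = len(Theta_beta)
p = np.full(d2, 1.0 / d2)            # uniform sampling: p_k = 1/d2
m, nu = 50, 0.1
omega = rng.choice(d2, size=m, p=p)  # i.i.d. indices, with replacement
xi = rng.standard_normal(m)          # noise with E(xi)=0, E(xi^2)=1
y = R_Omega(Xbar, omega, Theta_beta) + nu * xi      # the model (eqnobs)
# Spectral norm of (1/m) R_Omega^*(xi), the noise quantity that later
# drives the choice of the regularization parameter rho_m.
print(np.linalg.norm(R_Omega_adj(xi, omega, Theta_beta, (n, n)) / m, 2))
\end{verbatim}

The quantity printed at the end, $\big\|\frac{1}{m}\mathcal{R}_\Omega^*(\xi)\big\|$, is precisely the term that governs the regularization parameter in Section \ref{section3}.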
\medskip For convenience of discussion, in the rest of this paper, for any given $X\in\mathbb{V}^{n_1\times n_2}$, we denote by $\sigma(X)=\big(\sigma_1(X), \ldots, \sigma_n(X)\big)^{\mathbb{T}}$ the singular value vector of $X$ arranged in nonincreasing order and define $$\mathbb{O}^{n_1,n_2}(X) :=\big\{(U,V) \in \mathbb{O}^{n_1}\times \mathbb{O}^{n_2}\mid X = U\text{Diag}\big(\sigma(X)\big)V^\mathbb{T}\big\}.$$ In particular, when $\mathbb{V}^{n_1\times n_2} = \mathbb{S}^n$, we denote by $\lambda(X)=\big(\lambda_1(X), \ldots, \lambda_n(X)\big)^{\mathbb{T}} $ the eigenvalue vector of $X$ with $|\lambda_1(X)|\geq \ldots \geq |\lambda_n(X)|$ and define $$\mathbb{O}^n (X) :=\big\{P \in \mathbb{O}^n \mid X = P\text{Diag}(\lambda(X))P^\mathbb{T}\big\}.$$ Before stating our rank-correction step, we introduce the concept of the spectral operator associated with a symmetric vector-valued function. A function $f: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is said to be symmetric if $$f(x) = Q^{\mathbb{T}} f(Qx) \quad \forall \, \text{signed permutation matrix} \ Q \ \text{and} \ x \in \mathbb{R}^n,$$ where a signed permutation matrix is a real matrix that contains exactly one nonzero entry $1$ or $-1$ in each row and column and $0$ elsewhere. From this definition, we see that $$f_i(x) = 0 \quad \text{if} \ x_i=0.$$ The spectral operator $F: \mathbb{V}^{n_1\times n_2}\rightarrow \mathbb{V}^{n_1\times n_2}$ associated with the function $f$ is defined by \begin{align}\label{Foperator} F(X): = U \text{Diag}\big(f(\sigma(X))\big) V^\mathbb{T}, \end{align} where $(U,V)\in \mathbb{O}^{n_1,n_2}(X)$ and $X \in \mathbb{V}^{n_1\times n_2}$. From \cite[Theorems 3.1 \& 3.6]{Din12}, the symmetry of $f$ guarantees the well-definedness of the spectral operator $F$, and the continuous differentiability of $f$ implies the continuous differentiability of $F$. When $\mathbb{V}^{n_1\times n_2}=\mathbb{S}^n$, we have that $$F(X) = P\text{Diag}\big(f(|\lambda(X)|)\big)\big(P\text{Diag}(s(X))\big)^\mathbb{T},$$ where $P \in \mathbb{O}^n(X)$ and $s(X)\in\mathbb{R}^n$ with the $i$-th component $s_i(X)=-1$ if $\lambda_i(X)<0$ and $s_i(X)=1$ otherwise. In particular, for the positive semidefinite case, both $U$ and $V$ in (\ref{Foperator}) reduce to $P$. For more details on spectral operators, the reader may refer to the PhD thesis \cite{Din12}. \medskip Given a spectral operator $F\!:\mathbb{V}^{n_1\times n_2} \rightarrow \mathbb{V}^{n_1\times n_2}$ and an initial estimator $\widetilde{X}_m$ for the unknown matrix $\overline{X}$, say the nuclear norm penalized least squares estimator or one of its analogues, our rank-correction step is to solve the convex optimization problem \begin{equation}\label{eqnrcs} \begin{aligned} \min_{X\in \mathbb{V}^{n_1\times n_2}}&\ \frac{1}{2m} \left\|y - \mathcal{R}_\Omega(X)\right\|_2^2 + \rho_m \left(\|X\|_* - \langle F(\widetilde{X}_m),X\rangle + \frac{\gamma_m}{2}\|X-\widetilde{X}_m\|_F^2\right)\\ \text{s.t.}\ \ \ & \mathcal{R}_\alpha(X) = \mathcal{R}_\alpha(\overline{X}), \end{aligned} \end{equation} where $\rho_m > 0$ and $\gamma_m \geq 0$ are the regularization parameters depending on the number of observations. The last quadratic proximal term is added to guarantee the boundedness of the solution to (\ref{eqnrcs}). If the function $\|X\|_* - \langle F(\widetilde{X}_m), X \rangle$ is level-bounded, one may simply set $\gamma_m = 0$. Clearly, when $F \equiv 0$ and $\gamma_m\!=0$, the problem (\ref{eqnrcs}) reduces to the nuclear norm penalized least squares problem.
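To illustrate how a spectral operator is evaluated in practice, here is a minimal sketch (ours, assuming Python with NumPy; the particular function $f$ below is a hypothetical choice for illustration only and is not the rank-correction function constructed in Section \ref{section5}).

\begin{verbatim}
import numpy as np

def spectral_operator(X, f):
    """F(X) = U Diag(f(sigma(X))) V^T, cf. (Foperator), for real X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(f(s)) @ Vt

def f_example(x, eps=1e-2):
    """A symmetric componentwise choice f_i(x) = x_i / (|x_i| + eps):
    it satisfies f(x) = Q^T f(Qx) for signed permutations Q, and
    f_i(x) = 0 whenever x_i = 0."""
    return x / (np.abs(x) + eps)

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
X = A @ A.T                          # a rank-3 positive semidefinite matrix
FX = spectral_operator(X, f_example)
# Singular values of F(X): close to 1 on the rank of X, 0 elsewhere.
print(np.round(np.linalg.svd(FX, compute_uv=False), 3))
\end{verbatim}

For this choice, $F(X)$ approximates $U_1 V_1^{\mathbb{T}}$ when applied to a matrix close to $\overline{X}$; the error bounds of Section \ref{section3} explain why such behavior of $F(\widetilde{X}_m)$ is desirable.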
In the sequel, we call $-\langle F(\widetilde{X}_m), X \rangle$ the rank-correction term. If the true matrix is known to be positive semidefinite, we add the constraint $X\in\mathbb{S}^n_+$ to (\ref{eqnrcs}). Thus, the rank-correction step is to solve the convex conic optimization problem \begin{equation}\label{eqnrcspos} \begin{aligned} \min_{X\in\mathbb{S}^n} & \ \frac{1}{2m} \left\|y - \mathcal{R}_\Omega(X)\right\|_2^2 + \rho_m \left(\langle I - F(\widetilde{X}_m),X\rangle + \frac{\gamma_m}{2}\|X-\widetilde{X}_m\|_F^2\right)\\ \text{s.t.} \ & \ \mathcal{R}_\alpha(X) = \mathcal{R}_\alpha(\overline{X}),\ \ X\in\mathbb{S}^n_+. \end{aligned} \end{equation} For this case, we assume that the initial estimator $\widetilde{X}_m$ belongs to $\mathbb{S}_+^n$, since projecting an estimator onto $\mathbb{S}^n_+$ can only bring it closer to the true matrix $\overline{X}$. \medskip The rank-correction step above is inspired by the majorized penalty approach recently proposed by Gao and Sun \cite{GaoS10} for solving the rank constrained matrix optimization problem: \begin{equation}\label{major1} \min_{X \in \mathcal{C}} \big\{h(X):\ {\rm rank}(X) \leq r\big\}, \end{equation} where $r \geq 1$, $h:\mathbb{V}^{n_1\times n_2} \rightarrow \mathbb{R}$ is a given continuous function and $\mathcal{C} \subseteq \mathbb{V}^{n_1\times n_2}$ is a closed convex set. Note that for any $X \in \mathbb{V}^{n_1\times n_2}$, the constraint $\text{rank}(X)\leq r$ is equivalent to $$ 0=\sigma_{r+1}(X)+\cdots + \sigma_n(X) = \|X\|_* - \|X\|_{(r)},$$ where $\|X\|_{(r)}:=\sigma_1(X)+\cdots + \sigma_r(X)$ denotes the Ky Fan $r$-norm. The central idea of the majorized penalty approach is to solve the following penalized version of (\ref{major1}): $$ \min_{X \in \mathcal{C}}\ h(X) + \rho\big(\|X\|_*-\|X\|_{(r)}\big),$$ where $\rho>0$ is the penalty parameter. With the current iterate $X^k$, the majorized penalty approach yields the next iterate $X^{k+1}$ by solving the convex optimization problem \begin{equation}\label{eqnmpa} \min_{X\in\mathcal{C}}\ \widehat{h}^k(X) + \rho\Big(\|X\|_* - \langle G^k, X \rangle +\frac{\gamma_k}{2}\|X-X^k\|_F^2\Big), \end{equation} where $\gamma_k \geq 0$, $G^k$ is a subgradient of the convex function $\|\cdot\|_{(r)}$ at $X^k$, and $\widehat{h}^k$ is a convex majorization function of $h$ at $X^k$. Comparing with (\ref{eqnrcs}), one may notice that our proposed rank-correction step is close to one step of the majorized penalty approach. \medskip Due to the structured randomness of matrix completion, we expect that the estimator generated from the rank-correction step possesses some favorable properties for recovery. The key issue is how to construct the rank-correction function $F$ to make such improvements possible. In the next two sections, we provide theoretical support for our proposed rank-correction step, from which some important guidelines on the construction of $F$ can be captured. \medskip Henceforth, we let $\widehat{X}_m$ denote the estimator generated from the rank-correction step (\ref{eqnrcs}) or (\ref{eqnrcspos}) for the corresponding cases and let $r=\text{rank}(\overline{X})\geq 1$. For any $X\in\mathbb{V}^{n_1\times n_2}$ and any $(U,V)\in \mathbb{O}^{n_1,n_2}(X)$, we write $U = [U_1 \ \ U_2 ]$ and $V =[V_1 \ \ V_2 ]$ with $U_1 \in \mathbb{O}^{n_1\times r}$, $U_2 \in \mathbb{O}^{n_1\times (n_1-r)}$, $V_1 \in \mathbb{O}^{n_2\times r}$ and $V_2 \in \mathbb{O}^{n_2\times (n_2-r)}$.
In particular, for any $X\in\mathbb{S}_+^n$ and any $P \in \mathbb{O}^n(X)$, we write $P = [P_1 \ \ P_2]$ with $P_1 \in \mathbb{O}^{n \times r}$ and $P_2 \in \mathbb{O}^{n\times (n-r)}$. \medskip \section{Error bounds}\label{section3} In this section, we aim to derive a recovery error bound in the Frobenius norm for the rank-correction step and discuss the impact of the rank-correction term on the obtained bound. The following analysis focuses on the rectangular case. All the results obtained in this section are applicable to the positive semidefinite case since adding more prior information can only improve recoverability. \medskip We first introduce the orthogonal decomposition $\mathbb{V}^{n_1\times n_2}=T\oplus T^\perp$ with \begin{gather*} T:=\Big\{X\in \mathbb{V}^{n_1\times n_2} \ |\ X = X_1 + X_2\ {\rm with}\ {\rm col}(X_1)\subseteq {\rm col}(\overline{X}), {\rm row}(X_2)\subseteq {\rm row}(\overline{X})\Big\},\\ T^{\bot}:=\Big\{X\in \mathbb{V}^{n_1\times n_2}\ |\ {\rm row}(X)\perp {\rm row}(\overline{X})\ {\rm and}\ {\rm col}(X)\perp {\rm col}(\overline{X})\Big\}, \end{gather*} where ${\rm row}(X)$ and ${\rm col}(X)$ denote the row space and column space of the matrix $X$, respectively. Let $\mathcal{P}_{T}:\mathbb{V}^{n_1\times n_2} \rightarrow \mathbb{V}^{n_1\times n_2}$ and $\mathcal{P}_{T^\perp}: \mathbb{V}^{n_1\times n_2} \rightarrow \mathbb{V}^{n_1\times n_2}$ be the orthogonal projection operators onto the subspaces $T$ and $T^\perp$, respectively. It is not hard to verify that \begin{equation}\label{Operator-PT} \mathcal{P}_T(X) = \overline{U}_1\overline{U}_1^\mathbb{T}X + X \overline{V}_1\overline{V}_1^\mathbb{T} - \overline{U}_1\overline{U}_1^\mathbb{T}X \overline{V}_1 \overline{V}_1^\mathbb{T} \ \ \text{and}\ \ \mathcal{P}_{T^\perp}(X) = \overline{U}_2\overline{U}_2^\mathbb{T} X \overline{V}_2\overline{V}_2^\mathbb{T} \end{equation} for any $X\in\mathbb{V}^{n_1\times n_2}$ and $(\overline{U},\overline{V})\in\mathbb{O}^{n_1,n_2}(\overline{X})$. Define $a_m$ and $b_m$, respectively, by \begin{equation}\label{eqndelalpbet} a_m : = \big\|\overline{U}_1\overline{V}_1^\mathbb{T} - \mathcal{P}_T\big(F(\widetilde{X}_m)+\gamma_m\widetilde{X}_m\big)\big\| \ \ \text{and}\ \ b_m: = 1- \big\|\mathcal{P}_{T^\perp} \big(F(\widetilde{X}_m)+\gamma_m\widetilde{X}_m\big)\big\|. \end{equation} \medskip Note that the first term in the objective function of (\ref{eqnrcs}) can be rewritten as $$\frac{1}{2m} \left\|y-\mathcal{R}_\Omega(X)\right\|_2^2 = \frac{1}{2m}\left\|\mathcal{R}_\Omega(X-\overline{X})\right\|_2^2 - \frac{\nu}{m}\left\langle \mathcal{R}_\Omega^*(\xi), X\right\rangle.$$ Using the optimality of $\widehat{X}_m$ for the problem (\ref{eqnrcs}), we obtain the following result. \begin{theorem}\label{thmopbd} Assume that $\|\mathcal{P}_{T^\perp}(F(\widetilde{X}_m)\!+\gamma_m\widetilde{X}_m)\|\!<1$. For any given $\kappa>1$, if \begin{equation}\label{eqndefrho} \rho_m \geq \frac{\kappa \nu}{b_m}\Big\|\frac{1}{m}\mathcal{R}_\Omega^*(\xi)\Big\|, \end{equation} then the following inequality holds: \begin{equation}\label{eqnopbd} \frac{1}{2m} \big\|\mathcal{R}_\Omega(\widehat{X}_m\!-\overline{X})\big\|_2^2 \le \!\sqrt{2r} \Big(a_m\!+\!\frac{b_m}{\kappa}\Big)\rho_m\|\widehat{X}_m\!-\overline{X}\|_F +\frac{\rho_m\gamma_m}{2} \left(\!\|\overline{X}\|_F^2 -\!\|\widehat{X}_m\|_F^2\right).
\end{equation} \end{theorem} Theorem \ref{thmopbd} shows that, to derive an error bound on $\|\widehat{X}_m-\overline{X}\|_{F}$, we only need to establish the relation between $\|\widehat{X}_m-\overline{X}\|_F^2$ and $\frac{1}{m}\|\mathcal{R}_\Omega(\widehat{X}_m-\overline{X})\|_{2}^2$. It is well known that the sampling operator $\mathcal{R}_\Omega$ does not satisfy the RIP, but it has a similar property with high probability under certain conditions (see, e.g., \cite{NegW12, KolLT11, Klo12, Liu11}). To derive such a property, we impose a bound restriction on the true matrix $\overline{X}$ in the form of $\|\mathcal{R}_\beta(\overline{X})\|_\infty\leq c$. This condition is very mild since a bound is often known in applications such as the correlation and density matrix completion. Correspondingly, we add the bound constraint $\|\mathcal{R}_\beta(X)\!\|_\infty\!\leq c$ to the problem (\ref{eqnrcs}) in the rank-correction step. Since the feasible set is bounded in this case, we simply set $\gamma_m =0$ and let $\widehat{X}_m^{c}$ denote the estimator generated from the rank-correction step in this case. \medskip The above boundedness setting is similar to the one adopted by Klopp \cite{Klo12} for the nuclear norm penalized least squares estimator. A slight difference is that the upper bound is imposed on the basis coefficients of $\overline{X}$ relative to $\{\Theta_{k}:k\in\beta\}$ rather than all the entries of $\overline{X}$. It is easy to see that if the bound is not too tight, the estimator $\widehat{X}_m^{c}$ is the same as $\widehat{X}_m$. Therefore, we next derive the recovery error bound of $\widehat{X}_m^c$ instead of $\widehat{X}_m$, by following Klopp's arguments in \cite{Klo12}, which are also in line with the work of Negahban and Wainwright \cite{NegW12}. \medskip Let $\mu_1$ be a constant controlling the smallest sampling probability of the observations, in the sense that \begin{equation}\label{pk} p_k \geq (\mu_1 d_2)^{-1}\quad \forall\, k\in\beta. \end{equation} It follows from Assumption \ref{asmpprob} that $\mu_1 \geq 1$ and in particular $\mu_1=1$ for the uniform sampling. Note that the magnitude of $\mu_1$ does not depend on $d_2$ or the matrix size. By the definition of $\mathcal{Q}_\beta$, we then have \begin{equation}\label{eqndefiot}\langle \mathcal{Q}_\beta(\Delta), \Delta \rangle \geq (\mu_1 d_2)^{-1}\|\Delta\|_F^2 \quad \forall \, \Delta \in \{\Delta \in \mathbb{V}^{n_1\times n_2} \mid \mathcal{R}_\alpha(\Delta) = 0\}. \end{equation} Let $\{\epsilon_1,\ldots, \epsilon_m\}$ be an i.i.d. Rademacher sequence, i.e., an i.i.d. sequence of Bernoulli random variables taking the values $1$ and $-1$ with probability $1/2$. Define \begin{equation}\label{defvaritheta} \vartheta_m := \mathbb{E} \Big\|\frac{1}{m}\mathcal{R}_\Omega^*(\epsilon)\Big\|\ \ {\rm with}\ \ \epsilon= (\epsilon_1,\ldots,\epsilon_m)^{\mathbb{T}}. \end{equation} Then, we can obtain a result similar to \cite[Lemma 12]{Klo12} by showing that the sampling operator $\mathcal{R}_\Omega$ satisfies some approximate RIP for the matrices in the following set \begin{align*} \mathcal{C}(r): = & \bigg\{\Delta \in \mathbb{V}^{n_1\times n_2}\mid\ \mathcal{R}_\alpha(\Delta)=0,\ \|\mathcal{R}_\beta(\Delta)\|_\infty=1,\ \|\Delta\|_* \leq \sqrt{r}\|\Delta\|_F,\\ & \qquad\qquad\qquad\qquad\qquad\qquad \langle \mathcal{Q}_\beta(\Delta), \Delta \rangle \geq \sqrt{\frac{64\log(n_1+n_2)}{\log(2)m}}\bigg\}.
\end{align*} \begin{lemma}\label{lemiso} For all matrices $\Delta \in \mathcal{C}(r)$, with probability at least $1\!-2/(n_1\!+n_2)$, we have \begin{equation*} \frac{1}{m}\|\mathcal{R}_\Omega(\Delta)\|_2^2 \geq \frac{1}{2} \langle \mathcal{Q}_\beta(\Delta),\Delta\rangle - 128\mu_1 d_2 r \vartheta_m^2. \end{equation*} \end{lemma} \medskip Now, by combining Theorem \ref{thmopbd} and Lemma \ref{lemiso}, we obtain the following result. \begin{theorem}\label{thmbdmid} Assume that $\|\mathcal{P}_{T^\perp} (F(\widetilde{X}_m))\|<1$ and $\|\mathcal{R}_\beta(\overline{X})\|_\infty \le c$ for some constant $c$. If $\rho_m$ is chosen to satisfy (\ref{eqndefrho}), then there exists a numerical constant $C$ such that \begin{align*} \frac{\|\widehat{X}_m^c \!- \!\overline{X}\|_F^2}{d_2}\! \leq \! C \max\!\Bigg\{\!\mu_1^2 d_2 r \!\left(\!\Big(a_m \!+ \! \frac{b_m}{\kappa}\Big)^2\!\rho_m^2\! +\! \frac{\kappa^2(a_m\!+b_m)^2}{(\kappa-1)^2b_m^2}c^2 \vartheta_m ^2 \!\right)\!, c^2 \mu_1 \sqrt{\frac{\log(n_1\!+\!n_2)}{m}} \Bigg\} \end{align*} with probability at least $1-{2}/{(n_1\!+n_2)}$. \end{theorem} \medskip In order to choose a parameter $\rho_m$ such that (\ref{eqndefrho}) holds, we need to estimate $\|\frac{1}{m}\mathcal{R}_\Omega^*(\xi)\|$. For this purpose, we make the following assumption on the noise. \begin{assumption}\label{asmpnoi} The i.i.d. noise variables $\xi_i$ are sub-exponential, i.e., there exist positive constants $c_1, c_2$ and $c_3$ such that for all $t>0$, ${\rm Pr}(|\xi_i| \geq t)\leq c_1\exp(-c_2t^{c_3}).$ \end{assumption} The noncommutative Bernstein inequality is a useful tool for the study of matrix completion problems. It provides bounds on the probability that the sum of random matrices deviates from its mean in the operator norm (see, e.g., \cite{Rec11,Tro11,Gro11}). Recently, the noncommutative Bernstein inequality was extended by replacing bounds on the operator norm of matrices with bounds on Orlicz norms (see \cite{Kol12, KolLT11}). Given any $s\geq 1$, the $\psi_s$ Orlicz norm of a random variable $\theta$ is defined by $$\|\theta\|_{\psi_s}:=\inf\big\{t>0 \mid\ \mathbb{E}\exp(|\theta|^s / t^s) \leq 2\big\}.$$ The Orlicz norms are useful for characterizing the tail behavior of random variables. The following noncommutative Bernstein inequality is taken from \cite[Corollary 2.1]{Kol11}. \begin{proposition}\label{propnbi} Let $Z_1,\ldots, Z_m\in\mathbb{V}^{n_1\times n_2}$ be independent random matrices with mean zero. Suppose that \( \max\big\{\big\|\|Z_i\|\big\|_{\psi_s},2\mathbb{E}^{\frac{1}{2}}(\|Z_i\|^2)\big\} <\varpi_{s} \) for some constant $\varpi_{s}$. Define $$\sigma_Z := \max\left\{\bigg\|\frac{1}{m} \sum_{i=1}^m \mathbb{E}(Z_iZ_i^\mathbb{T})\bigg\|^{1/2},\ \bigg\|\frac{1}{m} \sum_{i=1}^m \mathbb{E}(Z_i^\mathbb{T}Z_i)\bigg\|^{1/2}\right\}.$$ Then, there exists a constant $C$ such that for all $t>0$, with probability at least $1\!-\exp(-t)$, $$\bigg\|\frac{1}{m} \sum_{i=1}^m Z_i\bigg\| \leq C \max\left\{ \sigma_Z\sqrt{\frac{t+\log(n_1+n_2)}{m}}, \varpi_{s}\left(\log\frac{\varpi_{s}}{\sigma_Z}\right)^{1/s}\frac{t+\log(n_1\!+n_2)}{m}\right\}.$$ \end{proposition} It is known that a random variable is sub-exponential if and only if its $\psi_1$ Orlicz norm is finite \cite{MilS86}. To apply the noncommutative Bernstein inequality, we let $\mu_2$ be a constant such that \begin{equation}\label{eqndefL} \max\left\{\bigg\|\sum_{k\in\beta} p_k\Theta_k\Theta_k^\mathbb{T}\bigg\|,\ \bigg\|\sum_{k\in\beta} p_k\Theta_k^\mathbb{T}\Theta_k\bigg\|\right\} \leq \frac{\mu_2}{n}.
\end{equation} Notice that since $\text{Tr}\big(\sum_{k\in\beta} p_k\Theta_k\Theta_k^\mathbb{T}\big) = \text{Tr}\big(\sum_{k\in\beta} p_k\Theta_k^\mathbb{T}\Theta_k\big)=1$, the lower bound of the term on the left-hand side is $1/n$. This implies that $\mu_2 \geq 1$. In the following, we also assume that the magnitude of $\mu_2$ does not depend on the matrix size. For example, $\mu_2=1$ for the correlation matrix completion under uniform sampling and the density matrix completion described in Section \ref{section2}. The following result extends \cite[Lemma 2]{KolLT11} and \cite[Lemmas 5 \& 6]{Klo12} from the standard basis to an arbitrary orthonormal basis. A similar result can also be found in \cite[Lemma 6]{NegW12}. \begin{lemma}\label{lemben} Under Assumption \ref{asmpnoi}, there exists a constant $C^*$ (only depending on the $\psi_1$ Orlicz norm of $\xi_k$) such that for all $t>0$, with probability at least $1-\exp(-t)$, \begin{equation}\label{eqnnoibd} \left\|\frac{1}{m}\mathcal{R}^*_\Omega(\xi)\right\| \leq C^*\max\left\{\sqrt{\frac{\mu_2(t+\log(n_1+n_2))}{mn}}, \frac{\log(n)(t+\log(n_1+n_2))}{m}\right\}. \end{equation} In particular, when $m\geq n\log^3(n_1+n_2)/\mu_2$, we also have \begin{equation}\label{eqnnoiexpd} \mathbb{E}\bigg\|\frac{1}{m}\mathcal{R}^*_\Omega(\xi)\bigg\| \leq C^*\sqrt{\frac{2e\mu_2\log(n_1+n_2)}{mn}}. \end{equation} \end{lemma} Since Rademacher random variables are sub-exponential, the right-hand side of (\ref{eqnnoiexpd}) provides an upper bound on $\vartheta_m$ defined by (\ref{defvaritheta}). Now, we choose $t=\log(n_1\!+n_2)$ in Lemma \ref{lemben} to achieve an optimal-order bound. With this choice, when $m \geq 2n\log^2(n_1\!+n_2)/\mu_2$, the first term in the maximum of (\ref{eqnnoibd}) dominates the second term. Hence, for any given $\kappa>1$, by choosing \begin{equation}\label{eqnrhoopt} \rho_m = \frac{\kappa \nu}{b_m} C^*\sqrt{\frac{2\mu_2\log(n_1+n_2)}{mn}}, \end{equation} from Theorem \ref{thmbdmid} and Lemma \ref{lemben}, we obtain the following main result on the recovery error bound. \begin{theorem}\label{thmstobd} Assume that $\|\mathcal{P}_{T^\perp} (F(\widetilde{X}_m))\|\!<1$, $\|\mathcal{R}_\beta(\overline{X})\|_\infty\!\leq c$ for some constant $c$, and Assumption \ref{asmpnoi} holds. For any given $\kappa>1$, if $\rho_m$ is chosen according to (\ref{eqnrhoopt}), then there exists a numerical constant $C'$ such that, when $m \geq n\log^3(n_1\!+n_2)/\mu_2$, \begin{align} \frac{\|\widehat{X}_m^c\!-\overline{X}\|_F^2}{d_2}\! \leq \!C'\max\Bigg\{& \! \left[\Big(1\!+\!\kappa \frac{a_m}{b_m}\Big)^2\nu^2\!+\!\Big(\frac{\kappa}{\kappa\!-\!1}\Big)^2 \left(1\!+\!\frac{a_m}{b_m}\right)^2c^2\right]\frac{\mu_1^2\mu_2d_2r\log(n_1\!+\!n_2)}{mn},\nonumber \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad c^2 \mu_1 \sqrt{\frac{\log(n_1+n_2)}{m}} \Bigg\} \label{eqnstobd} \end{align} with probability at least $1-3/(n_1\!+n_2)$. \end{theorem} When the matrix size is large, the second term in the maximum of (\ref{eqnstobd}) is negligible compared with the first term. Thus, Theorem \ref{thmstobd} indicates that for any rank-correction function such that $\|\mathcal{P}_{T^\perp} (F(\widetilde{X}_m))\|<1$, one only needs a sample size of order $d_2 r \log(n_1+n_2)/n$ to control the recovery error. Note that $d_2$ is of order $n_1n_2$ in general. Hence, the order of the sample size needed is roughly the degrees of freedom of a rank-$r$ matrix up to a logarithmic factor in the matrix size.
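For a rough sense of scale (an informal calculation of ours, with all constants and probabilistic qualifications suppressed), one may compare this heuristic sample size with the degrees of freedom of a rank-$r$ matrix:

\begin{verbatim}
import numpy as np

n1 = n2 = 1000
n = min(n1, n2)
r = 10
d2 = n1 * n2                           # order of d2 in the rectangular case
m_heur = d2 * r * np.log(n1 + n2) / n  # heuristic sample size from the bound
dof = r * (n1 + n2 - r)                # degrees of freedom of a rank-r matrix
print(f"heuristic sample size ~ {m_heur:.3g}")        # about 7.6e4
print(f"degrees of freedom    = {dof}")               # 19900
print(f"ratio                 = {m_heur / dof:.2f}")  # a modest log factor
\end{verbatim}

The ratio is a modest logarithmic factor, consistent with the discussion above.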
In addition, it is very interesting to notice that the value of $\kappa$ (or the value of $\rho_m$) has a substantial influence on the recovery error bound. The first term in the maximum of (\ref{eqnstobd}) is a sum of two parts related to $\nu$ and $c$, respectively. The part related to $\nu$ increases as $\kappa$ increases provided $a_m/b_m>0$, while the part related to $c$ slightly decreases to its limit as $\kappa$ increases. \medskip Theorem \ref{thmstobd} also reveals the impact of the rank-correction term on the recovery error. Note that the value of $a_m/b_m$ fully depends on the rank-correction function $F$ once an initial estimator $\widetilde{X}_m$ is given. A smaller value of $a_m/b_m$ yields a smaller error bound and potentially leads to a smaller recovery error for the rank-correction step. Note that for any given $\varepsilon_1\ge 0$ and $0\le \varepsilon_2<1$, we have \begin{eqnarray*} \frac{a_m}{b_m} \leq \frac{\varepsilon_1}{1-\varepsilon_2}\quad {\rm if}\quad \big\|\mathcal{P}_T\big(F(\widetilde{X}_m)\big) - \overline{U}_1\overline{V}_1^\mathbb{T}\big\|\le\varepsilon_1\ {\rm and}\ \big\|\mathcal{P}_{T^\perp} \big(F(\widetilde{X}_m) \big)\big\|\leq \varepsilon_2. \end{eqnarray*} In particular, if $F\equiv 0$, then the estimator of the rank-correction step reduces to the nuclear norm penalized least squares estimator with $a_m/b_m=1$. Thus, Theorem \ref{thmstobd} shows that, with a suitable rank-correction function $F$, the estimator generated from the rank-correction step is very likely to perform better for recovery than the nuclear norm penalized least squares estimator. In addition, this observation provides clues on how to construct a good rank-correction function, to be discussed in Section \ref{section5}. \medskip \section{Rank consistency}\label{section4} In this section we study the asymptotic behavior of the rank of the estimator $\widehat{X}_m$ for both the rectangular case and the positive semidefinite case. Theorem \ref{thmstobd} shows that, under mild conditions, the distribution of $\widehat{X}_m$ becomes more and more concentrated around the true matrix $\overline{X}$. Due to the low-rank structure of $\overline{X}$, we expect the estimator $\widehat{X}_m$ to share the same low-rank property as $\overline{X}$. For this purpose, we consider the rank consistency in the sense of Bach \cite{Bac08} under the setting that the matrix size is fixed. \begin{definition} An estimator $X_m$ of the true matrix $\overline{X}$ is said to be rank consistent if $$\lim\limits_{m\rightarrow \infty}{\rm Pr}\big({\rm rank}(X_m)= {\rm rank}(\overline{X})\big)=1.$$ \end{definition} Throughout this section we make the following assumptions: \begin{assumption}\label{asmpfun} The spectral operator $F$ is continuous at $\overline{X}$. \end{assumption} \begin{assumption}\label{asmpini} The initial estimator $\widetilde{X}_m$ satisfies $\widetilde{X}_m \stackrel{p}{\rightarrow} \overline{X}$ as $m\rightarrow \infty$. \end{assumption} In addition, we also need the following properties of the operator $\mathcal{R}_\Omega$ and its adjoint $\mathcal{R}_\Omega^*$. \begin{lemma}\label{lemoper} {\bf (i)} For any given $X \in \mathbb{V}^{n_1\times n_2}$, the random matrix $\displaystyle{\frac{1}{m}} \mathcal{R}_\Omega^* \mathcal{R}_\Omega(X) \stackrel{a.s.}{\rightarrow} \mathcal{Q}_\beta(X)$.
\\ \noindent {\bf (ii)} The random vector $\displaystyle{\frac{1}{\sqrt{m}}}\mathcal{R}_{\alpha\cup\beta}\mathcal{R}_\Omega^*(\xi) \stackrel{d}{\rightarrow} N\big(0, {\rm Diag}(p)\big)$, where $p=(p_1,\ldots,p_d)^{\mathbb{T}}$. \end{lemma} Epi-convergence in distribution is useful in proving the convergence in distribution of minimizers or $\varepsilon_m$-minimizers. The following epi-convergence result is taken from \cite{Kni99}. \begin{proposition}\label{propepicon} Let $\{\Phi_m\}$ be a sequence of random lower-semicontinuous functions that epi-converges in distribution to $\Phi$. Assume that \begin{description} \item[(i)] $\widehat{x}_m$ is an $\varepsilon_m$-minimizer of $\Phi_m$, i.e., $\Phi_m(\widehat{x}_m)\leq \inf \Phi_m(x) +\varepsilon_m$, where $\varepsilon_m \stackrel{p}{\rightarrow} 0$; \item[(ii)] $\widehat{x}_m= O_p(1)$; \item[(iii)] the function $\Phi$ has a unique minimizer $\overline{x}$. \end{description} Then, $\widehat{x}_m \stackrel{d}{\rightarrow} \overline{x}$. In addition, if $\Phi$ is a deterministic function, then $\widehat{x}_m \stackrel{p}{\rightarrow} \overline{x}$. \end{proposition} We know from \cite{Gey96} that $\widehat{x}_m$ is guaranteed to be $O_p(1)$ when all $\Phi_m$ are convex functions and $\Phi$ has a unique minimizer. For more details on epi-convergence in distribution, one may refer to King and Wets \cite{KinW91}, Geyer \cite{Gey94}, Pflug \cite{Pfl95} and Knight \cite{Kni99}. In order to apply the epi-convergence theorem to a constrained optimization problem, we need to transform the constrained optimization problem into an unconstrained one by using the indicator function of the feasible set. This leads to the question of epi-convergence of the sum of two sequences of functions. Thus, we need the following epi-convergence result stated in \cite[Lemma 1]{Pfl95}. \begin{proposition}\label{propepisum} Let $\{\Phi_m\}$ be a sequence of random lower-semicontinuous functions and $\{\Psi_m\}$ be a sequence of deterministic lower-semicontinuous functions. If either of the following two assumptions holds: \begin{description} \item[(i)] $\Phi_m$ epi-converges in distribution to $\Phi$ and $\Psi_m$ converges to $\Psi$ with respect to the topology of uniform convergence on compact sets; \item[(ii)] $\Phi_m$ converges in distribution to $\Phi$ with respect to the topology of uniform convergence on compact sets and $\Psi_m$ epi-converges to $\Psi$, \end{description} then $\Phi_m + \Psi_m$ epi-converges in distribution to $\Phi+\Psi$. \end{proposition} Based on the above epi-convergence results, we can analyze the asymptotic behavior of optimal solutions of a sequence of constrained optimization problems. The following result is a direct consequence of the above epi-convergence theorems and Lemma \ref{lemoper}. \begin{theorem}\label{thmcons} If $\rho_m \rightarrow 0$ and $\gamma_m = O_p(1)$, then $\widehat{X}_m \stackrel{p}{\rightarrow} \overline{X}$ as $m\rightarrow \infty$. \end{theorem} Then, according to Theorem \ref{thmcons} and the lower semicontinuity of the rank function, it is straightforward to obtain: \begin{corollary}\label{cororkrhs} If $X_m \stackrel{p}{\rightarrow} \overline{X}$, then $\lim\limits_{m\rightarrow \infty}{\rm Pr}\big({\rm rank}(X_m) \geq {\rm rank}(\overline{X})\big)=1$. \end{corollary} In what follows, we focus on the characterization of necessary and sufficient conditions for the rank consistency of $\widehat{X}_m$. The idea is similar to that of \cite{Bac08} for the nuclear norm penalized least squares estimator.
Note that, unlike for the recovery error bound, adding more constraints may break the rank consistency. Therefore, we separate the discussion into the rectangular case and the positive semidefinite case below. \subsection{The rectangular case} Since we have established that $\widehat{X}_m \stackrel{p}{\rightarrow} \overline{X}$, we only need to focus on some neighborhood of $\overline{X}$ in the discussion of the rank consistency of $\widehat{X}_m$. First, we take a look at a local property of the rank function via the directional derivative of the singular value functions. \medskip Let $\sigma'_i(X;\cdot)$ denote the directional derivative function of the $i$-th largest singular value function $\sigma_i(\cdot)$ at $X$. From \cite[Section 5.1]{Lew05} and \cite[Proposition 6]{DinST10}, for $\mathbb{V}^{n_1\times n_2} \ni H \rightarrow 0$, \begin{equation}\label{eqnlocsin} \sigma_i(X + H)-\sigma_i(X)-\sigma'_i (X;H) = O(\|H\|_F^2), \quad i = 1,\ldots, n. \end{equation} Recall that $r=\text{rank}(\overline{X})$. From \cite[Proposition 6]{DinST10}, we have \[ \sigma_{r+1}'(\overline{X};H) = \|\overline{U}_2^\mathbb{T}H\overline{V}_2\|, \quad\ H\in\mathbb{V}^{n_1\times n_2}. \] This leads to the following result for the perturbation of the rank function. A similar result can also be found in \cite[Proposition 18]{Bac08}, whose proof is more involved. \begin{lemma}\label{lemlocrk} Let $\overline{\Delta} \in \mathbb{V}^{n_1\times n_2}$ satisfy $\overline{U}_2^\mathbb{T} \overline{\Delta}\,\overline{V}_2 \neq 0$. Then, for all $\rho\neq 0$ sufficiently small and $\Delta$ sufficiently close to $\overline{\Delta}$, ${\rm rank}(\overline{X}+\rho \Delta) > {\rm rank}(\overline{X})$. \end{lemma} To guarantee the efficiency of the rank-correction term in encouraging a low-rank solution, the parameter $\rho_m$ should not decay too fast. Define $\widehat{\Delta}_m: = \rho_m^{-1}(\widehat{X}_m-\overline{X})$. Then, for a slowly decaying $\rho_m$, we can establish the following result. \begin{proposition}\label{propdellim} If $\rho_m \rightarrow 0, \sqrt{m}\rho_m\rightarrow \infty$ and $\gamma_m = O_p(1)$, then $\widehat{\Delta}_m \stackrel{p}{\rightarrow} \widehat{\Delta}$, where $\widehat{\Delta}$ is the unique optimal solution to the following convex optimization problem \begin{equation}\label{eqndellim} \begin{aligned} \min_{\Delta \in \mathbb{V}^{n_1\times n_2}} &\ {\displaystyle \frac{1}{2}} \langle \mathcal{Q}_\beta(\Delta), \Delta \rangle + \langle \overline{U}_1 \overline{V}_1^\mathbb{T} - F(\overline{X}), \Delta \rangle + \|\overline{U}_2^\mathbb{T} \Delta \overline{V}_2\|_*\\ {\rm s.t.}\ \ \ &\ \mathcal{R}_\alpha(\Delta) = 0. \end{aligned} \end{equation} \end{proposition} Note that $\widehat{X}_m = \overline{X} + \rho_m \widehat{\Delta}_m$. From Corollary \ref{cororkrhs}, Lemma \ref{lemlocrk} and Proposition \ref{propdellim}, we see that the condition $\overline{U}_2^\mathbb{T}\widehat{\Delta}\overline{V}_2=0$ is necessary for the rank consistency of $\widehat{X}_m$. From the following property of the unique solution $\widehat{\Delta}$ to (\ref{eqndellim}), we can derive a more detailed necessary condition for rank consistency as stated in Theorem \ref{thmnes} below. \begin{lemma}\label{lemdelcond} Let $\widehat{\Delta}$ be the optimal solution to (\ref{eqndellim}).
Then $\overline{U}_2^\mathbb{T} \widehat{\Delta} \overline{V}_2 = 0$ if and only if the linear system \begin{equation}\label{eqndelcond} \overline{U}_2^\mathbb{T}\mathcal{Q}_\beta^\dag(\overline{U}_2 \Gamma\overline{V}_2^\mathbb{T})\overline{V}_2 = \overline{U}_2^\mathbb{T}\mathcal{Q}_\beta^\dag\big(\overline{U}_1 \overline{V}_1^\mathbb{T}-F(\overline{X})\big)\overline{V}_2 \end{equation} has a solution $\widehat{\Gamma} \in \mathbb{V}^{(n_1-r)\times (n_2-r)}$ with $\|\widehat{\Gamma}\| \leq 1$. Moreover, in this case, \begin{equation}\label{eqndelvaldel} \widehat{\Delta} = \mathcal{Q}_\beta^\dag\big(\overline{U}_2\widehat{\Gamma}\,\overline{V}_2^\mathbb{T}-\overline{U}_1\overline{V}_1^\mathbb{T} + F(\overline{X})\big). \end{equation} \end{lemma} \begin{theorem}\label{thmnes} If $\rho_m \rightarrow 0$, $\sqrt{m}\rho_m\rightarrow \infty$ and $\gamma_m = O_p(1)$, then a necessary condition for the rank consistency of $\widehat{X}_m$ is that the linear system (\ref{eqndelcond}) has a solution $\widehat{\Gamma} \in \mathbb{V}^{(n_1-r)\times (n_2-r)}$ with $\|\widehat{\Gamma}\|\le 1$. \end{theorem} By slightly modifying the necessary condition in Theorem \ref{thmnes}, we obtain a sufficient condition for the rank consistency of the estimator $\widehat{X}_m$ as follows. \begin{theorem}\label{thmsuf} If $\rho_m \rightarrow 0, \sqrt{m}\rho_m\rightarrow \infty$ and $\gamma_m = O_p(1)$, then a sufficient condition for the rank consistency of the estimator $\widehat{X}_m$ is that the linear system (\ref{eqndelcond}) has a unique solution $\widehat{\Gamma} \in \mathbb{V}^{(n_1-r)\times (n_2-r)}$ with $\|\widehat{\Gamma}\| < 1$. \end{theorem} \subsection{The positive semidefinite case} For the positive semidefinite case, we first need the following Slater condition. \begin{assumption}\label{asmpslater} There exists some $X^0 \in \mathbb{S}^n_{++}$ such that $\mathcal{R}_\alpha(X^0) = \mathcal{R}_\alpha(\overline{X})$. \end{assumption} \begin{proposition}\label{propdellimpos} If $\rho_m \rightarrow 0, \sqrt{m}\rho_m\rightarrow \infty$ and $\gamma_m = O_p(1)$, then $\widehat{\Delta}_m \stackrel{p}{\rightarrow} \widehat{\Delta}$, where $\widehat{\Delta}$ is the unique optimal solution to the following convex optimization problem \begin{equation}\label{eqndellimpos} \begin{aligned} \min_{\Delta \in \mathbb{S}^n} & \ \ {\displaystyle \frac{1}{2}} \langle \mathcal{Q}_\beta(\Delta), \Delta \rangle + \langle I_n - F(\overline{X}), \Delta \rangle\\ {\rm s.t.} & \ \ \mathcal{R}_\alpha(\Delta) = 0, \quad \overline{P}_2^\mathbb{T} \Delta \overline{P}_2 \in \mathbb{S}_+^{n-r}. \end{aligned} \end{equation} \end{proposition} For the optimal solution $\widehat{\Delta}$ to (\ref{eqndellimpos}), we also have the following further characterization. \begin{lemma}\label{lemdelcondpos} Let $\widehat{\Delta}$ be the optimal solution to (\ref{eqndellimpos}). Then $\overline{P}_2^\mathbb{T} \widehat{\Delta} \overline{P}_2 = 0$ if and only if the linear system \begin{equation}\label{eqndelcondpos} \overline{P}_2^\mathbb{T}\mathcal{Q}_\beta^\dag(\overline{P}_2 \Lambda \overline{P}_2^\mathbb{T})\overline{P}_2 = \overline{P}_2^\mathbb{T}\mathcal{Q}_\beta^\dag\big(I_n- F(\overline{X})\big)\overline{P}_2 \end{equation} has a solution $\widehat{\Lambda} \in \mathbb{S}_+^{n-r}$. Moreover, in this case, \begin{equation}\label{eqndelvaldelpos} \widehat{\Delta} = \mathcal{Q}_\beta^\dag\big(\overline{P}_2\widehat{\Lambda}\,\overline{P}_2^\mathbb{T}-I_n + F(\overline{X})\big).
\end{equation} \end{lemma} Note that Lemma \ref{lemlocrk} still holds for the positive semidefinite case if $\overline{U}_2^\mathbb{T} \overline{\Delta}\,\overline{V}_2$ is replaced by $\overline{P}_2^\mathbb{T} \overline{\Delta}\,\overline{P}_2$. Therefore, in line with the rectangular case, from Lemma \ref{lemdelcondpos}, we have the following necessary condition for rank consistency. \begin{theorem}\label{thmnespos} If $\rho_m \rightarrow 0, \sqrt{m}\rho_m\rightarrow \infty$ and $\gamma_m = O_p(1)$, then a necessary condition for the rank consistency of $\widehat{X}_m$ is that the linear system (\ref{eqndelcondpos}) has a solution $\widehat{\Lambda} \in \mathbb{S}^{n-r}_+$. \end{theorem} Analogously to Theorem \ref{thmsuf}, we have the following sufficient condition for rank consistency for the positive semidefinite case. \begin{theorem}\label{thmsufpos} If $\rho_m \rightarrow 0, \sqrt{m}\rho_m\rightarrow \infty$ and $\gamma_m = O_p(1)$, then a sufficient condition for the rank consistency of $\widehat{X}_m$ is that the linear system (\ref{eqndelcondpos}) has a unique solution $\widehat{\Lambda} \in \mathbb{S}^{n-r}_{++}$. \end{theorem} \subsection{Constraint nondegeneracy and rank consistency} In this subsection, with the help of constraint nondegeneracy, we provide conditions to guarantee that the linear systems (\ref{eqndelcond}) and (\ref{eqndelcondpos}) have a unique solution. The concept of constraint nondegeneracy was pioneered by Robinson \cite{Rob84} and later extensively developed by Bonnans and Shapiro \cite{BonS00}. Consider the following constrained optimization problem \begin{equation}\label{conic-prob} \min_{X\in\mathbb{V}^{n_1\times n_2}}\Big\{\Phi(X)+\Psi(X):\ \mathcal{A}(X)-b\in K\Big\}, \end{equation} where $\Phi:\mathbb{V}^{n_1\times n_2} \to\mathbb{R}$ is a continuously differentiable function, $\Psi:\mathbb{V}^{n_1\times n_2}\to\mathbb{R}$ is a convex function, $\mathcal{A}: \mathbb{V}^{n_1\times n_2}\to\mathbb{R}^l$ is a linear operator, $b\in\mathbb{R}^l$ is a given vector and $K \subseteq \mathbb{R}^l$ is a closed convex set. Let $\widehat{X}$ be a given feasible point of (\ref{conic-prob}) and $\widehat{z}:=\mathcal{A}(\widehat{X})-b$. When $\Psi$ is differentiable at $\widehat{X}$, we say that the constraint nondegeneracy holds at $\widehat{X}$ if \begin{equation}\label{nondegeneracy1} \mathcal{A}\,\mathbb{V}^{n_1\times n_2} + {\rm lin}\big(\mathcal{T}_{K}(\widehat{z})\big) = \mathbb{R}^l, \end{equation} where $\mathcal{T}_{K}(\widehat{z})$ denotes the tangent cone of $K$ at $\widehat{z}$ and ${\rm lin}(\mathcal{T}_{K}(\widehat{z}))$ denotes the largest linearity space contained in $\mathcal{T}_{K}(\widehat{z})$, i.e., ${\rm lin}(\mathcal{T}_{K}(\widehat{z}))=\mathcal{T}_{K}(\widehat{z})\cap(-\mathcal{T}_{K}(\widehat{z}))$. When the function $\Psi$ is nondifferentiable, we can rewrite the optimization problem (\ref{conic-prob}) equivalently as \[ \min_{X\in\mathbb{V}^{n_1\times n_2},t\in\mathbb{R}}\Big\{\Phi(X)+t:\ \widetilde{\mathcal{A}}(X,t)\in K \times {\rm epi}\Psi \Big\}, \] where ${\rm epi}\Psi:=\left\{(X,t)\in\mathbb{V}^{n_1\times n_2}\times\mathbb{R}\ |\ \Psi(X)\le t\right\}$ denotes the epigraph of $\Psi$ and $\widetilde{\mathcal{A}}:\mathbb{V}^{n_1\times n_2}\times\mathbb{R}\rightarrow \mathbb{R}^l\times \mathbb{V}^{n_1\times n_2}\times\mathbb{R}$ is a linear operator defined by \[ \widetilde{\mathcal{A}}(X,t) := \begin{pmatrix} \mathcal{A}(X) - b\\ X\\ t \end{pmatrix}, \quad\ (X,t)\in \mathbb{V}^{n_1\times n_2} \times \mathbb{R}.
\] From (\ref{nondegeneracy1}) and \cite[Theorem 6.41]{RocW98}, the constraint nondegeneracy holds at $(\widehat{X}, \widehat{t})$ with $\widehat{t}=\Psi(\widehat{X})$ if \[ \widetilde{\mathcal{A}}\begin{pmatrix} \mathbb{V}^{n_1\times n_2} \\ \mathbb{R} \end{pmatrix} +\begin{pmatrix} {\rm lin}\big(\mathcal{T}_{K}(\widehat{z})\big) \\ {\rm lin}\big(\mathcal{T}_{{\rm epi}\Psi}(\widehat{X},\widehat{t})\big) \end{pmatrix} = \begin{pmatrix} \mathbb{R}^l\\ \mathbb{V}^{n_1\times n_2}\\ \mathbb{R} \end{pmatrix}. \] By the definition of $\widetilde{\mathcal{A}}$, it is not difficult to verify that this condition is equivalent to \begin{equation}\label{nondegeneracy2} [\mathcal{A}\ \ 0]\big({\rm lin}(\mathcal{T}_{{\rm epi}\Psi}(\widehat{X},\widehat{t}))\big) + {\rm lin}\big(\mathcal{T}_{K}(\widehat{z})\big) = \mathbb{R}^l. \end{equation} By letting $\Psi=\|\cdot\|_*, \mathcal{A}=\mathcal{R}_{\alpha}$ and $K\!=\{0\}$, one can see that the problem (\ref{eqnrcs}) takes the form of (\ref{conic-prob}). By the expression of $\mathcal{T}_{{\rm epi}\Psi}(\overline{X},\overline{t})$ with $\overline{t}=\|\overline{X}\|_*$ (e.g., see \cite{JiaST12}), we see that for the problem (\ref{eqnrcs}), the condition (\ref{nondegeneracy2}) reduces to \begin{equation}\label{eqncndc} \mathcal{R}_\alpha\big(\mathcal{T}(\overline{X})\big) = \mathbb{R}^{d_1}, \end{equation} where \begin{equation}\label{TX} \mathcal{T}(\overline{X}) = \big\{H \in \mathbb{V}^{n_1\times n_2} \mid \overline{U}_2^\mathbb{T} H \overline{V}_2 = 0 \big\}. \end{equation} Hence, we say that the constraint nondegeneracy holds at $\overline{X}$ to the problem (\ref{eqnrcs}) if the condition (\ref{eqncndc}) holds. By letting $\Psi =\delta_{\mathbb{S}_{+}^n}, \mathcal{A}=\mathcal{R}_{\alpha}$ and $K=\{0\}$, we can see that the problem (\ref{eqnrcspos}) takes the form of (\ref{conic-prob}), and in this case the condition (\ref{nondegeneracy2}) reduces to \begin{equation}\label{eqncndcpos} \mathcal{R}_\alpha\big(\text{lin}(\mathcal{T}_{\mathbb{S}^n_+}(\overline{X}))\big) = \mathbb{R}^{d_1}. \end{equation} Thus, we say that the constraint nondegeneracy holds at $\overline{X}$ to the problem (\ref{eqnrcspos}) if the condition (\ref{eqncndcpos}) holds. From Arnold's characterization of the tangent cone $\mathcal{T}_{\mathbb{S}^n_+}(\overline{X})=\big\{H \in\mathbb{S}^n \mid \overline{P}_2^\mathbb{T} H \overline{P}_2 \in \mathbb{S}_+^{n-r} \big\}$ in \cite{Arn71}, we can write the linearity space $\text{lin}(\mathcal{T}_{\mathbb{S}^n_+}(\overline{X}))$ explicitly as $$\text{lin}(\mathcal{T}_{\mathbb{S}^n_+}(\overline{X}))=\big\{H \in \mathbb{S}^{n} \mid \overline{P}_2^\mathbb{T} H \overline{P}_2 = 0 \big\}.$$ \medskip Interestingly, for some special matrix completion problems, the constraint nondegeneracy automatically holds at $\overline{X}$, as stated in the following proposition. \begin{proposition}\label{propcorcn} For the following matrix completion problems: \begin{description} \item[(i)] the covariance matrix completion with partial positive diagonal entries being fixed, in particular, the correlation matrix completion with all diagonal entries being fixed as ones; \item[(ii)] the density matrix completion with its trace being fixed as one, \end{description} the constraint nondegeneracy (\ref{eqncndcpos}) holds at $\overline{X}$. \end{proposition} \medskip Next, we take a closer look at the solutions to the linear systems (\ref{eqndelcond}) and (\ref{eqndelcondpos}).
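Before doing so, we record a short verification sketch for Proposition \ref{propcorcn}; the argument below is only a sketch of ours and assumes that $\mathcal{R}_\alpha$ extracts exactly the fixed diagonal entries (respectively, the trace). For case (i), for any diagonal matrix $D \in \mathbb{S}^n$ the matrix $H := D\overline{X}+\overline{X}D$ belongs to $\text{lin}(\mathcal{T}_{\mathbb{S}^n_+}(\overline{X}))$, since $\overline{X}\,\overline{P}_2 = 0$ yields $\overline{P}_2^\mathbb{T} H \overline{P}_2 = 0$, while $H_{ii} = 2D_{ii}\overline{X}_{ii}$. As the fixed diagonal entries satisfy $\overline{X}_{ii}>0$, letting $D$ run through all diagonal matrices shows that these entries of $H$ attain arbitrary values, which is (\ref{eqncndcpos}). For case (ii), taking $H:=\overline{X}$ gives $\overline{P}_2^\mathbb{T} H \overline{P}_2 = 0$ and ${\rm tr}(H)=1\neq 0$, so the (one-dimensional) condition (\ref{eqncndcpos}) holds as well.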
Define linear operators $\mathcal{B}_1:\mathbb{V}^{r\times r} \to \mathbb{V}^{(n_1-r)\times (n_2-r)}$ and $\mathcal{B}_2:\mathbb{V}^{(n_1-r)\times(n_2-r)} \to \mathbb{V}^{(n_1-r)\times(n_2-r)}$ associated with $\overline{X}$, respectively, by \begin{equation}\label{linear-operator} \mathcal{B}_1(Y): = \overline{U}_2^\mathbb{T}\mathcal{Q}_{\beta}^{\dag}(\overline{U}_1 Y \overline{V}_1^\mathbb{T})\overline{V}_2 \ \ {\rm and}\ \ \mathcal{B}_2(Z): = \overline{U}_2^\mathbb{T}\mathcal{Q}_{\beta}^{\dag}(\overline{U}_2 Z \overline{V}_2^\mathbb{T})\overline{V}_2, \end{equation} where $Y \in \mathbb{V}^{r\times r}$ and $Z \in \mathbb{V}^{(n_1-r)\times (n_2-r)}$. Note that the operator $\mathcal{B}_2$ is self-adjoint and positive semidefinite according to the definition of $\mathcal{Q}_{\beta}^{\dag}$. Let $\widehat{g}(\overline{X})$ be the vector in $\mathbb{R}^r$ defined by \begin{equation}\label{eqnvecgr} \widehat{g}(\overline{X}):=\big(1-f_1(\sigma(\overline{X})),\ldots,1-f_r(\sigma(\overline{X}))\big)^{\mathbb{T}}. \end{equation} Then, by the definition of the spectral operator $F$, we can rewrite (\ref{eqndelcond}) in the following concise form \begin{equation}\label{eqndelcond1eqiv} \mathcal{B}_2(\Gamma) = \mathcal{B}_1\big({\rm Diag}(\widehat{g}(\overline{X}))\big), \quad \Gamma \in \mathbb{V}^{(n_1-r)\times (n_2-r)}. \end{equation} For the positive semidefinite case $\mathbb{V}^{n_1\times n_2} = \mathbb{S}^n$ and $\overline{X} \in \mathbb{S}_+^n$, both $\overline{U}_i$ and $\overline{V}_i$ reduce to $\overline{P}_i$ for $i=1,2$. In this case, the linear system (\ref{eqndelcondpos}) can be concisely written as \begin{equation}\label{eqndelcond2eqiv} \mathcal{B}_2(\Lambda) =\mathcal{B}_2(I_{n-r})+\mathcal{B}_1\big({\rm Diag}(\widehat{g}(\overline{X}))\big), \quad \Lambda \in \mathbb{S}^{n-r}. \end{equation} \begin{proposition}\label{propsoluni} For the rectangular case, if the constraint nondegeneracy (\ref{eqncndc}) holds at $\overline{X}$ to the problem (\ref{eqnrcs}), then the linear operator $\mathcal{B}_2$ defined by (\ref{linear-operator}) is self-adjoint and positive definite. For the positive semidefinite case, if the constraint nondegeneracy (\ref{eqncndcpos}) holds at $\overline{X}$ to the problem (\ref{eqnrcspos}), then the linear operator $\mathcal{B}_2$ is also self-adjoint and positive definite. \end{proposition} According to Proposition \ref{propsoluni}, the constraint nondegeneracy at $\overline{X}$ to the problems (\ref{eqnrcs}) and (\ref{eqnrcspos}), respectively, implies that the linear system (\ref{eqndelcond}) has a unique solution $\widehat{\Gamma} = \mathcal{B}_2^{-1}\mathcal{B}_1\big({\rm Diag}({\widehat g}(\overline{X}))\big)$ and the linear system (\ref{eqndelcondpos}) has a unique solution $\widehat{\Lambda} = I_{n-r} + \mathcal{B}_2^{-1}\mathcal{B}_1\big({\rm Diag}({\widehat g}(\overline{X}))\big)$. Then, from Theorems \ref{thmsuf} and \ref{thmsufpos}, we can obtain the following main result for rank consistency. \begin{theorem}\label{thmgenconsis} Suppose that $\rho_m \rightarrow 0,\,\sqrt{m}\rho_m\rightarrow \infty$ and $\gamma_m = O_p(1)$. For the rectangular case, if the constraint nondegeneracy (\ref{eqncndc}) holds at $\overline{X}$ to the problem (\ref{eqnrcs}) and \begin{equation}\label{eqnsufso} \big\|\mathcal{B}_2^{-1}\mathcal{B}_1 \big({\rm Diag}(\widehat{g}(\overline{X}))\big)\big\| <1, \end{equation} then the estimator $\widehat{X}_m$ generated from the rank-correction step (\ref{eqnrcs}) is rank consistent.
For the positive semidefinite case, if the constraint nondegeneracy (\ref{eqncndcpos}) holds at $\overline{X}$ to the problem (\ref{eqnrcspos}) and \begin{equation}\label{eqnsufsopos} I_{n-r} + \mathcal{B}_2^{-1}\mathcal{B}_1 \big({\rm Diag}(\widehat{g}(\overline{X}))\big) \in \mathbb{S}_{++}^{n-r}, \end{equation} then the estimator $\widehat{X}_m$ generated from the rank-correction step (\ref{eqnrcspos}) is rank consistent. \end{theorem} From Theorem \ref{thmgenconsis}, it is not difficult to see that there exists some threshold $\overline{\varepsilon} >0$ (depending on $\overline{X}$) such that the condition (\ref{eqnsufso}) holds if $|1-f_i(\sigma(\overline{X}))| \leq \overline{\varepsilon} \ \forall\, 1\leq i \leq r$. In other words, when $F(\overline{X})$ is sufficiently close to $\overline{U}_1\overline{V}_1^\mathbb{T}$, the condition (\ref{eqnsufso}) holds automatically and so does the rank consistency. Thus, Theorem \ref{thmgenconsis} provides us with a guideline for constructing a suitable rank-correction function for rank consistency. This is another important benefit of the rank-correction step, besides the reduction of the recovery error discussed in Section \ref{section3}. \medskip The next theorem shows that for the covariance (correlation) and density matrix completion problems with fixed basis coefficients described in Proposition \ref{propcorcn}, if observations are sampled uniformly at random, the rank consistency can be guaranteed for a broad class of rank-correction functions $F$. \begin{theorem}\label{thmrccordencons} For the covariance (correlation) and density matrix completion problems defined in Proposition \ref{propcorcn} under uniform sampling, if $\rho_m \rightarrow 0,\,\sqrt{m}\rho_m\rightarrow \infty,\,\gamma_m = O_p(1)$ and $F$ is a spectral operator associated with a symmetric function $f:\mathbb{R}^n \rightarrow \mathbb{R}^n$ such that for $i=1,\ldots,n$, \begin{equation}\label{eqnfcorden} f_i(x) \geq 0 \ \ \forall\, x\in \mathbb{R}_+^n \quad \text{and} \quad f_i(x)=0 \ \ \text{if and only if} \ \ x_i =0, \end{equation} then the estimator $\widehat{X}_m$ generated from the rank-correction step is rank consistent. \end{theorem} \medskip \section{Construction of the rank-correction function}\label{section5} In this section, we focus on the construction of a suitable rank-correction function $F$ based on the results obtained in Sections \ref{section3} and \ref{section4}. As can be seen from Theorem \ref{thmstobd}, a smaller value of $a_m/b_m$ potentially leads to a smaller recovery error. Thus, we desire a construction of the rank-correction function such that $F(\widetilde{X}_m)$ is close to $\overline{U}_1\overline{V}_1^\mathbb{T}$. Meanwhile, according to Theorem \ref{thmgenconsis}, we also desire that $F(\overline{X})$ is close to $\overline{U}_1\overline{V}_1^\mathbb{T}$ for rank consistency. Notice that a reasonable initial estimator $\widetilde{X}_m$ should not deviate too much from the true matrix $\overline{X}$. Therefore, the above two criteria consistently suggest a natural idea: construct a rank-correction function $F$, if possible, such that \begin{equation}\label{eqnrcfchoopt} F(X)\rightarrow \overline{U}_1\overline{V}_1^\mathbb{T} \quad \text{as} \quad X\rightarrow \overline{X}. \end{equation} Next, we proceed with the construction of the rank-correction function $F$ for the rectangular case.
For the positive semidefinite case, one may just replace the singular value decomposition with the eigenvalue decomposition and conduct exactly the same analysis. \subsection{The rank is known}\label{subsecrankknown} If the rank of the true matrix $\overline{X}$ is known in advance, we construct the rank-correction function $F$ by \begin{equation}\label{eqnchofun1} F(X) := U_1 V_1^\mathbb{T}, \end{equation} where $(U,V)\in \mathbb{O}^{n_1,n_2}(X)$ and $X \in \mathbb{V}^{n_1\times n_2}$. Note that $F$ defined by (\ref{eqnchofun1}) is not a spectral operator over the whole space $\mathbb{V}^{n_1\times n_2}$, but in a neighborhood of $\overline{X}$ it is indeed a spectral operator and is actually twice continuously differentiable (see, e.g., \cite[Proposition 8]{DinST10}). Hence, it satisfies the criterion (\ref{eqnrcfchoopt}). With this rank-correction function, the rank-correction step is essentially the same as one step of the majorized penalty method developed in \cite{GaoS10}. By Theorem \ref{thmsuf} and Proposition \ref{propsoluni}, we immediately obtain the following result. \begin{corollary} Suppose that the rank of the true matrix $\overline{X}$ is known and the constraint nondegeneracy holds at $\overline{X}$. If $\rho_m\rightarrow 0,\,\sqrt{m}\rho_m\rightarrow \infty,\,\gamma_m = O_p(1)$ and $F$ is chosen by (\ref{eqnchofun1}), then the estimator $\widehat{X}_m$ generated from the rank-correction step is rank consistent. \end{corollary} \subsection{The rank is unknown} If the rank of the true matrix $\overline{X}$ is unknown, then the rank-correction function $F$ cannot be defined by (\ref{eqnchofun1}). Instead, we construct a spectral operator $F$ that imitates the case when the rank is known. Here, we propose $F$ to be a spectral operator \begin{equation}\label{eqnchofun} F(X) := U \text{Diag}\big(f(\sigma(X))\big)V^\mathbb{T} \end{equation} associated with the symmetric function $f:\mathbb{R}^n \rightarrow \mathbb{R}^n$ defined by \begin{equation}\label{eqnchofun2} f_i(x) = \begin{cases} {\displaystyle \phi\left(\frac{x_i}{\|x\|_\infty}\right)} \quad & \text{if}\ x \in \mathbb{R}^n \backslash \{0\},\\ 0 \quad & \text{if}\ x =0,\end{cases} \end{equation} where $(U,V)\in \mathbb{O}^{n_1,n_2}(X)$, $X \in \mathbb{V}^{n_1\times n_2}$, and the scalar function $\phi:\mathbb{R} \rightarrow \mathbb{R}$ takes the form \begin{equation}\label{eqnchofun3} \phi(t): = \text{sgn}(t) (1+\varepsilon^\tau)\frac{|t|^\tau}{|t|^\tau+\varepsilon^\tau}, \quad t \in \mathbb{R}, \end{equation} for some $\tau >0$ and $\varepsilon >0$. By noting that for each $t$, $\phi(t) \rightarrow \text{sgn}(t)$ as $\varepsilon \downarrow 0$, we directly obtain the following result. \begin{corollary}\label{cororankcons} Suppose that the constraint nondegeneracy holds at $\overline{X}$. If $\rho_m \rightarrow 0,\,\sqrt{m}\rho_m\rightarrow \infty,\,\gamma_m = O_p(1)$, then for any given $\tau >0$, there exists some $\overline{\varepsilon}>0$ such that for any $F$ defined by (\ref{eqnchofun}), (\ref{eqnchofun2}) and (\ref{eqnchofun3}) with $0< \varepsilon \leq \overline{\varepsilon}$, the estimator $\widehat{X}_m$ generated from the rank-correction step is rank consistent. \end{corollary} Corollary \ref{cororankcons} indicates that one needs to choose a small $\varepsilon>0$ in pursuit of rank consistency. Meanwhile, we also need to take care of the influence of a small $\varepsilon>0$ on the recovery error bound, which depends on the value of $a_m/b_m$. Certainly, we desire $a_m \approx 0$ and $b_m \approx 1$.
This motivates us to choose a function $\phi$, if possible, such that \begin{equation}\label{eqnfunhope} \phi\bigg(\frac{\sigma_i(\widetilde{X}_m)}{\sigma_1(\widetilde{X}_m)}\bigg) \approx \begin{cases} 1 & \quad \text{if}\ 1\leq i\leq \text{rank}(\overline{X}), \\ 0 & \quad \text{if} \ \text{rank}(\overline{X})+1\leq i \leq n.\end{cases} \end{equation} This is also why we normalize the function $\phi$ defined by (\ref{eqnchofun3}) in the interval $t\in [0,1]$ such that $\phi(0)=0$ and $\phi(1)=1$. However, as indicated by Corollary \ref{cororkrhs}, the initial estimator $\widetilde{X}_m$ is very likely to have a higher rank than $\overline{X}$ as it approaches $\overline{X}$. It turns out that when $\varepsilon>0$ is tiny, $\phi\big(\sigma_i(\widetilde{X}_m)/\sigma_1(\widetilde{X}_m)\big) \approx 1$ for $\text{rank}(\overline{X})+1 \leq i \leq \text{rank}(\widetilde{X}_m)$, which violates our desired property (\ref{eqnfunhope}). As a result, $\varepsilon>0$ should be chosen to be small but balanced. Notice that $\phi(\varepsilon)= (1+\varepsilon^\tau)/2 \approx 1/2$ if $\varepsilon>0$ is small and $\tau >0$ is not too small. Thus, the value of $\varepsilon$ can be regarded as a confidence threshold for whether $\sigma_i(\widetilde{X}_m)$ is believed to come from a nonzero singular value of $\overline{X}$ under perturbation --- positive confidence if $\sigma_i(\widetilde{X}_m) > \varepsilon \sigma_1(\widetilde{X}_m)$ and negative confidence if $\sigma_i(\widetilde{X}_m) < \varepsilon \sigma_1(\widetilde{X}_m)$. On the other hand, the parameter $\tau>0$ mainly controls the shape of the function $\phi$ over $t\in [0,1]$. The function $\phi$ is concave if $0<\tau\leq 1$ and $S$-shaped with a single inflection point at $\big(\frac{\tau-1}{\tau+1}\big)^{1/\tau}\varepsilon$ if $\tau>1$. Moreover, the steepness of the function $\phi$ increases when $\tau$ increases. In particular, if $0<\varepsilon<1$ and $\tau$ is very large, $\phi$ is very close to the step function taking the value $0$ if $0\leq t<\varepsilon$ and the value $1$ if $ \varepsilon < t \leq 1$. In this case, there exists some $\varepsilon$ such that the desired property (\ref{eqnfunhope}) can be achieved and that the corresponding rank-correction function $F$ is very close to the one defined by (\ref{eqnchofun1}). Thus, it seems to be a good idea to choose an $S$-shaped function $\phi$ with a large $\tau$. However, in practice, the parameter $\varepsilon$ should be pre-determined. Since $\text{rank}(\overline{X})$ is unknown and the singular values of $\widetilde{X}_m$ are unpredictable, it is hard to choose a suitable $\varepsilon$ in advance, and hence, it will be too risky to choose a large $\tau$ for recovery. As a result, one has to be somewhat conservative in choosing $\tau$, strategically sacrificing some optimality of recovery in exchange for robustness. If the initial estimator is generated from the nuclear norm penalized least squares problem, we recommend the choices $\tau = 1$ or $2$ and $\varepsilon = 0.01 \sim 0.1$, as these choices show stable performance for a wide range of problems, as validated in Section \ref{section6}.
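\medskip To make the above construction concrete, the following MATLAB sketch evaluates the weight vector $f(\sigma(X))$ defined by (\ref{eqnchofun2}) and (\ref{eqnchofun3}) from the singular values of $X$; the function name {\ttfamily rc\_weights} and its interface are ours and are not part of the solver used in Section \ref{section6}. Since singular values are nonnegative, the factor $\text{sgn}(t)$ in (\ref{eqnchofun3}) plays no role here.
\begin{verbatim}
function w = rc_weights(sig, tau, epsilon)
% sig: vector of singular values of X (nonnegative);
% tau, epsilon: parameters of the scalar function phi.
if all(sig == 0)
    w = zeros(size(sig));   % f(0) = 0 by definition
    return;
end
t = sig / max(sig);         % normalization by the largest singular value
w = (1 + epsilon^tau) * t.^tau ./ (t.^tau + epsilon^tau);
end
\end{verbatim}
The rank-correction matrix is then assembled as $F(X) = U\,\text{Diag}\big({\ttfamily rc\_weights}(\sigma(X),\tau,\varepsilon)\big)V^\mathbb{T}$ with $(U,V)\in\mathbb{O}^{n_1,n_2}(X)$.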
\begin{figure} \begin{center} \subfigure[$\varepsilon=0.1$ with different $\tau>0$]{\includegraphics[height=6.5cm]{different_function_epsilonfix.pdf}} \hspace{0.5cm} \subfigure[$\tau=2$ with different $\varepsilon>0$]{\includegraphics[height=6.5cm]{different_function_taufix.pdf}} \caption{Shapes of the function $\phi$ with different $\tau>0$ and $\varepsilon>0$}\label{dad} \end{center} \end{figure} \medskip We also remark that for the positive semidefinite case, the rank-correction function defined by (\ref{eqnchofun}), (\ref{eqnchofun2}) and (\ref{eqnchofun3}) is related to the reweighted trace norm for the matrix rank minimization proposed by Fazel et al. \cite{FazHB03, MohF10}. The reweighted trace norm in \cite{FazHB03, MohF10} for the positive semidefinite case is $\langle (X^k+\varepsilon I_n)^{-1}, X\rangle$, which arises from the derivative of the surrogate function $\log\det(X+\varepsilon I_n)$ of the rank function at an iterate $X^k$, where $\varepsilon$ is a small positive constant. Meanwhile, in our proposed rank-correction step, if we choose $\tau = 1$, then $I_n-\frac{1}{1+\varepsilon}F(\widetilde{X}_m) = \varepsilon'(\widetilde{X}_m+\varepsilon' I_n)^{-1}$ with $\varepsilon' =\varepsilon \|\widetilde{X}_m\|$. The two appear similar on the surface; note, however, that $\varepsilon'$ depends on $\widetilde{X}_m$, in contrast to the constant $\varepsilon$ in \cite{FazHB03, MohF10}. More broadly speaking, the rank-correction function $F$ defined by (\ref{eqnchofun}), (\ref{eqnchofun2}) and (\ref{eqnchofun3}) is not the gradient of any real-valued function. This distinguishes our proposed rank-correction step from the reweighted trace norm minimization in \cite{FazHB03, MohF10} even for the positive semidefinite case. \medskip \section{Numerical experiments}\label{section6} In this section, we validate the recovery power of our proposed rank-correction procedure by applying it to positive semidefinite matrix completion problems. In solving the optimization problem in the rank-correction step (\ref{eqnrcspos}), we adopted the code developed by Jiang et al. \cite{JiaST12} for large scale linearly constrained convex semidefinite programming problems. The implemented code is based on an inexact version of the accelerated proximal gradient method \cite{Nes83,BecT09}. All tests were run in MATLAB under the Windows 7 operating system on an Intel Core(TM) i7-2720 QM 2.20GHz CPU with 8.00GB memory. \medskip For convenience, in the sequel, the {\rm NNPLS} estimator and the {\rm RCS} estimator, respectively, stand for the estimators from the nuclear norm penalized least squares problem (i.e., the problem (\ref{eqnrcspos}) with $F\equiv 0$ and $\gamma_m=0$) and the rank-correction step. Let $X_m$ be an estimator. The {\bf relative error} ({\bf relerr} for short) of $X_m$ is defined by \[ {\rm relerr}=\frac{\|X_m - \overline{X}\|_{F}}{\max(10^{-8},\|\overline{X}\|_{F})}. \] \subsection{Influence of fixed basis coefficients on the recovery}\label{subsec6.1} In this subsection, we take the correlation matrix completion as an example to test the performance of the {\rm NNPLS} estimator and the {\rm RCS} estimator with different patterns of fixed basis coefficients.
We randomly generated the true matrix $\overline{X}$ by the following command: \begin{verbatim} M = randn(n,r); ML = weight*M(:,1:k); M(:,1:k) = ML; Xtemp = M*M'; D = diag(1./sqrt(diag(Xtemp))); X_bar = D*Xtemp*D; \end{verbatim} \vspace{-0.5cm} \noindent where the parameter {\ttfamily weight} is used to control the relative magnitude difference between the first {\ttfamily k} largest eigenvalues and the other nonzero eigenvalues. In our experiment, we set {\ttfamily weight} $=5$ and {\ttfamily k} $=1$, and took $\overline{X}=$ {\ttfamily X\_bar} with dimension {\ttfamily n} $=1000$ and rank {\ttfamily r} $=5$. We randomly fixed partial diagonal and off-diagonal entries of $\overline{X}$ and sampled the remaining entries uniformly at random with i.i.d. Gaussian noise at the noise level $10\%$. \medskip In Figure \ref{figure2}, we plot the curves of the relative error and the rank of the NNPLS estimator and the RCS estimator with different patterns of fixed entries. In the captions of the subfigures, {\bf diag} means the number of fixed diagonal entries and {\bf off-diag} means the number of fixed off-diagonal entries. The subfigures on the left-hand side and the right-hand side show the performance of the NNPLS estimator and the RCS estimator, respectively. For the RCS estimator, the rank-correction function $F$ is defined by (\ref{eqnchofun}), (\ref{eqnchofun2}) and (\ref{eqnchofun3}) with $\tau = 2$ and $\varepsilon = 0.02$, and the initial $\widetilde{X}_m$ is chosen from those points of the corresponding subfigures on the left-hand side such that $\big|\|y-\mathcal{R}_\Omega(\widetilde{X}_m)\|_2/\|y\|_2-0.1\big|$ attains the smallest value. \begin{figure}[htbp] \begin{center} \subfigure[Nuclear norm: diag = 0, off-diag = 0]{\includegraphics[width=7cm]{comparison_fix_1-1.pdf}} \subfigure[Rank correction step: diag = 0, off-diag = 0]{\includegraphics[width=7cm]{comparison_fix_1-2.pdf}}\\ \subfigure[Nuclear norm: diag = n/2, off-diag = 0]{\includegraphics[width=7cm]{comparison_fix_3-1.pdf}} \subfigure[Rank correction step: diag = n/2, off-diag = 0]{\includegraphics[width=7cm]{comparison_fix_3-2.pdf}}\\ \subfigure[Nuclear norm: diag = n, off-diag = 0]{\includegraphics[width=7cm]{comparison_fix_5-1.pdf}} \subfigure[Rank correction step: diag = n, off-diag = 0]{\includegraphics[width=7cm]{comparison_fix_5-2.pdf}}\\ \subfigure[Nuclear norm: diag = n, off-diag = n/2]{\includegraphics[width=7cm]{comparison_fix_9-1.pdf}} \hspace{0cm} \subfigure[Rank correction step: diag = n, off-diag = n/2]{\includegraphics[width=7cm]{comparison_fix_9-2.pdf}}\\ \caption{Influence of fixed basis coefficients on recovery (sample ratio $=6.38\%$)\ \label{figure2}} \end{center} \end{figure} \medskip From the subfigures on the left-hand side, we observe that as the number of fixed diagonal entries increases, the parameter $\rho_m$ for the smallest recovery error deviates more and more from the one for attaining the true rank. In particular, when {\bf diag} $=n$, the NNPLS estimator reduces to the (constrained) least squares estimator, so that one can no longer benefit from the nuclear norm penalization in encouraging a low-rank solution. This implies that the NNPLS estimator does not possess the rank consistency when some entries are fixed. However, the subfigures on the right-hand side indicate that the RCS estimator can yield a solution with the correct rank as well as a desired small recovery error simultaneously, with the parameter $\rho_m$ in a large interval.
This exactly validates the theoretical result of Theorem \ref{thmrccordencons} for rank consistency. \subsection{Performance of different rank-correction functions for recovery} In this subsection, we test the performance of different rank-correction functions for recovering a correlation matrix. We randomly generated the true matrix $\overline{X}$ by the command in Subsection \ref{subsec6.1} with {\ttfamily n} $=1000$, {\ttfamily r} $=10$, {\ttfamily weight} $=2$ and {\ttfamily k} $ = 5$. We fixed all the diagonal entries of $\overline{X}$ and sampled partial off-diagonal entries uniformly at random with i.i.d. Gaussian noise at the noise level $10\%$. We chose the nuclear norm penalized least squares estimator to be the initial estimator $\widetilde{X}_m$. In Figure \ref{figure3}, we plot four curves corresponding to the rank-correction functions $F$ defined by (\ref{eqnchofun}), (\ref{eqnchofun2}) and (\ref{eqnchofun3}) with $\tau = 2$ and different $\varepsilon$, and another two curves corresponding to the rank-correction functions $F$ defined by (\ref{eqnchofun1}) at $\widetilde{X}_m$ (i.e., $\widetilde{U}_1\widetilde{V}_1^\mathbb{T}$) and $\overline{X}$ (i.e., $\overline{U}_1\overline{V}_1^\mathbb{T}$), respectively. The values of $a_m$, $b_m$ and the optimal recovery error with different $\rho_m$ are listed in Table \ref{tab1}. \medskip As can be seen from Figure \ref{figure3}, when $\rho_m$ increases, the recovery error decreases together with the rank and then increases after the correct rank is attained, except for the case $\overline{U}_1\overline{V}_1^\mathbb{T}$. This validates our discussion about the recovery error at the end of Section \ref{section3}. Moreover, for a smaller $\varepsilon$, the curve of recovery error changes more gently, though a certain optimality in the sense of recovery error is sacrificed. This means that the choice of a relatively small $\varepsilon$, say $0.01$ or $0.02$, is more robust for those ill-conditioned problems. From Table \ref{tab1}, we see that a smaller $a_m/b_m$ corresponds to a better optimal recovery error. It is worthwhile to point out that, even if $a_m/b_m$ is larger than $1$, the performance of the RCS estimator for recovery is still much better than that of the NNPLS estimator. \begin{table}[htbp] \begin{center} \caption{Influence of the rank-correction term on the recovery error\ \ \label{tab1}} \vspace{0.1cm} \begin{tabular}{|c|c|c|c|c|} \hline rank-correction function & $a_m$ & $b_m$ & $a_m/b_m$ & optimal relerr \\ \hline \hline zero function & $1$ & $1$ & $1$ & $10.85\%$ \\ $\varepsilon= 0.01, \tau = 2$ & $0.1420$ & $0.2351$ & $0.6038$ & $5.96\%$ \\ $\varepsilon = 0.02, \tau = 2$ & $0.1459$ & $0.5514$ & $0.2646$ & $5.80\%$ \\ $\varepsilon = 0.05, \tau = 2$ & $0.1648$ & $0.8846$ & $0.1863$ & $5.75\%$ \\ $\varepsilon = 0.1, \tau = 2$ & $0.2399$ & $0.9681$ & $0.2478$ & $5.77\%$ \\ $\widetilde{U}_1\widetilde{V}_1^\mathbb{T}$ (initial) & $0.1445$ & $0.9815$ & $0.1472$ & $5.75\%$ \\ $\overline{U}_1\overline{V}_1^\mathbb{T}$ (true) & $0$ & $1$ & $0$ & $2.25\%$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[htbp] \begin{center} \subfigure{\includegraphics[width=\textwidth]{error.pdf}}\\ \subfigure{\includegraphics[width=\textwidth]{rank.pdf}} \caption{Influence of the rank-correction term on the recovery\ \label{figure3}} \end{center} \end{figure} \subsection{Performance for different matrix completion problems} In this subsection, we test the performance of the RCS estimator for the covariance and density matrix completion problems.
As can be seen from Figure \ref{figure2}, a good choice of the parameter $\rho_m$ for the RCS estimator could be the smallest one such that the rank becomes stable. Such a parameter $\rho_m$ can be found by the bisection search method. This is in fact one practical benefit of rank consistency. In the following numerical experiments, we apply the above strategy to find a suitable $\rho_m$ for the RCS estimator, and choose the rank-correction function $F$ defined by (\ref{eqnchofun}), (\ref{eqnchofun2}) and (\ref{eqnchofun3}) with $\tau=2$ and $\varepsilon=0.02$. \medskip We first take the covariance matrix completion as an example to test the performance of the RCS estimator with different initial estimators $\widetilde{X}_m$. The true matrix $\overline{X}$ is generated by the command in Subsection \ref{subsec6.1} with {\ttfamily n} $=500$, {\ttfamily r} $=5$, {\ttfamily weight} $=3$ and {\ttfamily k} $=1$ except that {\ttfamily D = eye(n)}. We depict the numerical results in Figure 4, where the dashed curves represent the relative recovery error and the rank of the NNPLS estimator with different $\rho_m$, and the solid curves represent the relative recovery error and the rank of the RCS estimator with $\widetilde{X}_m$ chosen to be the corresponding NNPLS estimator. As can be seen from Figure 4, the RCS estimator substantially improves the quality of the NNPLS estimator in terms of both the recovery error and the rank. We also observe that when the initial $\widetilde{X}_m$ has a large deviation from the true matrix, the quality of the RCS estimator may still not be satisfactory. Thus, it is natural to ask whether further rank-correction steps could improve the quality. The answer can be found in Table \ref{tab2} below, where the numerical results of the covariance matrix completion are reported. We also report the numerical results of the density matrix completion in Table \ref{tab3}.
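\medskip As an illustration, the bisection search just mentioned can be sketched in MATLAB as follows; here {\ttfamily rcs\_solve} (a solver of (\ref{eqnrcspos}) for a given $\rho$) and {\ttfamily numrank} (a numerical rank routine) are placeholders of ours, the scalars {\ttfamily rho\_max} and {\ttfamily tol} are assumed to be given, and we assume that the rank of the RCS estimator is nonincreasing in $\rho$ over the search interval, as suggested by Figure \ref{figure2}.
\begin{verbatim}
r_stable = numrank(rcs_solve(rho_max));  % rank has stabilized at rho_max
rho_lo = 0; rho_hi = rho_max;
while rho_hi - rho_lo > tol
    rho_mid = (rho_lo + rho_hi)/2;
    if numrank(rcs_solve(rho_mid)) <= r_stable
        rho_hi = rho_mid;   % rank already stable: try a smaller rho
    else
        rho_lo = rho_mid;   % rank still too high: increase rho
    end
end
rho_m = rho_hi;             % approximately the smallest such rho
\end{verbatim}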
\begin{figure}[htbp] \begin{center} \includegraphics[width=\textwidth]{comparison_NNPLS-RCS.pdf} \caption{Performance of the RCS estimator with different initial $\widetilde{X}_m$} \end{center} \end{figure} \begin{table}[htbp] \begin{center} \renewcommand\arraystretch{1.2} {\caption{\label{tab2} Performance for covariance matrix completion problems with $n=1000$}} \vspace{0.1cm} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & {\rm diag/}& & {\rm NNPLS}& {\rm 1st RCS}& {\rm 2nd RCS}& {\rm 3rd RCS}\\ \cline{4-7} \raisebox{1.5ex}[0pt]{$r$} & {\rm off-diag} & \raisebox{1.5ex}[0pt]{$\renewcommand{\arraystretch}{0.85} \begin{array}{c} {\rm sample} \\ {\rm ratio} \end{array}$} &{\rm relerr (rank)}&{\rm relerr(rank)}&{\rm relerr (rank)}&{\rm relerr (rank)}\\ \hline \hline & 1000/0 & 2.40\% & 1.95e-1 (47) & 1.27e-1 (5) & 1.18e-1 (5) & 1.12e-1 (5)\\ & 1000/0 & 7.99\% & 6.10e-2 (51) & 3.41e-2 (5) & 3.37e-2 (5) & 3.36e-2 (5)\\ \raisebox{1.5ex}[0pt]{5} & 500/50 & 2.39\% & 2.01e-1 (45) & 1.10e-1 (5) & 9.47e-2 (5) & 8.97e-2 (5)\\ & 500/50 & 7.98\% & 7.19e-2 (32) & 3.77e-2 (5) & 3.59e-2 (5) & 3.58e-2 (5)\\ \hline & 1000/0 & 5.38\% & 1.32e-1 (74) & 7.68e-2 (10) & 7.39e-2 (10) & 7.36e-2 (10)\\ & 1000/0 & 8.96\% & 9.18e-2 (78) & 5.15e-2 (10) & 5.08e-2 (10) & 5.08e-2 (10)\\ \raisebox{1.5ex}[0pt]{10} & 500/100 & 5.37\% & 1.58e-1 (57) & 8.66e-2 (10) & 7.74e-2 (10) & 7.60e-2 (10)\\ & 500/100 & 8.96\% & 1.02e-1 (49) & 5.36e-2 (10) & 5.24e-2 (10) & 5.25e-2 (10)\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htbp] \begin{center} \renewcommand\arraystretch{1.2} {\caption{\label{tab3} Performance for density matrix completion problems with $n=1024$}} \vspace{-0.3cm} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\rotatebox{90}{noise\ }} & \multirow{2}{*}{$r$}& & &{\rm NNPLS1}&{\rm NNPLS2}&{\rm RCS}\\ \cline{5-7} & & \raisebox{1.5ex}[0pt]{$\renewcommand{\arraystretch}{0.85} \begin{array}{c} {\rm noise} \\ {\rm level} \end{array}$} & \raisebox{1.5ex}[0pt]{$\renewcommand{\arraystretch}{0.85} \begin{array}{c} {\rm sample} \\ {\rm ratio} \end{array}$} & {\rm fidelity}\ \ {\rm relerr}\ \ {\rm rank}& {\rm fidelity}\ \ {\rm relerr}\ \ {\rm rank}& {\rm fidelity}\ \ {\rm relerr}\ \ {\rm rank}\\ \hline \hline \multirow{4}{*}{\rotatebox{90}{statistical\ }} & & 10.0\% & 1.5\% & 0.697\ \ \ 2.59e-1\ \ 3\ \ & 0.955\ \ \ 2.50e-1\ \ \ 3\ & 0.987\ \ \ 1.02e-1\ \ \ 3\ \\ &\raisebox{1.5ex}[0pt]{3} & 10.0\% & 4.0\% & 0.915\ \ \ 8.04e-2\ \ 3\ \ & 0.997\ \ \ 6.84e-2\ \ \ 3\ & 0.998\ \ \ 4.13e-2\ \ \ 3\ \\ \cline{2-7} & & 10.0\% & 2.0\% & 0.550\ \ \ 3.71e-1\ \ 5\ \ & 0.908\ \ \ 4.23e-1\ \ \ 5\ & 0.972\ \ \ 1.61e-1\ \ \ 5\ \\ &\raisebox{1.5ex}[0pt]{5} & 10.0\% & 5.0\% & 0.889\ \ \ 1.03e-1\ \ 5\ \ & 0.995\ \ \ 9.18e-2\ \ \ 5\ & 0.997\ \ \ 4.91e-2\ \ \ 5\ \\ \hline \hline \multirow{4}{*}{\rotatebox{90}{mixed\ }} & & 12.4\% & 1.5\% & 0.654\ \ \ 2.93e-1\ \ 3\ \ & 0.957\ \ \ 2.43e-1\ \ \ 3\ & 0.988\ \ \ 1.06e-1\ \ \ 3\ \\ &\raisebox{1.5ex}[0pt]{3} & 12.4\% & 4.0\% & 0.832\ \ \ 1.49e-1\ \ 3\ \ & 0.995\ \ \ 8.14e-2\ \ \ 3\ & 0.997\ \ \ 6.41e-2\ \ \ 3\ \\ \cline{2-7} & & 12.4\% & 2.0\% & 0.521\ \ \ 3.95e-1\ \ 5\ \ & 0.912\ \ \ 4.09e-1\ \ \ 5\ & 0.977\ \ \ 1.51e-1\ \ \ 5\ \\ &\raisebox{1.5ex}[0pt]{5} & 12.5\% & 5.0\% & 0.817\ \ \ 1.61e-1\ \ 5\ \ & 0.987\ \ \ 1.01e-1\ \ \ 5\ & 0.996\ \ \ 7.09e-2\ \ \ 5\ \\ \hline \end{tabular} \end{center} \end{table} For the covariance matrix completion problems, we generated the true matrix $\overline{X}$ by the command in Subsection \ref{subsec6.1} with {\ttfamily n} $=1000$, {\ttfamily weight}
$=3$ and {\ttfamily k} $=1$ except that {\ttfamily D = eye(n)}. The rank of $\overline{X}$ and the number of fixed diagonal and off-diagonal entries of $\overline{X}$ are reported in the first and the second columns of Table \ref{tab2}, respectively. We sampled partial off-diagonal entries uniformly at random with i.i.d. Gaussian noise at the noise level $10\%$. The first RCS estimator uses the NNPLS estimator as the initial estimator $\widetilde{X}_m$, and the second (third) RCS estimator uses the first (second) RCS estimator as the initial estimator $\widetilde{X}_m$. From Table \ref{tab2}, we see that when the sample ratio is reasonable, one rank-correction step is enough to yield a desired result. Meanwhile, when the sample ratio is very low, especially if some off-diagonal entries are further fixed, one or two more rank-correction steps can still improve the quality of estimation. \medskip For the density matrix completion problems, we generated the true density matrix $\overline{X}$ by the following command: \begin{verbatim} M = randn(n,r)+i*randn(n,r); ML = weight*M(:,1:k); M(:,1:k) = ML; Xtemp = M*M'; X_bar = Xtemp/sum(diag(Xtemp)); \end{verbatim} \vspace{-0.5cm} During the testing, we set {\ttfamily n} $=1024$, {\ttfamily weight} $=2$ and {\ttfamily k} $=1$, and sampled partial Pauli measurements except the trace of $\overline{X}$ uniformly at random with i.i.d. Gaussian noise at the noise level $10\%$. Besides the above statistical noise, we further added the depolarizing noise, which frequently appears in quantum systems, with strength $0.01$. This case is labeled as the mixed noise in the last four rows of Table \ref{tab3}. We remark here that the depolarizing noise differs from our assumption on noise since it is not random. One may refer to \cite{GroLFBE10, FlaGLE12} for details of the quantum depolarizing channel. In Table \ref{tab3}, the (squared) {\bf fidelity} is a measure of the closeness of two quantum states, defined by $\big\|\widehat{X}_m^{1/2} \overline{X}^{1/2}\big\|_{*}^2$, the NNPLS1 estimator means the NNPLS estimator by dropping the trace one constraint, and the NNPLS2 estimator means the one obtained by normalizing the NNPLS1 estimator to be of trace one. Note that the NNPLS2 estimator was used by Flammia et al. \cite{FlaGLE12}. Table \ref{tab3} shows that the RCS estimator is superior to the NNPLS2 estimator in terms of both the fidelity and the relative error. \section{Conclusions}\label{section7} In this paper, we proposed a rank-correction procedure for low-rank matrix completion problems with fixed basis coefficients. This approach can substantially overcome the limitation of the nuclear norm penalization for recovering a low-rank matrix. We studied the impact of adding the rank-correction term on both the reduction of the recovery error bounds and the rank consistency (in the sense of Bach \cite{Bac08}). Due to the presence of fixed basis coefficients, constraint nondegeneracy plays an important role in our analysis. Extensive numerical experiments show that our approach can significantly improve the recovery performance in the sense of both the recovery error and the rank, compared with the nuclear norm penalized least squares estimator. As a byproduct, our results also provide a theoretical foundation for the majorized penalty method of Gao and Sun \cite{GaoS10} and Gao \cite{Gao10} for structured low-rank matrix optimization problems.
\medskip Our proposed rank-correction step also allows additional constraints according to other possible prior information. In particular, for additional linear constraints, all the theoretical results in this paper hold with slight modifications. In order to better fit the under-sampling setting of matrix completion, in future work, it would be of great interest to extend the asymptotic rank consistency results to the case where the matrix size is allowed to grow. It would also be interesting to extend this approach to deal with other low-rank matrix problems. \bigskip \section*{Acknowledgments} The authors would like to thank Dr. Kaifeng Jiang for helpful discussions on using the accelerated proximal gradient method to solve the density matrix completion problem. \bigskip \section*{Appendix} \noindent {\bf Proof of Theorem \ref{thmopbd}} \noindent Let $\Delta_m:=\widehat{X}_m - \overline{X}$. Since $\widehat{X}_m$ is optimal to (\ref{eqnrcs}) and $\overline{X}$ is feasible to (\ref{eqnrcs}), it follows that \begin{align} \frac{1}{2m} \|\mathcal{R}_\Omega(\Delta_m)\|_2^2 \leq & \ \Big\langle \frac{\nu}{m}\mathcal{R}_\Omega^*(\xi), \Delta_m\Big\rangle - \rho_m\big(\|\widehat{X}_m\|_*-\|\overline{X}\|_* - \langle F(\widetilde{X}_m)+\gamma_m\widetilde{X}_m, \Delta_m\rangle\big) \nonumber \\ & \ +\frac{\rho_m\gamma_m}{2} \big( \|\overline{X}\|_F^2-\|\widehat{X}_m\|_F^2\big). \label{eqnopbd1} \end{align} Then, it follows from (\ref{eqndefrho}) that \begin{align} \Big\langle \frac{\nu}{m}\mathcal{R}_\Omega^*(\xi), \Delta_m \Big\rangle & \ \leq \nu \Big\|\frac{1}{m}\mathcal{R}_\Omega^*(\xi)\Big\|\big(\|\mathcal{P}_T(\Delta_m)\|_* +\|\mathcal{P}_{T^\perp}(\Delta_m)\|_*\big) \nonumber \\ & \ \leq \frac{\rho_mb_m}{\kappa} \big(\|\mathcal{P}_T(\Delta_m)\|_* +\|\mathcal{P}_{T^\perp}(\Delta_m)\|_*\big). \label{eqnopbd3} \end{align} From the directional derivative of the nuclear norm at $\overline{X}$ (see \cite[Theorem 1]{Wat92}), we have \begin{align*} \|\widehat{X}_m \|_* - \|\overline{X}\|_* \ge \langle \overline{U}_1\overline{V}_1^\mathbb{T}, \Delta_m\rangle +\|\overline{U}_2^\mathbb{T}\Delta_m\overline{V}_2\|_*. \end{align*} This, together with equations (\ref{Operator-PT}) and (\ref{eqndelalpbet}), implies that \begin{align} &\ \|\widehat{X}_m \|_* - \|\overline{X}\|_* -\langle F(\widetilde{X}_m)+\gamma_m \widetilde{X}_m, \Delta_m\rangle \nonumber\\ \geq & \, \langle \overline{U}_1\overline{V}_1^\mathbb{T}, \Delta_m\rangle +\|\overline{U}_2^\mathbb{T}\Delta_m\overline{V}_2\|_* -\langle F(\widetilde{X}_m)+\gamma_m \widetilde{X}_m, \Delta_m\rangle \nonumber\\ = & \, \langle \overline{U}_1\overline{V}_1^\mathbb{T}\! -\! \mathcal{P}_T(F(\widetilde{X}_m)\! + \!\gamma_m \widetilde{X}_m), \Delta_m\rangle \!+\! \|\mathcal{P}_{T^\perp}(\Delta_m)\|_*\! -\!\langle \mathcal{P}_{T^\perp}(F(\widetilde{X}_m)\!+\!\gamma_m \widetilde{X}_m), \Delta_m\rangle \nonumber \\ = & \, \langle \overline{U}_1\overline{V}_1^\mathbb{T}\! -\! \mathcal{P}_T(F(\widetilde{X}_m)\!+\! \gamma_m \widetilde{X}_m),\! \mathcal{P}_T(\Delta_m)\rangle \!+ \!\|\mathcal{P}_{T^\perp}(\Delta_m)\|_*\! -\!\langle \mathcal{P}_{T^\perp}(F(\widetilde{X}_m)\!+\!\gamma_m \widetilde{X}_m),\!\mathcal{P}_{T^\perp}(\Delta_m)\rangle \nonumber \\ \geq & \, -\!\|\overline{U}_1\overline{V}_1^\mathbb{T}\!-\!\mathcal{P}_T(F(\widetilde{X}_m)\!
+\!\gamma_m\widetilde{X}_m)\|\|\mathcal{P}_T(\Delta_m)\|_*\!+ \!\big(1\!-\!\|\mathcal{P}_{T^\perp}(F(\widetilde{X}_m)\!+\!\gamma_m \widetilde{X}_m)\|\big) \|\mathcal{P}_{T^\perp}(\Delta_m)\|_* \nonumber \\ = & \, -a_m \|\mathcal{P}_T(\Delta_m)\|_* +b_m \|\mathcal{P}_{T^\perp}(\Delta_m)\|_*. \label{eqnopbd2} \end{align} By substituting (\ref{eqnopbd2}) and (\ref{eqnopbd3}) into (\ref{eqnopbd1}), we obtain that \begin{equation}\label{eqnopbdtig} \begin{aligned} \frac{1}{2m} \|\mathcal{R}_\Omega(\Delta_m)\|_2^2 \leq & \ \rho_m\left(\Big(a_m+\frac{b_m}{\kappa}\Big)\|\mathcal{P}_T(\Delta_m)\|_*- \frac{\kappa-1}{\kappa}b_m\|\mathcal{P}_{T^\perp}(\Delta_m)\|_*\right) \\ & +\frac{\rho_m\gamma_m}{2} (\|\overline{X}\|_F^2 -\|\widehat{X}_m\|_F^2). \end{aligned} \end{equation} Note that $\text{rank}(\mathcal{P}_T(\Delta_m))\leq 2r$. Hence, $\|\mathcal{P}_T(\Delta_m)\|_* \leq \sqrt{2r} \|\mathcal{P}_T(\Delta_m)\|_F \leq\sqrt{2r}\|\Delta_m\|_F,$ and the desired result follows from (\ref{eqnopbdtig}). Thus, we complete the proof. \bigskip \noindent {\bf Proof of Lemma \ref{lemiso}.} \noindent The proof is similar to that of \cite[Lemma 12]{Klo12}. We need to show that the event $$\mathcal{E}=\!\left\{\exists \, \Delta \in \mathcal{C}(r)\ \text{such that} \ \left|\frac{1}{m}\|\mathcal{R}_\Omega(\Delta)\|_2^2-\!\langle \mathcal{Q}_\beta(\Delta),\Delta\rangle \right|\!\geq \frac{1}{2} \langle \mathcal{Q}_\beta(\Delta),\Delta\rangle + 128\mu_1 d_2 r \vartheta_m^2 \right\}$$ occurs with probability less than $2/(n_1+ n_2)$. For any given $\varepsilon\!>\!0$, we decompose $\mathcal{C}(r)$ as $$\mathcal{C}(r) = \bigcup_{k=1}^\infty \left\{\Delta \in \mathcal{C}(r) \mid 2^{k-1} \varepsilon \leq \langle \mathcal{Q}_\beta(\Delta),\Delta\rangle \leq 2^k \varepsilon\right\}.$$ For any $a>0$, let $\mathcal{C}(r,a):=\{\Delta \in \mathcal{C}(r) \mid \langle \mathcal{Q}_\beta(\Delta),\Delta\rangle \leq a\}.$ Then we get $\mathcal{E} \subseteq \cup_{k=1}^\infty \mathcal{E}_k$ with $$\mathcal{E}_k= \Big\{\exists \, \Delta \in \mathcal{C}(r,2^k \varepsilon)\ \text{such that} \ \Big|\frac{1}{m}\|\mathcal{R}_\Omega(\Delta)\|_2^2-\langle \mathcal{Q}_\beta(\Delta),\Delta\rangle \Big| \geq 2^{k-2}\varepsilon + 128\mu_1 d_2 r \vartheta_m^2 \Big\}.$$ Then, we need to estimate the probability of each event $\mathcal{E}_k$. Define $$Z_a:= \sup_{\Delta \in \mathcal{C}(r,a)} \Big| \frac{1}{m} \|\mathcal{R}_\Omega(\Delta)\|_2^2-\langle \mathcal{Q}_\beta(\Delta),\Delta \rangle \Big|.$$ Notice that for any $\Delta \in \mathbb{V}^{n_1\times n_2}$, $$\frac{1}{m} \|\mathcal{R}_\Omega(\Delta)\|_2^2 = \frac{1}{m}\sum_{i=1}^m \langle \Theta_{\omega_i}, \Delta\rangle^2 \stackrel{a.s.}{\rightarrow} \mathbb{E}(\langle \Theta_{\omega_i}, \Delta\rangle^2) = \langle \mathcal{Q}_\beta(\Delta),\Delta\rangle.$$ Since $\|\mathcal{R}_\beta(\Delta)\|_\infty \leq 1$ for all $\Delta \in \mathcal{C}(r)$, from Massart's Hoeffding type concentration inequality \cite[Theorem 9]{Mas00} for suprema of empirical processes, we have \begin{equation}\label{eqnmasineq} {\rm Pr}\left( Z_a \geq \mathbb{E}(Z_a) + t\right) \leq \exp(-mt^2/2) \quad \forall \, t>0. \end{equation} Next, we use the standard Rademacher symmetrization in the theory of empirical processes to further derive an upper bound of $\mathbb{E}(Z_a)$. Let $\{\epsilon_1,\ldots, \epsilon_m\}$ be a Rademacher sequence.
Then, we have \begin{align} \mathbb{E}(Z_a) = & \ \mathbb{E} \bigg(\sup_{\Delta \in \mathcal{C}(r,a)} \Big| \frac{1}{m} \sum_{i=1}^m \langle \Theta_{\omega_i}, \Delta \rangle^2 -\mathbb{E}\big(\langle \Theta_{\omega_i}, \Delta \rangle^2\big) \Big|\bigg) \nonumber\\ \leq & \ 2 \mathbb{E} \bigg(\sup_{\Delta \in \mathcal{C}(r,a)} \Big|\frac{1}{m}\sum_{i=1}^m \epsilon_i \langle \Theta_{\omega_i}, \Delta\rangle^2 \Big|\bigg) \leq 8 \mathbb{E} \bigg(\sup_{\Delta \in \mathcal{C}(r,a)} \Big|\frac{1}{m}\sum_{i=1}^m \epsilon_i \langle \Theta_{\omega_i}, \Delta\rangle \Big|\bigg) \nonumber \\ = & \ 8 \mathbb{E} \bigg(\sup_{\Delta \in \mathcal{C}(r,a)} \Big|\Big\langle \frac{1}{m}\mathcal{R}_\Omega^*(\epsilon), \Delta\Big\rangle \Big|\bigg) \leq 8 \mathbb{E} \Big\|\frac{1}{m}\mathcal{R}_\Omega^*(\epsilon)\Big\|\bigg( \sup_{\Delta \in \mathcal{C}(r,a)} \|\Delta\|_*\bigg), \label{eqnmasineq1} \end{align} where the first inequality follows from the symmetrization theorem (e.g., see \cite[Lemma 2.3.1]{VanW96} and \cite[Theorem 14.3]{BuhV11}) and the second inequality follows from the contraction theorem (e.g., see \cite[Theorem 4.12]{LedT91} and \cite[Theorem 14.4]{BuhV11}). Moreover, from (\ref{eqndefiot}), we have \begin{equation}\label{eqnmasineq2} \|\Delta\|_*\leq \sqrt{r} \|\Delta\|_F \leq \sqrt{\mu_1 r d_2 \langle \mathcal{Q}_\beta(\Delta), \Delta\rangle} \leq \sqrt{\mu_1 r d_2 a} \quad \forall \, \Delta \in \mathcal{C}(r,a). \end{equation} Combining (\ref{eqnmasineq1}) and (\ref{eqnmasineq2}) with the definition of $\vartheta_m$ in (\ref{defvaritheta}), we obtain that \begin{align*} \mathbb{E}(Z_a) +\frac{a}{8} \leq 8\vartheta_m\sqrt{\mu_1 r d_2 a} +\frac{a}{8} \leq 128 \mu_1 r d_2 \vartheta_m^2+\frac{a}{4}. \end{align*} Then, by choosing $t = a/8$ in (\ref{eqnmasineq}), it follows that $${\rm Pr}\left( Z_a \geq \frac{a}{4} + 128 \mu_1 r d_2 \vartheta_m^2 \right)\leq {\rm Pr}\left(Z_a \geq \mathbb{E}(Z_a)+\frac{a}{8}\right)\leq \exp\left(-\frac{ma^2}{128}\right).$$ This implies that ${\rm Pr}(\mathcal{E}_k)\!\leq \exp(-4^{k}\varepsilon^2m/128)$. Then, by choosing $\varepsilon =\!\sqrt{\frac{64\log(n_1+n_2)}{\log(2) m}}$ and using $e^x \geq 1+x >x$, we have \begin{align*} {\rm Pr}(\mathcal{E}) & \ \leq \sum_{k=1}^\infty {\rm Pr}(\mathcal{E}_k) \leq \sum_{k=1}^\infty \exp\left(-\frac{4^{k} \varepsilon^2 m}{128}\right) < \sum_{k=1}^\infty \exp\left(-\frac{\log(4)k \varepsilon^2 m}{128} \right) \\ & \ \leq \frac{\exp(-\log(2)m\varepsilon^2/64)}{1-\exp(-\log(2)\varepsilon^2 m/64)} = \frac{1}{n_1+n_2-1}. \end{align*} Thus, we complete the proof. \bigskip \noindent {\bf Proof of Theorem \ref{thmbdmid}} \noindent The proof is similar to that of \cite[Theorem 3]{Klo12}. Let $\Delta_m^c :=\widehat{X}_m^c-\overline{X}$. By noting that $\gamma_m = 0$ in this case, from (\ref{eqnopbdtig}), we have $$\Big(a_m+\frac{b_m}{\kappa}\Big)\|\mathcal{P}_T(\Delta_m^c)\|_*- \frac{\kappa-1}{\kappa}b_m\|\mathcal{P}_{T^\perp}(\Delta_m^c)\|_*\geq 0.$$ Then, by setting $t_m\!:= \frac{\kappa}{\kappa-1}(1+\frac{a_m}{b_m})$, together with the above inequality, we obtain that \begin{equation} \label{eqnbdmid1} \|\Delta_m^{c}\|_* \le \|\mathcal{P}_T(\Delta_m^c)\|_* + \|\mathcal{P}_{T^\perp}(\Delta_m^c)\|_* \leq t_m\|\mathcal{P}_T(\Delta_m^{c})\|_* \leq \sqrt{2r}t_m \|\Delta_m^{c}\|_F. \end{equation} Let $c_m := \|\mathcal{R}_\beta(\Delta_m^{c})\|_\infty$. Clearly, $c_m \leq 2c$.
We proceed with the discussion in two cases: \medskip \noindent {\bf Case 1.} Suppose that $\langle \mathcal{Q}_\beta(\Delta_m^{c}), \Delta_m^{c} \rangle \leq c_m^2\sqrt{\frac{64\log(n_1+n_2)}{\log(2)m}}$. From (\ref{eqndefiot}), we obtain that $$\frac{\|\Delta_m^{c}\|_F^2}{d_2}\leq 4 c^2 \mu_1 \sqrt{\frac{64\log(n_1+n_2)}{\log(2)m}}.$$ \medskip \noindent {\bf Case 2.} Suppose that $\langle \mathcal{Q}_\beta(\Delta_m^{c}), \Delta_m^{c} \rangle> c_m^2\sqrt{\frac{64\log(n_1+n_2)}{\log(2)m}}$. Then, from (\ref{eqnbdmid1}), we have $\Delta_m^{c}/c_m \in \mathcal{C}(2t_m^2r)$. Together with Lemma \ref{lemiso}, it follows that $$\frac{1}{2}\langle \mathcal{Q}_\beta(\Delta_m^{c}), \Delta_m^{c}\rangle \leq \frac{1}{m}\|\mathcal{R}_\Omega(\Delta_m^{c})\|_2^2 + 128 c_m^2 t_m^2 \mu_1 d_2 r \vartheta_m^2.$$ Combining the last inequality with Theorem \ref{thmopbd} and equation (\ref{eqndefiot}), we obtain that \begin{align*} \frac{\|\Delta_m^{c}\|_F^2}{2d_2} \leq & \ \frac{\mu_1}{2}\langle \mathcal{Q}_\beta(\Delta_m^{c}), \Delta_m^{c}\rangle \leq \frac{\mu_1}{m}\|\mathcal{R}_\Omega(\Delta_m^{c})\|_2^2+ 128c_m^2 t_m^2 \mu_1^2 d_2 r\vartheta_m^2 \\ \leq & \ 2\sqrt{2r}\Big( a_m + \frac{b_m}{\kappa}\Big)\mu_1\rho_m\|\Delta_m^{c}\|_F +128 c_m^2 t_m^2 \mu_1^2 d_2 r\vartheta_m^2 \\ \leq & \ \frac{\|\Delta_m^{c}\|_F^2}{4d_2}+8\Big( a_m + \frac{b_m}{\kappa}\Big)^2\mu_1^2\rho_m^2rd_2 + 128 c_m^2 t_m^2 \mu_1^2 d_2 r \vartheta_m^2. \end{align*} By plugging in $t_m$, we have that there exists some constant $C_1$ such that $$\frac{\|\Delta_m^{c}\|_F^2}{d_2} \leq C_1 \mu_1^2 d_2 r \left(\Big( a_m + \frac{b_m}{\kappa}\Big)^2\rho_m^2 + \frac{\kappa^2(a_m+b_m)^2}{(\kappa-1)^2b_m^2}c^2\vartheta_m^2\right).$$ This, together with Case 1, completes the proof. \bigskip \noindent {\bf Proof of Lemma \ref{lemben}.} \noindent Recall that $\frac{1}{m} \mathcal{R}^*_\Omega(\xi) = \frac{1}{m}\sum_{i=1}^m \xi_i \Theta_{\omega_i}$. Let $Z_i := \xi_i \Theta_{\omega_i}$. Since $\mathbb{E}(\xi_i)=0$, the independence of $\xi_i$ and $\Theta_{\omega_i}$ implies that $\mathbb{E}(Z_i)=0$. Since $\|\Theta_{\omega_i}\|_F=1$, we have that $$\|Z_i\|\leq \|Z_i\|_F = |\xi_i|\|\Theta_{\omega_i}\|_F = |\xi_i|.$$ It follows that $\big\|\|Z_i\| \big\|_{\psi_1} \leq \|\xi_i\|_{\psi_1}$. Thus, $\big\|\|Z_i\| \big\|_{\psi_1}$ is finite since $\xi_i$ is sub-exponential. Meanwhile, $\mathbb{E}^\frac{1}{2}(\|Z_i\|^2)\leq \mathbb{E}^\frac{1}{2}(\|Z_i\|_F^2) = \mathbb{E}^\frac{1}{2}(\xi_i^2)=1$. We also have $$\mathbb{E}\big(Z_iZ_i^\mathbb{T}\big)=\mathbb{E}\big(\xi^2_i \Theta_{\omega_i}\Theta_{\omega_i}^\mathbb{T}\big)=\mathbb{E}\big( \Theta_{\omega_i}\Theta_{\omega_i}^\mathbb{T}\big) =\sum_{k\in\beta}p_k \Theta_k \Theta_k^\mathbb{T}.$$ The calculation of $\mathbb{E}\big(Z_i^\mathbb{T} Z_i\big)$ is similar. From (\ref{eqndefL}), we obtain that $\sqrt{1/n}\leq \sigma_Z \leq \sqrt{\mu_2/n}$. Then, applying the noncommutative Bernstein inequality yields (\ref{eqnnoibd}). The proof of (\ref{eqnnoiexpd}) is exactly the same as the proof of Lemma 6 in \cite{Klo12}, and so we omit it here. \bigskip \noindent {\bf Proof of Lemma \ref{lemoper}} \noindent (i) From the definition of the sampling operator $\mathcal{R}_\Omega$ and its adjoint $\mathcal{R}_\Omega^*$, we have $$\frac{1}{m} \mathcal{R}_\Omega^* \mathcal{R}_\Omega(X) = \frac{1}{m} \sum_{i=1}^m \langle \Theta_{\omega_i}, X \rangle \, \Theta_{\omega_i}.$$ This is an average value of $m$ i.i.d. random matrices $\langle \Theta_{\omega_i}, X \rangle \Theta_{\omega_i}$.
It is easy to see that $\mathbb{E}\big(\langle \Theta_{\omega_i}, X \rangle \Theta_{\omega_i}\big) = \mathcal{Q}_\beta(X).$ The result then follows directly from the strong law of large numbers. \noindent (ii) From the definition of $\mathcal{R}_\Omega^*$ and $\mathcal{R}_{\alpha\cup\beta}$, it is immediate to obtain that $$\frac{1}{\sqrt{m}} \mathcal{R}_{\mathcal{\alpha\cup\beta}} \mathcal{R}_\Omega^*(\xi) = \frac{1}{\sqrt{m}} \mathcal{R}_{\alpha\cup\beta} \bigg(\sum_{i=1}^m \xi_i \Theta_{\omega_i}\bigg) = \frac{1}{\sqrt{m}}\sum_{i=1}^m \xi_i \mathcal{R}_{\alpha\cup\beta}(\Theta_{\omega_i}).$$ Since $\mathbb{E}(\xi_i) = 0$ and $\mathbb{E}(\xi_i^2) =1$, from the independence of $\xi_i$ and $\mathcal{R}_{\alpha\cup\beta}(\Theta_{\omega_i})$, we have $\mathbb{E}\big(\xi_i \mathcal{R}_{\alpha\cup\beta}(\Theta_{\omega_i})\big) = 0$, and the covariance matrix of $\xi_i\, \mathcal{R}_{\alpha\cup\beta}(\Theta_{\omega_i})$ is the diagonal matrix whose $(k,k)$-th entry is $p_k$ for $k \in \beta$ and $0$ for $k \in \alpha$. Applying the central limit theorem then yields the desired result. \bigskip \noindent {\bf Proof of Theorem \ref{thmcons}} \noindent Let $\Phi_m$ denote the objective function of (\ref{eqnrcs}) and $\mathcal{F}$ denote the feasible set. Then, the problem (\ref{eqnrcs}) can be concisely written as $$\min_{X \in \mathbb{V}^{n_1\times n_2}}\Phi_m(X) + \delta_\mathcal{F}(X).$$ By Assumptions \ref{asmpfun} and \ref{asmpini} and Lemma \ref{lemoper}, we have that $\Phi_m$ converges pointwise in probability to $\Phi$, where $\Phi(X):=\frac{1}{2}\|\mathcal{Q}_\beta(X-\overline{X})\|_2^2$ for any $X\in\!\mathbb{V}^{n_1\times n_2}$. As a direct extension of Rockafellar \cite[Theorem 10.8]{Roc70}, Andersen and Gill \cite[Theorem II.1]{AndG82} proved that the pointwise convergence in probability of a sequence of random convex functions implies the uniform convergence in probability on any compact subset. Then, from Proposition \ref{propepisum} we obtain that $\Phi_m+\delta_\mathcal{F}$ epi-converges in distribution to $\Phi+\delta_\mathcal{F}$. Note that $\overline{X}$ is the unique minimizer of $\Phi(X) + \delta_\mathcal{F}(X)$ since $\Phi(X)$ is strongly convex over the feasible set $\mathcal{F}$. Using the convexity of $\Phi_m$ and $\Phi$, we complete the proof by Proposition \ref{propepicon}. \bigskip \noindent {\bf Proof of Lemma \ref{lemlocrk}} \noindent By replacing $X$ and $H$ in (\ref{eqnlocsin}) with $\overline{X}$ and $\rho\Delta$, respectively, and noting that $\sigma_{r+1}(\overline{X})=0$, we have $\sigma_{r+1}(\overline{X}+\rho\Delta)- \|\overline{U}_2^\mathbb{T}(\rho\Delta) \overline{V}_2\| = O(\|\rho\Delta\|_F^2).$ Since $\overline{U}_2^\mathbb{T}\overline{\Delta} \, \overline{V}_2 \neq 0$, for any $\rho\neq 0$ sufficiently small and $\Delta$ sufficiently close to $\overline{\Delta}$, \begin{eqnarray*} \frac{\sigma_{r+1}(\overline{X}+\rho\Delta)}{|\rho|} &=&\|\overline{U}_2^\mathbb{T} \Delta \overline{V}_2\| + O(|\rho|\|\Delta\|_F^2)\nonumber\\ &\ge& \|\overline{U}_2^\mathbb{T}\overline{\Delta} \, \overline{V}_2\| - \|\overline{U}_2^\mathbb{T}(\Delta-\overline{\Delta}) \overline{V}_2\|+ O(|\rho|\|\Delta\|_F^2) \\ &\ge& \frac{1}{2} \|\overline{U}_2^\mathbb{T}\overline{\Delta} \, \overline{V}_2\| >0. \end{eqnarray*} This implies that ${\rm rank}(\overline{X}+\rho \Delta) > r$.
\bigskip \noindent {\bf Proof of Proposition \ref{propdellim}} \noindent By letting $\Delta:= \rho_m^{-1}(X-\overline{X})$ in the optimization problem (\ref{eqnrcs}), one can easily see that $\widehat{\Delta}_m$ is the optimal solution to \begin{equation}\label{eqndellimappr} \begin{aligned} \min_{\Delta \in \mathbb{V}^{n_1\times n_2}}&\ {\displaystyle \frac{1}{2m}}\|\mathcal{R}_\Omega(\Delta)\|_2^2 - \frac{\nu}{m \rho_m}\langle \mathcal{R}_\Omega^*(\xi), \Delta\rangle + \frac{1}{\rho_m}\big(\|\overline{X}+\rho_m\Delta \|_* - \|\overline{X}\|_*\big) \\ &\ \ \hspace{2.4cm} - \langle F(\widetilde{X}_m), \Delta\rangle + \frac{\rho_m\gamma_m}{2}\|\Delta\|_F^2 + \gamma_m \langle \overline{X}-\widetilde{X}_m, \Delta \rangle \\ {\rm s.t.}\quad \ & \mathcal{R}_\alpha(\Delta) = 0. \end{aligned} \end{equation} Let $\Phi_m$ and $\Phi$ denote the objective functions of (\ref{eqndellimappr}) and (\ref{eqndellim}), respectively. Let $\mathcal{F}$ denote the feasible set of (\ref{eqndellim}). By the definition of directional derivative and \cite[Theorem 1]{Wat92}, $$\lim_{\rho_m \rightarrow 0} \frac{1}{\rho_m}\big(\|\overline{X}+\rho_m\Delta \|_* - \|\overline{X}\|_*\big) = \langle \overline{U}_1\overline{V}_1^\mathbb{T}, \Delta \rangle + \|\overline{U}_2^\mathbb{T} \Delta \overline{V}_2\|_*.$$ Then, by combining Assumptions \ref{asmpfun} and \ref{asmpini} with Lemma \ref{lemoper}, we obtain that $\Phi_m$ converges pointwise in probability to $\Phi$. By using the same argument as in the proof of Theorem \ref{thmcons}, we obtain that $\Phi_m + \delta_\mathcal{F}$ epi-converges in distribution to $\Phi+\delta_\mathcal{F}$. Moreover, the optimal solution to (\ref{eqndellim}) is unique due to the strong convexity of $\Phi$ over the feasible set $\mathcal{F}$. Therefore, we complete the proof by applying Proposition \ref{propepicon} on the epi-convergence. \bigskip \noindent {\bf Proof of Lemma \ref{lemdelcond}} \noindent Assume that $\overline{U}_2^\mathbb{T}\widehat{\Delta}\overline{V}_2=0$. Since $\widehat{\Delta}$ is the optimal solution to (\ref{eqndellim}), from the optimality condition, the subdifferential of $\|\cdot\|_*$ at $0$, and \cite[Theorem 23.7]{Roc70}, we obtain that there exist some $\widehat{\Gamma} \in \mathbb{V}^{(n_1-r)\times (n_2-r)}$ with $\|\widehat{\Gamma}\|\le 1$ and $\widehat{\eta} \in \mathbb{R}^{d_1}$ such that \begin{equation}\label{eqndellimkkt} \left\{ \begin{aligned} & \mathcal{Q}_\beta(\widehat{\Delta}) + \overline{U}_1\overline{V}_1^\mathbb{T} - F(\overline{X}) - \mathcal{R}_\alpha^*(\widehat{\eta}) - \overline{U}_2 \widehat{\Gamma} \,\overline{V}_2^\mathbb{T}=0,\\ & \mathcal{R}_\alpha(\widehat{\Delta}) = 0. \end{aligned} \right. \end{equation} Then, according to (\ref{relation-operator}), we can easily obtain (\ref{eqndelvaldel}) by applying the operator $\mathcal{Q}_\beta^\dag$ to the first equation of (\ref{eqndellimkkt}) and using the second equation. By further combining (\ref{eqndelvaldel}) and $\overline{U}_2^\mathbb{T}\widehat{\Delta}\overline{V}_2=0$, we obtain that $\widehat{\Gamma}$ is a solution to the linear system (\ref{eqndelcond}). Conversely, if the linear system (\ref{eqndelcond}) has a solution $\widehat{\Gamma}$ with $\|\widehat{\Gamma}\|\leq 1$, then it is easy to check that (\ref{eqndellimkkt}) is satisfied with $\widehat{\Delta}$ being given by (\ref{eqndelvaldel}) and $ \widehat{\eta} = \mathcal{R}_\alpha\big(\overline{U}_1\overline{V}_1^\mathbb{T}\!- F(\overline{X})-\overline{U}_2\widehat{\Gamma}\,\overline{V}_2^\mathbb{T}\big)$.
Consequently, $\overline{U}_2^\mathbb{T} \widehat{\Delta} \overline{V}_2 = 0$ follows directly from the equations (\ref{eqndelcond}) and (\ref{eqndelvaldel}). \bigskip \noindent {\bf Proof of Theorem \ref{thmsuf}} \noindent The estimator $\widehat{X}_m$ is the optimal solution to (\ref{eqnrcs}) if and only if there exist a subgradient $\widehat{G}_m$ of the nuclear norm at $\widehat{X}_m$ and a vector $\widehat{\eta}_m \in \mathbb{R}^{d_1}$ such that $(\widehat{X}_m, \widehat{\eta}_m)$ satisfies the KKT conditions: \begin{equation}\label{eqnsufkkt} \left\{ \begin{aligned} &\frac{1}{m}\mathcal{R}_\Omega^*\big(\mathcal{R}_\Omega(\widehat{X}_m)-y\big)+\rho_m \big(\widehat{G}_m\!-\!F(\widetilde{X}_m)+\gamma_m(\widehat{X}_m \!-\!\widetilde{X}_m)\big)- \mathcal{R}_\alpha^*(\widehat{\eta}_m)=0, \\ & \mathcal{R}_\alpha(\widehat{X}_m) = \mathcal{R}_\alpha(\overline{X}). \end{aligned} \right. \end{equation} Let $ (\widehat{U}_m, \widehat{V}_m)\in \mathbb{O}^{n_1,n_2}(\widehat{X}_m)$ with $\widehat{U}_{m,1} \in \mathbb{O}^{n_1\times r}$, $\widehat{U}_{m,2} \in \mathbb{O}^{n_1\times (n_1-r)}$, $\widehat{V}_{m,1} \in \mathbb{O}^{n_2\times r}$ and $\widehat{V}_{m,2} \in \mathbb{O}^{n_2\times (n_2-r)}$. From Theorem \ref{thmcons} and Corollary \ref{cororkrhs}, we know that $\text{rank}(\widehat{X}_m) \geq r$ with probability one. When $\text{rank}(\widehat{X}_m) \geq r$ holds, then from the characterization of the subdifferential of the nuclear norm \cite{Wat92, Wat93}, we have that $\widehat{G}_m = \widehat{U}_{m,1} \widehat{V}_{m,1}^\mathbb{T} + \widehat{U}_{m,2} \widehat{\Gamma}_m\widehat{V}_{m,2}^\mathbb{T}$ for some $\widehat{\Gamma}_m \in \mathbb{V}^{(n_1-r)\times (n_2-r)}$ satisfying $\|\widehat{\Gamma}_m\| \leq 1$. Moreover, if $\|\widehat{\Gamma}_m\| <1$, then $\text{rank}(\widehat{X}_m)=r$. Since $\widehat{X}_m \stackrel{p}{\rightarrow} \overline{X}$, by \cite[Proposition 8]{DinST10} we have $\widehat{U}_{m,1}\widehat{V}_{m,1}^\mathbb{T} \stackrel{p}{\rightarrow} \overline{U}_1 \overline{V}_1^\mathbb{T}$. Together with Lemma \ref{lemoper}, the equation (\ref{eqnobs}) and Lemma \ref{lemdelcond}, it is not hard to obtain that \begin{gather} \frac{1}{m\rho_m}\mathcal{R}_\Omega^*\big(\mathcal{R}_\Omega(\widehat{X}_m)-y\big)+\widehat{U}_{m,1}\widehat{V}_{m,1}^\mathbb{T} - F(\widetilde{X}_m)+\gamma_m(\widehat{X}_m-\widetilde{X}_m) \hspace{4cm} \nonumber\\ \hspace{6cm}\stackrel{p}{\rightarrow} \mathcal{Q}_\beta(\widehat{\Delta})+\overline{U}_1 \overline{V}_1^\mathbb{T} - F(\overline{X}) = \overline{U}_2\widehat{\Gamma}\overline{V}_2^\mathbb{T}, \label{eqnsufls} \end{gather} where the equality follows from (\ref{eqndelvaldel}) and $\widehat{\Gamma}$ is the unique optimal solution to (\ref{eqndelcond}). Then, by applying the operator $\mathcal{Q}_\beta^\dag$ to (\ref{eqnsufkkt}), we obtain from (\ref{eqnsufls}) that \begin{equation}\label{eqnsuflsconv} \overline{U}_2^\mathbb{T}\mathcal{Q}_\beta^\dag(\widehat{U}_{m,2}\widehat{\Gamma}_m\widehat{V}_{m,2}^\mathbb{T})\overline{V}_2 \stackrel{p}{\rightarrow} \overline{U}_2^\mathbb{T}\mathcal{Q}_\beta^\dag(\overline{U}_2\widehat{\Gamma}\overline{V}_2^\mathbb{T})\overline{V}_2. 
\end{equation} Since $\widehat{X}_m \stackrel{p}{\rightarrow} \overline{X}$, according to \cite[Proposition 7]{DinST10}, there exist two sequences of matrices $Q_{m,U} \in \mathbb{O}^{n_1-r}$ and $Q_{m,V} \in \mathbb{O}^{n_2-r}$ such that \begin{equation}\label{eqnsufuvprop} \widehat{U}_{m,2}Q_{m,U} \stackrel{p}{\rightarrow} \overline{U}_2 \quad \text{and} \quad \widehat{V}_{m,2} Q_{m,V} \stackrel{p}{\rightarrow} \overline{V}_2. \end{equation} Moreover, the uniqueness of the solution to the linear system (\ref{eqndelcond}) is equivalent to the non-singularity of its linear operator. By combining (\ref{eqnsuflsconv}) and (\ref{eqnsufuvprop}), we obtain that $Q_{m,U}^\mathbb{T} \widehat{\Gamma}_m Q_{m,V} \stackrel{p}{\rightarrow} \widehat{\Gamma}.$ Hence, we obtain that $\|\widehat{\Gamma}_m\|<1$ with probability one since $\|\widehat{\Gamma}\|<1$. As discussed above, it follows that $\text{rank}(\widehat{X}_m) = r$ with probability one. \bigskip \noindent {\bf Proof of Proposition \ref{propdellimpos}} \noindent It is easy to verify that $\widehat{\Delta}_m$ is the optimal solution to \begin{equation}\label{eqndellimapprpos} \begin{aligned} \min_{\Delta\in\mathbb{S}^n} & \ \ {\displaystyle \frac{1}{2m}}\|\mathcal{R}_\Omega(\Delta)\|_2^2 - \frac{\nu}{m \rho_m}\langle \mathcal{R}_\Omega^*(\xi), \Delta\rangle + \langle I_n - F(\widetilde{X}_m), \Delta\rangle + \frac{\rho_m\gamma_m}{2}\|\Delta\|_F^2 \\ & \ \ \hspace{8.1cm} + \gamma_m \langle \overline{X}-\widetilde{X}_m, \Delta \rangle\\ {\rm s.t.} & \ \ \Delta \in \mathcal{F}_m := \rho_m^{-1}(\mathcal{C} \cap \mathbb{S}_+^n-\overline{X}), \end{aligned} \end{equation} where $\mathcal{C} := \big\{X \in \mathbb{S}^n \mid \mathcal{R}_\alpha(X) = \mathcal{R}_\alpha(\overline{X})\big\}$. Let $\Phi_m$ and $\Phi$ denote the objective functions of (\ref{eqndellimapprpos}) and (\ref{eqndellimpos}), respectively. Then $\Phi_m$ converges pointwise in probability to $\Phi$. Moreover, by considering the upper limit and lower limit of the family of feasible sets $\mathcal{F}_m$, we know that $\mathcal{F}_m$ converges in the sense of Painlev{\'e}-Kuratowski to the tangent cone $\mathcal{T}_{\mathcal{C}\cap \mathbb{S}_+^n}(\overline{X})$ (see \cite{RocW98, BonS00}). Note that the Slater condition implies that $\mathcal{C}$ and $\mathbb{S}_+^n$ cannot be separated. Then, from \cite[Theorem 6.42]{RocW98}, we have \( \mathcal{T}_{\mathcal{C}\cap \mathbb{S}_+^n}(\overline{X}) = \mathcal{T}_\mathcal{C}(\overline{X}) \cap \mathcal{T}_{\mathbb{S}_+^n}(\overline{X}). \) Clearly, $\mathcal{T}_\mathcal{C}(\overline{X}) = \{\Delta \in \mathbb{S}^n \mid \mathcal{R}_\alpha(\Delta)=0\}$. Moreover, by Arnold \cite{Arn71}, $$\mathcal{T}_{\mathbb{S}^n_+}(\overline{X}) = \big\{\Delta \in\mathbb{S}^n \mid \overline{P}_2^\mathbb{T} \Delta \overline{P}_2 \in \mathbb{S}_+^{n-r} \big\}.$$ Since epi-convergence of functions corresponds to set convergence of their epigraphs \cite{RocW98}, we obtain that $\delta_{\mathcal{F}_m}$ epi-converges to $\delta_{\mathcal{T}_{\mathcal{C} \cap \mathbb{S}_+^n}} = \delta_{\mathcal{T}_\mathcal{C}} + \delta_{\mathcal{T}_{\mathbb{S}_+^n}}$. Then, from Proposition \ref{propepisum}, $\Phi_m + \delta_{\mathcal{F}_m}$ epi-converges in distribution to $\Phi+\delta_{\mathcal{T}_\mathcal{C}} + \delta_{\mathcal{T}_{\mathbb{S}_+^n}}$. In addition, the optimal solution to (\ref{eqndellimpos}) is unique due to the strong convexity of $\Phi$ over the feasible set $\mathcal{C}\cap \mathbb{S}_+^n$.
Therefore, we complete the proof by applying Proposition \ref{propepicon} on the epi-convergence. \bigskip \noindent {\bf Proof of Lemma \ref{lemdelcondpos}} \noindent Note that the Slater condition also holds for the problem (\ref{eqndellimpos}). (One may check the point $X^0-\overline{X}$.) Hence, $\widehat{\Delta}$ is the optimal solution to (\ref{eqndellimpos}) if and only if there exists $(\widehat{\zeta}, \widehat{\Lambda}) \in \mathbb{R}^{d_1}\times \mathbb{S}^{n-r}$ such that \begin{equation}\label{eqndellimkktpos} \left\{ \begin{aligned} & \mathcal{Q}_\beta(\widehat{\Delta}) + I_n - F(\overline{X}) - \mathcal{R}_\alpha^*(\widehat{\zeta}) - \overline{P}_2 \widehat{\Lambda} \overline{P}_2^\mathbb{T} =0,\\ & \mathcal{R}_\alpha(\widehat{\Delta}) = 0, \\ & \overline{P}_2^\mathbb{T}\widehat{\Delta} \overline{P}_2 \in \mathbb{S}_+^{n-r},\ \widehat{\Lambda} \in \mathbb{S}_+^{n-r},\ \langle \overline{P}_2^\mathbb{T} \widehat{\Delta} \overline{P}_2, \widehat{\Lambda}\rangle =0. \end{aligned}\right. \end{equation} Applying the operator $\mathcal{Q}_\beta^{\dag}$ to the first equation of (\ref{eqndellimkktpos}) yields the equality (\ref{eqndelvaldelpos}). Assume that $\overline{P}_2^\mathbb{T}\widehat{\Delta}\overline{P}_2=0$. Then, it is immediate to obtain from (\ref{eqndelvaldelpos}) that $\widehat{\Lambda}$ is a solution to the linear system (\ref{eqndelcondpos}). Conversely, if the linear system (\ref{eqndelcondpos}) has a solution $\widehat{\Lambda} \in \mathbb{S}_+^{n-r}$, then it is easy to check that (\ref{eqndellimkktpos}) is satisfied with $\widehat{\Delta}$ given by (\ref{eqndelvaldelpos}) and $\widehat{\zeta}=\mathcal{R}_\alpha\big(I_n - F(\overline{X})-\overline{P}_2\widehat{\Lambda}\,\overline{P}_2^\mathbb{T}\big)$. Then, $\overline{P}_2^\mathbb{T} \widehat{\Delta} \overline{P}_2 = 0$ directly follows from (\ref{eqndelvaldelpos}) and the first equation of (\ref{eqndellimkktpos}). \bigskip \noindent {\bf Proof of Theorem \ref{thmsufpos}} \noindent The Slater condition implies that $\widehat{X}_m$ is the optimal solution to (\ref{eqnrcspos}) if and only if there exist multipliers $(\widehat{\zeta}_m, \widehat{S}_m) \in \mathbb{R}^{d_1} \times \mathbb{S}^{n}$ such that $(\widehat{X}_m, \widehat{\zeta}_m, \widehat{S}_m)$ satisfies the KKT conditions: \begin{equation}\label{eqnsufkktpos} \left\{ \begin{aligned} &\frac{1}{m}\mathcal{R}_\Omega^*\big(\mathcal{R}_\Omega(\widehat{X}_m)\!-\!y\big)+\rho_m \big(I_n\!-\! F(\widetilde{X}_m)+\gamma_m(\widehat{X}_m \!- \!\widetilde{X}_m)\big)\!- \!\mathcal{R}_\alpha^*(\widehat{\zeta}_m) - \widehat{S}_m=0, \\ &\mathcal{R}_\alpha(\widehat{X}_m) = \mathcal{R}_\alpha(\overline{X}), \\ &\widehat{X}_m \in \mathbb{S}_+^n,\ \widehat{S}_m \in \mathbb{S}_+^n,\ \langle \widehat{X}_m, \widehat{S}_m \rangle =0. \end{aligned}\right. \end{equation} The third equation of (\ref{eqnsufkktpos}) implies that $\widehat{X}_m$ and $\widehat{S}_m$ can have a simultaneous eigenvalue decomposition. Let $\widehat{P}_m \in \mathbb{O}^{n}(\widehat{X}_m)$ with $\widehat{P}_{m,1} \in \mathbb{O}^{n\times r}$ and $\widehat{P}_{m,2} \in \mathbb{O}^{n\times (n-r)}$. From Theorem \ref{thmcons} and Corollary \ref{cororkrhs}, we know that $\text{rank}(\widehat{X}_m) \geq r$ with probability one. When $\text{rank}(\widehat{X}_m) \geq r$ holds, we can write $\widehat{S}_m = \widehat{P}_{m,2} \widehat{\Lambda}_m\widehat{P}_{m,2}^\mathbb{T}$ for some diagonal matrix $\widehat{\Lambda}_m \in \mathbb{S}^{n-r}_+$.
In addition, if $\widehat{\Lambda}_m \in \mathbb{S}^{n-r}_{++}$, then $\text{rank}(\widehat{X}_m)=r$. Since $\widehat{X}_m \stackrel{p}{\rightarrow} \overline{X}$, according to \cite[Proposition 1]{DinST10}, there exists a sequence of matrices $Q_m \in \mathbb{O}^{n-r}$ such that $\widehat{P}_{m,2}Q_m \stackrel{p}{\rightarrow} \overline{P}_2$. Then, using arguments similar to those in the proof of Theorem \ref{thmsuf}, we obtain that $Q_m^\mathbb{T} \widehat{\Lambda}_m Q_m \stackrel{p}{\rightarrow} \widehat{\Lambda}$. Since $\widehat{\Lambda}\in \mathbb{S}_{++}^{n-r}$, we have $\widehat{\Lambda}_m \in \mathbb{S}_{++}^{n-r}$ with probability one. Thus, we complete the proof. \bigskip \noindent {\bf Proof of Proposition \ref{propcorcn}} \noindent For the real covariance matrix case, the proof is given in \cite[Lemma 3.3]{QiS06} and \cite[Proposition 2.1]{QiS11}. For the complex covariance matrix case, one can use similar arguments to prove the result. We next consider the density matrix case. Suppose that $\overline{X}$ satisfies the density constraint, i.e., $\mathcal{R}_{\alpha}(\overline{X})=\text{Tr}(\overline{X})=1$. Note that for any $t\in\mathbb{R}$, we have $t \overline{X} \in \text{lin}(\mathcal{T}_{\mathcal{H}_+^n}(\overline{X}))$. This, along with $\text{Tr}(\overline{X})=1$, implies that \( \text{Tr}\big(\text{lin}(\mathcal{T}_{\mathcal{H}_+^n}(\overline{X}))\big) = \mathcal{R}_{\alpha}\big(\text{lin}(\mathcal{T}_{\mathcal{H}_+^n}(\overline{X}))\big) =\mathbb{R}. \) This means that the constraint nondegeneracy condition (\ref{eqncndcpos}) holds. \bigskip \noindent {\bf Proof of Proposition \ref{propsoluni}} \noindent We prove the rectangular case by contradiction. Assume that there exists some nonzero $\overline{\Gamma} \in \mathbb{V}^{(n_1-r)\times (n_2-r)}$ such that $\mathcal{B}_2(\overline{\Gamma}) = \overline{U}_2^\mathbb{T}\mathcal{Q}_\beta^\dag(\overline{U}_2 \overline{\Gamma} \, \overline{V}_2^\mathbb{T})\overline{V}_2 = 0$. By noting that $\mathcal{Q}_\beta^\dag$ is a self-adjoint and positive semidefinite operator, we obtain $(\mathcal{Q}_\beta^\dag)^{1/2}(\overline{U}_2 \overline{\Gamma} \, \overline{V}_2^\mathbb{T})= 0$. It follows that $\mathcal{P}_\beta(\overline{U}_2 \overline{\Gamma} \, \overline{V}_2^\mathbb{T})= 0$. This, together with $\overline{\Gamma}\neq 0$, implies that $\overline{U}_2 \overline{\Gamma} \, \overline{V}_2^\mathbb{T} = \mathcal{P}_\alpha(\overline{U}_2 \overline{\Gamma} \, \overline{V}_2^\mathbb{T}) \neq 0$ and moreover $\mathcal{R}_\alpha(\overline{U}_2 \overline{\Gamma} \, \overline{V}_2^\mathbb{T}) \neq 0$. However, for any $H \in \mathcal{T}(\overline{X})$, we have $$\langle \mathcal{R}_\alpha(\overline{U}_2 \overline{\Gamma}\, \overline{V}_2^\mathbb{T}), \mathcal{R}_\alpha(H)\rangle = \langle\mathcal{P}_\alpha(\overline{U}_2 \overline{\Gamma} \, \overline{V}_2^\mathbb{T}), H\rangle = \langle \overline{U}_2 \overline{\Gamma} \, \overline{V}_2^\mathbb{T}, H\rangle = \langle \overline{\Gamma}, \overline{U}_2^{\mathbb{T}} H \overline{V}_2 \rangle =0.$$ Thus, the constraint nondegeneracy condition (\ref{eqncndc}) implies that $\mathcal{R}_\alpha(\overline{U}_2 \overline{\Gamma} \, \overline{V}_2^\mathbb{T})=0$. This leads to a contradiction. Therefore, the linear operator $\mathcal{B}_2$ is positive definite. The proof for the positive semidefinite case is similar.
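The directional-derivative formula of \cite[Theorem 1]{Wat92} and the subdifferential characterization of the nuclear norm \cite{Wat92, Wat93}, both used repeatedly in the proofs above, are easy to spot-check numerically. The following illustrative sketch (not part of the proofs) compares a one-sided finite difference of $\|\cdot\|_*$ at a rank-$r$ matrix with the closed form $\langle \overline{U}_1\overline{V}_1^\mathbb{T}, \Delta \rangle + \|\overline{U}_2^\mathbb{T} \Delta \overline{V}_2\|_*$, and then tests the subgradient inequality for $G = \overline{U}_1\overline{V}_1^\mathbb{T} + \overline{U}_2\Gamma\overline{V}_2^\mathbb{T}$ with $\|\Gamma\|\leq 1$: \begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n1, n2, r = 6, 5, 2
Xbar = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
U, s, Vt = np.linalg.svd(Xbar)
U1, U2, V1, V2 = U[:, :r], U[:, r:], Vt[:r].T, Vt[r:].T
nuc = lambda M: np.linalg.norm(M, 'nuc')

# Directional derivative of the nuclear norm at the rank-r point Xbar
Delta = rng.standard_normal((n1, n2))
rho = 1e-6
fd = (nuc(Xbar + rho * Delta) - nuc(Xbar)) / rho
cf = np.sum(U1 @ V1.T * Delta) + nuc(U2.T @ Delta @ V2)
print(fd - cf)    # O(rho): finite difference matches Watson's formula

# Subgradient G = U1 V1' + U2 Gamma V2' with spectral norm of Gamma <= 1
G0 = rng.standard_normal((n1 - r, n2 - r))
G = U1 @ V1.T + U2 @ (G0 / np.linalg.norm(G0, 2)) @ V2.T
Y = rng.standard_normal((n1, n2))
print(nuc(Y) >= nuc(Xbar) + np.sum(G * (Y - Xbar)))   # True
\end{verbatim}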
\bigskip \noindent {\bf Proof of Theorem \ref{thmrccordencons}} \noindent From Propositions \ref{propcorcn} and \ref{propsoluni}, for both cases, the linear system (\ref{eqndelcondpos}) has a unique solution $\widehat{\Lambda}$. Moreover, uniform sampling results in $\mathcal{Q}_\beta^\dag = \mathcal{P}_\beta /d_2$. Thus, from (\ref{eqndelcondpos}), we get \begin{equation}\label{eqnrccordenconseq} \widehat{\Lambda} - \overline{P}_2^\mathbb{T}\mathcal{P}_\alpha(\overline{P}_2\widehat{\Lambda}\overline{P}_2^\mathbb{T})\overline{P}_2=\overline{P}_2^\mathbb{T}\mathcal{P}_\beta(\overline{P}_2\widehat{\Lambda}\overline{P}_2^\mathbb{T})\overline{P}_2 = \overline{P}_2^\mathbb{T}\mathcal{P}_\beta(I_n-F(\overline{X}))\overline{P}_2. \end{equation} We first prove the covariance matrix completion case by contradiction. Without loss of generality, we assume that the first $l$ diagonal entries are fixed and positive. Then, for any $X\in\mathbb{S}_+^n$, $\mathcal{P}_\alpha(X)$ is the diagonal matrix whose first $l$ diagonal entries are $X_{ii}$, $1\leq i\leq l$, respectively, and whose other entries are $0$. Assume that $\widehat{\Lambda} \notin \mathbb{S}^{n-r}_{++}$, i.e., $\lambda_{\rm min}(\widehat{\Lambda})\leq 0$, where $\lambda_{\rm min}(\cdot)$ denotes the smallest eigenvalue. Then, we have $$\lambda_{\rm min}(\widehat{\Lambda}) = \lambda_{\rm min}(\overline{P}_2\widehat{\Lambda}\overline{P}_2^\mathbb{T}) \leq \lambda_{\rm min}\big(\mathcal{P}_\alpha(\overline{P}_2\widehat{\Lambda}\overline{P}_2^\mathbb{T})\big) \leq \lambda_{\rm min}\big(\overline{P}_2^\mathbb{T}\mathcal{P}_\alpha (\overline{P}_2\widehat{\Lambda}\overline{P}_2^\mathbb{T})\overline{P}_2\big), $$ where the equality follows from the fact that $\widehat{\Lambda}$ and $\overline{P}_2 \widehat{\Lambda}\overline{P}_2^\mathbb{T}$ have the same nonzero eigenvalues, the first inequality follows from the fact that the vector of diagonal entries is majorized by the vector of eigenvalues (e.g., see \cite[Theorem 9.B.1]{MarOA10}), and the second inequality follows from the Courant-Fischer min-max theorem (e.g., see \cite[Theorem 20.A.1]{MarOA10}). As a result, the left-hand side of (\ref{eqnrccordenconseq}) is not positive definite. Notice that $\overline{P}_2^\mathbb{T} F(\overline{X})\overline{P}_2=0$. Thus, the right-hand side of (\ref{eqnrccordenconseq}) can be written as $$\overline{P}_2^\mathbb{T}\mathcal{P}_\beta(I_n-F(\overline{X}))\overline{P}_2 = \overline{P}_2^\mathbb{T}\mathcal{P}_\beta(I_n)\overline{P}_2 +\overline{P}_2^\mathbb{T} \mathcal{P}_\alpha(F(\overline{X}))\overline{P}_2 = \overline{P}_2^\mathbb{T} \big(\mathcal{P}_\beta(I_n)+\mathcal{P}_\alpha(F(\overline{X}))\big)\overline{P}_2.$$ Since $\text{rank}(\overline{X})=r$, with the choice (\ref{eqnfcorden}) of $F$, we have that for any $1\leq i\leq l$, $$\overline{X}_{ii} = \sum_{j=1}^r\lambda_j(\overline{X}) |\overline{P}_{ij}|^2 >0 \quad \text{implies} \quad \big(F(\overline{X})\big)_{ii} = \sum_{j=1}^r f_i\big(\lambda_j(\overline{X})\big) |\overline{P}_{ij}|^2 >0.$$ Moreover, $\mathcal{P}_\beta(I_n)$ is the diagonal matrix with the last $n-l$ diagonal entries being ones and the other entries being zeros. Thus, $\mathcal{P}_\beta(I_n)+\mathcal{P}_\alpha(F(\overline{X}))$ is a diagonal matrix with all positive diagonal entries. It follows that the right-hand side of (\ref{eqnrccordenconseq}) is positive definite. Thus, we obtain a contradiction. Hence, $\widehat{\Lambda} \in \mathbb{S}^{n-r}_{++}$. Then, from Theorem \ref{thmsufpos}, we obtain the rank consistency.
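Before turning to the density matrix case, note that the two eigenvalue inequalities invoked in the chain above are elementary to confirm numerically; the following quick check (illustrative only, not part of the proof) verifies the Schur-Horn majorization consequence $\lambda_{\rm min}(S) \leq \min_i S_{ii}$ and the Courant-Fischer bound $\lambda_{\rm min}(V^\mathbb{T} S V) \geq \lambda_{\rm min}(S)$ for $V$ with orthonormal columns. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, r = 6, 4
H = rng.standard_normal((n, n)); S = (H + H.T) / 2

# diag(S) is majorized by eig(S), hence lambda_min(S) <= min_i S_ii
print(np.linalg.eigvalsh(S)[0] <= np.diag(S).min())       # True

# Courant-Fischer: compressions cannot decrease the smallest eigenvalue
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
print(np.linalg.eigvalsh(V.T @ S @ V)[0]
      >= np.linalg.eigvalsh(S)[0])                        # True
\end{verbatim}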
For the density matrix completion, $\mathcal{P}_\alpha(\cdot) = \frac{1}{n}\text{Tr}(\cdot)I_n$. By further using $\overline{P}_2^\mathbb{T} F(\overline{X})\overline{P}_2= 0$ and $\mathcal{P}_\beta(I_n)=0$, we can rewrite (\ref{eqnrccordenconseq}) as $$ \widehat{\Lambda}-\frac{1}{n}\text{Tr}(\widehat{\Lambda})I_{n-r} = \frac{1}{n}\text{Tr}(F(\overline{X}))I_{n-r}.$$ By taking the trace on both sides, we get $\text{Tr}(\widehat{\Lambda}) = \frac{n-r}{r}\text{Tr}(F(\overline{X}))$; substituting this back, we obtain that $\widehat{\Lambda} = \frac{1}{r}\text{Tr}(F(\overline{X}))I_{n-r}.$ Since $\overline{X}$ is a density matrix of rank $r$, with the choice (\ref{eqnfcorden}) of $F$, we have that $$\text{Tr}(\overline{X}) = \sum_{i=1}^n \sum_{j=1}^r\lambda_j(\overline{X}) |\overline{P}_{ij}|^2 = 1 \quad \text{implies} \quad \text{Tr}\big(F(\overline{X})\big) = \sum_{i=1}^n\sum_{j=1}^r f_i\big(\lambda_j(\overline{X})\big) |\overline{P}_{ij}|^2 >0.$$ It follows that $\widehat{\Lambda} \in \mathbb{S}_{++}^{n-r}$ and thus we obtain the rank consistency. \bibliographystyle{plain}
{ "timestamp": "2012-10-16T02:01:53", "yymm": "1210", "arxiv_id": "1210.3709", "language": "en", "url": "https://arxiv.org/abs/1210.3709" }
\section{The Double Chooz Experiment} The primary goal of DC is the measurement of the neutrino oscillation parameter $\theta_{13}$ through $\bar \nu_e$ disappearance. The design of the Daya Bay and RENO detectors is similar to that of DC \cite{dc2012}. All three experiments use the inverse beta decay (IBD) interaction ($\bar \nu_e + p \rightarrow e^+ + n$) in liquid scintillator. This interaction is identified by a correlated pair of signals, the first consistent with a positron and the second consistent with a $n$-capture. The DC far detector is positioned 1050 m from the two 4.25 GW$_{th}$ (thermal power) cores of the Chooz Nuclear Power Plant. It consists of four concentric cylindrical regions, with centered chimneys for filling and insertion of calibration sources. The innermost cylinder is the ``Neutrino Target'' (NT), a 10 m$^3$ volume of gadolinium-doped liquid scintillator. The acrylic NT cylinder is surrounded by a 55 cm thick ``$\gamma$ Catcher'' (GC) consisting of Gd-free scintillator. The acrylic cylinder of the GC is immersed in a 105 cm thick nonscintillating oil ``buffer region'' containing 390 10-inch photomultiplier tubes (PMT). These three cylinders, collectively called the ``inner detector'' (ID), are contained in a stainless steel vessel which is encompassed by a 50 cm thick liquid scintillator region forming the ``Inner Veto'' (IV). The IV is surrounded by 15 cm of demagnetized steel, followed by rock. Above this system is the ``Outer Veto'' (OV), consisting of segmented scintillator modules for muon tracking. The detector is shielded from cosmic rays by a 300 meters water equivalent (m.w.e.)\ rock overburden, in a hill topology. The dominant backgrounds in the reactor neutrino experiments are: spallation products, particularly $^9$Li and $^8$He, produced by cosmic muons interacting in oil, which emit a neutron immediately following the $\beta$-decay process; stopping muons; and fast neutrons produced by muons in the surrounding rock. In this Letter, we refer to the first as ``$\beta$-$n$ backgrounds,'' while the latter two are collectively called ``$\mu$/fast-$n$'' backgrounds. These are directly measured by reactor-off running. Since the DC overburden is similar to those of Daya Bay and RENO, these results can be applied to those experiments with modest scaling for depth variations. A direct measurement of the backgrounds in the DC oscillation analyses is performed by applying the same $\bar{\nu}_e$ selection criteria as in Refs.\ \cite{dc2011} and \cite{dc2012} to the reactor-off data sample. A minimal set of selection cuts was applied in \cite{dc2011} (``DCI selection''). Two extra cuts were added in \cite{dc2012} (``DCII selection'') to reduce background contamination in the $\bar{\nu}_e$ candidate sample. The results presented here apply to both the DCI and DCII selections, comparing the reactor-off data with expectations from the published reactor-on oscillation analyses~\cite{dc2012}. Candidates are extracted from a sample of triggers (``singles'') above 0.5 MeV that are neither tagged as a background known as ``light noise,'' nor vetoed by the 1 ms muon veto ($\mu$ veto) \cite{dc2012}. The DCI selection then applies four cuts to the prompt ($e^+$) and delayed ($n$) IBD signals: 1) time difference: $2~\mu$s $<$ $\Delta t_{{\rm prompt}/n}$ $<$ $100~\mu$s; 2) prompt trigger: $0.7~{\rm MeV} < E_{\rm prompt} < 12.2~{\rm MeV}$; 3) delayed trigger: $6.0~{\rm MeV} < E_{n} < 12.0~{\rm MeV}$; 4) multiplicity: no additional valid triggers from $100~\mu$s preceding the prompt signal to $400~\mu$s after it.
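For concreteness, the coincidence logic of cuts 1)--4) can be written schematically as follows; this is an illustrative sketch only (the event model and field names are hypothetical), not the collaboration's analysis code. \begin{verbatim}
def select_ibd_pairs(triggers):
    """DCI-style selection; 'triggers' is a time-ordered list of
    dicts with 't' (microseconds) and 'E' (MeV)."""
    pairs = []
    for i, prompt in enumerate(triggers):
        if not (0.7 < prompt['E'] < 12.2):             # cut 2
            continue
        for delayed in triggers[i + 1:]:
            dt = delayed['t'] - prompt['t']
            if dt > 100.0:                              # cut 1, upper edge
                break
            if dt < 2.0 or not (6.0 < delayed['E'] < 12.0):  # cuts 1, 3
                continue
            others = [x for x in triggers               # cut 4: multiplicity
                      if x is not prompt and x is not delayed
                      and -100.0 <= x['t'] - prompt['t'] <= 400.0]
            if not others:
                pairs.append((prompt, delayed))
    return pairs
\end{verbatim}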
The DCII selection further rejects candidates according to two more conditions: 5) cosmogenic $\beta$-$n$ background reduction: candidates within a 0.5 s window after a muon depositing high energy ($>$600 MeV) crosses the ID (``showering-$\mu$ veto''); 6) $\mu$/fast-$n$ background reduction: candidates whose prompt signal is coincident with an OV signal (OV veto). During the reactor-off period, the total and showering muon rates (ID only) were 46 and 0.10 s$^{-1}$, respectively, consistent with those during the reactor-on period to 4\% \cite{dc2012}. By applying the $\mu$ veto without and with the additional DCII showering-$\mu$ veto, 7.19 and 6.84 live days, respectively, are obtained. Within these times, a singles rate of 11.01 s$^{-1}$ is measured, again consistent, within 4\%, with that during the reactor-on period. Hence, the same accidental background level is expected for DCI and DCII. Table~\ref{tab:expBkg} shows the estimated background and observed reactor-off event rates for both the DCI and DCII selections. In all cases, the background rate estimation relies on data published in~\cite{dc2012}. The accidental rate uncertainties quoted include an additional effect of day-to-day variations, negligible in~\cite{dc2012}. For the DCII selection, the $^9$Li rate corresponds to the value used as an input for the oscillation fit, which is consistent with the fit output, and the $\mu$/fast-$n$ rate is smaller than that reported in~\cite{dc2012} since OV duty-cycle was 100\% during the reactor-off period. \begin{table}[tb] \caption{ Background rate estimates~\cite{dc2012}, in events/day, for the reactor-off data sample, compared to observation, for the two selections described in the text. \label{tab:expBkg}} \begin{tabular}{cccc|c|c} \hline \hline Rate & $\beta$-$n$ & Accidental & $\mu$/fast $n$ & Total & Total \\ (day$^{-1}$) & & & & Est. & Obs. \\ \hline\hline DCI & 2.10$\pm$0.57 & 0.35$\pm$0.02 & 0.93$\pm$0.26 & 3.4$\pm$0.6 & 2.7$\pm$0.6\\ DCII & 1.25$\pm$0.54 & 0.26$\pm$0.02 & 0.44$\pm$0.20 & 2.0$\pm$0.6& 1.0$\pm$0.4 \\ \hline \hline \end{tabular} \end{table} In order to evaluate the residual neutrino spectrum in the reactor-off period, a dedicated simulation has been performed with FISPACT \cite{FISPACT}, an evolution code predicting the isotope inventory in the reactor cores. The neutrino spectrum is then computed using the BESTIOLE \cite{BESTIOLE} database. The resulting total number of expected neutrino interactions during the reactor-off period is 2.01$\pm$0.80, which, when corrected for the live time ($\mu$ vetoes) and the detection efficiency computed in \cite{dc2012}, yields an expected number of detected neutrino events of 1.49$\pm$0.60 (1.42$\pm$0.57) in the DCI (DCII) analysis. The dominant contribution comes from long-half-life isotopes, so the time distribution of these events is expected to be essentially flat over the several-day reactor-off period. The application of the $\bar{\nu}_e$ selection cuts to the reactor-off data sample yields 21 (8) $\bar{\nu}_e$ candidates in the DCI (DCII) analysis. The DCII analysis vetoes five events using the showering-$\mu$ veto ($\beta$-$n$-like events), and another eight using the OV veto ($\mu$/fast-$n$-like events). Figure \ref{fig:DCIBKG} shows the prompt energy distribution of the candidates, superimposed on the expected spectra of background events and residual neutrinos. 
Once the expected number of detected neutrinos is subtracted, these numbers yield a measured total background of 2.7$\pm$0.6 events/day (1.0$\pm$0.4 events/day) using DCI (DCII). This result is consistent with the background estimates, as shown in Table \ref{tab:expBkg}, confirming the reliability of the background model for the oscillation analysis. \begin{figure}[tb] \includegraphics[width=0.9\linewidth]{BKG_Std_bkgs.pdf} \includegraphics[width=0.9\linewidth]{BKG_Li9Red_InputLi9Rate_bkgs-eps-converted-to.pdf} \caption{$\bar{\nu}_e$ candidates in the reactor-off data sample, with breakdown by components. Top and bottom figures show DCI and DCII selection results, respectively. Black points: data; histogram: background+$\bar{\nu}_e$ expectation. \label{fig:DCIBKG}} \end{figure} The accidental background rate obtained in the reactor-off data sample is 0.26$\pm$0.02 events/day, in perfect agreement with the prediction in Table~\ref{tab:expBkg}. Unlike other backgrounds, accidentals have no spatial correlation between the prompt and delayed signals. One event in the reactor-off sample with distance between the vertices $\Delta r \approx$3.5~m is clearly accidental-like. Following the analysis presented in \cite{dc2012}, the cosmogenic $\beta$-$n$ background rate can be determined from the time correlation to the parent muon. An exponential decay plus a constant background is fit to the time difference ($\Delta t_{\mu\nu}$) distribution between muons and IBD candidates. DCI selection plus the OV veto (to reduce $\mu$/fast-$n$ contamination) yields $1.7\pm0.9$ $\beta$-$n$-events/day. The number remaining after DCII selection is $1.1\pm0.8$ events/day. The results are in good agreement with the $\Delta t_{\mu\nu}$ fit of the reactor-on data, which indicated $2.1\pm0.6$ ($1.3\pm 0.5$) events/day for DCI+OV (DCII) selection~\cite{dc2012}. The five events tagged by the showering-$\mu$ veto correspond to a $\beta$-$n$ rate of 0.70$\pm$0.31 events/day, consistent with the value in~\cite{dc2012}: 0.89$\pm$0.10 events/day. A sample of stopping muons and fast neutrons is obtained by applying the OV veto (cut 6) to the candidates passing the DCI selection. Eight events are tagged by the OV in the range $E_{\rm prompt}$~=~0.7 to 12.2 MeV, while four are found between 12.2 and 30 MeV, where only $\mu$/fast-$n$ background is expected. Of these, ten events have $\Delta t<3~\mu$s, and their reconstructed vertices populate the region below the detector chimney. These are classified as stopping muons that decay. The remaining two candidates are farther from the chimney and have large $\Delta t$, as expected for fast-neutron events. The overall OV tagging rate for $E_{\rm prompt}<$~30~MeV in the reactor-off period is 1.67$\pm$0.48 events/day, in good agreement with that observed in the reactor-on data: 1.70$\pm$0.10 events/day. Both IV and OV tagging techniques \cite{dc2012} were applied to the reactor-off data, yielding results consistent with those of the reactor-on analysis. The rates of the IBD candidates originating from fast-$n$ (excluding stopped-$\mu$'s) and $\beta$-$n$ backgrounds can be scaled to other experimental sites, such as those of the Daya Bay and RENO detectors and the future DC near detector. As these backgrounds are produced by muons, the first step is scaling the muon flux ($\Phi_\mu$) and mean energy ($\langle E_\mu\rangle$). IBD rates from fast-$n$ and $\beta$-$n$ isotope production can then be computed. 
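Schematically, this scaling takes the form of the power law $R(h) \propto \Phi_\mu(h)\,\langle E_\mu(h)\rangle^\alpha$ introduced below, anchored at the DC far site. As a rough illustrative sketch (not the analysis code), using the values quoted in Tables~\ref{tab:scaleinputs} and \ref{tab:muenergy}: \begin{verbatim}
def scale_rate(rate_dc, phi, e_mu, alpha, phi_dc=0.72, e_mu_dc=63.7):
    """R(h) ~ Phi_mu(h) * <E_mu(h)>**alpha, anchored at DC far."""
    return rate_dc * (phi / phi_dc) * (e_mu / e_mu_dc) ** alpha

# Fast-n, DCI: 0.49 per (day * 1e30 H) at DC far, alpha = 0.74,
# scaled to Daya Bay EH1 (Phi = 1.08 m^-2 s^-1, <E_mu> = 58.5 GeV):
print(scale_rate(0.49, 1.08, 58.5, 0.74))
# ~0.69, cf. 0.67 in the table below, which also includes the small
# (<3%) scintillator-composition factors.
\end{verbatim}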
The muon flux (in $\mu$/cm$^2$/s) at the DC far site is estimated using two independent methods: the total measured muon rate ($\mu$/s) divided by either 1) the effective detector area, or 2) the detector volume, then multiplied by the average path length within the volume. The two methods yield consistent results and are in agreement with a simulation using the MUSIC/MUSUN code~\cite{Music}, which includes a detailed description of the overburden topology. The results also agree with measurements by the CHOOZ experiment~\cite{Chooz}, once the definition of the effective area is correctly taken into account. An average of estimates 1) and 2) is taken as the DC far flux, with an error estimated from the difference between measurement and simulation. A MUSIC/MUSUN simulation also yields the average muon energy at the DC far site. The values are summarized in Table~\ref{tab:scaleinputs}, including measured rates of fast-$n$ and $\beta$-$n$ backgrounds. The fast-$n$ rate was computed as in~\cite{dc2012} for the reactor-on data sample, both using the OV veto (DCII) on the subsample where the OV was fully operational, and on the whole sample excluding this cut (DCI). \begin{table}[tb] \caption{Values for the relevant quantities at the DC far site, used as input for scaling backgrounds with depth. \label{tab:scaleinputs}} \begin{tabular}{ll} \hline\hline Muon flux $\Phi_\mu^{DC}$ & 0.72 $\pm$ 0.04 m$^{-2}$s$^{-1}$ \\ Mean muon energy $\langle E_\mu^{DC} \rangle$ & 63.7 $\pm$ 0.8 GeV \\ Fast-$n$ background rate & 0.33 $\pm$ 0.16 d$^{-1}$ DCI \\ & 0.23 $\pm$ 0.18 d$^{-1}$ DCII \\ {$\beta$-$n$} background rate & 1.7 $\pm$ 0.9 d$^{-1}$ DCI + OV \\ & 1.1 $\pm$ 0.8 d$^{-1}$ DCII \\ \hline\hline \end{tabular} \end{table} The measured muon flux was scaled following two different empirical methods~\cite{Reichenbacher,Bugaev}. Both are applicable for shallow depths and provide consistent results. Such methods assume a flat overburden. The shape of the overburden affects the overall rate, but has only a minor impact on the evolution of the rate with depth. As a realistic evaluation of the effect, we find the difference between the rates for a flat overburden and the hill profile at the DC far site to be 11\%. The mean muon energy was calculated at various depths using the MUSIC/MUSUN simulation code. We take the uncertainty on these values due to overburden shape to be 3.6\%: this comes from our calculations of the mean muon energies at a depth of 300 m.w.e.\ assuming either a flat overburden or the Double Chooz hill profile. The uncertainty due to rock composition is 3.5\% and comes from comparing our results for ``standard'' rock (density 2.65 g/cm$^3$) to those for Chooz rock (density 2.80 g/cm$^3$). An overall systematic error of 6.1\% on mean muon energies additionally takes into account the numerical approximations introduced in the simulation and the uncertainty on the primary muon flux. The muon fluxes and mean energies at the various experimental sites are shown in Table \ref{tab:muenergy}; they are in good agreement with the values quoted in \cite{DB}. \begin{table}[tb] \begin{center} \caption{Muon flux and mean muon energy at the DC near, Daya Bay (DB) and RENO experimental sites. \label{tab:muenergy}} \begin{tabular}{lccccc} \hline\hline Detector & depth & \multicolumn{2}{c}{$\Phi_\mu$ (m$^{-2}$s$^{-1}$)} & \multicolumn{2}{c}{$\langle E_\mu\rangle$ (GeV)} \\ & (m.w.e.)
& quoted & calculated & quoted & calculated \\ \hline RENO Near & 120 & N/A & $4.84 \pm 0.27$ & N/A & $33.3\pm2.0$ \\ DC Near & 150 & N/A & $3.12 \pm 0.17$ & N/A & $39.7\pm2.4$ \\ DB EH1 & 250 & 1.27 & $1.08 \pm 0.06$ & 57 & $58.5\pm3.6$ \\ DB EH2 & 265 & 0.95 & $0.95 \pm 0.05$ & 58 & $61.0\pm3.7$ \\ RENO Far & 450 & N/A & $0.28 \pm 0.02$ & N/A & $89.3\pm5.4$ \\ DB EH3 & 860 & 0.056 & $0.05 \pm 0.01$ & 137 & $139.8\pm8.5$ \\ \hline\hline \end{tabular} \end{center} \end{table} The rates of IBD candidates from fast neutrons and $\beta$-$n$ isotopes were assumed to scale with depth ($h$) according to power laws \cite{Zatsepin, Wang}: $$ R_{n/\beta-n}(h) \propto \Phi_\mu(h) \cdot \langle E_\mu(h)\rangle^\alpha\,.$$ Factors due to scintillator composition, summarized in Table~\ref{tab:ScintComp2}, were taken into account, and affect the results by no more than 3\%. Background rates can depend on several other aspects of the experimental apparatus: acceptance, $\mu$ detection efficiency, neutron shielding type and thickness, selection cuts, etc. Thus, detailed use of these rates for other experiments requires corrections to adapt from our detector to the detector of interest. \begin{table}[t] \begin{center} \caption{Different liquid scintillator (LS) properties used for background rate scaling. $M$ indicates the total mass and $m_{LS}$ the molecular mass of the LS, $N_{C/LS}$ and $N_{H/LS}$ are the number of carbon or hydrogen atoms per molecule of LS, $N_C$ ($N_H$) the total number of carbon (hydrogen) atoms in the detector target. \label{tab:ScintComp2}} \begin{tabular}{lccccccc} \hline \hline Experiment & $M$ & $m_{LS}$ & $N_{C_{LS}}$ & $N_{H_{LS}}$ & $N_C$ & $N_H$ \\ & (tons) & (g/mol) & & & $(10^{29})$ &$(10^{29})$ \\ \hline\hline DC & 8.24 & 178.33 & 12.67 & 24.65 & 3.53 & 6.75 \\ RENO & 16.0 & 246.43 & 18 & 30 & 7.04 & 11.7 \\ Daya Bay & 20.0 & 246.43 & 18 & 30 & 8.80 & 14.7 \\ KamLAND & 913.4 & 160.31 & 11 & 22 & 385 & 767 \\ \hline \hline \end{tabular} \end{center} \end{table} For fast-$n$, $\alpha=0.74$ is used, as estimated in~\cite{Zatsepin,Wang} from rates measured by several experiments at different depths. The prompt signal in fast-$n$ background events arises from the recoil of a free proton in the target; for simplicity, we scale the rate to the number of hydrogen atoms in the target scintillators, assuming that interactions scale with detector volume, as is frequently done in the literature. The results are summarized in Table~\ref{tab:rescn} and compared to measured values~\cite{DB,RENO}, normalized to the muon flux at the DC far site, in Fig.~\ref{fig:rescn}. The value quoted by RENO is obtained without a dedicated muon veto, and is thus comparable to our DCI result, while Daya Bay applies a water muon veto and is thus more similar to our DCII results. The Daya Bay measurements are lower than our extrapolation, which could be due to the water surrounding their detectors. For RENO, our extrapolation yields lower values than the measured ones \begin{table}[tb] \caption{Fast-$n$ background rates measured at DC far and scaled to other depths. \label{tab:rescn}} \begin{tabular}{lccc} \hline\hline & & \multicolumn{2}{c}{\bf Fast-$n$ background rate}\\ Detector & depth & \multicolumn{2}{c}{\bf (day $\cdot 10^{30}$H)$^{-1}$} \\ & (m.w.e.) 
& no OV veto & OV veto\\ \hline RENO near & 120 & 2.0 $\pm$ 1.0 & 1.4 $\pm$ 1.1 \\ DC near & 150 & 1.44 $\pm$ 0.76 & 1.01 $\pm$ 0.82 \\ Daya Bay EH1 & 250 & 0.67 $\pm$ 0.33 & 0.46 $\pm$ 0.37 \\ Daya Bay EH2 & 265 & 0.60 $\pm$ 0.30 & 0.42 $\pm$ 0.33 \\ DC far & 300 & 0.49 $\pm$ 0.24 & 0.34 $\pm$ 0.27 \\ RENO far & 450 & 0.24 $\pm$ 0.12 & 0.16 $\pm$ 0.13 \\ Daya Bay EH3 & 860 & 0.06 $\pm$ 0.03 & 0.04 $\pm$ 0.03 \\ \hline\hline \end{tabular} \end{table} \begin{figure}[htb] \includegraphics[width=\linewidth]{Neutron5.pdf} \caption{Scaling of DC fast-$n$ background rates and comparison with quoted values. Empty (full) markers indicate quoted results using a selection without (with) an external muon veto; lines and shaded bands represent our scaling of the DC measurements with their uncertainty. Values were scaled by number of H atoms and normalized to muon flux at DC far site. \label{fig:rescn}} \end{figure} For the scaling of {$\beta$-$n$} rates, the exponent $\alpha$ has never been measured experimentally. In~\cite{Hagner}, the combined rate of {$^9$Li} and {$^8$He} was measured at a single energy, and the value $\alpha =$ 0.73$\pm$0.10 was used to extrapolate this rate to KamLAND and Borexino energies. In~\cite{KamLAND}, the value $\alpha =$ 0.801$\pm$0.026 is given for $\beta$-$n$ based on FLUKA simulations for various muon energies. A similar simulation, based on GEANT4, is described in~\cite{Zbiri}, where the resulting value for $\alpha$ is 1.06. To be conservative, we choose $\alpha =$ 0.84$\pm$0.22, ranging from the lower bound of~\cite{Hagner} to the result of~\cite{Zbiri}. As cosmogenic isotope production scales with the number of target carbon atoms, rates are normalized to the total number of carbon atoms in the target scintillator. Results for scaled $\beta$-$n$ rates are shown in Table~\ref{tab:scaleLi} and compared to the measured values \cite{dc2012,DB,RENO}, normalized to the muon flux at the DC far site, in Fig.~\ref{fig:scaleLi}. The DCII result is comparable to the Daya Bay value, where a veto of 1 s following showering muons has been applied, while the DCI result is comparable to the RENO one, with no specific $\beta$-$n$ background reduction. No correction has been applied for the efficiency of the showering-$\mu$ veto. Within the uncertainty of the measured $\beta$-$n$ rate, the scaled results agree. \begin{table}[htb] \caption{$\beta$-$n$ decay rates measured at DC far and scaled to other depths.\label{tab:scaleLi}} \begin{tabular}{lccc} \hline\hline & & \multicolumn{2}{c}{\bf $\beta$-$n$-decay rate}\\ Detector & depth & \multicolumn{2}{c}{\bf (day $\cdot 10^{30}$C)$^{-1}$} \\ & (m.w.e.) & DCI & DCII \\ \hline RENO near & 120 & 18 $\pm$ 10 & 11.7 $\pm$ 8.9 \\ DC near & 150 & 13.5 $\pm$ 7.9 & 8.7 $\pm$ 6.7 \\ Daya Bay EH1 & 250 & 6.5 $\pm$ 3.5 & 4.2 $\pm$ 3.1 \\ Daya Bay EH2 & 265 & 5.9 $\pm$ 3.2 & 3.8 $\pm$ 2.8 \\ DC far & 300 & 4.8 $\pm$ 2.6 & 3.1 $\pm$ 2.3 \\ RENO far & 450 & 2.4 $\pm$ 1.3 & 1.5 $\pm$ 1.2 \\ Daya Bay EH3 & 860 & 0.63 $\pm$ 0.36 & 0.41 $\pm$ 0.31 \\ \hline\hline \end{tabular} \end{table} \begin{figure}[htb] \includegraphics[width=\linewidth]{li9_scaling_plot_v4.pdf} \caption{ Scaling of DC $\beta$-$n$ decay rates and comparison with quoted values. Results were scaled by number of carbon atoms and normalized to muon flux at DC far site. 
Solid lines and shaded regions correspond to rate and scaling uncertainties in reactor-off analysis: DCI (red solid line) and open data points compare the total $\beta$-$n$ rate, while DCII (blue solid line) and filled data points correspond to analyses with an extended veto following showering muons. \label{fig:scaleLi}} \end{figure} In conclusion, we have reported a direct measurement of the cosmic-ray-induced background in the DC oscillation analysis using 7.53 days of data with both reactors off. The identified candidates are well understood as due to accidentals, $\beta$-$n$-emitting isotopes, cosmic muons producing fast neutrons, and stopped muons that decay. With the same cuts applied as in the Double Chooz reactor-on oscillation analysis \cite{dc2012}, the total background including accidentals, cosmogenic $\beta$-$n$-emitting isotopes, fast neutrons from cosmic muons and stopped-$\mu$ decays is 1.0$\pm$0.4 events/day. The result is consistent with estimations in the DC oscillation analysis. The results have been scaled to depths of interest to the Daya Bay and RENO reactor-based neutrino oscillation experiments. \section*{Acknowledgments} We are grateful to Vitaly Kudryavtsev for providing and supporting the MUSIC and MUSUN muon transport codes. We thank the French electricity company EDF; the European fund FEDER; the R\'egion de Champagne Ardenne; the D\'epartement des Ardennes; and the Communaut\'e des Communes Ardennes Rives de Meuse. We acknowledge the support of the CEA, CNRS/IN2P3, CCIN2P3 and LabEx UnivEarthS in France; the Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT) and the Japan Society for the Promotion of Science (JSPS); the Department of Energy and the National Science Foundation of the United States; the Ministerio de Ciencia e Innovaci\'on (MICINN) of Spain; the Max Planck Gesellschaft, the Deutsche Forschungsgemeinschaft DFG (SBH WI 2152), the Transregional Collaborative Research Center TR27, the Excellence Cluster ``Origin and Structure of the Universe,'' the Maier-Leibnitz-Laboratorium Garching and the SFB676 in Germany; the Russian Academy of Science, the Kurchatov Institute and RFBR (the Russian Foundation for Basic Research); and the Brazilian Ministry of Science, Technology and Innovation (MCTI), the Financiadora de Estudos e Projetos (FINEP), the Conselho Nacional de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico (CNPq), the S\~ao Paulo Research Foundation (FAPESP), the Brazilian Network for High Energy Physics (RENAFAE) in Brazil.
{ "timestamp": "2012-10-23T02:01:23", "yymm": "1210", "arxiv_id": "1210.3748", "language": "en", "url": "https://arxiv.org/abs/1210.3748" }
\section{Introduction} This work falls into a general framework which consists of observing the behavior of patterns and structures that can be formed after instability onset in an evaporating liquid layer. In previous work, we studied theoretical instability thresholds in pure fluids [1,2] and in binary mixtures [3,4]. What is of interest here is a two-dimensional numerical simulation study of the transient temperature and fluid motion in a liquid evaporating into a nitrogen gas flow. The chosen liquid is HFE7100 (an electronic liquid produced by 3M). The numerical (CFD) simulations are performed using the software ComSol (finite element method). The evaporation causes the instability and the gas flow evacuates the liquid vapor. The setup used for this numerical simulation is represented in Fig. \ref{Scheme} and is inspired by the CIMEX experimental setup of ESA [5]. \begin{figure} \resizebox{0.8\columnwidth}{!}{\includegraphics{Scheme} } \caption{A scheme of the system} \label{Scheme} \end{figure} The gas flow is maintained at 100 ml/min in a channel of 3 mm height, while three different liquid thicknesses are considered: 2, 4 and 8 mm. The width of the whole setup is 50 mm. The cover between the liquid and gas channel is $200~\mu$m thick. At the middle of this cover, there is an opening with a width of 10.6 mm, allowing contact between the liquid and gas channel. These items define the geometry used in the numerical software ComSol. The boundaries of the whole system are kept at an ambient temperature and pressure of 298 K and 1 atm, respectively, except for the gas channel outlet where only the ambient pressure is imposed. Also, the whole system is surrounded by walls except for the gas flow inlet and outlet. The interface is kept at a constant height, since in the ESA experimental setup the liquid is to be replenished at the same rate as the evaporation rate. At the interface, flux conservation is maintained and a tangential stress balance is considered. Furthermore, a no-slip condition is assumed at the interface. The assumption of local thermodynamic equilibrium at the interface allows us to use Raoult's law, in which the temperature dependence of the saturation pressure is determined via the Clausius-Clapeyron relation. The results present the temperature in the liquid and gas phase as well as the fluid motion in the liquid (caused by the evolution of the temperature via surface-tension and buoyancy effects) by means of streamlines as a function of time. The real total elapsed time is 10 seconds. Two videos are shown, presenting the same results, at the following URLs: \begin{enumerate} \item \href{DOI}{Video 1 - High resolution} \item \href{DOI}{Video 2 - Low resolution} \end{enumerate} Note that in the videos the inner streamlines represent the highest velocity values. The red color represents the highest observed temperature (that of the ambient one), 298 K. The blue color represents the lowest observed temperature, around 285 K. \section{Discussion} From the results in the videos we can observe that, at first, several small rolls form near the surface, caused by the surface-tension effect, as Fig. \ref{comparisont1} shows for the three liquid layer thicknesses at time $t=1 s$.
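As a rough illustration of the interface condition described above (Raoult's law with the saturation pressure from the Clausius-Clapeyron relation), the following sketch estimates the equilibrium vapor mole fraction at the interface; the molar latent heat and normal boiling point used for HFE7100 here are nominal values assumed for illustration only, not the simulation inputs. \begin{verbatim}
import numpy as np

R_GAS = 8.314                  # J/(mol K)
L_VAP = 2.8e4                  # J/mol, assumed molar latent heat
T_B, P_ATM = 334.0, 101325.0   # K, Pa: assumed boiling point, 1 atm

def p_sat(T):
    """Clausius-Clapeyron, integrated from the normal boiling point."""
    return P_ATM * np.exp(-L_VAP / R_GAS * (1.0 / T - 1.0 / T_B))

def vapor_fraction(T, x_liq=1.0, p_tot=P_ATM):
    """Raoult's law at the interface (pure liquid: x_liq = 1)."""
    return x_liq * p_sat(T) / p_tot

print(vapor_fraction(298.0))   # ~0.3 at ambient temperature
\end{verbatim}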
\begin{figure} \resizebox{0.3\columnwidth}{!}{\includegraphics{t1and2mm} } \quad\resizebox{0.3\columnwidth}{!}{\includegraphics{t1and4mm} } \quad\resizebox{0.3\columnwidth}{!}{\includegraphics{t1and8mm} } \caption{The temperature and liquid flow pattern at time $t=1 s$ for liquid thicknesses of 2 mm (left), 4 mm (middle) and 8 mm (right) and a gas flow of 100 ml/min} \label{comparisont1} \end{figure} \noindent Due to buoyancy and as time proceeds, the rolls grow towards the bottom of the liquid layer. The rolls then also grow in the horizontal direction, merging with each other until a steady configuration is obtained. For a higher liquid layer thickness, the merging occurs earlier and fewer rolls are left. Furthermore, the temperature gradients decrease as the liquid thickness increases, which is caused by the higher mixing efficiency when the liquid is less confined. Moreover, the rolls extend more horizontally under the cover towards the side walls as the liquid layer thickness increases. For smaller liquid layer thicknesses, the rolls reach the bottom where a constant temperature of 298 K is maintained. Therefore the rolls stay concentrated close to the interface. As the liquid layer thickness increases, the rolls have more time to increase in size towards the side walls before they reach the bottom of the liquid layer. Fig. \ref{comparisont10} shows this at time $t= 10 s$. \begin{figure} \resizebox{0.3\columnwidth}{!}{\includegraphics{t10and2mm} } \quad\resizebox{0.3\columnwidth}{!}{\includegraphics{t10and4mm} } \quad\resizebox{0.3\columnwidth}{!}{\includegraphics{t10and8mm} } \caption{The temperature and liquid flow pattern at time $t=10 s$ for liquid thicknesses of 2 mm (left), 4 mm (middle) and 8 mm (right) and a gas flow of 100 ml/min} \label{comparisont10} \end{figure} \noindent This work yields valuable information about the supercritical instability behavior of an evaporating liquid and the qualitative influence of its confinement by means of fluid dynamics. \section{Acknowledgments} The authors gratefully acknowledge financial support of BelSPo and ESA. \section{References} [1] B. Haut and P. Colinet, J. Colloid Interface Sci., 285: 296-305, 2005. [2] F. Chauvet, S. Dehaeck and P. Colinet, Europhys. Lett., 99: 34001, 2012. [3] H. Machrafi, A. Rednikov, P. Colinet, P.C. Dauby, J. Colloid Interface Sci., 349: 331-353, 2010. [4] H. Machrafi, A. Rednikov, P. Colinet, P.C. Dauby, Eur. Phys. J., 192: 71-81, 2011. [5] ESA, \href{http://www.esa.int/SPECIALS/HSF_Research/SEMLVK0YDUF_0.html}{CIMEX experimental setup}, accessed 12 October 2012 \end{document}
{ "timestamp": "2012-10-16T02:02:08", "yymm": "1210", "arxiv_id": "1210.3728", "language": "en", "url": "https://arxiv.org/abs/1210.3728" }
\section{Introduction} In recent years considerable progress has been made in understanding the evolutionary sequence of planetary nebulae (PNe). The evolution of the photoionised nebula needs to be understood with regard to the processes leading to its ejection, mass/density relation, chemical composition and the post-AGB evolution of the central star. The central star in particular is the driving force, both ejecting the nebula and then releasing fast winds, driven by radiation pressure, which compress and accelerate the pre-ejected material, creating thin, ionised shells. Since a strong link has been observationally established between the parameters of the central star and those of the surrounding nebula (e.g.~\cite{DM87, DM88}, \cite{DM90}, \cite{SVK87}, \cite{S89}), it follows that certain parameters of the central star can be determined indirectly by measuring key emission lines in the nebula. This is especially useful in the LMC where the central star cannot be directly observed. Over the past couple of years we have used both the UKST H$\alpha$ and short red maps of the central 25deg$^{2}$ region of the LMC to uncover over 460 candidate PNe. These were labeled as `true', `likely' and `possible' depending on the quality of images and confirmatory spectra obtained. To these were added the 169 PNe that were previously catalogued in that area. Spectroscopically confirmed results including calibrated fluxes, luminosity functions and radial velocities were published in~\cite{RP06a, RP06b, RP10a}. I have now extended our survey to the outer regions of the LMC mainly using the \OIII, \SII~and H$\alpha$~images provided by the Magellanic Cloud Emission Line Survey (MCELS). From the 1,000 or so candidates selected for spectroscopic follow-up, I identified 110 newly discovered and 101 previously known PNe. The complete sample, comprising 749 LMC PNe spanning the entire galaxy, has the advantage of being at a near common, known distance (49.2~kpc,~\cite{RP10a}) with low reddening, yet close enough to be studied in detail. It is currently the most complete PN sample in existence for any galaxy~\citep{R12}. The objective of this preliminary work is to compare the temperature of the central stars to the excitation and expansion velocity of the nebulae. This allows me to investigate the evolution of both the nebula and central star as it evolves into a white dwarf. \section{Observational data for the LMC PNe} Follow-up spectroscopy was mainly performed on the AAT using AAOmega, which comprises 400 fibres positioned robotically across a 2-degree field of view. Three nights of observations in February 2010 plus three field observations in February 2012 provided coverage of the most concentrated outer areas. For more extended outer areas of the LMC where the density of candidates was too low for AAOmega I used 6dF on the UK Schmidt telescope. This instrument operates essentially the same way but covers a larger, 6-degree area of the sky while using only 150 fibres. Flux calibration was conducted using the method described in~\cite{RP10a} where data counts are calibrated to fluxes from HST observations for the same objects. This method has proved very reliable and allows the whole dataset to be homogeneously calibrated. Additional spectroscopy for PNe in the inner main bar regions was obtained using FLAMES on the VLT, the 1.9m telescope at the South African Astronomical Observatory and the 2.3m telescope at Siding Spring Observatory.
While the long-slit spectra were reduced using standard IRAF tasks, the FLAMES multi-fibre data were flux calibrated using the method described for AAOmega and 6dF data~\citep{RP10a}. \section{PN central star temperatures} Without the ability to individually pinpoint and observe the central stars of LMC PNe, I use photoionisation models \citep{DJ92, RP10b} that demonstrate that for optically thick PNe in the Magellanic Clouds, the excitation class parameter is related to stellar temperature. The equation to estimate low excitation classes is given by: \begin{equation} 0.45\left(\frac{F_{[OIII]\lambda5007}}{F_{\mathrm{H}\beta}}\right),~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~0.0\,<\,E\,<\,5.0 \end{equation} while the high excitation PNe are estimated by \begin{equation} 5.54\left[\frac{F_\mathrm{He\textrm{\sc ii}\lambda4686}}{F_{\mathrm{H}\beta}} + \log_{10}\left(\frac{F_{\mathrm{[OIII]\lambda{4959} + {5007}}}}{ F_{\mathrm{H}\beta}}\right)\right],~~~5.0\,\leq\,E\,<\,12. \end{equation} Using this definition, a transformation from excitation class to stellar effective temperature ($\textit{T}$$_\textmd{eff}$) was made using: \begin{equation} \textrm{log}~T_{\textmd{eff}} = 4.439 + [0.1174 \pm 0.0025] E - [0.00172 \pm 0.00037] E^{2} \end{equation} which is based on the transformation given in~\cite{DJ92} but adjusted to match the Zanstra temperatures published by~\cite{VSS03, VSS07} (see~\cite{RP10b}). For average abundance levels within the LMC, this equation provides a useful transformation to stellar temperatures. \cite{DJ92} also expected this relation to work well, having tested it using 66 of the brightest PNe in the LMC, but predicted the relationship would break down for low excitation PNe. The reason given for this was the strong dependency of the \OIII/H$\beta$ ratio on metallicity as well as upon stellar temperature. In order to correct for any over-dependency on the metallicity introduced by using the \OIII/H$\beta$ ratio,~\cite{DJ92} constructed a grid, based on covering a range of stellar temperatures and metallicities using the generalised modeling code MAPPINGS \citep{BDT85}. They use an ionisation parameter defined as $\textit{Q}$ = $\textsl{N}$$_{Ly-c}$/4$\pi$$\langle$r$^{2}$$\rangle$$\textsl{N}$$_{H}$ where $\textsl{N}$$_{Ly-c}$ is the number of Lyman continuum photons emitted by the central star, $\langle$r$^{2}$$\rangle$ is the mean radius of the ionised nebula and $\textsl{N}$$_{H}$ is the nebula's hydrogen particle density. By adopting a high value for $\textit{Q}$ (2 $\times$ 10$^{8}$ cm s$^{-1}$), they simulate stellar luminosity and nebula gas pressure typical of the brighter PNe in the LMC as well as those in the Galactic Bulge. The resulting grids, encompassing abundances from 0.1 to 2.0 times solar, each with a set of temperatures between 35,000 and 140,000\,K, cover the maximum luminosity range for PNe in both the H$\beta$ and \OIII$\lambda$5007 lines. Importantly, although these grids have been available for 20 years, they have not been tested against medium to faint and evolved PNe in the LMC, typical of those that would be found in the 0.0\,$<$\,E\,$<$\,5.0 excitation bracket. With our improvements to the original formulas given for excitation class and temperature, I need to investigate whether our new temperature estimates agree with the temperatures found from the modeled grid of~\cite{DJ92}.
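Equations 1--3 are straightforward to evaluate. The sketch below (illustrative only; input fluxes are hypothetical and normalised to $F(\mathrm{H}\beta)=100$) computes the excitation class and the corresponding $\textit{T}$$_\textmd{eff}$, switching to equation 2 once the low-excitation estimate reaches $E=5$: \begin{verbatim}
import numpy as np

def excitation_class(f5007, f4959, f4686, f_hbeta=100.0):
    E = 0.45 * (f5007 / f_hbeta)                       # equation 1
    if E >= 5.0:                                       # equation 2
        E = 5.54 * (f4686 / f_hbeta
                    + np.log10((f4959 + f5007) / f_hbeta))
    return E

def t_eff(E):
    """Equation 3: log T_eff = 4.439 + 0.1174 E - 0.00172 E^2."""
    return 10.0 ** (4.439 + 0.1174 * E - 0.00172 * E ** 2)

E = excitation_class(f5007=1200.0, f4959=400.0, f4686=40.0)
print(E, t_eff(E))   # E ~ 8.9, T_eff ~ 2.2e5 K
\end{verbatim} Note that $E=0$ corresponds to $\textit{T}$$_\textmd{eff} \approx 27{,}500$\,K, consistent with the lower end of the temperature distribution reported below.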
\subsection{Results} I compared central star temperatures for high, medium and low excitation PNe, derived using our formulae (equations 1, 2 \& 3), with central star temperatures acquired using the modeled grid of~\cite{DJ92}. The grid relies on the \OIII$\lambda$5007/H$\beta$ ratio and the electron temperature ($\textit{T}$$_\textmd{e}$) in order to produce an estimate of log (Z) and ($\textit{T}$$_\textmd{eff}$). For low excitation PNe, the similar reliance on the \OIII$\lambda$5007/H$\beta$ ratio means that the only difference will be introduced by $\textit{T}$$_\textmd{e}$. For low excitation PNe I find an exponential fit between temperatures derived directly from the excitation class (equation 3) and those derived from the grid of~\cite{DJ92}. In order to show this relation, a curve has been fitted to the data (black circles) in Figure~\ref{Figure1}. For comparison I also show the results for medium to high excitation PNe (red-filled boxes) and low excitation PNe which do not fit the grid (green triangles). \begin{figure} \begin{center} \includegraphics[width=1.12\textwidth]{REID_talk_Fig1.EPS}\\ \caption{A comparison of stellar effective temperatures found from a direct reliance on excitation class and those found for the same PNe using the modeled grid of~\cite{DJ92}. Where low excitation PNe have electron temperatures below 12,000\,K there is an exponential correlation with 95\% confidence (shown curve).} \label{Figure1} \end{center} \end{figure} High excitation PNe do not correlate with central star temperatures derived using excitation class (equation 2). Clearly, the reason is that high excitation PNe require the use of the HeII$\lambda$4686 line in order to obtain $\textit{T}$$_\textmd{eff}$ estimates. The \OIII$\lambda$5007/H$\beta$ ratio and $\textit{T}$$_\textmd{e}$ alone do not measure sufficient levels of excitation to permit the estimation of high central star temperatures. This result agrees with the warning given by~\cite{DJ92} in which they find that the grid is not very useful for determining stellar temperatures where $\textit{T}$$_\textmd{eff}$ $>$ 90,000\,K and log [Z] $<$ -0.5. Although an exponential correlation is found for most low excitation PNe, there is a subgroup that returns higher $\textit{T}$$_\textmd{eff}$. Using the grid, low excitation PNe with $\textit{T}$$_\textmd{e}$ higher than 12,000\,K and log (Z) less than -1.0 have increasingly higher $\textit{T}$$_\textmd{eff}$ estimates than those found using equations 1 and 3. For this reason I suggest that the grid is not useful for estimating $\textit{T}$$_\textmd{eff}$ where ($\textit{T}$$_\textmd{e}$) are greater than 12,000\,K, even though the grid allows the estimation of $\textit{T}$$_\textmd{eff}$ using $\textit{T}$$_\textmd{e}$ up to 15,000\,K. The exponential curve for those low excitation PNe with $\textit{T}$$_\textmd{e}$ below 12,000\,K follows the form: \begin{equation} T_{\textmd{eff} [grid]} = 72.971 \times~T_{\textmd{eff} [E]}~^{0.6001} \end{equation} where $\textit{T}$$_{\textmd{eff} [grid]}$ is the stellar effective temperature found from the grid and $\textit{T}$$_{\textmd{eff} [E]}$ is the stellar effective temperature found from equations 1 and 3 for low excitation PNe. At low $\textit{T}$$_{\textmd{eff}}$, the grid and excitation class produce near equivalent results but as $\textit{T}$$_{\textmd{eff}}$ increases, $\textit{T}$$_\textmd{e}$ has the effect of exponentially decreasing $\textit{T}$$_{\textmd{eff}}$ estimates produced by the model.
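For reference, the crossover implied by equation 4 is easy to locate: setting $T_{\textmd{eff} [grid]} = T_{\textmd{eff} [E]}$ (with both temperatures in kelvin, consistent with the quoted ranges) gives $T \approx 46{,}000$\,K, below which the two scales nearly coincide. A short illustrative evaluation: \begin{verbatim}
t_grid = lambda t_E: 72.971 * t_E ** 0.6001   # equation 4, both in K
for t_E in (30e3, 46e3, 100e3):
    print(t_E, t_grid(t_E))
# 30 kK -> ~35 kK, 46 kK -> ~46 kK, 100 kK -> ~73 kK
\end{verbatim}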
Our previous comparisons of equations 1, 2 \& 3 with $T_{\mathrm{eff}}$ estimates using the Zanstra method \citep{RP10b} show a good correlation where the nebulae are optically thick. In this case there is an increasing shortfall in the grid temperature estimates when they are compared to Zanstra and excitation (equation 3) temperature estimates. Furthermore, with $T_{\mathrm{e}}$ greater than 12,000\,K the grid produces inflated $T_{\mathrm{eff}}$ for a number of the low excitation PNe. This is presumably the result of an over-correction for the effect of metallicity within the central star. Since there is little correlation between $T_{\mathrm{e}}$ and any method used to produce a $T_{\mathrm{eff}}$ estimate, I have decided to use equations 1, 2 \& 3 alone to estimate my central star effective temperatures for this presentation. My central star effective temperatures are shown in Figure~\ref{Figure2}, where the temperatures range from 28,000\,K to 291,000\,K with a mean of 90,300\,K. \begin{figure} \begin{center} \includegraphics[width=0.79\textwidth]{REID_talk_Fig2.EPS}\\ \caption{Our stellar effective temperature estimates found from a direct reliance on excitation class as derived from equations 1, 2 \& 3. The largest number of central stars fall within the 50,000\,K bin, encompassing 37,500\,K $<$ $T_{\mathrm{eff}}$ $<$ 62,500\,K. } \label{Figure2} \end{center} \end{figure} Since there is a correlation between excitation class and $T_{\mathrm{eff}}$, it follows that there is also a moderate correlation between $T_{\mathrm{eff}}$ and the expansion velocity of the surrounding nebula. In Figure~\ref{Figure3} I show the derived expansion velocity of the nebula versus the $T_{\mathrm{eff}}$ from equation 3. This correlation was first discovered by~\cite{DF85} and later improved using a two-parameter fit which included the excitation class and the H$\beta$ flux \citep{DM90}. The equation for estimating the expansion velocity is given as equation 3.2 in~\cite{DM90}. With a strong relationship between excitation class, the H$\beta$ flux and the Zanstra temperature of the central star~\citep{M84}, the position of a PN on plots such as Figure~\ref{Figure3}, representing the relationship between the nebula expansion velocity and $T_{\mathrm{eff}}$, will depend principally on the optical density, the mass of the nebula and the intrinsic properties of the central star. Since the most massive stars achieve the highest temperatures, the excitation class should also follow the mass of the star. Massive central stars fade rapidly (as seen in the brightest 4 magnitudes of the PNLF~\citep{RP10a}), so when low H$\beta$ fluxes are associated with high-excitation nebulae we can confidently assume the presence of a massive central star. Such stars drive high expansion velocities in the nebula, delivering high energy and ionising the surrounding AGB wind more efficiently. \begin{figure} \begin{center} \includegraphics[width=0.82\textwidth]{REID_talk_Fig3.EPS}\\ \caption{A comparison of nebula expansion velocities with stellar effective temperatures found from a direct reliance on excitation class. Points to the lower left of the plot, below the main group, are expected to be optically thin nebulae. } \label{Figure3} \end{center} \end{figure}
{ "timestamp": "2012-10-17T02:11:20", "yymm": "1210", "arxiv_id": "1210.3750", "language": "en", "url": "https://arxiv.org/abs/1210.3750" }
\section{Introduction} Trapped atomic ions are one of the most promising systems yet proposed for large-scale quantum information processing (QIP) and quantum simulation~\cite{Ladd2010}. Trapped ions benefit from long coherence times and have been used to perform high-fidelity single- and two-qubit gates~\cite{H.Haeffner2008, R.Blatt2008, Wineland2011, JulioT.Barreiro12011, R.Blatt2012}. As systems grow to larger numbers of ions, however, ion traps will require new features to facilitate experiments. Most notably, existing proposals for constructing a large ion-trap quantum computer call for junction elements to manipulate ion positions within the trap. For example, one proposal~\cite{D.Kielpinski2002} would arrange many ion traps in a two-dimensional array, with junctions shuttling ions between separated computation and storage zones. A second proposal~\cite{S.Korenblit2012, Lin2009} envisions co-trapping two species of ions in long chains, using one species for QIP and the second for sympathetic cooling. Since such dual-species chains are cooled most efficiently when ions are ordered in a particular way~\cite{Duan2011}, junctions would be required to establish and preserve the correct sequence of ions. Junction ion traps have been previously demonstrated by several groups, beginning with multi-substrate T-~\cite{Hensinger2006} and X-junctions~\cite{Blakestad2009}, and reliable transport with sub-phonon motional heating has been demonstrated in the X-junction~\cite{Blakestad2011}. However, such multi-substrate construction is not amenable to scaling to larger systems, making these traps impractical for large-scale QIP. Fortunately, new generations of microfabricated ion traps -- particularly surface-electrode traps -- provide an attractive alternative~\cite{Hughes2011, J.Chiaverini2005}. Microfabricated traps may be built using scalable methods, and their small feature sizes permit electrode designs that offer unprecedented control over trapped-ion positions. Scalable microfabricated junction traps have recently been demonstrated~\cite{Amini2010, Moehring2011}, but transport through these junctions in the absence of Doppler cooling has not been systematically studied, so it is unknown whether the heating in these traps is low enough to support quantum information processing. Here we report the design, fabrication, and characterization of a surface-electrode X-junction ion trap built with standard VLSI-compatible processes and suitable for use in a large-scale quantum information processor. In contrast to previous work with microfabricated junction traps, we perform a detailed study of ion loss induced by transporting ions between legs of the junction, both with and without Doppler cooling. We characterize ion transport through the junction \emph{with cooling} by performing 10$^6$ shuttling operations, determining statistical bounds on the transport fidelity. We find motional heating to be sufficiently low that ions may be shuttled through the junction more than sixty-five times without Doppler cooling and without loss. \section{Design and fabrication} The trap's basic design is that of a symmetric five-wire geometry~\cite{J.Chiaverini2005} with segmented outer control electrodes (figure~\ref{fig:MetalLayers}a). Its internal layer structure (figure~\ref{fig:MetalLayers}b) is similar to that described in~\cite{S.CharlesDoret2012}: three layers of patterned aluminum insulated from one another by silicon dioxide.
The bottom aluminum layer (M1) is grounded and prevents RF electric fields from penetrating into the RF-lossy silicon substrate. The middle layer (M2) is patterned with control and RF electrodes as well as on-chip capacitors which reduce RF pickup on the control electrodes. The top metal layer (M3) is grounded and defines the boundary of the control electrodes. This simplifies the modeling of trapping potentials and also helps to protect trap structures in M2 from physical damage~\cite{S.CharlesDoret2012}. Seventy-eight control electrodes are arranged outside the RF electrodes in 50 $\mu$m wide rails with a 54 $\mu$m pitch (figure~\ref{fig:MetalLayers}a). The control electrodes in the corners of the junction are slightly larger to accommodate electrical leads. All gaps between electrodes are 4 $\mu$m wide. The electrodes between the RF rails are grounded by vias (located outside of the active region of the trap) to the chip ground plane below (M1). A 50 $\mu$m by 50 $\mu$m loading slot is etched through one of the center electrodes so that neutral atoms, supplied from an oven beneath the trap during trap loading, can reach the trapping region without electrically shorting the trap electrodes (figure~\ref{fig:MetalLayers}a,c). RF electrode dimensions in the linear sections are chosen to establish the pseudopotential minimum at a height of 60 $\mu$m above the surface. The rails are 40 $\mu$m wide and are separated by 80 $\mu$m (inner edge to inner edge). \begin{figure} \center \includegraphics[scale=1]{MetalLayers2.eps} \caption{(a) View of the trap from above, showing the control electrodes (green), the RF electrodes (red), and the top-level ground and grounded center electrodes (blue). (b) Cross section of the trap (not to scale). Metal (aluminum) layers are denoted M1 (bottom ground plane), M2 (RF and control electrodes, filter capacitors, and wire bond pads), and M3 (top-level ground plane). (c) Scanning electron micrographs (SEMs) of the completed trap, including a closeup of the junction center.} \label{fig:MetalLayers} \end{figure} A junction naively assembled from the intersection of two linear sections does not provide adequate three-dimensional confinement to allow controlled transport~\cite{Wesenberg2008}. Therefore, we alter the RF electrode shape near the junction to increase trapping strength. Working from an initial trial geometry, we then optimize the shape of the rails to reduce the predicted ion heating rate during transport. This is accomplished by placing seven control points along the inside edge of the RF electrode (figure~\ref{fig:RF_ElectrodeDesign}), giving seven degrees of freedom. The locations of these points are modified with a genetic algorithm that employs an objective fitness function, \begin{equation} F = \int_0^{l_{max}} \left(\frac{\partial |\vec{E}_0\cdot \hat{l}|^2} {\partial l} \right) dl, \label{eqs:FitnessFunction} \end{equation} where the electric field due to application of the RF trap drive has the form \begin{equation} \vec{E}_{RF}\left(x,y,z,t\right)=\vec{E_0}\left(x,y,z\right)\cos(\Omega_{RF}t). \label{eqs:ElectricField} \end{equation} \begin{figure}[p] \center \includegraphics[scale=1]{RF_ElectrodeDesign.eps} \caption{Control points for junction optimization.
Each point is defined by a distance from the outer rail edge along a pre-defined direction, indicated here by the red arrows.} \label{fig:RF_ElectrodeDesign} \end{figure} \begin{figure}[p] \center \includegraphics[scale=1]{HeatingRate2.eps} \caption{The ratio of the heating rate $\dot{\overline{n}}$ (in quanta/s) to voltage noise spectral density $S_{V_{N}}(\Omega_{RF}-\omega_{z})$, in units matching figure 8 of~\cite{Blakestad2011} (plotted in red). The shaded blue curve shows the trapping pseudopotential. Both quantities are plotted versus the distance from the junction center along z (figure~\ref{fig:MetalLayers}a).} \label{fig:HeatingRate} \end{figure} This fitness function is a measure of intrinsic secular heating along the z direction (figure~\ref{fig:HeatingRate}) during transport~\cite{Blakestad2009} due to spectral noise on the RF potentials. The path $l$ follows the minimum of the pseudopotential as the ion is translated away from the center of the junction along one of the legs. A candidate design is rejected if the pseudopotential is anticonfining in the direction perpendicular to the trap at any point along this path. Each trial geometry generated by the genetic algorithm is evaluated by calculating the field $\vec{E}_0$ with an in-house boundary element method (BEM) electrostatics solver, similar to those described in~\cite{Blakestad2011, KilianSinger2010}. \section{Waveforms} Ions are shuttled between different regions of the trap by applying transport waveforms~\cite{S.CharlesDoret2012}, which are smoothly varying sets of potentials applied to the control electrodes that produce a traveling harmonic well. In the linear regions of the trap, waveforms are designed for an axial secular frequency of $\omega_z = 2\pi\times 1$ MHz (for $^{40}$Ca$^+$). A 12.5$^\circ$ rotation of the radial secular axes from the trap normal ensures adequate Doppler cooling of all radial modes via lasers aligned parallel to the trap surface. Closer to the junction, the waveforms are designed to create a harmonic trapping potential with non-degenerate mode frequencies while minimizing sensitivity of the ion position to stray electric fields. \begin{figure} \center \includegraphics[scale=1]{ModeHeight.eps} \caption{Calculated secular mode frequencies (solid) and ion height (dashed) for a waveform (-z$\rightarrow$+x or -x) designed for transport from one leg of the junction to the midpoint between the two legs. The potential forces the ion to circumnavigate the junction center at a 15 $\mu$m radius (see text).} \label{fig:ModeHeight} \end{figure} We can express the harmonic confinement in quadratic form, \begin{equation} \Phi = \frac12 \left(\begin{array}{ccc} x & y & z \end{array}\right) (M+Q) \left(\begin{array}{c} x\\ y\\ z \end{array}\right), \label{eqs:HarmonicConfinement} \end{equation} where $\Phi$ is the trapping potential and M and Q are $3\times3$ matrices describing the control potentials and RF pseudopotential, respectively. Net confinement of the ion requires Tr(M+Q)$>$0. Poisson's equation enforces Tr(M)=0, hence the ion may only be trapped where Tr(Q)$>$0; control potentials can only redistribute the confinement provided by the RF pseudopotential and cannot increase confinement simultaneously in all three directions. Unfortunately, the RF pseudopotential confinement weakens significantly near the junction. 
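As a concrete numerical illustration of this trace bookkeeping, the following minimal sketch diagonalises $M+Q$ and converts the eigenvalues to secular frequencies for a singly ionised $^{40}$Ca$^{+}$ ion. The curvature values (in V/m$^{2}$) are illustrative stand-ins, not fitted values for this trap.

\begin{verbatim}
# Sketch: secular mode frequencies from the curvature matrices M and Q.
# Illustrative values only; curvatures in V/m^2.
import numpy as np

e = 1.602e-19                  # charge of 40Ca+ (C)
m = 40 * 1.661e-27             # mass of 40Ca+ (kg)

M = np.diag([1.0e7, 1.0e7, -2.0e7])   # control curvatures, Tr(M) = 0
Q = np.diag([1.0e7, 1.0e7,  4.0e7])   # pseudopotential curvatures, Tr(Q) > 0

A = M + Q
assert np.trace(A) > 0         # necessary for net 3D confinement
eigvals, axes = np.linalg.eigh(A)     # secular mode curvatures and axes
freqs = np.sqrt(e * eigvals / m) / (2 * np.pi)
print(freqs)                   # ~1.1 MHz per mode in this toy example
\end{verbatim}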
Trapping the ion in this region thus requires using the control electrodes to share the weak pseudopotential confinement among all three directions (figure~\ref{fig:ModeHeight}), leading to small well-depths. We partially compensate for this by deliberately pushing the ion approximately 10 $\mu$m closer to the trap surface (figure~\ref{fig:ModeHeight}), increasing Tr(Q) at the expense of moving the ion away from the pseudopotential null and causing excess micromotion. As such, the path actually followed by the ion is not the same as that followed in the junction optimization, likely causing the ion to experience more heating from RF noise than originally predicted. To construct transport waveforms, we begin by expanding the harmonic portion of the pseudopotential in spherical harmonics for a series of locations along the desired path, calculating the eigen-axes of the pseudopotential alone. Near the junction center the pseudopotential axes rotate sharply, and the associated eigen-frequencies become non-degenerate. Empirically we find that large control potentials are needed to rotate the secular axes away from the pseudopotential axes. We therefore constrain the secular axes to closely overlap with the eigen-axes defined by the RF pseudopotential, allowing small deviations since doing so can increase the trap depth while staying within our control potential limits. For each ion location we specify the following criteria: the height of the ion above the trap surface, any deviation in secular axes from the pseudopotential axes, one or more of the secular frequencies, and bounds of $\pm$8 V for the control potentials. We use a simplex search over the space of control potentials to minimize a weighted, least-squares error function based on these criteria. After calculating potential sets in this way for several locations along the desired path, we determine potentials for intermediate points by interpolating at 2 $\mu$m intervals. Finally, we smooth the results to remove high spatial frequencies, as we have found that these do not improve the waveform but may contribute to heating during transport due to rapid swings in the control potentials. \begin{figure} \center \includegraphics[scale=1]{ModePath2.eps} \caption{Ion path, pseudopotential isocontour, and secular mode axes for transport between two legs of the junction. (a) Top view, indicating the midpoint (M) of the transport waveform index as the ion circumnavigates the center and (b) a side view.} \label{fig:ModePath} \end{figure} Experimental characterization of the junction reveals a potential barrier at the junction center which is not predicted by our electrostatic models. This bump makes the confinement in the center of the junction sufficiently weak that stable transport directly through the center is difficult. We believe the barrier originates from incomplete etching of the oxide layer from the gaps at the center of the junction (where the gaps lie directly beneath the ion), leaving residual dielectric which can be charged by the Doppler cooling laser~\cite{S.CharlesDoret2012, MHarlander2010, ShannonX.Wang2011}. To make transport more robust, we deliberately avoid a 15 $\mu$m radius around the junction center. We also simulate the behavior of charged dielectric in the gaps by applying a positive potential to the M1 ground plane (figure~\ref{fig:MetalLayers}) in our electrostatic model. 
This model reproduces the two effects we observe near the junction center: repulsion from the center, and barriers across the four RF spurs that project into the junction. These barriers can cause double-wells to form during transport, leading to ballistic motion and associated heating as the ion moves between adjacent junction legs. We adjust the potential applied to M1 in the model ($\sim$ 0.4 V) to roughly match the observed repulsion of the ion from the junction center and also calculate an adjustable correction potential that we may empirically tune to minimize the observed heating of the ion during transport. The resulting calculated ion trajectory is shown in figure~\ref{fig:ModePath}. We estimate that the transport waveform should be robust to stray fields of approximately 50 V/m without compromising the ion confinement. However, as previously characterized traps of similar construction exhibit stray fields of 100 V/m or more~\cite{S.CharlesDoret2012}, we generate additional compensation waveforms to null stray fields in each of the Cartesian directions at every point along the ion's trajectory. These compensation potentials are added to the transport waveform as needed empirically to minimize ion heating during transport. \section{Characterization} We characterize ion lifetime and transport by trapping $^{40}$Ca$^+$ in an apparatus similar to that described in~\cite{S.CharlesDoret2012}. National Instruments PXI-6733 16-bit DAC cards apply the transport waveforms. These cards apply voltage updates at 500 kHz; to reduce noise and associated ion heating, the control potentials are filtered by third-order Butterworth filters (60 kHz cut-off frequency) located just outside the vacuum chamber~\cite{Blakestad2011}. To cool all three modes of the ion in multiple legs of the junction, a Doppler cooling laser propagates at 45$^\circ$ to both the x and z directions. Fluctuations in the power of the fluorescence laser are stabilized to $<$1$\%$. To characterize the junction we measure three figures of merit. First, we measure the lifetime of a stationary trapped ion without Doppler cooling, setting a lower bound on the ion loss rate. We then explore the reliability of our transport waveforms by repeatedly transporting an ion through the junction, both with and without Doppler cooling. This determines the rate of ion loss due to shuttling operations. Finally, we qualitatively explore the motional heating caused during transport by monitoring ion fluorescence as a function of position for truncated round-trip transports through the junction. \subsection{Ion lifetime} \label{Subsection:IonLifetime} The lifetime of a single ion trapped in one of the legs of the junction\footnote{$\Omega_{RF} = 58.55$~MHz, $V_{RF} = 91$~V$_{RMS}$, calculated trap depth = 29~meV (axially limited)} is several hours when continuously Doppler cooled. Without cooling, the single-ion lifetime is approximately five seconds (figure~\ref{fig:Lifetime}), with a strongly non-exponential time dependence similar to that observed elsewhere~\cite{S.CharlesDoret2012}. This measurement is performed by repeatedly blocking the cooling laser for fixed periods of time and observing whether the ion remains trapped. We performed the experiment fifty times for each fixed delay. \begin{figure} \center \includegraphics[scale=1]{Lifetime.eps} \caption{Ion survival fraction as a function of time without Doppler cooling.
Each point is the accumulation of 50 experiments.} \label{fig:Lifetime} \end{figure} \subsection{Junction transport fidelity} \label{Subsection:JunctionTransportFidelity} To characterize the fidelity of shuttling through the junction, we perform 10$^6$ round-trip transports between two legs of the junction (-z$\rightarrow$+x$\rightarrow$-z) (figure~\ref{fig:MetalLayers}), traveling from a point 100~$\mu$m from the junction's center to a point 100~$\mu$m up the adjacent leg, and back. The round-trip is executed in 200 total steps, requiring a time of 400~$\mu$s ($v_{ion}= 1$~m/s). For the first 5$\times$10$^5$ transports we verify, by monitoring ion fluorescence at the initial position with a PMT, that the ion departs its initial position on the -z leg and then returns. We then shift the detection location to monitor the mid-point of the round-trip (in the +x leg) and verify the arrival and subsequent departure of the ion at this location. Due to scatter of the fluorescence laser off the complex topography near the junction, there is a non-negligible overlap between the fluorescence count histograms measured with and without an ion. This limits our detection fidelity, and we can only place lower bounds on the transport reliability. We confirm that the ion is not at an unexpected detection location with a probability of at least $99.8\%$, and that the ion arrives in the expected detection location at least $93\%$ of the time. \subsection{Ion heating during junction transport} \label{Subsection:JunctionHeating} Ion heating during transport can occur for two primary reasons. First, any discontinuous motion, or ``spilling'' of the ion between adjacent potential wells, will heat the ion. Such spilling behavior can occur should stray electric fields be present, as they may create multiple closely-spaced local minima in the weakly-confining transport waveform. Second, the ion's motion can be driven by electrical noise that has frequency components near the trap secular frequencies. To distinguish between these two possibilities we execute a sequence of round-trip transports around the junction center to one of the legs -- for example, along the path -z$\rightarrow$M$\rightarrow$+x (figure~\ref{fig:ModePath}) -- and compare the ion fluorescence before and after transport. To determine the spatial profile of any heating along a given path, the round-trip transport is truncated, with the ion pausing at an intermediate point for 10~$\mu$s before returning directly to the starting location. Ion heating manifests as a reduction in the ion's fluorescence rate due to increased Doppler broadening. By comparing the ratio of fluorescence before and after transport, we produce a map of heating versus turning point location (figure~\ref{fig:TransportHeating}). Any discontinuity in the ion's motion due to spilling between potential wells should lead to a spatial discontinuity in the observed heating. Paths along -z$\rightarrow$M$\rightarrow$+x and +z$\rightarrow$M$\rightarrow$-x (figure~\ref{fig:TransportHeating}a,c) show smooth reductions in ion fluorescence versus truncation point. We infer that heating along these paths is due to noisy trapping potentials exciting the ion's secular motion rather than a discontinuity; the localized drop in figure~\ref{fig:TransportHeating}a likely corresponds to secular mode frequencies moving into resonance with an unknown source of electric-field noise.
In contrast, along the paths -z$\rightarrow$M$\rightarrow$-x and +z$\rightarrow$M$\rightarrow$+x (figure~\ref{fig:TransportHeating}b,d) there is a sharp step in fluorescence, suggesting a discontinuity in the transport waveform where the ion heats suddenly. We believe this discontinuity is due to gradients in the stray fields present, which prevent complete stray-field nulling in all junction legs with the single set of stray-field compensations applied in all four measurements. However, additional tailoring of the transport waveform to fine-tune shuttling between these legs would likely eliminate this heating. \begin{figure}[hbtp] \center \includegraphics[scale=1]{TransportHeating.eps} \caption{Ion heating due to transport as indicated by reduced ion fluorescence relative to a fully Doppler-cooled ion. Transport from (a) -z to the +x leg, (b) -z to -x, (c) +z to -x, and (d) +z to +x. The localized fluorescence drop in (a) likely corresponds to secular mode frequencies moving into resonance with an unknown source of electric-field noise, while the sudden drops in (b) and (d) are likely due to the ion spilling between double wells formed during transport. Each data point is an average of 1000 experiments.} \label{fig:TransportHeating} \end{figure} Another measure of the heating induced by transport is given by the number of times that we can shuttle the ion back and forth through the junction without intermediate Doppler cooling between consecutive transports. By repeating this transport many times we determine the ion survival fraction as a function of the number of round-trip transports (figure~\ref{fig:DarkTransports}). We find that we can consecutively transport the ion through the junction sixty-five times with $>98\%$ reliability. However, the survival fraction decreases sharply after approximately eighty-five round-trips, suggesting that ion loss is dominated by cumulative heating effects that increase the ion's energy beyond the trap depth. This conjecture is supported by the fact that the eighty-five transports take approximately 34 ms, an interval far shorter than the ion lifetime without transport (approx. 5 s, figure~\ref{fig:Lifetime}). We believe this heating is caused by noise on the RF and control potentials and by aliased harmonics of the transport waveform~[13] around the 500~kHz DAC update rate. The effects of these noise sources are exacerbated by the particularly low secular frequencies during the ion transport through the junction. It should be possible to reduce or eliminate such heating by switching to DACs with update rates well above the maximum secular frequency and by improved filtering of the trapping potentials. \begin{figure}[hbtp] \center \includegraphics[scale=1]{DarkTransports.eps} \caption{Ion survival fraction versus number of round-trip transports without cooling. Each data point represents 100 experiments. The fraction of experiments for which the ion returns to the original starting location gives the survival fraction.} \label{fig:DarkTransports} \end{figure} \section{Conclusions} We have designed, fabricated, and characterized a microfabricated X-junction surface-electrode ion trap, demonstrating reliable transport between the junction legs. 10$^6$ round-trip transports can be completed with intermediate cooling. Ion heating while shuttling through the junction is low enough to permit at least sixty-five consecutive round-trip transports without laser cooling, limited by electrical noise on the trapping potentials.
These results imply that an X-junction of similar design could be used to re-order a chain of ions or to shift ions between registers of a future large-scale trapped-ion quantum information processor. \section{Acknowledgments} We would like to thank Kenton R. Brown for his comments on the manuscript. This material is based upon work supported by the Georgia Tech Research Institute and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) under U.S. Army Research Office (ARO) contract W911NF081-0315. All statements of fact, opinion, or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the U.S. Government. \addcontentsline{toc}{section}{References}
{ "timestamp": "2013-02-26T02:05:42", "yymm": "1210", "arxiv_id": "1210.3655", "language": "en", "url": "https://arxiv.org/abs/1210.3655" }
\section{Introduction} Shape plays a key role in our cognitive system: in the perception of shape lies the beginning of concept formation. Artists have implicitly acknowledged the importance of shapes since the dawn of time. Indeed, even though lines do not divide objects from their background in the real world, line drawings are present in much of our earliest recorded art and, remarkably, have remained unchanged through history; see Figure~\ref{fig:lineDrawing}. \begin{figure} \centerline{ \includegraphics[width=.48\columnwidth]{lineDrawing1} \includegraphics[width=.48\columnwidth]{lineDrawing2} } \caption{Lines are used to convey the outer contours of the horses in a very similar way in these drawings, one from 15,000 BC (left: Chinese Horse, paleolithic cave painting at Lascaux, France) and the other from AD 1300 (right: Jen Jen-fa, detail from The Lean Horse and the Fat Horse, Peking Museum, China). Reprinted by permission from Macmillan Ltd: NATURE~\cite{cavanagh05}, copyright 2005.} \label{fig:lineDrawing} \end{figure} Although art may provide clues to understanding shape perception, it tells us little from the formal point of view. Let us begin by defining what a shape is. Phenomenologists~\cite{attneave54} conceive shape as a subset of an image, digital or perceptual, endowed with some qualities permitting its recognition. In this sense, both concepts, shape and recognition, are intrinsically intertwined: one has to define a shape in such a way that its recognition can be performed. Following these lines of thought, gestaltists~\cite{arnheim} regard shape perception as the grasping of structural features found in or imposed upon the stimulus material. The Gestalt school has extensively studied phenomena that unveil and justify this definition~\cite{kanizsa79,wertheimer38}. Formally, shapes can be defined by extracting contours from solid objects. In this context, shapes are represented and analyzed within an infinite-dimensional framework in which a shape is the locus of an infinite number of points~\cite{krim06}. This point of view leads to the active contours formulation~\cite{kass88} or to level-sets methods~\cite{serra83}. Although these shapes can be defined in any number of dimensions, e.g., the contour of a three-dimensional solid object is a surface, we will restrict ourselves to the two-dimensional case, following~Lisani \etal~\cite{lisani03-shape} and Cao \etal~\cite{cao08theory}. We define an image as a function $u: \R^2 \rightarrow \R$, where $u(x)$ represents the gray level or luminance at point $x$. Our first task is to extract the topological information of an image, independently of the unknown contrast change function of the acquisition system. This contrast change function can be modeled as a continuous and increasing function $g$. The observed data of an image $u$ might be any such $g(u)$. This simple argument leads to selecting the level sets~\cite{serra83}, or level lines, as a complete and contrast-invariant image description~\cite{caselles99,caselles10}.
Given an image $u$, the upper level set $\mathcal{X}_{\lambda}$ and the lower level set $\mathcal{X}^{\lambda}$ of level $\lambda$ are subsets of $\R^2$ defined by~\cite{caselles10} \begin{align} \mathcal{X}_{\lambda} &= \lbrace x \in \R^2 \ |\ u(x) \geq \lambda \rbrace \textbf{,} \\ \mathcal{X}^{\lambda} &= \lbrace x \in \R^2 \ |\ u(x) < \lambda \rbrace \textbf{.} \end{align} If the image $u$ is lower (\emph{resp.} upper) semi-continuous, it can be reconstructed from the collection of its upper (\emph{resp.} lower) level sets by using the superposition principle~\cite{matheron75}: \begin{align} u(x) &= \sup \lbrace \lambda\ |\ x \in \mathcal{X}_{\lambda} \rbrace \textbf{,} \\ u(x) &= \inf \lbrace \lambda\ |\ x \in \mathcal{X}^{\lambda} \rbrace \textbf{.} \end{align} We define a level line as the boundary of a connected component of a level set. A gray-level digital image $u_d$ is a discrete function in a rectangular grid that takes values in a finite set, typically integer values between 0 and 255. To obtain a grid-independent representation, we can consider an interpolation $u$ of $u_d$ with the desired degree of regularity (i.e., $u$ can be $C^1$, $C^2$, etc.). In this work we use bilinear interpolation, in which case the level lines have the following properties: \begin{itemize} \item for almost all $\lambda$, the level lines are closed Jordan curves; \item by topological inclusion, level lines form a partially ordered set. \end{itemize} For extracting the level lines of such a bilinearly interpolated image we make use of the Fast Level Set Transform (FLST)~\cite{monasse00}. Notice that the FLST correctly handles singularities such as saddle points. We call this collection of level lines (along with their levels) a topographic map. In general, the topographic map is an infinite set, so only quantized grey levels are considered, ensuring that the set is finite. Since the connected components of level sets are ordered by the inclusion relation, the topographic map may be embedded in a hierarchical representation. Simply put, a level line $L_i$ is a descendant of another line $L_j$ in the hierarchy if and only if $L_i$ is included in the interior of $L_j$. Figure~\ref{fig:topographicMap} depicts a simple example. \begin{figure} \centerline{ \hfill \includegraphics[height=.3\columnwidth]{treeA.pdf} \hfill \includegraphics[height=.3\columnwidth]{treeB.pdf} \hfill } \caption{On the left, original image. On the right, the hierarchical representation of the topographic map.} \label{fig:topographicMap} \end{figure} The Mathematical Morphology school~\cite{matheron75,serra83} has extensively studied the topographic map and its level sets, producing a whole set of tools for image analysis. Smoothing filters, usually described by Partial Differential Equations (PDE), can be proven to have an equivalent formulation in terms of iterated morphological operators~\cite{morelPDEs}. Edge detectors can then be directly expressed by combining these operators. The previous requirement leads us to define the set of level lines as a complete and contrast-invariant image representation. In apparent contradiction to this fact, many authors, like Attneave, argue that ``information is concentrated along contours (regions where contrast changes abruptly)''~\cite{attneave54}. For example, edge detectors, of which the most renowned is Canny's~\cite{canny86}, rely on this fact.
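As a toy illustration of these definitions (not of the FLST itself), the following NumPy sketch builds the upper level sets of a small synthetic image and reconstructs the image by superposition; the image values are arbitrary.

\begin{verbatim}
# Sketch: upper level sets and the superposition principle.
import numpy as np

u = np.array([[0., 0., 1.],
              [0., 2., 1.],
              [0., 0., 1.]])

levels = np.unique(u)                      # quantized grey levels of u
upper = {lam: u >= lam for lam in levels}  # X_lambda = {x : u(x) >= lambda}

# Superposition principle: u(x) = sup{ lambda : x in X_lambda }
u_rec = np.full_like(u, levels.min())
for lam, mask in upper.items():
    u_rec[mask] = np.maximum(u_rec[mask], lam)

assert np.array_equal(u, u_rec)            # exact reconstruction
\end{verbatim}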
In summary, only a subset of the topographic map is necessary to obtain a \emph{perceptually} complete description. The search for perceptually important lines will focus on unexpected configurations, arising from the perceptual laws of Gestalt Theory~\cite{kanizsa79,wertheimer38}. From an algorithmic point of view, the main problem with Gestalt rules is their qualitative nature. Desolneux \etal~\cite{desolneux08} developed a detection theory which seeks to provide a quantitative assessment of gestalts. This theory is often referred to as Computational Gestalt and it has been successfully applied to numerous gestalts and detection problems~\cite{cao2005,grompone10,rabin09}. It is primarily based on the Helmholtz principle, which states that conspicuous structures may be viewed as exceptions to randomness. In this approach, there is no need to characterize the elements one wishes to detect, but rather the elements one wishes to avoid detecting, i.e., the background model. When an element sufficiently deviates from the background model, it is considered meaningful and thus detected. Within this framework, Desolneux~\etal~\cite{dmm01} proposed an algorithm to detect contrasted level lines in grey level images, called meaningful boundaries. Further improvements to this algorithm were proposed by Cao \etal~\cite{cao2005}. In this work, we build upon these methods, presenting several contributions: \paragraph{\textbf{From global to partial curve saliency}.} The original meaningful boundaries are totally salient curves (i.e., every point in the curve is salient). We propose a modification that allows detecting partially salient curves as meaningful boundaries. This definition agrees more closely with the observation that pieces of level lines correspond to object contours and also yields more robust results. \paragraph{\textbf{An extended definition of saliency}.} The criterion used to establish saliency in the original meaningful boundaries algorithm is contrast. Cao \etal~\cite{cao2005} proposed to determine saliency as a cooperation of two criteria: contrast and regularity. We study some theoretical and practical issues in their formulation. We then present a new formulation in which both aforementioned criteria compete instead of cooperating. It is theoretically sound and yields improved detections with respect to those obtained by using contrast alone. The partial curve saliency criterion introduced above proves decisive in this new formulation. Strictly speaking, all the proposed algorithms are only invariant to affine contrast changes. This can be easily proven when contrast (i.e., the gradient magnitude) is used as the saliency measure~\cite[Lemma 1, p.~19]{cao08theory}. Nevertheless, the set of meaningful boundaries is not significantly affected by slight deviations from this class of contrast changes. As a side note, we point out that there are two remaining steps to address in order to develop a complete shape detection system: smoothing, and geometrical invariance. Let us briefly discuss them for the sake of completeness. First, during the acquisition, details much too fine to be perceptually relevant are introduced. It is necessary to use a suitable filtering mechanism. Invariance to these fine details may be handled by an appropriate smoothing procedure, e.g., the Affine morphological Scale Space (AMSS)~\cite{moisan98}, or by a subsequent suitable shape description method~\cite{tepper09matching}. Second, representations must be invariant to weak projective transformations.
It can be shown that all planar curves within a large class can be mapped arbitrarily close to a circle by projective transformations~\cite{astrom95-limitations}. Moreover, full projective invariance is neither perceptually real (humans have great difficulty recognizing objects under strong perspective effects) nor computationally tractable. In this sense, affine invariance is the most we can impose in practice. At the same time, the effect of any optical acquisition system can be modeled by a convolution with a smoothing radial kernel. It does not commute with projective transformations and must be taken into account in the recognition process. A multiscale analysis is the only feasible way to treat it correctly. Both concepts, affine invariance and multiscale analysis, are consistently integrated in the work by Morel and Yu~\cite{morel09ASIFT}. The aforementioned tools that cover these issues can be directly applied to the level lines detected by our method. For a wide perspective of the complete shape recognition chain see the book by Cao~\etal~\cite{cao08theory}. The paper is structured as follows. In Section~\ref{sec:meaningfulBoundaries} we recall the definition of meaningful boundaries and present a generalization that allows us to detect partially salient curves. In Section~\ref{sec:meaningfulSmoothBoundaries} we address the combination of contrast and regularity for the detection of meaningful boundaries. We conclude in Section~\ref{sec:conclusions}. \section{Meaningful Contrasted Boundaries} \label{sec:meaningfulBoundaries} Let us begin by formally explaining the meaningful boundaries algorithm by Desolneux \etal~\cite{dmm01}. Let $C$ be a continuous level line of the (bilinearly interpolated) image $u$. We consider a discrete sampling of this curve, and denote it by $x_0, x_1, \dots, x_{n-1}$\footnote{This corresponds to the following 2 steps: i) The intersection of the continuous level-line $C$ with the Qedgels of the image gives a set of $m$ points, as explained in \cite{caselles10}. ii) We sample $n=\lfloor m/2 \rfloor$ points by taking one out of every two points.}. This particular sampling is chosen to ensure that $|Du|(x_i)$ and $|Du|(x_{i+1})$ are statistically independent almost everywhere when pixel values of $u$ are considered to be independent. The gradient magnitude is computed using a standard finite difference scheme on a $2 \times 2$ neighborhood. \begin{notation} Let $H_c$ be the tail histogram of $|Du|$, defined by \begin{equation} H_c (\mu) \stackrel{\mathrm{def}}{=} \frac{\# \{ x \in u,\ |Du|(x) > \mu \}}{\# \{ x \in u,\ |Du|(x) > \min_{x \in u} |Du|(x) \}}, \end{equation} where $Du$ is computed as above. \label{not:H_c} \end{notation} \begin{definition} \label{def:nfaContrastedCurve} \textnormal{(Desolneux~\etal~\cite{dmm01})} Let $\mathcal{C}$ be a finite set of $N_{ll}$ level lines of $u$. A level line $C \in \mathcal{C}$ is a DMM \meps-meaningful contrasted boundary (DMM-MCB) if \begin{equation} \NFA(C) \stackrel{\mathrm{def}}{=} N_{ll} \ H_c ( \min_{x \in C} |Du|(x) ) ^{l/2} < \eps \end{equation} where $l$ is the length of $C$. This number is called the number of false alarms (NFA) of $C$. \end{definition} Here $l$ denotes the Euclidean length of the discrete approximation of $C$. In \cite{cao08theory} the authors assume that $l=2n$, but we found that this approximation is not accurate enough, which leads us to maintain the distinction between $l$ and $2n$ here.
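To fix ideas, here is a hedged Python sketch of Definition~\ref{def:nfaContrastedCurve}; the gradient field and the per-curve contrast samples are illustrative stand-ins, not output of the FLST.

\begin{verbatim}
# Sketch of the DMM NFA: NFA(C) = N_ll * H_c(min contrast)^(l/2).
# grad_mag and the curve samples below are illustrative stand-ins.
import numpy as np

def tail_histogram(grad_mag):
    g = grad_mag[grad_mag > grad_mag.min()]
    return lambda mu: np.count_nonzero(g > mu) / g.size   # H_c(mu)

def dmm_nfa(contrasts, l, N_ll, H_c):
    return N_ll * H_c(contrasts.min()) ** (l / 2.0)

rng = np.random.default_rng(0)
H_c = tail_histogram(rng.rayleigh(size=(100, 100)))   # stand-in |Du|
contrasts = np.array([2.5, 3.0, 2.8, 3.2])            # |Du| along a curve
print(dmm_nfa(contrasts, l=8.0, N_ll=5000, H_c=H_c))  # meaningful if < eps
\end{verbatim}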
Algorithm~\ref{algo:meaningfulBoundaries} shows a possible procedure to obtain all \meps-meaningful contrasted boundaries. \begin{algorithm}[t] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{An image $u$ and a scalar \meps.} \Output{A set of closed curves $\mathcal{S}_\mathrm{res}$.} $\mathcal{S} \gets \mathrm{FLST}(u)$\tcp*{Compute the set of level lines} $N_{ll} \gets \#\{ \mathcal{S} \}$\; Compute the tail histogram $H_c$ of $|Du|$\; $\mathcal{S}_\mathrm{res} \gets \emptyset$\; \For{$C \in \mathcal{S}$}{ Compute the length $l$ of $C$\; $\displaystyle \mu \gets \min_{x \in C} |Du|(x)$\; $\displaystyle \mathrm{nfa}_C \leftarrow N_{ll} \ H_c ( \mu ) ^{l/2}$\; \lIf{$\mathrm{nfa}_C < \eps$}{ $\mathcal{S}_\mathrm{res} \gets \mathcal{S}_\mathrm{res} \cup \{ C \}$ } } \Return{$\mathcal{S}_\mathrm{res}$}\; \caption{Computation of \meps-meaningful boundaries in image $u$.} \label{algo:meaningfulBoundaries} \end{algorithm} \paragraph{Background model.} Now we shall check the consistency of Definition~\ref{def:nfaContrastedCurve}, namely that, on average, no more than \meps curves are detected by chance. In order to make this assertion more precise (in Proposition~\ref{prop:contrastedCurvesNFA} below) we need to define the (\emph{a contrario}) statistical background model that is used to present random input images to the boundary detector. Following \cite{cao2005,dmm01} we do not directly introduce a statistical image model, but only state the statistical properties that each level line $C$ in the input set $E$ of level lines should satisfy. The actual shape of the curve does not matter. We only require that a random gradient value $|Du|(x_i)$ be associated to each of the $n$ regularly sampled points $x_0, x_1, \dots, x_{n-1}$ of $C$, that these $n$ random variables be independent, and that they share the same distribution $P(|Du|(x_i)>\mu) = H_c(\mu)$. \begin{proposition} \label{prop:contrastedCurvesNFA} The expected number of DMM \meps-mean\-ing\-ful contrasted boundaries in a random set $E$ of random curves is smaller than \meps, if $E$ follows the above background model. \end{proposition} We refer to the work by Cao~\etal~\cite{cao2005} for a complete proof. Proposition~\ref{prop:contrastedCurvesNFA} allows us to interpret the meaningful contrasted curves in Definition~\ref{def:nfaContrastedCurve} within a multi-hypothesis testing framework: namely, the curves detected on an image $u$ are those that allow us to reject the null hypothesis (background model) \emph{$\Hy_0$: the values of $|Du|$ are i.i.d., and follow the same distribution as the gradient magnitude histogram of the image $u$ itself}. Definition~\ref{def:nfaContrastedCurve} has some drawbacks. On the one hand, the minimum (or any other pointwise measure, for that matter) can be unstable in the presence of noise. On the other hand, it demands that the curve be unlikely to be \emph{entirely} generated by noise (i.e., well contrasted everywhere). We already stated that \emph{pieces} of level lines match object boundaries. Moreover, as seen in Figure~\ref{fig:conceptMinContrast}, the use of the minimum contrast seems in contradiction with what we perceive. It is therefore too restrictive to impose such a constraint. Since we search for object boundaries, we think the natural model is to select level lines that have well-contrasted parts.
\begin{figure} \centerline { \includegraphics[width=.4\columnwidth]{degrade2} \hspace{.2in} \includegraphics[width=.4\columnwidth]{degradeFlatten} } \caption{Conceptual consequence of using the minimum contrast to detect boundaries. The left image contains a gray gradient and a uniformly black region on its upper and lower halves respectively. The right image is constructed by putting in its upper half the minimum gray level on the left image's upper half. If our perception were tuned to use the minimum contrast to detect the boundary between the two regions, we would perceive the image on the right as being as contrasted as the one on the left, which is clearly not the case.} \label{fig:conceptMinContrast} \end{figure} \subsection{Partially Contrasted Meaningful Boundaries} In this direction, we propose to modify the definition of the number of false alarms of a curve, to support a new model where one detects partially contrasted curves. This modification was briefly introduced in~\cite{tepper09msc} and is now explained in detail. \begin{notation} Let $x_0, x_1, \dots, x_{n-1}$ denote $n$ points of a curve $C$ of length $l$. Let $s$ be the mean Euclidean distance between neighboring points. Denote by $c_i$ ($0 \leq i < n$) the contrast at $x_i$, defined by $c_i = |Du|(x_i)$. We denote by $\mu_k$ ($0 \leq k < n$) the $k$-th value of the vector of the values $c_i$ sorted in ascending order. \end{notation} For $k \leq N \in \N$ and $p \in [0, 1]$, let us denote by \begin{equation} \bintail (N, k; p) \stackrel{\mathrm{def}}{=} \sum_{j = k}^{N} \binom{N}{j} p^j (1 - p)^{N - j} \end{equation} the tail of the binomial law. Desolneux~\etal~present a thorough study of the binomial tail and its use in the detection of geometric structures~\cite{desolneux08}. The regularized incomplete beta function, defined by \begin{equation} I(x; a, b) = \frac{\int_0^x t^{a-1} (1-t)^{b-1} dt}{\int_0^1 t^{a-1} (1-t)^{b-1} dt} \text{,} \end{equation} is an interpolation $\widetilde{\bintail}$ of the binomial tail to the continuous domain~\cite{desolneux08}: \begin{equation} \widetilde{\bintail} (n, k; p) = I(p; k, n-k+1) \end{equation} where $n, k \in \R$. When $n$ and $k$ are natural numbers, $\widetilde{\bintail} (n, k; p) = \bintail (n, k; p)$. Additionally, the regularized incomplete beta function can be computed very efficiently~\cite{numericalRecipes}. Following Meinhardt~\etal~\cite{meinhardt08}, for a given curve the probability under $\Hy_0$ that at least $k$ among the $n$ values $c_j$ are greater than $\mu$ is given by the tail of the binomial law $\bintail (n, k; H_c(\mu))$. Thus it is interesting, and more convenient, to extend this model to the continuous case using the regularized incomplete beta function \begin{equation} \widetilde{\bintail} (n \cdot \lsn{s}, (n-k) \cdot \lsn{s}; H_c(\mu)) \end{equation} where $\lsn{s} = \frac{l}{s \cdot n}$ acts as a normalization factor. This represents the probability under $\Hy_0$ that, for a curve of length $l$, some parts with total length greater than or equal to $s \cdot \lsn{s} (n-k) = \frac{l}{n}(n-k)$ have a contrast greater than $\mu$. \begin{definition} \label{def:nfaContrastedCurve_k} Let $\mathcal{C}$ be a finite set of $N_{ll}$ level lines of $u$. A level line $C \in \mathcal{C}$ is a TMA \meps-meaningful boundary if \begin{equation} \NFA_K(C) \stackrel{\mathrm{def}}{=} N_{ll}\ K\ \min_{k < K} \widetilde{\bintail} (n \cdot \lsn{2}, (n-k) \cdot \lsn{2}; H_c(\mu_k)) < \eps \end{equation} where $K$ is a parameter of the algorithm.
This number is called the number of false alarms (NFA) of $C$. \end{definition} The parameter $K$ controls the number of points that we allow to be likely generated by noise, that is, a curve must have no more than $K$ points with a ``high'' probability of belonging to the background model. It is simply chosen as a percentile of the total number of points in the curve. The procedure is similar to Algorithm~\ref{algo:meaningfulBoundaries} but with $\NFA$ replaced by $\NFA_K$. As usual, Definition~\ref{def:nfaContrastedCurve_k} is correct if the following proposition holds. \begin{proposition} \label{prop:nfaContrastedCurve_k} The expected number of TMA \meps-mean\-ing\-ful boundaries, in a finite random set $E$ of random curves is smaller than \meps. \end{proposition} This very important proof is given in Appendix~\ref{sec:proofNFA_C} to avoid breaking the flow of the discussion. This new model is an extension of the previous one, since $\NFA_{K=1}(C) = \NFA(C)$. In fact, Definition~\ref{def:nfaContrastedCurve_k} is none other than a relaxation of Definition~\ref{def:nfaContrastedCurve}. We should expect to detect the same lines as before, plus new ones, with increased stability. This comes from the fact that several pointwise measures are used and the minimum is taken over their probabilities. This was experimentally checked and some results can be seen in Section~\ref{sec:experimentsMeaningful}. We apply the DMM-MCB and TMA-MCB algorithms to an image of white noise, in order to experimentally check that when $\eps=1$ the number of detections is on average lower than 1. This is confirmed in Figure~\ref{fig:whiteNoise_C}, where the number of detections is actually zero. Even when $\eps=1000$, the number of detections remains very small. \begin{figure*} \centering \includegraphics[width=\textwidth]{noise-newC3} \caption{The center image is Gaussian noise with standard deviation 50 and contains 4845004 level lines. Setting $\eps=1000$, DMM-MCB detects one boundary (left detail) and TMA-MCB detects two boundaries (left and right details). At $\eps=1$, both methods detect zero boundaries.} \label{fig:whiteNoise_C} \end{figure*} In~\cite{cao2005}, other modifications are proposed to the basic meaningful boundaries algorithm. On the one hand, meaningfulness is computed locally. We will not discuss this further, since we are only interested in the redefinition of the NFA and its consequences. In any case, our redefined NFA can also be used in the same local detection process. On the other hand, only level lines that remain stable across several zoom scalings are detected. The reason behind this approach is to counter the effect of small perturbations (i.e., noise) in the image. Our scheme naturally handles this effect by minimizing a probability instead of a pointwise measure. This was confirmed in our experiments, where multiscale stabilization did not provide any visible improvement. \subsection{Maximal boundaries} Because of interpolation, meaningful boundaries usually appear in parallel and redundant groups, called bundles. Since the meaningful level lines inherit the tree structure of the topographic map, Desolneux~\etal~\cite{desolneux08} use this structure to efficiently remove redundant boundaries. From now on, we work on the tree composed only of meaningful boundaries. \begin{definition} \textnormal{(Monasse and Guichard~\cite{monasse00})} A monotone section of a level lines tree is a part of a branch such that each node has a unique son and where the grey level is monotone (no contrast reversal).
A maximal monotone section is a monotone section which is not strictly included in another one. \end{definition} \begin{definition} \textnormal{(Desolneux~\etal~\cite{dmm01})} A meaningful boundary is maximal meaningful if it has a minimal NFA in a maximal monotone section. \end{definition} Algorithm~\ref{algo:newMeaningfulBoundaries} depicts the overall proposed procedure. \begin{algorithm}[t] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{An image $u$, a scalar \meps, an integer $K$.} \Output{A set of closed curves $\mathcal{S}_\mathrm{res}$.} $\mathcal{S} \gets \mathrm{FLST}(u)$\tcp*{Compute the set of level lines} $N_{ll} \gets \# \{ \mathcal{S} \}$\; Compute the tail histogram $H_c$ of $|Du|$\; $\mathcal{S}_\mathrm{res} \gets \emptyset$\; \For{$C \in \mathcal{S}$}{ Compute the length $l$ of $C$\; $n \gets \# \{ x \in C\}$\; $\mu_0, \dots, \mu_{K-1} \gets$ the $K$ smallest values of $|Du|(x)$, $x \in C$\; $\displaystyle \mathrm{nfa}_C \gets N_{ll} K \min_{k < K} \widetilde{\bintail} (\tfrac{l}{2}, (n-k) \cdot \tfrac{l}{2n}; H_c(\mu_k))$\; \lIf{$\mathrm{nfa}_C < \eps$}{ $\mathcal{S}_\mathrm{res} \gets \mathcal{S}_\mathrm{res} \cup \{ C \}$ } } \tcp{Maximality-based pruning:} \Repeat{all monotone sections have been explored}{ Find an unexplored monotone section $\mathcal{S}_\mathrm{M}$ in the level lines tree\; $\displaystyle C_\mathrm{M} \gets \arg\min_{C \in \mathcal{S}_\mathrm{M}} \mathrm{nfa}_C$\; \For{$C \in \mathcal{S}_\mathrm{M}$}{ \lIf{$C \in \mathcal{S}_\mathrm{res}$ \textbf{and} $C \neq C_\mathrm{M}$}{ $\mathcal{S}_\mathrm{res} \gets \mathcal{S}_\mathrm{res} \setminus \{ C \}$ } } } \Return{$\mathcal{S}_\mathrm{res}$}\; \caption{Computation of maximal TMA \meps-meaningful boundaries in image $u$.} \label{algo:newMeaningfulBoundaries} \end{algorithm} Figure~\ref{fig:buildingSequence} shows an example of the reduction of the number of level lines caused by the maximality constraint. Parallel level lines are eliminated, leading to ``thinner'' edges. \begin{figure*} \centerline{ \includegraphics[width=.3\textwidth]{building} \hfil \fbox{\includegraphics[width=.3\textwidth]{building-all-ll}} \hfil \fbox{\includegraphics[width=.3\textwidth]{building-ll}} } \caption{Effect of the maximality condition over the meaningful boundaries of an image. On the left, original image; in the center, DMM-MCB (8987 lines found); on the right, maximal DMM-MCB (517 lines found).} \label{fig:buildingSequence} \end{figure*} In the following, when we refer to meaningful boundaries, both in its DMM or TMA versions, we always compute maximal meaningful boundaries. Notice that working with representative curves of monotone sections has some well-known dangers for particular configurations that rarely occur in practice. For example, if the input image contains successively nested objects of different increasing shades of gray, the proposed algorithm will detect only one object of each nested set. Other definitions that explore local maxima of some saliency measure along the tree, such as MSER~\cite{matas02-mser}, can be used to correct this issue. Desolneux~\etal~\cite{dmm01} also proposed an algorithm called meaningful edges which aims at detecting salient (i.e., well-contrasted) pieces of level lines. TMA-MCB can be considered a hybrid of meaningful boundaries and meaningful edges and inherits advantages from both algorithms. Pieces of level lines belonging to different level lines cannot be compared, since they can have different positions and lengths.
This means that we cannot compute maximal meaningful edges in the level lines tree. The TMA-MCB algorithm is able to detect partially salient curves while retaining compatibility with the maximality in the tree. On the other hand, it is possible to compute maximal meaningful edges inside a given curve. TMA-MCB, as a provider of the supporting level lines, can be considered a first step towards finding meaningful edges that are maximal in both directions: in the tree, i.e., orthogonal to the curve, and along the curve. The extraction of the optimal pieces in a curve is discussed by Tepper \etal~\cite{tepper12ps}. \subsection{Practical implications of the change in the NFA} \label{sec:experimentsMeaningful} We now address the following question: is there a fundamental difference in practice between DMM-MCB and TMA-MCB? The answer is that, given an image, this change implies noticeable differences in the detected curves. Indeed, TMA-MCB is more robust, since the NFAs attained are much lower. Taking the minimum of probabilities is also more stable than taking the minimum of any pointwise measure, see Figure~\ref{fig:comparisonNFAunderNoise}. \begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}c@{\hspace{4pt}}c@{\hspace{12pt}}c@{\hspace{12pt}}c@{\hspace{0pt}}} & \textsc{image} & \textsc{dmm-mcb} & \textsc{tma-mcb} \tabularnewline \raisebox{.4in}{\begin{sideways}\textsc{original}\end{sideways}} & \includegraphics[width=1.4in]{143090} & \fbox{\includegraphics[width=1.4in]{143090-oldllc}} & \fbox{\includegraphics[width=1.4in]{143090-llc}} \tabularnewline \vspace{4pt} \raisebox{.3in}{\begin{sideways}\textsc{original+noise}\end{sideways}} & \includegraphics[width=1.4in]{143090noise} & \fbox{\includegraphics[width=1.4in]{143090noise-oldllc}} & \fbox{\includegraphics[width=1.4in]{143090noise-llc}} \end{tabular} \caption{Noise contamination example. The image on the bottom left is contaminated by a small amount of noise. DMM-MCB takes a minimum of pointwise measures, and thus its result is affected. By contrast, the result of TMA-MCB is less affected, as it deals with probabilities. Notice that here no smoothing is performed prior to detection, contrary to the original implementation of the meaningful boundaries algorithm~\cite{dmm01}.} \label{fig:comparisonNFAunderNoise} \end{figure*} In some cases, by relaxing the meaningfulness threshold in DMM-MCB, that is, setting $\eps > 1$, visually better results can be achieved. More level lines are kept, but at the expense of having lower confidence in them. The key advantage of TMA-MCB is that, for a given threshold \meps, fewer visually salient level lines are discarded. One possible argument against TMA-MCB could be that it is no more than a shift of the threshold on the NFA of DMM-MCB; specifically, that there exists a threshold $\eps' > \eps$ for which DMM-MCB with $\eps'$ would give the same result as TMA-MCB with \meps. However, this assertion is clearly false, as shown in Figure~\ref{fig:comparisonNFA}.
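For completeness, here is a hedged Python sketch of the TMA NFA of Definition~\ref{def:nfaContrastedCurve_k}, using the regularized incomplete beta function to interpolate the binomial tail. It reuses the \texttt{tail\_histogram} stand-in from the earlier sketch; the inputs are illustrative, and $K \leq n$ is assumed.

\begin{verbatim}
# Sketch of the TMA NFA, with s = 2 and the interpolated binomial
# tail Btilde(n, k; p) = I(p; k, n - k + 1).  Assumes K <= n.
import numpy as np
from scipy.special import betainc

def binom_tail_interp(n, k, p):
    return betainc(k, n - k + 1.0, p)          # I(p; k, n - k + 1)

def tma_nfa(contrasts, l, N_ll, H_c, K):
    n = len(contrasts)
    mu = np.sort(contrasts)                    # mu_0 <= ... <= mu_{n-1}
    lsn = l / (2.0 * n)                        # l / (s * n) with s = 2
    terms = [binom_tail_interp(n * lsn, (n - k) * lsn, H_c(mu[k]))
             for k in range(K)]                # k < K
    return N_ll * K * min(terms)               # k = 0 recovers the DMM NFA
\end{verbatim}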
\begin{figure*} \centering \begin{tabular}{@{\hspace{4pt}}c@{\hspace{4pt}}c@{\hspace{4pt}}c@{\hspace{4pt}}c@{\hspace{4pt}}} \textsc{image} & \textsc{dmm-mcb} \begin{footnotesize}($\eps=10^{-10}$)\end{footnotesize} & \textsc{dmm-mcb} \begin{footnotesize}($\eps=1$)\end{footnotesize} & \textsc{tma-mcb} \begin{footnotesize}($\eps=10^{-10}$)\end{footnotesize} \tabularnewline \includegraphics[width=1.1in]{106025} & \fbox{\includegraphics[width=1.1in]{106025-ll-old10}} & \fbox{\includegraphics[width=1.1in]{106025-ll-old0}} & \fbox{\includegraphics[width=1.1in]{106025-ll}} \end{tabular} \caption{Definition~\ref{def:nfaContrastedCurve_k} is not merely a shift of the threshold on the NFA from Definition~\ref{def:nfaContrastedCurve}: even relaxing the threshold to its limit ($\eps = 1$), the result with the old method remains roughly the same. A lot of structure missed with Definition~\ref{def:nfaContrastedCurve} is recovered with Definition~\ref{def:nfaContrastedCurve_k}.} \label{fig:comparisonNFA} \end{figure*} In many applications (e.g., scene reconstruction, image matching), underdetection is far more dangerous than overdetection. Losing structure is critical, as it can end up in a total failure. Detection noise can always be handled (or even tolerated) when the amount of noise does not occlude information, as in our case. TMA-MCB has an advantage over DMM-MCB in this respect\footnote{Note, however, that overdetection might also have a strongly detrimental impact in other applications.}. This is experimentally checked in all examples, even if the difference is more striking in some examples than in others. Figure~\ref{fig:epsilonEvolution} shows the numerical robustness attained with TMA-MCB. The visually important boundaries in the image have a much lower NFA with TMA-MCB than with DMM-MCB. \begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}m{.1in}@{\hspace{4pt}}m{1.52in}@{\hspace{4pt}}m{1.52in}@{\hspace{4pt}}m{1.52in}@{\hspace{0pt}}} &&\includegraphics[width=1.4in]{296059} \tabularnewline & \centering{\begin{footnotesize}$\eps=10^{-10}$\end{footnotesize}} & \centering{\begin{footnotesize}$\eps=10^{-50}$\end{footnotesize}} & \centering{\begin{footnotesize}$\eps=10^{-80}$\end{footnotesize}} \tabularnewline \begin{sideways}\textsc{dmm-mcb}\end{sideways} & \fbox{\includegraphics[width=1.4in]{296059-oldC-10}} & \fbox{\includegraphics[width=1.4in]{296059-oldC-50}} & \fbox{\includegraphics[width=1.4in]{296059-oldC-80}} \tabularnewline \begin{sideways}\textsc{tma-mcb}\end{sideways} & \fbox{\includegraphics[width=1.4in]{296059-newC-10}} & \fbox{\includegraphics[width=1.4in]{296059-newC-50}} & \fbox{\includegraphics[width=1.4in]{296059-newC-80}} \tabularnewline \end{tabular} \caption{ Comparison between the stability of DMM-MCB and TMA-MCB. Much lower NFAs are attained with the latter in lines which are visually relevant. } \label{fig:epsilonEvolution} \end{figure*} \section{Combining contrast and good continuation} \label{sec:meaningfulSmoothBoundaries} As already stated, in natural images contrasted boundaries often locally coincide with object edges. Thus, they are incidentally also smooth. Active contours~\cite{kass88} rely on this combination of good contrast and smoothness to provide well localized contours. In this section, we revisit the work by Cao \etal~\cite{cao2005} and study the possible influence of smoothness in the a contrario detection process. We conclude that regularity plays an important role in improving the quality of the obtained detections.
This reinforcement phenomenon, and the fact that each partial detector can detect most image edges, prove a contrario that contrast and regularity are not independent in natural images. Let $C$ be a rectifiable planar curve, parameterized by arc length. Let $l$ be the length of $C$ and $x = C(\tau) \in C$. With no loss of generality, we assume that $\tau = 0$. \begin{definition} \textnormal{(Cao~\etal~\cite{cao2005})} Let $s > 0$ be a fixed positive value such that $2s < l$. We call regularity of $C$ at $x$ (at scale $s$) the quantity \begin{equation} R_s (x) = \frac{\max (|x - C(-s)|, |x - C(s)|)}{s} \end{equation} where $|x_i - x_j|$ represents the Euclidean distance between $x_i$ and $x_j$. \end{definition} Figure~\ref{fig:defregularity} visually explains the pertinence of this definition. We have $R_s (x) = 1$ only when one of the subcurves $C((-s, 0))$ or $C((0, s))$ is a line segment; in all other cases $R_s (x) < 1$. When $s$ is small enough, regularity is inversely proportional to the curve's curvature around $x$~\cite{cao2005}. \begin{figure} \centering \includegraphics[width=150pt]{defregularity} \put(-80, 85){$x=C(0)$} \put(-125, 65){$C(-s)$} \put(-80, 40){$C(s)$} \put(-48, 70){$s \times R_s(x)$} \caption{Reproduced from the work by Cao~\etal~\cite{cao2005}. The regularity at $x$ is obtained by comparing the radius of the circle with $s$. The radius is equal to $s$ if and only if the curve is a straight line. If the curve has a large curvature, the radius will be small compared to $s$.} \label{fig:defregularity} \end{figure} The question about the choice of $s$ arises naturally and was studied in detail by Cao~\etal~\cite{cao2005} and Mus\'e~\cite{museThesis}. We will limit ourselves to stating that a larger value of $s$ (thus a less local scale of analysis) is more robust to noise. On the other hand, $s$ should not be too large either. In practice, and following Cao~\etal~\cite{cao2005}, one may safely set $s=5$, which is the value we use in our experiments. Let us denote by $H_s (r)$ the tail distribution of the regularity over white noise level lines, i.e., \begin{equation} H_s (r) = P \Big( R_s (x) > r,\, x \in C,\, C \text{ is a white noise level line} \Big) \text{,} \end{equation} which depends only on $s$ and can be empirically estimated. Again, the curve detection algorithm consists in adequately rejecting the null hypothesis \emph{$\Hy_0$: the values of $|R_s|$ are i.i.d., extracted from a noise image}. We assume that, in the background model, contrast and regularity are independent. Let us forget for the moment the issues associated with the use of extremal (the minimum) statistics, discussed in Section~\ref{sec:meaningfulBoundaries}. \begin{definition} Let $C$ be a level line in a finite set $\mathcal{C}$ of $N_{ll}$ level lines of image $u$. Let \begin{align*} \mu &= \min_{x \in C} |Du|(x) \text{,}\\ \rho &= \min_{x \in C} R_s(x) \end{align*} be, respectively, the minimal quantized contrast and regularity along $C$.
The level line $C$ is a DMM \meps-meaningful regular boundary (DMM-MRB) if \begin{equation} \NFA^{R} (C) \stackrel{\mathrm{def}}{=} N_{ll}\ H_s (\rho)^{l / 2 s} < \eps \text{.} \end{equation} The level line $C$ is a DMM \meps-meaningful contrasted regular boundary (DMM-MCRB) if \begin{equation} \NFA^{\mathrm{CR}} (C) \stackrel{\mathrm{def}}{=} N_{ll} \max \left( H_c (\mu)^{l},\ H_s (\rho)^{l/s} \right) < \eps \text{.} \end{equation} \label{def:nfaSmoothCurve} \end{definition} \begin{remark} Cao~\etal~\cite{cao08theory} provided the following definition of meaningful contrasted regular boundaries: \begin{equation} \NFA^{\mathrm{CR}} (C) \stackrel{\mathrm{def}}{=} N_{ll}\ H_c (\mu)^{l/2}\ H_s (\rho)^{l / 2 s} < \eps \text{.} \end{equation} Unfortunately, they do not prove that the expected number of \meps-meaningful contrasted regular boundaries in a finite set of random curves is smaller than \meps. This is problematic, since the threshold \meps is then emptied of its meaning. The proof is by no means easy, and we have not found one yet. However, we have proven that if their definition is slightly modified in the following manner, \begin{equation} \NFA^{\mathrm{CR}} (C) \stackrel{\mathrm{def}}{=} N_{ll}\ H_c (\mu)^{{l}^2 / 2 s}\ H_s (\rho)^{{l}^2 / 2 s} \text{,} \label{eq:nfaSmoothCurve2} \end{equation} a proof can be built~\cite{tepperPhD}. Although theoretically sound, meaningful contrasted regular boundaries defined by Equation~\ref{eq:nfaSmoothCurve2} do not provide satisfactory results. This is a consequence of using the exponent $l^2$. Compared with DMM-MCB (Definition~\ref{def:nfaContrastedCurve}, p.~\pageref{def:nfaContrastedCurve}), even if the regularity term has high probability (say, one), raising the contrast term to a much larger power shifts the NFA of all curves towards zero. Irregular curves that were not meaningful by their contrast might become meaningful contrasted regular boundaries. This is certainly an unwanted side effect. \leavevmode\unskip\penalty9999 \hbox{}\nobreak\hfill \quad\hbox{$\triangle$} \end{remark} Definition~\ref{def:nfaSmoothCurve} exhibits some interesting properties: \begin{itemize} \item A contrasted but irregular curve will not be detected; \item A regular but non-contrasted curve will not be detected; \item An irregular and non-contrasted curve will not be detected; \item A regular and contrasted curve will be detected. \end{itemize} Both gestalts, i.e., contrast and good continuation, interact in a novel way: instead of cooperating by reinforcing each other, as in Equation~\ref{eq:nfaSmoothCurve2}, they compete for the ``control'' of the curve. As the exponent in the contrast term is greater than the exponent in the regularity term ($l > l/s$), the contrast term will in general dominate the detections and the regularity will act as an additional sanity check. The shifting phenomenon mentioned in the above remark will still be present. However, doubling the exponent of the contrast term (from $l/2$ in DMM-MCB to $l$) is much less aggressive than squaring it, and its effect will be doubly mitigated: (1) since $l \gg 2$, and (2) because of the controlling effect of using the maximum. Since TMA-MCB is a relaxed version of DMM-MCB, we profit from such knowledge and also relax the definition of meaningful contrasted regular boundaries. This relaxation will prove particularly relevant for the contrasted regular case. \begin{definition} \label{def:nfaContrastedSmoothCurve_k} Let $\mathcal{C}$ be a finite set of $N_{ll}$ level lines of $u$.
A level line $C \in \mathcal{C}$ is a TMA \meps-meaningful contrasted regular boundary (TMA-MCRB) if \begin{equation} \NFA_{K}^{\mathrm{CR}}(C) \stackrel{\mathrm{def}}{=} N_{ll}\ K_c\ K_s \max \left( \begin{split} \min_{k < K_c} I_c (C, k)^2 \\ \min_{k < K_s} I_s (C, k)^2 \end{split} \right) < \eps \text{,} \end{equation} where \begin{align*} I_c (C, k) &= \widetilde{\bintail} (n \cdot \lsn{2}, k \cdot \lsn{2}; H_c(\mu_k)) \text{,}\\ I_s (C, k) &= \widetilde{\bintail} (n \cdot \lsn{2s}, k \cdot \lsn{2s}; H_s(\rho_{k})) \text{,} \end{align*} and $K_c$ and $K_s$ are parameters of the algorithm. This number is called the number of false alarms (NFA) of $C$. \end{definition} Here $K_c$ and $K_s$ have the same meaning as $K$ in Definition~\ref{def:nfaContrastedCurve_k}, and they are also set as a percentile of the total number of points in the curve. \begin{proposition} The expected number of TMA \meps-mean\-ing\-ful contrasted regular boundaries in a finite set $E$ of random curves is smaller than \meps. \end{proposition} This important proof is deferred to Appendix~\ref{sec:proofNFA_CR} to avoid breaking the flow of the discussion. For completeness, we provide the following definition. \begin{definition} \label{def:nfaSmoothCurve_k} Let $\mathcal{C}$ be a finite set of $N_{ll}$ level lines of $u$. A level line $C \in \mathcal{C}$ is a TMA \meps-meaningful regular boundary (TMA-MRB) if \begin{equation} \NFA_K^{\mathrm{R}}(C) \stackrel{\mathrm{def}}{=} N_{ll}\ K_s \min_{k < K_s} \widetilde{\bintail} (n \cdot \lsn{2s}, k \cdot \lsn{2s}; H_s(\rho_{k})) < \eps \text{,} \end{equation} where $K_s$ is a parameter of the algorithm. This number is called the number of false alarms (NFA) of $C$. \end{definition} As a sanity check, we apply the DMM-MCRB and TMA-MCRB algorithms to an image of white noise. We would expect that when $\eps=1$ the number of detections is on average lower than 1. This is checked in Figure~\ref{fig:whiteNoise_CR}, where the number of detections is actually zero. Even when $\eps=1000$, the number of detections remains negligible. \begin{figure*} \centering \includegraphics[width=.7\textwidth]{noise-newCR3} \caption{There are 4845004 level lines in the left image of a Gaussian noise with standard deviation 50. By setting $\eps=1000$, DMM-MCRB detects zero boundaries and TMA-MCRB detects two boundaries forming a packet (right detail). At $\eps=1$, both methods detect zero boundaries.} \label{fig:whiteNoise_CR} \end{figure*} An immediate objection to the use of regularity might be: since high curvature points are often regarded as very meaningful perceptually~\cite{attneave54}, why such an emphasis on discarding them? The answer is also immediate: we detect \emph{partially} contrasted and regular level lines. Hence, a curve containing a relatively small number of high curvature points will be detected by TMA-MCRB but not by DMM-MCRB. In this scenario, these high curvature points will become more surprising, because of their rarity, and thus meaningful. The procedure for finding maximal meaningful regular or contrasted regular boundaries is similar to Algorithm~\ref{algo:newMeaningfulBoundaries}, replacing $\NFA$ by $\NFA_K^\mathrm{R}$ or $\NFA_K^\mathrm{CR}$, respectively. \subsection{Discussion} We will now examine the results of the proposed competition between contrast and good continuation. The benefits of using meaningful contrasted regular boundaries are clear in Figure~\ref{fig:contrastedVSregular}.
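Before examining these examples, the competition NFA of Definition~\ref{def:nfaContrastedSmoothCurve_k} can be summarized in a minimal Python sketch, given here under stated assumptions: the tail histograms \texttt{H\_c} and \texttt{H\_s} are callables estimated beforehand, the curve is sampled at roughly unit arc-length steps, and the clamp in \texttt{bintail\_interp} is our own guard; none of these helper names belong to the original implementation.
\begin{verbatim}
import numpy as np
from scipy.special import betainc

def bintail_interp(n, k, p):
    # Interpolated binomial tail: B~(n, k; p) = I(p; k, n - k + 1).
    # Clamp k away from 0, since betainc needs positive arguments.
    k = max(k, 1e-12)
    return float(betainc(k, n - k + 1.0, p))

def regularity(curve, s):
    # R_s at every point of a closed curve sampled at (roughly) unit
    # arc-length steps; curve is an (m, 2) array of point coordinates.
    ahead = np.roll(curve, -s, axis=0)
    behind = np.roll(curve, s, axis=0)
    chord = np.maximum(np.linalg.norm(curve - ahead, axis=1),
                       np.linalg.norm(curve - behind, axis=1))
    return chord / s

def nfa_cr(mu, rho, n, l, s, N_ll, K_c, K_s, H_c, H_s):
    # Competition NFA, transcribed from the definition as printed.
    # mu, rho: contrast and regularity values along C, sorted
    # ascending; H_c, H_s: tail histograms of |Du| and R_s.
    lsn2, lsn2s = l / (2.0 * n), l / (2.0 * s * n)
    term_c = min(bintail_interp(n * lsn2, k * lsn2,
                                H_c(mu[k])) ** 2 for k in range(K_c))
    term_s = min(bintail_interp(n * lsn2s, k * lsn2s,
                                H_s(rho[k])) ** 2 for k in range(K_s))
    return N_ll * K_c * K_s * max(term_c, term_s)
\end{verbatim}
Note how the two gestalts only interact through the final \texttt{max}: each term is computed exactly as in the purely contrasted or purely regular case.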
In both examples, only using contrast produces an overdetection (level lines are detected in areas with texture, e.g. the vegetation on the left, or exhibiting a slight gradient, e.g. the sky and the dome on the right), while only using good continuation produces an underdetection (e.g. the bridge on the left and the bell on the right). The combination of both gestalts corrects the issues by keeping the best from both worlds: most undesired level lines disappear (e.g. the vegetation on the left and the sky on the right) while the desired ones are kept (e.g. the bridge on the left and the bell on the right). \begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}m{.1in}@{\hspace{4pt}}m{.4\textwidth}@{\hspace{4pt}}m{.4\textwidth}@{\hspace{0pt}}} \begin{sideways}\textsc{image}\end{sideways} & \includegraphics[width=.4\textwidth]{22090} & \includegraphics[width=.4\textwidth]{118035} \tabularnewline \begin{sideways}\textsc{tma-mcb}\end{sideways} & \fbox{\includegraphics[width=.4\textwidth]{22090-llc}} & \fbox{\includegraphics[width=.4\textwidth]{118035-llc}} \tabularnewline \begin{sideways}\textsc{tma-mrb}\end{sideways} & \fbox{\includegraphics[width=.4\textwidth]{22090-llr}} & \fbox{\includegraphics[width=.4\textwidth]{118035-llr}} \tabularnewline \begin{sideways}\textsc{tma-mcrb}\end{sideways} & \fbox{\includegraphics[width=.4\textwidth]{22090-llrc}} & \fbox{\includegraphics[width=.4\textwidth]{118035-llrc}} \tabularnewline \end{tabular} \caption{Comparison of TMA-MCB (Definition~\ref{def:nfaContrastedCurve_k}), TMA-MRB (Definition~\ref{def:nfaSmoothCurve_k}), and TMA-MCRB (Definition~\ref{def:nfaContrastedSmoothCurve_k}).} \label{fig:contrastedVSregular} \end{figure*} Although more complicated to analyze, Figure~\ref{fig:star-wars} further supports our claims. See the detail on Harrison Ford's sleeve: it is completely lost by using contrast, partially recovered by using good continuation, and well recovered by combining them. It is important to point out that, in general, good continuation has a predominant effect over contrast. In the depicted examples, meaningful contrasted boundaries have lower NFAs than meaningful smooth ones. This explains the visual effect that we perceive when looking at the results: contrasted regular boundaries are basically regular boundaries reinforced by some contrasted parts. \begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}c@{\hspace{12pt}}c@{\hspace{0pt}}} \textsc{image} & \textsc{tma-mcb} \tabularnewline \includegraphics[width=.4\textwidth]{star-wars-casual} & \fbox{\includegraphics[width=.4\textwidth]{star-wars-casual-llc}} \tabularnewline \textsc{tma-mrb} & \textsc{tma-mcrb} \tabularnewline \fbox{\includegraphics[width=.4\textwidth]{star-wars-casual-llr}} & \fbox{\includegraphics[width=.4\textwidth]{star-wars-casual-llrc}} \tabularnewline \end{tabular} \caption{Comparison of TMA-MCB (Definition~\ref{def:nfaContrastedCurve_k}), TMA-MRB (Definition~\ref{def:nfaSmoothCurve_k}), and TMA-MCRB (Definition~\ref{def:nfaContrastedSmoothCurve_k}).} \label{fig:star-wars} \end{figure*} The example in Figure~\ref{fig:watchmen} is a real scene, extremely complicated from the edge detection point of view. In any case, all results are globally satisfactory. Noticeable differences between the methods are perceived by looking at the signs containing letters.
\begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}c@{\hspace{4pt}}c@{\hspace{0pt}}} \textsc{image} & \textsc{tma-mcb} \tabularnewline \includegraphics[width=2.3in]{watchmen} & \fbox{\includegraphics[width=2.3in]{watchmen-llc}} \tabularnewline \textsc{tma-mrb} & \textsc{tma-mcrb} \tabularnewline \fbox{\includegraphics[width=2.3in]{watchmen-llr}} & \fbox{\includegraphics[width=2.3in]{watchmen-llrc}} \tabularnewline \end{tabular} \caption{Comparison of TMA-MCB (Definition~\ref{def:nfaContrastedCurve_k}), TMA-MRB (Definition~\ref{def:nfaSmoothCurve_k}), and TMA-MCRB (Definition~\ref{def:nfaContrastedSmoothCurve_k}).} \label{fig:watchmen} \end{figure*} We lastly compare TMA-MCRB with DMM-MCRB in Figure~\ref{fig:contrastedRegularDMMvsTMA}. As already stated, TMA-MCB often detects more structure than DMM-MCB (second and third rows). This gap is amplified with DMM-MCRB, and can lead to severe underdetections (fourth row). On the other hand, the relaxation present in the TMA version allows recovering the structure more faithfully (fifth row), albeit with some mild overdetections. \begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}m{.08in}@{\hspace{4pt}}m{.255\textwidth}@{\hspace{4pt}}m{.255\textwidth}@{\hspace{4pt}}m{.255\textwidth}@{\hspace{4pt}}m{.15\textwidth}@{\hspace{0pt}}} \begin{sideways}\textsc{image}\end{sideways} & \includegraphics[width=.255\textwidth]{42049} & \includegraphics[width=.255\textwidth]{119082} & \includegraphics[width=.255\textwidth]{167062} & \includegraphics[width=.15\textwidth]{148026} \tabularnewline \begin{sideways}\textsc{dmm-mcb}\end{sideways} & \fbox{\includegraphics[width=.255\textwidth]{42049-oldC}} & \fbox{\includegraphics[width=.255\textwidth]{119082-oldC}} & \fbox{\includegraphics[width=.255\textwidth]{167062-oldC}} & \fbox{\includegraphics[width=.15\textwidth]{148026-oldC}} \tabularnewline \begin{sideways}\textsc{tma-mcb}\end{sideways} & \fbox{\includegraphics[width=.255\textwidth]{42049-newC}} & \fbox{\includegraphics[width=.255\textwidth]{119082-newC}} & \fbox{\includegraphics[width=.255\textwidth]{167062-newC}} & \fbox{\includegraphics[width=.15\textwidth]{148026-newC}} \tabularnewline \begin{sideways}\textsc{dmm-mcrb}\end{sideways} & \fbox{\includegraphics[width=.25\textwidth]{42049-oldCR}} & \fbox{\includegraphics[width=.25\textwidth]{119082-oldCR}} & \fbox{\includegraphics[width=.25\textwidth]{167062-oldCR}} & \fbox{\includegraphics[width=.15\textwidth]{148026-oldCR}} \tabularnewline \begin{sideways}\textsc{tma-mcrb}\end{sideways} & \fbox{\includegraphics[width=.25\textwidth]{42049-newCR}} & \fbox{\includegraphics[width=.25\textwidth]{119082-newCR}} & \fbox{\includegraphics[width=.25\textwidth]{167062-newCR}} & \fbox{\includegraphics[width=.15\textwidth]{148026-newCR}} \tabularnewline \end{tabular} \caption{ Comparison of DMM-MCB, TMA-MCB, DMM-MCRB, and TMA-MCRB. DMM-MCRB may produce severe underdetections.} \label{fig:contrastedRegularDMMvsTMA} \end{figure*} \section{Conclusions} \label{sec:conclusions} This work presents a novel contribution to the field of image structure retrieval. We think that the topographic map is an extremely well suited theoretical framework to perform that task. Mathematical Morphology has proved this point extensively through the body of work it has developed. In that direction, we based our work on the algorithm called Meaningful Boundaries~\cite{desolneux08}, introducing a few substantial modifications that help improve the results. First, the criterion of meaningfulness was relaxed.
In the new definition, a level line can have a piece that is likely generated by noise (i.e., a non-salient piece) and still be considered perceptually important. We also provide an intuitive parameter that controls the length of that piece. Second, we analyze the interaction of two fundamental cues for the perception of contours: contrast and regularity. We propose a new way of combining these features, in which they compete for the control of the boundary saliency. Experiments show the suitability of this combination strategy. Examples of the resulting image structure retrieval method were presented, showing that its theoretical advantages are also validated in practice. The proposed method significantly increases the robustness and the stability of the detections. As a final remark, the maximality constraint presents some issues. Not all packets of parallel level line pieces are eliminated by it. The exploration of another kind of algorithm, based on maximality along the gradient direction, might help to eliminate this effect~\cite{meinhardt08b}. \subsection{Meaningful Contrasted Boundaries} \label{sec:proofNFA_C} This section proves that TMA-MCB (see Definition~\ref{def:nfaContrastedCurve_k}, p.~\pageref{def:nfaContrastedCurve_k}) are theoretically correct. As usual, being correct means that the following proposition holds. \begin{proposition} The expected number of TMA \meps-meaningful boundaries in a finite set $E$ of random curves is smaller than \meps. \end{proposition} \begin{proof} For this proof we follow the scheme from Proposition~12 in~\cite{cao08theory}. For all $k$, let us denote by $L_k$ the random length of the pieces of $C$ such that $|Du| \geq \mu_k$. From Definition~\ref{def:nfaContrastedCurve_k}, any curve $C$ is \meps-meaningful if there is at least one $0 \leq k < K$ such that $N_{ll} \ K \ \widetilde{\bintail} (n \cdot \lsn{2}, L_k; H_c(\mu_k)) < \eps$. Let us denote by $E(C, k)$ this event and recall that all probabilities are under $\Hy_0$: \begin{equation*} \Pr (E(C, k)) \stackrel{\mathrm{def}}{=} \Pr \left( \widetilde{\bintail} (n \cdot \lsn{2}, L_k; H_c(\mu_k)) < \frac{\eps}{N_{ll} \ K} \right) \text{.} \end{equation*} From Lemma~\ref{lem:classic}, we denote \begin{align*} X & = L_k & S(x) & = \widetilde{\bintail} (n \cdot \lsn{2}, x; H_c(\mu_k)) \\ t & = \frac{\eps}{N_{ll} \ K} &\quad \Pr(S(X) < t) & = \Pr (E(C, k)) \end{align*} and finally \begin{equation*} \Pr (E(C, k)) \leq \frac{\eps}{N_{ll} \cdot K} \text{.} \end{equation*} The event defined by ``$C$ is \meps-meaningful'' is $$E(C) = \bigcup_{0 \leq k < K} E(C, k).$$ Let us denote by $\expectation_{\Hy_0}$ the mathematical expectation under $\Hy_0$. The expected number of \meps-meaningful curves is defined as $\expectation_{\Hy_0} \left( \sum_{C \in \mathcal{C}} \mathbf{1}_{E(C)} \right)$, where $\mathbf{1}_{A}$ is the indicator function of the set $A$. Then \begin{equation*} \expectation_{\Hy_0} \left( \sum_{C \in \mathcal{C}} \mathbf{1}_{E(C)} \right) \leq \sum_{\substack{ C \in \mathcal{C} \\ 0 \leq k < K }} \Pr \left( E(C, k) \right) \leq \sum_{\substack{C \in \mathcal{C} \\ 0 \leq k < K}} \frac{\eps}{N_{ll} \cdot K} = \eps. \end{equation*} \end{proof} \subsection{Meaningful Contrasted Regular Boundaries} \label{sec:proofNFA_CR} TMA \meps-meaningful contrasted regular boundaries (see Definition~\ref{def:nfaContrastedSmoothCurve_k}, p.~\pageref{def:nfaContrastedSmoothCurve_k}) are correct if the following proposition holds.
\begin{proposition} The expected number of \meps-meaningful contrasted regular boundaries, obtained with Definition~\ref{def:nfaContrastedSmoothCurve_k}, in a finite random set $E$ of random curves is smaller than \meps. \end{proposition} \begin{proof} The same assumptions from the previous proof hold. Let $X_i = \mathbf{1}_{C_i \mathrm{\ is\ meaningful}}$ and $N = \#E$. Let us denote by $\expectation_{\Hy_0}$ the mathematical expectation under $\Hy_0$. Then \begin{multline} \expectation \left( \sum_{i=1}^{N} \sum_{k=1}^{K_c} \sum_{k'=1}^{K_s} X_i \right) = \\ \expectation \left( \expectation \left( \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} X_i \ |\ N = n, K_c = k_c, K_s = k_s \right) \right) \text{.} \end{multline} We have assumed that $N$ is independent of the curves and that $K_c$, $K_s$ are input parameters. Thus, conditionally on $N = n$, the law of $\sum_{i=1}^{N} X_i$ is the law of $\sum_{i=1}^{n} Y_i$ where $$ \displaystyle Y_i = \mathbf{1}_{n\, k_c\, k_s\, \max \left(\min_{0 \leq k < k_c}I_c (C_i, k)^2,\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \right) < \eps}. $$ By the linearity of expectation, \begin{equation} \expectation \left( \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} X_i \right) = \expectation \left( \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} Y_i \right) = \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} \expectation \left( Y_i \right) \text{.} \end{equation} Since $Y_i$ is a Bernoulli variable, \begin{multline} \expectation (Y_i) = \Pr (Y_i = 1) = \Pr \left( n\, k_c\, k_s \ \max \left( \begin{split} \min_{0 \leq k < k_c}I_c (C_i, k)^2 \\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \end{split} \right) < \eps \right) =\\ = \sum_{l=0}^{\infty} \Pr \left( n\, k_c\, k_s \max \left( \begin{split} \min_{0 \leq k < k_c}I_c (C_i, k)^2 \\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \end{split} \right) < \eps \ \Big|\ L_i=l \right) \cdot \Pr (L_i=l) \text{.} \end{multline} Let us finally denote by $\alpha_1 \dots \alpha_l$ the $l$ independent values of $|Du|$ and by $\gamma_1 \dots \gamma_{l/s}$ the $l/s$ independent values of $|R_s|$. Again, we have assumed that $L_i$ is independent of the gradient and regularity distributions in the image.
Thus, conditionally on $L_i = l$, \begin{multline} \Pr \left( n\, k_c\, k_s \max \left( \begin{split} \min_{0 \leq k < k_c} I_c (C_i, k)^2 \\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \end{split} \right) < \eps \ |\ L_i=l \right) = \\ = \Pr \left( n\, k_c\, k_s \max \left( \begin{split} \min_{0 \leq k < k_c} I_c (C_i, k)^2 \\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \end{split} \right) < \eps \right) = \\ = \Pr \left( \max \left( \begin{split} \min_{0 \leq k < k_c} I_c (C_i, k) \\ \min_{0 \leq k' < k_s} I_s (C_i, k') \end{split} \right) < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) = \\ = \Pr \left( \min_{0 \leq k < k_c}I_c (C_i, k) < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) \cdot \\ \Pr \left( \min_{0 \leq k' < k_s}I_s (C_i, k') < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) \text{.} \end{multline} From the proof of Proposition~\ref{prop:nfaContrastedCurve_k}, \begin{multline} \Pr \left( \min_{0 \leq k < k_c}I_c (C_i, k) < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) \cdot \\ \Pr \left( \min_{0 \leq k' < k_s}I_s (C_i, k') < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) \leq \\ \leq \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} = \frac{\eps}{n\, k_c\, k_s} \text{.} \end{multline} Finally, \begin{equation} \expectation (Y_i) \leq \frac{\eps}{n\, k_c\, k_s} \quad \Rightarrow \quad \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} \expectation (Y_i) \leq \eps \text{.} \end{equation} \end{proof} \section{Introduction} Shape plays a key role in our cognitive system: in the perception of shape lies the beginning of concept formation. Artists have implicitly acknowledged the importance of shapes since the dawn of time. Indeed, although lines do not divide objects from their background in the real world, line drawings are present in much of our earliest recorded art and, remarkably, have remained unchanged through history; see Figure~\ref{fig:lineDrawing}. \begin{figure} \centerline{ \includegraphics[width=.48\columnwidth]{lineDrawing1} \includegraphics[width=.48\columnwidth]{lineDrawing2} } \caption{Lines are used to convey the outer contours of the horses in a very similar way in these drawings, one from 15,000 BC (left: Chinese Horse, paleolithic cave painting at Lascaux, France) and the other from AD 1300 (right: Jen Jen-fa, detail from The Lean Horse and the Fat Horse, Peking Museum, China). Reprinted by permission from Macmillan Ltd: NATURE~\cite{cavanagh05}, copyright 2005.} \label{fig:lineDrawing} \end{figure} Although art may provide clues to understand shape perception, it tells us little from the formal point of view. Let us begin by defining what a shape is. Phenomenologists~\cite{attneave54} conceive shape as a subset of an image, digital or perceptual, endowed with some qualities permitting its recognition. In this sense, both concepts, shape and recognition, are intrinsically intertwined: one has to define a shape in such a way that its recognition can be performed. Following these lines of thought, gestaltists~\cite{arnheim} regard shape perception as the grasping of structural features found in or imposed upon the stimulus material. The Gestalt school has extensively studied phenomena that unveil and justify this definition~\cite{kanizsa79,wertheimer38}. Formally, shapes can be defined by extracting contours from solid objects.
In this context, shapes are represented and analyzed from an infinite-dimensional approach in which a shape is the locus of an infinite number of points~\cite{krim06}. This point of view leads to the active contours formulation~\cite{kass88} or to level-sets methods~\cite{serra83}. Although these shapes can be defined in any number of dimensions, e.g. the contour of a three-dimensional solid object is a surface, we will restrict ourselves to the two-dimensional case, following Lisani \etal~\cite{lisani03-shape} and Cao \etal~\cite{cao08theory}. We define an image as a function $u: \R^2 \rightarrow \R$, where $u(x)$ represents the gray level or luminance at point $x$. Our first task is to extract the topological information of an image, independently of the unknown contrast change function of the acquisition system. This contrast change function can be modeled as a continuous and increasing function $g$. The observed data of an image $u$ might be any such $g(u)$. This simple argument leads to selecting the level sets~\cite{serra83}, or level lines, as a complete and contrast-invariant image description~\cite{caselles99,caselles10}. Given an image $u$, the upper level set $\mathcal{X}_{\lambda}$ and the lower level set $\mathcal{X}^{\lambda}$ of level $\lambda$ are subsets of $\R^2$ defined by~\cite{caselles10} \begin{align} \mathcal{X}_{\lambda} &= \lbrace x \in \R^2 \ |\ u(x) \geq \lambda \rbrace \text{,} \\ \mathcal{X}^{\lambda} &= \lbrace x \in \R^2 \ |\ u(x) < \lambda \rbrace \text{.} \end{align} If the image $u$ is lower (\emph{resp.} upper) semi-continuous, it can be reconstructed from the collection of its upper (\emph{resp.} lower) level sets by using the superposition principle~\cite{matheron75}: \begin{align} u(x) &= \sup \lbrace \lambda\ |\ x \in \mathcal{X}_{\lambda} \rbrace \text{,} \\ u(x) &= \inf \lbrace \lambda\ |\ x \in \mathcal{X}^{\lambda} \rbrace \text{.} \end{align} We define a level line as the boundary of a connected component of a level set. A gray-level digital image $u_d$ is a discrete function on a rectangular grid that takes values in a finite set, typically integer values between 0 and 255. To obtain a grid-independent representation, we can consider an interpolation $u$ of $u_d$ with the desired degree of regularity (i.e., $u$ can be $C^1$, $C^2$, etc.). In this work we use bilinear interpolation, in which case the level lines have the following properties: \begin{itemize} \item for almost all $\lambda$, the level lines are closed Jordan curves; \item by topological inclusion, level lines form a partially ordered set. \end{itemize} For extracting the level lines of such a bilinearly interpolated image we make use of the Fast Level Set Transform (FLST)~\cite{monasse00}. Notice that the FLST correctly handles singularities such as saddle points. We call this collection of level lines (along with their levels) a topographic map. In general, the topographic map is an infinite set, and so only quantized grey levels are considered, ensuring that the set is finite. Since the connected components of level sets are ordered by the inclusion relation, the topographic map may be embedded in a hierarchical representation. Put simply, a level line $L_i$ is a descendant of another line $L_j$ in the hierarchy if and only if $L_i$ is included in the interior of $L_j$. Figure~\ref{fig:topographicMap} depicts a simple example.
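As a toy illustration of these definitions (our own minimal sketch, not part of the FLST), the level sets of a small quantized image and the reconstruction by the superposition principle can be checked directly:
\begin{verbatim}
import numpy as np

# A hypothetical 3x3 quantized image.
u = np.array([[0, 0, 2],
              [0, 3, 2],
              [1, 1, 2]])

levels = np.unique(u)
upper = {lam: u >= lam for lam in levels}   # X_lambda
lower = {lam: u < lam for lam in levels}    # X^lambda (shown for symmetry)

# u(x) = sup { lambda | x in X_lambda }: recover u from its upper sets.
rec = np.full(u.shape, levels.min())
for lam, mask in upper.items():
    rec[mask] = np.maximum(rec[mask], lam)

assert (rec == u).all()   # the superposition principle holds exactly
\end{verbatim}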
\begin{figure} \centerline{ \hfill \includegraphics[height=.3\columnwidth]{treeA.pdf} \hfill \includegraphics[height=.3\columnwidth]{treeB.pdf} \hfill } \caption{On the left, original image. On the right, the hierarchical representation of the topographic map.} \label{fig:topographicMap} \end{figure} The Mathematical Morphology school~\cite{matheron75,serra83} has extensively studied the topographic map and its level sets, producing a whole set of tools for image analysis. Smoothing filters, usually described by Partial Differential Equations (PDE), can be proven to have an equivalent formulation in terms of iterated morphological operators~\cite{morelPDEs}. Hence, edge detectors can be directly expressed by combining these operators. The previous requirement leads us to define the set of level lines as a complete and contrast-invariant image representation. In apparent contradiction to this fact, many authors, like Attneave, argue that ``information is concentrated along contours (regions where contrast changes abruptly)''~\cite{attneave54}. For example, edge detectors, of which the most renowned is Canny's~\cite{canny86}, rely on this fact. In summary, only a subset of the topographic map is necessary to obtain a \emph{perceptually} complete description. The search for perceptually important lines will focus on unexpected configurations, arising from the perceptual laws of Gestalt Theory~\cite{kanizsa79,wertheimer38}. From an algorithmic point of view, the main problem with Gestalt rules is their qualitative nature. Desolneux \etal~\cite{desolneux08} developed a detection theory which seeks to provide a quantitative assessment of gestalts. This theory is often referred to as Computational Gestalt, and it has been successfully applied to numerous gestalts and detection problems~\cite{cao2005,grompone10,rabin09}. It is primarily based on the Helmholtz principle, which states that conspicuous structures may be viewed as exceptions to randomness. In this approach, there is no need to characterize the elements one wishes to detect, but rather the elements one wishes to avoid detecting, i.e., the background model. When an element sufficiently deviates from the background model, it is considered meaningful and thus detected. Within this framework, Desolneux~\etal~\cite{dmm01} proposed an algorithm to detect contrasted level lines in grey level images, called meaningful boundaries. Further improvements to this algorithm were proposed by Cao \etal~\cite{cao2005}. In this work, we build upon these methods, presenting several contributions: \paragraph{\textbf{From global to partial curve saliency}.} The original meaningful boundaries are totally salient curves (i.e., every point in the curve is salient). We propose a modification that allows detecting partially salient curves as meaningful boundaries. This definition agrees more tightly with the observation that pieces of level lines correspond to object contours, and it also yields more robust results. \paragraph{\textbf{An extended definition of saliency}.} The criterion used to establish saliency in the original meaningful boundaries algorithm is contrast. Cao \etal~\cite{cao2005} proposed to determine saliency as a cooperation of two criteria: contrast and regularity. We study some theoretical and practical issues in their formulation. We then present a new formulation in which both aforementioned criteria compete, instead of cooperating.
It is theoretically sound and yields improved detections with respect to the ones obtained by using only contrast. The previously introduced partial curve saliency criterion proves determinant in this new formulation. Strictly speaking, all the proposed algorithms are only invariant to affine contrast changes. This can be easily proven when contrast (i.e., the gradient magnitude) is used as the saliency measure~\cite[Lemma 1, p.~19]{cao08theory}. Nevertheless, the set of meaningful boundaries is not significantly affected by slight deviations from this class of contrast changes. As a side note, we point out that there are two remaining steps to address in order to develop a complete shape detection system: smoothing and geometrical invariance. Let us briefly discuss them for the sake of completeness. First, during the acquisition, details much too fine to be perceptually relevant are introduced. It is necessary to use a suitable filtering mechanism. Invariance to these fine details may be handled by an appropriate smoothing procedure, e.g., the Affine Morphological Scale Space (AMSS)~\cite{moisan98}, or by a subsequent suitable shape description method~\cite{tepper09matching}. Second, representations must be invariant to weak projective transformations. It can be shown that all planar curves within a large class can be mapped arbitrarily close to a circle by projective transformations~\cite{astrom95-limitations}. Moreover, full projective invariance is neither perceptually real (humans have great difficulty recognizing objects under strong perspective effects) nor computationally tractable. In this sense, affine invariance is the most we can impose in practice. At the same time, the effect of any optical acquisition system can be modeled by a convolution with a smoothing radial kernel. It does not commute with projective transformations and must be taken into account in the recognition process. A multiscale analysis is the only feasible way to treat it correctly. Both concepts, affine invariance and multiscale analysis, are consistently integrated in the work by Morel and Yu~\cite{morel09ASIFT}. The aforementioned tools that cover these issues can be directly applied to the level lines detected by our method. For a wide perspective on the complete shape recognition chain, see the book by Cao~\etal~\cite{cao08theory}. The paper is structured as follows. In Section~\ref{sec:meaningfulBoundaries} we recall the definition of meaningful boundaries and present a generalization that allows detecting partially salient curves. In Section~\ref{sec:meaningfulSmoothBoundaries} we address the combination of contrast and regularity for the detection of meaningful boundaries. We conclude in Section~\ref{sec:conclusions}. \section{Meaningful Contrasted Boundaries} \label{sec:meaningfulBoundaries} Let us begin by formally explaining the meaningful boundaries algorithm by Desolneux \etal~\cite{dmm01}. Let $C$ be a continuous level line of the (bilinearly interpolated) image $u$. We consider a discrete sampling of this curve, and denote it by $x_0, x_1, \dots, x_{n-1}$\footnote{This corresponds to the following two steps: i) the intersection of the continuous level line $C$ with the edgels of the image gives a set of $m$ points, as explained in \cite{caselles10}; ii) we sample $n=\lfloor m/2 \rfloor$ points by taking one out of every two points.}.
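Throughout, the contrast at a sample point is the gradient magnitude $|Du|$, computed on $2 \times 2$ neighborhoods as described next. The following minimal sketch shows one standard instance of such a scheme (the exact stencil of the original implementation may differ):
\begin{verbatim}
import numpy as np

def gradient_magnitude_2x2(u):
    # |Du| on 2x2 neighborhoods: each output sample sits at the
    # center of a 2x2 cell of pixels.  One possible instance of the
    # finite-difference scheme mentioned in the text.
    ux = 0.5 * (u[:-1, 1:] - u[:-1, :-1] + u[1:, 1:] - u[1:, :-1])
    uy = 0.5 * (u[1:, :-1] - u[:-1, :-1] + u[1:, 1:] - u[:-1, 1:])
    return np.hypot(ux, uy)
\end{verbatim}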
The sampling just described is chosen to ensure that $|Du|(x_i)$ and $|Du|(x_{i+1})$ are statistically independent almost everywhere when the pixel values of $u$ are considered to be independent. The gradient magnitude is computed using a standard finite difference scheme on a $2 \times 2$ neighborhood. \begin{notation} Let $H_c$ be the tail histogram of $|Du|$, defined by \begin{equation} H_c (\mu) \stackrel{\mathrm{def}}{=} \frac{\# \{ x \in u,\ |Du|(x) > \mu \}}{\# \{ x \in u,\ |Du|(x) > \min_{x \in u} |Du|(x) \}} \text{.} \end{equation} \label{not:H_c} \end{notation} \begin{definition} \label{def:nfaContrastedCurve} \textnormal{(Desolneux~\etal~\cite{dmm01})} Let $\mathcal{C}$ be a finite set of $N_{ll}$ level lines of $u$. A level line $C \in \mathcal{C}$ is a DMM \meps-meaningful contrasted boundary (DMM-MCB) if \begin{equation} \NFA(C) \stackrel{\mathrm{def}}{=} N_{ll} \ H_c ( \min_{x \in C} |Du|(x) ) ^{l/2} < \eps \end{equation} where $l$ is the length of $C$. This number is called the number of false alarms (NFA) of $C$. \end{definition} Actually, $l$ denotes the Euclidean length of the discrete approximation of $C$. In \cite{cao08theory} the authors assume that $l=2n$, but we found that this approximation is not accurate enough, which leads us to distinguish here between $l$ and $2n$. Algorithm~\ref{algo:meaningfulBoundaries} shows a possible procedure to obtain all \meps-meaningful contrasted boundaries. \begin{algorithm}[t] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{An image $u$ and a scalar \meps.} \Output{A set of closed curves $\mathcal{S}_\mathrm{res}$.} $\mathcal{S} \gets \mathrm{FLST}(u)$\tcp*{Compute the set of level lines} $N_{ll} \gets \#\{ \mathcal{S} \}$\; Compute the tail histogram $H_c$ of $|Du|$\; $\mathcal{S}_\mathrm{res} \gets \emptyset$\; \For{$C \in \mathcal{S}$}{ Compute the length $l$ of $C$\; $\displaystyle \mu \gets \min_{x \in C} |Du|(x)$\; $\displaystyle \mathrm{nfa}_C \gets N_{ll} \ H_c ( \mu ) ^{l/2}$\; \lIf{$\mathrm{nfa}_C < \eps$}{ $\mathcal{S}_\mathrm{res} \gets \mathcal{S}_\mathrm{res} \cup \{ C \}$ } } \Return{$\mathcal{S}_\mathrm{res}$}\; \caption{Computation of \meps-meaningful boundaries in image $u$.} \label{algo:meaningfulBoundaries} \end{algorithm} \paragraph{Background model.} Now we shall check the consistency of Definition~\ref{def:nfaContrastedCurve}, namely that, on average, no more than \meps curves are detected by chance. In order to make this assertion more precise (in Proposition~\ref{prop:contrastedCurvesNFA} below) we need to define the (\emph{a contrario}) statistical background model that is used to present random input images to the boundary detector. Following \cite{cao2005,dmm01} we do not directly introduce a statistical image model; we only state the statistical properties that each level line $C$ in the input set $E$ of level lines should satisfy. The actual shape of the curve does not matter. We only require that a random gradient value $|Du|(x_i)$ be associated to each of the $n$ regularly sampled points $x_0, x_1, \dots, x_{n-1}$ of $C$, and that these $n$ random variables be independent and identically distributed, with $P(|Du|(x_i)>\mu) = H_c(\mu)$. \begin{proposition} \label{prop:contrastedCurvesNFA} The expected number of DMM \meps-mean\-ing\-ful contrasted boundaries in a random set $E$ of random curves is smaller than \meps, if $E$ follows the above background model.
\end{proposition} We refer to the work by Cao~\etal~\cite{cao2005} for a complete proof. Proposition~\ref{prop:contrastedCurvesNFA} allows us to interpret the meaningful contrasted curves in Definition~\ref{def:nfaContrastedCurve} within a multi-hypothesis testing framework: namely, the curves detected on an image $u$ are those that allow us to reject the null hypothesis (background model) \emph{$\Hy_0$: the values of $|Du|$ are i.i.d., and follow the same distribution as the gradient magnitude histogram of the image $u$ itself}. Definition~\ref{def:nfaContrastedCurve} has some drawbacks. On one side, the minimum, or for that matter any pointwise measure, can be unstable in the presence of noise. On the other side, it demands that no part of the curve be likely generated by noise, i.e., that the curve be well contrasted everywhere. We already stated that \emph{pieces} of level lines match object boundaries. Moreover, as seen in Figure~\ref{fig:conceptMinContrast}, the use of the minimum contrast seems in contradiction with what we perceive. It is therefore too restrictive to impose such a constraint. Since we search for object boundaries, we think the natural model is to select level lines that have well contrasted parts. \begin{figure} \centerline { \includegraphics[width=.4\columnwidth]{degrade2} \hspace{.2in} \includegraphics[width=.4\columnwidth]{degradeFlatten} } \caption{Conceptual consequence of using the minimum contrast to detect boundaries. The left image contains a gray gradient and a uniformly black region in its upper and lower halves, respectively. The right image is constructed by putting in its upper half the minimum gray level of the left image's upper half. If our perception were tuned to use the minimum contrast to detect the boundary between the two regions, we would perceive the image on the right as being as contrasted as the one on the left, which is clearly not the case.} \label{fig:conceptMinContrast} \end{figure} \subsection{Partially Contrasted Meaningful Boundaries} In this direction, we propose to modify the definition of the number of false alarms of a curve, to support a new model where one detects partially contrasted curves. This modification was briefly introduced in~\cite{tepper09msc} and is now explained in detail. \begin{notation} Let $x_0, x_1, \dots, x_{n-1}$ denote $n$ points of a curve $C$ of length $l$. Let $s$ be the mean Euclidean distance between neighboring points. Denote by $c_i$ ($0 \leq i < n$) the contrast at $x_i$, defined by $c_i = |Du|(x_i)$. We denote by $\mu_k$ ($0 \leq k < n$) the $k$-th value of the vector of the values $c_i$ sorted in ascending order. \end{notation} For $k \leq N \in \N$ and $p \in [0, 1]$, let us denote by \begin{equation} \bintail (N, k; p) \stackrel{\mathrm{def}}{=} \sum_{j = k}^{N} \binom{N}{j} p^j (1 - p)^{N - j} \end{equation} the tail of the binomial law. Desolneux~\etal~present a thorough study of the binomial tail and its use in the detection of geometric structures~\cite{desolneux08}. The regularized incomplete beta function, defined by \begin{equation} I(x; a, b) = \frac{\int_0^x t^{a-1} (1-t)^{b-1} dt}{\int_0^1 t^{a-1} (1-t)^{b-1} dt} \text{,} \end{equation} is an interpolation $\widetilde{\bintail}$ of the binomial tail to the continuous domain~\cite{desolneux08}: \begin{equation} \widetilde{\bintail} (n, k; p) = I(p; k, n-k+1) \end{equation} where $n, k \in \R$. In the case where $n$ and $k$ are natural numbers, $\widetilde{\bintail} (n, k; p) = \bintail (n, k; p)$.
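As a concrete check (a sketch assuming SciPy; the helper names are ours), the exact tail and its beta-function interpolation can be compared on integer arguments:
\begin{verbatim}
import numpy as np
from scipy.special import betainc, comb

def bintail(N, k, p):
    # Exact tail of the binomial law:
    # sum_{j >= k} C(N, j) p^j (1 - p)^(N - j).
    j = np.arange(k, N + 1)
    return float(np.sum(comb(N, j) * p**j * (1 - p)**(N - j)))

def bintail_interp(n, k, p):
    # Continuous interpolation via the regularized incomplete beta
    # function: B~(n, k; p) = I(p; k, n - k + 1).
    return float(betainc(k, n - k + 1, p))

# Agreement on integer arguments (up to floating-point error):
print(bintail(20, 7, 0.3))          # ~0.392
print(bintail_interp(20, 7, 0.3))   # same value
\end{verbatim}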
Additionally, the regularized incomplete beta function can be computed very efficiently~\cite{numericalRecipes}. Following Meinhardt~\etal~\cite{meinhardt08}, for a given curve the probability under $\Hy_0$ that at least $k$ among the $n$ values $c_j$ are greater than $\mu$ is given by the tail of the binomial law $\bintail (n, k; H_c(\mu))$. Thus it is interesting, and more convenient, to extend this model to the continuous case using the regularized incomplete beta function \begin{equation} \widetilde{\bintail} (n \cdot \lsn{s}, k \cdot \lsn{s}; H_c(\mu)) \end{equation} where $\lsn{s} = \frac{l}{s \cdot n}$ acts as a normalization factor. This represents the probability under $\Hy_0$ that, for a curve of length $l$, some parts with total length greater than or equal to $\lsn{s} (n-k)$ have a contrast greater than $\mu$. \begin{definition} \label{def:nfaContrastedCurve_k} Let $\mathcal{C}$ be a finite set of $N_{ll}$ level lines of $u$. A level line $C \in \mathcal{C}$ is a TMA \meps-meaningful boundary if \begin{equation} \NFA_K(C) \stackrel{\mathrm{def}}{=} N_{ll}\ K\ \min_{k < K} \widetilde{\bintail} (n \cdot \lsn{2}, k \cdot \lsn{2}; H_c(\mu_k)) < \eps \end{equation} where $K$ is a parameter of the algorithm. This number is called the number of false alarms (NFA) of $C$. \end{definition} The parameter $K$ controls the number of points that we allow to be likely generated by noise; that is, a curve must have no more than $K$ points with a ``high'' probability of belonging to the background model. It is simply chosen as a percentile of the total number of points in the curve. The procedure is similar to Algorithm~\ref{algo:meaningfulBoundaries}, but replacing $\NFA$ by $\NFA_K$. As usual, Definition~\ref{def:nfaContrastedCurve_k} is correct if the following proposition holds. \begin{proposition} \label{prop:nfaContrastedCurve_k} The expected number of TMA \meps-mean\-ing\-ful boundaries in a finite random set $E$ of random curves is smaller than \meps. \end{proposition} This important proof is deferred to Appendix~\ref{sec:proofNFA_C} to avoid breaking the flow of the discussion. This new model is an extension of the previous one, since $\NFA_{K=1}(C) = \NFA(C)$. In fact, Definition~\ref{def:nfaContrastedCurve_k} is no other than a relaxation of Definition~\ref{def:nfaContrastedCurve}. We should expect to detect the same lines, plus new detections, with increased stability. This comes from the fact that several pointwise measures are used and the minimum is taken over their probabilities. This was experimentally checked, and some results can be seen in Section~\ref{sec:experimentsMeaningful}. We apply the DMM-MCB and TMA-MCB algorithms to an image of white noise, in order to experimentally check that when $\eps=1$ the number of detections is on average lower than 1. This is confirmed in Figure~\ref{fig:whiteNoise_C}, where the number of detections is actually zero. Even when $\eps=1000$, the number of detections remains very small. \begin{figure*} \centering \includegraphics[width=\textwidth]{noise-newC3} \caption{There are 4845004 level lines in the center image of a Gaussian noise with standard deviation 50. By setting $\eps=1000$, DMM-MCB detects one boundary (left detail) and TMA-MCB detects two boundaries (left and right details). At $\eps=1$, both methods detect zero boundaries.} \label{fig:whiteNoise_C} \end{figure*} In~\cite{cao2005}, other modifications are proposed to the basic meaningful boundaries algorithm. On the one hand, meaningfulness is computed locally.
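Coming back to the global definition, a minimal sketch of the computation of $\NFA_K$ from Definition~\ref{def:nfaContrastedCurve_k}, reusing \texttt{bintail\_interp} from the sketch above (the tail histogram \texttt{H\_c} is again assumed to be a callable estimated beforehand):
\begin{verbatim}
import numpy as np

def nfa_K(contrasts, l, N_ll, K, H_c):
    # NFA_K transcribed from the definition as printed;
    # contrasts: |Du| at the n sampled points of C; l: curve length.
    n = len(contrasts)
    mu = np.sort(contrasts)        # mu_0 <= mu_1 <= ... (ascending)
    lsn2 = l / (2.0 * n)           # normalization factor l / (2n)
    tails = [bintail_interp(n * lsn2, k * lsn2, H_c(mu[k]))
             for k in range(K)]
    return N_ll * K * min(tails)
\end{verbatim}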
We will not discuss local meaningfulness further, since we are only interested in the redefinition of the NFA and its consequences. In any case, our redefined NFA can also be used in the same local detection process. On the other hand, only level lines that remain stable across several zoom scalings are detected. The reason behind this approach is to counter the effect of small perturbations (i.e., noise) in the image. Our scheme naturally handles this effect by minimizing a probability instead of a pointwise measure. This was confirmed in our experiments, where multiscale stabilization did not provide any visible improvement.
On the left, original image; on the center, DMM-MCB (8987 lines found); on the left, maximal DMM-MCB (517 lines found).} \label{fig:buildingSequence} \end{figure*} In the following, when we refer to meaningful boundaries, both in its DMM or TMA versions, we always compute maximal meaningful boundaries. Notice that working with representative curves of monotone sections has some well-known dangers for particular configurations that rarely occur in practice. For example, if the input image contains successively nested objects of different increasing shades of gray, the proposed algorithm will detect only one object of each nested set. Other definitions that explore local maxima of some saliency measure along the tree, such as MSER~\cite{matas02-mser}, can be used to correct this issue. Desolneux~\etal~\cite{dmm01} also proposed an algorithm called meaningful edges which aims at detecting salient (i.e., well contrasted) pieces of level lines. TMA-MCB can be considered a hybrid of meaningful boundaries and meaningful edges and presents advantages from both algorithms. Pieces of level lines belonging to different level lines cannot be compared, since they can have different positions and lengths. This means that we cannot compute maximal meaningful edges in the level lines tree. The TMA-MCB algorithm is able to detect partially salient curves while retaining compatibility with the maximality in the tree. On the other side, it is possible to compute maximal meaningful edges inside a given curve. TMA-MCB, as a provider of the supporting level lines, can be considered a first step towards finding meaningful edges that are maximal in both directions: in the tree, i.e., orthogonal to the curve, and along the curve. The extraction of the optimal pieces in a curve is discussed by Tepper \etal~\cite{tepper12ps}. \subsection{Practical implications of the change in the NFA} \label{sec:experimentsMeaningful} We now address the following question: is there a fundamental difference in practice between DMM-MCB and TMA-MCB? The answer is that, given an image, this change implies noticeable differences in the detected curves. Indeed, TMA-MCB are more robust since the NFAs attained are much lower. Taking the minimum of probabilities is also more stable than taking the minimum on any punctual measure, see Figure~\ref{fig:comparisonNFAunderNoise}. \begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}c@{\hspace{4pt}}c@{\hspace{12pt}}c@{\hspace{12pt}}c@{\hspace{0pt}}} & \textsc{image} & \textsc{dmm-mcb} & \textsc{tma-mcb} \tabularnewline \raisebox{.4in}{\begin{sideways}\textsc{original}\end{sideways}} & \includegraphics[width=1.4in]{143090} & \fbox{\includegraphics[width=1.4in]{143090-oldllc}} & \fbox{\includegraphics[width=1.4in]{143090-llc}} \tabularnewline \vspace{4pt} \raisebox{.3in}{\begin{sideways}\textsc{original+noise}\end{sideways}} & \includegraphics[width=1.4in]{143090noise} & \fbox{\includegraphics[width=1.4in]{143090noise-oldllc}} & \fbox{\includegraphics[width=1.4in]{143090noise-llc}} \end{tabular} \caption{Noise contamination example. The image on the bottom left is contaminated by a small amount of noise. DMM-MCB takes a minimum of punctual measures, thus its result is affected. On the counterpart, result with TMA-MCB is less affected, as it deals with probabilities. 
Notice that here no smoothing is performed prior to detection, contrary to the original implementation of the meaningful boundaries algorithm~\cite{dmm01}.} \label{fig:comparisonNFAunderNoise} \end{figure*} In some cases, by relaxing the meaningfulness threshold in DMM-MCB, that is, setting $\eps > 1$, visually better results can be achieved. More level lines are kept, but at the expense of having lower confidence in them. The key advantage of TMA-MCB is that, for a given threshold \meps, fewer visually salient level lines are discarded. One of the possible arguments against TMA-MCB could be that it is no more than a shift of the threshold on the NFA of DMM-MCB. Specifically, that there exists a threshold $\eps' > \eps$ for which DMM-MCB with $\eps'$ would be the same as TMA-MCB with \meps. However, this assertion is clearly false, as shown in Figure~\ref{fig:comparisonNFA}. \begin{figure*} \centering \begin{tabular}{@{\hspace{4pt}}c@{\hspace{4pt}}c@{\hspace{4pt}}c@{\hspace{4pt}}c@{\hspace{4pt}}} \textsc{image} & \textsc{dmm-mcb} \begin{footnotesize}($\eps=10^{-10}$)\end{footnotesize} & \textsc{dmm-mcb} \begin{footnotesize}($\eps=1$)\end{footnotesize} & \textsc{tma-mcb} \begin{footnotesize}($\eps=10^{-10}$)\end{footnotesize} \tabularnewline \includegraphics[width=1.1in]{106025} & \fbox{\includegraphics[width=1.1in]{106025-ll-old10}} & \fbox{\includegraphics[width=1.1in]{106025-ll-old0}} & \fbox{\includegraphics[width=1.1in]{106025-ll}} \end{tabular} \caption{Definition~\ref{def:nfaContrastedCurve_k} is not merely a shift of the threshold on the NFA from Definition~\ref{def:nfaContrastedCurve}: even when relaxing the threshold to its limit ($\eps = 1$), the result with the old method remains roughly the same. A lot of structure missed with Definition~\ref{def:nfaContrastedCurve} is recovered with Definition~\ref{def:nfaContrastedCurve_k}.} \label{fig:comparisonNFA} \end{figure*} In many applications (e.g., scene reconstruction, image matching), underdetection is far more dangerous than overdetection. Losing structure is critical as it can end up in a total failure. Detection noise can always be handled (or even tolerated) when the amount of noise does not occlude information, as in our case. TMA-MCB has an advantage over DMM-MCB in this respect\footnote{Note, however, that overdetection might also have a huge detrimental impact in other applications.}. This is experimentally checked in all examples, even if the difference is more striking in some examples than in others. Figure~\ref{fig:epsilonEvolution} shows the numerical robustness attained with TMA-MCB. The visually important boundaries in the image have a much lower NFA with TMA-MCB than with DMM-MCB.
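To make the mechanics of Algorithm~\ref{algo:newMeaningfulBoundaries} concrete, the per-curve computation can be sketched in a few lines of Python. This is only an illustrative sketch under stated assumptions, not the reference implementation: the helper names are hypothetical, the tail histogram $H_c$ is assumed to be available as a callable, and the interpolated tail $\widetilde{\bintail}$ is approximated by the ordinary (integer-rounded) binomial survival function. \begin{verbatim}
# Sketch of the TMA NFA of one level line (cf. Algorithm 1).
# Hypothetical helper names; bintail approximates the paper's
# interpolated binomial tail with scipy's exact survival function.
import numpy as np
from scipy.stats import binom

def bintail(trials, successes, p):
    # P(B(trials, p) >= successes), integer-rounded
    return binom.sf(int(np.ceil(successes)) - 1, int(round(trials)), p)

def tma_nfa(grad_on_curve, length, n_lines, H_c, K):
    # grad_on_curve: |Du| at the n points of C; length: l;
    # n_lines: N_ll; H_c: tail histogram of |Du|; K: relaxation
    n = len(grad_on_curve)
    mu = np.sort(grad_on_curve)[:K]   # K smallest contrasts mu_1..mu_K
    tails = [bintail(length / 2, k * length / (2 * n), H_c(mu[k]))
             for k in range(K)]
    return n_lines * K * min(tails)   # keep C if this is < epsilon
\end{verbatim}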
\begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}m{.1in}@{\hspace{4pt}}m{1.52in}@{\hspace{4pt}}m{1.52in}@{\hspace{4pt}}m{1.52in}@{\hspace{0pt}}} &&\includegraphics[width=1.4in]{296059} \tabularnewline & \centering{\begin{footnotesize}$\eps=10^{-10}$\end{footnotesize}} & \centering{\begin{footnotesize}$\eps=10^{-50}$\end{footnotesize}} & \centering{\begin{footnotesize}$\eps=10^{-80}$\end{footnotesize}} \tabularnewline \begin{sideways}\textsc{dmm-mcb}\end{sideways} & \fbox{\includegraphics[width=1.4in]{296059-oldC-10}} & \fbox{\includegraphics[width=1.4in]{296059-oldC-50}} & \fbox{\includegraphics[width=1.4in]{296059-oldC-80}} \tabularnewline \begin{sideways}\textsc{tma-mcb}\end{sideways} & \fbox{\includegraphics[width=1.4in]{296059-newC-10}} & \fbox{\includegraphics[width=1.4in]{296059-newC-50}} & \fbox{\includegraphics[width=1.4in]{296059-newC-80}} \tabularnewline \end{tabular} \caption{ Comparison between the stability of DMM-MCB and TMA-MCB. Much lower NFAs are attained with the latter for lines that are visually relevant. } \label{fig:epsilonEvolution} \end{figure*} \section{Combining contrast and good continuation} \label{sec:meaningfulSmoothBoundaries} As already stated, in natural images contrasted boundaries often locally coincide with object edges. Thus, they are also incidentally smooth. Active contours~\cite{kass88} rely on this combination of good contrast and smoothness to provide well-localized contours. In this section, we reprise the work by Cao \etal~\cite{cao2005} and study the possible influence of smoothness in the a contrario detection process. We conclude that regularity plays an important role in the improvement of the quality of the obtained detections. This reinforcement phenomenon and the fact that each partial detector can detect most image edges prove a contrario that contrast and regularity are not independent in natural images. Let $C$ be a rectifiable planar curve, parameterized by arc length. Let $l$ be the length of $C$ and $x = C(\tau) \in C$. Without loss of generality, we assume that $\tau = 0$. \begin{definition} \textnormal{(Cao~\etal~\cite{cao2005})} Let $s > 0$ be a fixed positive value such that $2s < l$. We call regularity of $C$ at $x$ (at scale $s$) the quantity \begin{equation} R_s (x) = \frac{\max (|x - C(-s)|, |x - C(s)|)}{s} \end{equation} where $|x_i - x_j|$ represents the Euclidean distance between $x_i$ and $x_j$. \end{definition} Figure~\ref{fig:defregularity} visually explains the pertinence of this definition. We have $R_s (x) = 1$ only when one of the subcurves $C((-s, 0))$ or $C((0, s))$ is a line segment; in all other cases $R_s (x) < 1$. When $s$ is small enough, regularity is inversely proportional to the curve's curvature around $x$~\cite{cao2005}. \begin{figure} \centering \includegraphics[width=150pt]{defregularity} \put(-80, 85){$x=C(0)$} \put(-125, 65){$C(-s)$} \put(-80, 40){$C(s)$} \put(-48, 70){$s \times R_s(x)$} \caption{Reproduced from the work by Cao~\etal~\cite{cao2005}. The regularity at $x$ is obtained by comparing the radius of the circle with $s$. The radius is equal to $s$ if and only if the curve is a straight line. If the curve has a large curvature, the radius will be small compared to $s$.} \label{fig:defregularity} \end{figure} The question of the choice of $s$ arises naturally and was studied in detail by Cao~\etal~\cite{cao2005} and Mus\'e~\cite{museThesis}. We will limit ourselves to stating that a larger value of $s$ (thus a less local scale of analysis) is more robust to noise.
On the other hand, $s$ should not be too large either. In practice, and following Cao~\etal~\cite{cao2005}, one may safely set $s=5$, which is the value we use in our experiments. Let us denote by $H_s (r)$ the distribution of the regularity in white noise level lines, i.e., \begin{equation} H_s (r) = P \Big( R_s (x) > r,\, x \in C,\, C \text{ is a white noise level line} \Big) \text{,} \end{equation} which depends only on $s$ and can be empirically estimated. Again, the curve detection algorithm consists in adequately rejecting the null hypothesis \emph{$\Hy_0$: the values of $|R_s|$ are i.i.d., extracted from a noise image}. We assume that, in the background model, contrast and regularity are independent. Let us forget for the moment the issues associated with the use of extremal (the minimum) statistics, discussed in Section~\ref{sec:meaningfulBoundaries}. \begin{definition} Let $C$ be a level line in a finite set $\mathcal{C}$ of $N_{ll}$ level lines of image $u$. Let \begin{align*} \mu &= \min_{x \in C} |Du|(x) \text{,}\\ \rho &= \min_{x \in C} R_s(x) \end{align*} be respectively the minimal quantized contrast and regularity along $C$. The level line $C$ is a DMM \meps-meaningful regular boundary (DMM-MRB) if \begin{equation} \NFA^{R} (C) \stackrel{\mathrm{def}}{=} N_{ll}\ H_s (\rho)^{l / 2 s} < \eps \text{.} \end{equation} The level line $C$ is a DMM \meps-meaningful contrasted regular boundary (DMM-MCRB) if \begin{equation} \NFA^{\mathrm{CR}} (C) \stackrel{\mathrm{def}}{=} N_{ll} \max \left( H_c (\mu)^{l},\ H_s (\rho)^{l/s} \right) < \eps \text{.} \end{equation} \label{def:nfaSmoothCurve} \end{definition} \begin{remark} Cao~\etal~\cite{cao08theory} provided the following definition of meaningful contrasted regular boundaries: \begin{equation} \NFA^{\mathrm{CR}} (C) \stackrel{\mathrm{def}}{=} N_{ll}\ H_c (\mu)^{l/2}\ H_s (\rho)^{l / 2 s} < \eps \text{.} \end{equation} Unfortunately, they do not prove that the expected number of \meps-meaningful contrasted regular boundaries in a finite set of random curves is smaller than \meps. This is problematic, since it empties the threshold \meps of its meaning. It is by no means an easy proof, and we have not found a solution yet. However, we have proven that by slightly changing their definition in the following manner \begin{equation} \NFA^{\mathrm{CR}} (C) \stackrel{\mathrm{def}}{=} N_{ll}\ H_c (\mu)^{{l}^2 / 2 s}\ H_s (\rho)^{{l}^2 / 2 s} \text{,} \label{eq:nfaSmoothCurve2} \end{equation} a proof can be built~\cite{tepperPhD}. Although theoretically sound, meaningful contrasted regular boundaries defined by Equation~\ref{eq:nfaSmoothCurve2} do not provide satisfactory results. This is a consequence of using the exponent $l^2$. With respect to DMM-MCB (Definition~\ref{def:nfaContrastedCurve}, p.~\pageref{def:nfaContrastedCurve}), even if the regularity term has high probability (say, one), raising the contrast term to a much larger power will shift the NFA of all curves towards zero. Irregular curves that were not meaningful by their contrast might become meaningful regular boundaries. This is certainly an unwanted side effect.
\leavevmode\unskip\penalty9999 \hbox{}\nobreak\hfill \quad\hbox{$\triangle$} \end{remark} Definition~\ref{def:nfaSmoothCurve} exhibits some interesting properties: \begin{itemize} \item A contrasted but irregular curve will not be detected; \item A regular but non-contrasted curve will not be detected; \item An irregular and non-contrasted curve will not be detected; \item A regular and contrasted curve will be detected. \end{itemize} Both gestalts, i.e., contrast and good continuation, interact in a novel way: instead of cooperating by reinforcing each other, as in Equation~\ref{eq:nfaSmoothCurve2}, they compete for the ``control'' of the curve. As the exponent in the contrast term is greater than the exponent in the regularity term ($l > l/s$), the contrast term will in general dominate the detections and the regularity will act as an additional sanity check. The shifting phenomenon mentioned in the above remark will still be present. However, $2l$ is much less aggressive than $l^2$ and its effect will be doubly mitigated: (1) since $l \gg 2$ and (2) because of the controlling effect of using the maximum. Since TMA-MCB is a relaxed version of DMM-MCB, we profit from such knowledge and also relax the definition of meaningful contrasted regular boundaries. This relaxation will prove particularly relevant for the contrasted regular case. \begin{definition} \label{def:nfaContrastedSmoothCurve_k} Let $\mathcal{C}$ be a finite set of $N_{ll}$ level lines of $u$. A level line $C \in \mathcal{C}$ is a TMA \meps-meaningful contrasted regular boundary (TMA-MCRB) if \begin{equation} \NFA_{K}^{\mathrm{CR}}(C) \stackrel{\mathrm{def}}{=} N_{ll}\ K_c\ K_s \max \left( \begin{split} \min_{k < K_c} I_c (C, k)^2 \\ \min_{k < K_s} I_s (C, k)^2 \end{split} \right) < \eps \text{,} \end{equation} where \begin{align*} I_c (C, k) &= \widetilde{\bintail} (n \cdot \lsn{2}, k \cdot \lsn{2}; H_c(\mu_k)) \text{,}\\ I_s (C, k) &= \widetilde{\bintail} (n \cdot \lsn{2s}, k \cdot \lsn{2s}; H_s(\rho_{k})) \text{,} \end{align*} and $K_c$ and $K_s$ are parameters of the algorithm. This number is called the number of false alarms (NFA) of $C$. \end{definition} Here $K_c$ and $K_s$ have the same meaning as $K$ in Definition~\ref{def:nfaContrastedCurve_k} and they are also set as a percentile of the total number of points in the curve. \begin{proposition} The expected number of TMA \meps-mean\-ing\-ful contrasted regular boundaries in a finite set $E$ of random curves is smaller than \meps. \end{proposition} This very important proof is given in Appendix~\ref{sec:proofNFA_CR} to avoid breaking the flow of the discussion. For completeness, we provide the following definition. \begin{definition} \label{def:nfaSmoothCurve_k} Let $\mathcal{C}$ be a finite set of $N_{ll}$ level lines of $u$. A level line $C \in \mathcal{C}$ is a TMA \meps-meaningful regular boundary (TMA-MRB) if \begin{equation} \NFA_K^{\mathrm{R}}(C) \stackrel{\mathrm{def}}{=} N_{ll}\ K_s \min_{k < K_s} \widetilde{\bintail} (n \cdot \lsn{2s}, k \cdot \lsn{2s}; H_s(\rho_{k})) < \eps \text{,} \end{equation} where $K_s$ is a parameter of the algorithm. This number is called the number of false alarms (NFA) of $C$. \end{definition} As a sanity check, we apply the DMM-MCRB and TMA-MCRB algorithms to an image of white noise. We would expect that when $\eps=1$ the number of detections is on average lower than 1. This is checked in Figure~\ref{fig:whiteNoise_CR}, where the number of detections is actually zero. Even when $\eps=1000$, the number of detections remains negligible.
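Since the competition-by-maximum in Definition~\ref{def:nfaContrastedSmoothCurve_k} is easy to misread, we sketch the computation below in Python. Again, this is a hedged illustration rather than the reference implementation: the helper names are hypothetical, the $K_c$ smallest contrasts and $K_s$ smallest regularities along the curve are assumed to be precomputed, and the interpolated tail $\widetilde{\bintail}$ is replaced by a rounded binomial survival function. \begin{verbatim}
# Sketch of NFA_K^CR (TMA-MCRB). Hypothetical helper names.
import numpy as np
from scipy.stats import binom

def bintail(trials, successes, p):
    return binom.sf(int(np.ceil(successes)) - 1, int(round(trials)), p)

def nfa_cr(n, l, s, mus, rhos, H_c, H_s, n_lines, K_c, K_s):
    # n: number of points of C; l: its length; s: regularity scale;
    # mus: K_c smallest contrasts; rhos: K_s smallest regularities
    I_c = [bintail(l / 2, k * l / (2 * n), H_c(mus[k]))
           for k in range(K_c)]
    I_s = [bintail(l / (2 * s), k * l / (2 * s * n), H_s(rhos[k]))
           for k in range(K_s)]
    # contrast and regularity compete through the max of squares
    return n_lines * K_c * K_s * max(min(I_c) ** 2, min(I_s) ** 2)
\end{verbatim}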
\begin{figure*} \centering \includegraphics[width=.7\textwidth]{noise-newCR3} \caption{There are 4845004 level lines in the left image, a Gaussian noise image with standard deviation 50. By setting $\eps=1000$, DMM-MCRB detects zero boundaries and TMA-MCRB detects two boundaries forming a packet (right detail). At $\eps=1$, both methods detect zero boundaries.} \label{fig:whiteNoise_CR} \end{figure*} An immediate objection to the use of regularity might be: since high curvature points are often regarded as very meaningful perceptually~\cite{attneave54}, why such an emphasis on discarding them? The answer is also immediate: we detect \emph{partially} contrasted and regular level lines. Hence, a curve containing a relatively small number of high curvature points will be detected by TMA-MCRB but not by DMM-MCRB. In this scenario, these high curvature points will become more surprising, because of their rarity, and thus meaningful. The procedure for finding maximal meaningful regular or contrasted regular boundaries is similar to Algorithm~\ref{algo:newMeaningfulBoundaries}, replacing $\NFA$ by $\NFA_K^\mathrm{R}$ or $\NFA_K^\mathrm{CR}$, respectively. \subsection{Discussion} We will now examine the results of the proposed competition between contrast and good continuation. The benefits of using meaningful contrasted regular boundaries are clear in Figure~\ref{fig:contrastedVSregular}. In both examples, only using contrast produces an overdetection (level lines are detected in areas with texture, e.g., the vegetation on the left, or exhibiting a slight gradient, e.g., the sky and the dome on the right) while only using good continuation produces an underdetection (e.g., the bridge on the left and the bell on the right). The combination of both gestalts corrects the issues by keeping the best of both worlds: most undesired level lines disappear (e.g., the vegetation on the left and the sky on the right) while the desired ones are kept (e.g., the bridge on the left and the bell on the right). \begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}m{.1in}@{\hspace{4pt}}m{.4\textwidth}@{\hspace{4pt}}m{.4\textwidth}@{\hspace{0pt}}} \begin{sideways}\textsc{image}\end{sideways} & \includegraphics[width=.4\textwidth]{22090} & \includegraphics[width=.4\textwidth]{118035} \tabularnewline \begin{sideways}\textsc{tma-mcb}\end{sideways} & \fbox{\includegraphics[width=.4\textwidth]{22090-llc}} & \fbox{\includegraphics[width=.4\textwidth]{118035-llc}} \tabularnewline \begin{sideways}\textsc{tma-mrb}\end{sideways} & \fbox{\includegraphics[width=.4\textwidth]{22090-llr}} & \fbox{\includegraphics[width=.4\textwidth]{118035-llr}} \tabularnewline \begin{sideways}\textsc{tma-mcrb}\end{sideways} & \fbox{\includegraphics[width=.4\textwidth]{22090-llrc}} & \fbox{\includegraphics[width=.4\textwidth]{118035-llrc}} \tabularnewline \end{tabular} \caption{Comparison of TMA-MCB (Definition~\ref{def:nfaContrastedCurve_k}), TMA-MRB (Definition~\ref{def:nfaSmoothCurve_k}), and TMA-MCRB (Definition~\ref{def:nfaContrastedSmoothCurve_k}).} \label{fig:contrastedVSregular} \end{figure*} Although more complicated to analyze, Figure~\ref{fig:star-wars} further supports our claims. See the detail on Harrison Ford's sleeve: it is completely lost by using contrast, partially recovered by using good continuation, and well recovered by combining them. It is important to point out that in general, good continuation has a predominant effect over contrast.
In the depicted examples, meaningful contrasted boundaries have lower NFAs than meaningful smooth ones. This explains the visual effect that we perceive when looking at the results: contrasted regular boundaries are basically regular boundaries reinforced by some contrasted parts. \begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}c@{\hspace{12pt}}c@{\hspace{0pt}}} \textsc{image} & \textsc{tma-mcb} \tabularnewline \includegraphics[width=.4\textwidth]{star-wars-casual} & \fbox{\includegraphics[width=.4\textwidth]{star-wars-casual-llc}} \tabularnewline \textsc{tma-mrb} & \textsc{tma-mcrb} \tabularnewline \fbox{\includegraphics[width=.4\textwidth]{star-wars-casual-llr}} & \fbox{\includegraphics[width=.4\textwidth]{star-wars-casual-llrc}} \tabularnewline \end{tabular} \caption{Comparison of TMA-MCB (Definition~\ref{def:nfaContrastedCurve_k}), TMA-MRB (Definition~\ref{def:nfaSmoothCurve_k}), and TMA-MCRB (Definition~\ref{def:nfaContrastedSmoothCurve_k}).} \label{fig:star-wars} \end{figure*} The example in Figure~\ref{fig:watchmen} is a real scene, extremely complicated from the edge detection point of view. In any case, all results are globally satisfactory. Noticeable differences between the methods are perceived by looking at the signs containing letters. \begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}c@{\hspace{4pt}}c@{\hspace{0pt}}} \textsc{image} & \textsc{tma-mcb} \tabularnewline \includegraphics[width=2.3in]{watchmen} & \fbox{\includegraphics[width=2.3in]{watchmen-llc}} \tabularnewline \textsc{tma-mrb} & \textsc{tma-mcrb} \tabularnewline \fbox{\includegraphics[width=2.3in]{watchmen-llr}} & \fbox{\includegraphics[width=2.3in]{watchmen-llrc}} \tabularnewline \end{tabular} \caption{Comparison of TMA-MCB (Definition~\ref{def:nfaContrastedCurve_k}), TMA-MRB (Definition~\ref{def:nfaSmoothCurve_k}), and TMA-MCRB (Definition~\ref{def:nfaContrastedSmoothCurve_k}).} \label{fig:watchmen} \end{figure*} Lastly, we compare TMA-MCRB with DMM-MCRB in Figure~\ref{fig:contrastedRegularDMMvsTMA}. As already stated, TMA-MCB often detects more structure than DMM-MCB (second and third rows). This effect is amplified in DMM-MCRB and can lead to severe underdetections (fourth row). On the other hand, the relaxation present in the TMA version allows recovering the structure more faithfully (fifth row), albeit with some mild overdetections.
\begin{figure*} \centering \begin{tabular}{@{\hspace{0pt}}m{.08in}@{\hspace{4pt}}m{.255\textwidth}@{\hspace{4pt}}m{.255\textwidth}@{\hspace{4pt}}m{.255\textwidth}@{\hspace{4pt}}m{.15\textwidth}@{\hspace{0pt}}} \begin{sideways}\textsc{image}\end{sideways} & \includegraphics[width=.255\textwidth]{42049} & \includegraphics[width=.255\textwidth]{119082} & \includegraphics[width=.255\textwidth]{167062} & \includegraphics[width=.15\textwidth]{148026} \tabularnewline \begin{sideways}\textsc{dmm-mcb}\end{sideways} & \fbox{\includegraphics[width=.255\textwidth]{42049-oldC}} & \fbox{\includegraphics[width=.255\textwidth]{119082-oldC}} & \fbox{\includegraphics[width=.255\textwidth]{167062-oldC}} & \fbox{\includegraphics[width=.15\textwidth]{148026-oldC}} \tabularnewline \begin{sideways}\textsc{tma-mcb}\end{sideways} & \fbox{\includegraphics[width=.255\textwidth]{42049-newC}} & \fbox{\includegraphics[width=.255\textwidth]{119082-newC}} & \fbox{\includegraphics[width=.255\textwidth]{167062-newC}} & \fbox{\includegraphics[width=.15\textwidth]{148026-newC}} \tabularnewline \begin{sideways}\textsc{dmm-mcrb}\end{sideways} & \fbox{\includegraphics[width=.25\textwidth]{42049-oldCR}} & \fbox{\includegraphics[width=.25\textwidth]{119082-oldCR}} & \fbox{\includegraphics[width=.25\textwidth]{167062-oldCR}} & \fbox{\includegraphics[width=.15\textwidth]{148026-oldCR}} \tabularnewline \begin{sideways}\textsc{tma-mcrb}\end{sideways} & \fbox{\includegraphics[width=.25\textwidth]{42049-newCR}} & \fbox{\includegraphics[width=.25\textwidth]{119082-newCR}} & \fbox{\includegraphics[width=.25\textwidth]{167062-newCR}} & \fbox{\includegraphics[width=.15\textwidth]{148026-newCR}} \tabularnewline \end{tabular} \caption{ Comparison of DMM-MCB, TMA-MCB, DMM-MCRB, and TMA-MCRB. DMM-MCRB may produce severe underdetections.} \label{fig:contrastedRegularDMMvsTMA} \end{figure*} \section{Conclusions} \label{sec:conclusions} This work presents a novel contribution to the field of image structure retrieval. We think that the topographic map is an extremely well-suited theoretical framework to perform that task. Mathematical Morphology has proved this in depth and in breadth with the body of work it has developed. In that direction, we based our work on the algorithm called Meaningful Boundaries~\cite{desolneux08}, introducing a few deep modifications that help improve the results. First, the criterion of meaningfulness was relaxed. In the new definition, a level line can have a poorly contrasted piece and still be considered perceptually important. We also provide an intuitive parameter that controls the length of that piece. Second, we analyze the interaction of two fundamental cues for the perception of contours: contrast and regularity. We propose a new way of combining these features in which they compete for the control of the boundary saliency. Experiments show the suitability of this combination strategy. Examples of the resulting image structure retrieval method were presented, soundly showing that its theoretical advantages are also validated in practice. The proposed method significantly increases the robustness and the stability of the detections. As a final remark, the maximality constraint presents some issues. Not all packets of parallel level line pieces are eliminated by it. The exploration of another kind of algorithm based on maximality along the gradient direction might help to eliminate this effect~\cite{meinhardt08b}.
\appendix \subsection{Meaningful Contrasted Boundaries} \label{sec:proofNFA_C} This section proves that TMA-MCB (see Definition~\ref{def:nfaContrastedCurve_k}, p.~\pageref{def:nfaContrastedCurve_k}) are theoretically correct. As usual, being correct means that the following proposition holds. \begin{proposition} The expected number of TMA \meps-meaningful boundaries in a finite set $E$ of random curves is smaller than \meps. \end{proposition} \begin{proof} For this proof we follow the scheme from Proposition~12 in~\cite{cao08theory}. For all $k$, let us denote by $L_k$ the random length of the pieces of $C$ such that $|Du| \geq \mu_k$. From Definition~\ref{def:nfaContrastedCurve_k}, any curve $C$ is \meps-meaningful if there is at least one $0 \leq k < K$ such that $N_{ll} \ K \ \widetilde{\bintail} (n \cdot \lsn{2}, L_k; H_c(\mu_k)) < \eps$. Let us denote by $E(C, k)$ this event and recall that all probabilities are under $\Hy_0$: \begin{equation*} \Pr (E(C, k)) \stackrel{\mathrm{def}}{=} \Pr \left( \widetilde{\bintail} (n \cdot \lsn{2}, L_k; H_c(\mu_k)) < \frac{\eps}{N_{ll} \ K} \right) \text{.} \end{equation*} From Lemma~\ref{lem:classic}, denoting \begin{align*} X & = L_k & S(x) & = \widetilde{\bintail} (n \cdot \lsn{2}, x; H_c(\mu_k)) \\ t & = \frac{\eps}{N_{ll} \ K} &\quad \Pr(S(X) < t) & = \Pr (E(C, k)) \end{align*} we finally obtain \begin{equation*} \Pr (E(C, k)) \leq \frac{\eps}{N_{ll} \cdot K} \text{.} \end{equation*} The event defined by ``$C$ is \meps-meaningful'' is $$E(C) = \bigcup_{0 \leq k < K} E(C, k).$$ Let us denote by $\expectation_{\Hy_0}$ the mathematical expectation under $\Hy_0$. The expected number of \meps-meaningful curves is defined as $\expectation_{\Hy_0} \left( \sum_{C \in \mathcal{C}} \mathbf{1}_{E(C)} \right)$ where $\mathbf{1}_{A}$ is the indicator function of the set $A$. Then \begin{equation*} \expectation_{\Hy_0} \left( \sum_{C \in \mathcal{C}} \mathbf{1}_{E(C)} \right) \leq \sum_{\substack{ C \in \mathcal{C} \\ 0 \leq k < K }} \Pr \left( E(C, k) \right) \leq \sum_{\substack{C \in \mathcal{C} \\ 0 \leq k < K}} \frac{\eps}{N_{ll} \cdot K} = \eps. \end{equation*} \end{proof} \subsection{Meaningful Contrasted Regular Boundaries} \label{sec:proofNFA_CR} TMA \meps-meaningful contrasted regular boundaries (see Definition~\ref{def:nfaContrastedSmoothCurve_k}, p.~\pageref{def:nfaContrastedSmoothCurve_k}) are correct if the following proposition holds. \begin{proposition} The expected number of \meps-meaningful contrasted regular boundaries, obtained with Definition~\ref{def:nfaContrastedSmoothCurve_k}, in a finite set $E$ of random curves is smaller than \meps. \end{proposition} \begin{proof} The same assumptions from the previous proof hold. Let $X_i = \mathbf{1}_{C_i \mathrm{\ is\ meaningful}}$ and $N = \#E$. Let us denote by $\expectation_{\Hy_0}$ the mathematical expectation under $\Hy_0$. Then \begin{multline} \expectation \left( \sum_{i=1}^{N} \sum_{k=1}^{K_c} \sum_{k'=1}^{K_s} X_i \right) = \\ \expectation \left( \expectation \left( \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} X_i \ |\ N = n, K_c = k_c, K_s = k_s \right) \right) \text{.} \end{multline} We have assumed that $N$ is independent of the curves and that $K_c$, $K_s$ are input parameters. Thus, conditionally on $N = n$, the law of $\sum_{i=1}^{N} X_i$ is the law of $\sum_{i=1}^{n} Y_i$ where $$ \displaystyle Y_i = \mathbf{1}_{n\, k_c\, k_s\, \max \left(\min_{0 \leq k < k_c}I_c (C_i, k)^2,\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \right) < \eps}.
$$ By the linearity of expectation, \begin{equation} \expectation \left( \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} X_i \right) = \expectation \left( \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} Y_i \right) = \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} \expectation \left( Y_i \right) \text{.} \end{equation} Since $Y_i$ is a Bernoulli variable, \begin{multline} \expectation (Y_i) = \Pr (Y_i = 1) = \Pr \left( n\, k_c\, k_s \ \max \left( \begin{split} \min_{0 \leq k < k_c}I_c (C_i, k)^2 \\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \end{split} \right) < \eps \right) =\\ = \sum_{l=0}^{\infty} \Pr \left( n\, k_c\, k_s \max \left( \begin{split} \min_{0 \leq k < k_c}I_c (C_i, k)^2 \\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \end{split} \right) < \eps \ \Big|\ L_i=l \right) \cdot \Pr (L_i=l) \text{.} \end{multline} Let us finally denote by $\alpha_1, \dots, \alpha_l$ the $l$ independent values of $|Du|$ and $\gamma_1, \dots, \gamma_{l/s}$ the $l/s$ independent values of $|R_s|$. Again, we have assumed that $L_i$ is independent of the gradient and regularity distributions in the image. Thus, conditionally on $L_i = l$, \begin{multline} \Pr \left( n\, k_c\, k_s \max \left( \begin{split} \min_{0 \leq k < k_c} I_c (C_i, k)^2 \\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \end{split} \right) < \eps \ |\ L_i=l \right) = \\ = \Pr \left( n\, k_c\, k_s \max \left( \begin{split} \min_{0 \leq k < k_c} I_c (C_i, k)^2 \\ \min_{0 \leq k' < k_s} I_s (C_i, k')^2 \end{split} \right) < \eps \right) = \\ = \Pr \left( \max \left( \begin{split} \min_{0 \leq k < k_c} I_c (C_i, k) \\ \min_{0 \leq k' < k_s} I_s (C_i, k') \end{split} \right) < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) = \\ = \Pr \left( \min_{0 \leq k < k_c}I_c (C_i, k) < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) \cdot \\ \Pr \left( \min_{0 \leq k' < k_s}I_s (C_i, k') < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) \text{.} \end{multline} From the proof of Proposition~\ref{prop:nfaContrastedCurve_k}, \begin{multline} \Pr \left( \min_{0 \leq k < k_c}I_c (C_i, k) < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) \cdot \\ \Pr \left( \min_{0 \leq k' < k_s}I_s (C_i, k') < \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \right) \leq \\ \leq \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} \left( \frac{\eps}{n\, k_c\, k_s} \right)^{1 / 2} = \frac{\eps}{n\, k_c\, k_s} \text{.} \end{multline} Finally, \begin{equation} \expectation (Y_i) \leq \frac{\eps}{n\, k_c\, k_s} \quad \Rightarrow \quad \sum_{i=1}^{n} \sum_{k=1}^{k_c} \sum_{k'=1}^{k_s} \expectation (Y_i) \leq \eps \text{.} \end{equation} \end{proof}
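Both proofs rest on Lemma~\ref{lem:classic}, i.e., on the superuniformity property $\Pr(S(X) < t) \leq t$ when $S$ is the binomial tail evaluated at a binomially distributed $X$. The snippet below is a quick numerical sanity check of this property with toy parameters of our own choosing (they are not taken from the experiments above). \begin{verbatim}
# Numerical check of Pr(S(X) < t) <= t for the binomial tail.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
trials, p, t = 200, 0.3, 0.05
X = rng.binomial(trials, p, size=200000)   # plays the role of L_k
S = binom.sf(X - 1, trials, p)             # S(X) = P(B >= X)
print((S < t).mean())                      # stays below t = 0.05
\end{verbatim}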
{ "timestamp": "2012-10-16T02:02:02", "yymm": "1210", "arxiv_id": "1210.3718", "language": "en", "url": "https://arxiv.org/abs/1210.3718" }
\section{Introduction} When a liquid in a container is heated from below and cooled from the top, a heat flux settles through the liquid. As the temperature of the bottom plate increases, the heat transport is successively dominated by different mechanisms: conduction, convection and boiling. To study this, experiments with a Hele-Shaw cell were undertaken. Instantaneous non-intrusive measurements of the two-dimensional temperature field are performed in the cell. The video presents these measurements in combination with Schlieren visualizations of the flow to illustrate the different heat transport mechanisms and the transitions between them. The cell is made of two thin vertical glass slides ($50\,\rm{mm}$ in width and $25\,\rm{mm}$ in height) separated by a $1\,\rm{mm}$ gap. The bottom of the cell is made of a thin heat-conducting plate. The top is open. The movie begins when a cell filled with ethanol at ambient temperature is put on top of a hot copper block. In order to visualize the temperature evolution inside the cell, a screen patterned with a regular cross-ruling is put behind the cell. As the liquid in the cell warms up, the cross-ruling -- as seen through the cell -- looks distorted. The temperature differences in the cell plane give rise to index of refraction gradients. These gradients bend the light rays, resulting in a distorted image of the cross-ruling. Because the relationship between index of refraction and temperature, as well as the distance to the screen, are known, measurements of the distortion can be used to calculate the temperature gradients in the cell. By integrating those gradients, this technique provides the temperature field inside the cell with a time resolution that is only limited by the camera frame rate (here 500 frames per second). This is used in the video to illustrate the different heat transport mechanisms at play as the liquid inside the cell is heated from below to progressively higher temperatures. One successively observes the temperature fields for: \begin{itemize} \item two-dimensional convection motions with light, hot plumes rising between falling heavy, cold ones, \item nucleate boiling of vapor bubbles on the bottom plate as the ethanol boiling temperature is exceeded, \item and the `boiling crisis', i.e., the transition to the film boiling regime where the bottom plate is covered by a vapor film which periodically destabilizes by a Rayleigh-Taylor instability. \end{itemize} \end{document}
{ "timestamp": "2012-10-16T02:01:37", "yymm": "1210", "arxiv_id": "1210.3693", "language": "en", "url": "https://arxiv.org/abs/1210.3693" }
\section{Introduction} The Large Hadron Collider (LHC) has clearly exhibited its ability to make discoveries with the observation of a new resonance~\cite{:2012gk,:2012gu} with even spin that decays to photons and Z bosons as expected of the Standard Model (SM) Higgs particle. Thus precise measurements of the decays of this resonance into various channels (whether standard or not) are of the utmost importance. At the same time, it is essential to verify our understanding of the existing channels, in particular, $h \rightarrow \gamma \gamma$. How well are these photons defined? Can physics objects other than single photons leave signatures in the detector similar to that of a photon? Not surprisingly, the answer is yes~\cite{Dobrescu:2000jt, Toro:2012sv, Draper:2012xt}. Given the granularity of the calorimeters, an object consisting of (nearly) collinear photons, typically labeled a photon-jet, will generate a signature similar to that of a single photon. The possibility that the Higgs particle decays to multiple collinear photons is not new~\cite{Dobrescu:2000jt, Draper:2012xt}. Simple models, in which the Higgs decays to almost massless scalars that each in turn decay to a pair of photons, typically do not give rise to events with four separately identifiable photons, but rather to pairs of photon-jets, each with 2 photons. Slightly more complicated models can produce Higgs decays to photon-jets with $4, 6, \cdots$ photons. We will discuss concrete models where the Higgs decays to photon-jets with $2$ and $4$ photons per photon-jet. Thus it is essential to develop tools to separate single photons from photon-jets from QCD-jets. Otherwise we are unlikely to understand either the signal or the background. ATLAS recently made attempts to identify photon-jets from Higgs decays~\cite{ATLAS-CONF-2012-079}. These analyses rely on relaxing the isolation/shower shape criteria, which use the differing distributions of energy deposition within the calorimeter cells to quite successfully discriminate single photons from QCD-jets. Unfortunately, the parameters of the underlying model can be easily adjusted so that the resultant photon-jets pass the strictest isolation/shower shape criteria just like photons. More importantly, loosening isolation criteria results in a larger fake rate for QCD-jets. Discriminating photon-jets from QCD-jets is more challenging than separating single photons from QCD-jets. Fortunately, jet substructure techniques~\cite{Seymour:1993mx, Brooijmans:1077731,Butterworth:2007ke, Butterworth:2008iy, Thaler:2008ju, Kaplan:2008ie} have recently been developed to distinguish QCD-jets from jets containing boosted heavy particle decays, and we can use this work for the detection of photon-jets. More broadly, `jets', as defined by an infrared safe jet clustering algorithm, are being proposed as a universal language to describe \textit{all} calorimeter objects including single photons, photon-jets and QCD-jets. By using the tools developed in jet substructure physics, we do not need to rely on isolation cuts. We supplement the traditional/conventional variables currently used to discriminate photons from QCD-jets with substructure variables that probe in detail the energy distribution within the jet. Note that photon-jets are composed of energetic photons distributed inside the jet, where the distribution is a result of the kinematic features of the model, e.g., the masses and spins of the intermediate particles.
The existence of this structure within photon-jets suggests that substructure variables will be efficient at finding and discriminating photon-jets. We show that our analysis is capable of separating photon-jets from both single photons and QCD-jets \emph{at least as} efficiently as the traditional discriminators separate photons from QCD-jets. There is another important advantage to applying jet substructure techniques to purely electromagnetic calorimeter (ECal) objects. The introduction of `grooming' algorithms (including filtering~\cite{Butterworth:2008iy, Butterworth:2008sd, Butterworth:2008tr}, pruning~\cite{Ellis:2009su, Ellis:2009me}, and trimming~\cite{Krohn:2009th}) promised to suppress the undesirable contributions to purely hadronic jets from the underlying event (the largely uncorrelated soft interactions surrounding the interesting hard scattering) and from pile-up (the truly uncorrelated proton-proton collisions that occur in the same time window). Indeed, the recent results from studies at ATLAS~\cite{ATLAS-CONF-2012-065} and CMS~\cite{CMS-PAS-EXO-11-095} indicate that this grooming is effective. We expect that this substructure-based grooming will work just as well for all ECal-based objects. It should be noted that in the context of Higgs physics, the decay to photon-jets is not the only example where the collinearity of the decay products adds complexity to the analysis. Collinearity plays a role for traditional decays of the Higgs boson when it is boosted. In Ref.~\cite{Butterworth:2008iy}, the authors exploited the collinearity of the $b$-quarks in boosted Higgs decays (both quarks in a single jet) to greatly enhance the chances of detecting the $h\rightarrow b\bar{b}$ channel, featuring jet substructure as a mainstream tool (see also Refs.~\cite{Seymour:1993mx, Brooijmans:1077731,Butterworth:2007ke}). The application of jet substructure in Higgs physics has now become a very active area of research, applied both to the SM Higgs~\cite{Plehn:2009rk,Gallicchio:2010dq, Hackstein:2010wk} and to beyond-the-SM Higgs scenarios~\cite{Kribs:2009yh, Kribs:2010hp, Kribs:2010ii,Katz:2010iq, Englert:2011iz,Son:2012mb}. For reviews, more detailed descriptions, and references see Refs.~\cite{Abdesselam:2010pt, Altheimer:2012mn}. The paper is organized as follows: in Sec.~\ref{sec:simplified_models}, we start with a simplified model for photon-jets. We propose a set of benchmark points, where we take different combinations of masses and parameters in the simplified model to produce photon-jets displaying a variety of distinct kinematics. In Sec.~\ref{sec:simulations} we define the details of our simulation. We describe, in detail, how we generate samples of photon-jets (one for each of the benchmark points), QCD-jets, and single photons. We present our analysis in Sec.~\ref{sec:analysis}. We describe all the variables that we use in this work to discriminate photon-jets from QCD-jets from single photons. Then we combine these variables in a multivariate analysis. We train boosted decision trees (BDTs) using the samples of jets and use these to optimize the discriminating power of our analyses. We also show how these BDTs can be used to simultaneously separate photon-jets, photons, and QCD-jets from each other. Our conclusions are presented in Sec.~\ref{sec:conclusion}. \section{\label{sec:simplified_models} Simple Model for Photon-Jets} By definition, photon-jets refer to calorimeter objects consisting of more than one hard photon.
However, such a broad definition presents a challenge since not all photon-jets are the same. They differ in terms of the number of hard constituent photons as well as in the distribution of those photons within the photon-jet. To provide a systematic phenomenological study of photon-jets, we classify these objects in more detail in terms of the production mechanism and consider a broad range of possibilities. We will refer to the various production scenarios as `benchmark' scenarios. We find that a simple model in the spirit of Ref.~\cite{Alves:2011wf} with two new particles is sufficient to characterize these benchmarks. The model includes a small number of interactions, and we can vary the strength of these interactions and the new particle masses in order to generate the benchmark scenarios. In particular, we introduce two scalar fields $n_1$ and $n_2$ of masses $m_1$ and $m_2$, respectively. Without loss of generality, we choose the naming convention such that $m_1 > m_2$. Neither $n_1$ nor $n_2$ carry any SM charges. We use the following interactions to generate photon-jets \begin{equation} \label{eq:simplified-model} \frac{1}{2} \mu_h \: h n_1^2 + \frac{1}{2} \mu_{12} \: n_1 n_2^2 + \left( \frac{\eta_1}{m_1} \: n_1 + \frac{\eta_2}{m_2} \: n_2\right) F^{\mu\nu}F_{\mu\nu} \; , \end{equation} where $\mu_h, \mu_{12}$ are mass parameters, $\eta_1, \eta_2$ are dimensionless coupling constants, and $F_{\mu\nu}$ is the electromagnetic field strength operator. This simple model bears a resemblance to a Higgs portal scenario~\cite{Schabinger:2005ei,Patt:2006fw,Strassler:2006im} because of the $\mu_h$ coupling. In the Higgs portal language, $n_1$ and $n_2$ constitute a `hidden' sector while the coupling $\mu_h$ provides a tunnel to the corresponding `hidden valley'. The electromagnetic couplings (proportional to the $\eta$ parameters) provide ways for the new particles to decay back to SM particles, photons in this case. With respect to Higgs physics, this simple model provides a realistic example where the SM Higgs field decays through the new particles to multiple photons. In the limit $m_1 \ll m_h$, the resultant photons (the decay products of $n_1$) are essentially collinear. In Table~\ref{table:bench} we list the benchmark scenarios (labeled photon-jet study points or PJSPs) that we investigate in this work. All are generated by varying the parameters in Eq.~\eqref{eq:simplified-model}. The symbol $\text{X}$ in Table~\ref{table:bench} denotes that a non-zero value is selected for that parameter, which then determines the decay mode. We have chosen the benchmarks in such a way that the parameters denoted by $\text{X}$ only change the \textit{total} width of the decaying particles. As long as the decays are prompt, the exact values of these parameters are irrelevant to the phenomenological properties of the photon-jets.
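Before examining Table~\ref{table:bench} in detail, it is useful to preview the collimation scale that these parameters imply. The toy Monte Carlo below is our own illustrative check, independent of the simulation chain described later: it samples the isotropic decay $n_1 \rightarrow \gamma\gamma$ for an $n_1$ of energy $m_h/2$ and confirms that the lab-frame opening angle of the photon pair is of order $4 m_1 / m_h$ when $m_1 \ll m_h$. \begin{verbatim}
# Toy check: opening angle of n_1 -> gamma gamma for E(n_1) = m_h/2.
import numpy as np

rng = np.random.default_rng(1)
m_h, m_1 = 120.0, 1.0                      # GeV (cf. PJSP2)
gamma = (m_h / 2) / m_1                    # boost of the n_1
beta = np.sqrt(1.0 - 1.0 / gamma**2)

c = rng.uniform(-1, 1, 100000)             # rest-frame decay angle
s = np.sqrt(1 - c**2)
# boost the back-to-back photons (rest-frame energy m_1/2) along z
th1 = np.arctan2(s, gamma * (c + beta))    # lab polar angle, photon 1
th2 = np.arctan2(s, gamma * (-c + beta))   # lab polar angle, photon 2
opening = th1 + th2                        # photons are azimuthally opposite
print(np.median(opening), 4 * m_1 / m_h)   # both of order 0.03 rad
\end{verbatim}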
\begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Study Points} & $m_1$ & $m_2$ & $\mu_{12}$ & \multirow{2}{*}{$\eta_1$}& \multirow{2}{*}{$\eta_2$} \\ & $\left( \text{GeV} \right)$ & $\left( \text{GeV} \right)$ & $\left( \text{GeV} \right)$ & & \\ \hline \pjsp{1} & $0.5$ & & \multirow{3}{*}{$0$} & \multirow{3}{*}{$ \text{X}$} & \\ \pjsp{2} & $1.0$ & & & & \\ \pjsp{3} & $10.0$ & & & & \\ \hline \pjsp{4} & $2.0$ & $0.5$ & \multirow{5}{*}{$\text{X}$} & \multirow{5}{*}{$0$} & \multirow{5}{*}{$\text{X}$} \\ \cline{1-3} \pjsp{5} & \multirow{2}{*}{$5.0$} & $0.5$ & & &\\ \pjsp{6} & & $1.0$ & & & \\ \cline{1-3} \pjsp{7} & \multirow{2}{*}{$10.0$} & $0.5$& & & \\ \pjsp{8} & & $1.0$ & & & \\ \hline \end{tabular} \caption{\label{table:bench} The study points used in our analysis. For \pjsp{1-3}, $n_2$ does not participate in the decay chain since $\mu_{12} = 0$ and the $m_2$ and $\eta_2$ columns are empty. By $\text{X}$ we denote that a non-zero value is chosen for the parameter, which facilitates prompt decays, but the specific value plays no role. } \end{table} In all these study points we take the Higgs particle to decay to a pair of $n_1$ particles. The small $n_1$ mass ($m_1 \ll m_h$) ensures that the decay products of the $n_1$ are highly collimated. In the Higgs particle rest frame, which is close to the laboratory frame on average, each $n_1$ has momentum $\sim m_h/2$ and the typical angular separation between the $n_1$ decay products is of the order of $4 m_1/m_h $. Note that, given we always consider $m_1 \leq 10~\text{GeV}$, we expect the typical angular separation between the $n_1$ decay products to be $\lesssim 1/3$ (we use $m_h = 120~\text{GeV}$). As long as the angular size of photon-jets is larger than $1/3$, we expect to capture all the decay products of the $n_1$ in each photon-jet for all the benchmark points. For the study points \pjsp{1-3} the mass parameter $\mu_{12}$ is set to zero and $n_1 \rightarrow \gamma \gamma$ is the only possible $n_1$ decay mode. Hence these scenarios are characterized by photon-jets with typically $2$ hard photons per jet, and $n_2$ plays no role in the phenomenology (so no $n_2$ mass or coupling values are included in the table). In these scenarios the Higgs particle cascade decays to four photons ($h \rightarrow n_1 n_1 \rightarrow \gamma \gamma\gamma \gamma$). The precise value of $m_1$ governs the angular separation of the two photons inside the photon-jets. For a very small $m_1$, each photon-jet looks much like a single photon. (Of course, if the Higgs is highly boosted, the decay results in a single photon-jet containing all 4 photons.) For study points \pjsp{4-8} we set $\eta_1$ to zero and $\mu_{12}$ to a non-zero value. In these contrasting scenarios the only $n_1$ decay mode involves the chain $n_1 \rightarrow n_2 n_2 \rightarrow \gamma \gamma \gamma \gamma $. Hence the Higgs decays again to two photon-jets, but now each photon-jet typically contains four photons (the $n_1$ decay products). (In this case, a highly boosted Higgs yields a single photon-jet containing 8 photons.) \section{\label{sec:simulations} Simulation Details} In order to generate samples of photon-jets, we implement the simple model of Eq.~\eqref{eq:simplified-model} in MadGraph~$5$~\cite{Alwall:2011uj}.
For each benchmark point we generate matrix elements corresponding to the process $pp \rightarrow h \rightarrow n_1n_1$ (via gluon fusion) using MadGraph~$5$ with $m_h = 120~\text{GeV}$, which we employ as input to Pythia~$8.1$~\cite{Sjostrand:2006za,Sjostrand:2007gs} in order to generate the full events, including the subsequent $n_1$ decays. Since the Higgs production is evaluated at lowest order, the produced Higgs particles have zero transverse momentum. We use the QCD dijet events generated by standalone Pythia~$8.1$ to provide a sample of QCD-jets. In order to define a sample of single photons, we also generate $pp \rightarrow h \rightarrow \gamma \gamma$ events where the photons are well separated. Finally, we include initial state radiation (ISR), final state radiation (FSR) and multiple parton interactions (MI, i.e., the UE) as implemented in Pythia~$8.1$ to simulate the relevant busy hadronic collider environment. The Pythia output final states are subjected to our minimal detector simulation. In the following, we briefly describe how we treat the final state particles in each event: \begin{itemize} \item We identify all charged particles with transverse momentum $p_T > 2~\text{GeV}$ and pseudorapidity $|\eta| < 2.5$ as charged tracks. \item In a real detector, tracks are also generated if photons convert within the pixel part of the tracker. In this work, we simulate this photon conversion process by associating with each photon a probability for it to convert in the tracker.~\footnote{ We do not simulate the magnetic field in the detector. Consequently the $e^+ e^-$ pairs from photon conversion continue in the direction of the photon. So for every converted photon we obtain effectively a single track, if the photon passes the $p_{T}$ threshold.} The probability is a function of the number of radiation lengths of material the photon has to traverse in order to escape the inner part of the tracker. We use the specifications of the ATLAS detector in order to model this pseudorapidity-dependent probability distribution. The details of this procedure are outlined in Appendix~\ref{sec:app-conversion}. \item In our simulation, all particles (except charged particles with $E < 0.1~\text{GeV}$) reach the calorimeters, and all of these (except muons with $p_T > 0.5~\text{GeV}$) deposit all of their energy in the calorimeters. The electromagnetic calorimeter (ECal) is modeled as cells of size $0.025\times0.025$ in ($\eta$-$\phi$), whereas the hadronic calorimeter (HCal) is taken to have coarser granularity with $0.1\times0.1$ cells. Besides photons and electrons, soft muons and soft hadrons (soft means $E < 0.5~\text{GeV}$) are treated as depositing all of their energy in the ECal. More energetic hadrons are absorbed in the HCal, while more energetic muons escape the calorimeter. For a more detailed picture see Appendix~\ref{sec:Calorimeter}. \item We attempt to simulate the showering that occurs within the ECal. We distribute the energy of each particle that is absorbed in the ECal into a $(3\times 3)$ grid of cells (centered on the direction of the original particle) according to a precomputed Moli\`{e}re matrix corresponding to the Moli\`{e}re radius of lead. For details on this transverse smearing see Appendix~\ref{sec:app-Moliere}. The structure induced by this shower simulation is observable in our final results. \item We implement calorimeter energy smearing for both the ECal and the HCal.
The calorimetric response is parametrized through a Gaussian smearing of the accumulated cell energy $E$ with standard deviation $\sigma$: \begin{equation} \label{eq:cal-response} \frac{\sigma}{E} \ = \ \frac{S}{\sqrt{E}} + C \; , \end{equation} where $S$ and $C$ are the stochastic and constant terms. For the ECal and the HCal, we take ($S, C$) to be $(0.1,0.01)$ and $(0.5,0.03)$, respectively, in order to approximately match the reported calorimeter response from ATLAS~\cite{Smearing}. \item Each calorimeter cell that passes an energy threshold becomes an input for our jet clustering algorithm. For the ECal cells we require $E_T > 0.1~\text{GeV}$, while for the HCal cells we use the somewhat harder cut $E_T > 0.5~\text{GeV}$.~\footnote{ The specific values are chosen to mimic the choices for real detectors and the difference between the two accounts for the differing noise levels in calorimeter cells of different sizes.} We sum all the energy deposited in a given calorimeter cell and construct a massless 4-vector with the 3-vector direction corresponding to the location of that cell. \item As the final step we cluster the 4-vectors corresponding to the calorimeter cells into jets using Fastjet~$3.0.3$~\cite{Cacciari:2005hq, Cacciari:2011ma}. In particular, we use the $\text{anti-}k_{T} $ jet clustering algorithm~\cite{Cacciari:2008gp} with $R = 0.4$ and require $p_T > 50~\text{GeV}$ for every jet. Only the leading jet from each event is retained for further analysis in order to maintain independence among the jets in the sample. \end{itemize} \section{\label{sec:analysis} Analysis} In this section we describe the analysis of $10$~samples of jets generated according to the prescription of the previous sections. The first sample contains QCD-jets derived from QCD dijet events. The second sample consists of jets from $h \rightarrow \gamma \gamma$ events where each jet typically contains one of the photons from the Higgs decays, plus contributions from the rest of the event (ISR, FSR, UE). We refer to the jets in this sample as single photon jets, or simply single photons. The remaining $8$ samples of jets are the photon-jet samples and correspond to the $8$ study points in Table~\ref{table:bench}. As noted above, in these events the Higgs particle decays into $4$ or $8$ photons and the corresponding photon-jets typically contain either 2 or 4 photons. The resulting $p_T$ distributions for QCD-jets (red), photon-jets (blue) (\pjsp{8}) and single photons (green) are indicated in Fig.~\ref{fig:pT_dist}. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{pT_spectra.pdf} \vspace{-3mm} \caption{ \label{fig:pT_dist} The $p_T$ distribution of jets for QCD-jets (red), single photons (green), and for photon-jets (blue) from the study point \pjsp{8}. Jets are constructed as described in the text (the $\text{anti-}k_{T} $ algorithm with $R = 0.4$).} \end{figure} As expected, the $p_T$ distribution for QCD-jets is a falling distribution, while both the single photon and photon-jet distributions exhibit a peak near $m_h/2 (= 60~\text{GeV})$. We understand this last point as arising from the production of Higgs particles with zero transverse momentum followed by 2-body decays (either 2 photons or 2 $n_1$'s). It is the remnants of these two bodies that are typically captured in the jets, yielding the indicated peaks near $p_T \sim m_h/2$.
For the photon-jet sample we only show the $p_T$ spectrum for the study point \pjsp{8}, but note that the $p_T$ distributions are almost identical for all other benchmark points. As indicated in Fig.~\ref{fig:pT_dist}, the jets in all of these samples of events have crudely comparable transverse momentum distributions in the range $50-100~\text{GeV}$, although the QCD sample is more strongly peaked at the low end. Thus studying the jets in these samples should provide a useful laboratory in which to study photon-jets, QCD-jets and single photons. The remainder of this section describes a systematic analysis aimed at distinguishing photon-jets from QCD-jets as well as from single photons. We begin with brief descriptions of the variables that provide the discriminating power. The variables are organized into two groups: $(i)$ conventional variables and $(ii)$ substructure variables. We demonstrate how each of these variables individually discriminates photon-jets from the other jet samples. Later in this section, we combine these variables in a multivariate analysis in order to maximize the separation of photon-jets from QCD-jets as well as from single photons. \subsection{Conventional Variables} The conventional variables we describe below are well known, well understood, and play essential roles in the identification of single photons, i.e., the separation from QCD-jets. We expect these variables to play a similar role in separating photon-jets from QCD-jets, since the probability distributions as functions of these variables are similar for photon-jets and for single photons. On the other hand, they cannot be expected to efficiently discriminate photon-jets from single photons. \subsubsection{\label{subsec:theta} Hadronic Energy Fraction, $\theta_J$} We define the hadronic energy fraction $\theta_J$ for a jet to be the fraction of its energy deposited in the hadronic calorimeter: \begin{equation} \theta_J \ = \ \frac{1} {E_J } \sum_{i \in \text{HCal} \cap J} E_{i} \end{equation} where $E_{J}$ is the total energy of the jet, and $E_{i}$ is the energy of the $i$-th HCal cell that is a constituent of the jet. This is the most powerful variable for discriminating a single photon or a photon-jet (objects that deposit most of their energy in the ECal) from QCD-jets. Since a QCD-jet typically contains 2/3 charged pions and 1/3 neutral pions, we expect to see a peak at $\theta_J \sim 2/3$ ($\log \theta_J \sim -0.2$) for QCD-jets. Isolated single photons and photon-jets, on the other hand, should exhibit very small $\theta_J$ values. However, we start with objects identified by a jet algorithm so there will be contributions from the rest of the event and pile-up, and from leakage from the ECal into the HCal. Thus the precise value of $\theta_J$ for single photons and photon-jets will depend on detailed detector properties and on the contribution from the underlying event and pile-up. Nevertheless, we expect single photons/photon-jets to exhibit very small values of $\theta_J$. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{Comparison_Theta.pdf} \caption{\label{fig:hadfrac} The probability distributions for jets as functions of $\log \theta_J$ for QCD-jets (red), single photons (green) and photon-jets from \pjsp{8} (blue).
The first bin of the plot (at $\theta_J = 10^{-3}$) has an open lower boundary, i.e., it includes all jets with $\log \theta_J < -3.0$.} \end{figure} Figure~\ref{fig:hadfrac} shows the probability distribution versus $\log \theta_J$ for QCD-jets (red), single photons (green), and for photon-jets (blue) in our simulated data. For the photon-jets we only show the study point \pjsp{8}, since the distribution is essentially identical for the other benchmark points. As expected, the QCD-jet distribution peaks near $\log \theta_J = -0.2$ ($\theta_J = 2/3$), while the single photon and photon-jet distributions are very similar with a peak near $\log \theta_J = -1.9$ and an implied tail to very small $\theta_J$ values. The clear separation of the single photon/photon-jet distributions from the QCD-jet distribution indicates why this variable plays such an important role in the separation of QCD-jets from photons. Any reasonable cut on $\theta_J$ ($\theta_J \sim 0.1$) will reduce the QCD-jet contribution by factors of $10^{-2}$--$10^{-3}$, while barely changing the photon/photon-jet contribution. We impose a preliminary cut by keeping only jets with $\theta_J \leq 0.25$ ($\log \theta_J \leq -0.6$). About $2\%$ of the original QCD-jets survive this cut, while approximately $94\%$ of the single photons/photon-jets survive. We use the modified jet samples that pass this preliminary $\theta_J$ cut for the remainder of this paper. \subsubsection{\label{subsec:nu} Number of Charged Tracks, $\nu_J$} In conventional collider phenomenology, the number of charged particles (tracks) associated with an object is often used to distinguish objects from each other. Although photons and electrons generate similar signatures in the ECal, the latter are typically associated with a track while the former are not. Tracks also play an important role in rejecting QCD-jets since, as mentioned before, a QCD-jet typically contains several charged pions. In our simulated data we keep all charged particles with $p_T > 2~\text{GeV}$ and assume that all of these correspond to tracks in a real detector. In order to associate these tracks with the jets, which are constructed entirely from calorimeter cells, we perform the following analysis. First, we replace each track by an arbitrarily soft light-like four-vector with the same ($\eta$-$\phi$) direction as the track, and then include these soft four-vectors in the jet clustering process along with the calorimeter cells. (We explicitly check that the inclusion of these soft four-vectors does not affect the outcome of the clustering procedure.) A track is associated with a jet if the soft four-vector corresponding to that track is clustered into that jet% ~\footnote{As a check we also consider the more traditional construction where a track is associated with a jet if it is within an angular distance $R$ or less from the given jet's direction, where $R$ is the size-parameter used in the clustering algorithm. For $\text{anti-}k_{T} $ jets both methods yield identical associations of tracks and jets. For the $k_T$ or $\text{C/A}$ algorithms, where jets are not exactly circular, the method described in the text is a more natural definition of whether a track is associated with a jet or not.}. The resulting total number of tracks associated with a jet yields the value of $\nu_J$ for that jet.
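As a concrete illustration of how $\nu_J$ is counted, the sketch below implements the simpler $\Delta R$-based association mentioned in the footnote, which for $\text{anti-}k_{T} $ jets coincides with the ghost-clustering association actually used in the text; the input data layout is an assumption of this sketch. \begin{verbatim}
# Sketch of nu_J via the Delta R criterion (equivalent, for
# anti-kT jets, to clustering soft track "ghosts"); the input
# format of (eta, phi, pt) tuples is an assumption of this sketch.
import numpy as np

def n_tracks(jet_eta, jet_phi, tracks, R=0.4, pt_min=2.0):
    nu = 0
    for eta, phi, pt in tracks:
        if pt <= pt_min:        # track selection used in the text
            continue
        dphi = np.arctan2(np.sin(phi - jet_phi),
                          np.cos(phi - jet_phi))
        if np.hypot(eta - jet_eta, dphi) < R:
            nu += 1
    return nu
\end{verbatim}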
\begin{figure}[h] \includegraphics[width=0.45\textwidth]{Comparison_Nu.pdf} \caption{\label{fig:chtrk} The relative probability distribution for QCD-jets (red), single photons (green) and photon-jets (blue) versus the number of charged tracks associated with a jet. The algorithm for associating tracks with jets is given in the text. For photon-jets we show the distribution for jets from the study points \pjsp{1} (dotted) and \pjsp{8} (solid). } \end{figure} Figure~\ref{fig:chtrk} shows the relative probability distribution versus the number of tracks per jet ($\nu_J$) for QCD-jets (red), single photons (green) and photon-jets (blue). As expected, the number of tracks associated with QCD-jets varies over a broad range and only a tiny fraction of QCD-jets have no associated tracks. The single photon/photon-jet samples, on the other hand, are dominated by jets with no associated tracks. Photons that convert yield tracks associated with the corresponding jets. Since the probability of conversion increases with the number of photons per jet, the probability of obtaining one or more associated tracks increases from single photon jets (single photons) to jets with two photons (typical for \pjsp{1}, the dotted blue curve) to jets with four photons (typical for \pjsp{8}, the solid blue curve). As with the variable $\theta_J$, $\nu_J$ offers some separation between QCD-jets and single photons, but much less between single photons and photon-jets (and even less between the different types of photon-jets). \subsection{\label{sec:substructure } Jet Substructure} Next we want to focus on variables that explicitly characterize the internal structure of jets, i.e., characterize the energetic subjet components of the jet. Recall that in this analysis we have identified jets using the $\text{anti-}k_{T} $ jet algorithm with $R=0.4$, but we do not expect the general features of our analysis to depend on this specific choice. The next step is to determine a `recombination tree' for the jets we want to study (here the leading jet in each event). To this end we apply the $k_T$ algorithm~\cite{Catani:1993hr, Ellis:1993tq} to the calorimeter cells identified as constituents of the jet in the first step. (We could as well use the Cambridge/Aachen ($\text{C/A}$) algorithm~\cite{Dokshitzer:1997in, Wobisch:1998wt, Wobisch:2000dk}, but not the $\text{anti-}k_{T} $ algorithm in this step, as $\text{anti-}k_{T} $ does not tend to produce a physically relevant recombination tree.) This recombination tree specifies the subjets at each level of recombination $N$, from $N=1$ (the full jet) to $N=$ the number of constituent calorimeter cells in the jet (no recombination). At the next step, the subjet variables we study fall into two classes. In the first class we attempt to count the effective number of relevant subjets without using any properties of the subjets in the tree except their directions in $\eta$-$\phi$. In this case the useful variable (defined in detail below) is called $N$-subjettiness. The $N$-subjettiness variable for a given jet becomes numerically small when the parameter $N$ is large enough to describe all of the relevant substructure, i.e., this value of $N$ provides a measure of the number of subjets without explicitly identifying the subjets. $N$-subjettiness involves \textit{all} components of the original jet for all values of $N$. The rest of the substructure variables we study more explicitly resolve a jet into a set of subjets.
We must specify both the level of the recombination tree at which we choose to work, i.e., the number of subjets into which we have split the jet, and how many of these subjets to use in the subsequent analysis. We use $N_\text{pre-filter}$ (this notation should become clear shortly) and $N_\text{hard}$ to label these two parameters. Thus we start with the 4-vectors corresponding to the (calorimeter cell) constituents of a given jet, and then (re)cluster these constituents using the chosen subjet algorithm (which is not necessarily the algorithm used to originally identify the jet) in \textit{exclusive} mode, i.e., we continue (re)clustering until there are precisely $N_\text{pre-filter}$ 4-vectors left -- the $N_\text{pre-filter}$ exclusive subjets. Out of these $N_{\text{pre-filter}}$ subjets we pick the $N_\text{hard}$ largest $p_T$ subjets and discard the rest. All the substructure variables discussed below (except $N$-subjettiness) are constructed using these $N_\text{hard}$ subjets. Note that by choosing $N_{\text{pre-filter}} > N_\text{hard}$, we have performed a version of jet `grooming' typically labeled filtering~\cite{Butterworth:2008iy, Butterworth:2008sd, Butterworth:2008tr}. This will ensure that our results are relatively insensitive to the effects of the underlying event and pile-up. Ideally, the integers $(N_\text{hard}, N_{\text{pre-filter}})$ should be chosen based on the topology of the object we are looking for. However, the naive topology will be influenced by the interaction with the detector and the details of the jet clustering algorithm. For example, a $4$-photon photon-jet will often appear in the detector to have fewer than $4$ distinct lobes of energy, i.e., one or more photons often merge inside a single lobe of energy. In our simulation, we find that the choice $N_\text{hard} = 3$ and $N_\text{pre-filter} = 5$ is an acceptable compromise, working reasonably well for single photons and photon-jets from all the study points. Further optimization will be possible in the context of real detectors and searches for specific photon-jet scenarios. \subsubsection{\label{subsec:tau} $N$-Subjettiness, $\tau_N$} \begin{figure*}[t] \centering \subfloat[]{\includegraphics[width=0.25\textwidth]{Comparison_Tau1.pdf}} \hspace{-1mm} \subfloat[]{\includegraphics[width=0.25\textwidth]{Comparison_Tau21.pdf}} \hspace{-1mm} \subfloat[]{\includegraphics[width=0.25\textwidth]{Comparison_Tau32.pdf}} \hspace{-1mm} \subfloat[]{\includegraphics[width=0.25\textwidth] {Comparison_Tau43.pdf}} \vspace{-5 mm} \caption{\label{fig:tau} Probability distributions versus various $N$-subjettiness variables. The solid red and green curves show, as usual, the distributions for QCD-jets and single photons respectively. The various blue curves are for photon-jets from different study points. The solid, dashed, dotted and dash-dotted curves in all these figures are for \pjsp{8}, \pjsp{4}, \pjsp{1} and \pjsp{3} respectively.} \end{figure*} ``$N$-subjettiness", introduced in Refs.~\cite{Thaler:2010tr, Thaler:2011gf}, is a modified version of ``$N$-jettiness" from Ref.~\cite{Stewart:2010tn}. It is adapted such that it becomes a property of a jet rather than of an event. $N$-subjettiness provides a simple way to effectively count the number of subjets inside a given jet. It captures whether the energy flow inside a jet deviates from the one-lobe configuration expected to characterize a typical QCD-jet. We use the definition of $N$-subjettiness proposed in Ref.~\cite{Thaler:2010tr}.
The starting point is a jet, the full set of 4-vectors corresponding to the (calorimeter cell) constituents of the jet (here found with the $\text{anti-}k_{T} $ algorithm for $R=0.4$), and the recombination tree found with the $k_T$ algorithm as outlined above. From this tree we know the 4-vectors describing the exclusive subjets for any level $N$, i.e., the level where there are exactly $N$ subjets. With this information we can define $N$-subjettiness to be \begin{equation} \label{eq:tau} \tau_N = \frac{ \sum_k p_{T_{k}} \times \min \bigl \{ \Delta R_{1,k}, \Delta R_{2,k} , \cdots, \Delta R_{N,k} \bigr \} } { \sum_k p_{T_{k}} \times R} \; , \end{equation} where $k$ runs over all the (calorimeter cell) constituents of the jet, $p_{T_{k}}$ is the transverse momentum of the $k$-th constituent, $\Delta R_{l,k}=\sqrt{(\Delta \eta_{l,k})^2+ (\Delta \phi_{l,k})^2} $ is the angular distance between the $l$-th subjet (at the level when there are $N$ subjets) and the $k$-th constituent of the jet, and $R$ is the characteristic jet radius used in the original jet clustering algorithm. In the context of single photons, photon-jets and QCD-jets, we use $N$-subjettiness in two different ways. The first application is to use the \textit{ratios} $\tau_{N+1}/ \tau_{N}$ in the same way $N$-subjettiness is used to tag boosted massive particles such as a $W$~boson or a hadronically decaying top~\cite{Thaler:2010tr, Thaler:2011gf}. In particular, for a jet with $N_0$ distinct lobes of energy, $\tau_{N_{0} } $ is expected to be much smaller than $\tau_{N_{0} -1} $ (of course, we are assuming $N_0 > 1$), whereas for $N > N_0$, $\tau_{N+1}$ is expected to be comparable to $\tau_{N}$. Thus a two-photon photon-jet is expected to be characterized by $\tau_2/\tau_1 \ll 1$. On the other hand, one-lobed QCD-jets and single photons should exhibit comparable values for $\tau_2$ and $\tau_1$, and consequently $\tau_2/\tau_1 \sim 1$. The second way in which we use $N$-subjettiness consists of using the magnitude of $\tau_1$ itself. Even for a jet with one lobe of energy, the exact magnitude of $\tau_1$ represents a measure of how widely the energy is spread. A pencil-like energy profile, like that of a single photon or a narrow photon-jet, should yield a much smaller $\tau_1$ compared to QCD-jets with a much broader profile. In fact, $\tau_1$ is an indicator of jet mass, and, for jets with identical energy, $\tau_1$ is proportional to the square of the jet mass. Figure~\ref{fig:tau} shows the probability distributions versus $\log \tau_1$ and $\tau_{N+1}/ \tau_{N}$ for $N = 1,2,3$ corresponding to single photons, QCD-jets and photon-jets from different study points. Note that for photon-jets, the jet mass is almost always given by the mass parameter $m_1$ in Table~\ref{table:bench}. Thus for \pjsp{8} and \pjsp{3}, where $m_1$ has the same value, the probability distributions versus $\log \tau_1$ are almost identical. For study points \pjsp{8}, \pjsp{4} and \pjsp{1} the peak in $\log \tau_1$ shifts to the left as the value of $m_1$ decreases (from $10~\text{GeV}$ to $2~\text{GeV}$ to $0.5~\text{GeV}$). Note also that the \pjsp{1} and \pjsp{3} distributions exhibit a small $\tau_1$ (small mass) enhancement at essentially the same $\tau_1$ value as the primary peak in the single photon (green curve) distribution. This presumably corresponds to those kinematic configurations where only one of the (two) photons from the $n_1$ decay is included in the jet.
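To make the definitions above concrete, the following self-contained Python sketch implements a naive $O(N^3)$ exclusive-$k_T$ (re)clustering of the constituents of a single jet and evaluates Eq.~\eqref{eq:tau}. It is an illustration only: beam distances are ignored (harmless for constituents already inside one $R=0.4$ jet), the E-scheme is used for recombination, and the toy four-vectors at the end are invented for the example.
\begin{verbatim}
import math
from itertools import combinations

def four_vec(pt, eta, phi):
    """Massless four-vector (E, px, py, pz) from (pt, eta, phi)."""
    return (pt * math.cosh(eta), pt * math.cos(phi),
            pt * math.sin(phi), pt * math.sinh(eta))

def pt_of(p):
    return math.hypot(p[1], p[2])

def delta_R(p, q):
    """Angular distance in (eta, phi) between two four-vectors."""
    eta1, eta2 = math.asinh(p[3] / pt_of(p)), math.asinh(q[3] / pt_of(q))
    phi1, phi2 = math.atan2(p[2], p[1]), math.atan2(q[2], q[1])
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
    return math.hypot(eta1 - eta2, dphi)

def exclusive_kt(parts, n, R=0.4):
    """Naive exclusive-kT clustering: repeatedly merge the pair with
    the smallest d_ij = min(pTi, pTj)^2 dR^2 / R^2 until exactly n
    protojets remain."""
    jets = list(parts)
    while len(jets) > n:
        def dij(pair):
            i, j = pair
            return (min(pt_of(jets[i]), pt_of(jets[j])) ** 2
                    * delta_R(jets[i], jets[j]) ** 2 / R ** 2)
        i, j = min(combinations(range(len(jets)), 2), key=dij)
        merged = tuple(a + b for a, b in zip(jets[i], jets[j]))
        jets = [p for k, p in enumerate(jets) if k not in (i, j)]
        jets.append(merged)
    return jets

def tau_N(constituents, N, R=0.4):
    """N-subjettiness of Eq. (tau): pT-weighted distance of every
    constituent to its nearest exclusive-subjet axis, normalized."""
    axes = exclusive_kt(constituents, N, R)
    num = sum(pt_of(c) * min(delta_R(c, a) for a in axes)
              for c in constituents)
    return num / (sum(pt_of(c) for c in constituents) * R)

# Toy jet with two tight prongs: expect tau_2/tau_1 << 1.
jet = [four_vec(50.0, 0.00, 0.00), four_vec(10.0, 0.05, 0.00),
       four_vec(35.0, 0.25, 0.30), four_vec(5.0, 0.30, 0.30)]
print(tau_N(jet, 2) / tau_N(jet, 1))
\end{verbatim}
For the toy two-prong jet the printed ratio $\tau_2/\tau_1$ comes out small, as expected for a jet with two distinct lobes of energy.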
In such configurations we expect that a (small) fraction of the time these scenarios will look very single-photon-like. Clearly the ratio $\tau_2/\tau_1$ gives significant separation for the different photon-jet scenarios. The study points \pjsp{8} and \pjsp{3} are now separated, although both exhibit peaks at small values of the ratio. This suggests an intrinsic 2-lobe structure corresponding to 2 photons for \pjsp{3} and 4 photons in two relatively tight pairs ($m_2 \ll m_1$) for \pjsp{8}. \pjsp{4}, with presumably a more distinctive 4-photon structure, exhibits a broader peak at a larger value of $\tau_2/\tau_1$. Single photons and \pjsp{1} exhibit even broader distributions, presumably corresponding to an intrinsically 1-lobe structure. The QCD-jet distribution is also broad but with an enhancement around $\tau_2/\tau_1 = 0.8$, presumably arising from a typical 1-lobe structure with some contribution from showers with more structure and from the underlying event. The ratios $\tau_3/\tau_2$ or $\tau_4/\tau_3$ seem to be less effective in discriminating photon-jets from single photons and QCD-jets. This can be understood by noting that quite often the hard photons inside a photon-jet become collinear at the scale of the size of the cell. So even for photon-jets with $4$ hard photons, we rarely find jets with $4$ distinct centers of energy. In general we expect that the ratio $\tau_{N+1}/ \tau_{N}$ becomes less and less useful with increasing $N$. Note that the distributions for single photons and photon-like photon-jets tend to exhibit a double-peak structure in $\tau_3/\tau_2$ or $\tau_4/\tau_3$. We believe that this feature arises from both the contributions of the underlying event and our implementation of transverse smearing in the ECal (see Appendix~\ref{sec:app-Moliere}). \subsubsection{\label{subsec:lambda} Transverse Momentum of the Leading Subjet} Now we proceed to discuss the second class of subjet variables, constructed from the 3 hardest subjets out of the 5 exclusive subjets. The first such variable is the fraction of the jet's total transverse momentum carried by the leading subjet. Since photon-jets result from the decay of massive particles into hard and often widely separated photons inside the jet, the subjets are usually of comparable hardness. The leading subjet for single photons and for QCD-jets, on the other hand, typically carries nearly the entire $p_{T}$ of the jet. So for the majority of these jets, the $p_{T}$ of the leading subjet (label it $p_{T_{L}}$) is of the order of the $p_{T}$ of the entire jet ($p_{T_{J}}$). Instead of using the ratio $p_{T_{L}} / p_{T_{J}}$ directly we find that it is more instructive to define the variable \begin{equation} \label{eq:lambda} \lambda_J \ = \ \log \Bigl(1- \frac{p_{T_{L}}} {p_{T_{J}}} \Bigr) \; . \end{equation} The advantage of using the definition in Eq.~\eqref{eq:lambda} is that it focuses on the behavior near $p_{T_{L}} \sim p_{T_{J}}$. The discussion above depends crucially on how the subjets are constructed, especially for QCD-jets. QCD partons typically shower into many soft partons/hadrons. After showering and hadronization, single hard partons yield many soft hadrons distributed throughout the jet. The way in which these jets are clustered into subjets dictates the $p_T$ distribution of subjets.
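In code, Eq.~\eqref{eq:lambda} is a one-liner once the subjets are available, e.g., from an exclusive clustering like the sketch above. The base-10 logarithm and the numerical floor are our assumptions, the former chosen to match the scales in Fig.~\ref{fig:lambda}:
\begin{verbatim}
import math

def lambda_J(subjet_pts, jet_pt):
    # Eq. (lambda): log10(1 - pT_leading / pT_jet).
    # The small floor avoids log10(0) when the leading subjet
    # carries (numerically) all of the jet pT.
    pt_lead = max(subjet_pts)
    return math.log10(max(1.0 - pt_lead / jet_pt, 1e-6))
\end{verbatim}
Which subjets are fed into this function clearly matters, as the following comparisons illustrate.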
For example, for $\text{anti-}k_{T} $ subjets, the hardest subjet will always have $p_{T_{L}} \simeq p_{T_{J}}$. The $k_T$ algorithm, on the other hand, clusters the softer elements first and results in more evenly distributed subjets. The $\text{C/A}$ jet algorithm clusters taking into consideration only the geometric separations of the elements, and produces qualitatively different results. Single photons, on the other hand, shower very little (no QCD shower) and deposit energy in only a handful of cells (per hard photon). Therefore we expect that our results for single photons or photon-jets will be less sensitive to the details of the clustering algorithm. To verify this point we use both $k_T$ and $\text{C/A}$ subjets to evaluate $\lambda_J$ from Eq.~\eqref{eq:lambda}. The simultaneous use of different clustering algorithms to extract information from the same jet should not come as a surprise. As shown in Ref.~\cite{Ellis:2012sn}, substantial further information can be extracted if one employs a broad sampling out of \textit{all} of the physically sensible clustering histories (trees) for a given jet. In this sense the current analysis is modest in that we only use two specific clustering procedures. \begin{figure}[h] \centering \subfloat[]{\includegraphics[width=0.24\textwidth]{Comparison_Lambda_CA.pdf}} \hspace{-1mm} \subfloat[]{\includegraphics[width=0.24\textwidth]{Comparison_Lambda_KT.pdf}} \vspace{-5 mm} \caption{\label{fig:lambda} Probability distribution for $\lambda_J$ from Eq.~\eqref{eq:lambda}. As in Fig.~\ref{fig:tau} the solid red is for QCD-jets, the solid green for single photons, dotted blue for \pjsp{1}, dash-dotted blue for \pjsp{3}, dashed blue for \pjsp{4} and solid blue for \pjsp{8}. The left (right) figure shows the distribution when $\lambda_J$ is calculated using $\text{C/A}$ ($k_T$) subjets.} \end{figure} In Fig.~\ref{fig:lambda} we plot the probability distribution of jets as a function of $\lambda_J$ for QCD-jets, single photons, and photon-jets. The left (right) panel shows the distribution when we use the $\text{C/A}$ ($k_T$) algorithm to find the subjets. Note how the distribution for QCD-jets (the red curve) moves more to the right (i.e., the $p_{T}$ of the jet gets more evenly distributed among its subjets) as we go from $\text{C/A}$ subjets to $k_T$ subjets. The various photon-jet study points also look more similar when using the $k_T$ algorithm. In this case the \pjsp{1} and \pjsp{3} distributions exhibit enhancements suggesting the presence of both single photon-like behavior ($\lambda_J \sim -1.2$) and QCD-like behavior ($\lambda_J \sim -0.2$ to $-0.3$). The more complex structure of the \pjsp{4} and \pjsp{8} jets yields distributions closer to the QCD-like behavior alone. Finally note that the $\text{C/A}$ subjets display the jet substructure information differently from the $k_T$ case, with the peak in the QCD-jet distribution at least somewhat separated from the peaks in the photon-jet distributions. Also for $\text{C/A}$ all of the photon-jet scenarios exhibit at least a little single photon-like enhancement (for $k_T$ this is only true for \pjsp{1} and \pjsp{3}). There is clearly some discrimination to be gained from using more than one definition of the subjets. \subsubsection{\label{subsec:epsilon} Energy-Energy Correlation, $\epsilon_J$ } Another useful variable is the ``energy-energy correlation".
We define it as: \begin{equation} \label{eq:epsilon} \epsilon_J \ = \ \frac{1}{E_J^2} \sum_{(i > j) \in N_\text{hard} } E_{i} E_{j}\,, \end{equation} where $E_J$ is the total energy of a given jet, and the indices $i,j$ run over the (3 hardest) subjets of the jet. From the definition, it should be clear that $\epsilon_J$ is sensitive to the energies of the subleading subjets. In particular, the energy-energy correlation can be expressed as \begin{eqnarray} \epsilon_J &=& \frac{ E_{L}\left(E_{NL}+ E_{NNL} \right) + E_{NL} E_{NNL}}{E_J^2}\nonumber\\ &\approx& \frac{ E_{L}\left(E_{J}- E_{L} \right) + E_{NL} E_{NNL}}{E_J^2} \,, \end{eqnarray} where $E_{L}$, $E_{NL}$ and $E_{NNL}$ are the energies of the leading subjet, the next-to-leading subjet, and the next-to-next-to-leading subjet, respectively. \begin{figure}[h] \centering \subfloat[]{\includegraphics[width=0.24\textwidth]{Comparison_Epsilon_CA.pdf}} \hspace{-1mm} \subfloat[]{\includegraphics[width=0.24\textwidth]{Comparison_Epsilon_KT.pdf}} \vspace{-5 mm} \caption{\label{fig:epsilon} Probability distribution versus $\epsilon_J$ from Eq.~\eqref{eq:epsilon}. As in Fig.~\ref{fig:tau} the solid red is for QCD-jets, the solid green for single photons, dotted blue for \pjsp{1}, dash-dotted blue for \pjsp{3}, dashed blue for \pjsp{4} and solid blue for \pjsp{8}. The left (right) figure shows the distribution when $\epsilon_J$ is evaluated using $\text{C/A}$ ($k_T$) subjets.} \end{figure} We show the probability distribution of jets as a function of $\epsilon_J$ for QCD-jets, single photons and photon-jets in Fig.~\ref{fig:epsilon}. Note that for single photons (the green curve), $E_{NL}$ and $E_{NNL}$ are negligible and hence we expect $\epsilon_J$ for single photons to be well approximated by $E_{L}\left(E_J- E_{L} \right) /E_J^2 $. In fact, the sharp peak for single photons in Fig.~\ref{fig:lambda} at $-1.2$ ($k_T$ algorithm) corresponds to the sharp peak at about $0.04$ in Fig.~\ref{fig:epsilon}. More generally the qualitative features in Fig.~\ref{fig:lambda} are repeated in Fig.~\ref{fig:epsilon}. For $\text{C/A}$ subjets the distributions for all of the photon-jet study points exhibit two peaks, the large $\epsilon_J$ value enhancement presumably corresponding to the energy being shared approximately equally among several final photons, while the small value enhancement arises from the case when one photon dominates (perhaps because some of the photons are not in the jet). For $k_T$ subjets only the \pjsp{1} and \pjsp{3} distributions exhibit the small $\epsilon_J$ single photon-like enhancement. We also see that again the two algorithms yield distinctly different distributions for QCD-jets. \subsubsection{\label{subsec:rho} Subjet Spread, $\rho_J$} We define ``subjet spread" as a measure of the geometric distribution of the subjets: \begin{equation} \label{eq:rho} \rho_J = \frac{1}{R} \sum_{(i > j) \in N_\text{hard}} \Delta R_{i,j} \; , \end{equation} where $ \Delta R_{i,j}$ is the angular distance between the $i$-th and $j$-th (hard) subjets, and $R$ is the size parameter of the jet algorithm. \begin{figure}[h] \subfloat[]{\includegraphics[width=0.24\textwidth]{Comparison_Rho_CA.pdf}} \hspace{-1.mm} \subfloat[]{\includegraphics[width=0.24\textwidth]{Comparison_Rho_KT.pdf}} \vspace{-5 mm} \caption{\label{fig:rho} Probability distribution for subjet-spread $\rho_J$ from Eq.~\eqref{eq:rho}.
As in Fig.~\ref{fig:tau} the solid red is for QCD-jets, the solid green for single photons, dotted blue for \pjsp{1}, dash-dotted blue for \pjsp{3}, dashed blue for \pjsp{4} and solid blue for \pjsp{8}. The left (right) figure shows the distribution when $\rho_J$ is calculated using $\text{C/A}$ ($k_T$) subjets.} \end{figure} The left (right) panel of Fig.~\ref{fig:rho} shows the probability distribution of jets as a function of $\rho_J$ for QCD-jets, single photons and photon-jets when the $\text{C/A}$ ($k_T$) subjets are used to evaluate Eq.~\eqref{eq:rho}. For this variable only the QCD-jet distribution changes dramatically when changing the choice of subjet algorithm from $\text{C/A}$ to $k_T$. By using both algorithms this feature will provide some ability to discriminate between QCD-jets and single photons or photon-jets. For the single photon case the strong peak at small $\rho_J$ confirms that all of the subjets are close to each other, forming a hard core. Subjet spread is quite sensitive to the mass $m_1$, as can be seen from the different photon-jet distributions. In particular, the position of the peaks for photon-jets with different $m_1$ simply follows the $m_1$ value. The \pjsp{3} and \pjsp{8} distributions are nearly the same (with the same $m_1$ value), while the \pjsp{1} and \pjsp{4} distributions are similar (with somewhat different $m_1$ values), but distinct from \pjsp{3} and \pjsp{8}. The $m_1$ dependence is not surprising since the opening angle between the decay products of the $n_1$ particle depends on $m_1$. Finally we note that the \pjsp{3} and \pjsp{8} distributions do have an enhancement at small $\rho_J$ values, presumably corresponding to configurations where the extra photons are not captured in the jet. \subsubsection{\label{subsec:delta} Subjet Area of the Jet} As defined in Ref.~\cite{Cacciari:2008gn}, the ``area" associated with a jet is an unambiguous concept that quantifies the amount of surface in the ($\eta$-$\phi$) plane included in a jet. In this analysis, we use the ``active area" definition for the area of the jet. The active area of a jet is calculated by adding a \textit{uniform} background of arbitrarily soft `ghost' particles to the event (so that each ghost represents a fixed area) and then counting the number of ghosts clustered into the given jet. The area of a jet is often used to provide a quantitative understanding of the largely uncorrelated contributions to a jet from the underlying event and pile-up. However, it is rarely used in phenomenology for the purpose of discovering new particles or tagging jets. We use `subjet area' as a measure of the `cleanliness' of the jet, and we show that it can be a useful tool for distinguishing a single photon or a photon-jet from noisier QCD-jets. We define the subjet area fraction as \begin{equation} \label{eq:delta} \delta_J = \frac{1}{A_J} \sum_{i \in N_\text{hard}} A_i \; , \end{equation} where $A_i$ is the area of the $i$-th subjet and $A_J$ is the area of the entire jet. Note that this definition of $\delta_J$ is only useful when the subjets are constructed geometrically by merging the nearest neighbors first (i.e., using the $\text{C/A}$ algorithm). \begin{figure}[h] \includegraphics[width=0.45\textwidth]{Comparison_Delta.pdf} \vspace{-3mm} \caption{\label{fig:delta} Probability distribution versus fractional area $\delta_J$ from Eq.~\eqref{eq:delta}.
As in Fig.~\ref{fig:tau} the solid red is for QCD-jets, the solid green for single photons, dotted blue for \pjsp{1}, dash-dotted blue for \pjsp{3}, dashed blue for \pjsp{4} and solid blue for \pjsp{8}. We use $\text{C/A}$ subjets to calculate $\delta_J$.} \end{figure} In Fig.~\ref{fig:delta}, we show the probability distribution for jets as a function of $\delta_J$ for QCD-jets, single photons, and photon-jets. As expected, the figure shows that single photons (the green curve) are significantly cleaner (exhibit smaller $\delta_J$ values) than QCD-jets (the red curve) and that photon-jets (the blue curves) tend to lie in between. Fixing $m_1$ such that the first splitting is fairly wide, we can investigate the effects of $m_2$. If $m_2$ is small, then the two photons coming from the $n_2$ decays will be very close together, and the subjet that contains them will not collect many ghosts. On the other hand, a large $m_2$ will split the two photons further apart and, if they are still contained in the same subjet, that subjet will collect substantially more ghosts, resulting in a subjet with a larger active area. QCD-jets contain many soft particles and so the subjets in QCD-jets have larger areas. Thus we see that the QCD distribution peaks for $\delta_J$ near $0.5$, while the single photon distribution exhibits both a large peak at small ($\sim 10^{-2}$) $\delta_J$ and a smaller peak at larger ($\sim 0.4$) $\delta_J$ values. The photon-jet cases interpolate between these two behaviors and this variable can clearly provide some discriminating power. \subsection{\label{sec:results} Multivariate Analysis} We have, so far, introduced a set of well-understood variables. In this subsection, we will employ these variables in a multivariate discriminant, specifically in a Boosted Decision Tree (BDT)~\cite{BDT}. A decision tree is a hierarchical set of one-sided cuts used to discriminate signal versus background. The `boosting' of a decision tree extends this concept from one tree to several trees which form a forest. The trees are derived from the same training ensemble by reweighting events, and are finally combined into a single classifier. In the current discussion we are treating photon-jets as the signal and both single photons and QCD-jets as background. We construct multiple BDT analyses in order to estimate how well the photon-jets can be separated from single photons \textit{and} from QCD-jets. This will allow us to demonstrate the power of the new jet substructure variables when these are combined with the conventional variables. In practice, we employ the Toolkit for Multivariate Analysis (TMVA)~\cite{Hocker:2007ht} package and use the ``BDTD" option to book BDTs, where the input variables are decorrelated first. For every study point in Table~\ref{table:bench} we optimize two separate BDTs, one for discriminating photon-jets from QCD-jets and the other for separating photon-jets from single photons. We make use of all the variables discussed earlier in order to minimize the background fake rate ($\mathcal{F} =$ the fraction of the background jets that pass the cuts) for a given signal acceptance rate ($\mathcal{A} = $ the fraction of the signal jets that pass the cuts). For demonstration purposes we also consider BDTs made with a subset of the full set of variables.
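Before turning to the variable sets used in the BDTs, we note how the remaining hard-subjet observables might be computed in practice. The following Python sketch is illustrative only: the subjet energies, directions and (ghost-based) active areas are assumed to be supplied by the clustering step, and the simple dictionary format is hypothetical.
\begin{verbatim}
import math
from itertools import combinations

def hardest(subjets, n_hard=3):
    """Keep the n_hard highest-pT of the N_pre-filter exclusive
    subjets; each subjet is a dict with the fields used below."""
    return sorted(subjets, key=lambda s: -s["pt"])[:n_hard]

def epsilon_J(subjets, E_jet):
    """Eq. (epsilon): pairwise energy-energy correlation of the hard
    subjets, normalized by the squared jet energy."""
    return sum(a["E"] * b["E"]
               for a, b in combinations(subjets, 2)) / E_jet ** 2

def rho_J(subjets, R=0.4):
    """Eq. (rho): summed pairwise (eta, phi) distances of the hard
    subjets in units of the jet size parameter R."""
    total = 0.0
    for a, b in combinations(subjets, 2):
        dphi = math.atan2(math.sin(a["phi"] - b["phi"]),
                          math.cos(a["phi"] - b["phi"]))
        total += math.hypot(a["eta"] - b["eta"], dphi)
    return total / R

def delta_J(subjets, jet_area):
    """Eq. (delta): summed (ghost-based) active areas of the C/A
    subjets over the active area of the full jet; the areas are
    assumed to be computed upstream."""
    return sum(s["area"] for s in subjets) / jet_area
\end{verbatim}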
To be specific, we consider three different sets of variables: \begin{align} \label{eq:variable_sets} \begin{split} D \ \equiv \ & \Bigl\{\log \theta_J, \nu_J, \log \tau_1, \frac{\tau_2}{\tau_1}, \frac{\tau_3}{\tau_2}, \frac{\tau_4}{\tau_3}, \\ & \quad \bigl( \lambda_J, \epsilon_J, \rho_J, \delta_J \bigr) \bigl|_{\text{C/A}}, \bigl( \lambda_J, \epsilon_J, \rho_J \bigr) \bigl|_{k_T} \Big\} \end{split} \\ D_{\text{C}} \ \equiv \ & \Big\{\log \theta_J, \nu_J \Big\} \\ \label{eq:variable_sets2} \begin{split} D_{\text{S}} \ \equiv \ & \Bigl\{\log \tau_1, \frac{\tau_2}{\tau_1}, \frac{\tau_3}{\tau_2}, \frac{\tau_4}{\tau_3}, \\ & \quad \bigl( \lambda_J, \epsilon_J, \rho_J, \delta_J \bigr) \bigl|_{\text{C/A}}, \bigl( \lambda_J, \epsilon_J, \rho_J \bigr) \bigl|_{k_T} \Big\} \; , \end{split} \end{align} where the subscripts $\text{C/A}$ or $k_T$ in Eqs.~\eqref{eq:variable_sets} and~\eqref{eq:variable_sets2} indicate that the observables are calculated using $\text{C/A}$ or $k_T$ subjets. The sets $D_{\text{C}} $ and $D_{\text{S}}$ consist of the conventional and the jet substructure variables respectively, whereas $D$ is the set of all variables. In a previous paper \cite{Ellis:Future} we described the more conventional separation of single photons from QCD-jets along with an initial introduction to the separation of single photons from photon-jets. In both cases the single photons were treated as the signal. Here we extend that discussion and focus on the photon-jets as the signal. We organize the results of our analysis into three subsections. First, we show the results of BDTs optimized to discriminate photon-jets from QCD-jets, the analogue of the separation of single photons from QCD-jets. In the following subsection, we repeat the same study, but optimize it for treating single photons as the background to photon-jets. Finally, we demonstrate how the BDTs might be used for an effective three-way separation of single photons from photon-jets from QCD-jets. \subsection{\label{subsec:QCD-PJ} QCD-Jets as Background for Photon-jets} We use all of the variables in the set of discriminants $D$ in the BDTs in order to maximize the extraction of signal jets (photon-jets) from background (QCD-jets). This is similar to the separation of single photons from QCD-jets performed in Ref.~\cite{Ellis:Future}. The BDTs are trained individually for each study point. The results for fake rate versus acceptance are shown in Fig.~\ref{fig:AF_PJvsQCD} for all of the study points. In this plot the lower right is desirable and the upper left is undesirable. Note that the acceptance rate for photon-jets is bounded above by about $0.94$ due to our preselection cut $\theta_J \leq 0.25$ (see Section~\ref{subsec:theta}). The same cut eliminates approximately $98\%$ of the QCD-jets, yielding a fake rate below $10^{-2}$ except at the largest acceptance. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{AF_PJvsQCD.pdf} \caption{\label{fig:AF_PJvsQCD} The background fake rate versus signal acceptance where photon-jets from the different study points are the signal and QCD-jets are the background. All variables in the set of discriminants $D$ are used in the analysis. } \end{figure} For 2-photon photon-jets (study points 1 to 3) the separation becomes easier as $m_1$ increases, yielding increasing separation between the photons inside the jet. The other physics scenarios tend to have even more structure within the photon-jets that the jet substructure variables allow us to use to suppress the QCD background.
The more structure a jet possesses, the easier it becomes to discriminate it from (largely featureless) QCD-jets. The conclusion from Fig.~\ref{fig:AF_PJvsQCD} is that, for photon-jets of varied kinematic features, we can achieve a very small QCD fake rate for a reasonably large acceptance rate. In more detail, for all of our study points a tagging efficiency (acceptance) of $\sim 70\%$ for photon-jets is accompanied by a fake rate for QCD-jets of only 1 in $10^4$ to 1 in $10^5$. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{Imp_PJvsQCD.pdf} \caption{\label{fig:Imp_PJvsQCD} The improvement gained by including the jet substructure variables as discriminants. For a quantitative definition of the improvement see the text.} \end{figure} It is instructive to quantify the improvements made possible by including the jet substructure variables as discriminants. To achieve this comparison we consider BDTs using only the conventional variables (i.e., we use the set $D_{\text{C}}$ of discriminants to train the BDTs). For a given acceptance of signal we thus obtain two different fake rates -- one when we use only the conventional variables (labeled $\mathcal{F}_\text{C}$), and another when we use conventional+jet substructure variables (labeled $\mathcal{F}_\text{C+S}$). For a given acceptance, the ratio $\mathcal{F}_\text{C}/\mathcal{F}_\text{C+S}$ quantifies the improvement due to using jet substructure variables in this analysis. The improvement rates for conventional plus jet substructure variables over only conventional variables versus acceptance for discriminating photon-jets from QCD-jets are shown in Fig.~\ref{fig:Imp_PJvsQCD} for the different study points. While Figs.~\ref{fig:hadfrac} and \ref{fig:chtrk} indicate that the conventional variables provide some discrimination between photon-jets and QCD-jets, Figs.~\ref{fig:tau} to \ref{fig:delta} indicate that the jet substructure variables provide a substantial number of new distinguishing features. Fig.~\ref{fig:Imp_PJvsQCD} shows that these new features in the jet substructure variables can provide substantial improvement. Factors of 4 to 50 improvement in the discrimination of photon-jets from QCD-jets are possible at an acceptance of about $70\%$. As expected, more improvement is possible in those physics scenarios where the photon-jets have more structure. Further, our results demonstrate that the use of jet substructure variables provides a tool to distinguish the different physics scenarios, i.e., the different study points, which is not possible with conventional variables alone. \subsection{\label{subsec:PvsPJ} Single Photons as Background to Photon-Jets} Now consider the same analysis as in the previous section but with single photons treated as the background. This new sort of separation is essential if we want to consider physics scenarios with photon-jets. Again we use all of the variables in the set of discriminants $D$ in the BDTs in order to maximize the extraction of signal jets (photon-jets) from background (single photons). The BDTs are trained individually for each study point. The results for fake rate versus acceptance are shown in Fig.~\ref{fig:AF_PJvsP}. As in Fig.~\ref{fig:AF_PJvsQCD} the lower right is desirable and the upper left is undesirable. Again the acceptance rate for photon-jets is bounded above by about $0.94$ due to our preselection cut $\theta_J \leq 0.25$ (see Section~\ref{subsec:theta}).
For the same reason a similar limit (0.94) holds also for the fake rate from single photons (although this is difficult to see on the logarithmic scale). \begin{figure}[h] \includegraphics[width=0.49\textwidth]{AF_PJvsP.pdf} \caption{\label{fig:AF_PJvsP} The background fake rate versus signal acceptance curves are shown for all study points. Here the photon-jets from the different study points are treated as the signal and single photons are the background. These curves employ all variables in the set of discriminants $D$.} \end{figure} The results in Fig.~\ref{fig:AF_PJvsP} teach us several lessons. A photon-jet from \pjsp{1} consists of a pair of highly collinear photons. Such a jet is quite photon-like and thus difficult to separate from single photons. Hence the corresponding (solid black) curve is most towards the upper left. One needs to cut away almost half of the signal sample ($\mathcal{A} \sim 0.55$) in order to reduce the fake rate to 1 in $10^3$. We also see that it is a challenge to separate the photon-jets for \pjsp{3} from single photons (the solid red curve). In this scenario $m_1 = 10~\text{GeV}$ and the $n_1$ decays directly to two photons. Because of the large $m_1$ value, almost $30\%$ of these ($R=0.4$) jets do not contain both of the photons from the $n_1$ decay, i.e., about $30\%$ of this jet sample are actually single photons (in the jet), and not photon-jets. We saw this point earlier in essentially all of the individual jet substructure variable plots, Figs.~\ref{fig:tau} to \ref{fig:delta}, where the \pjsp{3} distribution exhibited an enhancement that overlapped with the corresponding peak in the single photon distribution. A larger separation of \pjsp{3} from single photons can be obtained at an acceptance just below $0.7$, where these single-photon configurations are cut away and the fake rate drops below 1 in $10^3$. The photon-jets of \pjsp{2} represent a `sweet spot' between \pjsp{1} and \pjsp{3}, where the 2 photons are typically well enough separated to be resolved but close enough to be in the same jet. Thus the \pjsp{2} (solid purple) curve is well below and to the right compared to the \pjsp{1} (solid black) and \pjsp{3} (solid red) curves. Similarly the photon-jets at the other study points can be well separated at even larger acceptance rates using the combination of jet substructure and conventional discriminants. For example, for the study points \pjsp{4} and \pjsp{6}, even at $85\%$ acceptance, one obtains a fake rate \textit{smaller} than $1$ in $10^3$. Again it is instructive to determine the impact of the jet substructure variables for this analysis. As in the previous subsection we consider BDTs using only the conventional variables (i.e., we use the set $D_{\text{C}}$ of discriminants to train the BDTs) to compare to the results from the full set $D$ of variables. We plot the ratio of fake rates at fixed acceptance for these two analyses \begin{figure}[h] \includegraphics[width=0.49\textwidth]{Imp_PJvsP.pdf} \caption{\label{fig:Imp_PJvsP} The improvement gained by including the jet substructure variables as discriminants. For a quantitative definition of the improvement see the text.} \end{figure} in Fig.~\ref{fig:Imp_PJvsP} versus the acceptance.
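For readers who wish to reproduce this kind of comparison, the acceptance/fake-rate curves and the improvement factor can be extracted from any per-jet classifier response. The sketch below, using NumPy and toy Gaussian responses in place of the actual TMVA output, illustrates the bookkeeping rather than our analysis itself.
\begin{verbatim}
import numpy as np

def roc(sig_scores, bkg_scores, n_cuts=201):
    """Acceptance A(t) and fake rate F(t) on a grid of cuts t,
    scanned from tight to loose."""
    lo = min(sig_scores.min(), bkg_scores.min())
    hi = max(sig_scores.max(), bkg_scores.max())
    cuts = np.linspace(hi, lo, n_cuts)
    acc = np.array([(sig_scores >= t).mean() for t in cuts])
    fake = np.array([(bkg_scores >= t).mean() for t in cuts])
    return acc, fake

def improvement(acc_C, fake_C, acc_CS, fake_CS, targets):
    """F_C / F_{C+S} at fixed acceptance, interpolating both
    fake-rate curves onto the requested acceptance values."""
    return (np.interp(targets, acc_C, fake_C)
            / np.interp(targets, acc_CS, fake_CS))

# Toy Gaussian responses standing in for the two trained BDTs.
rng = np.random.default_rng(0)
accC, fakeC = roc(rng.normal(0.5, 0.5, 10**5),
                  rng.normal(-0.5, 0.5, 10**5))
accS, fakeS = roc(rng.normal(1.0, 0.5, 10**5),
                  rng.normal(-1.0, 0.5, 10**5))
print(improvement(accC, fakeC, accS, fakeS, np.array([0.7])))
\end{verbatim}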
A comparison of Fig.~\ref{fig:Imp_PJvsP} and Fig.~\ref{fig:AF_PJvsP} indicates that the bulk of the separation of photon-jets from single photons is provided by the jet substructure variables, i.e., the improvement factor typically differs by less than a factor of 10 from one over the fake rate. Further, the improvement factor ranges from 10 to more than $10^3$ even at acceptances as large as $90\%$ for all physics scenarios except \pjsp{1} and \pjsp{3}. Even in these challenging cases substantial improvement is possible at lower acceptance rates. This is not a surprise since the conventional variables are ineffective at distinguishing between photon-jets and single photons. Recall from Fig.~\ref{fig:hadfrac} that the hadronic energy fraction distributions are nearly identical for photon-jets and single photons. The distribution of the number of charged tracks associated with a jet, shown in Fig.~\ref{fig:chtrk}, also indicates only slight differences, arising from the somewhat different conversion rates for photon-jets versus single photons. So it is clear that, if we want to be able to discriminate between photon-jets and single photons (and we do), the jet substructure variables provide the necessary tool. \subsection{\label{subsec:QCDvsPJvsP} Three-way Separation} Finally we come to the really interesting challenge: the \textit{simultaneous} separation of all three samples: single photons, photon-jets, and QCD-jets. In principle, one could perform three BDT training exercises, separating photon-jets from single photons, separating photon-jets from QCD-jets and separating single photons from QCD-jets, using one of the variable sets of Eqs.~\eqref{eq:variable_sets}--\eqref{eq:variable_sets2} in each case. Then the responses from each of these BDTs for each jet could be used to separate the experimentally identified jets in the corresponding 3-dimensional `physics object' space. In order to illustrate these ideas in a fairly simple analysis, here we will focus on a two-dimensional analysis employing the two BDTs we have been discussing, separating photon-jets from single photons and separating photon-jets from QCD-jets. There are still the related questions of which set of variables to use for each BDT and, in fact, how to characterize the `best separation'.\footnote{With three BDTs and the three BDT response numbers for each jet, the `best separation' presumably corresponds to the three distinct physics objects being sent to three diagonally opposite vertices of the BDT response cube (on an equilateral triangle with side of length $\sqrt{2}$ times the length of the edge of the cube).} Qualitatively at least, we find good 2-dimensional separation for the following definitions of the BDTs. One is trained to separate QCD-jets and photon-jets based only on the conventional discriminants ($D_\text{C}$) and is plotted on the vertical axis in the following plots, while the other BDT is trained to separate photon-jets from single photons with the substructure discriminants ($D_\text{S}$) alone and is plotted along the horizontal axis. We present the results in terms of two-dimensional contour plots where the numerical values associated with a given contour correspond to the relative probability to find a calorimeter object of the given kind (indicated by the color) in a cell of size $0.1 \times 0.1$ in BDT response units. (Note that, by construction, the BDT responses have values in the range $-1$ to $+1$, where $+1$ means `signal-like' and $-1$ means `background-like'.)
The color coding in these figures matches the previous choices: red is for QCD-jets, blue for photon-jets and green for single photons. As a first example, Fig.~\ref{fig:Density_2JPJ} indicates the 2-dimensional distributions resulting from the BDTs for \pjsp{2}, a scenario with typically two photons in the photon-jet with small angular separation due to the small value of $m_1$. When interpreting the following figures it is important to recall that the jet samples indicated in these figures are constrained to satisfy $\theta_J \leq 0.25$, which means that we are only keeping the approximately $2\%$ of QCD-jets that are most `photon-like'. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{Contour_Seperation_PJSP2.pdf} \caption{\label{fig:Density_2JPJ} The BDT responses of QCD-jets (red), single photons (green) and photon-jets (blue) for photon-jets at \pjsp{2}. The $D_S$ variables are used on the horizontal axis and the $D_C$ variables on the vertical axis.} \end{figure} However, Fig.~\ref{fig:Density_2JPJ} indicates a fairly clear separation between the QCD-jets and the true photon objects (little red above $0.0$ in the vertical direction). On the other hand, as we expect from our previous one-dimensional discussions in Subsection~\ref{subsec:PvsPJ}, the blue (photon-jet) contours in the upper-left green (single photon) region indicate that it is a challenge to completely separate (\pjsp{2}) photon-jets from single photons. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{Contour_Seperation_PJSP3.pdf} \caption{\label{fig:Density_3JPJ} The BDT responses of QCD-jets (red), single photons (green) and photon-jets (blue) for photon-jets at \pjsp{3}.} \end{figure} In the case of \pjsp{3} photon-jets, as indicated in Fig.~\ref{fig:Density_3JPJ}, the photon-jet versus single photon separation challenge is even larger, as we have already discussed. Again we have photon-jets with potentially two photons but, due to the relatively large $m_1$ value, one of those photons is sometimes outside of the identified jet. This explains the small region with a solid blue (probability $\sim 0.1$) contour inside the green (single photon) region. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{Contour_Seperation_PJSP4.pdf} \caption{\label{fig:Density_4JPJ} The BDT responses of QCD-jets (red), single photons (green) and photon-jets (blue) for photon-jets at \pjsp{4}.} \end{figure} \begin{figure}[h] \includegraphics[width=0.45\textwidth]{Contour_Seperation_PJSP8.pdf} \caption{\label{fig:Density_8JPJ} The separation of QCD-jets (red), single photons (green) and photon-jets (blue) for photon-jets at \pjsp{8}.} \end{figure} The corresponding results for the more complex (and more easily separated) photon-jets of \pjsp{4} and \pjsp{8}, typically with 4 photons in a photon-jet, are displayed in Figs.~\ref{fig:Density_4JPJ} and \ref{fig:Density_8JPJ}. In these scenarios the three-way photon-jet versus single photon versus QCD-jet separation is fairly cleanly achieved using just the $D_S$ (horizontal) and $D_C$ (vertical) variable sets. At the $0.005$ level there is only a tiny overlap of photon-jets with QCD-jets for \pjsp{4} (near the location ($0.5$,$0.0$) in Fig.~\ref{fig:Density_4JPJ}) and no overlap for \pjsp{8} (Fig.~\ref{fig:Density_8JPJ}).
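The inputs to such contour plots are simple to construct from the two per-jet BDT responses; a minimal NumPy sketch (the binning matches the $0.1 \times 0.1$ cells quoted above) is
\begin{verbatim}
import numpy as np

def response_density(bdt_x, bdt_y):
    """Relative probability per 0.1 x 0.1 cell of two BDT
    responses, each of which lies in [-1, 1] by construction."""
    edges = np.arange(-1.0, 1.0 + 1e-9, 0.1)
    hist, _, _ = np.histogram2d(bdt_x, bdt_y, bins=(edges, edges))
    return hist / hist.sum(), edges
\end{verbatim}
and the contours themselves can then be drawn at the desired relative-probability levels (e.g., $0.005$ and $0.001$) with any standard plotting package.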
\begin{figure}[h] \includegraphics[width=0.45\textwidth]{Contour_Seperation_PJSP3_DCDS.pdf} \caption{\label{fig:Density_3JPJ_ex} The BDT responses of QCD-jets (red), single photons (green) and photon-jets (blue) for photon-jets at \pjsp{3}, including a contour at $0.001$.} \end{figure} Before ending this section we should discuss one other point. From our previous discussion, one would expect to improve the photon-jet versus single photon separation by using the full $D$ set of variables (instead of the $D_S$ variables alone), and this expectation raises one of the interesting, and challenging, features of \textit{simultaneous} separations. Since we are currently training the BDTs so that each BDT separates one type of signal from one type of background, while, at the same time, trying to perform a three-way separation, it can happen that an improvement in one separation corresponds to a degradation in another of the separations. To illustrate this point we first reproduce the results in Fig.~\ref{fig:Density_3JPJ}, but now include a contour at relative probability $0.001$, which we did not include earlier to avoid plots that are too busy. The resulting plot is shown in Fig.~\ref{fig:Density_3JPJ_ex}. Now we perform the same analysis but using the full variable set $D$ in both BDTs. The resulting contour plot is displayed in Fig.~\ref{fig:Density_3JPJ_D}, which illustrates the relevant points. The $0.001$ level boundaries for single photons (green) and photon-jets (blue) are now somewhat better separated, although the effectively one-photon jets from \pjsp{3} (when one of the photons is very soft or is outside of the jet) still lie within the single photon boundary. At the same time, however, the separation between single photons (green) and the (typically more numerous) QCD-jets (red) is somewhat degraded (the green and red regions have moved towards each other). Due to the coupling between the different pairwise separations, optimizing such a three-way separation takes careful work and likely depends on the details of the actual analysis and detector. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{Contour_Seperation_PJSP3_DD.pdf} \caption{\label{fig:Density_3JPJ_D} The BDT responses of QCD-jets (red), single photons (green) and photon-jets (blue) for photon-jets at \pjsp{3}, using the full set $D$ of variables on the horizontal axis.} \end{figure} These results clearly suggest that a three-way separation is possible, including the ability to distinguish different photon-jet scenarios. Further enhancement will arise from using the full 3-dimensional structure and from using a realistic detector simulation in the training. A thorough optimization in the context of a real detector and actual data may select different, more effective choices of the discriminating variables. \section{\label{sec:conclusion} Conclusion} In this paper we have attempted to link several concepts, some conventional and some less so, with the goal of enhancing the searches for and the analyses of both Standard Model and Beyond the Standard Model physics. We advocate employing general techniques for analyzing and interpreting the detector objects identified by applying standard jet algorithms to the calorimeter cells of typical hadron collider detectors, allowing a universal language for such objects. We have demonstrated the efficacy of employing the recent developments in jet substructure techniques to separate and identify these detector objects in terms of physics objects.
Continuing the efforts begun in Ref.~\cite{Ellis:Future}, we have focused on identifying three specific physics objects: the familiar single photons, QCD-jets and the Beyond the Standard Model (and LHC) relevant photon-jets. In particular, we have demonstrated that it is possible to achieve significant separation between photon-jets and their dominant backgrounds, i.e., single photons and QCD-jets. We expect that both the ATLAS and CMS groups could enhance their searches for signatures of new physics by adopting the methods described here. These methods should allow the separation of photon-jets from single photons from QCD-jets, and also provide some identification of the specific dynamics yielding the photon-jets. We note that our simulation does not take into account the impact of magnetic fields inside the detectors. On the other hand, one might interpret this absence of a magnetic field as making our results more conservative. When the magnetic field bends the electrons and positrons from converted photons, this serves to generate more structure inside the jet. The substructure variables, as we have described, tend to become more powerful with more structure. A more detailed analysis is, however, beyond the scope of this paper. Finally, it is worth mentioning that the formalism and techniques developed in this paper for photon-jets should work in a similar way for the case of collinear electrons, often labeled `electron-jets'~\cite{Ruderman:2009tj, Cheung:2009su, Falkowski:2010cm, Falkowski:2010gv}. An electron-jet is characterized by a large number of charged tracks along with a small hadronic energy fraction. Also, we expect the electrons inside these jets to bend in a magnetic field, creating more substructure. Therefore we anticipate that multivariate analyses similar to those described here will be correspondingly effective at separating electron-jets from QCD-jets (and photon-jets). \section*{Acknowledgements} The authors would like to acknowledge stimulating conversations with Henry Lubatti and Gordon Watts regarding the project. We especially thank Henry Lubatti for his careful reading of the manuscript. TSR would like to thank the hospitality of CERN, where part of the work was completed. The work of SDE, TSR and JS was supported, in part, by the US Department of Energy under contract number DE-FG02-96ER40956. JS would also like to acknowledge partial support from a DOE High Energy Physics Graduate Theory Fellowship. Computing resources were provided by the University of Washington, supported by the US National Science Foundation contract ARRA-NSF-0959141.
{ "timestamp": "2012-10-22T02:02:32", "yymm": "1210", "arxiv_id": "1210.3657", "language": "en", "url": "https://arxiv.org/abs/1210.3657" }
\section{Introduction} Considering the important role of the Higgs boson in particle physics, hunting for it has been one of the major tasks of the running Large Hadron Collider (LHC). Recently, both the ATLAS and CMS collaborations have reported some evidence for a light Higgs boson near 125 GeV \cite{ATLAS,CMS} with a di-photon signal rate slightly above the SM prediction \cite{Carmi}. As is well known, in new physics models beyond the SM several Higgs bosons are predicted, among which the SM-like one may be near 125 GeV \cite{125Higgs,125other,cao125,Cao:2011sn}. In our recent studies \cite{cao125,Cao:2011sn} we examined the mass of the SM-like Higgs boson in several supersymmetric (SUSY) models, including the minimal supersymmetric standard model (MSSM)\cite{Haber,Djouadi}, the next-to-minimal supersymmetric standard model (NMSSM)\cite{NMSSM1,NMSSM2} and the constrained MSSM\cite{mSUGRA,nuhm2}. At tree level, these SUSY models can hardly predict a Higgs boson near 125 GeV, and sizable radiative corrections, which mainly come from the top and top-squark loops, are necessary to enhance the Higgs boson mass\cite{Carena:1995bx}. Due to the different properties of these SUSY models, the loop contributions to the Higgs boson mass required for a 125 GeV Higgs boson are different. Therefore, different models have different lower bounds on the top-squark mass, which is associated with the fine-tuning problem \cite{tuning}. On the other hand, since the di-photon Higgs signal is the most promising discovery channel for a light Higgs boson at the LHC\cite{diphoton1}, in our recent study \cite{di-photon} we performed a comparative study of the di-photon Higgs signal in different SUSY models, namely the MSSM, NMSSM and the nearly minimal supersymmetric standard model (nMSSM) \cite{xnMSSM,cao-xnmssm}. In this note we briefly review these studies of a 125 GeV Higgs boson and its di-photon signal rate in different SUSY models. This note is organized as follows. In the next section we briefly describe the Higgs sector and the di-photon Higgs signal in these SUSY models. Then we present the numerical results and discussions in Sec. III. Finally, the conclusions are given in Sec. IV. \section{The Higgs sector and di-photon signal rate in SUSY models} \subsection{The Higgs sector in SUSY models} Different from the SM, the Higgs sector in supersymmetric models is usually extended by adding Higgs doublets and/or singlets. The most economical realization is the MSSM, which consists of two Higgs doublets $H_u$ and $H_d$. In order to solve the $\mu$-problem and the little hierarchy problem in the MSSM, singlet extensions of the MSSM, such as the NMSSM\cite{NMSSM1} and the nMSSM\cite{xnMSSM,cao-xnmssm}, have been intensively studied\cite{Barger}.
The differences between these models come from their superpotentials and the corresponding soft-breaking terms, which are given by: \begin{eqnarray} W_{\rm MSSM}&=& W_F + \mu \hat{H_u}\cdot \hat{H_d}, \label{MSSM-pot}\\ W_{\rm NMSSM}&=&W_F + \lambda\hat{H_u} \cdot \hat{H_d} \hat{S} + \frac{1}{3}\kappa \hat{S^3},\\ W_{\rm nMSSM}&=&W_F + \lambda\hat{H_u} \cdot \hat{H_d} \hat{S} +\xi_F M_n^2\hat S,\\ V_{\rm soft}^{\rm MSSM}&=&\tilde m_u^2|H_u|^2 + \tilde m_d^2|H_d|^2 + (B\mu H_u\cdot H_d + h.c.),\\ V_{\rm soft}^{\rm NMSSM}&=&\tilde m_u^2|H_u|^2 + \tilde m_d^2|H_d|^2 + \tilde m_S^2|S|^2 +(A_\lambda \lambda SH_u\cdot H_d +\frac{A_\kappa}{3}\kappa S^3 + h.c.),\\ V_{\rm soft}^{\rm nMSSM}&=&\tilde m_u^2|H_u|^2 + \tilde m_d^2|H_d|^2 + \tilde m_S^2|S|^2 +(A_\lambda \lambda SH_u\cdot H_d +\xi_S M_n^3 S + h.c.), \end{eqnarray} where $W_F$ is the MSSM superpotential without the $\mu$ term, $\lambda$, $\kappa$ and $\xi_F$ are dimensionless parameters, and $\tilde{m}_{u}$, $\tilde{m}_{d}$, $\tilde{m}_{S}$, $B$, $A_\lambda$, $A_\kappa$ and $\xi_S M_n^3$ are soft-breaking parameters. Note that in the NMSSM and nMSSM the $\mu$-term is replaced by $\mu_{\rm eff}=\lambda s$ when the singlet Higgs field $\hat S$ develops a VEV $s$. The differences between the NMSSM and nMSSM are reflected in the last term of their superpotentials, where the cubic singlet term $\kappa \hat{S}^3$ in the NMSSM is replaced by a tadpole term $\xi_F M_n^2 \hat{S}$ in the nMSSM. This replacement leaves the nMSSM with no discrete symmetry, and it is thus free of the domain wall problem from which the NMSSM suffers. Actually, since the tadpole term $\xi_F M_n^2$ does not induce any interaction, the nMSSM is identical to the NMSSM with $\kappa=0$, except for the minimization conditions of the Higgs potential and the tree-level Higgs mass matrices. With the superpotentials and the soft-breaking terms given above, one can obtain the Higgs potentials of these SUSY models, and then derive the Higgs mass matrices and eigenstates. At the minimum of the potential, the Higgs fields $H_u$, $H_d$ and $S$ are expanded as \begin{eqnarray} H_u = \left ( \begin{array}{c} H_u^+ \\ v_u +\frac{ \phi_u + i \varphi_u}{\sqrt{2}} \end{array} \right),~~ H_d & =& \left ( \begin{array}{c} v_d + \frac{\phi_d + i \varphi_d}{\sqrt{2}}\\ H_d^- \end{array} \right),~~ S = s + \frac{1}{\sqrt{2}} \left(\sigma + i \xi \right), \end{eqnarray} with $v=\sqrt{v_u^2+v_d^2}=$ 174 GeV. By a unitary rotation the mass eigenstates are given by \begin{eqnarray} \left( \begin{array}{c} h_1 \\ h_2 \\ h_3 \end{array} \right) = S \left( \begin{array}{c} \phi_u \\ \phi_d\\ \sigma\end{array} \right),~ \left(\begin{array}{c} a\\ A\\ G^0 \end{array} \right) = P \left(\begin{array}{c} \varphi_u \\ \varphi_d \\ \xi \end{array} \right),~ \left(\begin{array}{c} H^+ \\G^+ \end{array} \right) =U \left(\begin{array}{c}H_u^+\\ H_d^+ \end{array} \right). \label{rotation} \end{eqnarray} where $h_1,h_2,h_3$ are physical CP-even Higgs bosons ($m_{h_1}<m_{h_2}<m_{h_3}$), $a,A$ are CP-odd Higgs bosons, $H^+$ is the charged Higgs boson, and $G^0$, $G^+$ are Goldstone bosons eaten by $Z$ and $W^+$. Due to the absence of the singlet field $S$, the MSSM has only two CP-even Higgs bosons and one CP-odd Higgs boson, as well as one pair of charged Higgs bosons.
At tree level, the Higgs masses in the MSSM are conventionally parameterized in terms of the CP-odd Higgs boson mass $m_A$ and $\tan\beta\equiv v_u/v_d$, and the dominant loop corrections come from top and stop loops owing to the large top Yukawa coupling. For a small splitting between the stop masses, an approximate formula for the lightest Higgs boson mass is given by\cite{Carena:2011aa} \begin{equation}\label{mh} m^2_{h} \simeq M^2_Z\cos^2 2\beta + \frac{3m^4_t}{4\pi^2v^2} \ln\frac{M^2_S}{m^2_t} + \frac{3m^4_t}{4\pi^2v^2}\frac{X^2_t}{M_S^2} \left( 1 - \frac{X^2_t}{12M^2_S}\right), \end{equation} where $M_S = \sqrt{m_{\tilde{t}_1}m_{\tilde{t}_2}}$ and $X_t \equiv A_t - \mu \cot \beta$. The formula shows that a larger $M_S$ or $\tan\beta$ is necessary to push up the Higgs boson mass, and that for a given $M_S$ the Higgs boson mass reaches its maximum at $X_t/M_S=\sqrt{6}$ (the so-called $m_h^{max}$ scenario). Note that the lightest Higgs boson is the SM-like Higgs boson $h$ (with the largest coupling to vector bosons) in most of the MSSM parameter space. Different from the case of the MSSM, the Higgs sector of the NMSSM depends on the following six parameters, \begin{eqnarray} \lambda, \quad \kappa, \quad M_A^2= \frac{2 \mu (A_\lambda + \kappa s)}{\sin 2 \beta}, \quad A_\kappa, \quad \tan \beta=\frac{v_u}{v_d}, \quad \mu = \lambda s, \end{eqnarray} while in the nMSSM the input parameters of the Higgs sector are \begin{eqnarray} \lambda, \quad \tan\beta, \quad \mu, \quad A_\lambda, \quad \tilde m_S, \quad M_A^2=\frac{2(\mu A_\lambda + \lambda \xi_F M_n^2)}{\sin 2 \beta}. \end{eqnarray} Because of the coupling $\lambda\hat{H_u} \cdot \hat{H_d} \hat{S}$ in the superpotential, the tree-level Higgs boson mass receives an additional contribution in the NMSSM and nMSSM, \begin{eqnarray} \Delta m_h^2= \lambda^2 v^2 \sin^2 2\beta. \end{eqnarray} In order to push up the tree-level Higgs boson mass, $\lambda$ has to be as large as possible and $\tan\beta$ has to be small. The requirement of the absence of a Landau singularity below the GUT scale implies $\lambda\lesssim$ 0.7 at the weak scale, and the upper bound on $\lambda$ at the weak scale depends strongly on $\tan\beta$ and grows with increasing $\tan\beta$\cite{king}. However, this can still lead to a larger tree-level Higgs boson mass than in the MSSM. Therefore, the radiative corrections to $m_h^2$ may be reduced in the NMSSM and nMSSM, which allows for a light top-squark and ameliorates the fine-tuning problem\cite{tuning2}. In the NMSSM and nMSSM, due to the mixing between the doublet Higgs fields and the singlet Higgs field, the SM-like Higgs boson $h$ may be either the lightest or the next-to-lightest CP-even Higgs boson, corresponding to the so-called pull-down case and push-up case\cite{cao125}, respectively. Although the mass of the SM-like Higgs boson in the nMSSM is quite similar to that in the NMSSM, the Higgs signal is quite different. This is because of the peculiarity of the neutralino sector of the nMSSM, where the lightest neutralino $\tilde{\chi}^0_1$, as the lightest supersymmetric particle (LSP), acts as the dark matter candidate, and its mass takes the form\cite{rarez} \begin{eqnarray} m_{\tilde{\chi}^0_1} \simeq \frac{2\mu \lambda^2 v^2}{\mu^2+\lambda^2 v^2} \frac{\tan \beta}{\tan^2 \beta+1}. \end{eqnarray} This expression implies that $\tilde{\chi}_1^0$ must be lighter than about $60$ GeV for $\lambda < 0.7$ (perturbativity bound) and $\mu > 100~{\rm GeV}$ (from the lower bound on the chargino mass).
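Both the approximate mass formula (\ref{mh}) and the neutralino bound just quoted are easy to probe numerically. The following minimal Python sketch (our own illustration, not part of the original analysis; the inputs $v=174$ GeV, $m_Z=91.19$ GeV and $m_t=173.2$ GeV are rough reference values) locates the $m_h^{max}$ point of Eq.(\ref{mh}) and scans the neutralino mass formula over the stated parameter ranges:
\begin{verbatim}
# Minimal numeric checks (illustration only) of Eq. (mh) and of the
# ~60 GeV nMSSM neutralino bound quoted above.
import numpy as np

v, mz, mt = 174.0, 91.19, 173.2  # GeV, rough reference values

def mh_mssm(MS, Xt, tanb):
    """Approximate one-loop MSSM lightest Higgs mass, Eq. (mh)."""
    c2b = np.cos(2.0 * np.arctan(tanb))
    loop = 3.0 * mt**4 / (4.0 * np.pi**2 * v**2)
    return np.sqrt((mz * c2b)**2
                   + loop * (np.log(MS**2 / mt**2)
                             + (Xt / MS)**2 * (1.0 - (Xt / MS)**2 / 12.0)))

# the maximum over X_t sits at X_t/M_S = sqrt(6) ~ 2.45 (m_h^max scenario):
r = np.linspace(0.0, 3.5, 701)
print(r[np.argmax(mh_mssm(1000.0, r * 1000.0, 10.0))])

def m_chi1(mu, lam, tanb):
    """Approximate lightest-neutralino mass in the nMSSM (GeV)."""
    return (2*mu*lam**2*v**2 / (mu**2 + lam**2*v**2)) * tanb / (tanb**2 + 1)

masses = [m_chi1(mu, lam, tb)
          for mu in np.linspace(100.0, 2000.0, 200)   # mu > 100 GeV
          for lam in np.linspace(0.05, 0.7, 50)       # lambda < 0.7
          for tb in np.linspace(1.0, 50.0, 100)]
print(f"max m_chi1 over the scan: {max(masses):.1f} GeV")  # close to 60 GeV
\end{verbatim}
Note that Eq.(\ref{mh}) omits the higher-order corrections included in full spectrum calculators, so the absolute Higgs mass values it returns should be taken as indicative only.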
Moreover, $\tilde{\chi}^0_1$ must annihilate through a resonant light CP-odd Higgs boson to give the correct relic density. For such a light neutralino, the SM-like Higgs boson around 125 GeV tends to decay predominantly into light neutralinos or other light Higgs bosons\cite{cao-xnmssm}. \subsection{The di-photon Higgs signal} Since the di-photon signal is of prime importance in searching for a Higgs boson near 125 GeV, it is necessary to estimate its rate, and we define the normalized production rate as \begin{eqnarray} R_{\gamma\gamma} &\equiv & \sigma_{SUSY} ( p p \to h \to \gamma \gamma)/\sigma_{SM} ( p p \to h \to \gamma \gamma ) \nonumber \\ &\simeq& [\Gamma(h\to gg) Br(h\to \gamma\gamma)] /[\Gamma(h_{SM}\to gg) Br(h_{SM}\to \gamma\gamma)] \nonumber \\ &=& [\Gamma(h\to gg) \Gamma(h\to \gamma\gamma)] /[\Gamma(h_{SM}\to gg) \Gamma(h_{SM}\to \gamma\gamma)] \times \Gamma_{tot}(h_{SM})/\Gamma_{tot}(h) \nonumber\\ &=& C_{hgg}^2 C_{h\gamma\gamma}^2\times \Gamma_{tot}(h_{SM})/\Gamma_{tot}(h), \label{definition} \end{eqnarray} where $C_{hgg}$ and $C_{h\gamma\gamma}$ are the couplings of the Higgs boson to gluons and photons in SUSY, normalized to their respective SM values. In SUSY, the $hgg$ coupling arises mainly from loops mediated by the third-generation quarks and squarks, while the $h\gamma\gamma$ coupling receives additional contributions from loops mediated by the $W$ boson, the charged Higgs boson, the charginos, and the third-generation leptons and sleptons. The corresponding decay widths are given by\cite{Djouadi} \begin{eqnarray} \Gamma(h\to gg)&=&\frac{G_F \alpha_s^2 m_h^3}{36 \sqrt{2}\pi^3} \left| \frac{3}{4}\sum_q g_{hqq}\, A_{1/2}^h(\tau_q) +\frac{3}{4}{\cal A}^{gg} \right|^2, \label{hgg}\\ \Gamma(h\to \gamma\gamma)&=&\frac{G_{F}\alpha^{2}m_{h}^{3}}{128\sqrt{2}\pi} \left| \sum_f N_{c}\, Q_{f}^{2}\, g_{hff}\, A_{1/2}^{h}(\tau_{f})+ g_{hWW}\, A_{1}^{h}(\tau_{W}) + {\cal A}^{\gamma\gamma}\right|^2, \label{hgaga} \end{eqnarray} with $\tau_i = m_h^2/(4m_i^2)$, and \begin{eqnarray} {\cal A}^{gg} &=& \sum_{i} \frac{g_{h\tilde{q}_i\tilde{q}_i}}{m_{\tilde{q}_i}^2}A_{0}^h(\tau_{\tilde{q}_i}),\nonumber\\ {\cal A}^{\gamma\gamma} &=& g_{hH^{+}H^{-}}\frac{m_{W}^{2}}{m^{2}_{H^{\pm}}}A_{0}^{h}(\tau_{H^{\pm}}) +\sum_{f}\frac{g_{h\tilde{f}\tilde{f}}}{m_{\tilde{f}}^2}A_{0}^h(\tau_{\tilde{f}}) +\sum_i g_{h\chi_i^+\chi_i^-}\frac{m_W}{m_{\chi_i}} A_{1/2}^h(\tau_{\chi_i}) \label{agg}, \end{eqnarray} where $m_{\tilde{f}}$ and $m_{\chi_i}$ are the sfermion and chargino masses, respectively. In the limit $\tau_i \ll 1$, the asymptotic behaviors of $A_i^h$ are given by \begin{equation}\label{asymp} A_0^h \to - \frac13\ , \quad A_{1/2}^h \to -\frac43\ ,\quad A_{1}^h \to +7\,. \end{equation} One can easily see that the $W$-boson contribution to $h\gamma\gamma$ is by far dominant; however, for a light stau or light squarks with large mixing, the $h\gamma\gamma$ coupling can be enhanced, while light squarks with large mixing can also suppress the $hgg$ coupling. Therefore, a light stau with large mixing may enhance the di-photon signal rate\cite{Carena:2011aa}, whereas light squarks with large mixing have little net effect on it. \section{Numerical Results and discussions} In our work the Higgs boson masses are calculated with the package NMSSMTools\cite{NMSSMTools}, which includes the dominant one-loop and leading-logarithmic two-loop corrections.
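Before presenting the numerical results, we illustrate the structure of Eq.(\ref{definition}) with a minimal Python sketch (our own illustration; the rough SM branching fractions for $m_h\sim 125$ GeV used below are assumptions that serve only to model the total width):
\begin{verbatim}
# Narrow-width estimate of R_gamgam from reduced couplings: each SM
# partial width is rescaled by the square of the corresponding coupling.
BR_SM = {"bb": 0.58, "tautau": 0.06, "WW": 0.21, "ZZ": 0.03,
         "gg": 0.08, "cc": 0.03, "other": 0.01}  # rough, m_h ~ 125 GeV

def r_gamgam(c_hgg, c_hgaga, c_hbb, c_hvv=1.0):
    """R_gamgam ~ C_hgg^2 * C_hgaga^2 * Gamma_tot(SM) / Gamma_tot(h)."""
    g_tot = (c_hbb**2 * (BR_SM["bb"] + BR_SM["tautau"])
             + c_hvv**2 * (BR_SM["WW"] + BR_SM["ZZ"])
             + c_hgg**2 * BR_SM["gg"]
             + BR_SM["cc"] + BR_SM["other"])
    return c_hgg**2 * c_hgaga**2 / g_tot

# a mildly suppressed hbb coupling alone already enhances the rate:
print(round(r_gamgam(1.0, 1.0, 0.8), 2))    # ~ 1.30
# a stau-enhanced C_hgaga with SM-like C_hgg and C_hbb:
print(round(r_gamgam(1.0, 1.15, 1.0), 2))   # ~ 1.32
\end{verbatim}
This simple rescaling already exhibits the two enhancement mechanisms discussed below: an enhanced $C_{h\gamma\gamma}$ and a suppressed $C_{hb\bar b}$.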
Considering the Higgs hints at the LHC, we focus on a Higgs boson mass between 123 GeV and 127 GeV; furthermore, we impose the following constraints: \begin{itemize} \item [(1)] The constraints from the LHC experiments on the non-standard Higgs bosons. \item [(2)] The constraints from LEP and the Tevatron on the masses of the Higgs bosons and sparticles, as well as on neutralino pair production. \item [(3)] The indirect constraints from B-physics (such as the latest experimental result on $B_s\to \mu^+\mu^-$) and from the electroweak precision observables such as $M_W$, $\sin^2 \theta_{eff}^{\ell}$ and $\rho_{\ell}$, or their combinations $\epsilon_i (i=1,2,3)$ \cite{Altarelli}. \item [(4)] The constraints from the muon $g-2$: $a_\mu^{exp}-a_\mu^{SM} = (25.5\pm 8.2)\times 10^{-10}$ \cite{g-2}. We require SUSY to explain the discrepancy at the $2\sigma$ level. \item [(5)] The dark matter constraints from the WMAP relic density (0.1053 $< \Omega h^2 <$ 0.1193) \cite{WMAP} and the direct detection exclusion limits on the scattering cross section from the XENON100 experiment (at $90\%$ C.L.) \cite{XENON}. \end{itemize} Note that most of the above constraints have been encoded in the package NMSSMTools. \begin{figure}[t] \centering \includegraphics[width=7cm]{fig1a.ps}\hspace{0.2cm} \includegraphics[width=7cm]{fig1b.ps} \vspace*{-0.5cm} \caption{The scatter plots of the samples in the MSSM and NMSSM (with $\lambda>$ 0.53) satisfying various constraints listed in the text (including $ 123{\rm GeV} \leq m_h \leq 127 {\rm GeV} $), showing the correlation between the mass of the lighter top-squark and $X_t/M_{S}$ with $M_S \equiv \sqrt{m_{\tilde t_1} m_{\tilde t_2}}$ and $X_t \equiv A_t - \mu \cot \beta$. In the right panel the circles (green) denote the pull-down case (the lightest Higgs boson being the SM-like Higgs), and the times (red) denote the push-up case (the next-to-lightest Higgs boson being the SM-like Higgs).} \label{fig1} \end{figure} Natural supersymmetry is usually characterized by a small superpotential parameter $\mu$ and third-generation squarks with masses $\lesssim 0.5-1.5$ TeV\cite{naturalsusy}. Therefore, we only consider the case with \begin{eqnarray} 100{\rm ~GeV}\leq (M_{Q_3},M_{U_3})\leq 1 {\rm ~TeV} ,~~|A_{t}|\leq 3 {\rm ~TeV}. \label{narrow} \end{eqnarray} For $\lambda<0.2$ the NMSSM behaves similarly to the MSSM\cite{cao125}, so in order to distinguish the features of the two models, we only consider the case $\lambda> m_Z/v \simeq 0.53$ in the NMSSM. We scan over the parameter space of the MSSM and NMSSM under the above experimental constraints and study the properties of the Higgs boson for the surviving samples. \begin{figure}[htbp] \centering \includegraphics[width=6.5cm]{fig2.ps} \vspace*{-0.5cm} \caption{Same as Fig.\ref{fig1}, but only for the NMSSM, showing the correlation between $m_{\tilde t_1}$ and the ratio $m_{\tilde t_2}/m_{\tilde t_1}$.} \label{fig2} \end{figure} In Fig.\ref{fig1} we display the surviving samples in the MSSM and NMSSM (with $\lambda>$ 0.53), showing the correlation between the lighter top-squark mass and the ratio $X_t/M_{S}$ with $M_S \equiv \sqrt{m_{\tilde t_1} m_{\tilde t_2}}$. From the figure we see that for a moderately light $\tilde t_1$, a large $X_t$ is necessary to satisfy $m_h\sim$ 125 GeV, while for large $m_{\tilde t_1}$ the ratio $X_t/M_{S}$ decreases. In the MSSM, $|X_t/M_{S}|>$ 1.6 for $m_{\tilde t_1} <$ 1 TeV, i.e.
the no-mixing scenario ($X_t=0$) cannot survive, and the top-squark mass is usually larger than 300 GeV. This implies that a large top-squark mass or a near-maximal stop mixing is necessary to obtain a Higgs boson mass near 125 GeV. The situation is very different in the NMSSM: $X_t\approx 0$ may also survive, and the lighter top-squark can be as light as about 100 GeV, which may alleviate the fine-tuning problem and make the NMSSM seem more natural. In the case of a light $m_{\tilde t_1}$, $|X_t/M_{S}|$ is usually larger than $\sqrt{6}$, which corresponds to a large splitting between $m_{\tilde t_1}$ and $m_{\tilde t_2}$, as shown in Fig.\ref{fig2}. \begin{figure}[t] \includegraphics[width=6.5cm]{fig3a.ps}\hspace{0.2cm} \includegraphics[width=6.5cm]{fig3b.ps} \includegraphics[width=6.5cm]{fig3c.ps}\hspace{0.2cm} \includegraphics[width=6.5cm]{fig3d.ps} \vspace*{-0.5cm} \caption{Same as Fig.\ref{fig1}, projected in the planes of $m_{\tilde t_1}$ versus the reduced couplings $C_{h\gamma\gamma}$ and $C_{hgg}$, respectively. } \label{fig3} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=7cm]{fig4a.ps}\hspace{0.2cm} \includegraphics[width=7cm]{fig4b.ps} \vspace*{-0.5cm} \caption{Same as Fig.\ref{fig1}, but only for the MSSM, showing the correlations of the reduced coupling $C_{h\gamma\gamma}$ with $m_{\tilde \tau_1}$, $\mu$ and $\tan\beta$. The purple points correspond to $R_{\gamma\gamma}>1$.} \label{fig4} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=7cm]{fig5a.ps}\hspace{0.2cm} \includegraphics[width=7cm]{fig5b.ps} \vspace*{-0.5cm} \caption{Same as Fig.\ref{fig1}, but showing the dependence of the di-photon signal rate $R_{\gamma\gamma}$ on the effective $h b\bar{b}$ coupling $C_{h b\bar{b}}\equiv C^{\rm SUSY}_{h b\bar{b}}/C^{\rm SM}_{h b\bar{b}}$ (taken from Ref.\cite{cao125}).} \label{fig5} \end{figure} Due to its clean background, the di-photon signal is crucial for searching for the Higgs boson near 125 GeV. As discussed in Sec.~II, the signal rate depends on the couplings $C_{h\gamma\gamma}$ and $C_{hgg}$ and on the total width of the SM-like Higgs boson. Both $C_{h\gamma\gamma}$ and $C_{hgg}$ are affected by squark-loop contributions, especially by a light top-squark loop, so in Fig.\ref{fig3} we show the dependence of the couplings $C_{h\gamma\gamma}$ and $C_{hgg}$ on the lighter top-squark mass. The figure shows that a light $\tilde t_1$ may suppress the coupling $C_{hgg}$ significantly, especially in the NMSSM, while it has little effect on the coupling $C_{h\gamma\gamma}$ because of the additional contributions appearing in Eqs.(\ref{hgg}) and (\ref{hgaga}). As analyzed in Sec.~II, a light stau may enhance the coupling $C_{h\gamma\gamma}$, so in Fig.\ref{fig4} we show the correlation between $m_{\tilde \tau_1}$ and the coupling $C_{h\gamma\gamma}$ in the MSSM. The figure clearly shows that the coupling $C_{h\gamma\gamma}$ can be enhanced to about 1.25 for $m_{\tilde \tau_1}\sim$ 100 GeV. Fig.\ref{fig4} also shows that the enhancement of $C_{h\gamma\gamma}$ corresponds to large $\mu\tan\beta$, which leads to large stau mixing. These results verify the discussion in Sec.~II. Since $h\to b\bar b$ is the main decay mode of the light Higgs boson, the total width of the SM-like Higgs boson is controlled mainly by the effective $hb\bar b$ coupling $C_{hb\bar{b}}$, as discussed in \cite{di-photon}.
Under the combined effect of $C_{hgg} C_{h\gamma \gamma}/C_{hb\bar{b}}$, the di-photon Higgs signal rate may be either enhanced or suppressed, as shown in Fig.\ref{fig5}. The figure also shows that for a signal rate larger than 1, the effective $hb\bar b$ coupling is slightly enhanced in the MSSM, while it is significantly suppressed in the NMSSM. Therefore, we can conclude that the origin of the enhancement of the signal rate is very different in the MSSM and in the NMSSM: in the MSSM the enhancement is mainly due to the enhancement of the coupling $C_{h\gamma\gamma}$, while in the NMSSM it is mainly due to the suppression of the $hb\bar b$ coupling. \begin{figure}[htb] \centering \includegraphics[width=7cm]{fig6.ps} \vspace*{-0.5cm} \caption{Same as Fig.\ref{fig2}, showing the signal rate $R_{\gamma\gamma}$ versus $S_d^2$ with $S_d=C_{hb\bar b}\cos\beta$.} \label{fig6} \end{figure} Due to the presence of the singlet field in the NMSSM, the doublet component of the SM-like Higgs boson $h$ may differ from that of the MSSM, which affects the $hb\bar b$ coupling and accordingly the total width of $h$. At tree level, $C_{h b\bar{b}}=S_d/\cos\beta$. In Fig.\ref{fig6} we show the dependence of the signal rate $R_{\gamma\gamma}$ on $S_d^2$. Obviously, for a signal rate larger than 1, $S_d^2$ is usually very small, which leads to a strong suppression of the reduced $hb\bar b$ coupling. The figure also shows that the push-up case is more effective in enhancing the signal rate than the pull-down case, because the push-up case more easily realizes a large mixing between the singlet and doublet fields\cite{cao125}. As in the NMSSM, the nMSSM can also accommodate a 125 GeV SM-like Higgs boson\cite{di-photon}. However, due to the peculiar property of the lightest neutralino $\tilde\chi_1^0$ in the nMSSM\cite{rarez}, the decay modes of the SM-like Higgs boson are very different from those in the MSSM and NMSSM. As discussed in Sec.~II, $h\to \tilde\chi_1^0\tilde\chi_1^0$ may dominate over $h\to b\bar b$ in a major part of the nMSSM parameter space \cite{di-photon,zhu}, which induces a severe suppression of the di-photon Higgs signal. Although the Higgs boson mass can easily reach 125 GeV, the di-photon signal is then inconsistent with the LHC observation. Therefore, the nMSSM may be excluded by the LHC experiments. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{fig7.ps} \vspace*{-0.5cm} \caption{The scatter plots of the surviving samples in the CMSSM, displayed in the planes of the top-squark mass and the LHC di-photon rate versus the Higgs boson mass. In the left frame, the crosses (red) denote the samples satisfying all the constraints except $B_s\to\mu^+\mu^-$, and the times (green) denote those further satisfying the $Br(B_s\to\mu^+\mu^-)$ constraint. In the right frame, the crosses (red) are the same as those in the left frame, while the times (sky-blue) denote the samples further satisfying the $R$ constraint (taken from Ref.\cite{Cao:2011sn}).} \label{fig7} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=12cm]{fig8.ps} \vspace*{-0.5cm} \caption{Same as Fig.\ref{fig7}, but for the NUHM2 (taken from Ref.\cite{Cao:2011sn}).} \label{fig8} \end{figure} We also considered the SM-like Higgs boson mass and its di-photon signal in the constrained MSSM (including the CMSSM and NUHM2) under various experimental constraints, especially the limits from $B_s\to \mu^+\mu^-$.
Because $Br(B_s\to \mu^+\mu^-)\propto\tan^6\beta/M_A^4$, it may provide a rather strong constraint on SUSY with large $\tan\beta$. Considering the large theoretical uncertainties in the calculation of $Br(B_s\to \mu^+\mu^-)$, we use not only the LHCb data, but also the double ratio of purely leptonic decays defined as $R\equiv\frac{\eta}{\eta_{SM}}$ with $\eta\equiv\frac{Br(B_s\to\mu^+\mu^-)/Br(B_u\to\tau\nu_\tau)} {Br(D_s\to\tau\nu_\tau)/Br(D\to\mu\nu_\mu)}$. The surviving parameter space is plotted in Fig.\ref{fig7} for the CMSSM and Fig.\ref{fig8} for the NUHM2. The figures show that it is difficult for both the CMSSM and the NUHM2 to realize a 125 GeV SM-like Higgs boson, and that the di-photon Higgs signal is suppressed relative to the SM prediction due to the enhanced $hb\bar b$ coupling. Therefore, the constrained MSSM may also be excluded by the LHC experiments. \section{Conclusion} In this work we briefly reviewed our recent studies on a 125 GeV Higgs boson and its di-photon signal rate in the MSSM, NMSSM, nMSSM and the constrained MSSM. Under the current experimental constraints, we find: (i) the SM-like Higgs boson mass can easily reach 125 GeV in the MSSM, NMSSM and nMSSM, while this is hard to achieve in the constrained MSSM; (ii) the NMSSM may predict a lighter top-squark than the MSSM, even as light as 100 GeV, which can ameliorate the fine-tuning problem; (iii) the di-photon Higgs signal is suppressed in the nMSSM and the constrained MSSM, but it can be enhanced in a small corner of the parameter space of the MSSM and NMSSM. \section*{Acknowledgement} The work is supported by the Startup Foundation for Doctors of Henan Normal University under contract No.11108.
{ "timestamp": "2012-10-16T02:02:37", "yymm": "1210", "arxiv_id": "1210.3751", "language": "en", "url": "https://arxiv.org/abs/1210.3751" }
\section{Introduction}\label{secone} The Szlenk index is an ordinal-valued isomorphic invariant of a Banach space that was introduced in \cite{Szlenk1968}. There it is used to show that the class of separable, reflexive Banach spaces contains no universal element, thereby solving a problem posed by Banach and Mazur in the Scottish Book. Since then the Szlenk index has found a variety of uses in the study of Banach space geometry, a survey of which can be found in \cite{Lancien2006}. One of the main applications of the Szlenk index is in the study of $C(K)$ spaces and their operators, as witnessed in particular by the work of Alspach \cite{Alspach1978}, Alspach and Benyamini \cite{Alspach1979}, Benyamini \cite{Benyamini1978}, Bourgain \cite{Bourgain1979} and Gasparis \cite{Gasparis2005}; we refer to the survey article \cite{Rosenthal2003} for a detailed discussion of this topic. The purpose of the current paper is to enlarge the class of $C(K)$ spaces for which the Szlenk index of $C(K)$ is known. We shall also discuss the related $w^\ast$-dentability index for the same class of $C(K)$ spaces. It is a classical result of Mazurkiewicz and Sierpinski \cite{MS} that every countable, compact Hausdorff space is homeomorphic to an ordinal interval $[0,\,\alpha]$ equipped with its order topology, for some $\alpha<\omega_1$. The linear isomorphic classification of $C(K)$ spaces with $K$ countable is due to Bessaga and Pe{\l}czy{\'n}ski \cite{Bessaga1960}, who showed that for ordinals $\omega \leq \alpha <\beta <\omega_1$, $C([0,\,\alpha])$ is isomorphic to $C([0,\,\beta])$ if and only if $\beta<\alpha^\omega$. In particular, it follows that each $C(K)$ space with $K$ countable is isomorphic to the space $C([0,\,\omega^{\omega^\gamma}])$ for a unique countable ordinal $\gamma$. The computation of the Szlenk indices of the Banach spaces $C(K)$ with $K$ countable is due to Samuel \cite{Samuel1984}; drawing upon deep results of Alspach and Benyamini \cite{Alspach1979}, Samuel showed that the Szlenk index of $C([0,\,\omega^{\omega^\gamma}])$ is $\omega^{\gamma+1}$ for each countable ordinal $\gamma$. The first extension of Samuel's result was achieved by Lancien \cite{Lancien1996}, who used Samuel's result and a separable-reduction argument to show that if $K$ is a (scattered) compact Hausdorff space of countable height, then the Szlenk index of $C(K)$ is $\omega^{\gamma+1}$, where $\gamma$ is the unique ordinal such that the height of $K$ belongs to the ordinal interval $[\omega^\gamma, \, \omega^{\gamma+1})$. H{\'a}jek and Lancien later gave in \cite{H'ajek2007} a `direct' proof of Samuel's result, the existence of which was conjectured by Rosenthal in \cite{Rosenthal2003}; in particular, they computed the Szlenk indices of the Banach spaces $C([0,\,\alpha])$ for ordinals $\alpha<\omega_1\omega$ without appeal to Alspach and Benyamini's results from \cite{Alspach1979}. Their result says that for $\omega\leq \alpha<\omega_1\omega$ the Szlenk index of $C([0,\,\alpha])$ is $\omega^{\gamma+1}$, where $\gamma$ is the unique ordinal satisfying $\omega^{\omega^\gamma}\leq \alpha<\omega^{\omega^{\gamma+1}}$. As the Szlenk index of $c_0(X)$ coincides with the Szlenk index of $X$ for every infinite dimensional Banach space $X$, it follows easily that the statement of H{\'a}jek and Lancien's result holds in fact for all ordinals $\alpha<\omega_1\omega^\omega$ (see \cite[p.2232]{Brookera} for details.) 
The main purpose of the current paper is to determine the Szlenk index of $C([0,\,\alpha])$ for $\alpha$ an arbitrary ordinal. In particular, we shall extend the previous results of Samuel and H{\'a}jek-Lancien, showing that for $\alpha\geq \omega$ the Szlenk index of $C([0,\,\alpha])$ is $\omega^{\gamma+1}$, where $\gamma$ is the unique ordinal satisfying $\omega^{\omega^\gamma}\leq \alpha<\omega^{\omega^{\gamma+1}}$ (Theorem~\ref{stpip}). The computation by H{\'a}jek and Lancien of Szlenk indices of the spaces $C([0,\,\alpha])$, $\alpha<\omega_1$, makes use of the aforementioned isomorphic classification of the spaces $C([0,\,\alpha])$, $\alpha<\omega_1$, by Bessaga and Pe{\l}czy{\'n}ski. That the statement of the Bessaga-Pe{\l}czy{\'n}ski classification theorem does not hold in general for the spaces $C([0,\,\alpha])$ when $\alpha\geq\omega_1$ is the reason that the argument of H{\'a}jek and Lancien does not yield the Szlenk indices of the spaces $C([0,\,\alpha])$ for arbitrary $\alpha$. In the current paper we avoid an appeal to the Bessaga-Pe{\l}czy{\'n}ski theorem, working instead through decompositions of the spaces $C([0,\,\alpha])$ into $c_0$-direct sums of smaller spaces of continuous functions on compact ordinals (cf. Lemma~\ref{bp}) and isomorphisms of such $c_0$-direct sums (cf. Lemma~\ref{pb}). In Section~\ref{secthree} we shall outline how the techniques developed in \cite{H'ajek2009} can be combined with the arguments used in the proof of Theorem~\ref{stpip} of the current paper to show that for $\alpha$ and $\gamma$ as in the preceding paragraph, the $w^\ast$-dentability index of $C([0,\,\alpha])$ is $\omega^{1+\gamma+1}$ (Theorem~\ref{dentthm}). We now detail most of the necessary terminology and background results for the current paper. As usual, $\omega$ denotes the first infinite ordinal and $\omega_1$ denotes the first uncountable ordinal. For $K$ a compact Hausdorff space, $C(K)$ is the Banach space of continuous scalar-valued functions on $K$, equipped with the supremum norm. For $\alpha$ an ordinal, the ordinal interval $[0,\,\alpha]$ is a compact Hausdorff space when equipped with its order topology. We denote by $C_0([0,\,\alpha])$ the closed subspace of $C([0,\,\alpha])$ consisting of all elements of $C([0,\,\alpha])$ that vanish at $\alpha$. It is well-known and easy to show that $C_0([0,\,\alpha])$ is isomorphic to $C([0,\,\alpha])$ whenever $\alpha\geq\omega$. For ordinals $\xi \leq \alpha$ and $f\in C_0([0,\,\xi])$, we define $f_{\xi,\,\alpha}\in C_0([0,\,\alpha])$ by setting \[ f_{\xi,\,\alpha}(\zeta)= \begin{cases} f(\zeta)& \text{if $\zeta\leq \xi$},\\ 0& \text{if $\zeta >\xi$,} \end{cases}\quad 0\leq \zeta \leq\alpha\, . \] It is clear that the operator $J_{\xi,\,\alpha}: C_0([0,\,\xi])\longrightarrow C_0([0,\,\alpha]): f\mapsto f_{\xi,\,\alpha}$ is an isometric linear embedding of $C_0([0,\,\xi])$ into $C_0([0,\,\alpha])$. For a Banach space $X$ we write $\cball{X}$ for the set $\{ x\in X\mid \Vert x\Vert\leq 1\}$. If $Y$ is a Banach space that is isomorphic to $X$, we write $X\approx Y$. If $S$ is a nonempty set, $c_0(S, \, X)$ is defined to be the linear space \[ \{ f: S\longrightarrow X \mid \{ s\in S \mid \Vert f(s)\Vert >\epsilon\} \mbox{ is finite for every }\epsilon>0\} \] which we equip with the complete norm $\Vert f\Vert := \sup\{ \Vert f(s)\Vert\mid s \in S\}$. For a nonempty subset $R\subseteq S$, we denote by $U_R$ the canonical isometric linear embedding of $c_0(R, \, X)$ into $c_0(S, \, X)$. 
The dual space $c_0(S,\, X)\sp{\ast}$ is naturally identified via isometric linear isomorphism with the Banach space $\ell_1(S,\,X\sp{\ast})$. The Szlenk index is defined as follows. Let $X$ be an Asplund space and $B \subseteq X\sp{\ast}$. Define \begin{equation*} s_\epsilon(B) = \{x\sp{\ast} \in B \mid \diam (B \cap V)> \epsilon \mbox{ for every } w\sp{\ast} \mbox{-open }V\ni x\sp{\ast}\}\,. \end{equation*} We iterate $s_\epsilon$ transfinitely as follows: $s_\epsilon^0(B) = B$, $ s_\epsilon^{\beta+1}(B)= s_\epsilon(s_\epsilon^\beta(B))$ for each ordinal $\beta$ and $ s_\epsilon^\beta(B) = \bigcap_{\sigma< \beta} s_\epsilon^\sigma(B)$ whenever $\beta $ is a limit ordinal. The \emph{$\epsilon$-Szlenk index of $X$}, denoted ${Sz}(X, \, \epsilon)$, is the first ordinal $\beta$ such that $s_\epsilon^\beta(\cball{X\sp{\ast}}) = \emptyset$. The \emph{Szlenk index of $X$} is the ordinal $Sz(X):=\sup_{\epsilon >0}Sz(X, \, \epsilon)$. Note that $Sz(X,\,\epsilon)$ exists for every Asplund space $X$ and $\epsilon > 0$ by the following well-known characterisation of Asplund spaces: a Banach space is Asplund if and only if every bounded subset of its dual admits $w\sp{\ast}$-open slices of arbitrarily small norm diameter \cite[Theorem~5.2]{Deville1993}. The ordinal index $Sz(X)$ is thus defined for every Asplund space $X$, and the definition cannot be extended beyond the class of Asplund spaces. It is worth noting that the definition of the Szlenk index used in the current paper (and many others) differs from that introduced by Szlenk in \cite{Szlenk1968}; however, the two definitions give the same index on separable Banach spaces containing no copy of $\ell_1$. The following proposition collects some basic facts regarding the Szlenk index. \begin{proposition}\label{collection} Let $X$ and $Y$ be Asplund spaces. \begin{itemize} \item[(i)] If $X$ is isomorphic to a subspace of $Y$, then $ Sz(X)\leq Sz(Y)$. In particular, the Szlenk index is an isomorphic invariant of an Asplund space. \item[(ii)] If $\gamma$ is an ordinal and $\epsilon >0$ is such that $Sz(X, \, \epsilon)>\omega^\gamma$, then $Sz(X)\geq \omega^{\gamma+1}$. It follows that $Sz(X)=\omega^\alpha$ for some ordinal $\alpha$. \item[(iii)] $Sz(X)=1$ if and only if $\dim (X)<\infty$. \end{itemize} \end{proposition} Details of the proofs of assertions (i) and (ii) of Proposition~\ref{collection} can be found in \cite[\S2.4]{H'ajek2008}. Verification of (iii) is elementary. The characterisation of those compact Hausdorff spaces $K$ for which $C(K)$ is an Asplund space is due to Namioka and Phelps; they showed in \cite{Namioka1975} that a Banach space $C(K)$ is Asplund if and only if $K$ is scattered. As every ordinal interval $[0,\,\alpha]$ is scattered and compact when equipped with its order topology, the spaces $C([0,\,\alpha])$ are Asplund spaces and their Szlenk index is defined. Information regarding topological properties of ordinals can be found in, e.g., \cite[\S8.6]{Semadeni1971}. Important to our analysis of the spaces $C([0,\,\alpha])$ and their duals is the fact that for a scattered, compact Hausdorff space $K$, the dual space $C(K)\sp{\ast}$ is naturally identified with $\ell_1(K)$; this is due to Rudin \cite[Theorem~6]{Rudin1957}. The dual of $C_0([0,\,\alpha])$ is naturally identified with $\ell_1([0,\,\alpha))$. \section{The Szlenk index of $C([0,\,\alpha])$}\label{sectwo} We begin our computations of Szlenk indices by gathering some preliminary results that we shall need.
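Before doing so, let us illustrate the definitions above with the elementary observation behind Proposition~\ref{collection}(iii): if $\dim (X)<\infty$, then the $w\sp{\ast}$ and norm topologies coincide on $\cball{X\sp{\ast}}$, so each $x\sp{\ast}\in \cball{X\sp{\ast}}$ admits $w\sp{\ast}$-open neighbourhoods $V$ with $\diam (\cball{X\sp{\ast}}\cap V)\leq \epsilon$; hence $s_\epsilon (\cball{X\sp{\ast}})=\emptyset$ for every $\epsilon>0$ and $Sz(X)=1$. It is the contrapositive of this observation that will be used in the proof of Proposition~\ref{build} below.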
The first such result is the following proposition that provides a way to obtain an upper estimate of the Szlenk index of a Banach space. \begin{proposition}[\cite{H'ajek2007}]\label{upest} Let $X$ be a Banach space and $\eta$ an ordinal. Assume that \[ \forall\epsilon>0\quad \exists\delta(\epsilon)\in(0,\,1) \quad s_\epsilon^\eta (\cball{X\sp{\ast}}) \subseteq (1-\delta(\epsilon))\cball{X\sp{\ast}}\, . \] Then \[ Sz(X)\leq \eta\omega\, . \] \end{proposition} \begin{lemma}\label{bp} Let $\xi$ and $\zeta$ be ordinals satisfying $0<\zeta\leq\xi$ and $\omega\leq\xi$. Then $C_0([0,\,\xi\zeta])\approx C_0([0,\,\zeta])\oplus c_0(\zeta, \, C_0([0,\,\xi]))$. \end{lemma} Lemma~\ref{bp} is essentially noted by Bessaga and Pe{\l}czy{\'n}ski in their proof of \cite[2.4]{Bessaga1960}; for the sake of completeness, we give here the details of their sketch proof. \begin{proof} We may write $C_0([0,\,\xi\zeta])=Y\oplus Z$, where $Y$ consists of all elements of $C_0([0,\,\xi\zeta])$ that are constant on the intervals $(\xi \sigma,\,\xi(\sigma+1)]$, $0\leq \sigma<\zeta$, and $Z$ consists of all elements of $C_0([0,\,\xi\zeta])$ vanishing at the points $\xi\sigma$, $1\leq \sigma\leq \zeta$. The lemma then follows from the routine observation that $Y$ and $Z$ are isometrically isomorphic to $C_0([0,\,\zeta])$ and $ c_0(\zeta, \, C_0([0,\,\xi]))$ respectively. \end{proof} We have the following consequence of Lemma~\ref{bp}. \begin{lemma}\label{pb} Let $\gamma$ be an ordinal and $1<n<\omega$. Then \[ C_0([0,\,\omega^{\omega^\gamma n}])\approx c_0(\omega^{\omega^\gamma}, \, C_0([0,\,\omega^{\omega^\gamma}]))\, .\] \end{lemma} \begin{proof} We proceed via induction on $n$. For the case $n=2$, note that an application of Lemma~\ref{bp} with $\xi=\zeta = \omega^{\omega^\gamma}$ yields \[ C_0([0,\,\omega^{\omega^\gamma 2}])\approx C_0([0,\,\omega^{\omega^\gamma}])\oplus c_0(\omega^{\omega^\gamma}, \, C_0([0,\,\omega^{\omega^\gamma}])) \approx c_0(\omega^{\omega^\gamma}, \, C_0([0,\,\omega^{\omega^\gamma}]))\, , \] as desired. Similarly, if $C_0([0,\,\omega^{\omega^\gamma n}])\approx c_0(\omega^{\omega^\gamma}, \, C_0([0,\,\omega^{\omega^\gamma}]))$, then applying Lemma~\ref{bp} with $\zeta= \omega^{\omega^\gamma}$ and $\xi = \omega^{\omega^\gamma n}$ yields \begin{align*} C_0([0,\,\omega^{\omega^\gamma (n+1)}]) &\approx C_0([0,\,\omega^{\omega^\gamma}]) \oplus c_0(\omega^{\omega^\gamma}, \, C_0([0,\,\omega^{\omega^\gamma n}]))\\ &\approx C_0([0,\,\omega^{\omega^\gamma}]) \oplus c_0(\omega^{\omega^\gamma},\, c_0(\omega^{\omega^\gamma}, \, C_0([0,\,\omega^{\omega^\gamma}])))\\ &\approx C_0([0,\,\omega^{\omega^\gamma}]) \oplus c_0(\omega^{\omega^\gamma}, \, C_0([0,\,\omega^{\omega^\gamma}]))\\ &\approx c_0(\omega^{\omega^\gamma}, \, C_0([0,\,\omega^{\omega^\gamma}]))\,, \end{align*} which completes the proof. \end{proof} The last of the preliminary results that we shall require is the following generalisation of \cite[Lemma~3.3]{H'ajek2007}. \begin{lemma}\label{HaLag} Let $\alpha$, $\beta$ and $\xi$ be ordinals such that $\xi<\alpha$, let $S$ be a set, $\emptyset \subsetneq R\subseteq S$ and $\epsilon >0$. If $(z_s)_{s\in S}\in s_{3\epsilon}^\beta(\cball{c_0(S,\,C_0([0,\,\alpha]))\sp{\ast}})$ and $\sum_{r\in R}\Vert J_{\xi,\,\alpha}\sp{\ast} z_r\Vert>1-\epsilon$, then $(J_{\xi,\, \alpha}\sp{\ast} z_r)_{r\in R}\in s_\epsilon^\beta (B_{c_0(R,\,C_0([0,\,\xi]))\sp{\ast}})$. \end{lemma} \begin{proof} We proceed via transfinite induction on $\beta$. The assertion of the lemma is clearly true for $\beta=0$. 
Suppose that $\sigma$ is an ordinal such that the assertion of the lemma holds for all $\beta\leq \sigma$; we will show that the lemma holds also for $\beta=\sigma+1$. Let $(z_s)_{s\in S} \in \cball{c_0(S,\,C_0([0,\,\alpha]))\sp{\ast}}$ be such that $\sum_{r\in R}\Vert J_{\xi,\,\alpha}\sp{\ast} z_r \Vert>1-\epsilon$ and $(J_{\xi,\, \alpha}\sp{\ast} z_r)_{r\in R}\notin s_\epsilon^{\sigma+1} (B_{c_0(R,\,C_0([0,\,\xi]))\sp{\ast}})$. Since we intend to show that $(z_s)_{s\in S} \notin s_{3\epsilon}^{\sigma+1}(\cball{c_0(S,\,C_0([0,\,\alpha]))\sp{\ast}})$, we may assume that $(z_s)_{s\in S} \in s_{3\epsilon}^{\sigma}(\cball{c_0(S,\,C_0([0,\,\alpha]))\sp{\ast}})$, hence $(J_{\xi,\, \alpha}\sp{\ast} z_r)_{r\in R}\in s_\epsilon^\sigma (B_{c_0(R,\,C_0([0,\,\xi]))\sp{\ast}})$ by the induction hypothesis. So there is a $w\sp{\ast}$-open subset $V$ of $c_0(R, \, C_0([0,\,\xi]))\sp{\ast}$ containing $(J_{\xi,\, \alpha}\sp{\ast} z_r)_{r\in R}$ and such that $\diam (V\cap s_\epsilon^\sigma(\cball{c_0(R,\, C_0([0,\,\xi]))\sp{\ast}}))\leq \epsilon$. Since $\sum_{r\in R}\Vert J_{\xi,\,\alpha}\sp{\ast} z_r \Vert>1-\epsilon$, we may assume that \begin{equation}\label{doop} V\cap (1-\epsilon)\cball{c_0(R,\,C_0([0,\,\xi]))\sp{\ast}} =\emptyset\, . \end{equation} Define \[ J: c_0(R,\, C_0([0,\,\xi]))\longrightarrow c_0(R,\,C_0([0,\,\alpha])): (x_r)_{r\in R}\mapsto (J_{\xi, \, \alpha}x_r)_{r\in R}\,, \] so that $U_RJ$ is an isometric linear embedding of $c_0(R, \, C_0([0,\,\xi]))$ into $c_0(S, \, C_0([0,\,\alpha]))$. Let $W=(J\sp{\ast} U_R\sp{\ast})^{-1}(V)$, so that $W$ is a $w\sp{\ast}$-open set containing $(z_s)_{s\in S}$, and let $(u_s)_{s\in S}\in W\cap s_{3\epsilon}^\sigma(\cball{c_0(S,\, C_0([0,\,\alpha]))\sp{\ast}})$. Then $\sum_{r\in R}\Vert J_{\xi, \alpha}\sp{\ast} u_r \Vert >1-\epsilon$ by (\ref{doop}) and $(u_s)_{s\in S}\in s_{3\epsilon}^\sigma(\cball{c_0(S,\, C_0([0,\,\alpha]))\sp{\ast}})$ by assumption, hence $J\sp{\ast} U_R\sp{\ast} (u_s)_{s\in S}\in s_{\epsilon}^\sigma (\cball{c_0(R, \, C_0([0,\,\xi]))\sp{\ast}})$ by the induction hypothesis. Suppose $(u_s)_{s\in S}, \, (v_s)_{s\in S}\in W\cap s_{3\epsilon}^\sigma (\cball{c_0(S, \, C_0([0,\,\alpha]))\sp{\ast}})$. Then \[ \Vert J\sp{\ast} U_R\sp{\ast} (u_s)_{s\in S} - J\sp{\ast} U_R\sp{\ast} (v_s)_{s\in S} \Vert \leq \diam (V\cap s_\epsilon^\sigma(\cball{c_0(R,\, C_0([0,\,\xi]))\sp{\ast}}))\leq \epsilon \, .\] Moreover, since $\Vert J\sp{\ast} U_R\sp{\ast}(u_s)_{s\in S}\Vert >1-\epsilon$, we have \[ \sum_{s\in S\setminus R}\Vert u_s\Vert +\sum_{r\in R}\Vert u_r|_{[\xi, \, \alpha)}\Vert <\epsilon\, , \] and similarly, \[ \sum_{s\in S\setminus R}\Vert v_s\Vert +\sum_{r\in R}\Vert v_r|_{[\xi, \, \alpha)}\Vert <\epsilon\, . \] It follows that \begin{align*} \Vert (u_s)_{s\in S}- (v_s)_{s\in S}\Vert &\leq \Vert J\sp{\ast} U_R\sp{\ast} (u_s)_{s\in S} - J\sp{\ast} U_R\sp{\ast} (v_s)_{s\in S} \Vert + \sum_{s\in S\setminus R}\Vert u_s-v_s\Vert +\sum_{r\in R}\Vert (u_r-v_r)|_{[\xi, \, \alpha)} \Vert\\ &\leq \epsilon +\epsilon +\epsilon = 3\epsilon\, . \end{align*} In particular, $\diam (W\cap s_{3\epsilon}^\sigma(\cball{c_0(S, \, C_0([0,\,\alpha]))\sp{\ast}}))\leq 3\epsilon$, hence $(z_s)_{s\in S}\notin s_{3\epsilon}^{\sigma+1}(\cball{c_0(S, \, C_0([0,\,\alpha]))\sp{\ast}})$. We have now shown that the assertion of the lemma passes to successor ordinals. As the assertion of the lemma passes readily to limit ordinals, the proof is complete. 
\end{proof} We are now ready to determine upper estimates for the Szlenk indices of the Banach spaces $C_0([0,\,\omega^{\omega^\gamma}])$, where $\gamma$ is an arbitrary ordinal. \begin{proposition}\label{build} Let $\gamma$ be an ordinal and $0<n<\omega$. Then \[Sz(c_0(\omega^{\omega^\gamma},\,C_0([0,\,\omega^{\omega^\gamma n}])))\leq\omega^{\gamma+1}\, .\] \end{proposition} \begin{proof} We proceed by induction on $\gamma$, first establishing the proposition in the case $\gamma=0$ and $n=1$. By Proposition~\ref{upest}, it suffices to show that \[ \forall \epsilon>0 \quad s_\epsilon (\cball{c_0(\omega,\,C_0([0,\,\omega]))\sp{\ast}})\subseteq \Big(1-\frac{\epsilon}{3}\Big)\cball{c_0(\omega,\,C_0([0,\,\omega]))\sp{\ast}}\, . \] Suppose by way of contraposition that there is $\epsilon>0$ and $(z_l)_{l<\omega}\in s_\epsilon (\cball{c_0(\omega,\,C_0([0,\,\omega]))\sp{\ast}})$ such that $\Vert (z_l)_{l<\omega}\Vert >1-\epsilon/3$. Since \[ \Vert (z_l)_{l<\omega}\Vert = \sup \Big\{ \sum_{r\in R}\Vert J_{m,\,\omega}\sp{\ast} z_r\Vert \,\, \Big\vert \,\, 0<m<\omega, \, R\subseteq \omega, \, 0<\vert R\vert <\infty \Big\}\, , \] there exists $m<\omega$ and a nonempty finite set $R\subseteq \omega$ such that \[ \sum_{r\in R}\Vert J_{m,\,\omega}\sp{\ast} z_r\Vert >1-\frac{\epsilon}{3}\, . \] By Lemma~\ref{HaLag}, this implies that $(J_{m,\,\omega}\sp{\ast} z_r)_{r\in R}\in s_{\epsilon/3}(\cball{c_0(R, \, C_0([0,\,m]))\sp{\ast}})$, hence $Sz(c_0(R, \, C_0([0,\,m])))>1$. By Proposition~\ref{collection}(iii), this in turn implies that $c_0(R, \, C_0([0,\,m]))$ is infinite dimensional; however, this is impossible since $\dim (c_0(R, \, C_0([0,\,m])))=m\vert R\vert<\infty$. With this contradiction we have now established the assertion of the proposition for $\gamma=0$ and $n=1$. Next we show that if $\beta$ is an ordinal such that the assertion of the proposition holds for $\gamma=\beta$ and $n=1$, then the proposition is true for $\gamma=\beta$ and all $0<n<\omega$. Let $1<m<\omega$ and note that, by Lemma~\ref{pb}, for any ordinal $\beta$ we have \[ c_0(\omega^{\omega^\beta}, \, C_0([0,\,\omega^{\omega^\beta m}]))\approx c_0(\omega^{\omega^\beta}, c_0(\omega^{\omega^\beta}, \, C_0([0,\,\omega^{\omega^\beta}])))\approx c_0(\omega^{\omega^\beta}, \, C_0([0,\,\omega^{\omega^\beta}]))\, . \] Assuming the proposition is true for $n=1$ and $\gamma=\beta$, we deduce that \[ Sz(c_0(\omega^{\omega^\beta}, \, C_0([0,\,\omega^{\omega^\beta m}]))) = Sz(c_0(\omega^{\omega^\beta}, \, C_0([0,\,\omega^{\omega^\beta}]))) \leq \omega^{\beta+1}\, , \] as desired. It now remains to show that if $\beta$ is an ordinal such that the assertion of the proposition holds for all $\gamma<\beta$ and $0<n<\omega$, then the assertion of the proposition holds for $n=1$ and $\gamma=\beta$. Take such $\beta$ and note that, by Proposition~\ref{upest}, it suffices to show that \begin{equation}\label{eq2} \forall\epsilon>0 \quad s_\epsilon^{\omega^\beta}(\cball{c_0(\omega^{\omega^\beta}, \, C_0([0,\,\omega^{\omega^\beta}]))\sp{\ast}}) \subseteq \Big(1-\frac{\epsilon}{3}\Big)\cball{c_0(\omega^{\omega^\beta},\, C_0([0,\,\omega^{\omega^\beta}]))\sp{\ast}}\, . \end{equation} Suppose by way of contraposition that there is $\epsilon>0$ and $(z_\eta)_{\eta< \omega^{\omega^\beta}}\in s_\epsilon^{\omega^\beta}(\cball{c_0(\omega^{\omega^\beta}, \, C_0([0,\,\omega^{\omega^\beta}]))\sp{\ast}})$ with $\Vert (z_\eta)_{\eta< \omega^{\omega^\beta}}\Vert >1-\epsilon/3$. 
Since \[ \Vert (z_\eta)_{\eta< \omega^{\omega^\beta}}\Vert = \sup \Big\{ \sum_{r\in R}\Vert J_{\omega^{\omega^\zeta m}, \, \omega^{\omega^\beta}}\sp{\ast} z_r\Vert \,\, \Big\vert \,\, \zeta <\beta , \, 0<m<\omega, \, R\subseteq \omega^{\omega^\beta}, \, 0<\vert R\vert <\infty\Big\}\, , \] there exists $\zeta<\beta$, $0<m<\omega$ and a nonempty finite set $R\subseteq \omega^{\omega^\beta}$ such that \[ \sum_{r\in R}\Vert J_{\omega^{\omega^\zeta m},\, \omega^{\omega^\beta}}\sp{\ast} z_r\Vert >1-\epsilon/3\, . \] By Lemma~\ref{HaLag}, this implies that $(J_{\omega^{\omega^\zeta m},\, \omega^{\omega^\beta}}\sp{\ast} z_r)_{r\in R}\in s_{\epsilon/3}^{\omega^\beta}(\cball{c_0(R, \, C_0([0,\,\omega^{\omega^\zeta m}]))\sp{\ast}})$, hence $Sz(c_0(R, \, C_0([0,\,\omega^{\omega^\zeta m}]))) > \omega^\beta$. By the induction hypothesis, it follows that \[ \omega^{\beta}< Sz(c_0(R, \, C_0([0,\,\omega^{\omega^\zeta m}])))\leq \omega^{\zeta+1}\leq \omega^\beta\, , \] which is absurd. Thus (\ref{eq2}) holds, and the assertion of the proposition holds for $n=1$ and $\gamma=\beta$. The inductive proof is now complete. \end{proof} \begin{theorem}\label{stpip} Let $\alpha\geq \omega$ and let $\gamma$ be the unique ordinal satisfying $\omega^{\omega^\gamma}\leq \alpha < \omega^{\omega^{\gamma+1}}$. Then \[ Sz(C([0,\,\alpha]))=\omega^{\gamma+1}\,.\] \end{theorem} \begin{proof} Let $n<\omega$ be such that $\omega^{\omega^\gamma n}>\alpha$, so that $C([0,\,\alpha])$ is isomorphic to a subspace of $C_0([0,\,\omega^{\omega^\gamma n}])$, hence isomorphic to a subspace of $c_0(\omega^{\omega^\gamma},\,C_0([0,\,\omega^{\omega^\gamma n}]))$. Then, by Proposition~\ref{collection}(i) and Proposition~\ref{build}, \begin{align}\label{prooof} Sz(C([0,\,\alpha]))\leq Sz(c_0(\omega^{\omega^\gamma},\,C_0([0,\,\omega^{\omega^\gamma n}])))\leq \omega^{\gamma+1}. \end{align} To obtain the reverse inequality, we consider the functionals $\delta_\xi\in \cball{C([0,\,\alpha])\sp{\ast}}$, $\xi\leq \alpha$, where $\langle \delta_\xi, \, f\rangle = f(\xi)$ for each $f\in C([0,\,\alpha])$. As the map $[0,\,\alpha]\longrightarrow C([0,\,\alpha])\sp{\ast}$ is an order-$w\sp{\ast}$ homeomorphic embedding, a straightforward induction shows that $\delta_{\omega^\zeta}\in s_1^{\zeta}(\cball{C([0,\,\alpha])\sp{\ast}})$ whenever $\omega^\zeta \leq \alpha$. In particular, $s_1^{\omega^\gamma}(\cball{C([0,\,\alpha])\sp{\ast}})\ni \delta_{\omega^{\omega^\gamma}}$ is nonempty, hence $Sz(C([0,\,\alpha]))>\omega^\gamma$. By Proposition~\ref{collection}(ii), $Sz(C([0,\,\alpha]))\geq\omega^{\gamma+1}$, and we are done. \end{proof} \section{The $w^\ast$-dentability index of $C([0,\,\alpha])$}\label{secthree} In this section we discuss the $w^\ast$-dentability indices of the spaces $C([0,\,\alpha])$, where $\alpha$ is an arbitrary ordinal. For a (real) Asplund space $X$, the definitions of the $\epsilon$-$w^\ast$-dentability index $Dz(X,\,\epsilon)$ of $X$ and the $w^\ast$-dentability index $Dz(X)$ of $X$ are essentially the same as for the Szlenk indices $Sz(X, \, \epsilon)$ and $Sz(X)$, the difference being that in the derivation on $w^\ast$-compact sets we remove only $w^\ast$-slices of small norm diameter (for $x\in X$ and $t\in \mathbb{R}$, let $H(x,\,t)= \{x\sp{\ast} \in X\sp{\ast} \mid x\sp{\ast} (x)>t\}$; for $B\subseteq X\sp{\ast}$, a $w\sp{\ast}$-\emph{slice of} $B$ is any set of the form $H(x,\,t)\cap B$, where $x\in X$ and $t\in \mathbb{R}$.) To be precise, let $X$ be an Asplund space and $B \subseteq X\sp{\ast}$. 
Define \begin{equation*} d_\epsilon(B) = \{x\sp{\ast} \in B \mid \diam (B \cap H(x,\,t))> \epsilon \mbox{ for every } w\sp{\ast} \mbox{-slice }H(x,\,t)\ni x\sp{\ast}\}\,. \end{equation*} We iterate $d_\epsilon$ transfinitely, setting $d_\epsilon^0(B) = B$, $ d_\epsilon^{\beta+1}(B)= d_\epsilon(d_\epsilon^\beta(B))$ for each ordinal $\beta$ and $ d_\epsilon^\beta(B) = \bigcap_{\sigma< \beta} d_\epsilon^\sigma(B)$ whenever $\beta $ is a limit ordinal. Define ${Dz}(X,\,\epsilon)$ to be the first ordinal $\beta$ such that $d_\epsilon^\beta(\cball{X\sp{\ast}}) = \emptyset$, and $Dz(X) :=\sup_{\epsilon >0}{Dz}(X,\,\epsilon)$. Similarly to the Szlenk index, the $w^\ast$-dentability index ${Dz}(X)$ is defined for every Asplund space $X$. The natural analogues of parts (i) and (ii) of Proposition~\ref{collection} hold for the $w^\ast$-dentability index, with similar proofs. In particular, $Dz(X)\leq Dz(Y)$ whenever $X$ is a subspace of $Y$, and $Dz(X)>\omega^\gamma$ implies $Dz(X)\geq \omega^{\gamma+1}$. For part (iii), the analogous result for the $w^\ast$-dentability index is that $Dz(X)\leq \omega$ if and only if $X$ is superreflexive; this is due to Lancien \cite{Lancien1995}. Moreover, it is clear that ${Sz}(X)\leq {Dz}(X)$ for all Asplund spaces $X$; conversely, we have the following: \begin{proposition}[\cite{Lancien2006}]\label{bochnerest} Let $X$ be an Asplund space and $L_2(X)$ the Banach space of all (equivalence classes of) Bochner integrable functions $f:[0,\,1]\longrightarrow X$, equipped with its usual norm. Then \[ Dz(X)\leq Sz(L_2(X))\,. \] \end{proposition} Proposition~\ref{bochnerest} was used in \cite{H'ajek2009} to show that for ordinals $\omega^{\omega^\gamma}\leq \alpha<\omega^{\omega^{\gamma+1}}<\omega_1$, the $w^\ast$-dentability index of $C([0,\,\alpha])$ is $\omega^{1+\gamma+1}$. The authors of \cite{H'ajek2009} then extend their result to a certain nonseparable setting by showing that for a scattered compact Hausdorff space $K$ of countable height, the $w^\ast$-dentability index of $C(K)$ is $\omega^{1+\gamma+1}$, where $\gamma$ is the unique (countable) ordinal such that the height of $K$ belongs to the ordinal interval $[\omega^\gamma, \, \omega^{\gamma+1})$. The following result extends the main result of \cite{H'ajek2009} in a different direction. \begin{theorem}\label{dentthm} Let $\alpha\geq \omega$ and let $\gamma$ be the unique ordinal satisfying $\omega^{\omega^\gamma}\leq \alpha < \omega^{\omega^{\gamma+1}}$. Then \[ Dz(C([0,\,\alpha]))=\omega^{1+\gamma+1}\,.\] \end{theorem} We shall only sketch the proof of Theorem~\ref{dentthm}, as the differences between the proofs of Theorem~\ref{stpip} and Theorem~\ref{dentthm} are completely analogous to the differences between the proofs of the separable cases established in \cite{H'ajek2007} and \cite{H'ajek2009} (we note that although it is essentially possible to prove Theorem~\ref{stpip} and Theorem~\ref{dentthm} simultaneously by estimating the Szlenk index of $L_2(\mu, \, C([0,\,\alpha]))$, where $\mu$ is assumed to be either counting measure on a singleton or Lebesgue measure on $[0,\,1]$, respectively, we feel it would obscure the main ideas of the proof of Theorem~\ref{stpip} to do so). Theorem~\ref{dentthm} follows readily from the Szlenk index estimate given by the following result. \begin{proposition}\label{dentdentest} Let $\gamma$ be an ordinal and $0<n<\omega$. Then \[ Sz(L_2(c_0(\omega^{\omega^\gamma},\,C([0,\,\omega^{\omega^\gamma n}])))) \leq \omega^{1+\gamma+1}\,. 
\] \end{proposition} The main difficulty in establishing Proposition~\ref{dentdentest} is to prove the following variant of Lemma~\ref{HaLag}; its proof combines ideas from the proofs of Lemma~\ref{HaLag} and \cite[Lemma~6]{H'ajek2009}. \begin{lemma} Let $0<n<\omega$ and let $\zeta$ and $\gamma$ be ordinals satisfying either $\zeta=\gamma=0$ or $\omega^{\omega^\zeta n}<\omega^{\omega^\gamma}$. Let $\emptyset \subsetneq R\subseteq \omega^{\omega^\gamma}$, $\epsilon >0$ and let $J$ denote the canonical embedding of $L_2(c_0(R,\,C([0,\,\omega^{\omega^\zeta n}])))$ into $L_2(c_0(\omega^{\omega^\gamma},\,C([0,\,\omega^{\omega^\gamma}])))$. Let $\beta$ be an ordinal. If $z\in s_{3\epsilon}^\beta(B_{L_2(c_0(\omega^{\omega^\gamma},\,C([0,\,\omega^{\omega^\gamma}])))^\ast})$ and $\Vert J^\ast z\Vert^2 >1-\epsilon^2 $, then $J^\ast z\in s_\epsilon^\beta (B_{L_2(c_0(R,\,C([0,\,\omega^{\omega^\zeta n}])))^\ast})$. \end{lemma} The estimate $Dz(C([0,\,\alpha]))\leq \omega^{1+\gamma+1}$ follows readily from Proposition~\ref{bochnerest} and Proposition~\ref{dentdentest}. For the reverse inequality, note that for the case $\gamma\geq \omega$ we have \[ Dz(C([0,\,\alpha]))\geq Sz(C([0,\,\alpha])) \geq Sz(C([0,\,\alpha]), \, 1)> \omega^{\gamma} =\omega^{1+\gamma}\,,\] so that the required estimate follows by the aforementioned $w^\ast$-dentability index version of Proposition~\ref{collection}(ii). The case $\gamma<\omega$ follows from the fact established in \cite[Proposition~11]{H'ajek2009} that $Dz(C([0,\,\omega^{\omega^\gamma}]),\,1/2)>\omega^{1+\gamma}$ for every $\gamma<\omega$.
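For instance, for $\alpha=\omega$ the unique such ordinal is $\gamma=0$ (as $\omega^{\omega^0}=\omega\leq\omega<\omega^{\omega^1}$), so Theorems~\ref{stpip} and \ref{dentthm} give $Sz(C([0,\,\omega]))=\omega$ and $Dz(C([0,\,\omega]))=\omega^{2}$. At the other extreme, for $\alpha=\omega_1$ one has $\gamma=\omega_1$, since $\omega^{\omega^{\omega_1}}=\omega^{\omega_1}=\omega_1$; as $1+\omega_1=\omega_1$, the two indices coincide in this case: $Sz(C([0,\,\omega_1]))=Dz(C([0,\,\omega_1]))=\omega^{\omega_1+1}=\omega_1\omega$.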
{ "timestamp": "2012-10-16T02:01:43", "yymm": "1210", "arxiv_id": "1210.3696", "language": "en", "url": "https://arxiv.org/abs/1210.3696" }
\section{Introduction} Throughout, let $\mathbb{N}$, $\mathbb{N}_0= \mathbb{N}\cup\{0\}$, $\mathbb{Z}$, and $\mathbb{C}$ be the sets of positive integers, nonnegative integers, integers, and complex numbers respectively. The set of even positive integers and the set of odd positive integers will be written $2 \mathbb{N}$ and $2 \mathbb{N} -1$ respectively. We will use $p$ to denote prime numbers and for simplicity we shall often write $(m,n)$ rather than $\gcd(m,n)$. Next, $\phi(n)$ will denote the Euler totient function and $\mu(n)$ will denote the M\"{o}bius mu function. For our current purposes we record the following well-known properties of these two functions. If $n>1$ and $s \in \mathbb{Z}\setminus\{0\}$, then \begin{equation} \label{basic-phi-mu} \phi(n) = n \prod_{p \mid n}\left( 1 - \frac{1}{p} \right), \ \sum_{d\mid n} \phi(d) = n,\ \sum_{d\mid n} \mu(d) = 0,\ \sum_{d\mid n} \mu(d) d^s = \prod_{p\mid n} (1-p^s). \end{equation} Further, it is easy to verify the following formula, which is crucial in this paper. If $k,n \in\mathbb{N}$ with $n>1$, then \begin{equation} \label{primitive-sum} \sum_{\substack{1\leq l <n\\ (l,n)=1}} l^{k} = \sum_{d\mid n} \mu(d) d^{k}\sum_{j=1}^{\frac{n}{d}} j^{k} = \sum_{d\mid n} \mu(d) d^{k}\sum_{j=1}^{\frac{n}{d}-1} j^{k} \end{equation} \[ = \sum_{d\mid n} \mu(d) \frac{d^{k}}{k+1} \sum_{j=0}^{k} \binom{k+1}{j} B_j \left(\frac{n}{d} \right)^{k+1-j}, \] where $B_j$ denotes the $j$th Bernoulli number, for which $B_1 = -1/2$ and $B_{2j+1}=0$ for $j\in \mathbb{N}$, while the first few values for even $j$ are \[ B_0=1,\ B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, B_8 = -1/30. \] \begin{definition} For $n \in \mathbb{N}$ let the sets $\mathcal{B}(n)$ and $\mathcal{B}'(n)$ be defined as follows: \[ \mathcal{B}(n) = \{(a,b,x,y)\in\mathbb{N}^4:\ ax+by=n \}, \] \[ \mathcal{B}'(n) = \{(a,b,x,y)\in\mathbb{N}^4:\ ax+by=n,\ \text{and\ } \gcd(a,b)= \gcd(x,y)=1 \}. \] \noindent For $n \in 2\mathbb{N}$ let the sets $\mathcal{O}(n)$ and $\mathcal{O}'(n)$ be defined as follows: \[ \mathcal{O}(n) = \{(a,b,x,y)\in(2\mathbb{N}-1)^4:\ ax+by=n \}, \] \[ \mathcal{O}'(n) = \{(a,b,x,y)\in(2\mathbb{N}-1)^4:\ ax+by=n,\ \text{and\ } \gcd(a,b)= \gcd(x,y)=1 \}. \] \end{definition} \begin{definition} For $1< n\in\mathbb{N}$ let the numbers $G'(n)$, $H'(n)$, and $I'(n)$ be defined as follows: \[ \begin{split} G'(n) &= \# \{ (a,b,c,d,x,y)\in\mathbb{N}_{0}^{2} \times \mathbb{N}^{4}:\ ( (a+c)^{\frac{1}{3}}, (b+d), x, y) \in \mathcal{B}'(n) \} \\ H'(n) &= \# \{ (a,b,c,d,k,x,y) \in \mathbb{N}_0^{2} \times \mathbb{N}^{5}:\ ( (a+c), (k(b+d))^{\frac{1}{3}}, x, y) \in \mathcal{B}'(n)\ \text{and\ } (b,d)=1 \} \\ I'(n) &= \# \{ (a,b,c,d,k,l,x,y) \in\mathbb{N}_0^{2}\times \mathbb{N}^{6}:\ ( (k(a+c))^{\frac{1}{3}} , l(b+d), x, y) \in \mathcal{B}'(n)\ \text{and\ } (a,c)=(b,d)=1 \}. \end{split} \] \noindent For $n\in 2 \mathbb{N}$ let the numbers $J'(n)$, $K'(n)$, and $L'(n)$ be defined as follows: \[ \begin{split} J'(n) &= \# \{ (a,b,c,d,x,y)\in\mathbb{N}_{0}^{2} \times \mathbb{N}^{4}:\ ( (a+c)^{\frac{1}{3}}, (b+d), x, y) \in \mathcal{O}'(n) \} \\ K'(n) &= \# \{ (a,b,c,d,k,x,y) \in \mathbb{N}_0^{2} \times \mathbb{N}^{5}:\ ( (a+c), (k(b+d))^{\frac{1}{3}}, x, y) \in \mathcal{O}'(n)\ \text{and\ } (b,d)=1 \} \\ L'(n) &= \# \{ (a,b,c,d,k,l,x,y) \in\mathbb{N}_0^{2}\times\mathbb{N}^{6}:\ ( (k(a+c))^{\frac{1}{3}} , l(b+d), x, y) \in \mathcal{O}'(n)\ \text{and\ } (a,c)=(b,d)=1 \}.
\end{split} \] \end{definition} Williams in \cite[Theorem 11.1]{Williams} reproduced Jacobi's four squares formula with the help of the following sum over the set $\mathcal{B}(n)$, which is due to Liouville~\cite{Liouville4}. For a positive integer $n$ and an even function $f: \mathbb{Z} \to \mathbb{C}$ we have \[ \begin{split} \sum_{(a,b,x,y)\in \mathcal{B}(n)} \big( f(a-b)-f(a+b) \big) & = f(0)\big(\sigma(n)-d(n) \big) \\ & \quad + \sum_{\substack{d \in \mathbb{N} \\ d\mid n}} ( 1+2n/d -d ) f(d) - 2 \sum_{\substack{d \in \mathbb{N} \\ d\mid n}}\big( \sum_{l=1}^d f(l) \big), \end{split} \] where $d(n)$ is the number of positive divisors of $n$ and $\sigma(n)$ is the sum of these divisors. For integer representations obtained using sums of this type we refer to \cite{Williams-2006, Williams-2008}. In this note we shall give exact formulas for each of the numbers $G'(n)$, $H'(n)$, $I'(n)$, $J'(n)$, $K'(n)$, and $L'(n)$. Our main tool is the following theorem of the author on sums over the sets $\mathcal{B}'(n)$ and $\mathcal{O}'(n)$. \begin{theorem}\cite{Elbachraoui}\label{main1} (a) If $1<n\in \mathbb{N}$ and $f:\mathbb{Z}\to \mathbb{C}$ is an even function, then \begin{equation*} \sum_{(a,b,x,y)\in \mathcal{B}'(n)} \big( f(a-b)-f(a+b) \big) = \big(f(0) + 2f(1) -f(n) \big) \phi(n) - 2 \sum_{\substack{ 1\leq l < n \\ (l,n)=1}} f(l). \end{equation*} \noindent (b) If $n \in 2\mathbb{N}$ and $f:\mathbb{Z}\to \mathbb{C}$ is an even function, then \begin{equation*} \sum_{(a,b,x,y)\in \mathcal{O}'(n)} \big( f(a-b)-f(a+b) \big) = \big(f(0) -f(n) \big) \phi(n). \end{equation*} \end{theorem} \section{The results} \noindent We will need the following lemma, which is the analogue of Theorem~12.3 in \cite{Williams}. \begin{lemma} \label{general-application} (a) If $k, n \in \mathbb{N}$ with $n>1$, then \[ \sum_{s=0}^{k-1}\binom{2k}{2s+1}\left( \sum_{(a,b,x,y)\in \mathcal{B}'(n)} a^{2k-2s-1} b^{2s+1} \right) \] \[ = \frac{n^{2k}-2}{2}\phi(n) + \frac{1}{2k+1} \sum_{d\mid n} \mu(d) d^{2k} \left( \sum_{j=0}^{2k} \binom{2k+1}{j} B_j \left(\frac{n}{d} \right)^{2k+1-j} \right). \] \noindent (b) If $k\in \mathbb{N}$ and $n\in 2\mathbb{N}$, then \[ \sum_{s=0}^{k-1}\binom{2k}{2s+1}\left( \sum_{(a,b,x,y)\in \mathcal{O}'(n)} a^{2k-2s-1} b^{2s+1} \right) = n^{2k} \phi(n). \] \end{lemma} \begin{proof} (a) Application of Theorem~\ref{main1}(a) to the even function $f(x) = x^{2k}$ yields \begin{equation}\label{help1} \sum_{(a,b,x,y) \in \mathcal{B}'(n)} \bigl( (a-b)^{2k} - (a+b)^{2k} \bigr) = (2- n^{2k}) \phi(n) - 2\sum_{\substack{1\leq l <n\\ (l,n)=1}} l^{2k}. \end{equation} The left hand side of (\ref{help1}) simplifies to \begin{multline*} -2 \sum_{(a,b,x,y)\in \mathcal{B}'(n)} \sum_{s=0}^{k-1}\binom{2k}{2s +1} a^{2k-2s-1} b^{2s+1} \\ = -2\sum_{s=0}^{k-1}\binom{2k}{2s +1} \sum_{(a,b,x,y)\in \mathcal{B}'(n)} a^{2k-2s-1} b^{2s+1}. \end{multline*} On the other hand, by virtue of identity (\ref{primitive-sum}) the right hand side of (\ref{help1}) becomes \[ (2- n^{2k}) \phi(n) - 2 \sum_{d\mid n} \mu(d) \frac{d^{2k}}{2k+1} \sum_{j=0}^{2k} \binom{2k+1}{j} B_j (n/d)^{2k+1-j}. \] Equating the left and the right hand sides of (\ref{help1}) and dividing by $-2$ yields \[ \sum_{s=0}^{k-1}\binom{2k}{2s +1} \sum_{(a,b,x,y)\in \mathcal{B}'(n)} a^{2k-2s-1} b^{2s+1} \] \[ = \frac{(n^{2k} - 2)}{2} \phi(n) + \sum_{d\mid n} \mu(d) \frac{d^{2k}}{2k+1} \sum_{j=0}^{2k} \binom{2k+1}{j} B_j (n/d)^{2k+1-j}, \] as desired.
\noindent (b) Similar to the previous part with an application of Theorem~\ref{main1}(b) to the even function $f(x)= x^{2k}$. \end{proof} \begin{theorem} \label{G'-H'-K'} (a) If $1<n \in\mathbb{N}$, then \[ G'(n) = H'(n) = I'(n) = \frac{7 n^5 -10 n}{80} \prod_{p\mid n}\left(1 - \frac{1}{p} \right) + \frac{n^3}{24} \prod_{p\mid n} (1-p) - \frac{n}{240} \prod_{p\mid n} (1-p^3). \] \noindent (b) If $n\in 2\mathbb{N}$, then \[ J'(n) = K'(n) = L'(n) = \frac{n^5}{8} \prod_{p\mid n}\left(1 - \frac{1}{p} \right). \] \end{theorem} \begin{proof} (a) We have \[ \begin{split} G'(n) & = \sum_{\substack{(a,b,c,d,x,y)\in\mathbb{N}_0{}^2 \times \mathbb{N}^4 \\ (a+c)^{1/3} x + (b+d)y =n \\ ( (a+c)^{1/3}, b+d) = (x,y)=1}} 1 \\ & = \sum_{(u,v,x,y) \in \mathcal{B}'(n)} \Bigl(\sum_{\substack{(a,c)\in \mathbb{N}_0\times \mathbb{N} \\ a+c= u^3}} 1 \Bigr) \Bigl(\sum_{\substack{(b,d)\in \mathbb{N}_0\times \mathbb{N} \\ b+d= v}} 1 \Bigr) \\ & = \sum_{(u,v,x,y)\in \mathcal{B}'(n)} u^3 v, \end{split} \] \[ \begin{split} H'(n) & = \sum_{\substack{(a,b,c,d,k,x,y)\in\mathbb{N}_0^{2} \times \mathbb{N}^5 \\ (a+c)x + (k(b+d))^{1/3} y = n \\ \bigl(a+c, (k(b+d))^{1/3} \bigr) = (x,y)=(b,d)=1}} 1 \\ & = \sum_{(u,v,x,y) \in \mathcal{B}'(n)} \Bigl(\sum_{\substack{(a,c)\in \mathbb{N}_0\times \mathbb{N} \\ a+c= u}} 1 \Bigr) \Bigl(\sum_{e\mid v^3}\sum_{\substack{(b,d)\in \mathbb{N}_0 \times \mathbb{N} \\ b+d= e \\ (b,d)=1}} 1 \Bigr) \\ & = \sum_{(u,v,x,y) \in \mathcal{B}'(n)} u \sum_{e\mid v^3} \phi(e) = \sum_{(u,v,x,y) \in \mathcal{B}'(n)} u v^3, \end{split} \] and \[ \begin{split} I'(n) & = \sum_{\substack{(a,b,c,d,k,l,x,y)\in\mathbb{N}_0^{2}\times\mathbb{N}^6 \\ (k(a+c))^{1/3} x + l(b+d) y =n \\ \bigl( k(a+c), l(b+d) \bigr) = (x,y)= (a,c) = (b,d)=1 }} 1 \\ & = \sum_{(u,v,x,y)\in \mathcal{B}'(n)} \Bigl(\sum_{e\mid u^3}\sum_{\substack{(a,c)\in\mathbb{N}_0\times\mathbb{N} \\ a+c= e \\ (a,c)=1}} 1 \Bigr) \Bigl(\sum_{f\mid v}\sum_{\substack{(b,d)\in\mathbb{N}_0\times\mathbb{N} \\ b+d= f \\ (b,d)=1}} 1 \Bigr) \\ & = \sum_{(u,v,x,y) \in \mathcal{B}'(n)} \sum_{e\mid u^3}\phi(e) \sum_{f\mid v} \phi(f) = \sum_{(u,v,x,y) \in \mathcal{B}'(n)} u^3 v. \end{split} \] Since the map $(u,v,x,y)\mapsto (v,u,y,x)$ is a bijection of $\mathcal{B}'(n)$ onto itself, we also have $\sum_{(u,v,x,y) \in \mathcal{B}'(n)} u v^3 = \sum_{(u,v,x,y) \in \mathcal{B}'(n)} u^3 v$. This shows that $G'(n)=H'(n)=I'(n)= \sum_{(u,v,x,y) \in \mathcal{B}'(n)} u^3 v$, and so we will be done if we prove that \[ \sum_{(a,b,x,y) \in \mathcal{B}'(n)} a^3 b = \frac{7 n^5 -10 n}{80} \prod_{p\mid n}\left(1 - \frac{1}{p} \right) + \frac{n^3}{24} \prod_{p\mid n} (1-p) - \frac{n}{240} \prod_{p\mid n} (1-p^3). \] Taking $k=2$ in Lemma~\ref{general-application}(a) gives \[ 4\sum_{(a,b,x,y)\in \mathcal{B}'(n)} a^3 b + 4 \sum_{(a,b,x,y)\in \mathcal{B}'(n)} a b^3 \] \[ = \frac{n^4 -2}{2} \phi(n) + \frac{1}{5} \sum_{d\mid n} \mu(d) d^4 \left( B_0 \frac{n^5}{d^5} + 5 B_1 \frac{n^4}{d^4} + 10 B_2 \frac{n^3}{d^3} + 10 B_3 \frac{n^2}{d^2} + 5 B_4 \frac{n}{d} \right), \] or equivalently, \[ 8 \sum_{(a,b,x,y) \in \mathcal{B}'(n)} a^3 b \] \[ = \frac{n^4 -2}{2} \phi(n) + \frac{n^5}{5} \sum_{d\mid n}\mu(d) d^{-1} + \frac{n^3}{3} \sum_{d\mid n}\mu(d) d -\frac{n}{30} \sum_{d\mid n}\mu(d) d^3 . \] With the help of the properties (\ref{basic-phi-mu}) the previous identity yields \[ \sum_{(a,b,x,y) \in \mathcal{B}'(n)} a^3 b \] \[ = \frac{n^5 - 2n}{16} \prod_{p\mid n} \left( 1- \frac{1}{p} \right) + \frac{n^5}{40} \prod_{p\mid n} \left( 1- \frac{1}{p} \right) + \frac{n^3}{24} \prod_{p\mid n} (1-p) - \frac{n}{240} \prod_{p\mid n} (1-p^3) \] \[ = \frac{7 n^5 -10 n}{80} \prod_{p\mid n}\left(1 - \frac{1}{p} \right) + \frac{n^3}{24} \prod_{p\mid n} (1-p) - \frac{n}{240} \prod_{p\mid n} (1-p^3).
\] This completes the proof of part (a). Part (b) follows similarly and the details are left to the reader. \end{proof} \section{Final remarks} With the help of a result of Williams in \cite{Williams} we shall give in this section formulas for the following related numbers: \[ \begin{split} G(n) &= \# \{ (a,b,c,d,x,y)\in\mathbb{N}_{0}^{2} \times \mathbb{N}^{4}:\ ( (a+c)^{\frac{1}{3}}, (b+d), x, y) \in \mathcal{B}(n) \} \\ H(n) &= \# \{ (a,b,c,d,k,x,y) \in \mathbb{N}_0^{2} \times \mathbb{N}^{5}:\ ( (a+c), (k(b+d))^{\frac{1}{3}}, x, y) \in \mathcal{B}(n)\ \text{and\ } (b,d)=1 \} \\ I(n) &= \# \{ (a,b,c,d,k,l,x,y) \in\mathbb{N}_0^{2}\times\mathbb{N}^{6}:\ ( (k(a+c))^{\frac{1}{3}} , l(b+d), x, y) \in \mathcal{B}(n)\ \text{and\ } (a,c)=(b,d)=1 \}. \end{split} \] For this purpose, let $\sigma_{m}(n)$ be the sum of the $m$th powers of the divisors of $n$, where $m\in\mathbb{N}_0$ and $n\in\mathbb{N}$. \begin{theorem} \label{H-G-I} If $1<n \in\mathbb{N}$, then \[ G(n) = H(n) = I(n) = \frac{7}{80} \sigma_{5}(n) + \left( \frac{1}{24}-\frac{1}{8}n \right) \sigma_{3}(n) - \frac{1}{240} \sigma(n). \] \end{theorem} \begin{proof} On the one hand, by an argument similar to that in the proof of Theorem~\ref{G'-H'-K'}(a) we find \[ G(n) = H(n) = I(n) = \sum_{(u,v,x,y) \in \mathcal{B}(n)} u^3 v. \] On the other hand, grouping the tuples of $\mathcal{B}(n)$ according to the value $m=ux$ and using $\sum_{u\mid m} u^3 = \sigma_3(m)$ and $\sum_{v\mid n-m} v = \sigma(n-m)$, it is easily seen that \[ \sum_{(u,v,x,y) \in \mathcal{B}(n)} u^3 v = \sum_{m=1}^{n-1} \sigma_{3} (m) \sigma(n-m). \] Moreover, by Williams~\cite[Example 12.3, p. 128]{Williams} we have \[ \sum_{m=1}^{n-1} \sigma_{3} (m) \sigma(n-m) = \frac{7}{80} \sigma_{5}(n) + \left( \frac{1}{24}-\frac{1}{8}n \right) \sigma_{3}(n) - \frac{1}{240} \sigma(n). \] Combining the previous identities yields the result. \end{proof}
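Both closed-form expressions above are easy to confirm numerically. The following Python sketch does so by brute force; it assumes the definitions of the sets from the introduction, namely $\mathcal{B}(n)=\{(a,b,x,y)\in\mathbb{N}^4:\ ax+by=n\}$ and $\mathcal{B}'(n)$ its subset with $(a,b)=(x,y)=1$, and all helper names are ours.
\begin{verbatim}
# Brute-force check of the two closed-form expressions (a sketch; it assumes
# B(n)  = {(a,b,x,y) in N^4 : ax + by = n} and
# B'(n) = {(a,b,x,y) in B(n) : gcd(a,b) = gcd(x,y) = 1}).
from fractions import Fraction as F
from math import gcd

def tuples(n, coprime):
    """Yield (a, b) over all (a,b,x,y) with ax + by = n."""
    for a in range(1, n):
        for x in range(1, n // a + 1):
            r = n - a * x
            if r < 1:
                break
            for b in range(1, r + 1):
                if r % b == 0:
                    y = r // b
                    if not coprime or (gcd(a, b) == 1 and gcd(x, y) == 1):
                        yield a, b

def primes(n):
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    return ps + ([n] if n > 1 else [])

def sigma(n, m=1):
    return sum(e ** m for e in range(1, n + 1) if n % e == 0)

for n in range(2, 35):
    P1 = P2 = P3 = F(1)
    for p in primes(n):
        P1 *= 1 - F(1, p)
        P2 *= 1 - p
        P3 *= 1 - p ** 3
    # G'(n) = H'(n) = I'(n):
    lhs = sum(a ** 3 * b for a, b in tuples(n, coprime=True))
    assert lhs == F(7 * n**5 - 10 * n, 80) * P1 + F(n**3, 24) * P2 - F(n, 240) * P3
    # G(n) = H(n) = I(n):
    lhs = sum(a ** 3 * b for a, b in tuples(n, coprime=False))
    assert lhs == F(7, 80) * sigma(n, 5) + (F(1, 24) - F(n, 8)) * sigma(n, 3) \
           - F(1, 240) * sigma(n)
print("both formulas verified for 2 <= n < 35")
\end{verbatim}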
{ "timestamp": "2012-10-29T01:01:37", "yymm": "1210", "arxiv_id": "1210.3708", "language": "en", "url": "https://arxiv.org/abs/1210.3708" }
\section{Introduction} Distributed storage systems (DSS) are designed to store data over a distributed network of nodes. DSS have become increasingly important given the growing volumes of data being generated, analyzed and archived today. OceanStore \cite{Rhea:Pond03}, Google File System (GFS) \cite{Ghemawat:Google03}, and TotalRecall \cite{Bhagwan:Total04} are a few examples of storage systems in use. The volume of data to be stored is more than doubling every two years, making efficiency in storage and data recovery particularly critical. In the majority of the existing literature, the analysis of DSS focuses on isolated node failures. In this work, we study the more general scenario of DSS that can suffer multiple simultaneous node failures. In addition to multiple node failures, DSS are also inherently susceptible to adversarial attacks, such as attacks from eavesdroppers aiming to gain access to the stored data. Therefore, a ``good'' DSS would meet the desired security requirements while performing efficient repairs even in the case of multiple simultaneous node failures. In \cite{Dimakis:Network10}, Dimakis et al. present a class of \emph{regenerating codes}, which efficiently trade off per-node storage and repair bandwidth for single node repair. These codes are designed to possess a maximum distance separable (MDS) property, which is an \emph{``any $k$ out of $n$''} property, wherein the entire data can be reconstructed by contacting any $k$ of the $n$ storage nodes. By utilizing a network coding approach, the notion of {\em functional repair} is developed in \cite{Dimakis:Network10}, where the original failed node may not be replicated exactly, but can be repaired such that it is {\em functionally} equivalent. On the other hand, {\em exact repair} requires that the regeneration process results in an exact replica of the data stored on the failed node. This is desirable for ease of maintenance and for other practical purposes, such as maintaining a code in its systematic form. Exact repair may also prove to be advantageous compared to functional repair in the presence of eavesdroppers, as the latter scheme requires updating the coding coefficients, which in turn may leak additional information to eavesdroppers \cite{Pawar:Securing11}. The design of exact regenerating codes achieving one of the two ends of the trade off between storage and repair bandwidth has recently been investigated by several researchers. In particular, Rashmi et al. \cite{Rashmi:Optimal11} design codes that are optimal for all parameters at the minimum bandwidth regenerating (MBR) point. For the minimum storage regenerating (MSR) point, optimal codes are presented in multiple recent papers. (See \cite{Papailiopoulos:Repair11,Cadambe:Permutation11,Tamo:Zigzag11,Wang:codes11} and references therein.) As discussed before, DSS can also exhibit multiple simultaneous node failures, and it is desirable that these be repaired simultaneously. It is not uncommon for multiple failures to occur in DSS, especially in large-scale systems. In addition, some DSS administrators may choose to initiate a repair process only after a critical number of failures (say $t$ of them) has occurred, in order to render the entire process more efficient and less frequent.
For example, TotalRecall \cite{Bhagwan:Total04} currently executes a node repair process only after a certain threshold on the number of failures is reached. In such multiple failure scenarios, each new node replacing a failed one can still contact $d$ remaining (surviving) nodes to download data for the repair process. In addition, replacement nodes, after downloading data from surviving nodes, can also exchange data among themselves to complete the repair process. This repair process is referred to as {\em cooperative repair} in \cite{Hu:Cooperative10}, which presents network coding techniques to implement such repairs. Cooperative repair is shown to be beneficial, as it can help in lowering the total repair bandwidth compared to performing $t$ independent single-node ($t=1$) repairs. Flexibility in the choice of the nodes contacted for downloads during repair is analyzed in~\cite{Wang:MFR10}. \cite{Kermarrec:Repairing11}, focusing on functional repair, shows that under the constraint $n=d+t$, deliberately delaying repairs (and thus increasing $t$) does not result in gains in terms of MBR/MSR optimality. \cite{Kermarrec:Repairing11} and \cite{Shum:Existence11} utilize a cut-set bound argument and derive the cooperative counterpart of the end points of the trade off region. These two points are named the minimum bandwidth cooperative regenerating (MBCR) point and the minimum storage cooperative regenerating (MSCR) point (see also~\cite{Oggier:Coding12}). The work in~\cite{Shum:Existence11} shows the existence of cooperative regenerating codes with optimal repair bandwidth. Explicit code constructions for exact repair in this setup are presented in~\cite{Shum:Exact11}, for the MBCR point, and in~\cite{Shum:Cooperative11}, for the MSCR point. These constructions are designed for the setting of $d=k$. (See also~\cite{Shum:Cooperative12}.) Interference alignment is used in~\cite{LeScouarnec:Exact12} to construct scalar codes that operate at the MSCR point. (This construction is limited to the case $k=2$ with $d\geq k$, and does not generalize to $k\geq 3$ with $d>k$.) An explicit construction for the MBCR point, with the restriction that $n=d+t$ for any $t\geq 1$, is presented in~\cite{Jiekak:CROSS12}. Finally, the reference~\cite{Wang:Exact12} presents designs of scalar codes for the MBCR point for all possible parameter values. Given the significance of cooperative repair in DSS, regenerating codes that are resilient to eavesdropping attacks are of greater value if they also admit efficient cooperative repair mechanisms. The security of systems can be understood in terms of their resilience to either (or both) active or passive attacks~\cite{Goldreich:Foundations04, Delfs:Introduction07}. Active attacks include settings where the attacker modifies existing packets or injects new ones into the system, whereas passive attacks include eavesdroppers observing the information being stored/transmitted. For DSS, cryptographic approaches like private-key cryptography are often logistically prohibitive, as distributing secret keys between each pair of nodes and renewing them is highly challenging, especially for large-scale systems. In addition, most cryptographic approaches are based on certain hardness assumptions, which, if refuted, could leave the system vulnerable to attacks. On the other hand, information theoretic security, see, e.g., \cite{Shannon:Communication49,Wyner:Wire-tap75}, provides secrecy guarantees even against eavesdroppers with unbounded computational power, without requiring the sharing or distribution of keys.
This approach is based on the design of secrecy-achieving coding schemes that take into account the amount of information leaked to eavesdroppers, and can offer new solutions to security challenges in DSS. In its simplest form, security can be achieved with the one-time pad scheme~\cite{Vernam:Cipher26}, which guarantees the security of the ciphertext obtained by XORing the data with a uniformly distributed key. This approach is of significant value to DSS. For example, consider a system storing the key at one node and the ciphertext at another node. Then, an eavesdropper will not obtain any information by observing either one of these two nodes, whereas the data collector can contact both nodes and decipher the data. The problem of designing secure DSS against eavesdropping attacks has recently been studied by Pawar et al. \cite{Pawar:Securing11}, where the authors consider a passive eavesdropper that observes the data stored on $\ell$ $(<k)$ storage nodes of a DSS employing an MBR code. The proposed schemes are designed for the ``bandwidth limited regime'', and are shown to achieve an upper bound on the secure file size, establishing their optimality. Shah et al.~\cite{Shah:Information11} consider the design of secure MSR codes. Here, they show that the eavesdropper model for an MSR code should be extended compared to that of an MBR code. The underlying reason is that at the MSR point of operation, the eavesdropper may obtain additional information by observing the downloaded information (as compared to just observing the stored information). Thus, at the MSR point, the eavesdropper is modeled with a pair ($\ell_1,\ell_2$) with $\ell_1+\ell_2<k$, where the eavesdropper knows the content of $\ell_1$ nodes and, in addition, the downloaded information (and hence also the stored content) of $\ell_2$ further nodes. We note that, as the downloaded data is stored for minimum bandwidth regenerating codes, the two notions are different only at the minimum storage point. Considering such an eavesdropper model, Shah et al. present coding schemes utilizing product matrix codes \cite{Rashmi:Optimal11}, and show that the bound on secrecy capacity in \cite{Pawar:Securing11} at MBR is achievable. They further use product matrix based codes at the MSR point as well, and show that the bound in \cite{Pawar:Securing11} is achievable only when $\ell_2 = 0$. In addition to this classical MBR/MSR setting, the security aspects of locally repairable codes (see, e.g., \cite{Oggier:Homomorphic11, Oggier:Self11,Gopalan:locality11,Huang:Pyramid07,Prakash:Optimal12,Papailiopoulos:Locally12}) are studied in~\cite{Rawat:Optimal12}; and security against active adversaries is investigated in~\cite{Oggier:Byzantine11, Rashmi:Regenerating12,Silberstein:Error12}. In this paper, we analyze and design secure and cooperative regenerating codes for DSS. In terms of security requirements, we utilize a passive and colluding eavesdropper model as presented in \cite{Shah:Information11}. In this model, during the entire life span of the DSS, the eavesdropper can gain access to the data stored on $\ell_1$ nodes and, in addition, observes both the stored content and the data downloaded (for repair) at $\ell_2$ additional nodes. Given this eavesdropper model, we focus on the problem of designing secure regenerating codes in the context of DSS that perform multiple node repairs in a cooperative manner.
This scenario generalizes the single node repair setting considered in earlier works to multiple node failures. First, we present an upper bound on the secrecy capacity for MBCR codes, together with a secure coding scheme that achieves this bound. This proves the tightness of the bound and characterizes the secrecy capacity for MBCR codes. Next, we address the secrecy capacity of a DSS employing MSCR codes, and show that the existing MSCR codes can be made secure against eavesdropping. In this minimum storage setup, our codes match the upper bound on the secure file size in special cases. In all scenarios, the achievability results allow for exact repair, and the secure file size upper bounds are obtained from min-cut analyses over the secrecy graph representation of DSS. The main secrecy achievability argument of the paper utilizes a secrecy precoding scheme to obtain secure coding schemes for DSS. In some cases, this precoding is established simply with the one-time pad scheme, and in others {\em maximum rank distance} (MRD) codes are utilized, in a manner similar to the classical work of~\cite{Shamir:How79}. The rest of the paper is organized as follows. In Section II, we provide the general system model together with some preliminary results utilized throughout the text. Section III provides the analysis of secure MBCR codes, and Section IV is devoted to the secure MSCR codes. The paper is concluded in Section V, and, to enhance the flow of the paper, some of the results and proofs are relegated to appendices. \section{System Model and Preliminaries} \label{sec:sys_model} Consider a DSS with $n$ live nodes at a time and a file ${\bf f}$ of size $\mathcal{M}$ over $\mathbb{F}_q$ that needs to be stored on the DSS. In order to store the file ${\bf f}$, it is divided into $k$ blocks of size $\frac{\mathcal{M}}{k}$ each. Let $(\mathbf{f}_1, \ldots, \mathbf{f}_k)$ denote these $k$ blocks. Here, we have $\mathbf{f}_i \in \mathbb{F}^{\frac{\mathcal{M}}{k}}_{q}$. These $k$ data blocks are encoded into $n$ data blocks, $(\mathbf{x}_1,\ldots, \mathbf{x}_n)$, each of length $\alpha$ over $\mathbb{F}_q$ ($\alpha \geq \frac{\mathcal{M}}{k}$). Given the codewords, node $i$ in an $n$-node DSS stores the encoded block ${\bf x}_i$. In this paper, we use ${\bf x}_i$ to represent both the block ${\bf x}_i$ and the storage node storing this encoded block, interchangeably. Motivated by the MDS property of the codes that are traditionally proposed to store data in centralized storage systems \cite{Patterson:case88, Blaum:EVENODD95, Blaum:MDS96}, the works on regenerating codes focus on storage schemes that have the ``any $k$ out of $n$'' property, i.e., the content of any $k$ nodes suffices to recover the file. We focus on codes achieving this property. We use the following notation throughout the text. Vectors are denoted by lower-case bold letters, and sets and subspaces by calligraphic fonts. For $a<b$, $[a:b]$ represents the set of numbers $\{a,a+1,\cdots, b\}$. (This is shortened as $[b]$ for $[1:b]$, and brackets are omitted in subscripts to improve readability.) The symbols stored at node $i$ are represented by the vector ${\bf s}_i$, the symbols transmitted from node $i$ to node $j$ are denoted by ${\bf d}_{i,j}$, and ${\bf d}_j$ denotes all of the symbols downloaded to node $j$. The DSS is initialized with the $n$ nodes containing the encoded symbols, i.e., ${\bf s}_i={\bf x}_i$ for $i=1, \cdots, n$.
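To make the ``any $k$ out of $n$'' property concrete, the following minimal sketch encodes $k$ data symbols into $n$ symbols with a Vandermonde (Reed--Solomon type) code over a prime field; the field size, parameters, and helper names are our own illustrative choices, not part of the constructions studied later, which add repair structure on top of this property.
\begin{verbatim}
# Minimal illustration of the "any k out of n" (MDS) property over F_q.
q = 257                                   # a prime larger than n

def encode(data, n):
    """x_j = sum_i data[i] * j^i (mod q): evaluations of a degree < k
    polynomial at the points j = 1, ..., n."""
    return [sum(u * pow(j, i, q) for i, u in enumerate(data)) % q
            for j in range(1, n + 1)]

def solve(A, b):
    """Solve A z = b over F_q by Gauss-Jordan elimination."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    m = len(M)
    for c in range(m):
        p = next(r for r in range(c, m) if M[r][c] % q)
        M[c], M[p] = M[p], M[c]
        inv = pow(M[c][c], q - 2, q)      # Fermat inverse
        M[c] = [v * inv % q for v in M[c]]
        for r in range(m):
            if r != c and M[r][c]:
                M[r] = [(u - M[r][c] * v) % q for u, v in zip(M[r], M[c])]
    return [row[-1] for row in M]

def decode(pairs):
    """Recover the k data symbols from any k pairs (j, x_j)."""
    k = len(pairs)
    return solve([[pow(j, i, q) for i in range(k)] for j, _ in pairs],
                 [x for _, x in pairs])

data = [42, 7, 19]                        # k = 3 data symbols
stored = encode(data, n=6)                # n = 6 storage nodes
assert decode([(5, stored[4]), (2, stored[1]), (6, stored[5])]) == data
\end{verbatim}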
\subsection{Cooperative repair in DSS} In most of the studies on DSS, exact repair for regenerating codes is analyzed in the context of a single node failure. However, it is not uncommon to see multiple simultaneous node failures in storage networks, especially large ones. The basic setup involves the simultaneous repair of $t$ (possibly greater than one) failed nodes. After the failure of $t$ storage nodes, the same number of newcomer nodes are introduced to the system. Each such node contacts $d$ live storage nodes and downloads $\beta$ symbols from each of these nodes. In addition, utilizing a cooperative approach, each newcomer node also contacts the other nodes under repair and downloads $\beta'$ symbols from each of them. Hence, the total repair cost is given by \begin{equation} \gamma=d\beta + (t-1) \beta'. \end{equation} To repair the $i$-th node of the original network, the corresponding newcomer node uses these $d\beta+(t-1)\beta'$ downloaded symbols to regenerate the $\alpha$ symbols $\mathbf{x}_i$, and stores them. This exact repair process preserves the MDS property, i.e., the data stored on any $k$ nodes (potentially including the nodes that are repaired) allows the original file ${\bf f}$ to be reconstructed. See Fig.~\ref{fig:FlowGraph1}. We remark that, as also argued in~\cite{Wang:Exact12}, $d\geq k$ can be assumed without loss of generality. (Earlier papers on the subject assumed $d\geq k$ and noted that this is for simplicity. See, e.g.,~\cite{Shum:Cooperative11,Shum:Exact11,LeScouarnec:Exact12, Jiekak:CROSS12,Shum:Cooperative12}.) Indeed, if $d<k$, a data collector can reconstruct the whole file by contacting only $d$ nodes, since from these nodes the remaining nodes can be repaired in groups of size $t$. Thus, any $(n,k,d)$ code with $d<k$ can be reduced to an $(n,k'=d,d)$ code. Therefore, without loss of generality, we will assume $d\geq k$. \begin{figure}[t] \centering \includegraphics[width=1.00\columnwidth]{fig_CooperativeFlowGraph1} \caption{Information flow graph of a DSS implementing cooperative repair. In this representative example, we have $n=5$, $d=k=3$, and $t=2$. Accordingly, after a failure of two nodes, namely node $1$ and node $2$, the system cooperatively repairs these two nodes as node $6$ and node $7$. Downloads from live nodes (blue) and from cooperative repair pairs (green) are shown. Due to exact repair, the network will repair the nodes to satisfy $x^{\rm out}_6=x^{\rm out}_1$ and $x^{\rm out}_7=x^{\rm out}_2$.} \label{fig:FlowGraph1} \end{figure} \subsection{Information flow graph} In their seminal work \cite{Dimakis:Network10}, Dimakis et al. model the operation of a DSS as a multicasting problem over an information flow graph. (See Figs.~\ref{fig:FlowGraph1} and \ref{fig:FlowGraph2} for the flow graph in the cooperative setting.) The information flow graph consists of three types of nodes: \begin{itemize} \item Source node ($S$): The source node contains the original file ${\bf f}$ of $\mathcal{M}$ symbols and is connected to $n$ storage nodes. \item Storage nodes ($(x^{\rm in}_i,x^{\rm co}_i,x^{\rm out}_i)$): In the information flow graph associated with cooperative regenerating codes, we represent each storage node by a combination of three sub-nodes: $x^{\rm in}$, $x^{\rm co}$, and $x^{\rm out}$.
Here, $x^{\rm in}$ is the sub-node receiving the connections from the live nodes, $x^{\rm co}$ is the sub-node receiving the connections from the nodes under repair in the same repair group, and $x^{\rm out}$ is the storage sub-node, which stores the data and is contacted by data collectors or by other nodes under repair. $x^{\rm in}$ is connected to $x^{\rm co}$ with a link of infinite capacity, and $x^{\rm co}$ is connected to $x^{\rm out}$ with a link of capacity $\alpha$. We represent cuts with a bar notation, as in $(x^{\rm in},x^{\rm co}|x^{\rm out})$, meaning that the cut passes through the link between $x^{\rm co}$ and $x^{\rm out}$. (See Fig.~\ref{fig:FlowGraph2}.) The nodes on the right hand side of a cut belong to the data collector side, represented by the set ${\cal D}$, whereas the nodes on the left hand side belong to ${\cal D}^c$, the source side. For a newcomer node, $x^{\rm{in}}_i$ is connected to the $x^{\rm{out}}$ sub-nodes of $d$ live nodes with links of capacity $\beta$ symbols each, representing the data downloaded during node repair. The newcomer node also connects to the $x^{\rm{in}}$ sub-nodes of the $(t-1)$ nodes being repaired in the same group, with links of capacity $\beta'$ each. \item Data collector node(s) ($\rm{DC}$): Each data collector contacts the $x^{\rm{out}}$ sub-nodes of $k$ live nodes with edges of infinite capacity. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=1.00\columnwidth]{fig_CooperativeFlowGraph2} \caption{Information flow graph of a DSS implementing cooperative repair under security constraints. In this representative example, we have $n=5$, $d=k=3$, and $t=2$. Multiple repair stages and a cut, represented by a dotted line, through the nodes connected to the $\rm{DC}$ are shown. The figure has different cut types: the first repaired node has a cut of type $(|x^{\rm in},x^{\rm co},x^{\rm out})$ and the second has a cut of type $(x^{\rm in},x^{\rm co}|x^{\rm out})$. Nodes that are being eavesdropped are indicated with dashed-dotted lines. Here, both the content and the downloads of the first repaired node are observed by the eavesdropper ($\ell_2=1$), and only the content of the last repaired node is observed ($\ell_1=1$). Accordingly, the eavesdropper observes the $d\beta+(t-1)\beta'$ downloaded symbols of the first repaired node and the $\alpha$ stored symbols of the last repaired node.} \label{fig:FlowGraph2} \end{figure} \subsection{MBCR and MSCR points} With the aforementioned edge capacities in the information flow graph, the DSS is said to employ an $(n,k,d,\alpha,\beta,\beta')$ code. For a given graph ${\cal G}$ and data collectors $\rm{DC}_i$, the file size that can be stored in such a DSS can be bounded using the max flow-min cut theorem for multicasting utilized in network coding~\cite{Ahlswede:Network00,Ho:random06}. \begin{lemma}[Max flow-min cut theorem for multicasting~\cite{Ahlswede:Network00,Ho:random06,Dimakis:Network10}] $${\cal M} \leq \min_{{\cal G}} \min_{{\rm DC}_i} {\rm maxflow}(S \to {\rm DC}_i,{\cal G}),$$ where ${\rm maxflow}(S \to {\rm DC}_i,{\cal G})$ represents the maximum flow from the source node $S$ to the data collector ${\rm DC}_i$ over the graph ${\cal G}$. \end{lemma} Therefore, e.g., for the graph in Fig.~\ref{fig:FlowGraph2}, a file of $\mathcal{M}$ symbols can be delivered to a data collector ${\rm DC}$ only if the min-cut is at least $\mathcal{M}$.
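The lemma is easy to check numerically on small instances. The sketch below builds, with the networkx library, the flow graph of a toy non-cooperative ($t=1$) repair history of the kind analyzed next, and confirms that the computed max flow matches the corresponding min-cut expression; the instance and all parameters are our own.
\begin{verbatim}
# Toy max-flow check on our own t = 1 instance: d = 4, beta = 1, alpha = 4;
# nodes are successively failed and repaired so that each newcomer also
# connects to the previous newcomers, and the DC contacts the k = 3 newcomers.
import networkx as nx

alpha, beta, d, INF = 4, 1, 4, 10 ** 9
G = nx.DiGraph()
orig = ["O1", "O2", "O3", "O4"]            # surviving original nodes
for m in orig:
    G.add_edge("S", m + "in", capacity=INF)
    G.add_edge(m + "in", m + "out", capacity=alpha)
helpers = {"A": orig,                      # each newcomer contacts d = 4 nodes
           "B": ["O1", "O2", "O3", "A"],
           "C": ["O1", "O2", "A", "B"]}
for v, hs in helpers.items():
    for h in hs:
        G.add_edge(h + "out", v + "in", capacity=beta)
    G.add_edge(v + "in", v + "out", capacity=alpha)
    G.add_edge(v + "out", "DC", capacity=INF)
maxflow, _ = nx.maximum_flow(G, "S", "DC")
assert maxflow == sum(min(alpha, (d - i) * beta) for i in range(3))  # = 9
print("max flow =", maxflow)
\end{verbatim}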
Dimakis et al.~\cite{Dimakis:Network10} consider $k$ successive node failures, evaluate the min-cut over the possible graphs, and obtain the following bound (for the $t=1$ case). \begin{align} \mathcal{M} \leq \sum_{i=0}^{k-1}\min\{\alpha, (d - i)\beta\} \label{eq:dimakis_thm} \end{align} We emphasize that the min-cut for this ($t=1$) case is given by the scenario where $k$ successively repaired nodes are connected to $\rm{DC}$, and, for each successive repair, the repaired node $i+1$ also connects to the $i$ previously repaired nodes. Hence, for each $\rm{DC}$-connected node, the cut value is equal to $(d-i)\beta$ if the cut is of type $(|x^{\rm in},x^{\rm out})$, and is equal to $\alpha$ if the cut is of type $(x^{\rm in}|x^{\rm out})$. (Note that $x^{\rm co}$ does not appear here, as the model considered in~\cite{Dimakis:Network10} does not involve cooperative repair.) The codes that attain the bound in (\ref{eq:dimakis_thm}) are called {\em regenerating codes} \cite{Dimakis:Network10}. For the cooperative scenario, we derive a secure file size upper bound in the next section using similar min-cut arguments in the presence of eavesdroppers; removing the leakage (to the eavesdropper) terms recovers the min-cut file size bound for the cooperative scenario. In particular, a file size bound in the cooperative setting is obtained as follows. \begin{align}\label{eq:CoopCutsetBound} &{\cal M} \leq \sum\limits_{i=0}^{g-1} u_i \min \Bigg\{ \alpha, \left(d-\sum\limits_{j=0}^{i-1}u_j\right)\beta + \left(t-u_i\right)\beta' \Bigg\}, \end{align} where $u_i\in[0:t]$ is the number of repaired nodes in repair group $i\in[0:g-1]$ that are connected to ${\rm DC}$. Similar to the $t=1$ case described above, the cut of type $(x^{\rm in},x^{\rm co}|x^{\rm out})$ has a value of $\alpha$. The cut of type $(|x^{\rm in},x^{\rm co},x^{\rm out})$, on the other hand, has a value of $\left(t-u_i\right)\beta'$ due to the links coming from the nodes under repair that are not connected to $\rm{DC}$, and an additional value of $(d-\sum\limits_{j=0}^{i-1}u_j)\beta$ due to the connections to the previously repaired live nodes that are not contacted by $\rm{DC}$. (Here, we again subtract the values of the flows from the nodes already belonging to the data collector side, ${\cal D}$.) The cut of type $(x^{\rm in}|x^{\rm co},x^{\rm out})$ has a value of $\infty$ and hence does not appear in the min-cut. Note that, given a file size $\mathcal{M}$, there is an inherent trade off between the storage per node $\alpha$ and the {\em repair bandwidth} $\gamma \triangleq d\beta+(t-1)\beta'$. This trade off, for the cooperative setting, can be established using an analysis similar to the one that leads to the MBR/MSR points from \eqref{eq:dimakis_thm}. Two classes of codes that achieve the two extreme points of this trade off are called {\em minimum bandwidth cooperative regenerating (MBCR)} codes and {\em minimum storage cooperative regenerating (MSCR)} codes. The former is obtained by first finding the minimum possible $\gamma$ and then finding the minimum $\alpha$ satisfying \eqref{eq:CoopCutsetBound}. This point is given by the following. \begin{align} &\alpha_{\rm{MBCR}}=\frac{{\cal M}}{k}\frac{2d+t-1}{2d+t-k}, \quad \gamma_{\rm{MBCR}}=\alpha_{\rm{MBCR}}, \nonumber\\ &\beta_{\rm{MBCR}}=\frac{{\cal M}}{k}\frac{2}{2d+t-k}, \quad \beta'_{\rm{MBCR}}=\frac{{\cal M}}{k}\frac{1}{2d+t-k} \label{eq:mbcr_point} \end{align} \begin{figure}[t] \centering \includegraphics{fig_MBCR-MSCR} \caption{Storage vs.
repair bandwidth trade off for cooperative regenerating codes. The repair bandwidth is given by $\gamma=d\beta+(t-1)\beta'$.} \label{fig:MBCR-MSCR} \end{figure} The MSCR point, on the other hand, is obtained by first choosing the minimum storage per node (i.e., $\alpha={\cal M}/k$), and then minimizing $\gamma$ (by choosing the minimum possible $\beta$-$\beta'$ pair) subject to the min-cut bound \eqref{eq:CoopCutsetBound}. \begin{align} &\alpha_{\rm{MSCR}}=\frac{\mathcal{M}}{k}, \quad \gamma_{\rm{MSCR}}=\frac{{\cal M}}{k}\frac{d+t-1}{d+t-k}, \nonumber\\ &\beta_{\rm{MSCR}}=\frac{\mathcal{M}}{k}\frac{1}{d+t-k}, \quad \beta'_{\rm{MSCR}}=\frac{\mathcal{M}}{k}\frac{1}{d+t-k}\label{eq:mscr_point} \end{align} We depict these two trade off points, which are directly computable from \eqref{eq:CoopCutsetBound}, in Fig.~\ref{fig:MBCR-MSCR}. (We refer the reader to~\cite{Kermarrec:Repairing11,Shum:Existence11} for a detailed derivation of these two points. See also~\cite{Oggier:Coding12} for an analysis of the simplified case in which $t\mid k$, i.e., the number of groups satisfies $g=k/t$.) Note that, when $t=1$, these two points correspond to the MBR/MSR points characterized in~\cite{Dimakis:Network10}. \subsection{Eavesdropper model} We consider an $(\ell_1,\ell_2)$ eavesdropper, which can access the stored data of the nodes in the set ${\cal E}_1$, and additionally can access both the stored and downloaded data at the nodes in the set ${\cal E}_2$, where $\ell_1=|{\cal E}_1|$ and $\ell_2=|{\cal E}_2|$. Hence, the eavesdropper has access to $x_i^{\rm{out}}$ for $i\in{\cal E}_1$ and $x_j^{\rm{in}},x_j^{\rm{co}},x_j^{\rm{out}}$ for $j\in{\cal E}_2$. (See Fig.~\ref{fig:FlowGraph2}.) This is the eavesdropper model defined in \cite{Shah:Information11} (adapted here to the cooperative repair setting), which generalizes the eavesdropper model considered in \cite{Pawar:Securing11}. The eavesdropper is assumed to know the coding scheme employed by the DSS. At the MBCR point, a newcomer downloads exactly $\gamma_{\rm{MBCR}}=\alpha_{\rm{MBCR}}$ symbols, i.e., as much as it stores. Thus, an eavesdropper does not gain any additional information if it is allowed to access the data downloaded during repair. However, at the MSCR point, the repair bandwidth is strictly greater than the per-node storage $\alpha_{\rm{MSCR}}$, and an eavesdropper potentially gains more information if it also has access to the data downloaded during node repair. We summarize the eavesdropper model together with the definition of achievability of a secure file size in the following. \begin{definition}[Security against an $(\ell_1,\ell_2)$ eavesdropper] A DSS is said to achieve a secure file size of ${\cal M}^s$ against an $(\ell_1,\ell_2)$ eavesdropper, if, for any sets ${\cal E}_1$ and ${\cal E}_2$ of size $\ell_1$ and $\ell_2$, respectively, $I({\bf f}^s;{\bf e})=0$. Here ${\bf f}^s$ is the secure file of size ${\cal M}^s$, which is first encoded into the file ${\bf f}$ of size ${\cal M}$ before being stored on the DSS, and ${\bf e}$ is the eavesdropper observation vector given by ${\bf e}\triangleq\{x_i^{\rm{out}},x_j^{\rm{in}},x_j^{\rm{co}},x_j^{\rm{out}}: i\in{\cal E}_1, j\in{\cal E}_2\}$. \end{definition} We remark that, as will become clear in the following sections, when a file ${\bf f}$ of size ${\cal M}$ is stored on the DSS and the secure file size achieved is ${\cal M}^s$, the remaining ${\cal M}-{\cal M}^s$ symbols can be utilized as public data, which has no security constraints.
Still, notwithstanding the possibility of storing public data, we will refer to this uniformly distributed part as the random data, which is utilized to achieve security. Finally, we note the following lemma, which will be used in the sequel. \begin{lemma}[Secrecy Lemma]\label{thm:SecrecyLemma} Consider a system with information bits ${\bf u}$, random bits ${\bf r}$ (independent of ${\bf u}$), and an eavesdropper with observations given by ${\bf e}$. If $H({\bf e})\leq H({\bf r})$ and $H({\bf r}|{\bf u},{\bf e})=0$, then $I({\bf u};{\bf e})=0$. \end{lemma} \begin{proof} See Appendix~\ref{app:SecrecyLemma}. \end{proof} \section{Secure MBCR codes} In this section, we study secure minimum bandwidth cooperative regenerating codes. We first present an upper bound on the secure file size that can be supported by an MBCR code. Then, we present exact repair coding schemes achieving the derived bound. In addition, we analyze how cooperation affects the penalty paid in securing storage systems. \subsection{Upper bound on secure file size of MBCR codes} An analysis of the cut-set bounds for cooperative regenerating codes is provided in~\cite{Kermarrec:Repairing11,Shum:Existence11}. (See also the arguments given in~\cite{Hu:Cooperative10,Oggier:Coding12}. Here, we follow the notation of~\cite{Kermarrec:Repairing11,Oggier:Coding12}.) We consider groups of nodes being repaired, and denote by $u_i$ the number of nodes that are repaired in group $i$ and contacted by the data collector, such that \begin{eqnarray*} u_i &\in& [t], \forall i=0,1,\cdots,g-1,\\ \sum\limits_{i=0}^{g-1} u_i &=& k, \end{eqnarray*} where $g$ is the total number of groups that have been repaired. While evaluating an upper bound on the file size that can be securely stored on the DSS, the data collector under consideration is assumed to contact only the $k$ nodes that belong to these $g$ groups. We consider two types of cuts: $m_i$ nodes have the first cut type $(x^{\rm in}, x^{\rm co}|x^{\rm out})$, and $u_i-m_i$ nodes have the second cut type $(|x^{\rm in}, x^{\rm co}, x^{\rm out})$, $0\leq i\leq g-1$. Note that the cuts of the form $(x^{\rm in}, x^{\rm co}|x^{\rm out})$ give a cut value of $\alpha$, as opposed to $(x^{\rm in}|x^{\rm co}, x^{\rm out})$, which has a cut value larger than $\alpha$. Since we are interested in the cuts of smaller value, we do not consider the cuts $(x^{\rm in}|x^{\rm co}, x^{\rm out})$. We consider $\ell_1$ colluding eavesdroppers, each observing the content of a different node. Note that, for the MBCR point analysis, we can consider $\ell_2=0$ without loss of generality, as the amount of data a particular node stores is equal to the amount of data it downloads during its repair. We denote the number of eavesdroppers on the nodes in the first cut type by $l_1^{i,1}$, $0\leq i\leq g-1$, and the number of eavesdroppers on the nodes in the second cut type by $l_1^{i,2}$, $0\leq i\leq g-1$, such that \begin{eqnarray*} l_1^{i,1} &\leq& m_i, \\ l_1^{i,2} &\leq& u_i-m_i, \\ \sum\limits_{i=0}^{g-1} (l_1^{i,1} + l_1^{i,2}) &=& \ell_1. \end{eqnarray*} Thus, for group $i$, due to the eavesdroppers, the nodes that belong to the first type can only add $(m_i-l_1^{i,1})\alpha$ to the cut. The second type, on the other hand, consists of $u_i-m_i$ nodes, of which $l_1^{i,2}$ are eavesdropped. As the data downloaded is equal to the data stored at the MBCR point, the eavesdropped nodes do not add any value to the cut.
Each of the remaining $u_i-m_i-l_1^{i,2}$ nodes contacts $d$ live nodes, of which $\sum\limits_{j=0}^{i-1}u_j$ belong to the previously repaired groups. In addition, these nodes contact $t-1$ nodes from the same repair group, of which $u_i-m_i-1$ belong to ${\cal D}$. Accordingly, this cut-set bound is given by the following. \begin{eqnarray}\label{eq:Cutsetsum} \mathcal{M}^{s} \leq \sum\limits_{i=0}^{g-1} \left(\left(m_i-l_1^{i,1}\right)\alpha +\left(u_i-m_i-l_1^{i,2}\right) C_i\right), \end{eqnarray} where $$C_i=\left(d-\sum\limits_{j=0}^{i-1}u_j\right)\beta + \left(t-u_i+m_i\right)\beta'.$$ Each summation term in \eqref{eq:Cutsetsum} is concave in $m_i$ over $[0,u_i]$, so its minimum is attained at an endpoint. We therefore consider the two scenarios (i) $m_i=0$, $l_1^{i,2}=l_1^{i}$ and (ii) $m_i=u_i$, $l_1^{i,1}=l_1^{i}$ in \eqref{eq:Cutsetsum}, and obtain \begin{eqnarray}\label{eq:Cutsetsum2} \mathcal{M}^{s} &\leq& \sum\limits_{i=0}^{g-1} (u_i-l_1^i) \min \Bigg\{ \alpha,\nonumber\\ &&\quad \quad \left(d-\sum\limits_{j=0}^{i-1}u_j\right)\beta + \left(t-u_i\right)\beta' \Bigg\}. \end{eqnarray} Note that, at the MBCR point, the nodes store exactly what they download; therefore, MBCR codes should satisfy \begin{eqnarray}\label{eq:CoopMBCR} \alpha=d \beta + (t-1) \beta'. \end{eqnarray} Utilizing this, we consider the following cases of \eqref{eq:Cutsetsum2}. \textbf{Case 1:} $g=k$, $u_i=1$, $\forall i=0, \cdots, k-1$ \begin{eqnarray} \mathcal{M}^{s} &\leq& \sum\limits_{i=0}^{k-1} (1-l_1^i) \left((d-i)\beta + (t-1)\beta'\right) \end{eqnarray} Here, the minimum cut value corresponds to having $l_1^i=1$ for $i=0,1,\cdots, \ell_1-1$, and $l_1^i=0$ otherwise. Hence, we get \begin{eqnarray} \mathcal{M}^{s} &\leq& \sum\limits_{i=\ell_1}^{k-1} (d-i)\beta + (t-1)\beta', \end{eqnarray} from which we obtain \begin{eqnarray}\label{eq:CoBound1} \mathcal{M}^{s} &\leq& \frac{(k-\ell_1)(2d-k-\ell_1+1)}{2}\beta \nonumber\\ &&{+}\: (k-\ell_1)(t-1)\beta'. \end{eqnarray} \textbf{Case 2:} If $t\geq k$, $g=1$, $u_0=k$ \begin{eqnarray} \mathcal{M}^{s} &\leq& (k-\ell_1) \left(d\beta + (t-k)\beta'\right) \label{eq:CoBound2} \end{eqnarray} \textbf{Case 3:} If $t<k$, $g=\floorb{k/t}+1$, $u_i=t$ for $i=0,\cdots,g-2$, and $u_{g-1}=k-\floorb{k/t}t$. Let $a\triangleq \floorb{k/t}$ and $b\triangleq k-at$, so that $k=at+b$. From \eqref{eq:Cutsetsum2}, we obtain \begin{eqnarray} \mathcal{M}^{s} &\leq& \sum\limits_{i=0}^{a-1} (t-l_1^i) (d-it)\beta \nonumber\\ &&{+}\: (b-l_1^a)\left\{(d-at)\beta + (t-b)\beta'\right\}. \end{eqnarray} Considering the possible allocations of eavesdroppers in this bound, i.e., $\{l_1^i\}_{i = 0}^{g-1}$, we obtain the following bound (where we collect the eavesdropper-dependent terms in the variable $S$ given below). \begin{eqnarray} \mathcal{M}^{s} &\leq& \beta\left\{kd+\frac{(k-b)(t-k-b)}{2}\right\} \nonumber\\ &&{+}\: \beta' b(t-b) -S,\label{eq:CoBound3} \end{eqnarray} where $S$ is given by \begin{eqnarray} S &\triangleq& \max\limits_{l_1^i\leq t \textrm{ s.t. } \sum\limits_{i=0}^{a} l_1^i=\ell_1} \sum\limits_{i=0}^{a-1} l_1^i(d-it)\beta \label{eq:S}\\ &&\quad\quad\quad\quad\quad\quad\quad\quad {+}\: l_1^a\left\{(d-at)\beta+(t-b)\beta'\right\} \nonumber \end{eqnarray} \begin{equation} = \begin{cases} \sum\limits_{i=0}^{\floorb{\ell_1/t}-1}t(d-it)\beta \nonumber\\ \quad{+}\:(\ell_1-\floorb{\ell_1/t}t)(d-\floorb{\ell_1/t}t)\beta \nonumber\\ =\beta \ell_1 (d-\floorb{\ell_1/t}t) + \frac{t^2\beta}{2}\floorb{\ell_1/t}(\floorb{\ell_1/t}+1), \nonumber\\ \quad\quad \mbox{if } \ell_1\leq at=k-b.
\\ \\ \sum\limits_{i=0}^{a-1}t(d-it)\beta \nonumber\\ \quad{+}\:(\ell_1-at)\{(d-at)\beta+(t-b)\beta'\} \nonumber\\ =\beta \ell_1 (d-at) + \frac{t^2\beta}{2}a(a+1) +(\ell_1-at)(t-b)\beta', \nonumber\\ \quad\quad \mbox{if } \ell_1\geq at=k-b. \end{cases} \end{equation} Note that we consider the worst case eavesdropper allocation to maximize $S$ in the above derivation. The normalized values at the MBCR point are given by \begin{eqnarray}\label{eq:MBCRpoint} \beta'=1, \beta=2, \alpha=\gamma=2d+t-1, \mathcal{M}=k(2d-k+t). \end{eqnarray} Using this and the bounds given in \eqref{eq:CoBound1}, \eqref{eq:CoBound2}, and \eqref{eq:CoBound3}, we get a bound on the secure file size at the MBCR point. We state this result in the following. \begin{proposition}\label{thm:MBCRbound} Cooperative regenerating codes operating at the MBCR point with a secure file size of $\mathcal{M}^{s}$ satisfy \begin{eqnarray} \mathcal{M}^{s} &\leq& k(2d-k+t)-\ell_1(2d-\ell_1+t) \nonumber\\ &=& (k-\ell_1)(2d+t-k-\ell_1) \label{eq:MBCRbound}, \end{eqnarray} and the MBCR point is given by $\beta'=1$, $\beta=2$, $\alpha=\gamma=2d+t-1$ for a file size of $\mathcal{M}=k(2d-k+t)$. \end{proposition} \begin{proof} We show in Appendix~\ref{sec:MBCRBounds} that \eqref{eq:CoBound2} and \eqref{eq:CoBound3} are looser than \eqref{eq:CoBound1}, and \eqref{eq:CoBound1} evaluates to the stated bound at the MBCR point. \end{proof} \subsection{Code construction for secure MBCR when $n=d+t$} \label{sec:MBCR} We consider secrecy precoding of the data before storing it on the DSS nodes using an MBCR code. We establish this precoding with maximum rank distance (MRD) codes. In vector representation, assuming $m\geq n$, the norm of a vector ${\bf v}\in\field{F}_{q^m}^n$ is the column rank of ${\bf v}$ over the base field $\field{F}_q$, denoted by $Rk({\bf v}|\field{F}_q)$. (This is the maximum number of linearly independent coordinates of ${\bf v}$ over the base field $\field{F}_q$, for a given basis of $\field{F}_{q^m}$ over $\field{F}_q$. A basis also establishes an isomorphism between length-$n$ vectors in $\field{F}_{q^m}^n$ and $m\times n$ matrices in $\field{F}_q^{m\times n}$.) The rank distance between two vectors is defined by $d({\bf v}_1,{\bf v}_2) =Rk({\bf v}_1-{\bf v}_2|\field{F}_q)$. (In matrix representation, this is the rank of the difference of the two corresponding matrices.) An $[n,k,d]$ MRD code over the extension field $\field{F}_{q^m}$ achieving the maximum rank distance $d=n-k+1$ (for $m\geq n$) can be constructed with the following linearized polynomial. (This is referred to as the Gabidulin construction of MRD codes, or Gabidulin codes \cite{Gabidulin:Theory85, Delsarte:Bilinear78, Roth:Maximum91, MacWilliams:Theory77}.) \begin{eqnarray} f(g)=\sum\limits_{i=0}^{k-1} u_i g^{[i]}, \end{eqnarray} where $[i]=q^i$, and $g,u_i\in\field{F}_{q^m}$. Then, given $n$ linearly independent elements over $\field{F}_q$, $\{g_1,\cdots,g_n\}$ with $g_j\in\field{F}_{q^m}$, the codewords for a given set of $k$ elements, $u_i\in\field{F}_{q^m}$, $i\in[0:k-1]$, are obtained by $x_j=f(g_j)=\sum\limits_{i=0}^{k-1} u_i g_j^{[i]}$ for $j\in[1:n]$. (In generator matrix representation, we have ${\bf x}={\bf u}{\bf G}$, where ${\bf G}=[g_1, \cdots, g_n; \cdots; g_1^{[k-1]}, \cdots, g_n^{[k-1]}]$.)
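The defining $\field{F}_q$-linearity of such linearized polynomials, used repeatedly below, can be checked exhaustively on a toy instance. The following sketch uses our own small field $\field{F}_{2^4}$, built from the irreducible polynomial $x^4+x+1$; all parameters are illustrative.
\begin{verbatim}
# Toy check that f(g) = sum_i u_i g^(2^i) over GF(2^4) is F_2-linear,
# i.e., additive: f(g1 + g2) = f(g1) + f(g2) (addition in GF(2^4) is XOR).
import random

MOD = 0b10011                       # x^4 + x + 1

def gmul(a, b):                     # carry-less multiplication mod MOD
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def gpow(a, e):                     # square-and-multiply in GF(2^4)
    r = 1
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

random.seed(0)
u = [random.randrange(16) for _ in range(3)]   # k = 3 coefficients

def f(g):                           # f(g) = u_0 g + u_1 g^2 + u_2 g^4
    out = 0
    for i, ui in enumerate(u):
        out ^= gmul(ui, gpow(g, 2 ** i))
    return out

for g1 in range(16):
    for g2 in range(16):
        assert f(g1 ^ g2) == f(g1) ^ f(g2)
print("f(g1 + g2) = f(g1) + f(g2) for all g1, g2 in GF(2^4)")
\end{verbatim}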
We also note that the linearized polynomial satisfies $f(a_1g_1+a_2g_2)=a_1f(g_1)+a_2f(g_2)$ for any $a_1,a_2\in\field{F}_q$ and $g_1,g_2\in\field{F}_{q^m}$; this will be utilized in the following. Consider now the MBCR point given by $\mathcal{M}=k(2d-k+t)$, $\beta'=1$, $\beta=2$, $\alpha=\gamma=2d+t-1$, $\mathcal{M}^{s}=k(2d-k+t)-\ell_1(2d-\ell_1+t)$, and $n=d+t$. We use MRD codes with $n=k={\cal M}$; hence, the rank distance bound $d\leq n-k+1$ is saturated at $d=1$. Accordingly, we utilize an $[{\cal M},{\cal M},1]$ MRD code over $\field{F}_{q^m}$, which maps length-${\cal M}$ vectors (with elements in $\field{F}_{q^m}$) to length-${\cal M}$ codewords in $\field{F}_{q^m}^{{\cal M}}$ (with $m\geq {\cal M}$). The coefficients of the underlying linearized polynomial ($f(g)$) are chosen as the $\mathcal{M}-\mathcal{M}^{s}$ random symbols denoted by ${\bf r}\in \field{F}_{q^m}^{{\cal M}-{\cal M}^s}$ together with the $\mathcal{M}^{s}$ secure data symbols denoted by ${\bf u}\in \field{F}_{q^m}^{{\cal M}^s}$. The corresponding polynomial $f(g)$ is evaluated at $\mathcal{M}$ points $\{g_1$,\ldots, $g_{\mathcal{M}}\}$, which are linearly independent over $\mathbb{F}_q$. We denote these evaluations by $x_j=f(g_j)$ for $j=1,\cdots,\mathcal{M}$. This finalizes the secrecy precoding step. The second encoding step is based on the encoding scheme for cooperative repair proposed in~\cite{Jiekak:CROSS12}. (Here, we summarize the file recovery and node repair processes for the case of MRD precoding, and provide the proof of security.) Split the $\mathcal{M}$ symbols into two parts: a) $x_1$ to $x_{nk}$, and b) $x_{nk+1}$ to $x_{nk+k(d-k)}$. (Note that $n=d+t$ and $\mathcal{M}=nk+k(d-k)$.) The first part is divided into $n$ groups of $k$ symbols, and stored on $n$ nodes. Here, node $i$ stores $x_{(i-1)k+1}$ to $x_{ik}$. The second part is divided into $d-k$ groups of $k$ symbols. These symbols are encoded with an $(n,k)$ MDS code, and stored on $n$ nodes. In particular, $\{y_{j,1},\ldots, y_{j,n}\}$ are generated from the symbols $\{x_{nk+(j-1)k+1},\ldots, x_{nk+jk}\}$, and $y_{j,i}$ is stored at node $i$, for $j=1,\cdots, d-k$. Node $i$, having stored $\{x_{(i-1)k+1},\ldots, x_{ik}, y_{1,i},\ldots y_{d-k,i}\}$, which is referred to as the primary data of node $i$, encodes these symbols using an $(n-1,d)$ MDS code having a Vandermonde matrix $\Phi$ of size $d\times (n-1)$ as its generator matrix. (This choice of $\Phi$ ensures that $[{\bf I}_d~\Phi]$ is the generator matrix of an $(n+d-1,d)$ MDS code.) These $n-1$ symbols are stored at the other nodes, one symbol per node. We denote the encoded primary data of node $i$ that is stored at node $j\neq i$ by $z_{j,i}$. We call these the secondary data. This procedure is repeated for every node, so that each node $i$ stores $\{x_{(i-1)k+1}, \ldots, x_{ik},y_{1,i},\ldots,y_{d-k,i}, z_{i,1},\ldots, z_{i,i-1}, z_{i,i+1},$ $\ldots, z_{i,n}\}$, and hence the total number of symbols stored at each node is $k+(d-k)+(n-1) = d+n-1 = 2d+t-1= \alpha$. \emph{File recovery at DC:} The DC connects to any $k$ nodes; without loss of generality, we assume the first $k$ nodes. From $y_{j,1:k}$, the DC can obtain $x_{nk+(j-1)k+1}, \cdots, x_{nk+jk}$ for each $j\in[1:d-k]$. It can re-encode these into $y_{j,1:n}$ using the MDS code, and thus obtain the $y$ symbols stored at the remaining nodes.
Then, for each $i\in [k+1:n]$, the DC can use the MDS property of $[{\bf I}_d \: \Phi]$ to obtain the symbols $x_{(i-1)k+1}, \cdots, x_{ik}$ of node $i$ from the $k$ secondary data symbols of the contacted nodes, i.e., $z_{j,i}$ for $j\in[1:k]$, and the additional $d-k$ symbols $y_{j,i}$ for $j\in[1:d-k]$. Having obtained $x_1, \cdots, x_{{\cal M}}$, the DC can perform interpolation to solve for both the data and the random coefficients. \emph{Node repair:} Assume that the first $t$ nodes fail. From the secondary data stored in the remaining $d=n-t$ nodes, $z_{t+1,i},\cdots,z_{n,i}$, one can recover $x_{(i-1)k+1}, \cdots, x_{ik}$ and $y_{1,i},\cdots,y_{d-k,i}$ for each node $i=1,\cdots, t$. (This corresponds to sending one symbol from each of the $d$ live nodes to each of the $t$ newcomers.) Then, to recover the secondary data stored at each node under repair, say node $j$, $j=1,\cdots,t$, every other node $i\neq j$, including the nodes under repair, computes and sends its corresponding encoded primary data $z_{j,i}$ to node $j$. (This corresponds to sending one symbol from each node to each of the $t$ newcomers.) The repair procedure thus achieves $\beta=2$ and $\beta'=1$. \emph{Security:} Consider that the eavesdropper is observing the first $\ell_1$ nodes. Due to the code construction, the symbols in the sets ${\cal X}=\{x_{1}, \ldots, x_{\ell_1k}\}$, ${\cal Y}=\{y_{1,1},\ldots, y_{d-k,1}, \cdots, y_{1,\ell_1},\ldots, y_{d-k,\ell_1}\}$, ${\cal Z}=\{z_{j,i} \textrm{ for } j= 1,\ldots,\ell_1, \textrm{ and } i=\ell_1+1,\cdots,n$\} correspond to linearly independent evaluation points. (Note that the symbols $\{z_{j,i}\}$ for $j=1,\cdots,\ell_1$; $i=1,\cdots,\ell_1$; $j\neq i$, are linear combinations of the symbols in ${\cal X}\cup{\cal Y}$.) Due to the linearity of the code, the eavesdropper, observing $\ell_1\alpha=\ell_1 (2d+t-1)$ symbols, has evaluations of the polynomial $f(\cdot)$ at $\ell_1(2d+t-\ell_1)$ linearly independent points. Given the data symbols, interpolation from these $\ell_1(2d+t-\ell_1)$ evaluations allows the eavesdropper to solve for the $\ell_1(2d+t-\ell_1)$ random symbols. Therefore, denoting the eavesdropper's observation by ${\bf e}$, we have $H({\bf r}|{\bf e},{\bf u})=0$. As $H({\bf e})=H({\bf r})$, Lemma~\ref{thm:SecrecyLemma} gives $I({\bf u};{\bf e})=0$. Using the upper bound given in Proposition~\ref{thm:MBCRbound}, we obtain the following result. \begin{proposition} The secrecy capacity at the MBCR point for a file size of $\mathcal{M}=k(2d-k+t)$ is given by $\mathcal{M}^{s}=k(2d-k+t)-\ell_1(2d-\ell_1+t)$, if $n=d+t$. \end{proposition} \subsection{Does cooperation enhance or degrade security at MBCR?} Cooperative regenerating codes have a repair bandwidth given by $\gamma=d\beta + (t-1)\beta'$. In this section, we analyze $\frac{\gamma}{\mathcal{M}^{s}}$, the ratio of the repair bandwidth to the secure file size. In the following, we refer to this parameter as the normalized repair bandwidth (NRBW). Without the security constraints, for which $\ell_1=0$ in Proposition~\ref{thm:MBCRbound}, we observe that at the MBCR point the NRBW is given by \begin{equation} \textrm{NRBW}(\ell_1=0) = \frac{2d+t-1}{k(2d-k+t)}, \end{equation} which is equal to \begin{equation} \textrm{NRBW}(\ell_1=0,n=d+t) = \frac{2n-t-1}{k(2n-k-t)} \end{equation} for a system with $n=d+t$. Here, the classical (i.e., non-cooperative) scenario corresponds to the $t=1$ case, which has an NRBW of \begin{equation} \textrm{NRBW}(\ell_1=0,n=d+t, t=1) = \frac{2n-2}{k(2n-k-1)}.
\end{equation} Comparing the last two equations, we see that $$\textrm{NRBW}(\ell_1=0,n=d+t)\geq \textrm{NRBW}(\ell_1=0,n=d+t, t=1),$$ with equality iff $t=1$. Therefore, without the security constraints, having simultaneous repairs of size greater than $1$ actually increases the normalized repair bandwidth. This nature of cooperation also results in the conclusion that deliberately delaying the repairs does not bring additional savings~\cite{Kermarrec:Repairing11}. (This observation is made for both the MBCR and MSCR points in~\cite{Kermarrec:Repairing11} via an analysis of the derivative of $\gamma$ with respect to $t$. Here, we provide an analysis via the NRBW.) We revisit the above conclusion under security constraints. The question is whether cooperation (i.e., having a system with multiple failures, or deliberately delaying the repairs) results in a loss or a gain for secure DSS. A calculation similar to the one above shows that the NRBW for $t>1$ is strictly greater than that for $t=1$ when $n=d+t$ and $\ell_1<k$. The MBCR points given in Proposition~\ref{thm:MBCRbound} for codes satisfying $0\leq \ell_1 < k < n$, $d\geq k$, and $d=n-t$ are given in Table~\ref{tab:Coop} in Appendix~\ref{app:CoopTable}. As evident from the table, we see that cooperation does not bring additional savings for secure DSS at the MBCR point when $d+t=n$. This in turn means that one cannot delay repairs to achieve better performance than single failure-repair if $d$ is chosen such that $n=d+t$ for given $t$ and $n$. However, if the downloads within the cooperative group are less costly compared to the downloads from the live nodes, then delaying repairs would be beneficial in reducing the total cost. We will revisit this analysis for codes having $n>d+t$ in the next subsection. \subsection{General Code Construction for Secure MBCR} The code construction above requires $d=n-t$. However, in practical systems it may not be possible for a failed node to connect to all the remaining nodes. This necessitates code constructions for $d<n-t$. Remarkably, for a fixed $(n,k,d,{\mathcal{M}})$, increasing $t$ can reduce the repair bandwidth in the secrecy scenario we consider here. This is reported in~\cite{Shum:Exact11} for DSS without secrecy constraints. Hence, for a fixed $d$, delaying the repairs can be advantageous, e.g., when there is a limit on the number of live nodes that can be connected. In the following, we present a general construction which works for any parameters, in particular for $n>d+t$. The construction is based on the code construction proposed in \cite{Wang:Exact12}.
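The two regimes just discussed can be reproduced with a few lines of arithmetic. The following sketch (function names and sample parameters are ours) evaluates the NRBW at the normalized MBCR point using the secure file size from Proposition~\ref{thm:MBCRbound}.
\begin{verbatim}
# NRBW = gamma / M^s at the normalized MBCR point (beta' = 1, beta = 2,
# gamma = 2d + t - 1), with M^s = (k - l1)(2d + t - k - l1) from the
# Proposition above; a sketch with our own sample parameters.
from fractions import Fraction as F

def nrbw(n, k, t, l1, d=None):
    if d is None:
        d = n - t                           # the n = d + t operating point
    gamma = 2 * d + t - 1
    Ms = (k - l1) * (2 * d + t - k - l1)    # secure file size at MBCR
    return F(gamma, Ms)

# With n = d + t, delaying repairs (larger t) only increases the NRBW:
print([nrbw(n=12, k=4, t=t, l1=1) for t in (1, 2, 3, 4)])       # increasing
# With d < n - t held fixed instead, increasing t reduces the NRBW:
print([nrbw(n=12, k=4, t=t, l1=1, d=6) for t in (1, 2, 3, 4)])  # decreasing
\end{verbatim}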
In \cite{Wang:Exact12}, a bivariate polynomial is constructed using ${\mathcal{M}} = k(2d + t - k)$ message symbols as the coefficients of the polynomial: \begin{align} \label{eq:mbcr_poly} F(X,Y) &= \sum_{\substack{0 \leq i < k,\\ 0 \leq j < k}}a_{ij}X^iY^j + \sum_{\substack{0\leq i < k,\\ k \leq j < d+t}}b_{ij}X^iY^j& \nonumber \\ &+ \sum_{\substack{k\leq i < d,\\ 0 \leq j < k}}c_{ij}X^iY^j& \end{align} \begin{figure*} \footnotesize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $F(x_1,y_1)$ & $F(x_1,y_2)$ & $\cdots$ & $F(x_1,y_{\ell_1})$ & $\cdots$ & $F(x_1,y_{d+t})$ & & & \\\hline $F(x_2,y_1)$&$F(x_2,y_2)$&$\cdots$&$F(x_2,y_{\ell_1})$&$\cdots$ & $F(x_2,y_{d+t})$ & $\bf{\textcolor{blue}{F(x_2,y_{d+t+1})}}$ & &\\\hline $\cdots$ & & & & & & & & \\\hline $F(x_{\ell_1},y_1)$ & $F(x_{\ell_1},y_2)$ & $\cdots$ & $F(x_{\ell_1},y_{\ell_1})$ & $\cdots$ & $F(x_{\ell_1},y_{d+t})$ & $\bf{\textcolor{blue}{F(x_{\ell_1},y_{d+t+1})}}$ & $\bf{\textcolor{blue}{\cdots}}$ & $\bf{\textcolor{blue}{F(x_{\ell_1},y_{d+t-1+\ell_1})}}$ \\\hline $\cdots$ & & & & & & & & \\\hline $F(x_{d},y_1)$ & $F(x_{d},y_2)$ & $\cdots$ & $F(x_{d},y_{\ell_1})$ & & & & & \\\hline & $\bf{\textcolor{blue}{F(x_{d+1},y_2)}}$ & $\bf{\textcolor{blue}{\cdots}}$ & $\bf{\textcolor{blue}{F(x_{d+1},y_{\ell_1})}}$ & & & & & \\\hline & & $\bf{\textcolor{blue}{\cdots}}$ & $\bf{\textcolor{blue}{\cdot}}$ & & & & & \\\hline & & & $\bf{\textcolor{blue}{F(x_{d+\ell_1-1},y_{\ell_1})}}$ & & & & & \\\hline \end{tabular} \caption{Symbols observed by the eavesdropper for a given $\ell_1$.} \label{fig:MBCREave} \end{figure*} \normalsize Given $q > n$, two sets of $n$ distinct points, $\{x_1, x_2,\ldots, x_n\}$ and $\{y_1, y_2,\ldots, y_n\}$, are chosen. The $i^{th}$ node in the DSS stores the following $2d + t - 1$ evaluations of the polynomial $F(X,Y)$: \begin{eqnarray} \label{eq:MBCR_node_content} F(x_i,y_i), F(x_i,y_{i\oplus1}),\ldots, F(x_i,y_{i\oplus(d+t-1)}) \nonumber \\ F(x_{i\oplus1},y_i), F(x_{i\oplus2},y_{i}),\ldots, F(x_{i\oplus(d-1)},y_{i}) \end{eqnarray} where $\oplus$ denotes addition modulo $n$. The first $d+t$ evaluations at node $i$ can be seen as evaluations of the univariate polynomial $f_i(Y) = F(x_i,Y)$, of degree at most $d+t-1$, at $d+t$ points. These uniquely define the polynomial $f_i(Y)$. Similarly, the first evaluation in (\ref{eq:MBCR_node_content}), $F(x_i,y_i)$, along with the last $d-1$ evaluations, uniquely defines the univariate polynomial $g_i(X) = F(X,y_i)$ of degree at most $d-1$. This property of the proposed bivariate polynomial based coding scheme is utilized for the exact node repair and data reconstruction processes at the MBCR point. (We refer to \cite{Wang:Exact12} for details.)
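Before turning to the secure variant, the following sketch (with our own small parameters and field; the repair and reconstruction procedures themselves are those of \cite{Wang:Exact12}) lays out the node contents in (\ref{eq:MBCR_node_content}) and confirms the per-node storage count $\alpha=2d+t-1$.
\begin{verbatim}
# Node contents of the bivariate-polynomial MBCR code in our own toy instance.
import random

n, k, d, t = 6, 2, 3, 2                    # n >= d + t, d >= k
q = 7                                      # a prime q > n, so F_q = Z/qZ
random.seed(1)

coef = {}                                  # M = k(2d + t - k) coefficients
for i in range(k):
    for j in range(d + t):                 # a_ij (j < k), b_ij (k <= j < d+t)
        coef[i, j] = random.randrange(q)
for i in range(k, d):
    for j in range(k):                     # c_ij
        coef[i, j] = random.randrange(q)
assert len(coef) == k * (2 * d + t - k)

def F_eval(x, y):
    return sum(c * pow(x, i, q) * pow(y, j, q)
               for (i, j), c in coef.items()) % q

xs = ys = list(range(1, n + 1))            # n distinct evaluation points each

def node(i):                               # i in 0..n-1; oplus = addition mod n
    row = [F_eval(xs[i], ys[(i + j) % n]) for j in range(d + t)]
    col = [F_eval(xs[(i + j) % n], ys[i]) for j in range(1, d)]
    return row + col                       # (d+t) + (d-1) = 2d + t - 1 symbols

assert all(len(node(i)) == 2 * d + t - 1 for i in range(n))
print("alpha =", 2 * d + t - 1, "symbols per node")
\end{verbatim}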
In order to obtain an $(\ell_1,0)$-secure code at the MBCR point, we rewrite the polynomial in (\ref{eq:mbcr_poly}) as follows: \begin{align} \label{eq:secureMBCR_ploy} F(X,Y) &=\sum_{\substack{0 \leq i < \ell_1, \\ 0 \leq j < \ell_1}}a_{ij}X^iY^j + \sum_{\substack{0 \leq i < \ell_1, \\ \ell_1 \leq j < k}}a_{ij}X^iY^j& \nonumber \\ &+ \sum_{\substack{\ell_1 \leq i < k,\\ 0 \leq j < \ell_1}}a_{ij}X^iY^j + \sum_{\substack{\ell_1 \leq i < k, \\ \ell_1 \leq j < k}}a_{ij}X^iY^j& \nonumber \\ &+ \sum_{\substack{0\leq i < \ell_1,\\ k \leq j < d+t}}b_{ij}X^iY^j + \sum_{\substack{\ell_1 \leq i < k,\\ k \leq j < d+t}}b_{ij}X^iY^j& \nonumber \\ &+ \sum_{\substack{k\leq i < d,\\ 0 \leq j < \ell_1}}c_{ij}X^iY^j + \sum_{\substack{k\leq i < d,\\ \ell_1 \leq j < k}}c_{ij}X^iY^j& \end{align} Next, we choose $\ell_1^2 + \ell_1(k-\ell_1) + (k-\ell_1)\ell_1 + \ell_1(d+t - k) + (d-k)\ell_1 = \ell_1(2d + t - \ell_1)$ coefficients of $F(X,Y)$, $\{a_{ij}\}_{0 \leq i < \ell_1, 0 \leq j < \ell_1},$ $\{a_{ij}\}_{0 \leq i < \ell_1, \ell_1 \leq j < k},$ $\{a_{ij}\}_{\ell_1 \leq i < k, 0 \leq j < \ell_1},$ $\{b_{ij}\}_{0\leq i < \ell_1, k \leq j < d+t},$ $\{c_{ij}\}_{k\leq i < d, 0 \leq j < \ell_1}$, to be random symbols drawn from $\field{F}_{q}$ in an i.i.d. manner. The remaining $k(2d + t -k) - \ell_1(2d + t -\ell_1) = \mathcal{M}^{s}$ coefficients of $F(X,Y)$ are chosen to be the data symbols that need to be stored on the DSS. Each node $i \in [n]$ stores the evaluations of $F(X,Y)$ given in (\ref{eq:MBCR_node_content}). It follows from the description of the coding scheme of \cite{Wang:Exact12} at the beginning of this subsection that the resulting scheme is an exactly repairable code at the MBCR point. Next, we show that the proposed scheme is indeed $(\ell_1,0)$-secure. If ${\bf e}$, ${\bf u}$, and ${\bf r}$ denote the data observed by the eavesdropper, the original data to be stored, and the randomness added before encoding, respectively, then it is sufficient to show (i) $H({\bf e}) \leq H({\bf r})$ and (ii) $H({\bf r}| {\bf u}, {\bf e}) = 0$ in order to establish the secrecy claim (see Lemma~\ref{thm:SecrecyLemma}). For the first requirement, noting that the number of eavesdropped symbols is $\ell_1\alpha=\ell_1(2d+t-1)$, we will show that $\ell_1^2-\ell_1$ of these are linearly dependent on the remaining ones. The eavesdropper, which without loss of generality observes the first $\ell_1$ nodes, sees the symbols given in Fig.~\ref{fig:MBCREave}. Due to the code construction, each row of the figure represents evaluations of a polynomial of degree less than $d+t$, and each column evaluations of a polynomial of degree less than $d$. Hence, we observe that each of the symbols denoted with bold (blue) font in the matrix of Fig.~\ref{fig:MBCREave} is a linear combination of the remaining ones. Therefore, $H({\bf e})=\ell_1\alpha-\ell_1(\ell_1-1)=H({\bf r})$. In order to show that the second requirement also holds, we present a method to decode the randomness ${\bf r}$ given ${\bf u}$ and the data stored on any $\ell_1$ nodes. Once we know the data symbols ${\bf u}$, we can remove the monomials associated with the data symbols from $F(X,Y)$, and their contribution from the polynomial evaluations stored on the DSS.
Let $\widehat{F}(X,Y)$ denote the bivariate polynomial that we obtain by removing the data monomials: \begin{align} \label{eq:MBCR_eaves_poly1} \widehat{F}(X,Y) &=\sum_{\substack{0 \leq i < \ell_1,\\ 0 \leq j < \ell_1}}a_{ij}X^iY^j + \sum_{\substack{0 \leq i < \ell_1,\\ \ell_1 \leq j < k}}a_{ij}X^iY^j& \nonumber \\ &+ \sum_{\substack{\ell_1 \leq i < k,\\ 0 \leq j < \ell_1}}a_{ij}X^iY^j + \sum_{\substack{0\leq i < \ell_1,\\ k \leq j < d+t}}b_{ij}X^iY^j& \nonumber \\ &+ \sum_{\substack{k\leq i < d,\\ 0 \leq j < \ell_1}}c_{ij}X^iY^j& \end{align} $\widehat{F}(X,Y)$ can be rewritten as: \begin{align} \label{eq:MBCR_eaves_poly2} \widehat{F}(X,Y) &=\sum_{\substack{0 \leq i < \ell_1,\\ 0 \leq j < \ell_1}}\hat{a}_{ij}X^iY^j + \sum_{\substack{0 \leq i < \ell_1,\\ \ell_1 \leq j < d+t}}\hat{b}_{ij}X^iY^j& \nonumber \\ &+ \sum_{\substack{\ell_1 \leq i < d,\\ 0 \leq j < \ell_1}}\hat{c}_{ij}X^iY^j& \end{align} where \begin{align} \{\hat{a}_{ij}\}_{\substack{0 \leq i < \ell_1,\\ 0 \leq j < \ell_1}} &= \{a_{ij}\}_{\substack{0 \leq i < \ell_1,\\ 0 \leq j < \ell_1}}& \nonumber \\ \{\hat{b}_{ij}\}_{\substack{0 \leq i < \ell_1,\\ \ell_1 \leq j < k}} &= \{a_{ij}\}_{\substack{0 \leq i < \ell_1,\\ \ell_1 \leq j < k}}& \nonumber \\ \{\hat{b}_{ij}\}_{\substack{0\leq i < \ell_1,\\ k \leq j < d+t}} &= \{b_{ij}\}_{\substack{0\leq i < \ell_1,\\ k \leq j < d+t}}& \nonumber \\ \{\hat{c}_{ij}\}_{\substack{\ell_1 \leq i < k,\\ 0 \leq j < \ell_1}} &= \{a_{ij}\}_{\substack{\ell_1 \leq i < k,\\ 0 \leq j < \ell_1}}& \nonumber \\ \text{and}~~\{\hat{c}_{ij}\}_{\substack{k\leq i < d,\\ 0 \leq j < \ell_1}} &= \{c_{ij}\}_{\substack{k\leq i < d,\\ 0 \leq j < \ell_1}}.& \end{align} $\widehat{F}(X,Y)$ in (\ref{eq:MBCR_eaves_poly2}) takes the same form as $F(X,Y)$ in (\ref{eq:mbcr_poly}) with $k$ replaced by $\ell_1$. Therefore the randomness ${\bf r}$, i.e., the coefficients of $\widehat{F}(X,Y)$ in (\ref{eq:MBCR_eaves_poly2}), can be decoded from the data observed on the $\ell_1$ nodes using the data reconstruction method described in \cite{Wang:Exact12}. Thus, we obtain the following result. \begin{proposition} The secrecy capacity at the MBCR point for a file size of $\mathcal{M}=k(2d-k+t)$ is given by $\mathcal{M}^{s}=k(2d-k+t)-\ell_1(2d-\ell_1+t)$ for any $n\geq d+t$. \end{proposition} We list some instances of this construction in Table~\ref{tab:Coop2} in Appendix~\ref{app:CoopTable}. As evident from the table, cooperation helps to reduce the repair bandwidth if $d<n-t$. Thus, (secure) coding schemes for the case of $n>d+t$ are of significant interest in order to reduce the repair bandwidth in cooperative repair. \section{Secure MSCR Codes} We first consider an upper bound on the secure file size, and then utilize appropriate secrecy precoding mechanisms to construct schemes that achieve it. \subsection{Upper bound on the secure file size} At the MSCR point, the nodes have the minimum possible storage, i.e., $\alpha=\frac{{\mathcal{M}}}{k}$. Using the cut-set analysis given in the previous section, one finds that the minimum repair bandwidth is attained with $\beta=\beta'=\frac{\alpha}{d-k+t}=\frac{{\mathcal{M}}}{k(d-k+t)}$. (See also~\cite{Kermarrec:Repairing11,Oggier:Coding12}.) At the MSCR point, therefore, the amount of downloaded data can be larger than the amount stored at a node. Thus, under secrecy constraints, we consider two eavesdropper types: storage-only eavesdroppers (${\cal E}_1$) and storage-and-download eavesdroppers (${\cal E}_2$).
Using the sizes of these sets, we denote the eavesdropper setting with $(\ell_1,\ell_2)$ as introduced in Section~\ref{sec:sys_model}. Here, the eavesdroppers in ${\cal E}_2$ observe the data downloaded both from live nodes and from cooperating nodes. Similar to the secure file size bound analysis given in the previous section, we obtain the following bound. \begin{eqnarray} \mathcal{M}^{s} &\leq& \sum\limits_{i=0}^{k-1} (1-l_1^i-l_2^i) \min \Big\{ \alpha - I({\bf s}_{i};{\bf d}_{i,{\cal E}_2}), \nonumber\\ &&\quad (d-i)\beta + (t-1)\beta'\Big\} \label{eq:CoBound4}. \end{eqnarray} Here, we consider repair stages with $u_i=1$ node each, where $l_1^i$ and $l_2^i$ indicate whether the node repaired at stage $i$ is an eavesdropper from ${\cal E}_1$ or from ${\cal E}_2$, respectively. Compared to the MBCR bounds, due to eavesdroppers in ${\cal E}_2$, nodes that are not eavesdropped may leak information during their participation in the repair of a node observed by an ${\cal E}_2$-type eavesdropper. Thus, the values of the cuts of type $1$ include additional penalty terms $I({\bf s}_{i};{\bf d}_{i,{\cal E}_2})$, counting the leakage from the storage at the $i$-th node to the nodes indexed with ${\cal E}_2$. (Here, the cut value can be written as $H({\bf s}_i|{\bf d}_{i,{\cal E}_2})=H({\bf s}_i)-I({\bf s}_i;{\bf d}_{i,{\cal E}_2})$.) Considering the MSCR point values of $\alpha$, $\beta$, and $\beta'$ given above, the second term of \eqref{eq:CoBound4} will be loose. (The cases considered for \eqref{eq:CoBound2} and \eqref{eq:CoBound3}, when specialized to the MSCR point, do not give a tighter bound than that of \eqref{eq:CoBound4}.) Hence, considering that the first $k-\ell_1-\ell_2$ repairs are eavesdropper-free, \eqref{eq:CoBound4} evaluates to the following bound. \begin{proposition}\label{thm:MSCRbound} Cooperative regenerating codes operating at the MSCR point with a secure file size of $\mathcal{M}^{s}$ satisfy \begin{eqnarray} \mathcal{M}^{s} \leq \sum\limits_{i=0}^{k-\ell_1-\ell_2-1} \left(\alpha - I({\bf s}_i;{\bf d}_{i,{\cal E}_2})\right), \end{eqnarray} where the MSCR point is given by $\beta=\beta'=1$ and $\alpha=d-k+t$, for a file size of ${\mathcal{M}}=k(d-k+t)$. In addition, at MSCR, one can bound $I({\bf s}_i;{\bf d}_{i,{\cal E}_2})\geq \beta'=\beta$ and obtain the bound $$\mathcal{M}^{s} \leq (k-\ell_1-\ell_2)(\alpha-\beta).$$ \end{proposition} \subsection{Code construction for secure MSCR when $k=t=2$} We adopt an interference alignment approach based on the one proposed in~\cite{LeScouarnec:Exact12} for $k=t=2$. (See also \cite{Shah:Interference12,Suh:Exact10}.) For any $(n,k,d,t)$ with $d\geq k$ and $n=d+t$, we have $\alpha=d-k+t=n-2$ and ${\mathcal{M}}=k(d-k+t)=2(d-k+t)=2\alpha$ at the MSCR point. From the bound given in Proposition~\ref{thm:MSCRbound}, a positive secure file size is achievable only when $(\ell_1,\ell_2)=(1,0)$ or $(\ell_1,\ell_2)=(0,1)$ for $k=2$. The corresponding bounds are given by $\mathcal{M}^{s} \leq \alpha$ and $\mathcal{M}^{s} \leq \alpha-1$, respectively. (For the latter bound, as $|{\cal E}_2|=1$, ${\bf d}_{i,{\cal E}_2}$ necessarily consists of one symbol since $\beta=\beta'=1$, and the non-eavesdropped node participates in the repair of the eavesdropped node by sending $\beta$ or $\beta'$ symbols.) In the following, we construct codes achieving the stated bounds for both cases, hence establishing the secrecy capacity when $k=t=2$. We show this with codes having $n=d+t$, i.e., all the nodes participate in the repair.
The construction can be extended to cases with $n>d+t$ by following a similar approach and choosing a larger field size. \textbf{Case $1$: $\mathcal{M}^{s} = \alpha$ when $(\ell_1,\ell_2)=(1,0)$} Consider a finite field of size $q=n-1$ with multiplicative generator $w$, $\alpha$ random symbols $r_1, \cdots, r_\alpha$, and $\alpha$ secure information symbols $s_1, \cdots, s_\alpha$. Both the information and the random symbols are uniformly distributed over the field. We construct the file ${\mathcal{M}}=\{a_1 \triangleq r_1, \cdots, a_\alpha \triangleq r_\alpha, b_1 \triangleq r_1+s_1, \cdots, b_\alpha \triangleq r_\alpha+s_\alpha\}$ and consider the following placement: \begin{itemize} \item Store ${\bf a}=(a_1,\cdots,a_\alpha)^T$ at the first node, \item Store ${\bf b}=(b_1,\cdots,b_\alpha)^T$ at the second node, and \item Store ${\bf r}_i=(a_1+w^{(i-1) \mod \alpha}b_1, \cdots, a_\alpha+w^{(i+\alpha-2) \mod \alpha}b_\alpha)^T$ at the $i$-th redundancy node, $i\in\{1,\cdots, \alpha\}$. \end{itemize} The data collector can reconstruct the file ${\mathcal{M}}$ by contacting any $k$ nodes and solving $\alpha$ systems of $2$ equations in $2$ unknowns. From the file ${\mathcal{M}}$, it can then obtain the secure symbols $s_1, \cdots, s_\alpha$. For cooperative repair, consider the repair of the first systematic node: the $i$-th redundancy node, storing ${\bf r}_i={\bf a} + {\bf B}_i {\bf b}$, sends ${\bf v}_{1,i}{\bf r}_i={\bf w}_{1,i}{\bf a} + {\bf z} {\bf b}$, where ${\bf v}_{1,i}={\bf z} {\bf B}_i^{-1}$ and ${\bf z}=(1,\cdots,1)$. (The repair of the second systematic node is symmetric to that of the first, and, without loss of generality, we consider the repair of the two systematic nodes. Repairs of stages involving redundancy nodes can be performed as those of the systematic nodes after a change of variables.) The second systematic node, having received ${\bf c}_2=\{{\bf v}_{2,1}{\bf r}_1,\cdots, {\bf v}_{2,d}{\bf r}_d\}$, chooses the repair vector ${\bf v}_{1,0}$ such that ${\bf v}_{1,0}{\bf c}_2={\bf w}_{1,0}{\bf a}+{\bf z}{\bf b}$, and sends ${\bf v}_{1,0}{\bf c}_2$ to the first systematic node. Then, the first systematic node solves the $d+1$ equations $\{{\bf w}_{1,0}{\bf a}+{\bf z}{\bf b},{\bf w}_{1,1}{\bf a}+{\bf z}{\bf b}, \cdots, {\bf w}_{1,d}{\bf a}+{\bf z}{\bf b}\}$ in the $d+1$ unknowns $\{a_1,\cdots,a_\alpha,{\bf z}{\bf b}\}$. Noting that the regeneration and repair are similar to those proposed in~\cite{LeScouarnec:Exact12}, it remains to show the secrecy of the file. Here, regardless of whether the eavesdropped node is a systematic or a parity node, given the secure symbols ${\bf u}=\{s_1, \cdots, s_\alpha\}$, the eavesdropper can obtain $\alpha$ equations in the $\alpha$ unknowns ${\bf r}=\{r_1, \cdots, r_\alpha\}$, and solve for ${\bf r}$. This shows that $H({\bf r}|{\bf u},{\bf e})=0$, where the eavesdropper observes the content of the eavesdropped node, i.e., ${\bf e}={\bf s}_{{\cal E}_1}$. We also see that the content stored at the eavesdropped node necessarily satisfies $H({\bf e})=H({\bf s}_{{\cal E}_1})=\alpha$. Then, as the code satisfies both $H({\bf e})\leq H({\bf r})$ and $H({\bf r}|{\bf u},{\bf e})=0$, we obtain from Lemma~\ref{thm:SecrecyLemma} that $I({\bf u};{\bf e})=I(s_1, \cdots, s_\alpha;{\bf s}_{{\cal E}_1})=0$.
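As an illustration, the following minimal sketch (our code, not part of the original construction) instantiates the above scheme with assumed toy parameters $n=4$ and $d=k=t=2$, so that $q=n-1=3$, $\alpha=2$, and $w=2$ generates the multiplicative group of $\field{F}_3$; it verifies that any $k=2$ nodes suffice to reconstruct the file.

\begin{verbatim}
# Toy sketch of the (ell_1, ell_2) = (1, 0) secure code for k = t = 2.
# Assumed parameters: n = 4, d = 2, hence q = n - 1 = 3, alpha = 2,
# and w = 2 generates the multiplicative group of F_3 (ord(w) = alpha).
import itertools, random

q, alpha, w = 3, 2, 2

r = [random.randrange(q) for _ in range(alpha)]   # random symbols r_m
s = [random.randrange(q) for _ in range(alpha)]   # secure symbols s_m
a = r[:]                                          # a_m = r_m
b = [(rm + sm) % q for rm, sm in zip(r, s)]       # b_m = r_m + s_m

def coeff(node, m):
    """Coefficients (c_a, c_b) of (a_m, b_m) in the symbol at `node`,
    position m (0-based)."""
    if node == 0: return (1, 0)                   # first systematic node
    if node == 1: return (0, 1)                   # second systematic node
    i = node - 1                                  # redundancy node index 1..alpha
    return (1, pow(w, (i + m - 1) % alpha, q))    # exponent (i+m-2) mod alpha

contents = []
for node in range(2 + alpha):
    row = []
    for m in range(alpha):
        ca, cb = coeff(node, m)
        row.append((ca * a[m] + cb * b[m]) % q)
    contents.append(row)

# Data collection: any k = 2 nodes determine (a_m, b_m) per position, since
# every per-position 2x2 coefficient matrix is invertible over F_q
# (distinct exponents mod alpha give distinct powers of w).
for n1, n2 in itertools.combinations(range(2 + alpha), 2):
    for m in range(alpha):
        (c1a, c1b), (c2a, c2b) = coeff(n1, m), coeff(n2, m)
        det = (c1a * c2b - c1b * c2a) % q
        inv = pow(det, q - 2, q)                  # Fermat inverse, det != 0
        am = inv * (c2b * contents[n1][m] - c1b * contents[n2][m]) % q
        bm = inv * (c1a * contents[n2][m] - c2a * contents[n1][m]) % q
        assert (am, bm) == (a[m], b[m])
print("secure symbols recovered:",
      [(bi - ai) % q for ai, bi in zip(a, b)] == s)
\end{verbatim}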
\textbf{Case $2$: $\mathcal{M}^{s} = \alpha-1$ when $(\ell_1,\ell_2)=(0,1)$} We modify the above construction by considering the file ${\mathcal{M}}=\{a_1 \triangleq r_1, \cdots, a_\alpha \triangleq r_\alpha, b_1 \triangleq r_1+s_1, \cdots, b_{\alpha-1} \triangleq r_{\alpha-1}+s_{\alpha-1}, b_\alpha\triangleq r_{\alpha+1}\}$. The regeneration and repair parts are the same as in Case 1. We now show that the secrecy constraint is satisfied. The content of the eavesdropped node ${\bf s}_{{\cal E}_2}$ is generated from the downloaded data ${\bf d}_{{\cal E}_2}$. Thus, we need to show $I({\bf u};{\bf e})=0$ with ${\bf u}=\{s_1,\cdots,s_{\alpha-1}\}$ and ${\bf e}={\bf d}_{{\cal E}_2}$. Without loss of generality, we consider the eavesdropper observing the first systematic node. Considering the repair process described above, we have ${\bf e}={\bf d}_{{\cal E}_2}=\{{\bf w}_{1,0}{\bf a}+{\bf z}{\bf b},{\bf w}_{1,1}{\bf a}+{\bf z}{\bf b}, \cdots, {\bf w}_{1,d}{\bf a}+{\bf z}{\bf b}\}$, from which we obtain that $H({\bf e})\leq \alpha+1$. In addition, as the eavesdropper can solve for $({\bf a},{\bf z}{\bf b})$, it can solve for ${\bf r}=\{r_1,\cdots,r_{\alpha+1}\}$ from these $\alpha+1$ equations in $({\bf a},{\bf z}{\bf b})$, after canceling out the secure symbols ${\bf u}=\{s_1,\cdots,s_{\alpha-1}\}$. This shows that $H({\bf r}|{\bf u},{\bf e})=0$. From this, together with $H({\bf r})=\alpha+1$ and Lemma~\ref{thm:SecrecyLemma}, we obtain that $I({\bf u};{\bf e})=I(s_1, \cdots, s_{\alpha-1};{\bf d}_{{\cal E}_2})=0$. \begin{proposition} The secrecy capacity at the MSCR point for a file size of ${\cal M}=k(d-k+t)$ is given by $\mathcal{M}^{s}=\alpha$ if $(\ell_1,\ell_2)=(1,0)$ and $k=t=2$, and by $\mathcal{M}^{s}=\alpha-1$ if $(\ell_1,\ell_2)=(0,1)$ and $k=t=2$. \end{proposition} \subsection{Code construction for secure MSCR when $d=k$} The above construction is limited to the $k=2$ case. Here, we provide a secure MSCR code for $d=k$, hence allowing $k>2$. (Note that as $d\geq k \geq \ell_1+\ell_2$, we necessarily have $\ell_1+\ell_2\leq d=k$ here.) Again, we apply the two-stage encoding, using an MRD code for the secrecy precoding. Consider ${\mathcal{M}}=k(d-k+t)=kt$, $\beta=\beta'=1$, $\alpha=d-k+t=t$, $\mathcal{M}^{s}=kt-(\ell_1+\ell_2)t-\ell_2(k-\ell_1-\ell_2)$, and $n\geq d+t$. We encode the data using the linearized polynomial $f(g)=\sum\limits_{i=0}^{{\mathcal{M}}-1} u_i g^{q^i}$. (This is the Gabidulin construction of MRD codes \cite{Gabidulin:Theory85} summarized in Section~\ref{sec:MBCR}.) The coefficients of $f(g)$ are given by the $\mathcal{M}^{s}$ data symbols, denoted by ${\bf u}$, and the ${\mathcal{M}}-\mathcal{M}^{s}$ random symbols, denoted by ${\bf r}$. The function $f(g)$ is evaluated at ${\mathcal{M}}$ points $\{g_1,\ldots, g_{\mathcal{M}}\}$ in $\field{F}_{q^m}$ that are linearly independent over $\field{F}_q$. (Here, the data and random symbols belong to $\field{F}_{q^m}$ with $m\geq {\mathcal{M}}$.) We denote the resulting evaluations as $x_i=f(g_i)$ for $i=1,\cdots,{\mathcal{M}}=kt$. We use the code of~\cite{Shum:Cooperative11} in this secrecy setting. We place the ${\mathcal{M}}=kt$ symbols $x_1,\cdots,x_{kt}$ into $t$ vectors ${\bf m}_1,\cdots,{\bf m}_t$, each having $k$ symbols. We encode these vectors with a Vandermonde matrix of size $k\times n$, whose columns are denoted by ${\bf g}_i$ for $i=1,\cdots,n$. We store ${\bf m}_j^T {\bf g}_i$, $j\in[t]$, at node $i$ (a toy sketch of this placement and of the repair procedure described next is given below).
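The following minimal sketch (our code) illustrates the second encoding stage and the $d=k$ cooperative repair; the MRD precoding stage is abstracted away, with symbols $x_1,\ldots,x_{kt}$ standing in for the linearized-polynomial evaluations, and the toy parameters ($k=d=3$, $t=2$, $n=5$, prime field $\field{F}_{13}$) are our assumptions.

\begin{verbatim}
# Toy sketch of the Vandermonde placement and d = k cooperative repair,
# with the MRD precoding abstracted away. Assumed parameters:
# k = d = 3, t = 2, n = d + t = 5, prime field F_13.
import random

p, k, t, n = 13, 3, 2, 5
x = [random.randrange(p) for _ in range(k * t)]       # stand-ins for f(g_i)
m = [x[j * k:(j + 1) * k] for j in range(t)]          # vectors m_1..m_t
g = [[pow(i + 1, e, p) for e in range(k)] for i in range(n)]  # columns g_i

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v)) % p

# Node i stores m_j^T g_i for j = 1..t (alpha = t symbols per node).
store = [[dot(m[j], g[i]) for j in range(t)] for i in range(n)]

def solve(A, y):
    """Gauss-Jordan elimination over F_p (A square and invertible)."""
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    size = len(M)
    for c in range(size):
        piv = next(rr for rr in range(c, size) if M[rr][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], p - 2, p)
        M[c] = [v * inv % p for v in M[c]]
        for rr in range(size):
            if rr != c and M[rr][c]:
                fac = M[rr][c]
                M[rr] = [(vr - fac * vc) % p for vr, vc in zip(M[rr], M[c])]
    return [row[-1] for row in M]

# Repair of failed nodes {0, 1} from live nodes {2, 3, 4}: the j-th node
# under repair downloads m_j^T g_l from each live node l and solves for m_j.
failed, live = [0, 1], [2, 3, 4]
recovered = []
for j, node in enumerate(failed):
    downloads = [store[l][j] for l in live]
    mj = solve([g[l] for l in live], downloads)
    assert mj == m[j]
    recovered.append(mj)
# Cooperative exchange: node j sends m_j^T g_{j'} to each other node under
# repair; every failed node thus regains its full content.
for node in failed:
    assert [dot(recovered[j], g[node]) for j in range(t)] == store[node]
print("nodes", failed, "repaired exactly")
\end{verbatim}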
The data collector, by contacting any $k$ nodes, can obtain $k$ equations for each ${\bf m}_j$, and solve them to obtain $x_i$ for $i=1,\cdots,{\mathcal{M}}=kt$. It can then obtain the secure data symbols by interpolating the linearized polynomial. For node repair, consider that node $j\in[t]$ contacts $d=k$ live nodes, denoted $j_1,\ldots,j_k$. It downloads ${\bf m}_j^T{\bf g}_{j_l}$ from live node $j_l$ for $l=1,\cdots,k$. Node $j$ then obtains ${\bf m}_j$ by solving these $k$ equations, and sends ${\bf m}_j^T{\bf g}_{j'}$ to each remaining node under repair $j'\in[t]$, $j'\neq j$. Each node $j\in[t]$ repeats this procedure. (Node $j$ thereby also recovers ${\bf m}_{j'}^T{\bf g}_{j}$ by downloading a symbol from each of the other nodes under repair.) We now show that the secrecy constraint is met, assuming $\ell_2\leq t$. (Otherwise, this construction cannot achieve a positive secure file size, as the ${\cal E}_2$ eavesdroppers can obtain all the ${\bf m}_{[1:t]}$ symbols from their downloads.) We observe that the ${\cal E}_2$ nodes under repair obtain $\ell_2 k$ equations from the live nodes (these reveal $\ell_2$ of the ${\bf m}_j$s), and store an additional $\ell_2(t-\ell_2)=\ell_2(\alpha-\ell_2)$ symbols received from the remaining nodes under repair. The ${\cal E}_1$ nodes observe $\ell_1\alpha$ symbols. However, $\ell_1 \ell_2$ of these symbols are linearly dependent on the ones downloaded by the ${\cal E}_2$ nodes (as the ${\cal E}_2$ nodes know $\ell_2$ of the ${\bf m}_j$s). Therefore, using the given polynomial and the secure data of length $\mathcal{M}^{s}$, the eavesdropper can solve for the random symbols using these $\ell_2(k+\alpha-\ell_2)+\ell_1(\alpha-\ell_2)=\ell_2(k+t-\ell_2)+\ell_1 (t-\ell_2) =(k-\ell_1-\ell_2)\ell_2+(\ell_1+\ell_2)t={\mathcal{M}}-\mathcal{M}^{s}$ linearly independent evaluations of the polynomial. Thus, we have $H({\bf r}|{\bf u},{\bf e})=0$, where ${\bf e}$ denotes the observations of the ${\cal E}_1$ and ${\cal E}_2$ eavesdroppers. This construction also satisfies $H({\bf e})=\ell_2k+(\alpha-\ell_2)\ell_2+\ell_1(\alpha-\ell_2)=H({\bf r})$ as argued above, and it follows from Lemma~\ref{thm:SecrecyLemma} that $I({\bf u};{\bf e})=0$. This code achieves a secure file size of $kt-(\ell_1+\ell_2)t-\ell_2(k-\ell_1-\ell_2)$ when $\ell_2\leq t$. \begin{proposition} The secure file size of $\mathcal{M}^{s} =(k-\ell_1-\ell_2)[t-\ell_2]^+$ is achievable at the MSCR point for a file size of ${\mathcal{M}}=k(d-k+t)$ when $d=k$. \end{proposition} Note that this achieves the secrecy capacity when $\ell_2\leq 1$ for any $\ell_1$, as can be observed from the bound given in Proposition~\ref{thm:MSCRbound}. \section{Conclusion} \label{sec:con} A DSS stores data across multiple nodes. Such systems not only require resilience against node failures, but may also have to satisfy security constraints and perform multiple-node repairs. Regenerating codes proposed for DSS address node-failure resilience while efficiently trading off storage versus repair bandwidth. In this paper, we considered secure cooperative regenerating codes for DSS. The eavesdropper model analyzed in this paper belongs to the class of passive attack models, in which the eavesdroppers observe the content of the nodes in the system. Accordingly, we considered an $(\ell_1,\ell_2)$-eavesdropper, where the storage content of any $\ell_1$ nodes and the downloaded content of any $\ell_2$ nodes are leaked to the eavesdropper.
With such an eavesdropper model, we studied security in the multiple-repair scenario, in particular secure cooperative regenerating codes. For the minimum bandwidth cooperative regenerating (MBCR) point, we established a bound on the secrecy capacity, and, by modifying existing coding schemes in the literature, devised new codes achieving the secrecy capacity. For the minimum storage cooperative regenerating (MSCR) point, on the other hand, we proposed an upper bound and lower bounds on the secure file size, which match in special cases. The results show that it is possible to design regenerating codes that not only efficiently trade off storage versus repair bandwidth, but are also resilient against eavesdropping attacks in a cooperative repair scenario. Finally, as evident from some of our secrecy-achieving constructions, we would like to emphasize the role that maximum rank distance (MRD) codes can play in secrecy problems. In particular, we have utilized the Gabidulin construction~\cite{Gabidulin:Theory85} of MRD codes and the properties of linearized polynomials in obtaining some of the results. Similar properties of such codes have been utilized to achieve secrecy in earlier works \cite{Gabidulin:Ideals91,Gibson:security96,Koetter:Coding07,Silva:rank08}, and they proved their potential again here as an essential component for achieving secrecy in DSS. We list some avenues for further research here. The secrecy capacity of MSCR codes remains an open problem, as we have established optimal codes only under certain parameter settings. To attack this problem, codes for MSCR without security constraints have to be further investigated. One can also consider cooperative repair in a DSS having a locally repairable structure. Like other distributed systems, a DSS may exhibit simultaneous node failures that need to be recovered via local connections. To the best of our knowledge, this setting has not been studied (even without security constraints). Our ongoing efforts are on the design of coding schemes for DSS satisfying these properties. \appendices \section{Proof of Lemma~\ref{thm:SecrecyLemma}} \label{app:SecrecyLemma} \begin{proof} The proof follows from the classical techniques of~\cite{Wyner:Wire-tap75}, where instead of $0$-leakage, an $\epsilon$-leakage rate is considered. (The application of this technique to DSS is also considered in~\cite{Shah:Information11}.) We have \begin{eqnarray} I({\bf u};{\bf e}) &=& H({\bf e})-H({\bf e}|{\bf u}) \\ &\stackrel{(a)}{\leq}&H({\bf e})-H({\bf e}|{\bf u}) + H({\bf e}|{\bf u},{\bf r}) \\ &\stackrel{(b)}{\leq}&H({\bf r})-I({\bf e};{\bf r}|{\bf u}) \\ &\stackrel{(c)}{=}&H({\bf r}|{\bf u},{\bf e}) \\ &\stackrel{(d)}{=}&0 \end{eqnarray} where (a) follows from the non-negativity of $H({\bf e}|{\bf u},{\bf r})$, (b) uses the condition $H({\bf e})\leq H({\bf r})$, (c) is due to $H({\bf r}|{\bf u})=H({\bf r})$ as ${\bf r}$ and ${\bf u}$ are independent, and (d) is the condition $H({\bf r}|{\bf u},{\bf e})=0$. \begin{remark} If the eavesdropper has a vanishing probability of error in decoding ${\bf r}$ given ${\bf e}$ and ${\bf u}$, then, by Fano's inequality, one can write $H({\bf r}|{\bf u},{\bf e})\leq |{\bf r}|\epsilon$ and, by following the above steps, show the bound $I({\bf u};{\bf e})\leq |{\bf r}|\epsilon$, where $|{\bf r}|$ is the number of random bits and $\epsilon$ can be made small if the probability of error is vanishing. This shows that the leakage rate $I({\bf u};{\bf e})/|{\bf e}|$ is vanishing. (See, e.g.,~\cite{Wyner:Wire-tap75}.)
\end{remark} \end{proof} \section{Proof of Proposition~\ref{thm:MBCRbound}} \label{sec:MBCRBounds} \begin{proof} \eqref{eq:CoBound1} evaluates to the following bound at the MBCR point for a file size of ${\mathcal{M}}=k(2d-k+t)$. \begin{eqnarray} \mathcal{M}^{s}&\leq& \bar{{\mathcal{M}}}^{s} \triangleq k(2d-k+t)-\ell_1(2d-\ell_1+t) \nonumber\\ &=& (k-\ell_1)(2d+t-k-\ell_1) \label{eq:AppMBCR} \end{eqnarray} We compare this with the bounds \eqref{eq:CoBound2} and \eqref{eq:CoBound3}. \begin{itemize} \item If $t\geq k$: \eqref{eq:CoBound2} evaluates to the following bound \begin{eqnarray} \mathcal{M}^{s}\leq \bar{{\mathcal{M}}}^{s}_1 \triangleq (k-\ell_1)(2d+t-k). \end{eqnarray} Here, as $\ell_1\geq 0$, $\bar{{\mathcal{M}}}^{s} \leq \bar{{\mathcal{M}}}^{s}_1$. Hence, \eqref{eq:AppMBCR} gives a tighter bound. \item If $t\leq k$ and $\ell_1\leq at$ with $a=\floorb{k/t}$: Let $\ell_1=\tilde{a}t+\tilde{b}$, where $\tilde{a}=\floorb{\ell_1/t}$ and $\tilde{b}\in [0,t-1]$. The expression for $S$ in \eqref{eq:CoBound3} at the MBCR point is given by \begin{eqnarray} S &=& 2\ell_1(d-\tilde{a}t) + t^2\tilde{a}(\tilde{a}+1) \nonumber\\ &\stackrel{(a)}{=}& \ell_1(2d-2\ell_1+2\tilde{b}) + (\ell_1-\tilde{b})(\ell_1-\tilde{b}+t)\nonumber\\ &=& \ell_1(2d-\ell_1+\tilde{b}+t)-\tilde{b}(\ell_1-\tilde{b}+t)\nonumber\\ &=& \ell_1(2d-\ell_1+t)-\tilde{b}(t-\tilde{b}) \end{eqnarray} where in (a) we used $\tilde{a}t=\ell_1-\tilde{b}$. Therefore, \eqref{eq:CoBound3} evaluates to \begin{eqnarray} \mathcal{M}^{s}&\leq& \bar{{\mathcal{M}}}^{s}_2 \triangleq k(2d+t-k)-\ell_1(2d-\ell_1+t)\nonumber\\ &&\quad\quad\quad {+}\:\tilde{b}(t-\tilde{b}). \end{eqnarray} As $\tilde{b}(t-\tilde{b})\geq 0$, we obtain that $\bar{{\mathcal{M}}}^{s}\leq \bar{{\mathcal{M}}}^{s}_2$. Hence, \eqref{eq:AppMBCR} gives a tighter bound. \item If $t\leq k$ and $\ell_1\geq at$ with $a=\floorb{k/t}$: Let $\ell_1=at+\tilde{b}$ and $k=at+b$, where $b\in[0,t-1]$ and $\tilde{b}=\ell_1-at\in[0,b]$ since $\ell_1\leq k$. The expression for $S$ in \eqref{eq:CoBound3} at the MBCR point is given by \begin{eqnarray} S &=& 2\ell_1(d-at) + t^2a(a+1) + (\ell_1-at)(t-b) \nonumber\\ &\stackrel{(a)}{=}& \ell_1(2d-2\ell_1+2\tilde{b}) + (\ell_1-\tilde{b})(\ell_1-\tilde{b}+t) \nonumber\\ &&{+}\: \tilde{b}(t-b)\nonumber\\ &=& \ell_1(2d-\ell_1+t)-\tilde{b}(b-\tilde{b}) \end{eqnarray} where in (a) we used $at=\ell_1-\tilde{b}$. Therefore, \eqref{eq:CoBound3} evaluates to \begin{eqnarray} \mathcal{M}^{s}&\leq& \bar{{\mathcal{M}}}^{s}_3 \triangleq k(2d+t-k)-\ell_1(2d-\ell_1+t)\nonumber\\ &&\quad\quad\quad{+}\:\tilde{b}(b-\tilde{b}). \end{eqnarray} As $\tilde{b}(b-\tilde{b})\geq 0$ due to $k\geq \ell_1$, we obtain that $\bar{{\mathcal{M}}}^{s}\leq \bar{{\mathcal{M}}}^{s}_3$. Hence, \eqref{eq:AppMBCR} gives a tighter bound. \end{itemize} Combining the cases above, we see that the upper bound on the secure MBCR file size is given by \eqref{eq:AppMBCR}. \end{proof} \section{NRBW values for MBCR point in DSS}\label{app:CoopTable} Normalized repair bandwidth (NRBW) values at the MBCR point (cf.~Proposition~\ref{thm:MBCRbound}) are given in the following tables. The $\ell_1=0$ rows correspond to systems without security constraints, and the $t=1$ rows to the non-cooperative case. Red (green) font highlights cases with greater (respectively, smaller) cooperative NRBW ($\gamma/\mathcal{M}^{s}$) compared to that of $t=1$. We observed that the same trend continues for higher $n$ values.
\small \begin{table}[htbp] \caption{NRBW for $n=4,5$, $d\geq k$, $d+t=n$.} \label{tab:Coop} \centering \begin{tabular}{|cccccccccc|}\hline $n$ & $k$ & $l$ & $t$ & $d$ & $\beta/\mathcal{M}^{s}$ & $\beta'/\mathcal{M}^{s}$ & $\gamma/\mathcal{M}^{s}$ & ${\mathcal{M}}$ & $\mathcal{M}^{s}$ \\ \hline 4 & 2 & 0 & 1 & 3 & 0.2000 & 0.1000 & 0.6000 & 10 & 10\\ \hline 4 & 2 & 0 & 2 & 2 & 0.2500 & 0.1250 & \textbf{\textcolor{red}{0.6250}} & 8 & 8\\ \hline 4 & 2 & 1 & 1 & 3 & 0.5000 & 0.2500 & 1.5000 & 10 & 4\\ \hline 4 & 2 & 1 & 2 & 2 & 0.6667 & 0.3333 & \textbf{\textcolor{red}{1.6667}} & 8 & 3\\ \hline 4 & 3 & 0 & 1 & 3 & 0.1667 & 0.0833 & 0.5000 & 12 & 12\\ \hline 4 & 3 & 1 & 1 & 3 & 0.3333 & 0.1667 & 1.0000 & 12 & 6\\ \hline 4 & 3 & 2 & 1 & 3 & 1.0000 & 0.5000 & 3.0000 & 12 & 2\\ \hline 5 & 2 & 0 & 1 & 4 & 0.1429 & 0.0714 & 0.5714 & 14 & 14\\ \hline 5 & 2 & 0 & 2 & 3 & 0.1667 & 0.0833 & \textbf{\textcolor{red}{0.5833}} & 12 & 12\\ \hline 5 & 2 & 0 & 3 & 2 & 0.2000 & 0.1000 & \textbf{\textcolor{red}{0.6000}} & 10 & 10\\ \hline 5 & 2 & 1 & 1 & 4 & 0.3333 & 0.1667 & 1.3333 & 14 & 6\\ \hline 5 & 2 & 1 & 2 & 3 & 0.4000 & 0.2000 & \textbf{\textcolor{red}{1.4000}} & 12 & 5\\ \hline 5 & 2 & 1 & 3 & 2 & 0.5000 & 0.2500 & \textbf{\textcolor{red}{1.5000}} & 10 & 4\\ \hline 5 & 3 & 0 & 1 & 4 & 0.1111 & 0.0556 & 0.4444 & 18 & 18\\ \hline 5 & 3 & 0 & 2 & 3 & 0.1333 & 0.0667 & \textbf{\textcolor{red}{0.4667}} & 15 & 15\\ \hline 5 & 3 & 1 & 1 & 4 & 0.2000 & 0.1000 & 0.8000 & 18 & 10\\ \hline 5 & 3 & 1 & 2 & 3 & 0.2500 & 0.1250 & \textbf{\textcolor{red}{0.8750}} & 15 & 8\\ \hline 5 & 3 & 2 & 1 & 4 & 0.5000 & 0.2500 & 2.0000 & 18 & 4\\ \hline 5 & 3 & 2 & 2 & 3 & 0.6667 & 0.3333 & \textbf{\textcolor{red}{2.3333}} & 15 & 3\\ \hline 5 & 4 & 0 & 1 & 4 & 0.1000 & 0.0500 & 0.4000 & 20 & 20\\ \hline 5 & 4 & 1 & 1 & 4 & 0.1667 & 0.0833 & 0.6667 & 20 & 12\\ \hline 5 & 4 & 2 & 1 & 4 & 0.3333 & 0.1667 & 1.3333 & 20 & 6\\ \hline 5 & 4 & 3 & 1 & 4 & 1.0000 & 0.5000 & 4.0000 & 20 & 2\\ \hline \end{tabular} \end{table} \small \begin{table}[htbp] \caption{NRBW for $n=4,5$, $d\geq k$, $d+t\leq n$.} \label{tab:Coop2} \centering \begin{tabular}{|cccccccccc|}\hline $n$ & $k$ & $l$ & $t$ & $d$ & $\beta/\mathcal{M}^{s}$ & $\beta'/\mathcal{M}^{s}$ & $\gamma/\mathcal{M}^{s}$ & ${\mathcal{M}}$ & $\mathcal{M}^{s}$ \\ \hline 4 & 2 & 0 & 1 & 3 & 0.2000 & 0.1000 & 0.6000 & 10 & 10\\ \hline 4 & 2 & 0 & 1 & 2 & 0.3333 & 0.1667 & 0.6667 & 6 & 6\\ \hline 4 & 2 & 0 & 2 & 2 & 0.2500 & 0.1250 & \textbf{\textcolor{green}{0.6250}} & 8 & 8 \\ \hline 4 & 2 & 1 & 1 & 3 & 0.5000 & 0.2500 & 1.5000 & 10 & 4\\ \hline 4 & 2 & 1 & 1 & 2 & 1.0000 & 0.5000 & 2.0000 & 6 & 2\\ \hline 4 & 2 & 1 & 2 & 2 & 0.6667 & 0.3333 & \textbf{\textcolor{green}{1.6667}} & 8 & 3 \\ \hline 4 & 3 & 0 & 1 & 3 & 0.1667 & 0.0833 & 0.5000 & 12 & 12\\ \hline 4 & 3 & 1 & 1 & 3 & 0.3333 & 0.1667 & 1.0000 & 12 & 6\\ \hline 4 & 3 & 2 & 1 & 3 & 1.0000 & 0.5000 & 3.0000 & 12 & 2\\ \hline 5 & 2 & 0 & 1 & 4 & 0.1429 & 0.0714 & 0.5714 & 14 & 14\\ \hline 5 & 2 & 0 & 1 & 3 & 0.2000 & 0.1000 & 0.6000 & 10 & 10\\ \hline 5 & 2 & 0 & 2 & 3 & 0.1667 & 0.0833 & \textbf{\textcolor{green}{0.5833}} & 12 & 12 \\ \hline 5 & 2 & 0 & 1 & 2 & 0.3333 & 0.1667 & 0.6667 & 6 & 6\\ \hline 5 & 2 & 0 & 2 & 2 & 0.2500 & 0.1250 & \textbf{\textcolor{green}{0.6250}} & 8 & 8 \\ \hline 5 & 2 & 0 & 3 & 2 & 0.2000 & 0.1000 & \textbf{\textcolor{green}{0.6000}} & 10 & 10 \\ \hline 5 & 2 & 1 & 1 & 4 & 0.3333 & 0.1667 & 1.3333 & 14 & 6\\ \hline 5 & 2 & 1 & 1 & 3 & 0.5000 & 0.2500 & 1.5000 & 10 & 4\\ \hline 5 & 2 & 1 & 2 & 3 & 0.4000 
& 0.2000 & \textbf{\textcolor{green}{1.4000}} & 12 & 5 \\ \hline 5 & 2 & 1 & 1 & 2 & 1.0000 & 0.5000 & 2.0000 & 6 & 2\\ \hline 5 & 2 & 1 & 2 & 2 & 0.6667 & 0.3333 & \textbf{\textcolor{green}{1.6667}} & 8 & 3 \\ \hline 5 & 2 & 1 & 3 & 2 & 0.5000 & 0.2500 & \textbf{\textcolor{green}{1.5000}} & 10 & 4 \\ \hline 5 & 3 & 0 & 1 & 4 & 0.1111 & 0.0556 & 0.4444 & 18 & 18\\ \hline 5 & 3 & 0 & 1 & 3 & 0.1667 & 0.0833 & 0.5000 & 12 & 12\\ \hline 5 & 3 & 0 & 2 & 3 & 0.1333 & 0.0667 & \textbf{\textcolor{green}{0.4667}} & 15 & 15 \\ \hline 5 & 3 & 1 & 1 & 4 & 0.2000 & 0.1000 & 0.8000 & 18 & 10\\ \hline 5 & 3 & 1 & 1 & 3 & 0.3333 & 0.1667 & 1.0000 & 12 & 6\\ \hline 5 & 3 & 1 & 2 & 3 & 0.2500 & 0.1250 & \textbf{\textcolor{green}{0.8750}} & 15 & 8 \\ \hline 5 & 3 & 2 & 1 & 4 & 0.5000 & 0.2500 & 2.0000 & 18 & 4\\ \hline 5 & 3 & 2 & 1 & 3 & 1.0000 & 0.5000 & 3.0000 & 12 & 2\\ \hline 5 & 3 & 2 & 2 & 3 & 0.6667 & 0.3333 & \textbf{\textcolor{green}{2.3333}} & 15 & 3 \\ \hline 5 & 4 & 0 & 1 & 4 & 0.1000 & 0.0500 & 0.4000 & 20 & 20\\ \hline 5 & 4 & 1 & 1 & 4 & 0.1667 & 0.0833 & 0.6667 & 20 & 12\\ \hline 5 & 4 & 2 & 1 & 4 & 0.3333 & 0.1667 & 1.3333 & 20 & 6\\ \hline 5 & 4 & 3 & 1 & 4 & 1.0000 & 0.5000 & 4.0000 & 20 & 2\\ \hline \end{tabular} \end{table}
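For reference, the tabulated quantities follow directly from the MBCR-point formulas; the short sketch below (our code) regenerates the table rows, where the unnormalized per-stage values $\beta=2$ and $\beta'=1$ are an assumption inferred from the tabulated entries.

\begin{verbatim}
# Sketch (our code) regenerating the rows of the tables above from the
# MBCR-point formulas. The per-stage values beta = 2 and beta' = 1 are an
# assumption inferred from the tabulated entries.
def mbcr_row(n, k, l1, t, d):
    M = k * (2 * d - k + t)               # file size at the MBCR point
    Ms = M - l1 * (2 * d - l1 + t)        # secure file size bound (Appendix B)
    beta, beta_p = 2, 1                   # per live node / per cooperating node
    gamma = d * beta + (t - 1) * beta_p   # repair bandwidth per repaired node
    return (beta / Ms, beta_p / Ms, gamma / Ms, M, Ms)

# e.g. the first and fourth rows of the first table:
print(mbcr_row(4, 2, 0, 1, 3))   # -> (0.2, 0.1, 0.6, 10, 10)
print(mbcr_row(4, 2, 1, 2, 2))   # -> ~ (0.667, 0.333, 1.667, 8, 3)
\end{verbatim}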
{ "timestamp": "2012-10-16T02:01:05", "yymm": "1210", "arxiv_id": "1210.3664", "language": "en", "url": "https://arxiv.org/abs/1210.3664" }
\section{INTRODUCTION} The A15 phase Nb$_3$Sn compound \cite{Matthias53} is currently being used in a variety of large-scale scientific projects employing high-field superconducting magnets (above 10 T) \cite{miyazaki03}, including ITER (the International Thermonuclear Experimental Reactor) \cite{vostner06, mitchell12, devred12}, the 1 GHz NMR project \cite{wada02}, and the CERN LHC Luminosity Upgrade \cite{bottura12}. In these high-field magnets, the mechanical loads during cooldown (due to different thermal contractions) and operation (due to Lorentz forces) can be very large, and since the superconducting properties of Nb$_3$Sn strongly depend on strain \cite{rupp77, ekin79, tenhaken94, ekin87}, an overall performance degradation can take place. Therefore, for a magnet's sound design it is of fundamental importance to know the behavior of the superconducting parameters (namely the critical temperature, $T_c$, the upper critical field, $B_{c2}$, and the critical current, $I_c$) as a function of strain, $\varepsilon$. \begin{figure}[hb] \begin{center} \scalebox{0.32}{\includegraphics{figure1}} \caption{\label{fig:1}The arrangement of atoms in the cubic (a) and tetragonal (b) phases of A15 Nb$_3$Sn. The Nb atoms in the 2e sites form chains along the [001] direction, whereas those in the 4k sites form chains along the [100] and [010] directions. To make the distortion clear, in (b) the cell is stretched to the exaggerated value of $\epsilon$ = 40\%, with the Poisson's ratio $\nu$ set to 0.4.} \end{center} \end{figure} Of particular interest is the uniaxial stress (either in tension or compression) acting along the axial direction of the composite, multi-filamentary wires used in such systems: in a Cable-in-Conduit Conductor (CICC) \cite{hoenig79, bruzzone06, spadoni94}, for example, the Nb$_3$Sn wires are inserted into stainless steel conduits, and compressive stresses due to the different thermal contraction coefficients of the different materials become important \cite{nijhuis06, mitchell05}. Transverse load components might also be important \cite{ekin87, mondonico12}, but we focus here only on the uniaxial ones. \begin{figure*}[ht] \begin{center} \scalebox{0.55}{\includegraphics{figure2}} \caption{\label{fig:2}The electronic band structure and density of states for Nb$_3$Sn calculated for three representative strain states: +1.0\% ($\bullet$), zero applied strain ($\filleddiamond$), and -1.0\% ($\filledtriangleup$). The Fermi level is set to 0 eV.} \end{center} \end{figure*} Within the framework of the Unified Scaling Law \cite{ekin80}, many authors \cite{ekin80, taylor05, oh06, godeke06, arbelaez09, markiewicz06} have proposed modified scaling equations which take the uniaxial strain dependence into account through the so-called strain function, $s(\varepsilon)$, but few attempts have been made \cite{oh06, taylor05, markiewicz04} at obtaining a scaling law based on microscopic parameters. For this purpose, a first step is to accurately determine the electronic band structures, the phonon dispersion curves, and the electron-phonon coupling terms, and to study their evolution as a function of applied strain. To this aim, knowledge of the Nb$_3$Sn lattice cell deformation when a multifilamentary wire is subject to different stress components is of fundamental importance.
Recent high resolution X-ray diffraction experiments on mechanically loaded samples \cite{muzzi12} have shown in detail how the Nb$_3$Sn lattice cell deforms in the axial and the transverse directions; in particular, it was observed that the stress is completely transferred from the macroscopic level to the individual grains within the composite structure, so that a macroscopic uniaxial load directly corresponds to a stretching of the Nb$_3$Sn lattice cell along the same direction, with the cell contracting in the transverse direction by an amount corresponding to a Poisson's ratio $\nu$ equal to 0.38. The structural and electronic properties of Nb$_3$Sn have been theoretically studied by several groups \cite{klein01, sadigh98, lu97, mattheiss82, klein78, mattheiss75}, whereas the full phonon dispersion relations have been calculated by means of a tight-binding method \cite{weber84, weber84book} and - more recently - by an \emph{ab initio} pseudopotential approach \cite{tutuncu06}. In particular, the calculations by T\"ut\"unc\"u \emph{et al.} give evidence of a strong interaction between the electronic states near the Fermi level and several phonon modes (longitudinal acoustic phonons and a group of optical phonon modes with an average frequency of 4.5 THz) along the [111] direction. However, to the best of our knowledge no systematic \emph{ab initio} investigation has been made of the evolution of the band structure, phonon dispersion curves, and superconducting parameters (electron-phonon mass enhancement parameter, $\lambda$, and $T_c$) as a function of an applied uniaxial strain. In the present work, this issue has been addressed by employing the plane-wave pseudopotential method, density-functional theory, and a linear-response technique \cite{baroni87, baroni01}, and by using the results of T\"ut\"unc\"u \emph{et al.} for the undistorted cell as a starting baseline for our calculations. \section{DETAILS OF CALCULATIONS AND COMPUTATIONAL METHOD} We used density functional theory and density-functional perturbation theory \cite{baroni01} as implemented in the {\sc Quantum-ESPRESSO} software distribution \cite{QE}, within the local-density approximation \cite{troullier91}, a plane-wave expansion up to 40 Ry for the kinetic energy cutoff, and ultrasoft pseudopotentials for Nb and Sn \cite{pseudo}. The Brillouin zone has been sampled on an 8$\times$8$\times$8 Monkhorst-Pack (MP) mesh, corresponding to 126 special \textbf{k}-points within the irreducible part of the Brillouin zone (IBZ). We have also checked denser grids (up to 16$\times$16$\times$16), but the results did not change appreciably. Lattice dynamical calculations have been performed within the framework of self-consistent density functional perturbation theory (DFPT) \cite{baroni01}, in which the dynamical matrices are calculated by sampling the IBZ with 8 independent $\textbf{q}$-points in the tetragonal phase. Dynamical matrices at arbitrary wave vectors can then be obtained by Fourier interpolation on this mesh, and the phonon dispersion curves along arbitrary symmetry directions follow easily. In order to check the accuracy of the Fourier interpolation, we compared the results of this procedure with direct calculations on selected $\textbf{q}$-points not present in the grid.
\begin{figure*}[t] \begin{center} \scalebox{0.55}{\includegraphics{figure3}} \caption{\label{fig:3} (a) Phonon dispersion curves of Nb$_3$Sn at three different strain states: +1.0\% ($\bullet$), zero applied strain ($\filleddiamond$), and -1.0\% ($\filledtriangleup$); (b) phonon DOS at each calculated strain. For completeness, experimental curves are also reported.} \end{center} \end{figure*} A denser grid of \textbf{k}-points (24$\times$24$\times$24 MP divisions) has been used to determine the electron-phonon interaction parameter $\lambda$, calculated as the Brillouin-zone average of the mode-resolved coupling strengths $\lambda_{\textbf{q}j}$: \begin{equation} \lambda = \sum_{\textbf{q}j} W(\textbf{q}) \lambda_{\textbf{q}j} \end{equation} where $j$ indicates a phonon polarization branch, and the $W(\textbf{q})$ are the weights associated with the phonon wavevectors \textbf{q}, normalized to 1 in the first Brillouin zone. The lattice parameter of the cubic cell is set to 5.29 \AA~\cite{maier69, guritanu04}. For a uniaxial stress along the \emph{z}-direction ($\sigma_z$), and under the assumption that the system is transversally isotropic, the strain state can be expressed as: \begin{equation}\label{stress-strain} \begin{aligned} &\epsilon_x = \epsilon_y = -\nu\frac{\sigma_z}{E}\\ & \epsilon_z = \frac{\sigma_z}{E} \end{aligned} \end{equation} which reflects the variation of the lattice parameters of the tetragonally distorted cell. In Eq. \ref{stress-strain}, $\nu$ is the Poisson's ratio, whereas $E$ represents the Young's modulus. In our computations, $\nu$ has been set to the value measured in composite wires ($\nu$ = 0.4 \cite{muzzi12}), and the distortions have been calculated in the strain range $\pm$ 1.0\%, in steps of 0.2\%. \section{RESULTS AND DISCUSSION} The Nb$_3$Sn cubic phase belongs to the $(P\frac{4_2}{m}\bar3\frac{2}{n})$ space group with $O_h^3$ point-group symmetry, as shown in Fig.\ref{fig:1}(a). The Sn atoms are situated on a bcc matrix, whereas the faces of the cube are occupied by Nb atoms, which form three sets of orthogonal chains along the principal axes. When a uniaxial strain is applied along the \emph{c}-direction, the lattice is tetragonally distorted, with the Nb chains in the [001] direction differing from those in the [100] and [010] directions, as shown in Fig. \ref{fig:1}(b). The distorted structure has the reduced symmetry $D_{4h}^9 (P\frac{4_2}{m}\frac{2}{m}\frac{2}{c})$. Starting from the cubic cell, a uniaxial strain has been applied along the \emph{c}-direction, according to Eq. (\ref{stress-strain}). As a result, the deviatoric components of the strain lower the system's point-group symmetry: most of the phonon degeneracies are removed, and changes in both the electronic and the phonon dispersion bands are induced. \subsection{Electronic band structures and phonon dispersion curves} \begin{figure}[t] \begin{center} \scalebox{0.5}{\includegraphics{figure4}} \caption{\label{fig:4}The behavior of: (a) the superconducting critical temperature, $T_c$; (b) the \emph{el-ph} coupling, $\lambda$; and (c) the logarithmically averaged phonon frequency $\omega_{ln}$, as a function of an applied uniaxial strain. Lines are guides for the eye.} \end{center} \end{figure} The calculated electronic structure along several high-symmetry directions of the simple-cubic Brillouin zone is displayed in Fig. \ref{fig:2} for three representative values of the applied strain (zero, +1.0\%, and -1.0\%).
The energy bands of the cubic crystal are shown as diamonds, whereas the circles and the triangles represent the +1.0\% and -1.0\% strain states, respectively. Indeed, the tetragonal deformation does not affect the electronic structure in a severe way: the energy bands of the distorted and undistorted cells are almost unchanged, and there is no evident splitting of the cubic bands at the Fermi level, $E_F$. The electronic DOS is also not drastically affected by the tetragonal distortion. In Fig. \ref{fig:2}, the Fermi level is marked by a dashed horizontal line and is set to 0 eV. It is interesting to note that $E_F$ falls close to a sharp peak in the electronic DOS \cite{sadigh98}, with a value for the density of states of the order of 20 states$\slash$eV. This peak is generated by several nearly dispersionless bands crossing the Fermi level in the $\Gamma-M$, $\Gamma-R$, and $M-R$ directions and deriving from the \emph{4d} states of the Nb atoms \cite{tutuncu06}. \begin{figure}[t] \begin{center} \scalebox{0.47}{\includegraphics{figure5}} \caption{\label{fig:5}The density of states at the Fermi level, $N(E_F)$, calculated as a function of strain. The curve is overlaid with the product function $\lambda\omega_{ln}^2$, which to a good approximation should be proportional to $N(E_F)$ \cite{mcmillan68}.} \end{center} \end{figure} The phonon dispersion curves, calculated along several high-symmetry directions in the Brillouin zone, are plotted for three representative strains in Fig. \ref{fig:3}; the phonon DOS is depicted in the right panel. There is good agreement with previously published results \cite{tutuncu06} at $\epsilon = 0$. For the sake of completeness, some inelastic neutron scattering measurements \cite{pintschovius85, axe65, shirane78} are also reported in Fig. \ref{fig:3}a, and again good agreement is obtained. \subsection{Derivation of the superconducting $T_c$ as a function of strain} The modifications of the phonon dispersion curves induced by a tetragonal distortion should have a strong effect on $T_c$ through the strain sensitivities of the averaged phonon frequency $\omega_{ln}$ and of the \emph{el-ph} coupling $\lambda$. Therefore, both $\omega_{ln}(\epsilon)$ and $\lambda(\epsilon)$ have been explicitly calculated, and $T_c(\epsilon)$ has been estimated by means of the Allen-Dynes modification of the McMillan formula \cite{mcmillan68, allen75}: \begin{equation} T_c = \frac{\hbar\omega_{ln}}{1.20}e^{-\frac{1.04(1+\lambda)}{\lambda - \mu^*(1 + 0.62\lambda)}} \end{equation} where $\mu^*$ is the effective Coulomb-repulsion parameter, which describes the interaction between electrons, and $\omega_{ln}$ is a weighted, logarithmically averaged phonon frequency, defined as: \begin{equation} \omega_{ln} = e^{\frac{2}{\lambda}\int_0^{+\infty} \frac{\mathrm{d}\omega}{\omega} \! \alpha^2(\omega)F(\omega)\ln\omega \,} \end{equation} where $\alpha^2(\omega)F(\omega)$ is the Eliashberg spectral function. We assumed a negligible strain dependence of $\mu^*$ compared to the other parameters, and kept it constant in our computations \cite{taylor05}. Our results are reported in Fig. \ref{fig:4}a-c. As far as the cubic phase is concerned, our findings are in good agreement with those of T\"ut\"unc\"u \emph{et al.} \cite{tutuncu06}: a group of six phonon modes at the R-point (whose averaged phonon frequency is approximately 140 cm$^{-1}$) is found to strongly interact with the \emph{p-d} electronic states near the Fermi level.
In these modes only the Nb chains vibrate, the Sn atoms being frozen at their equilibrium positions. The $\lambda_{\textbf{q}j}$ corresponding to these modes lie in the range 0.134$-$0.197. The overall electron-phonon interaction parameter ($\lambda$ = 1.85) agrees with the experimentally measured value \cite{wolf80}, and - choosing $\mu^*$ = 0.25 - the estimated critical temperature for the strain-free state is $T_c$ = 18.3 K (also in agreement with the highest reported $T_c$ \cite{hanak64}). As can be clearly seen in Figs. \ref{fig:4}b and \ref{fig:4}c, both $\omega_{ln}(\epsilon)$ and $\lambda(\epsilon)$ show a parabolic profile as a function of strain. Very close to the cubic phase (the strain-free cell), $\omega_{ln}$ has a maximum, implying a softening of the logarithmically averaged phonon frequencies when the system undergoes a distortion. The same behavior is found for $\lambda$, whose maximum is $\sim$ 1.85. The strength of the \emph{el-ph} interaction weakens as $\vert\epsilon\vert$ increases. As a result, by varying the axial strain, the $T_c$ curve assumes the characteristic bell shape (Fig. \ref{fig:4}a) \cite{flukiger84,ekin07}. In addition, the curve shows a slight asymmetry with respect to the maximum, due mainly to an asymmetry in the phononic contribution ($\omega_{ln}$). However, a clear confirmation of this would require a more detailed analysis, based on an increased density of points on the curve. Qualitatively, the $T_c$ vs. $\epsilon$ curve reproduces the experimental strain sensitivity of the critical current found in A15 superconductors: as is well known, when a Nb$_3$Sn multifilamentary wire is subjected to a longitudinal strain, its critical current shows a maximum at zero intrinsic strain and decreases reversibly with the applied load (see for example Ref. \onlinecite{godeke_thesis} and references therein), with a slight asymmetry\cite{flukiger05} between the compressive and tensile sides. In this sense, these first-principles calculations suggest that the origin of such strain sensitivity in Nb$_3$Sn is intrinsic and microscopic in nature. Our calculations also show that $N(E_F)$ is influenced by strain. This quantity is related to the \emph{el-ph} coupling constant and to the averaged phonon frequency through the following expression \cite{mcmillan68}: \begin{equation}\label{eq:dos} N(E_F) = \frac{M \omega_{RMS}^2 \lambda}{<I^2>} \end{equation} \begin{figure}[ht] \begin{center} \scalebox{0.3}{\includegraphics{figure6}} \caption{\label{fig:6}Direct comparison between the theoretical (dashed line) and experimental (line and markers) strain functions\cite{demarzi12}. The theoretical $s(\epsilon)$ correctly reproduces a bell shape, the mismatch being attributed to the extrinsic effects generally observed in technological wires.} \end{center} \end{figure} in which $<I^2>$ is the average over the Fermi surface of the squared \emph{el-ph} matrix element, $\omega_{RMS}$ is a weighted RMS phonon frequency, and $M$ is the average ionic mass. By further assuming that the strain sensitivities of the normalized averaged frequencies $\omega_{RMS}$ and $\omega_{ln}$ are the same \cite{lim83} and that $<I^2>$ does not depend on the applied strain, it follows that $N(E_F, \epsilon)\propto\lambda\omega_{ln}^2$. The strain dependences of $N(E_F)$, calculated either directly or through Eq. \ref{eq:dos}, are consistent with one another, as can be clearly seen in Fig. \ref{fig:5}.
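As a numerical cross-check of the strain-free values quoted above, the following minimal sketch (our code) evaluates the Allen-Dynes formula and the strain function $s(\epsilon)$ used below; since $\omega_{ln}$ is not quoted explicitly in the text, the value $\omega_{ln}\approx 146$ cm$^{-1}$ used here is an assumption, chosen to be consistent with $\lambda=1.85$, $\mu^*=0.25$, and $T_c=18.3$ K.

\begin{verbatim}
# Sketch (our code): Allen-Dynes/McMillan estimate of Tc from the quoted
# strain-free values (lambda = 1.85, mu* = 0.25). The logarithmic average
# omega_ln ~ 146 cm^-1 is an assumption, chosen consistent with Tc = 18.3 K.
import math

CM1_TO_K = 1.4388          # hbar*omega / k_B for omega = 1 cm^-1

def tc_allen_dynes(omega_ln_cm1, lam, mu_star):
    omega_K = omega_ln_cm1 * CM1_TO_K
    return (omega_K / 1.20) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam)))

tc0 = tc_allen_dynes(146.0, 1.85, 0.25)
print(f"Tc(strain-free) ~ {tc0:.1f} K")   # ~ 18.3 K

# Strain function s(eps) = [Tc(eps)/Tc(0)]^w with w ~ 3 (see below):
def strain_function(tc_eps, w=3.0):
    return (tc_eps / tc0) ** w
\end{verbatim}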
The results of Figs.~\ref{fig:4} and \ref{fig:5} thus provide evidence that the strain affects both the phononic and the electronic properties at the same time. Indeed, existing studies consider only the two extreme cases where either the lattice deformations or the modifications of the electronic properties are considered as the source of the strain sensitivity. Markiewicz \cite{markiewicz04, markiewicz04bis, markiewicz06} has attempted to correlate the microscopic full invariant analysis with the Eliashberg-based relations for $T_c$ through strain-induced modifications in the electron-phonon spectrum. In this model, however, the strain-induced changes in $N(E_F)$ are only accounted for through a strain-modified frequency dependence of the \emph{el-ph} interaction. In other words, the changes in $N(E_F)$ are not directly calculated, although the model is sufficiently accurate. Furthermore, microscopic theoretical predictions by Taylor and Hampshire \cite{taylor05} and Markiewicz \cite{markiewicz04bis} have suggested that the variations of the superconducting properties of Nb$_3$Sn multifilamentary wires subjected to uniaxial strain are correlated with changes of the phonon spectrum, rather than with the electronic density of states. On the other hand, many works\cite{weger74, klein79bis, lim83} have linked the superconducting properties of A15 compounds to variations in the electronic properties and $N(E_F)$. From the experimental point of view, it is important to highlight that new methods for extracting the electron DOS from resistivity data have been explored, with the aim of studying whether the strain sensitivity is correlated with the electronic modifications\cite{mentink12}. However, according to our results, any model aiming to describe the superconducting properties of A15 compounds from microscopic theories should take both contributions into account. \subsection{Comparison with experimental results} Starting from the pioneering work of Ekin \cite{ekin80}, in many models available in the literature \cite{ekin80, taylor05, oh06, godeke06, arbelaez09, markiewicz06} the strain sensitivity of Nb$_3$Sn is parameterized using the strain function, $s(\epsilon)$, defined as \cite{ekin80, welch80}: \begin{equation}\label{eq:strainfunction} s(\epsilon) \doteq \frac{B_{c2}(\epsilon, T = 0)}{B_{c2}(0, T = 0)} = \left[ \frac{T_c(\epsilon)}{T_c(0)}\right]^w \end{equation} where $B_{c2}(\epsilon, T)$ represents the superconducting upper critical field, depending on strain and temperature, and where $w \approx 3$ for A15 materials \cite{ekin80}. The strain function calculated using Eq. \ref{eq:strainfunction} is plotted in Fig. \ref{fig:6} (dashed line) and compared with the curve extracted from experimental $I_c$ vs. $\epsilon$ measurements \cite{demarzi12} on a technological, multifilamentary wire (line and markers in the figure). Although both curves show the expected bell shape, the experimental and theoretical curves are quite different. In particular, the strain sensitivity of the composite wire is enhanced when compared to the theoretical prediction for a bulk, perfectly binary and stoichiometric Nb$_3$Sn system, and this remains valid even if the value of $w$ is increased (within physically accepted ranges). This is not surprising, and the difference might have either an intrinsic or an extrinsic origin.
Among the extrinsic phenomena inducing performance degradation with strain in technological wires\cite{ekin77}, filament breakage, reduction in the wire's cross-sectional area, stress-induced martensite formation\cite{demarzi12}, and microcrack/defect formation in the superconductor might play a role. The reversibility of the experimental data plotted in Fig. \ref{fig:6} implies that some of these phenomena can be ruled out, \emph{e.g.} filament breakage and microcrack and extended-defect formation. All the others mentioned above, being reversible, can in principle contribute to the strain sensitivity in Nb$_3$Sn, but their effect on $I_c$ is small and cannot account for the observed behavior \cite{ekin77}. As far as intrinsic mechanisms are concerned, our calculations unambiguously show that the strain sensitivity in Nb$_3$Sn is associated with lattice and electronic deformations, which result in shifts of the Nb$_3$Sn critical surface. It should be underlined that in this first-principles study we have neglected any sublattice displacements of the Nb atoms \cite{sadigh98} leading to Nb-chain dimerization. This can also have an effect on the theoretical $s(\epsilon)$. In our calculations, the Nb atoms are frozen in their ideal positions, and the cubic cell is stable with respect to a tetragonal strain\cite{sadigh98}. However, if the Nb atoms are allowed to relax, a dimerization of the chains occurs, and the undistorted cubic structure becomes unstable with respect to a spontaneous sublattice distortion. Such distortions are small, though (the ratio between the major and minor axes of the tetragonal cell spans from 0.9938 to 0.9964\cite{sadigh98}), and therefore their effect on $s(\epsilon)$ is expected to be negligible. Also, our system is perfectly binary, whereas in technological wires Ti or Ta is inserted as a ternary element, with the aim of improving the pinning efficiency and therefore $J_c(B)$. Moreover, due to compositional inhomogeneities, a distribution of the superconducting properties is observed ($T_c$ depends on the atomic Sn content, with a maximum at a Sn content of 25\%); see for example Refs. \onlinecite{senatore07} and \onlinecite{godeke06bis} and references therein. Considering all these aspects, it is clear that the polycrystalline Nb$_3$Sn formed by a reaction heat treatment inside a composite, multifilamentary system has a microscopic structure that inevitably deviates from that of an ideal Nb$_3$Sn lattice cell. Therefore, differences between the theoretical computations of $s(\epsilon)$ and the experimental degradation in wires can be expected, and these might suggest paths towards the improvement of the strain tolerance in technological wires. For example, at 0.5\% compressive strain the strain sensitivity could in principle be reduced to $\sim$ 0.07\%, thus helping in the design of those devices where the levels of strain experienced by Nb$_3$Sn are sufficiently large, as is the case for high-field superconducting magnets. \begin{acknowledgments} We thank Federico Quagliata and Pietro D'Angelo for setting up the environment and the CRESCO parallel cluster at ENEA C.R. Frascati. M.B.N. wishes to acknowledge partial support from the Office of Basic Energy Sciences, U.S. Department of Energy at Oak Ridge National Laboratory under Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC. \end{acknowledgments}
{ "timestamp": "2012-10-16T02:01:52", "yymm": "1210", "arxiv_id": "1210.3705", "language": "en", "url": "https://arxiv.org/abs/1210.3705" }
\section{Introduction} \label{sec:introduction} Within the context of the currently favored hierarchical model for structure formation, massive clusters of galaxies are, as a population, the most recently formed gravitationally bound structures in the cosmos. Consequently, characteristics such as the shape and evolutionary behavior of their mass function can, in principle, be exploited as precision probes of cosmology. The resulting estimates of parameters---such as the amplitude of the primordial fluctuations and the density and equation of state of the mysterious dark energy---can certainly complement and even compete with determinations based on studies of the cosmic microwave background \citep[for a review see ][]{Allen11}. The efficacy of clusters as cosmological probes depends on three factors: (1) the ability to compile a large, well-understood catalog of clusters; (2) the identification of an easily determined survey observable (or combinations thereof) --- hereafter referred to as a ``mass proxy'' --- that can offer an accurate measure of cluster masses; and (3) the existence of a well-calibrated relationship between the mass proxy and the actual mass of the cluster. Of these, we shall focus our attention on the latter two since, at present, the effective use of clusters as cosmological probes is primarily limited by systematic errors in the estimates of the true mass of the cluster \citep{Henry09,Vikhlinin09,Mantz10}. One of the first---and still among the most commonly used---mass proxies is the ``hydrostatic mass estimate'', derived from X-ray observations under the assumption that the clusters are spherically symmetric and that the hot, diffuse, X-ray emitting gas in galaxy clusters is in thermal pressure-supported hydrostatic equilibrium (HSE). Over the years, mismatches between hydrostatic mass estimates and mass estimates derived by alternate means have led a number of researchers to question the use of this proxy \citep[e.g.][]{MiraldaEscude95,Fischer97,Girardi97b,Ota04}. Recent studies suggest that the HSE masses of relaxed clusters are subject to a systematic 10\%-20\% underestimate, which grows to 30\% or more for unrelaxed systems \citep{Arnaud07,Mahdavi08,Lau09}. Numerical simulation studies suggest that this bias is due to incomplete thermalization of the hot diffuse intracluster medium (ICM) \citep{Evrard90,Rasia06,Nagai07,Shaw10,Rasia12}. Concerns with the HSE mass estimate have renewed interest in identifying better-behaved mass proxies that can give unbiased estimates of the cluster mass. One example of such an X-ray mass proxy is $Y_X$, the product of the gas mass $M_g$ and the ICM temperature $T_X$ within a given aperture \citep{Kravtsov06}. In numerical simulation studies, this pressure-like quantity has been shown to be a much better mass proxy, and it has been successfully deployed in measurements of cosmological parameters, including the dark energy equation of state \citep{Vikhlinin09a,Vikhlinin09}. More recently, the gas mass $M_g$ has also emerged as a mass proxy with predictive power similar to that of $Y_X$ \citep{Okabe10,Mantz11}. Success in tests involving simulated clusters is necessary but far from sufficient. At present, numerically simulated clusters capture only a fraction of the physical processes that affect the intracluster medium in real clusters. An alternative way of independently testing the validity of the individual mass proxies is via multiwavelength observations.
Specifically, comparisons of X-ray proxies and weak gravitational lensing masses ($M_L$) are particularly interesting, given the fact that gravitational lensing provides a \emph{total} mass estimate that neither depends on baryonic physics nor requires any strong assumptions about the equilibrium state of the gas and dark matter, and which can be determined over a wide range of spatial scales. However, lensing measures the projected (2D) mass, and converting this to an unprojected (3D) mass has the effect of adding an amount of scatter that is related to the geometry of the mass distribution, its orientation along the line of sight, and the projection of extra-cluster mass along the line of sight \citep{Rasia12}. In extreme cases, these effects can result in an under- or over-estimate of the cluster mass of as much as a factor of 2 \citep{Feroz12}, depending on the specific technique used. In this work, we employ a technique that achieves a low systematic weak lensing mass bias of 3-4\%, thanks to the procedure described in detail in \cite{Hoekstra12}. This bias level is lower than the 5-10\% that is typical of numerical simulations, which also show a typical scatter of $20\%-30\%$ {\protect\citep{Becker11,Bahe12,Rasia12,High12}}; \hlb{the actual amount of bias depends on the range of physical radii used in the weak lensing analysis.} At any rate, weak lensing masses are, at present, the best measures of cluster mass and are very well suited for use in calibrating the different mass proxies and identifying the best one of the lot. Moreover, the study of the relationship between the weak lensing mass estimate and an observable mass proxy can potentially yield important insights into the physics at play within cluster environments. These are the goals of the present paper. To facilitate our study, we have assembled a sample of galaxy clusters named the Canadian Cluster Comparison Project\footnote{Not to be confused with the Chandra Cluster Cosmology Project \protect\citep{Vikhlinin09}, which forms an identical acronym.}. We describe this sample in \S\ref{sec:data}. In the present study, we restrict ourselves to studying the relationships between weak lensing mass determinations and the mass proxies derived jointly from \emph{Chandra} and \emph{XMM-Newton} observations. We use the Joint Analysis of Cluster Observations (JACO) code base \citep{Mahdavi07} to derive the mass proxies of interest from the X-ray data. JACO makes maximal use of the available data while incorporating detailed corrections for instrumental effects (for example, we model the spatial and energy variations of the PSF for both Chandra and XMM-Newton) to yield self-consistent radial profiles for both the dark and the baryonic components. Further details are given in \S\ref{sec:mass}. In \S\ref{sec:data} we summarize our data reduction procedure; in \S\ref{sec:mass} we describe our mass modeling technique. Our quantitative measures of substructure, the luminosity-temperature relation, the lensing mass-observable relations, and deviations from hydrostatic equilibrium are discussed in \S\ref{sec:struct}, \S\ref{sec:lt}, \S\ref{sec:proxy}, and \S\ref{sec:hydro}, respectively. We conclude in \S\ref{sec:conclusion}. Throughout the paper we take $H_0 = 70$ km/s/Mpc, $\Omega_M = 0.3$, and $\Omega_\Lambda = 0.7$.
\section{Sample and Data Reduction} \label{sec:data} \subsection{Sample Characterization} The Canadian Cluster Comparison Project (CCCP) was established primarily to study the different baryonic tracers of cluster mass and to explore the insights into the thermal properties of the hot diffuse gas and the dynamical states of the clusters that can be gained from cluster-to-cluster variations in these relationships. For this purpose, we assembled a sample of 50 clusters of galaxies in the redshift range $0.15 < z < 0.55$. Since we wanted to carry out a weak lensing analysis, we required that the clusters be observable from the Canada-France-Hawaii Telescope (CFHT), so we could take advantage of the excellent capabilities of this facility. The latter constraint restricts our cluster sample to systems at $-15^{\circ}\;<\;{\rm declination}\;<65^{\circ}$. We also required our clusters to have an ASCA temperature $k_BT_X > 3$ keV. To establish the cluster temperatures, we primarily relied on the systematically reduced cluster catalog of \cite{Horner01} based on ASCA archival data, although in a few instances we used temperatures from other (published) sources. As a starting point, we scoured the CFHT archives for clusters with high quality optical data suitable for weak lensing analysis, including observations in two bands. We identified \hlb{20} suitable clusters observed with the CFH12k camera and with B and R band data meeting our criteria. Nearly half of these clusters were originally observed as part of the Canadian Network for Observational Cosmology (CNOC1) Survey \citep{Yee96,Carlberg96} and comprise the brightest clusters in the {\it Einstein Observatory} Extended Medium Sensitivity Survey (EMSS) \citep{Gioia90}. Since the EMSS sample is known to have a mild bias against X-ray luminous clusters with pronounced substructure \citep{Pesce90,Donahue92,Ebeling00}, and we were specifically interested in putting together a representative sample of clusters that encompassed the spectrum of observed variations in thermal and dynamical states, we randomly selected \hlb{30} additional clusters from the Horner sample that met our temperature, declination, and redshift constraints and, additionally, guaranteed that our final sample fully sampled the scatter in the $L_X$ vs. $T_X$ plane. Of these systems, those without deep, high quality optical data were observed with the CFHT MegaCam wide-field imager, using the $g^\prime$ and $r^\prime$ optical filter sets. \hlb{The resulting weak lensing masses for this sample are discussed in \protect\cite{Hoekstra12}}. Our final sample comprises the 50 clusters listed in Table 1. All except 3 clusters have been observed by the {\it Chandra Observatory}. These three, plus 21 others, have also been observed by {\it XMM-Newton}. Subsets of the CCCP cluster sample have been used in several prior studies \citep{Hoekstra07,Mahdavi08,Bildfell08,Bildfell12}. \hly{The CCCP sample has served as the source for studies of individual clusters that are interesting in their own right, such as Abell 520 and IRAS 09104+4109 \protect\citep{Mahdavi07,Jee12,OSullivan12}.} In the left panel of Figure 1, we compare the distribution of the CCCP clusters in the $L_X$---$T_X$ plane to those of two better characterized samples of galaxy clusters: MACS \citep{Ebeling10} and HIFLUGCS \citep{Reiprich02}, both of which employ well-defined flux-based selection criteria based on the ROSAT All-Sky Survey.
HIFLUGCS is on average a lower redshift sample than CCCP, and MACS is on average at a higher redshift. The samples have comparable scatter, suggesting that our CCCP sample is not significantly more biased than HIFLUGCS or MACS, which have better understood selection functions. In the right panel of Figure~\ref{fig:sample}, we plot the distribution of the orthogonal scatter about the mean $L_X$--$T_X$ relation of all three samples combined. A KS test indicates that the three distributions are statistically indistinguishable. This confirms that while the CCCP sample may not be a complete sample, it is representative in that it properly captures the scatter in the $L_X$--$T_X$ relation and, to the extent that this scatter has physical origins, the range of cluster thermal and dynamical states. \begin{figure*} \resizebox{6.1in}{!}{\includegraphics{macs-cccp.pdf}} \caption{Comparison of the luminosity-temperature relationship for the JACO/CCCP sample (solid dots), HIFLUGCS (open dots), and MACS (stars).} \label{fig:sample} \end{figure*} \begin{figure*} \begin{tabular}{cc} \resizebox{3in}{!}{\includegraphics{chandra-xmm-r2500-orig.pdf}} & \resizebox{3in}{!}{\includegraphics{chandra-xmm-bias-r2500-orig.pdf}} \\ \resizebox{3in}{!}{\includegraphics{t-uncorr.pdf}} & \resizebox{3in}{!}{\includegraphics{t-corr.pdf}}\\ \resizebox{3in}{!}{\includegraphics{l-uncorr.pdf}}& \resizebox{3in}{!}{\includegraphics{l-corr.pdf}} \end{tabular} \caption{Comparison of XMM-Newton and Chandra X-ray masses (\emph{top}), temperatures (\emph{middle}), and bolometric X-ray luminosities (\emph{bottom}) within the lensing $r_{2500}^{WL}$. The left-hand column shows the unmodified Chandra values, while the right-hand column shows the result of scaling the Chandra effective area by a power law in energy of slope $\zeta=0.07$, which brings the Chandra and XMM-Newton observables into better agreement. The dashed line shows equality in all cases. \label{fig:crosscal}} \end{figure*} \subsection{Choice of density contrast} For most of what follows, we study masses, temperatures, substructure measures, and other thermodynamic quantities integrated within a specific spherical radius. The choice of this radius is not obvious: using fixed physical radii has the advantage of straightforwardness, but the disadvantage that we would be probing characteristically different regions of clusters as a function of mass. \hlb{Using fixed overdensity radii $r_\Delta$ (defined such that the mean density within $r_\Delta$ is $\Delta$ times the critical density of the universe at the redshift of the cluster)} is a better choice, but even here, the value of $\Delta$ to use is not quite obvious. At the redshifts of our sample, X-ray data quality tends to be best around $r_{2500}$, but most of the literature lists properties at $r_{500}$. Even after a choice of $\Delta$, one must still decide whether to use the lensing or the X-ray value, since the two are not guaranteed to agree. We choose to standardize the \hlb{bulk of our discussion} on the weak-lensing overdensity radius $r_{500}^{WL}$, because lensing masses are likely to be \hly{less biased for non-relaxed clusters \protect\citep{Meneghetti10}}. \hly{For the most part, our results do not significantly change if we switch to the X-ray $r_{500}$; one exception is the mass-temperature relation below, which tightens significantly with the switch.}
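To make the definition of $r_\Delta$ concrete, the following minimal Python sketch (ours, and not part of the analysis pipeline; the function names are illustrative) converts a mass and redshift into an overdensity radius under the cosmology adopted in this paper:
\begin{verbatim}
import numpy as np

# Cosmology adopted throughout the paper
H0, Om, OL = 70.0, 0.3, 0.7           # km/s/Mpc
G = 4.301e-9                          # Mpc (km/s)^2 / Msun

def rho_crit(z):
    """Critical density at redshift z, in Msun / Mpc^3."""
    Ez2 = Om * (1.0 + z)**3 + OL
    return 3.0 * H0**2 * Ez2 / (8.0 * np.pi * G)

def r_delta(M, z, Delta=500):
    """Radius (Mpc) within which the mean density is Delta times
    the critical density at the cluster redshift; M in Msun."""
    return (3.0 * M / (4.0 * np.pi * Delta * rho_crit(z)))**(1.0 / 3.0)

# A 5e14 Msun cluster at z = 0.2 has r_500 of roughly 1.4 Mpc:
print(r_delta(5e14, 0.2))
\end{verbatim}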
In \S\ref{sec:mpc}, we also consider scaling relations with observables measured within fixed physical radii, because these are more likely to be useful for calibrating large data sets. \subsection{Weak Lensing Overview} The clusters in our sample were drawn from \cite{Hoekstra12}, which contains a weak lensing analysis of CFH12k and MegaCam data from the Canada-France-Hawaii Telescope. We refer interested readers to \cite{Hoekstra12} for details of the data reduction and weak lensing analysis procedure. We base our lensing masses on the aperture mass estimates \citep[for details see the discussion in \S3.5 in][]{Hoekstra07}. This approach has the advantage that it is practically model independent. Additionally, as the mass estimate relies only on shear measurements at large radii, contamination by cluster members is minimal. \cite{Hoekstra07} and \cite{Hoekstra12} removed galaxies that lie on the cluster red sequence and boosted the signal based on excess number counts of galaxies. As an extreme scenario, we omitted those corrections and found that the lensing masses change by only a few percent; for details see \cite{Hoekstra12}. Hence our masses are robust against contamination by cluster members at the percent level. The weak lensing signal, however, only provides a direct estimate of the {\it projected} mass. To calculate 3D masses from the model-independent 2D aperture masses, we project and renormalize a density profile of the form $\rho_\mr{tot}(r) \propto r^{-1} (r_{200}+c r)^{-2}$ \citep{NFW}. The relationship between the concentration $c$ and the virial mass is fixed at $c \propto M_\mr{200}^{-0.14}/(1+z)$ from numerical simulations \citep{Duffy08}. Hence the deprojection itself, though well motivated by numerical simulations, is model dependent. However, the model dependence is weak---\hlb{20\% variations in the normalization of the mass-concentration relationship yield $\approx 5\%$ variations in the measured masses} \citep[\S4.3]{Hoekstra12}. We also note that the lensing analysis differs from the X-ray analysis in that in the latter, no mass-concentration relationship is assumed (i.e., the concentrations and masses are allowed to vary independently). We plan to address the effects of relaxing the lensing mass-concentration relation in a future paper.
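As a schematic of this deprojection step (a sketch of ours, not the \cite{Hoekstra12} implementation; units and example values are illustrative), one can compare the cylindrical and spherical enclosed masses of the assumed profile, whose ratio rescales the measured 2D aperture mass:
\begin{verbatim}
import numpy as np
from scipy import integrate

def rho_nfw(r, rs):
    """NFW shape with arbitrary normalization; rs = r200 / c."""
    return 1.0 / ((r / rs) * (1.0 + r / rs)**2)

def m_3d(r, rs):
    """Spherical enclosed mass (analytic NFW integral), same units."""
    x = r / rs
    return 4.0 * np.pi * rs**3 * (np.log(1.0 + x) - x / (1.0 + x))

def m_2d(R, rs, zmax=100.0):
    """Mass in a cylinder of projected radius R, integrating the
    profile along the line of sight out to +/- zmax."""
    sigma = lambda Rp: 2.0 * integrate.quad(
        lambda z: rho_nfw(np.hypot(Rp, z), rs), 0.0, zmax)[0]
    return integrate.quad(lambda Rp: 2.0 * np.pi * Rp * sigma(Rp),
                          0.0, R)[0]

# For rs = 0.4 Mpc, the cylinder at R = 1.2 Mpc holds ~30% more mass
# than the sphere of the same radius; the measured 2D aperture mass
# is rescaled by the model ratio m_3d / m_2d.
print(m_2d(1.2, 0.4) / m_3d(1.2, 0.4))
\end{verbatim}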
\subsection{X-ray Data Reduction} We refer the reader to \cite{Mahdavi07} for details of the X-ray data reduction procedure, which we briefly summarize and update here. We use both Chandra CALDB 4.2.2 (April 2010) and CALDB 4.4.7 (December 2011). We also check our results against the latest CALDB (4.5.1) at the time of writing. For XMM-Newton we use calibration files current as of January 2012; we also checked calibration files dating as far back as April 2010. We detected no statistically significant changes in the calibration files over this period for either Chandra or XMM-Newton, except as detailed in \S\ref{sec:crosscal} below. We follow a standard data reduction procedure. We use the software packages CIAO (Chandra) and SAS (XMM-Newton) to process raw event files using the recommended settings for each observation mode and detector temperature. Where possible, we make event grade selections that maximize the data quality for extended sources (including the VFAINT mode optimizations for Chandra). We use the wavelet detection algorithm WAVDETECT on exposure-corrected images to identify contaminating sources; we mask out point and extended sources using the detected wavelet radius. Each mask was checked by eye for missing extended sources or underestimated masking radii. The bulk of the X-ray background consists of a particle component, which bypasses the mirror assembly, plus an astrophysical component, which is folded through the mirror response. To remove the particle background, we match the 8-12 keV photon count rate from the outer regions of each detector to the recommended blank sky observations for each detector, and then subtract the renormalized blank-sky spectra. What remains is the source plus \hly{an} over- or under-subtracted astrophysical background, plus in some cases a residual particle background. All these residual backgrounds are modeled jointly with the spatially resolved ICM model spectra, and their parameters are marginalized over for the final results. To extract spatially resolved spectra, we find the surface brightness peak in the Chandra image (if available) or the XMM-Newton image (if Chandra is not available). We then draw circular annuli that contain a minimum of 1500 background-subtracted photon counts; where both Chandra and XMM-Newton data are available, the annuli are taken to be exactly the same for both sets of observations, with the minimum count requirement imposed on the Chandra data (for photons within 8$\arcmin$) or the XMM-Newton data (for photons outside 8$\arcmin$). We then compute appropriately weighted ancillary response files (ARF) and redistribution matrix files (RMF) for each spectrum, and subtract appropriately scaled particle background spectra. We emphasize that all spectra for each cluster undergo a simultaneous joint fit using a forward-convolved spectral model of the entire cluster, so that the choice of 1500 background-subtracted counts is not a sensitivity-limiting factor. That is to say, in no case is a single measurement derived from a single spectrum of 1500 counts; rather, such spectra are fit together in large batches on a cluster-by-cluster basis. The detailed properties of the sample, including global X-ray temperatures, bolometric X-ray luminosities, masses, and substructure measures, are listed in Tables \ref{tbl:sample} and \ref{tbl:data}. \input{table1} \section{Mass Modeling} \label{sec:mass} \subsection{X-ray Mass Modeling} Here we summarize and update the modeling procedure of \cite{Mahdavi07}, in which the cluster is assumed to be spherically symmetric and the gas to be in thermal-pressure-supported hydrostatic equilibrium within the cluster potential. The essence of the technique is to directly compare the observed spatially resolved spectra with model predictions. For a spectrum observed in an annulus with inner and outer radii $R_1$ and $R_2$, the model is \begin{equation} L_\nu = \int_{R_1}^{R_2} 2 \pi R \, dR \int_{R}^{r_\mr{max}} n_e n_H \Lambda_\nu[T(r),Z(r)] \frac{2 r \, dr}{\sqrt{r^2-R^2}} \end{equation} where $r$ denotes the unprojected radius, $R$ the projected radius, $r_\mr{max}$ the termination radius of the X-ray gas (taken to be $r_{100}$ in this paper), $\Lambda_\nu$ the frequency-dependent cooling function, which depends on the temperature $T$ and the metallicity $Z$, and $n_e$ and $n_H$ the electron and hydrogen number densities, respectively. One feature of the above method is that the unprojected temperature profile is calculated self-consistently from the assumed gas and dark matter density profiles under hydrostatic equilibrium.
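As an illustration of the forward projection in the equation above (a minimal numerical sketch, not the JACO implementation; the profile and cooling-function callables are placeholders):
\begin{verbatim}
import numpy as np
from scipy import integrate

def model_flux(R1, R2, ne, nH, T, Z, Lam, nu, rmax):
    """Project the 3D emissivity ne*nH*Lambda_nu[T, Z] through the
    annulus [R1, R2]; ne, nH, T and Z are callables of the 3D radius
    r, and Lam(nu, T, Z) is a (toy) cooling function."""
    def emissivity(r):
        return ne(r) * nH(r) * Lam(nu, T(r), Z(r))
    def line_of_sight(R):
        # substituting r = sqrt(R^2 + u^2) turns 2 r dr / sqrt(r^2 - R^2)
        # into 2 du, removing the integrable singularity at r = R
        f = lambda u: 2.0 * emissivity(np.hypot(R, u))
        return integrate.quad(f, 0.0, np.sqrt(rmax**2 - R**2))[0]
    return integrate.quad(lambda R: 2.0 * np.pi * R * line_of_sight(R),
                          R1, R2)[0]
\end{verbatim}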
As a result of this self-consistent construction, we never have to specify or fit a temperature profile; temperature is merely an intermediate ``dummy'' quantity connecting the gas and dark matter mass distributions to the X-ray spectra. This avoids the subjective weighting involved in the fitting of 2D projected temperature profiles \citep{Mazzotta04,Rasia05,Vikhlinin06b}, which are more difficult to correct for the effects of PSF distortion. \subsection{Parameters of the Hydrostatic Model} The hydrostatic model assumes a flexible spherical electron density distribution \begin{eqnarray} n_e(r) & = & n_{e_0} \left(\frac{r}{r_{x_0}}\right)^{-\alpha} B(r,r_{x_0},\beta_0)+\\ \nonumber & & n_{e_1} B(r,r_{x_1},\beta_1)+ n_{e_2} B(r,r_{x_2},\beta_2) \end{eqnarray} where each component is the familiar ``beta'' model \begin{equation} B(r,r_{x_i},\beta_i) = \left(1+\frac{r^2}{r_{x_i}^2} \right)^{-\frac{3 \beta_i}{2}}. \nonumber \end{equation} In other words, the gas density profile consists of a fully general triple ``beta'' model, in which the first component is further multiplied by a power law. The metallicity distribution is modeled as \citep[e.g.][]{Pizzolato03} \begin{equation} \frac{Z}{Z_\odot} = Z_0 \left(1 + \frac{r^2}{r_Z^2}\right)^{-3 \beta_Z} \end{equation} with $r_Z$, $\beta_Z$, and $Z_0$ free parameters. Finally, the total mass distribution (baryons and dark matter) is modeled as a \cite{NFW} profile: \begin{equation} \rho_\mr{tot} = \frac{M_0}{r (c r + r_\Delta)^2} \end{equation} where $M_0$ is the normalization, $c$ is the halo concentration, and $r_\Delta$ is the overdensity radius (see above). These are also free parameters, except that rather than fitting $M_0$, we fit $M_\Delta$---the mass within $r_\Delta$---as the normalization constant (there is a one-to-one relationship between $M_0$ and $M_\Delta$). In general, some of the above parameters are better determined than others. For example, the inner slope of the gas density distribution, $\alpha$, is always well measured (with a typical uncertainty of $\pm 0.1$) and follows the well-known trend \citep[e.g.][]{Sanderson10} that low central entropy clusters have steeper inner profiles, $\alpha \approx 0.5$, whereas high entropy clusters have flatter profiles, $\alpha \approx 0$. The central metallicities are similarly well determined. On the other hand, quantities such as the slopes and core radii of the multiple $\beta$-model components---such as $\beta_1$ and $\beta_2$, or $r_Z$ and $\beta_Z$---frequently exhibit significant degeneracies with each other. In all cases, such degeneracies are properly marginalized over using the Hrothgar Markov chain Monte Carlo procedure described in \cite{Mahdavi07}, and the one-dimensional error bars in Table \ref{tbl:data} properly reflect all degeneracies among the parameters of this high-dimensional model.
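For concreteness, a direct transcription of the density parameterization above (a sketch; the argument names are ours):
\begin{verbatim}
import numpy as np

def beta_term(r, rx, beta):
    """A single beta-model component."""
    return (1.0 + (r / rx)**2)**(-1.5 * beta)

def n_e(r, ne0, ne1, ne2, rx0, rx1, rx2, b0, b1, b2, alpha):
    """Triple beta-model electron density; the first component
    carries the additional inner power law of slope -alpha."""
    return (ne0 * (r / rx0)**(-alpha) * beta_term(r, rx0, b0)
            + ne1 * beta_term(r, rx1, b1)
            + ne2 * beta_term(r, rx2, b2))
\end{verbatim}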
\input{table2} \subsection{Joint Calibration of Chandra and XMM-Newton Masses} \label{sec:crosscal} Where available, we use both Chandra and XMM-Newton data for a cluster. This has several advantages: in the inner regions, Chandra resolves the cluster cores well, while XMM-Newton's wider field of view yields better coverage of the outer regions of the cluster. The simultaneous coverage of intermediate regions helps constrain the residual backgrounds following blank sky subtraction. When combining Chandra and XMM-Newton data, cross-calibration is a significant issue. In general, there are slight differences among the responses of the Chandra ACIS and the XMM-Newton pn, MOS1, and MOS2 detectors. Even after over a decade in flight, the source of these differences has not been conclusively identified. Typically, comparisons show that Chandra temperatures are $5-15\%$ higher \citep{Snowden08,Reese10}. The most recent calibration tests \citep{Tsujimoto11} use the G21.5-0.9 pulsar (which is fainter than the usual calibration source, the Crab nebula, and hence not subject to detector pileup). \cite{Tsujimoto11} find that the XMM-Newton pn yields a 15\% lower flux in the 2.0-8.0 keV energy band than the Chandra ACIS-S, confirming the similar, earlier finding of \cite{Nevalainen10}. Lower hard band flux naturally leads to lower X-ray temperatures when 0.5-2.0 keV photons are also included. This primarily affects masses for which spectral line emission is not dominant (i.e., hot, $kT > 4$ keV clusters). \hly{It is at this point unknown where the source of the disagreement lies and which instrument is better calibrated}. \begin{figure*} \begin{center} \resizebox{4.3in}{!}{\includegraphics{bcg-entropy-2.pdf}} \end{center} \caption{Bimodality in the joint distribution of BCG offset and central entropy; contours show lines of constant probability density after the points have been smoothed with a 0.25 dex Gaussian. The top and right axes show the 1D probability densities for central entropy and BCG offset. Blue triangles show cool-core clusters and red triangles show non-cool-core clusters. The horizontal thin line shows our chosen division between cool-core and non-cool-core systems, while the vertical line shows our chosen division between low BCG offset and high BCG offset systems. \label{fig:k0bcg}} \end{figure*} \begin{figure*} \begin{tabular}{cc} \vspace*{-0.2in}\resizebox{3.5in}{!}{\includegraphics{p3p0-bcg.pdf}} & \resizebox{3.5in}{!}{\includegraphics{bcg-wx.pdf}} \\ \vspace*{-0.2in}\resizebox{3.5in}{!}{\includegraphics{p3p0-entropy.pdf}} & \resizebox{3.5in}{!}{\includegraphics{entropy-wx.pdf}} \\ \end{tabular} \caption{Correlations of four different substructure measures (central entropy $K_0$, BCG offset $D_\mr{BCG}$, X-ray centroid variance $w_X$, and the $P3/P0$ power ratio) with each other. Blue triangles show cool-core clusters and red triangles show non-cool-core clusters; blue circles show low-BCG-offset systems and red circles show high-BCG-offset systems. \label{fig:substruct}} \end{figure*} Figure \ref{fig:crosscal} shows the X-ray masses measured within the lensing $r_{2500}^{WL}$ for the 19 clusters in our sample with both Chandra and XMM-Newton data. Shown are the results for CALDB 4.2.2 (April 2010). We also checked CALDB 4.4.7 (December 2011) and CALDB 4.5.1 (June 2012). The calibration for our sample changed little during this period, and in all three cases we find that Chandra masses are higher than XMM-Newton masses by roughly 15\%. All observations were recorded prior to 2010, and taken as a whole, the change in the Chandra masses of these systems is not statistically significant between CALDB 4.2.2 and 4.4.7. We adopt the 2010 CALDB for the remainder of this paper, stressing that any changes to our results would be well within the statistical errors presented were we to switch to a different calibration release. To be able to combine Chandra and XMM-Newton data, one must first ensure that they are consistent.
We find that the following simple cross-calibration prescription is able to bring the data into consistency: \begin{equation} A^\mr{corrected}_\mr{CXO}(E) = A_\mr{CXO}(E) \left(\frac{E}{\mr{keV}} \right)^\zeta \end{equation} where $\zeta =0$ gives the unmodified CALDB area, and $\zeta > 0$ has the effect of down-weighting the high energy effective area of Chandra. We find that setting $\zeta = 0.07$ brings the Chandra and XMM-Newton masses into agreement, as shown in Figure \ref{fig:crosscal}. In either case, the intrinsic scatter between the Chandra and XMM-Newton mass measurements at these fixed radii is certainly less than 10\%, though inconsistent with zero at the 2$\sigma$ level. Figure \ref{fig:crosscal} also shows that the integrated X-ray temperatures and luminosities within the lensing $r_{2500}^{WL}$ are likewise improved by our suggested calibration. The discrepancy between the unmodified Chandra X-ray temperatures and the XMM-Newton temperatures is roughly the same as the discrepancy in the hydrostatic masses. The bolometric X-ray luminosities are also in better agreement as a result of the effective area re-calibration, though in this case the original discrepancy is less severe than for the spectroscopic temperature. We chose to modify the Chandra effective area, and not the XMM-Newton effective area, because XMM-Newton has exhibited the least variation over the years, whereas Chandra has historically enacted larger, 10-15\% changes in its effective area calibration. \hly{We note that had we modified the XMM-Newton effective area to match that of Chandra, then we would have found in what follows that clusters no longer exhibit self-similar behavior: (a) those with obvious substructure would be the ones whose masses calculated assuming hydrostatic equilibrium agree with their weak lensing masses,} \hlb{and (b) clusters with cool cores would have hydrostatic masses greater than their weak lensing masses.} This uncertainty in the telescopes' effective areas must be viewed as a fundamental systematic limitation of X-ray astronomy, at least as related to cluster science. \subsection{Online Data and Regression Tool} All data and analysis software used for this paper are available online at \url{http://sfstar.sfsu.edu/cccp}. Fits of scaling relations (i.e., the modeling of linear or power law relationships among measured quantities) are complicated by the fact that error in both coordinates makes ordinary $\chi^2$ analysis invalid. A detailed treatise on recent developments in the theory of modeling 2D data with errors in both coordinates appears in \cite{Hogg10}. These techniques allow the simultaneous estimation of the slope, intercept, and intrinsic scatter in such relations. We implement the methods of \cite{Hogg10} at the data website for this article.
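A stripped-down version of such a fit (a sketch, not the full \cite{Hogg10} machinery implemented at the website; here the intrinsic scatter is carried in the $y$ direction only, and $x$, $y$ are logarithmic quantities) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def negloglike(p, x, y, sx, sy):
    """-ln L for y = m x + b with Gaussian errors on both coordinates
    and lognormal intrinsic scatter (x, y are log10 quantities)."""
    m, b, lnV = p
    V = np.exp(lnV)                   # intrinsic variance, kept positive
    s2 = sy**2 + (m * sx)**2 + V      # total variance projected onto y
    return 0.5 * np.sum((y - m * x - b)**2 / s2 + np.log(s2))

def fit_relation(x, y, sx, sy):
    """Return slope, intercept and intrinsic scatter (dex)."""
    res = minimize(negloglike, x0=[1.0, 0.0, np.log(0.01)],
                   args=(x, y, sx, sy), method='Nelder-Mead')
    m, b, lnV = res.x
    return m, b, np.sqrt(np.exp(lnV))
\end{verbatim}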
\section{Measures of non-relaxed status} \label{sec:struct} The gas in all clusters of galaxies exhibits some degree of deviation from an idealized smooth, triaxial distribution. Such deviation can take the form of subclumping, asymmetry, or both. Its presence gives some clue as to the nature of the cluster's evolutionary history; for example, asymmetry could indicate either the beginning or the end of a merger event, while subclumps could be either recently accreted small groups of galaxies or surviving cold cores from recent mergers. Despite this ambiguity, objective measures of substructure are helpful in arriving at quantitative estimates of the departure from equilibrium. To begin, we employ two common and well-tested measures of substructure: power ratios and centroid shift variance. Power ratios are Fourier-space estimators of fluctuations in the overall cluster surface brightness distribution, while the centroid shift is a measure of the variance of the distance between the X-ray surface brightness peak (which is always well defined) and the centroid (which in a non-relaxed cluster often varies significantly as a function of the isophote used for its estimation). We refer the reader to \cite{Buote95,Poole06,Jeltema05,Jeltema08} and \cite{Boehringer10} for details on the calculation of these estimators. As further tracers of the relaxed or non-relaxed state of a system, we also consider two somewhat more straightforward measures: the central entropy and the X-ray to optical center offset. Low central entropies indicate the presence of a cool core, which tends to be associated (non-exclusively) with relaxed clusters. \hlo{We define the central entropy as} \begin{equation} K_0 \equiv K(20 \mr{\,kpc}), \end{equation} \hlb{that is, the deprojected entropy profile evaluated at a radius of 20 kpc from the cluster center.} Similarly, the distance between the brightest cluster galaxy (BCG) and the X-ray surface brightness peak can be a good predictor of the relaxed state, with large offsets indicating ongoing or residual merger activity \citep{Poole07}. \hlo{We measure this distance via simple astrometry on X-ray and optical images, and call it $D_\mr{BCG}$.} One would expect relaxed halos to be more representative of idealized halo growth models. Hence we expect scaling relations among the various thermodynamic and dark matter parameters to be tighter for clusters selected on the basis of the more well-behaved substructure indicators. We also expect the most powerful substructure measures to be correlated with each other. \subsection{Correlations among measures of substructure} \label{sec:corr} We explore whether our substructure measures show inherent correlations. The presence of such correlations, particularly when involving both X-ray and optical data, can serve as a road map towards our goal of quantifying departures from equilibrium as economically as possible. We use Spearman's rank correlation coefficient, with bootstrap resampling to determine the $1\sigma$ uncertainties. The relationship between central entropy and BCG offset is the most significant correlation in our sample. This also happens to be the most interesting correlation, owing to the relative ease of deriving the central entropy and BCG offset from observables. Figure \ref{fig:k0bcg} shows that the two substructure measures appear to form a two-peaked joint distribution, with low central entropy, low BCG offset clusters in one corner, and high central entropy, high BCG offset clusters in the other. The dividing line is best described by the curve \begin{equation} K_0 = 7 \mr{\,keV\,cm}^2 \left(\frac{D_\mr{BCG}}{\mr{Mpc}}\right)^{-1/2}. \end{equation} The high correlation coefficient between $K_0$ and $D_\mr{BCG}$ appears to be due to this bimodality: \hly{when we calculate the correlation coefficient separately for either cloud, we find that the clouds individually do not contain significant internal correlation.} Though the above formula offers the cleanest separation between the two clouds, most of the separation can be captured by imposing cuts in entropy or, somewhat less cleanly, \hly{in} BCG offset.
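For reference, the cuts and the dividing curve amount to the following sketch (ours; the thresholds are those adopted in the remainder of the paper):
\begin{verbatim}
def classify(K0, D_bcg):
    """K0 in keV cm^2, D_bcg in Mpc (> 0). Returns the labels used
    in the figures plus the side of the dividing curve."""
    core = 'CC' if K0 < 70.0 else 'NCC'
    offset = 'low' if D_bcg < 0.01 else 'high'
    # dividing curve: K0 = 7 keV cm^2 (D_BCG / Mpc)^(-1/2)
    disturbed_cloud = K0 > 7.0 * D_bcg**-0.5
    return core, offset, disturbed_cloud
\end{verbatim}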
For this reason, throughout the rest of the paper we use a labeling system that represents cuts in these two most easily measured substructure estimators. We use blue triangles to indicate $K_0 < 70$ \hly{keV} cm$^2$ (``cool core systems'' or CC), and red triangles to indicate $K_0 > 70$ \hly{keV} cm$^2$ (``non-cool-core systems'' or NCC). \hlo{This nomenclature is based on the fact that 70 keV cm$^{2}$ corresponds to a cooling time of $\approx 1.5$ Gyr; most cool core clusters have central cooling times below this value.} Similarly, we use blue circles to indicate systems with $D_\mr{BCG} < 0.01$ \hlo{Mpc} (``low BCG offset systems'') and red circles to indicate $D_\mr{BCG} > 0.01$ \hlo{Mpc} (``high BCG offset systems''). In Figure \ref{fig:substruct}, we look for inherent correlations among the other indicators of substructure. Strong correlations exist between the BCG offset $D_\mr{BCG}$, the central entropy $K_0$, the X-ray centroid shift $w_X$ at $r_{500}^{WL}$, and the $P3/P0$ power ratio at $r_{2500}^{WL}$ (in measuring the latter two, we cut out the central 0.15 $r_{500}^{WL}$ to avoid dilution of the signal by the cool core). Interestingly, the $P3/P0$ ratio measured at $r_{500}^{WL}$ (instead of $r_{2500}^{WL}$\hly{)} showed much larger scatter (presumably due to noise) and proved much less tightly correlated with the other substructure measures than the $P3/P0$ ratio at $r_{2500}^{WL}$. In particular, it should be noted that \hlb{$P3/P0$ exhibits almost as strong a correlation with BCG offset as does central entropy}, though there is no evidence for bimodality. For non-cool-core clusters, \hlb{$P3/P0$} is \emph{\protect\hlo{significantly} more} correlated with BCG offset than is the central entropy. This is quite a surprising result, since \hlb{$P3/P0$} traces cluster dynamics outside the cool core, whereas the central entropy is more sensitive to the inner parts. The BCG correlation trends are consistent with the well-known tendency of cool cores to occur in smoother (i.e., more relaxed, hence lower $w_X$, lower power ratio) clusters where the BCG sits close to the bottom of the potential well \citep{Bildfell08}. This demonstrates the tight quantitative link between these completely independent X-ray and optical indicators of substructure. \section{The $L_X$-$T_X$ Relation} \label{sec:lt} Similarly to previous studies \citep[e.g.][]{Morandi07,Pratt10,Mittal11}, we find that the \hlb{luminosity-temperature ($L_X$-$T_X$)} relationship exhibits a significant scatter of $\approx 50\%$ when the core of the cluster is included---a scatter which is diminished considerably, to 36\%, when the core is excised. This effect is due to the overall non-self-similarity of cluster cool cores in comparison to the regions outside the cool core \citep[e.g.][]{Vikhlinin06}. When the core is not excised, the cool-core clusters lie significantly above the non-cool-core clusters, an effect first noted by \cite{Fabian94b} and subsequently studied in detail by \cite{McCarthy04} and \cite{Maughan12}. In Figure \ref{fig:lt} and Table \ref{tbl:scaling}, we show that when we include all cluster emission, the residuals of the $L_X$-$T_X$ relation show a strong and significant correlation with both the central entropy of the cluster and the centroid shift $w_X$ (we choose $w_X$ because, of the four measures discussed in \S\ref{sec:corr}, it offers the strongest correlation).
However, when we cut out the central 0.15 $r_{500}^{WL}$, the distinction disappears, and the cool-core and non-cool-core clusters become indistinguishable in terms of entropy as well as $w_X$. This is consistent with the findings of \cite{Maughan12} in the sense that once the cool core is taken out of consideration, residuals in the $L_X$-$T_X$ relation no longer carry information regarding the dynamical state of the cluster. This is an example of ``irreversible scatter''---in other words, outside their cores, the clusters of galaxies in our sample have ``forgotten'' the cause of the intrinsic scatter in the $L_X$-$T_X$ relation. This has implications for scaling relation correction procedures such as that described by \cite{Jeltema08}, where the relationship between the residuals and the substructure measures of simulated clusters is used to produce corrected observables that sit more tightly on the scaling relations. The lack of correlation in our case implies that such procedures will not reduce the scatter in the measured scaling relations (at least for the JACO/CCCP sample). \begin{figure*} \begin{tabular}{cc} \vspace*{-0.2in}\resizebox{3.1in}{!}{\includegraphics{lt-noncut.pdf}} & \resizebox{3.1in}{!}{\includegraphics{lt-noncut-res-wx.pdf}} \\ \vspace*{-0.2in}\resizebox{3.1in}{!}{\includegraphics{lt-cut.pdf}} & \resizebox{3.1in}{!}{\includegraphics{lt-cut-res-wx.pdf}} \\ \end{tabular} \caption{(Top panels) The luminosity-temperature relationship at the lensing $r_{500}^{WL}$ and its residuals compared to the centroid shift variance $w_X$. (Bottom panels) Same as top, except that the inner 0.15 $r_{500}^{WL}$ has been removed; the residuals are then uncorrelated with all four substructure measures. Blue triangles show cool-core clusters and red triangles show non-cool-core clusters. \label{fig:lt}} \end{figure*} \section{Lensing Mass-Observable Relation} \label{sec:proxy} The mass-observable relationship is an important ingredient in the determination of cosmological parameters with clusters of galaxies. Because the mass function is the ultimate connector between the cosmological parameters and the data, finding accurate mass proxies \hly{using multiple methods and wavelength regimes is important}. Comparison of X-ray derived observables with weak gravitational lensing masses, which do not require the assumption of hydrostatic equilibrium, has proved a fruitful path towards this end \citep[e.g.][]{Mahdavi08,Okabe10,Jee11}. We list our results for several different mass-observable relations in Table \ref{tbl:scaling}. \subsection{Temperature, Gas Mass, and Pseudo-Pressure} We begin by examining the lensing mass-gas temperature relationship in Figure \ref{fig:mt}; while exhibiting significant intrinsic scatter \citep{Ventimiglia08,Zhang08,Mantz10}, the $M$-$T$ relation is still a worthwhile keystone for comparison with previous work. We find that the relationship is consistent with being self-similar, with a larger scatter and uncertainty at the lensing $r_{500}^{WL}$ than at the X-ray $r_{500}$. Regardless of whether we consider the cool-core or the non-cool-core subsample, the scatter is roughly $46\%$. The scatter drops dramatically, to $17\% \pm 8\%$, when we use the X-ray $r_{500}$, because of the inherent correlation between the gas temperature and the X-ray $r_{500}$ itself, which we do not attempt to model.
The phenomenon of inherent correlation is discussed in greater detail by \cite{Kravtsov12}, and arises because the \emph{aperture} used to measure the mass is highly correlated with the observable on the other axis (in this case, the X-ray $r_{500}$ and the X-ray temperature are highly correlated). The normalization derived for the mass-temperature relation is consistent with previous work, for example \cite{Pedersen07}, \cite{Henry09}, and \cite{Okabe10}. \hlb{Table {\protect\ref{tbl:scaling}} also shows similar results for the core-excised X-ray luminosity-lensing mass ($L_X$-$M_{WL}$) relation. The intrinsic scatter ($35\% \pm 13\%$) is consistent with that of the mass-temperature relation, and as before, the scatter is dramatically lower at $r_{500}^X$ than at $r_{500}^{WL}$, again likely due to the internal correlation between $r_{500}^X$ and $L_X$, which we do not model.} Far more impressive is the gas mass-lensing mass relationship. The gas mass has been shown in previous work to be a useful mass proxy \citep{Mantz10,Okabe10}---essentially, the assumption that rich clusters of galaxies have the same gas fraction is turning out to be a remarkably robust one. \hlb{We improve the significance of the} \protect\cite{Okabe10} \hlb{finding with our sample of $50$ clusters}: at $r_{500}^{WL}$, the gas mass is consistent with being proportional to the lensing mass, with a log slope of $1.04 \pm 0.1$ and a normalization implying a gas fraction $f_\mr{gas} = 0.12 \pm 0.01$. We find a low scatter of $15\% \pm 8\%$ for the $M_\mr{gas}$-$M_L$ relation (Figure \ref{fig:mg}) \hly{for all clusters, regardless of dynamical state}. Interestingly, the same scatter holds regardless of whether we use the lensing $r_{500}^{WL}$ or a fixed aperture of 1 Mpc. This low scatter at fixed radius is important. Recently, sophisticated treatments of the covariance between the axes in the mass-observable relation have become possible \citep{Hogg10}. Specifically, in the case of the gas mass and lensing mass measured at $r_{500}^{WL}$, there is a subtle correlation between the two axes, even though one quantity (the lensing mass) is measured using optical data and the other (the gas mass) is measured using X-ray data. The issue is that the aperture itself, $r_{500}^{WL}$, depends on the lensing mass, and therefore, by choosing the same aperture for the gas mass, we might introduce a correlation that produces artificially low scatter. \hlb{This effect was described in detail by} \protect\cite{Becker11}\hlb{, who find that such correlations can result in the measured scatter being $\approx 50\%$ smaller than the true scatter.} \begin{figure*} \begin{tabular}{cc} \resizebox{3.2in}{!}{\includegraphics{K0-t-mlens.pdf}} & \resizebox{3.2in}{!}{\includegraphics{K0-t-mlens-rx500.pdf}} \\ \end{tabular} \caption{The mass-temperature relationship at the lensing $r_{500}^{WL}$ (left) and at the X-ray $r_{500}$ (right). The latter shows less scatter due to the intrinsic correlation of the X-ray $r_{500}$ with temperature. Blue triangles show cool-core clusters and red triangles show non-cool-core clusters. \label{fig:mt}} \end{figure*} \begin{figure*} \begin{center} \resizebox{4in}{!}{\includegraphics{bcg-mg-mlens}} \end{center} \caption{The gas mass-lensing mass relationship at the lensing $r_{500}^{WL}$. \protect\hlb{Blue circles show low-BCG-offset systems and red circles show high-BCG-offset systems.
Most of the low BCG offset systems are also low central entropy clusters.} \label{fig:mg}} \end{figure*} \begin{figure} \resizebox{3.2in}{!}{\includegraphics{bcg-mg-mlens-mpc}} \caption{Lensing mass vs. gas mass at a fixed physical radius of 1 Mpc. Blue circles show low-BCG-offset clusters and red \protect\hly{circles} show high-BCG-offset clusters. \protect\hlo{The relation retains the low scatter of the relations at fixed density contrast.} \label{fig:mgmpc}} \end{figure} \begin{figure} \resizebox{3.2in}{!}{\includegraphics{K0-yx-mlens}} \caption{Lensing mass vs. pseudo-pressure $Y_X$. Blue triangles show cool-core clusters and red triangles show non-cool-core clusters. \label{fig:yx}} \end{figure} However, using a physical aperture of 1 Mpc completely removes any possibility of covariance between the two axes. In \hlb{Figure \protect\ref{fig:mgmpc}}, we truly have two statistically independent observations, and yet the intrinsic scatter remains remarkably low, $16\% \pm 7\%$. The fact that the scatter does not change when switching to a fixed physical aperture is reassuring. \hlb{The $1\sigma$ scatter uncertainties are just large enough to accommodate the scatter underestimate predicted by} {\protect\cite{Becker11}} \hlb{(e.g., if the ``true'' scatter at both $r_{500}^{WL}$ and 1 Mpc is 20\%, our $1\sigma$ errors would be consistent with a 50\% scatter underestimate at $r_{500}^{WL}$ and no scatter underestimate at 1 Mpc).} \hlb{In Table {\protect\ref{tbl:scaling}} we also list the performance of $Y_X$, $L_X$, and $T_X$, measured within a fixed physical radius of 1 Mpc, as predictors of $M_{WL}(<1\,$Mpc$)$. Overall, we find little difference between the intrinsic scatter at 1 Mpc compared to $r_{500}^{WL}$.} \subsection{Regularity of cool core and low BCG offset clusters} Another point of particular importance is the fact that for the cool-core clusters, the $1\sigma$ scatter is $<10\%$ (the scatter is $<6\%$ if we cut on BCG offset instead)---these numbers are low enough to be consistent with zero. Simulations and analytical work \citep[e.g.][]{Becker11} show that \hlb{$\approx 15\%$} is roughly the amount of intrinsic scatter we can expect due to geometric errors from the assumption of spherical geometry. Thus deviations from spherical symmetry alone can produce the scatter \hlb{we observe} in the cool-core $M_\mr{gas}$-$M_L$ relation, and as a result, we can begin to claim that we are approaching a full accounting of all sources of systematic error in the mass-observable scaling relation. We note that the BCG offset works as well as the central entropy in identifying the low-scatter subsample. This is an interesting result because, of our four substructure measures, the BCG offset is by far the least expensive to calculate, in that it does not require X-ray temperature (spectral) information---\hlb{a set of X-ray and optical images} is sufficient to calculate $D_\mr{BCG}$. \hlo{However, it is worth noting that while the low BCG offset and cool-core subsamples have significant overlap, they are not precisely the same, and the two cuts trace two different types of equilibrium (dynamical and thermal, respectively)}. Another frequently used mass proxy is $Y_X$, the pseudo-integrated pressure first pioneered by \cite{Kravtsov06} and examined by \cite{Vikhlinin06}; being the product of the gas mass and the core-cut temperature at $r_{500}^{WL}$, $Y_X$ is directly comparable to the integrated Sunyaev-Zel'dovich Compton $Y$ parameter \citep{Plagge10,Andersson11}.
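Schematically, and as a sketch only (the slope and intercept below are taken from Table \ref{tbl:scaling}, and our reading of its normalization convention---proxies fit against $M_{WL} E(z)$ in units of $10^{14} M_\odot$---should be checked against the table notes), $Y_X$ converts to a mass estimate as:
\begin{verbatim}
import numpy as np

def m_wl_from_yx(m_gas, t_cut, z, slope=0.56, intercept=0.45):
    """Predict M_WL(<r500) in 1e14 Msun from Y_X = M_gas * T_cut,
    with M_gas in 1e14 Msun and T_cut in keV, using
    log10[M E(z)] = intercept + slope * log10[Y_X E(z)^0.6]."""
    Ez = np.sqrt(0.3 * (1.0 + z)**3 + 0.7)
    yx = m_gas * t_cut                # 1e14 Msun keV
    return 10.0**(intercept + slope * np.log10(yx * Ez**0.6)) / Ez

# e.g. M_gas = 0.6e14 Msun and T_cut = 6 keV at z = 0.2 give ~5e14 Msun
print(m_wl_from_yx(0.6, 6.0, 0.2))
\end{verbatim}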
We show the $Y_X$-$M_L$ relation at $r_{500}^{WL}$ for our sample in Figure \ref{fig:yx}; we find consistency with the expected self-similar slope of 0.6, but a slightly higher intrinsic scatter than for the gas mass when used as a mass proxy: the overall intrinsic deviation is \hlb{$\approx 23\% \pm 6\%$}, regardless of whether we use the entire sample or the cool-core subsample. \hlb{One might be tempted to argue that} the gas mass is a superior mass proxy to $Y_X$, not simply because of its ease of calculation and comparable overall intrinsic scatter, but also because of the systematically lower intrinsic scatter that comes about when only cool-core clusters are considered. \hlb{However, this discrimination between relaxed and non-relaxed clusters is perhaps not optimal in a cosmological context, where uniformity of scatter across the entire sample is important. Where uniformity is most important, $Y_X$ is a superior choice to gas mass because, as we show in Table {\protect\ref{tbl:scaling}}, it has uniform scatter regardless of cluster central entropy or BCG offset.} \hlb{Finally, it is instructive to compare $Y_X$ with its radio counterpart, the cylindrically integrated Sunyaev-Zel'dovich (SZ) pressure $Y_\mr{SZ}$}. {\protect\cite{Hoekstra12}} \hlb{consider direct correlations between $Y_\mr{SZ}$ from the \emph{Planck} mission and projected weak lensing masses; they find an intrinsic scatter of $12\pm5\%$ at projected $r_{2500}$. As a point of comparison, when we conduct a similar exercise on the spherically determined $Y_X$ and $M_{WL}$ (both measured at spherical $r_{2500}^{WL}$), we find an intrinsic scatter of $18\% \pm 6\%$, consistent with the} {\protect\cite{Hoekstra12}} \hlb{SZ comparison.} \subsection{Predicting $M_{500}^{WL}$ with fixed aperture mass proxies for surveys} \label{sec:mpc} \hlb{In a blind X-ray survey, the aperture $r_{500}^{WL}$, or even $r_{500}^X$, may not be easily available; for cosmology, we still need to know $M_{500}$. It is therefore useful to investigate whether one can directly predict $M_{500}^{WL}$ from the X-ray observables without the need to calculate overdensity radii $r_\Delta$.} For example, a wide-field all-sky X-ray survey may be able to measure hundreds of thousands of gas masses within fixed physical apertures, but lack the photon statistics to allow for the calculation of X-ray overdensity radii. In Figure \ref{fig:mixed} we consider this situation, plotting $M_\mr{WL}(<r_{500}^{WL})$ against the gas mass and $Y_X$ measured within a fixed radius of $1$ Mpc. As expected, the slopes now deviate from self-similar, and the intrinsic scatter is considerably higher than in Figures \ref{fig:mg} and \ref{fig:yx}. However, interestingly, $Y_X$ exhibits somewhat less scatter (29\%) in this ``mixed'' scaling relation than does the gas mass (40\%). In surveys with poor photon statistics, where no X-ray or weak lensing estimates of $r_{500}$ are readily available, $Y_X$ \emph{measured within a fixed physical aperture} may constitute the better mass proxy, because no separate estimate of the X-ray $r_{500}$ is required to use the relations shown in Figure \ref{fig:mixed}. The results are summarized in Table \ref{tbl:scaling}. \begin{figure*} \begin{tabular}{cc} \resizebox{3.5in}{!}{\includegraphics{bcg-mg-mlens-mixed.pdf}} & \resizebox{3.5in}{!}{\includegraphics{bcg-yx-mlens-mixed.pdf}} \end{tabular} \caption{The lensing mass at $r_{500}^{WL}$ vs. the gas mass and $Y_X$ measured within a \emph{fixed} radius of 1 Mpc.
Blue circles show low BCG offset clusters, while red circles show high BCG offset clusters. \label{fig:mixed}} \end{figure*} These data leave us with the perhaps dispiriting result that low ($<10\%$) scatter X-ray mass proxies may be derived either at fixed physical radii, yielding total mass estimates within fixed physical radii, or at fixed overdensity radii, yielding total mass estimates within fixed overdensities. It seems difficult, however, to achieve very low scatter without committing, in both axes, either to fixed physical radii (straightforward to measure, but more difficult to use for cosmology) or to fixed overdensity radii (more difficult to measure, but more useful for cosmology). \begin{figure*} \begin{tabular}{cc} \resizebox{3.5in}{!}{\includegraphics{K0-mhydro-mlens-r2500.pdf}} & \resizebox{3.5in}{!}{\includegraphics{K0-mhydro-mlens-r500.pdf}} \end{tabular} \caption{The relationship between hydrostatic mass and lensing mass at $r_{2500}^{WL}$ (\emph{left}) and $r_{500}^{WL}$ (\emph{right}). Blue triangles show cool-core clusters and red triangles show non-cool-core clusters. Cool core clusters tend to have hydrostatic masses that agree with their lensing masses; non-cool-core clusters tend to exhibit the hydrostatic mass underestimate. \protect\hlb{The solid line indicates the best fit; the long-dashed line indicates the line of equality; the short-dashed line corresponds to the cool-core clusters, and the dotted line corresponds to the non-cool-core clusters.} \label{fig:mhml}} \end{figure*} \begin{figure*} \begin{center} \resizebox{4in}{!}{\includegraphics{comp}} \end{center} \caption{The X-ray to weak-lensing mass ratio as a function of density contrast for cool-core systems (blue triangles) and non-cool-core systems (red triangles). \protect\hlb{The error bars are not independent, because the data within $r_{2500}$ also contribute to the measurement at $r_{500}$.} The shaded region shows the range of X-ray cluster mass underestimates determined by \protect\cite{Lau09}. \label{fig:mhmlsummary}} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{cc} \resizebox{3.5in}{!}{\includegraphics{ell}} & \resizebox{3.5in}{!}{\includegraphics{ell500}} \\ \end{tabular} \end{center} \caption{The X-ray to weak-lensing mass ratio as a function of BCG ellipticity for the 43 BCGs with measurable ellipticities at 30 kpc. The largest error bar in ellipticity belongs to CL0024. Shown are non-cool-core systems (red triangles) and cool-core systems (blue triangles). \protect\hlb{The correlation is significant only for the non-cool-core systems at $r_{2500}^{WL}$; there is a marginal correlation at $r_{500}^{WL}$. Cool-core systems do not participate in the trend}. \label{fig:ell}} \end{figure*} \subsection{Lack of Correlation with Substructure Measures} We have already argued that the intrinsic scatter in the lensing mass to X-ray observable relations can potentially be fully accounted for by the triaxiality of the clusters; nevertheless, it is still useful to consider whether the scatter in such relations may be further minimized via correlation with measures of substructure, at least as an empirical means to gauge the effect of this triaxiality. However, we find that none of the substructure measures---BCG offset, central entropy, centroid shift variance, or power ratio---has any significant correlation with the residuals in the mass-observable relation.
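Our test amounts to the following sketch (for illustration we use a simple least-squares fit for the residuals, rather than the full regression tool described above):
\begin{verbatim}
import numpy as np
from scipy import stats

def residual_correlation(log_proxy, log_mass, substruct, nboot=1000):
    """Spearman correlation between scaling-relation residuals and a
    substructure measure, with a bootstrap 1-sigma uncertainty."""
    m, b = np.polyfit(log_proxy, log_mass, 1)    # simple power-law fit
    resid = log_mass - (m * log_proxy + b)
    rho = stats.spearmanr(resid, substruct).correlation
    idx = np.random.randint(0, resid.size, size=(nboot, resid.size))
    boot = [stats.spearmanr(resid[i], substruct[i]).correlation
            for i in idx]
    return rho, np.std(boot)
\end{verbatim}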
We note that \cite{Marrone12} did find a residual correlation with BCG \emph{ellipticity} in the relationship between the weak lensing mass and the integrated Sunyaev-Zel'dovich effect signal $Y_\mr{SZ}$. We examine a similar relation in \S\ref{sec:ell}. It follows from this lack of correlation that the $M_X/M_{WL}$ ratio itself is not correlated with morphological measures such as the centroid shift or power ratio, either. \cite{Rasia12} consider a similar question; they examine whether the ratio of the X-ray mass to the true mass is correlated with the centroid shift or power ratio. They find a weak correlation between this ratio and the substructure measures, with a Pearson rank coefficient of $-0.2$ to $-0.3$, significant at $2\sigma$. We do not observe such a correlation, most likely because we do not measure the X-ray to true mass ratio, but rather the X-ray to weak lensing mass ratio, the latter of which has its own intrinsic scatter. In our present sample, then, it is possible to minimize the scatter in the mass-observable relation by conducting a cut on central entropy, but it is not possible to ``correct'' this scatter for the non-cool-core clusters by utilizing any of the four substructure quantifiers we consider, or even BCG ellipticity. \section{Deviations from Hydrostatic Equilibrium} \label{sec:hydro} \subsection{Hydrostatic Mass Underestimate} \cite{Mahdavi08} argued that a subsample of the clusters discussed here have X-ray masses at $r_{500}^{WL}$ that are on average $15\%$ lower than their lensing masses at $r_{500}^{WL}$. This discrepancy may be attributed to deviations from hydrostatic equilibrium due to residual gas motions and incomplete thermalization of the ICM; the fact that hydrostatic masses tend to underestimate the true mass by 10-20\% was first discussed by \cite{Evrard90} and continues to be seen in grid-based simulations \citep[e.g.][]{Lau09,Nelson12}, SPH simulations \citep[e.g.][]{Rasia06,Battaglia12,Rasia12}, and observations of distant clusters \citep{Andersson11,Jee11}. Biases in gravitational lensing masses could in principle also affect the X-ray to weak-lensing mass ratio; such systematic biases are only $\approx 5-10\%$, but would have the effect of increasing the X-ray to weak lensing mass ratio \citep[e.g.][]{Becker11}. Note that in recent N-body hydrodynamical simulations, even though the hydrostatic bias (10-15\%) is roughly twice the level of the weak lensing bias, the scatter about this bias is larger by a factor of two for weak lensing masses than for hydrostatic masses \citep{Meneghetti10,Rasia12,Nelson12}. It is worth pointing out, however, that the technique we use for our lensing mass measurements should yield a lower bias than suggested by these simulations. The technique achieves this lower bias of 3-4\% (rather than the expected 5-10\%) by omitting the regions of the shear map that are most susceptible to bias, at the cost of increased statistical uncertainty. We refer the reader to \cite{Hoekstra12} for details. In Figure \ref{fig:mhml} we extend our results to the full sample of 50 clusters. The larger size of the sample allows us to resolve differences between cool-core and non-cool-core clusters. We find that cool-core clusters and non-cool-core clusters do not exhibit the same level of departure from hydrostatic equilibrium. Cool core clusters have hydrostatic masses that are proportional to their weak lensing masses at all radii.
The $M_X$-$M_L$ relation for this subsample has a small scatter ($<20\%$), about the right level for all of the scatter to be accounted for by triaxiality. Overall, we find that cool core clusters are consistent with having no difference between their X-ray and weak lensing masses. The picture is dramatically different for non-cool-core clusters. In these systems, we find a roughly constant hydrostatic mass to lensing mass ratio of $80\%$, regardless of whether we look at $r_{500}^{WL}$ or $r_{2500}^{WL}$. Our results are consistent with N-body gas dynamical simulations, as shown in Figure \ref{fig:mhmlsummary} and Table \ref{tbl:mhmlsummary}. Broadly, these results are consistent with the hydrostatic mass underestimates predicted by gasdynamical simulations that account for unthermalized gas, such as \cite{Nagai07}, \cite{Jeltema08}, and \cite{Lau09}. We find that the non-cool-core clusters populate the lower end of the region allowed by these simulations, whereas the cool-core clusters populate the region where the X-ray and true masses agree within 10\%. Of these simulations, \cite{Jeltema08} is the most consistent with our measured $20\%$ average mass underestimate for disturbed systems. \subsection{Correlation with BCG Ellipticity} \label{sec:ell} Finally, we consider the question of whether the BCG ellipticity is correlated with differences between the hydrostatic and weak lensing masses. Such a correlation is suggested by \cite{Marrone12}, who use the integrated Compton parameter $Y_\mr{sph}$ as a mass proxy. In Figure \ref{fig:ell}, we \hlb{show $M_X / M_\mr{WL}$ at $r_{2500}^{WL}$ and $r_{500}^{WL}$, plotted against CFHT ellipticities measured at 30 kpc}. \hlo{We find that cool-core systems are consistent with $M_X/M_L=1$ $(\chi^2/\nu = 18/14)$, whereas non-cool-core systems are definitively not consistent with $M_X/M_L=1$ ($\chi^2/\nu=70/29$).} For non-cool-core systems, we find a good correlation between the BCG ellipticity and the X-ray to weak lensing mass ratio at $r_{2500}^{WL}$, and a weak correlation at $r_{500}^{WL}$. \hlo{While this is similar to the trend found by \protect\cite{Marrone12} for $Y_\mr{sph}$, there is a difference in that our cool-core systems do not appear to participate in the correlation}. \hlo{Furthermore, also in apparent contrast with \protect\cite{Marrone12}}, \hly{our correlation becomes} \hlb{less significant at $r_{500}^{WL}$}. We interpret this result as suggesting that while cluster orientation plays some role in producing low X-ray to weak lensing mass ratios, it is not the only agent at work in this complex relationship \hlb{(indeed, the hydrostatic mass underestimate must also play a role)}. We note that it is not altogether surprising that the trend of ellipticity with $M_X/M_L$ for cool core clusters is insignificant: we have shown in \S\ref{sec:hydro} that our X-ray and weak-lensing masses are consistent for this sub-population (in contrast, \cite{Marrone12} contained several undisturbed clusters with significant $Y_\mr{sph}$ to weak lensing mass discrepancies). \hlb{Furthermore,} it is difficult to untangle the effects of elongation along the line of sight (which would chiefly bias the weak lensing masses high) and non-hydrostatic gas (which would chiefly bias the X-ray masses low). \hlb{We also stress that the trend is marginal at best at $r_{500}^{WL}$.} However, empirically, we can point out that the non-cool-core clusters with the highest ellipticities have consistent X-ray and weak lensing masses, something corroborated by \protect\cite{Marrone12}.
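The consistency tests quoted above amount to the following sketch (a simplification: the quoted $\chi^2$ values come from the full error budget, whereas here the per-cluster ratio errors are taken as given):
\begin{verbatim}
import numpy as np

def mean_ratio(mx, ml, err):
    """Inverse-variance weighted mean of M_X / M_L and the chi^2 of
    the ratios against unity; err is the error on each ratio."""
    r = mx / ml
    w = 1.0 / err**2
    rbar = np.sum(w * r) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    chi2 = np.sum((r - 1.0)**2 / err**2)
    return rbar, sigma, chi2, r.size   # compare chi2 to ~ r.size dof
\end{verbatim}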
\begin{deluxetable*}{lccrccc} \tablecaption{Mass Proxy Fits with Lognormal Intrinsic Scatter \label{tbl:scaling}} \tablehead{ & \colhead{Proxy} & \colhead{$M_{WL}$} & & \colhead{Log} & \colhead{Log} & \colhead{Fractional Scatter} \\ \colhead{Proxy} & \colhead{Aperture} & \colhead{Aperture} & \colhead{Sample} & \colhead{Slope} & \colhead{Intercept} & \colhead{in $M_{WL}$ at fixed proxy}} \startdata \multicolumn{7}{c}{Relations at Fixed Overdensity in Proxy and Mass} \\ \hline $T^{cut}_X/8$ keV & $r_{500}^{WL}$ & $r_{500}^{WL}$ & all & $1.97\pm0.89$ & $1.04 \pm 0.06$ & $0.46\pm0.23$ \\ $T^{cut}_X/8$ keV & $r_{500}^{X}$ & $r_{500}^{X}$ & all & $1.42\pm0.19$ & $0.96\pm0.02$ & $0.17\pm0.08$ \\ \\ $L^{cut}_X E(z)^{-1} $ & $r_{500}^{WL}$ & $r_{500}^{WL}$ & all & $0.54\pm0.12$ & $0.81\pm0.04$ & $0.36\pm0.12$ \\ $L^{cut}_X E(z)^{-1} $ & $r_{500}^{X}$ & $r_{500}^{X}$ & all & $0.57\pm0.08$ & $0.78\pm0.03$ & $0.27\pm0.05$ \\ \\ $M_\mr{Gas} E(z)$ & $r_{500}^{WL}$ & $r_{500}^{WL}$ & all & $1.04\pm0.10$ & $0.90\pm0.02$ & $0.15\pm0.06$ \\ &&&{$K_0<70$ keV cm$^2$} & $0.91\pm0.20$ & $0.89\pm0.03$ & $<0.1$ \\ &&&{$K_0>70$ keV cm$^2$} & $1.09\pm0.13$ & $0.90\pm0.02$ & $0.18\pm0.09$ \\ &&&{$D_\mr{BCG} < 0.01$ Mpc} & $0.93\pm0.13$ & $0.89\pm0.02$ & $<0.06$ \\ &&&{$D_\mr{BCG} > 0.01$ Mpc} & $1.13\pm0.18$ & $0.90\pm0.03$ & $0.22\pm0.15$ \\ \\ $Y_X E(z)^{0.6}$ & $r_{500}^{WL}$ & $r_{500}^{WL}$ & all & $0.56\pm0.07$ & $0.45\pm0.07$ & $0.22\pm0.05$ \\ &&& {$K_0<70$ keV cm$^2$}& $0.44\pm0.14$ & $0.53\pm0.11$ & $0.24\pm0.18$ \\ &&& {$K_0>70$ keV cm$^2$}& $0.62\pm0.10$ & $0.41\pm0.09$ & $0.21\pm0.09$ \\ &&& {$D_\mr{BCG} < 0.01$ Mpc} & $0.48\pm0.09$ & $0.52\pm0.08$ & $0.17\pm0.11$ \\ &&& {$D_\mr{BCG} > 0.01$ Mpc} & $0.65\pm0.14$ & $0.36\pm0.13$ & $0.27\pm0.17$ \\ \\ \multicolumn{7}{c}{Relations at Other Radii} \\ \hline $T^{cut}_X/8$ keV & 1 Mpc & 1 Mpc & all & $1.10\pm0.57$ & $0.80\pm0.02$ & $0.15\pm0.11$ \\ $L^{cut}_X$ & " & " & all & $0.26\pm0.07$ & $0.71\pm0.02$ & $0.19\pm0.04$ \\ $M_\mr{Gas}$ & " & " & all & $0.83\pm0.14$ & $0.90\pm0.03$ & $0.16\pm0.10$ \\ $Y_X$ & " & " & all & $0.40\pm0.06$ & $0.48\pm0.05$ & $0.12\pm0.04$ \\ $T^{cut}_X/8$ keV & " & $r_{500}^{WL}$ & all & $3.04\pm1.38$ & $1.03\pm0.08$ & $0.46\pm0.31$ \\ $L^{cut}_X$ & " & $r_{500}^{WL}$ & all & $0.58\pm0.15$ & $0.80\pm0.04$ & $0.38\pm0.13$ \\ $M_\mr{Gas}$ & " & $r_{500}^{WL}$ & all & $1.73\pm0.59$ & $1.20\pm0.13$ & $0.39\pm0.18$ \\ $Y_X$ & " & $r_{500}^{WL}$ & all & $0.80\pm0.15$ & $0.35\pm0.11$ & $0.28\pm0.14$ \\ \enddata \tablecomments{All proxies are fit against $M_{WL} E(z)$ at an aperture of $r_{500}^{WL}$, or against $M_{WL}$ at an aperture of 1 Mpc. All masses are in units of $10^{14} M_\odot$. The core-cut X-ray luminosity is in units of $10^{45}$ erg s$^{-1}$, and $Y_X$ is in units of $10^{14} M_\odot$ keV.
The self-similar evolution model for clusters of galaxies \protect\citep[e.g.][]{Kaiser91,Kravtsov12} posits $M E(z) \propto T_X^{3/2} \propto L_X^{3/4} E(z)^{-1} \propto Y_X^{3/5} E(z)^{3/5}$, where $E(z)^2 = \Omega_M (1+z)^3+\Omega_\Lambda$.} \end{deluxetable*} \begin{deluxetable*}{rlccc} \tablecaption{X-ray to Weak Lensing Mass Ratios \label{tbl:mhmlsummary}} \tablehead{Contrast & Sample & $M_X/M_L$ & \colhead{Fractional Scatter} \\ &&&\colhead{in $M_X$ at fixed $M_L$}} \startdata $r_{2500}^\mr{WL}$& All & $0.92\pm0.05$ & $0.19\pm0.05$ \\ & $K_0 < 70$ keV cm$^{2}$ & $1.11\pm0.10$ & $<0.10$ \\ & $K_0 > 70$ keV cm$^{2}$ & $0.85\pm0.05$ & $0.19\pm0.06$ \\ & $D_\mr{BCG} < 0.01$ Mpc & $1.04\pm0.07$ & $<0.15$ \\ & $D_\mr{BCG} > 0.01$ Mpc & $0.81\pm0.07$ & $0.24\pm0.07$ \\ \\ $r_{1000}^\mr{WL}$& All & $0.89\pm0.05$ & $0.20\pm0.05$ \\ & $K_0 < 70$ keV cm$^{2}$ & $1.08\pm0.09$ & $<0.09$ \\ & $K_0 > 70$ keV cm$^{2}$ &$0.83\pm0.06$ & $0.20\pm0.06$ \\ & $D_\mr{BCG} < 0.01$ Mpc & $0.97\pm0.07$ & $0.13\pm0.10$ \\ & $D_\mr{BCG} > 0.01$ Mpc & $0.84\pm0.06$ & $0.22\pm0.07$ \\ \\ $r_{500}^\mr{WL}$ & All & $0.88\pm0.05$ & $0.21\pm0.06$ \\ & $K_0 < 70$ keV cm$^{2}$ & $0.97\pm0.10$ & $0.17\pm0.13$ \\ & $K_0 > 70$ keV cm$^{2}$ & $0.83\pm0.07$ & $0.22\pm0.07$ \\ & $D_\mr{BCG} < 0.01$ Mpc & $0.85\pm0.09$ & $0.22\pm0.11$ \\ & $D_\mr{BCG} > 0.01$ Mpc & $0.89\pm0.07$ & $0.20\pm0.08$ \\ \end{deluxetable*} \section{Conclusion} \label{sec:conclusion} We examine archival X-ray data on a sample of 50 clusters of galaxies; most of the clusters have \emph{Chandra} data, while roughly half have \emph{XMM-Newton} data of good quality. All clusters have CFHT weak gravitational lensing data from either the CFH12k or the MegaCam instruments. In attempting to combine \emph{Chandra} and \emph{XMM-Newton} data to maximize both effective area and spatial resolution, we confirm previously reported systematic calibration differences between the two observatories. Using multiple calibration releases, we find a $15\%$ systematic difference in hydrostatic masses between \emph{Chandra} and \emph{XMM-Newton}. Reassuringly, there is little or no intrinsic scatter between the masses from the two observatories, indicating that the issue is merely a matter of overall gain calibration and not a more serious spatially dependent issue. We develop an effective area correction that revises \emph{Chandra} masses downward into agreement with \emph{XMM-Newton} masses. This correction is only valid for high temperature ($\gtrsim 5$ keV) clusters such as ours; at lower temperatures, the two observatories are more consistent due to the prominence of X-ray lines. Using the $L_X$-$T_X$ relation, we find that our sample is consistent with being randomly drawn from the same parent population as samples with well understood selection functions, such as HIFLUGCS \citep{Reiprich02} and MACS \citep{Ebeling10}. We examine several measures of substructure, including the central entropy, the BCG to X-ray peak offset, the centroid shift variance, and the power ratios. There is a significant correlation among all the substructure measures. The most strikingly correlated quantities are the BCG to X-ray peak offset (in Mpc) and the central entropy measured at a radius of 20 kpc. The hint of bimodality in the joint 2D distribution of the BCG offset and central entropy indicates a complex connection between the thermal and dynamical relaxation times of galaxy clusters.
Gas mass is by far the most robust predictor of weak lensing mass, with $<10\%$ scatter for cool-core clusters and $14\% \pm 6\%$ scatter for the sample overall. It is followed by the X-ray pseudo-pressure, $Y_X$, which has $22\% \pm 5\%$ intrinsic scatter for both cool-core clusters and the sample overall. The mass-temperature relationship has even higher scatter, $43\% \pm 21\%$ for the sample overall. All scaling relations have slopes that are consistent with the expected self-similar value. We also find that core-excised X-ray luminosity is somewhat better than temperature at predicting weak lensing mass, yielding $28\%\pm18\%$ intrinsic scatter for relaxed systems. By comparing hydrostatic and weak gravitational lensing masses, we extend our earlier detection \citep{Mahdavi08} of non-hydrostatic gas, with associated deviations from hydrostatic equilibrium, in X-ray clusters of galaxies. We are able to quantify the hydrostatic mass underestimate separately for cool-core and non-cool-core clusters. We find that cool-core clusters exhibit little or no difference between their weak lensing and X-ray masses; the hydrostatic mass underestimate is consistent with 0\% at both $r_{2500}^{WL}$ and $r_{500}^{WL}$. Non-cool-core clusters, on the other hand, have fairly consistent, $\approx 20\% \pm 10\%$, underestimates between the same radii. This is broadly consistent with N-body gasdynamical simulations of unthermalized gas. Except for the non-core-cut $L_X$-$T_X$ relation, we do not find a significant correlation between the \emph{residuals} in a given scaling relation and any of our four substructure measures (central entropy, BCG offset, centroid shift variance or P3/P0 power ratio). We interpret this result as indicating that it is not possible to reduce the intrinsic scatter in a scaling relation (other than the $L_X-T_X$ relation) by applying corrections based on substructure measures to individual clusters. In essence, clusters of galaxies have ``forgotten'' the sources of their departures from self-similarity. This lack of correlation suggests that we may have accounted for most if not all the parameters that could affect the cluster selection function for cosmological surveys, and that few if any ``hidden'' parameters remain. However, we do find a partial trend with cluster ellipticity: cool-core clusters have consistent X-ray and weak lensing masses at $r_{2500}^{WL}$, whereas non-cool-core clusters have increasing $M_X(<r_{2500}^{WL})/M_L(<r_{2500}^{WL})$ with BCG ellipticity at 30 kpc. Clusters with low ellipticity BCGs are the most likely to have mismatched X-ray and weak lensing masses, while clusters with higher ellipticity are more likely to have concordant X-ray and weak lensing masses. We leave it to future studies to determine which combination of X-ray non-hydrostatic bias and lensing projection bias is contributing to this trend. We emphasize that the X-ray peak to BCG location offset is perhaps the most efficient of the substructure measures we inspected. Selecting clusters based on low BCG offset is sufficient to guarantee scatter consistent with zero in the gas mass-lensing mass relation, at least for a sample as large or larger than ours. In summary, we find that cool-core clusters with $K_0< 70$ keV cm$^{2}$ or BCG offset $<0.01$ Mpc are extremely well-behaved and regular systems with respect to their X-ray and lensing properties.
However, it should be noted that the two cuts do not select the same subsamples, because a low BCG offset is indicative of dynamical equilibrium, whereas a low central entropy is a result of thermal equilibrium. While there are clusters that are in both thermal and dynamical equilibrium, the overlap is not perfect. Clusters with $K_0 > 70$ keV cm$^{2}$ show some intriguing properties---such as tightly correlated P3/P0 power ratios and BCG offsets, a linear correlation between $M_X/M_L$ and ellipticity, and consistently low X-ray to weak lensing mass ratios---but larger samples and more careful theoretical studies are required before we can learn how to use these relations to gain greater physical insight into their evolution. \acknowledgments{ The authors would like to acknowledge productive discussions with Steve Allen, Hans B\"ohringer, Dick Bond, Maru\u{s}a Brada\u{c}, Megan Donahue, Stefano Ettori, Gus Evrard, Fabio Gastaldello, Andrey Kravtsov, Dan Marrone, Daisuke Nagai, Trevor Ponman, Graham Smith, David Spergel, and Mark Voit. The anonymous referee made comments which improved the paper. AM and TJ were supported by NASA through Chandra award No. AR0-11016A, issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. AM was also supported through NASA ADAP grant 11-ADAP11-0270. AB would also like to acknowledge research funding from NSERC Canada through its Discovery Grant program as well as support provided by J. Criswick. HH acknowledges support from the Netherlands organisation for Scientific Research (NWO) through VIDI grant 639.042.814; HH and CB acknowledge support from Marie Curie IRG Grant 230924. AM and AB acknowledge an especially productive time at the Kavli Institute for Theoretical Physics, where this research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915. } \bibliographystyle{apj} \bibliography{ms}
\section{Introduction} The recent surge of interest in the band structure of unusual crystals was spurred by various developments in the study of condensed matter systems. Two examples relevant to the present work are the successful isolation of single-layer graphene \cite{Neto} and the fabrication of 3D topological insulators \cite{Hasan,Qi}. In such systems, in contrast to ordinary two-dimensional (2D) crystals, the low-energy hamiltonian involves at least two coupled bands. This gives rise to band crossings which, depending on the material parameters, may or may not be gapped. The extra degree of freedom in the internal space (generically called the pseudospin space) offers opportunities for investigating new system properties. Most recently, hamiltonian engineering with artificial crystals -- which is a main theme of the active field of quantum simulation, see e.g. \cite{BDN} -- provides a complementary route to realize coupled-band systems \cite{Mortessagne,Tarruell,Gomes}. While the low-energy hamiltonian closely mimics that of its solid-state counterpart, it is no longer limited to the parameters of the actual material. For example, in the cold atom experiment performed at ETH Z\"{u}rich \cite{Tarruell}, a Dirac cone pair in the bandstructure is brought to merge as a function of laser parameters, thus realizing a Lifshitz transition which has never been reached in graphene \cite{Hasegawa,Dietl,Wunsch,MontambauxUH}. The merging is a topological transition in which two Dirac cones of opposite Berry phase approach and annihilate before a gap opens. In the ETH Z\"urich experiment, Bloch oscillations of non-interacting fermionic atoms in a honeycomb-like optical lattice are performed to study the merging transition of Dirac cones. As an atom traces out a closed trajectory in momentum space, it may tunnel to the second band when it comes close to a linear avoided band crossing (i.e. a Dirac cone), a process known as Landau-Zener tunneling. By measuring the transfer probability after a Bloch cycle, one can extract information about the bandstructure with momentum resolution \cite{Weitz}. In Ref. \cite{Lim}, we presented a tight-binding model that reproduces the optical band structure in the parameter space of the experiment. Using a low-energy description of the tight-binding model known as the universal hamiltonian \cite{MontambauxUH}, we quantitatively reproduced the experimental results of Ref. \cite{Tarruell}. In the framework of the universal hamiltonian, the inter-band tunneling problem depends on only two relevant parameters: the merging gap $d$ -- which controls the proximity to the transition -- and the momentum perpendicular to the direction of motion $k$ -- which controls the adiabaticity of the motion, or in other words how far in momentum space the atom is from hitting exactly the tip of the Dirac cone. In particular, in Ref. \cite{Lim} we explained the situation where the two Dirac cones are hit in succession during a single Bloch oscillation (see Fig. \ref{fig:zenery}) by using a simple approximation, known as the St\"uckelberg theory \cite{Stuckelberg,revueStuck}, in which the tunneling events are assumed to be independent. The validity of this approach is restricted to the gapless phase ($d<0$), not too close to the merging transition ($d\ll -1,-k$), i.e. to the regime where the two Dirac cones are well separated. The present paper is an extension of our letter \cite{Lim} and focuses on the tunneling problem where the atom encounters two Dirac cones in succession.
Here, we go beyond the independent cone approximation and present a complete picture of the inter-band transition probability as a function of the two parameters $d$ and $k$. In particular, we now access the whole phase diagram, including the gapped phase ($d>0$) and the transition point ($d=0$). Our paper is organized as follows. In section II, we formulate the inter-band tunneling problem for the universal hamiltonian. In section III, we recall the approximate solution used in \cite{Lim} based on the St\"uckelberg theory. We then present three other analytical approaches: diabatic perturbation theory in section IV, adiabatic perturbation theory in section V and a modified St\"uckelberg formula in section VI. In section VII, we present numerical solutions in the whole parameter space and compare the results of the different approaches. Finally, in section VIII, we compare our results to the ETH Z\"urich experiment before concluding in section IX. \begin{figure}[ht] \begin{center} \includegraphics[width=5cm]{zenery.jpg} \end{center} \caption{(Color online) Energy spectrum in the gapless phase ($\Delta_*<0$): energy $E=\pm \sqrt{(p_x^2/(2m_*)+\Delta_*)^2+c_y^2p_y^2}$ as a function of momentum $p_x\sim t$ and $p_y \sim k$. The distance between the two Dirac cones is controlled by the merging gap $\Delta_*\propto d$. The perpendicular gap $c_y p_y \propto k$ controls how far the particle is from hitting the Dirac cones, which are located at $(t= \pm \sqrt{|d|},k=0)$, directly at their tip. The black lines are lines of constant $k$.} \label{fig:zenery} \end{figure} \section{Inter-band tunneling in the universal hamiltonian} In the Landau-Zener (LZ) problem \cite{Landau,Zener,Wittig}, an avoided linear crossing between two bands is considered and the probability for a particle (which will be called an electron in the following) to tunnel from the lower to the upper band under a constant applied force is calculated. Landau solved the problem approximately using perturbation theory and the semiclassical approximation \cite{Landau}, while Zener was able to find the exact solution \cite{Zener}. For a concise modern presentation, see Ref. [\onlinecite{Wittig}]. Here we consider such a tunneling problem for the case of a quadratic band crossing. The latter occurs close to the merging transition of Dirac points \cite{Hasegawa,Dietl,Wunsch,MontambauxUH} and was recently observed in a cold atom realization of a graphene analog \cite{Tarruell}. In the gapless phase, the quadratic band crossing can be approximated as two successive linear crossings (or Dirac cones), which is at the heart of the St\"uckelberg approach (see below). We start from the universal hamiltonian describing the vicinity of the merging transition \cite{MontambauxUH}: \begin{equation} H_u=\left[\frac{p_x^2}{2m_*} + \Delta_*\right] \sigma_x + c_y p_y \sigma_y \label{Hu} \end{equation} It depends on three real parameters: an effective mass $m_*> 0$ giving the spectrum curvature in the $x$ direction, an effective velocity $c_y>0$ for the $y$ direction and a merging gap $\Delta_*$, which is a real number controlling the distance to the transition \cite{xy}. The state space is that of an electron moving in a two-dimensional plane and carrying a pseudo-spin 1/2 described by the Pauli matrices $\sigma_x,\sigma_y,\sigma_z$. The corresponding spectrum is $E=\pm \sqrt{(p_x^2/2m_*+\Delta_*)^2+c_y^2p_y^2}$ and is plotted in Fig. \ref{fig:zenery} when $\Delta_*<0$.
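To make the conventions concrete, here is a minimal numerical sketch (ours, not part of the original analysis; it assumes only NumPy and the function names are hypothetical) that evaluates the two bands of the universal hamiltonian and checks that the gap closes at the Dirac points in the gapless phase:
\begin{verbatim}
import numpy as np

def bands(px, py, m_star=1.0, c_y=1.0, delta_star=-1.0):
    # Two bands of the universal hamiltonian:
    # E = +/- sqrt((px^2/(2 m*) + Delta*)^2 + (c_y py)^2)
    e = np.sqrt((px**2 / (2.0 * m_star) + delta_star)**2
                + (c_y * py)**2)
    return e, -e

# Gapless phase (Delta* < 0): Dirac cones at
# px = +/- sqrt(2 m* |Delta*|), py = 0.
m_star, delta_star = 1.0, -1.0
px_dirac = np.sqrt(2.0 * m_star * abs(delta_star))
print(bands(px_dirac, 0.0, m_star=m_star,
            delta_star=delta_star))   # -> (0.0, -0.0): the gap closes
\end{verbatim}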
If the merging gap is negative, the spectrum is gapless and contains two Dirac cones at $(p_x=\pm \sqrt{2m_* |\Delta_*|},p_y=0)$. If it is zero (the merging point), the two Dirac cones are on top of each other and the spectrum is linear in one direction and quadratic in the perpendicular direction, $E=\pm \sqrt{(p_x^2/2m_*)^2+c_y^2p_y^2}$ \cite{Dietl}. If it is positive, there are no band touching points anymore but a true gap $2\Delta_*$ between the two bands. We add a constant electric field $\mathcal{E}$ in the $x$ direction such that during its motion an electron encounters the two Dirac cones in succession \cite{nosinglecone}, see Fig. \ref{fig:zenery}. The gauge is such that the vector potential $A_x=-\mathcal{E}t$ and $A_y=0$. Therefore \begin{equation} H_u(t)=\left[\frac{(p_x-Ft)^2}{2m_*} + \Delta_*\right] \sigma_x + c_y p_y \sigma_y \label{Hu2} \end{equation} The force $F=e\mathcal{E}$ is taken to be positive and $-e<0$ is the electron charge. The hamiltonian commutes with $p_x$ and $p_y$; therefore the non-trivial dynamics occurs only in the internal space of the pseudo-spin $1/2$, and $p_x$ and $p_y$ can be taken as c-numbers (conserved quantities). Shifting the origin of time $Ft-p_x\to Ft$, it is now possible to get rid of $p_x$. This hamiltonian defines a characteristic energy scale $E_{char}=(\hbar F)^{2/3}/(2m_*)^{1/3}$, and therefore a timescale $t_{char}=\hbar/E_{char}$ and a length scale $L_{char}=E_{char}/F$. Energies, times and lengths are therefore given in units of these characteristic scales. We then define the dimensionless quantities $d\equiv \Delta_*/E_{char}$ and $k\equiv c_y p_y/E_{char}$ and the dimensionless hamiltonian $H_u(t)=[t^2+d]\sigma_x+ k \sigma_y$. Performing a unitary transformation in pseudo-spin space allows one to rewrite the $2\times 2$ hamiltonian in a familiar LZ form. Let $(\sigma_x,\sigma_y,\sigma_z)\to (\sigma_z,\sigma_x,\sigma_y)$, which is realized by the unitary operator $U=\exp(i \frac{\pi}{3}\boldsymbol{\sigma}\cdot \mathbf{n})=\frac{1}{2}(\mathbb{I}+i\sigma_x+i\sigma_y+i\sigma_z)$ where $\mathbf{n}=(1,1,1)/\sqrt{3}$. Then $H_u(t)$ becomes \begin{equation} H(t) =\left(\begin{array}{cc}E_1(t)&H_{12}\\H_{21}&E_2(t) \end{array} \right)=[t^2+d]\sigma_z+ k \sigma_x \label{dimensionlessh} \end{equation} where $E_1(t)=-E_2(t)=t^2+d$ is a quadratic function of time (in contrast to the original LZ problem, in which $E_1(t)=-E_2(t) \propto t$) and $H_{21}=H_{12}=k=\textrm{const}$. The quantities $d$ and $k$ are the only two relevant dimensionless parameters. The first parameter, $d$, controls the distance to the merging transition, which occurs at $d=0$. When $d<0$ there are two Dirac cones (gapless phase) and when $d>0$ there are no Dirac cones (gapped phase). The other parameter, $k$, controls how far the electron is from hitting the Dirac cones directly \cite{tk}, see Fig. \ref{fig:zenery}; it is also a measure of the adiabaticity. We call $d$ the merging gap and $k$ the perpendicular gap. The orthonormal basis $\{|1\rangle,|2\rangle\}$ in which the hamiltonian is written is called the diabatic basis. The diabatic spectrum corresponds to negligible $k$ and is simply $E_1(t)=t^2+d$ and $E_2(t)=-E_1(t)$. It is plotted in Fig. \ref{fig:spectrum}(a) for negative $d$. It features two band crossings in real time. \begin{figure}[ht] \begin{center} \includegraphics[width=6cm]{diabats} \includegraphics[width=6cm]{adiabats} \end{center} \caption{(Color online) Energy $E$ as a function of time $t$ when $d=-2<0$.
(a) Diabats $E_{1,2}=\pm [t^2 +d]$ are parabolas that intersect in real time (the continuous blue line is for the $+$ sign and the dashed red line for the $-$ sign). (b) Adiabats $E_{+,-}=\pm \sqrt{[t^2 + d]^2+k^2}$ (with $k=0.5$) do not intersect in real time (the continuous blue line is for the $+$ sign and the dashed red line for the $-$ sign) but do intersect in complex time. } \label{fig:spectrum} \end{figure} The state of the electron at a given time $t$ is described by the two-component spinor $|\psi(t)\rangle$. Its time evolution is given by the Schr\"odinger equation $i\frac{d}{dt}|\psi\rangle = H(t)|\psi\rangle$. Let us assume that initially ($t\to -\infty$) the electron is in the lower band, $|\psi(-\infty)\rangle\sim |2\rangle$. Our aim is to compute the probability $P(k,d)=|\langle 1|\psi(\infty)\rangle|^2$ that it ends in the upper band ($t\to \infty$) as a function of the two parameters $k$ and $d$. As the result does not depend on the sign of $k$, we will assume that $k\geq 0$, without loss of generality. In the following, we mathematically formulate this problem in two different bases, namely the diabatic and the adiabatic bases. \subsection{Diabatic basis} In the diabatic basis, we write the state at an arbitrary time as a function of two complex numbers $A_1(t)$ and $A_2(t)$: \begin{equation} |\psi(t)\rangle=A_1(t)e^{-i\int^t dt' E_1(t') }|1\rangle + A_2(t)e^{-i \int^t dt' E_2(t')} |2\rangle \end{equation} The time evolution is governed by the Schr\"odinger equation, which reads: \begin{eqnarray} \dot{A}_1&=&-i H_{12}A_2(t) e^{i \int^t dt' E_{12}(t')} \nonumber \\ \dot{A}_2&=&-i H_{12}^* A_1(t) e^{-i \int^t dt' E_{12}(t')} \label{eq:diababasis} \end{eqnarray} where $E_{12}(t)\equiv E_1 (t) - E_2 (t)$. One therefore needs to solve this system of coupled equations with the initial conditions $A_1(-\infty)=0$ and $A_2(-\infty)=1$ (up to a global phase factor). As $|A_1(t)|^2+|A_2(t)|^2=1$ at any time, we are only interested in finding $P=|A_1(\infty)|^2$. This system of two coupled first-order differential equations can also be written as a single second-order differential equation for $A_1$ (or $A_2$) alone \cite{Zener}. If the force is large, the motion of the electron is fast and $k=c_y p_y (2m_*)^{1/3}/(\hbar F)^{2/3}\ll 1$ is negligible. This is the diabatic or sudden limit. In such a limit, the electron stays in the lower state $|2\rangle$ and $P\to 0$. Indeed, when $k=0$, the tunneling probability is zero for all $d$ as a result of the conservation of the pseudo-spin $\sigma_z$, which commutes with the hamiltonian $H(t)$. This may seem surprising, as it means that even when the two bands overlap the probability of inter-band tunneling is zero. In particular, when $d=0$ with a single quadratic crossing point, the electron does not tunnel to the upper band even though the gap vanishes. This is actually the same phenomenon as Klein tunneling for a 1D version of the graphene bilayer, see e.g. the appendix of Ref. [\onlinecite{AllainFuchs}]. In the gapless phase $d<0$, this may be seen as two successive perfect Klein tunnelings for a 1D massless Dirac electron: first going from the lower to the upper band with unit probability at the first Dirac cone and then going down to the lower band with certainty at the second. When $k$ is non-zero but small, one can solve the coupled differential equations in perturbation theory as shown below, and show that the probability becomes finite.
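These coupled equations are also straightforward to integrate numerically; this is how the numerically exact results of section VII are obtained. As a minimal sketch (ours, not the original code; it assumes SciPy, with a finite window $[-T,T]$ standing in for $t=\pm\infty$), one can propagate the dimensionless Schr\"odinger equation with $H(t)=[t^2+d]\sigma_z+k\sigma_x$ directly in the diabatic basis and read off $P=|\langle 1|\psi(T)\rangle|^2$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def transition_probability(d, k, T=15.0):
    # Integrate i d(psi)/dt = H(t) psi, H(t) = (t^2+d) sigma_z + k sigma_x,
    # starting from the lower diabatic state |2> at t = -T.
    def rhs(t, y):
        a1 = y[0] + 1j * y[1]   # component along |1>
        a2 = y[2] + 1j * y[3]   # component along |2>
        da1 = -1j * ((t**2 + d) * a1 + k * a2)
        da2 = -1j * (k * a1 - (t**2 + d) * a2)
        return [da1.real, da1.imag, da2.real, da2.imag]
    sol = solve_ivp(rhs, [-T, T], [0.0, 0.0, 1.0, 0.0],
                    rtol=1e-8, atol=1e-10)
    a1 = sol.y[0, -1] + 1j * sol.y[1, -1]
    return abs(a1)**2   # P = |<1|psi(T)>|^2

print(transition_probability(d=-1.0, k=0.5))
\end{verbatim}
Since $H(t)$ is dominated by its diagonal part at large $|t|$, $|A_1|^2$ converges quickly with $T$; only the phases of the amplitudes keep evolving.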
\subsection{Adiabatic basis} It is also useful to write the same problem in the adiabatic basis, which corresponds to diagonalizing $H(t)$ with $t$ being treated as a parameter. We call $E_\alpha (t)=\alpha \sqrt{(t^2+d)^2+k^2}=\alpha E_+$ the adiabatic eigenenergies (plotted in Fig. \ref{fig:spectrum}(b) when $d<0$, see also Fig. \ref{fig:zenery}), where $\alpha=\pm $ is the band index, and $|\psi_\alpha(t)\rangle$ the corresponding eigenvectors. They satisfy $H(t)|\psi_\alpha(t)\rangle =E_\alpha(t)|\psi_\alpha(t)\rangle$. The angle $\theta(t)$ is defined by $\sin \theta =k/E_+$ and $\cos \theta = (t^2+d)/E_+$, which allows us to write the adiabatic eigenvectors as \begin{equation} |\psi_+(t)\rangle =\left(\begin{array}{c}\cos (\theta/2)\\ \sin (\theta/2) \end{array} \right); \,\, |\psi_-(t)\rangle =\left(\begin{array}{c}\sin (\theta/2)\\ -\cos (\theta/2) \end{array} \right) \end{equation} They form an orthonormal basis at each $t$. The state of the electron at any time can now be expressed in this basis as \begin{equation} |\psi(t)\rangle=\sum_\alpha A_\alpha (t) e^{-i\int^t dt' E_\alpha(t')}|\psi_\alpha(t)\rangle \end{equation} in terms of two unknown amplitudes $A_\alpha(t)$, which satisfy $\sum_\alpha |A_\alpha|^2=1$. The initial state is such that $A_-(-\infty)=1$ (up to a global phase factor) and $A_+(-\infty)=0$ and we are interested in $P=|A_+(\infty)|^2$. Indeed, as $t\to \pm\infty$, $\theta \approx 0$ and $|\psi_-(t)\rangle \approx -|2\rangle$ and $|\psi_+(t)\rangle \approx |1\rangle$. Therefore, at both initial and final times, the adiabatic and diabatic bases coincide. The time dependent amplitudes satisfy the following Schr\"odinger equations \begin{eqnarray} \dot{A}_+ + A_+ \langle \psi_+|\dot{\psi}_+\rangle&=&- \langle \psi_+|\dot{\psi}_- \rangle A_- e^{i \int^t dt' E_{+-}(t')} \nonumber \\ \dot{A}_- + A_- \langle \psi_-|\dot{\psi}_-\rangle&=&- \langle \psi_-|\dot{\psi}_+ \rangle A_+ e^{-i \int^t dt' E_{+-}(t')} \end{eqnarray} where $E_{+-}\equiv E_+ - E_-$. As $|\dot{\psi}_{\pm}\rangle=\mp \frac{\dot{\theta}}{2}|\psi_{\mp}\rangle$, one has $\langle \psi_\pm | \dot{\psi}_\pm \rangle =0$ and $\langle \psi_- | \dot{\psi}_+ \rangle = -\langle \psi_+ | \dot{\psi}_- \rangle =-\frac{\dot{\theta}}{2}$, so that the equations become \begin{eqnarray} \dot{A}_+ &=&- \langle \psi_+|\dot{\psi}_- \rangle A_- e^{i \int^t dt' E_{+-}(t')} \nonumber \\ \dot{A}_- &=& (\langle \psi_+|\dot{\psi}_- \rangle)^* A_+ e^{-i \int^t dt' E_{+-}(t')} \label{adiabaschro} \end{eqnarray} and are quite similar to the ones obtained in the diabatic basis, see eq. (\ref{eq:diababasis}). They also depend on two functions of time: one is the energy difference between the two basis states $E_{+-}(t)$ (instead of $E_{12}(t)$) and the other is the coupling between these states $\langle \psi_+|\dot{\psi}_- \rangle(t)=\dot{\theta}/2$ (instead of $iH_{12}(t)$). If the force is small, the motion of the electron is slow and $k=c_y p_y (2m_*)^{1/3}/(\hbar F)^{2/3}\gg 1$ is large. This is the adiabatic limit, and $k$ can be thought of as an adiabaticity parameter. It is also the semiclassical limit, as it is equivalent to $\hbar \to 0$ (in a purely classical problem, the transition probability would always be zero). In this limit, the electron stays in the state $|\psi_-(t)\rangle$, which in both limits $t\to \pm \infty$ is $\sim |2\rangle$. As a consequence, the transition probability $P=|\langle 1|\psi(\infty)\rangle|^2\approx |\langle 1|\psi_-(\infty)\rangle|^2$ is also zero.
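As a side remark, the two ingredients entering these equations have simple closed forms; a small sketch (ours), using $\theta=\mathrm{atan2}(k,t^2+d)$ so that $\dot{\theta}/2=-kt/E_+^2$ (this expression reappears in section V):
\begin{verbatim}
import numpy as np

def adiabatic_gap_and_coupling(t, d, k):
    # Adiabatic gap E_{+-}(t) = 2 E_+(t) and coupling
    # <psi_+|d psi_-/dt> = (1/2) d(theta)/dt = -k t / E_+^2.
    e_plus = np.sqrt((t**2 + d)**2 + k**2)
    return 2.0 * e_plus, -k * t / e_plus**2

# Sanity check against a numerical derivative of theta(t):
d, k, t, h = -1.0, 0.5, 0.7, 1e-6
theta = lambda s: np.arctan2(k, s**2 + d)
print(adiabatic_gap_and_coupling(t, d, k)[1],
      (theta(t + h) - theta(t - h)) / (4 * h))   # should agree
\end{verbatim}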
When $k$ is large but finite, it is possible to compute the transition probability in perturbation theory (this time the small parameter being $1/k$) as shown below. To summarize, both when $k\ll 1$ and $k\gg 1$, the transition probability vanishes. Away from these two limits, the probability will be non-zero. This already shows that the probability $P$ is a non-monotonic function of the perpendicular gap $k$, which is in stark contrast to the LZ problem of a linear avoided band crossing. In the following, we use perturbation theory to compute the transition probability first in the diabatic and then in the adiabatic basis. \section{St\"uckelberg theory in the gapless phase} We start by recalling the results we obtained previously using the St\"uckelberg theory in the gapless phase ($d<0$), see the supplemental material of \cite{Lim}. We first compute the transition probability associated with the two successive LZ events, in the limit where they can be considered to be far apart (i.e. deep in the gapless phase), using the St\"uckelberg approach \cite{Stuckelberg,revueStuck,Lim}. During a single LZ event the probability amplitude to stay in the upper/lower band is $\sqrt{1-P_Z}e^{\mp i\varphi_{St}}$ where the Zener probability is $P_Z=e^{-2\pi\delta}$. The non-adiabatic phase delay $\mp \varphi_{St}$ (where $\mp$ refers to the upper/lower band) is given in terms of the Stokes phase \cite{revueStuck} \begin{equation} \varphi_{St}=\frac{\pi}{4} + \delta (\ln \delta -1) + \textrm{Arg}\, \Gamma(1 - i \delta) \label{eq:stokes} \end{equation} where \begin{equation} \delta = \frac{k^2}{4\sqrt{|d|}} \label{eq:delta} \end{equation} is the adiabaticity parameter in the linear band crossing problem (not to be confused with $k$, which is the adiabaticity parameter in the quadratic band crossing). In the diabatic limit $\delta \to 0$, the Stokes phase is $\pi/4$, and it monotonically goes to zero in the adiabatic limit $\delta \to \infty$. If the sequence between the two tunneling events is coherent, the two avoided linear crossings realize a St\"uckelberg interferometer. The total probability amplitude to go from the lower to the upper band is the sum of the amplitudes for two distinct paths. In the first path, the electron jumps to the upper band at the first Dirac cone and then stays in the upper band at the second, such that the amplitude is $A_+=-\sqrt{P_Z}\times e^{i\varphi_{+}}\times \sqrt{1-P_Z}e^{-i\varphi_{St}}$ where $-\sqrt{P_Z}$ is the amplitude to jump at the first avoided band crossing and $\varphi_{+}=\int_{-\sqrt{|d|}}^{\sqrt{|d|}} dt E_+(t)$ is the phase dynamically acquired by the electron traveling in the upper band between the two Dirac cones, with $2\sqrt{|d|}$ the time needed to travel between the two Dirac points. In the second path, the electron stays in the lower band at the first Dirac cone and then jumps to the upper band at the second. The associated amplitude is $A_-=\sqrt{1-P_Z}e^{i\varphi_{St}}\times e^{i\varphi_{-}}\times \sqrt{P_Z}$ where $\sqrt{P_Z}$ is the amplitude to jump at the second avoided band crossing and $\varphi_{-}=\int_{-\sqrt{|d|}}^{\sqrt{|d|}} dt E_-(t)$ is the dynamically acquired phase of the electron traveling in the lower band from one Dirac cone to the other. Note that the jumping amplitudes $\mp \sqrt{P_Z}$ at the two avoided crossings are opposite to each other.
This is related to the fact that the linear LZ problem is not exactly the same for the two Dirac cones: indeed, the local low-energy hamiltonians are slightly different, just like the ones describing the two different valleys of graphene \cite{valleys}. The total probability $P =|A_+ + A_- |^2$ is therefore \begin{equation} P = 4 P_Z ( 1 - P_Z) \sin^2 (\frac{\varphi_{dy}}{2} + \varphi_{St}) \label{LZS} \end{equation} where $\varphi_{dy}=\varphi_{-}-\varphi_{+}$ is the dynamically accumulated phase between the two tunneling events \cite{Lim} \begin{equation} \varphi_{dy} = \int_{-\sqrt{|d|}}^{\sqrt{|d|}} E_{+-} (t) d t =4|d|^{3/2}I(\frac{k}{|d|}) \end{equation} written in terms of the integral $I(x)\equiv \int_0^1 du \sqrt{(u^2-1)^2+x^2}$. This probability as a function of $d$ and $k$ is plotted in Fig. \ref{fig:stueckelberg3dplot}(a). \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{stuckcont.jpg} \includegraphics[width=7cm]{stuckincohcont.jpg} \end{center} \caption{(Color online) Contour plot of the transition probability $P$ computed with the St\"uckelberg approach as a function of the merging gap $d$ and the perpendicular gap $k$. The white region corresponds to $d\geq 0$, where the St\"uckelberg approach is not defined. (a) In the coherent case, interferences as a function of both $d$ and $k$ are clearly visible, as well as the vanishing of $P$ in the $k\to 0$ and $k\to \infty$ limits. The probability varies between 0 and 1 as given by the color code (color steps correspond to 0.1). (b) In the incoherent case, the oscillations are washed out and the maximum probability is $1/2$ instead of $1$ in the coherent case (note the change of color scale for $P$).} \label{fig:stueckelberg3dplot} \end{figure} If the two tunneling events are incoherent, the interferences are washed out, $\sin^2 \to 1/2$, and the probability becomes $P_\textrm{incoh}=2P_Z(1-P_Z)$ (see Fig. \ref{fig:stueckelberg3dplot}(b)). In the limit $\delta \gg 1$, $P_Z=\exp(-2\pi\delta)\to 0$, $1-P_Z\to 1$, $\varphi_{St} \approx 0$ and $\varphi_{dy} \approx 4k \sqrt{|d|}$. Therefore \begin{equation} P\approx 4e^{-\frac{\pi k^2}{2\sqrt{|d|}}}\sin^2\left(2k\sqrt{|d|}\right) \end{equation} The probability goes to zero exponentially because of the large gap, as is usual for tunneling processes. In the opposite limit $\delta \ll 1$, $P_Z\to 1$, $1-P_Z\approx 2\pi \delta \to 0$, $\varphi_{St} \approx \pi/4$ and $\varphi_{dy} \approx 8|d|^{3/2}/3$. Therefore \begin{equation} P\approx 2\pi\frac{k^2}{\sqrt{|d|}} \sin^2\left(\frac{4}{3}|d|^{3/2}+\frac{\pi}{4} \right) \label{eq:stuckdeltasmall} \end{equation} The probability also goes to zero, but as $k^2$, because of the special symmetry at $k=0$ (conservation of the pseudo-spin $\sigma_z$). Quantitatively, the St\"uckelberg approach is valid if the Zener tunneling time $\sim \textrm{max} (\delta,\sqrt{\delta})/k$ is shorter than the time $2\sqrt{|d|}$ it takes for an electron to travel between the two Dirac points \cite{revueStuck}. This means that $-d\gg 1$ and $-d \gg k$. Therefore, one needs to be deep in the gapless phase (far from the merging) and with a perpendicular gap that is not too large. \section{Perturbation theory in the diabatic/sudden limit} In the diabatic basis, we perform perturbation theory in the perpendicular gap $k\ll 1$.
Assuming that $A_2(t)\approx 1$ for all $t$ gives the probability $P=|A_1(\infty)|^2$ to tunnel from the lower to the upper band in terms of the amplitude \begin{eqnarray} A_1(\infty)&=&-k\int_{-\infty}^{\infty} dt A_2(t) \exp[i\int_0^t dt' E_{12}(t')]\nonumber \\ &\approx& -k\int_{-\infty}^{\infty} dt \exp[i(2 t d + 2 t^3/3)] \label{eq:airyamp} \end{eqnarray} computed at first order in $k$. The probability \begin{equation} P\approx 4^{2/3}\pi^2 k^2 [\textrm{Ai}(4^{1/3}d)]^2 \qquad (k\ll 1) \label{eq:airy} \end{equation} is given in terms of the Airy function, which has the following definition (when its argument $x$ is real): \begin{equation} \textrm{Ai}(x)\equiv \frac{1}{2\pi}\int_{-\infty}^\infty dy e^{i(\frac{1}{3}y^3+xy)} \end{equation} For later use, we also perform a saddle point analysis of the integral in eq. (\ref{eq:airyamp}) in three different limits to obtain simpler analytical results. If $d\neq 0$, there are two saddle points $t_0$ in the complex time plane. If $d>0$, $t_0=\pm i\sqrt{d}$ and only $t_0=i\sqrt{d}$ contributes, as $\textrm{Im } t_0\geq 0$ is needed. If $d<0$, $t_0=\pm \sqrt{|d|}$ and the two saddle points contribute, giving rise to interferences (this is really a stationary phase approximation). If $d=0$, there is a single saddle point at $t_0=0$. The results of the saddle point approximation are \begin{equation} P\approx \left(\frac{2}{3}\right)^{4/3} \left(\frac{\pi}{\Gamma(2/3)}\right)^{2} k^2 \textrm{ if } |d|\ll 1 \end{equation} \begin{equation} P\approx \frac{\pi k^2}{2\sqrt{d}}e^{-\frac{8d^{3/2}}{3}}\textrm{ if } d\gg 1 \end{equation} \begin{equation} P\approx \frac{2\pi k^2}{\sqrt{|d|}}\sin^2 \left(\frac{4}{3}|d|^{3/2}+\frac{\pi}{4}\right) \textrm{ if } -d \gg 1 \end{equation} The last case recovers the result of the previous section, see eq. (\ref{eq:stuckdeltasmall}), featuring St\"uckelberg oscillations. These three limits are well-known expansions of the Airy function. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{diacont_adiacont.jpg} \end{center} \caption{(Color online) Contour plot of the transition probability $P$ as a function of the merging gap $d$ and the perpendicular gap $k$. For small $k$, it is computed with the diabatic perturbation theory, eq. (\ref{eq:airy}), while for large $k$ it is computed using adiabatic perturbation theory, eq. (\ref{eq:adiabaproba}), see section V. Interferences as a function of $d$ are clearly visible in the gapless phase ($d<0$). In the gapped phase ($d>0$), the probability vanishes exponentially. White regions correspond to the probability exceeding 1; indeed, close to $k\sim 1$, the two perturbative approaches break down. The color code is the same as in Fig. \ref{fig:stueckelberg3dplot}(a).} \label{fig:diabapertu3dplot} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{AiryFunctionSquared} \end{center} \caption{Transition probability $P$ as a function of $d$ for fixed $k=0.1$, as computed in diabatic perturbation theory, see eq. (\ref{eq:airy}). For negative argument (gapless phase, $d<0$) it shows St\"uckelberg oscillations, and it decays exponentially for positive argument (gapped phase, $d>0$). There is an inflexion point right at the merging (where the argument vanishes, $d=0$). } \label{fig:Airy} \end{figure} The transition probability $P$ of eq. (\ref{eq:airy}) is plotted in Figs. \ref{fig:diabapertu3dplot} and \ref{fig:Airy}. It goes to zero as $k \to 0$, as expected from the $\sigma_z$ conservation, and increases quadratically with $k$.
As a function of $d$, $P$ shows oscillations when $d<0$, which we interpret as St\"uckelberg interferences, and decreases exponentially when there is a true gap $d>0$ in the diabatic spectrum. \section{Perturbation theory in the adiabatic/semiclassical limit} In order to consider the opposite, adiabatic limit ($k\gg 1$), we use perturbation theory in the adiabatic basis \cite{Dykhne}. From the adiabatic eigenenergies and eigenvectors, we find that $E_{+-}=2E_+$, $\langle \psi_+|\dot{\psi}_+\rangle=0$ and $\langle \psi_+|\dot{\psi}_-\rangle=\dot{\theta}/2=-k t/E_+^2$. The Schr\"odinger equations to be solved are therefore \begin{eqnarray} \dot{A}_+ &=&-\frac{\dot{\theta}}{2}A_- e^{2i\int^t dt' E_{+}(t')} \\ \dot{A}_- &=& \frac{\dot{\theta}}{2} A_+ e^{-2i \int^t dt' E_{+}(t')} \end{eqnarray} with the initial conditions $A_-(-\infty)=1$ (up to a phase factor) and $A_+(-\infty)=0$. If we now assume that the coupling $\dot{\theta}$ is small, we find that $A_-(t)\approx 1$, $\dot{A}_+ \approx -\frac{\dot{\theta}}{2} e^{2i\int^t dt' E_{+}(t')}$ and therefore: \begin{equation} A_+(\infty)\approx -\int_{-\infty}^{+\infty} dt \frac{\dot{\theta}}{2} e^{2i\int^t dt' E_{+}(t')}\equiv-\int_{-\infty}^{+\infty} dt \frac{\dot{\theta}}{2} e^{i \phi(t)} \end{equation} where $\phi(t)\equiv 2\int_{t_l}^t dt' E_{+}(t')$ is the adiabatic phase and $t_l$ is the lower bound of the phase integral, which is undecided for the moment except that it has to be a real number. The amplitude $A_+(\infty)$ can be computed by integration in the complex plane. Firstly, there are four poles corresponding to $E_+(t)=0$, i.e. to $t^2=-d\pm ik$. Note that these are also saddle points, as $\dot{\phi}(t)=2E_+$. In the following we refer to them simply as poles, even when they play the role of saddle points. If we write $-d+ik=\sqrt{k^2+d^2}e^{i\beta}$, which defines the angle $\beta$, the four poles are $t_1=(k^2+d^2)^{1/4} e^{i\beta/2}$, $t_2=t_1^*$, $t_3=-t_1$ and $t_4=-t_1^*$. In addition to the four poles, there are also branch cuts coming from the square root function in the exponential. The corresponding branch points are at the same positions as the poles. Therefore, there is a branch cut linking $t_1$ and $t_4$ and another one linking $t_2$ to $t_3$, for example. When constructing a closed contour, one has to keep in mind this branch cut structure. Of the four poles, only $t_1$ and $t_4$ have a positive imaginary part and are therefore relevant, as we want to close the integration contour in the upper half-plane (see Fig. \ref{fig:poles}). Since $\textrm{Im } t_1=\textrm{Im } t_4$, both poles contribute equally to the amplitude. We choose the lower bound $t_l=0$ such that $\phi(t)= 2\int_0^t dt' E_{+}(t')$. It is important to make a single choice of $t_l$ for both poles, as they will interfere. As $E_+(-t)=E_+(t)$ and $t_4=-t_1^*$, we have $\phi(t_4)=-\phi(t_1)^*$. One possibility is therefore to construct a contour that encloses both poles and the branch cuts. \begin{figure}[ht] \begin{center} \psfrag{t1}{$t_1$} \psfrag{t4}{$t_4$} \includegraphics[width=5cm]{poles3.pdf} \end{center} \caption{(Color online) Poles $t_1$ and $t_4=-t_1^*$ in the complex time plane. The poles are represented at the merging transition ($d=0$, such that $\textrm{arg } t_1=\pi/4=-\textrm{arg } t_4$). The short green arrows indicate their motion when $d$ increases (gapped phase $d>0$) and the long red arrows their motion when $d$ decreases (gapless phase $d<0$).
When the poles have a finite real part, there are oscillations in the probability, whereas a finite imaginary part implies an exponential decay of the probability. Note that there is a branch cut relating the two poles.} \label{fig:poles} \end{figure} Another trick to perform this integral is to make a change of variable from the time $t$ to the phase $\phi$, see for example Ref. \cite{BerryMount}. The resulting integral \begin{equation} A_+(\infty)\approx -\frac{1}{2} \int_{-\infty}^{+\infty} d\phi \, \frac{d \theta}{d \phi} e^{i \phi} \end{equation} is over a function $(d\theta / d\phi)e^{i\phi}$ that has no branch cut anymore and only four isolated poles at $\phi_1\equiv \phi(t_1)$, $\phi(t_2)=\phi_1^*$, $\phi(t_3)=-\phi_1$ and $\phi(t_4)=-\phi_1^*$. The residue theorem can now be used with a simple contour closed in the upper half of the complex $\phi$ plane. As the residues are $-e^{i\phi_1}/3i$ and $e^{-i\phi_1^*}/3i$, we obtain \begin{equation} A_+(\infty)\approx \frac{\pi}{3}(e^{i\phi_1}-e^{-i\phi_1^*})=\frac{2i\pi}{3} \sin (\textrm{Re }\phi_1) e^{-\textrm{Im } \phi_1} \label{eq:amplitude} \end{equation} This is valid whatever the sign of $d$. Therefore \begin{equation} P\approx \frac{4\pi^2}{9} \sin^2 (\textrm{Re }\phi_1) e^{-2\textrm{Im } \phi_1} \label{eq:probawrong} \end{equation} where \begin{equation} \phi_1\equiv \phi(t_1)=2k^{3/2}\int_0^{u_1} du \sqrt{1+(u^2+D)^2} \label{eq:phione} \end{equation} with $D\equiv d/k$ and $u_1\equiv t_1/\sqrt{k}=(\sqrt{\sqrt{1+D^2}-D}+i\sqrt{\sqrt{1+D^2}+D})/\sqrt{2}$. This result is the first-order perturbative result in the adiabatic basis. The exponential behavior is correct but not the prefactor, even in the adiabatic limit, as argued by Landau long ago \cite{LandauBook, seealsodykhne}. This is known in the literature as the ``$\pi/3$ problem'' \cite{BerryMount,Berry1990}. Actually, in the case of a single linear band crossing ($E_1=-E_2=\alpha t/2$ and $H_{12}=$ constant), which is the standard LZ problem, adiabatic perturbation theory gives $P=(\pi/3)^2 \exp(-2\pi |H_{12}|^2/\alpha)$ \cite{DavisPechukas}, whereas the exact result found by Zener is $P_Z=\exp(-2\pi |H_{12}|^2/\alpha)$ \cite{Zener}. The reason for this discrepancy is well explained in \cite{Berry}: it comes from the fact that each order of the adiabatic perturbation expansion for $A_+$ contains a term of the form $\# \exp(-\pi |H_{12}|^2/\alpha)$. Obtaining the exact factor in front of the exponential requires re-summing the whole series by keeping only the dominant -- in the adiabatic limit -- exponential behavior in each order. This series has a first term which is $\pi/3$ and a sum which is $1$ \cite{Dykhne,DavisPechukas,Berry,Berry1990}. In the adiabatic limit, the correct pre-exponential factor in the usual LZ problem is such that \begin{equation} P\approx e^{-\textrm{Im} \int_{t_0^*}^{t_0}dt E_{+-}(t)}=e^{-2 \textrm{Im} \int_{t_l}^{t_0}dt E_{+-}(t)} \end{equation} which amounts to replacing the factor $\pi/3$ found in first-order adiabatic perturbation theory by $1$ in the amplitude. This can also be done in the two-pole case, and we find that $A_+(\infty)\approx \frac{\pi}{3}(e^{i\phi(t_1)}-e^{-i\phi(t_1)^*})$ $\to$ $e^{i\phi(t_1)}-e^{-i\phi(t_1)^*}$, so that the probability becomes, instead of eq. (\ref{eq:probawrong}), \begin{equation} P\approx 4 \sin^2 (\textrm{Re }\phi_1) e^{-2\textrm{Im } \phi_1} \qquad (k\gg 1) \label{eq:adiabaproba} \end{equation} where $\phi_1(k,d)$ is given in eq. (\ref{eq:phione}).
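In practice, $\phi_1$ can be evaluated by quadrature along the straight segment from $0$ to $u_1$ in the complex $u$ plane, along which the principal branch of the square root remains continuous. A minimal sketch (ours, assuming SciPy; it also implements the diabatic result eq. (\ref{eq:airy}) for later comparison):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def phi_one(k, d):
    # phi_1 = 2 k^{3/2} int_0^{u_1} du sqrt(1 + (u^2 + D)^2), D = d/k,
    # along the straight line u = s u_1, 0 <= s <= 1.
    D = d / k
    u1 = (np.sqrt(np.sqrt(1 + D**2) - D)
          + 1j * np.sqrt(np.sqrt(1 + D**2) + D)) / np.sqrt(2)
    f = lambda s: np.sqrt(1 + ((s * u1)**2 + D)**2 + 0j) * u1
    re = quad(lambda s: f(s).real, 0, 1)[0]
    im = quad(lambda s: f(s).imag, 0, 1)[0]
    return 2 * k**1.5 * (re + 1j * im)

def p_adiabatic(k, d):
    # Adiabatic result: P ~ 4 sin^2(Re phi_1) exp(-2 Im phi_1), k >> 1.
    p1 = phi_one(k, d)
    return 4 * np.sin(p1.real)**2 * np.exp(-2 * p1.imag)

def p_diabatic(k, d):
    # Diabatic result: P ~ 4^{2/3} pi^2 k^2 Ai(4^{1/3} d)^2, k << 1.
    return 4**(2 / 3) * np.pi**2 * k**2 * airy(4**(1 / 3) * d)[0]**2
\end{verbatim}
Evaluated over a grid in $(d,k)$, these two functions correspond to the two perturbative regimes compared in section VII.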
As we will see later, this result agrees very well with the exact numerical solution. It also agrees with the St\"uckelberg theory in the adiabatic limit when $P_Z\ll 1$, such that $P\approx 4P_Z \sin^2(\varphi_{dy}/2)$ where $P_Z=\exp(-\pi k^2/(2\sqrt{-d}))$ deep in the gapless phase (indeed, $e^{-2 \textrm{Im } \phi_1}\approx e^{-\pi k^2/(2\sqrt{|d|})}$ there). We therefore take this result as the correct analytical expression in the adiabatic limit. We now come back to the phase $\phi_1$ given in eq. (\ref{eq:phione}). As $u_1(D)=iu_1(-D)^*$, one has $\phi_1(k,d)=i\phi_1(k,-d)^*$ and therefore $\textrm{Re }\phi_1(k,d)=\textrm{Im }\phi_1(k,-d)$, which allows one to express $P$ in terms of $\textrm{Im}\,\phi_1$ only. The integral $J(D)\equiv \textrm{Im }\int_0^{u_1} du \sqrt{1+(u^2+D)^2}$ giving $\textrm{Im } \phi_1(k,d)=2k^{3/2}J(d/k)$ can be computed numerically for any $D$ and analytically in three limits. When $D\sim 0$, $J(D)\approx \frac{\Gamma(1/4)^2}{12\sqrt{\pi}}+\frac{\pi^{3/2}}{\Gamma(1/4)^2}D$. When $D\to \infty$, $J(D)\approx 2D^{3/2}/3+\ln D/(4\sqrt{D})$. In practice, a good approximate interpolation between $0$ and $\infty$ is $J(D)\approx \frac{\Gamma(1/4)^2}{12\sqrt{\pi}}+\frac{2}{3}D^{3/2}$. When $D\to - \infty$, $J(D)\approx \pi/(8\sqrt{|D|})$. This function $J(D)$ is plotted in Fig. \ref{fig:JD}. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{ImJofD} \end{center} \caption{(Color online) Integral $J\equiv \textrm{Im }\int_0^{u_1} du \sqrt{1+(u^2+D)^2}$ -- giving $\textrm{Im }\phi_1=2k^{3/2}J(d/k)$ -- plotted as a function of $D$. The numerical calculation is in continuous red and is compared to different analytical results: $\pi/(8\sqrt{|D|})$ in dashed green, $\frac{\Gamma(1/4)^2}{12\sqrt{\pi}}+\frac{\pi^{3/2}}{\Gamma(1/4)^2}D$ in dotted magenta, and $2D^{3/2}/3+\ln D/(4\sqrt{D})$ in dot-dashed blue. The interpolation formula (for positive $D$) $\frac{\Gamma(1/4)^2}{12\sqrt{\pi}}+\frac{2}{3}D^{3/2}$ is in thin black.} \label{fig:JD} \end{figure} From the behavior of $J(D)$, we can obtain approximate analytical results for the probability $P$ in three limits: \begin{equation} P\approx 4e^{-\frac{\Gamma(1/4)^2}{3\sqrt{\pi}} k^{3/2}}\sin^2(\frac{\Gamma(1/4)^2}{6\sqrt{\pi}} k^{3/2}) \textrm{ if } |d| \ll k \end{equation} \begin{equation} P\approx 4e^{-\frac{8 d^{3/2}}{3}}\sin^2(\frac{\pi k^2}{4\sqrt{d}}) \textrm{ if } d \gg k>0 \end{equation} \begin{equation} P\approx 4e^{-\frac{\pi k^2}{2\sqrt{|d|}}}\sin^2(\frac{4 |d|^{3/2}}{3}) \textrm{ if } -d \gg k>0 \end{equation} \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{AdiabaProba} \end{center} \caption{(Color online) Transition probability $P$ as a function of $d$ (at fixed $k=2.5$) computed in adiabatic perturbation theory (continuous red line), see eq. (\ref{eq:adiabaproba}). Also shown, in dashed black, is the incoherent probability $P_\textrm{incoh}$, see eq. (\ref{eq:incohadiabaproba}).} \label{fig:AdiaProba} \end{figure} If the contributions of the two poles add incoherently, $\sin^2 \to 1/2$, and the oscillations are washed out: \begin{equation} P_\textrm{incoh}\approx 2 e^{-2\textrm{Im} \phi_1} \label{eq:incohadiabaproba} \end{equation} as in the St\"uckelberg theory when $P_Z\ll 1$. It is interesting to discuss the motion of the two poles $t_1=(k^2+d^2)^{1/4} e^{i\beta/2}$ and $t_4=-t_1^*$, where $-d+ik=\sqrt{k^2+d^2}e^{i\beta}$, in the complex time plane as $k$ and $d$ vary \cite{tphi}.
These two poles correspond to the band crossings at complex times and always exist (whatever the sign of $d$, and even at the merging). When $d=0$, $\beta/2=\pi/4$ and $\textrm{Re } t_1 = \textrm{Im } t_1$. When $d>0$, $\pi/4<\beta/2<\pi/2$ and $\textrm{Re } t_1< \textrm{Im } t_1$: the poles are close to the imaginary axis, the corresponding exponentials are essentially decaying, and so is the probability. In the limit $d\to +\infty$, $\beta/2\to \pi/2$ and the two poles are on the imaginary axis. Remember that, in the diabatic limit $k\to 0$, we found a saddle point at $t_0=i\sqrt{d}$, i.e. $\beta/2 \sim \pi/2$ and $(k^2+d^2)^{1/4}\sim \sqrt{d}$. When $d<0$, $0<\beta/2<\pi/4$ and $\textrm{Re } t_1>\textrm{Im } t_1$: the poles are close to the real axis, the corresponding exponentials are essentially oscillating, and the interference of the two gives oscillations in the probability. In the limit $d\to -\infty$, $\beta/2\to 0$ and the two poles are on the real axis. Remember that, in the diabatic limit $k\to 0$, we found two stationary points at $t_0=\pm \sqrt{-d}$, i.e. $\beta/2 \sim 0$ and $(k^2+d^2)^{1/4}\sim \sqrt{-d}$. The approximate St\"uckelberg theory also falls within this general framework. It corresponds to a situation where $-d\gg 1,k$, such that $t_1\approx \sqrt{|d|}$ and $t_4 \approx -\sqrt{|d|}$ (there, we identified $t_1-t_4\approx 2\sqrt{|d|}$ as the time needed to travel between the two Dirac cones). We speculate that in the general case of arbitrary $k$ and $d$, there are always two separated complex poles with the same positive imaginary part. The motion of the poles in the complex $t$ plane as $d$ changes at fixed $k\neq 0$ is illustrated in Fig. \ref{fig:poles}. \section{Modified St\"uckelberg formula} In the preceding section, adiabatic perturbation theory helped us uncover a general two-pole structure -- either in the complex $t$ or the complex $\phi$ plane -- which leads to a total probability of the St\"uckelberg form $P=4P_S(1-P_S)\sin^2(...)$, where $P_S$ is the probability for a single avoided crossing. This should be valid for all $k$ and $d$, and not only when the spectrum is gapless. A reasonable guess (see also \cite{Suominen}) is to combine the adiabatic perturbation theory, giving the exponential weight of the two poles and their interferences in the adiabatic limit, with the St\"uckelberg approach, giving the $P_S(1-P_S)$ structure. From eqs. (\ref{LZS}) and (\ref{eq:adiabaproba}), we obtain: \begin{equation} P\approx 4 e^{-2\textrm{Im } \phi_1}(1-e^{-2\textrm{Im } \phi_1}) \sin^2 (\textrm{Re }\phi_1+\varphi_{na}) \label{eq:modstuckcont} \end{equation} There, $e^{i \phi_1}$ is the amplitude to tunnel for a single pole, so that $e^{-2\textrm{Im } \phi_1}$ plays the role of the Zener probability $P_Z$ for a single Dirac cone, and $\textrm{Re }\phi_1$ that of $\varphi_{dy}/2$. The quantity $\varphi_{na}$ is the non-adiabatic phase acquired by a particle when it does not tunnel at a single pole -- the associated amplitude being $\sqrt{1-e^{-2\textrm{Im } \phi_1}}e^{i\varphi_{na}}$. We only know its expression in the St\"uckelberg limit ($d\ll -1, -k$), where it is given by the Stokes phase $\varphi_{St}$, see eq. (\ref{eq:stokes}) with $\delta$ as in eq. (\ref{eq:delta}). Here, we assume that $\varphi_{na}\approx \varphi_{St}$ for all $k$ and $d$, which is a reasonable approximation except when $k<1$ and $d\geq 0$. Equation (\ref{eq:modstuckcont}) should be exact both for small $k$ and negative $d$, where it recovers the St\"uckelberg result eq.
(\ref{LZS}), and for large $k$, where it recovers the result of adiabatic perturbation theory eq. (\ref{eq:adiabaproba}) for all $d$. By continuity, it should also be reasonable in the intermediate region $k\sim 1$, see Fig. \ref{fig:smallk}(b). It allows one to have an approximate analytical formula that can describe the crossover from small to large $k$ at fixed $d$, see Fig. \ref{fig:negatived}. This modified St\"uckelberg probability is plotted in Fig. \ref{fig:modstuckcont}(a). As can be seen, this formula is not applicable for positive $d$ and small $k$, as the relevant non-adiabatic phase is no longer simply given by the Stokes phase. In the incoherent case, the probability becomes \begin{equation} P_\textrm{incoh}\approx 2 e^{-2\textrm{Im } \phi_1}(1-e^{-2\textrm{Im } \phi_1}) \label{eq:incohmodstuck} \end{equation} and is plotted in Fig. \ref{fig:modstuckcont}(b). As this incoherent probability does not depend on the partly unknown non-adiabatic phase, it should be reasonable in the whole $(d,k)$ plane. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{modstuckcont_complete.jpg} \includegraphics[width=7cm]{incohmodstuckcont.jpg} \end{center} \caption{(Color online) Contour plot of the modified St\"uckelberg transition probability $P$ as a function of the merging gap $d$ and the perpendicular gap $k$. (a) Coherent case (see eq. (\ref{eq:modstuckcont})): the probability is between 0 and 1 (the color code is the same as in Fig. \ref{fig:stueckelberg3dplot}(a)). Note that the modified St\"uckelberg formula does not work in the ($d\geq 0,k<1$) region, as the non-adiabatic phase is not properly given by the Stokes phase. (b) Incoherent case (see eq. (\ref{eq:incohmodstuck})): the probability is between 0 and 0.5 (the color code is the same as in Fig. \ref{fig:stueckelberg3dplot}(b)).} \label{fig:modstuckcont} \end{figure} \section{Numerical solution and comparison between different approaches} The coupled first-order differential equations of section II, see eq.~(\ref{eq:diababasis}) and eq.~(\ref{adiabaschro}), are solved numerically. We checked that solving these equations either in the diabatic or in the adiabatic formulation gives the same answer (up to numerical errors of order $10^{-3}$ in the probability). We can therefore consider these numerical solutions as essentially exact and use them to check the approximate analytical solutions. The probability obtained numerically for any $d$ and $k$ is shown in Fig. \ref{fig:num3dplot}. When compared to diabatic perturbation theory, the agreement is excellent for $k\ll 1$. When compared with the St\"uckelberg theory, the agreement is very good when $d$ is very negative and $k$ not too large compared to $-d$ ($-d\gg 1$ and $-d \gg k$). It also compares very well with adiabatic perturbation theory (provided $\pi/3 \to 1$) when $k$ is large ($k\gg 1$). \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{numcont2.jpg} \end{center} \caption{(Color online) Contour plot of the numerically computed transition probability $P$ as a function of the merging gap $d$ and the perpendicular gap $k$. Oscillations are clearly visible in the gapless phase, whereas the probability is vanishingly small in the gapped phase. The vanishing of the probability in both the diabatic $k\ll 1$ and adiabatic $k\gg 1$ limits is also visible. The color code is the same as in Fig.
\ref{fig:stueckelberg3dplot}(a).} \label{fig:num3dplot} \end{figure} To compare the different approaches, we first concentrate on the $d=0$ case, exactly at the merging transition. The numerical solution, along with the diabatic and adiabatic perturbative results, is shown in Fig. \ref{fig:atmerging}. Note the excellent agreement in both the $k\to 0$ limit (diabatic perturbation theory) and the $k\to \infty$ limit (adiabatic perturbation theory). At large $k$, a surprising oscillation in the probability is seen both in the numerical solution and in the adiabatic perturbative result. It is surprising because the spectrum (whether diabatic $E=\pm t^2$ or adiabatic $E=\pm \sqrt{t^4+k^2}$) features at most a single real-time crossing. However, in complex time, the adiabatic bands cross twice. This leads to an interference between the two complex poles $t_1$ and $t_4$ and results in oscillations in the probability $P$. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{ComparisonProbaAtMerging} \includegraphics[width=7cm]{ComparisonProbaAtMergingZoom} \end{center} \caption{(Color online) Transition probability $P$ at the merging $d=0$ as a function of $k$. The numerically exact result is in continuous blue. The diabatic perturbative result eq. (\ref{eq:airy}) is in dashed red and the adiabatic perturbative result eq. (\ref{eq:adiabaproba}) is in dotted black. (a) $k$ between 0 and 2. (b) $k$ between 1.5 and 3 (note the change of vertical scale by a factor $10^3$): there is a tiny oscillation due to the interference between the two poles.} \label{fig:atmerging} \end{figure} Next we consider the gapless region $d=-1$ and compare the different analytical approaches to the numerics, see Fig. \ref{fig:negatived}. Note the excellent job done by the modified St\"uckelberg formula, which is able to describe the whole crossover from small to large $k$. The only discrepancy with the numerical result is close to the maximum probability near $k=0.5$. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{negatived} \end{center} \caption{(Color online) Transition probability $P$ at fixed $d=-1$ as a function of $k$. The numerically exact result is in continuous blue, the diabatic perturbative result eq. (\ref{eq:airy}) is in dashed red, the St\"uckelberg result eq. (\ref{LZS}) is in dotted black, the adiabatic perturbative result eq. (\ref{eq:adiabaproba}) is in dot-dashed green and the modified St\"uckelberg formula eq. (\ref{eq:modstuckcont}) is in long-dashed magenta.} \label{fig:negatived} \end{figure} Then we consider small $k$ and compare the numerics, the St\"uckelberg approach and diabatic perturbation theory as a function of $d$, see Fig. \ref{fig:smallk}(a). Diabatic perturbation theory agrees very well with the numerical result except for a small difference close to $d=-1$, where the probability is not small and the approximation is therefore not so good anymore. St\"uckelberg theory works very well deep in the gapless phase and its validity breaks down as one approaches the merging transition. The opposite limit of large $k$ shows that the adiabatic perturbation theory is very good (see Fig. \ref{fig:smallk}(c)). The St\"uckelberg theory works qualitatively in the gapless regime but not as well as for small $k$. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{numdiapertustuck.pdf} \includegraphics[width=7cm]{intermediatek.pdf} \includegraphics[width=7cm]{largek.pdf} \end{center} \caption{(Color online) Transition probability $P$ at fixed $k$ as a function of $d$.
The numerically exact result is in continuous blue, the diabatic perturbative result eq. (\ref{eq:airy}) in dashed red, the St\"uckelberg result eq. (\ref{LZS}) in dotted black, the adiabatic perturbative result eq. (\ref{eq:adiabaproba}) in dot-dashed green and the modified St\"uckelberg formula eq. (\ref{eq:modstuckcont}) in long-dashed magenta. (a) $k=0.1$ (the modified St\"uckelberg formula coincides with the St\"uckelberg probability when $d<0$ -- it is not shown for clarity -- and is not applicable when $d\geq 0$); (b) $k=1$; (c) $k=2.5$. Note the different probability scales in the three graphs.} \label{fig:smallk} \end{figure} There are also regimes which are difficult to access analytically. This is the case for intermediate $k\sim 1$. See Fig. \ref{fig:smallk}(b) for the $k=1$ curve as a function of $d$. St\"uckelberg theory works fine but only covers the large $-d$ regime, whereas both perturbative calculations (not shown for clarity) are unreliable for intermediate $k$. The modified St\"uckelberg formula is also qualitatively correct when $k\sim 1$. \section{Comparison to the experiment: absence of interferences} Recently, an experiment with ultracold fermionic atoms in an optical lattice was able to study the merging transition and the inter-band tunneling of atoms performing Bloch oscillations \cite{Tarruell}. There, atoms moving in an artificial graphene-like crystal mimic Bloch electrons in an ordinary solid-state crystal. In Ref. \cite{Lim}, in order to understand the result of this experiment, we computed the inter-band transition probability for a single atom using the approximate St\"uckelberg theory as a function of $k$ and $d$. Then we translated these two parameters into the experimentally tunable laser intensities $V_{\bar{X}}$ and $V_X$ defined in \cite{Tarruell}. Qualitatively, $V_{\bar{X}}$ controls the merging transition and is roughly equivalent to $-d$ (called $-\Delta_*$ in \cite{Lim}), whereas $V_X$ controls the transverse gap and is equivalent to $k$ (called $c_x$ in \cite{Lim}). The last step was to average the probability over the atomic distribution of a two-dimensional degenerate Fermi gas. The agreement between theory and experiment was found to be very good: compare Fig. 4(b) in \cite{Tarruell} with Fig. 4(b) in \cite{Lim}. However, as the St\"uckelberg theory is only valid in the gapless phase ($d<0$) and not too close to the merging transition (see Fig. \ref{fig:stueckelberg3dplot}(b)), we could only compare theory and experiment in the gapless region. Near the transition and in the gapped region, the experimental signal was vanishingly small and could not be compared with any theoretical prediction. Within the present framework, it is now possible to understand the inter-band probability very close to the merging. When looking at Fig. 4(b) of Ref. \cite{Tarruell} in detail, one sees that the red line of maximum transition probability -- which lies essentially in the gapless region -- actually crosses the merging line and extends slightly into the gapped region at very small $V_X$. This line qualitatively corresponds to $P_Z =1/2$, such that $P=1/2$. Such a behavior is found in our calculations as well: see Fig. \ref{fig:modstuckcont}(b), where the orange region of maximum probability (between 0.4 and 0.5) lies essentially in the gapless ($d<0$) region but also extends slightly into the gapped region ($d>0$), reaching $d\sim 0.4$ when $k\to 0$.
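For completeness, the modified St\"uckelberg probabilities, eqs. (\ref{eq:modstuckcont}) and (\ref{eq:incohmodstuck}), are equally easy to tabulate. The sketch below (ours) reuses the function phi_one from the sketch of section V together with the Stokes phase of eqs. (\ref{eq:stokes}) and (\ref{eq:delta}); as discussed above, the coherent formula should not be trusted for $d\geq 0$ and $k<1$:
\begin{verbatim}
import numpy as np
from scipy.special import loggamma

def stokes_phase(delta):
    # pi/4 + delta (ln delta - 1) + Arg Gamma(1 - i delta), for delta > 0
    return (np.pi / 4 + delta * (np.log(delta) - 1)
            + np.imag(loggamma(1 - 1j * delta)))

def p_modified_stuckelberg(k, d, coherent=True):
    # Modified Stuckelberg formulas; phi_one as defined previously.
    p1 = phi_one(k, d)
    pz = np.exp(-2 * p1.imag)             # single-pole tunneling weight
    if not coherent:
        return 2 * pz * (1 - pz)          # incoherent case
    delta = k**2 / (4 * np.sqrt(abs(d)))  # Stokes-phase parameter (d < 0)
    return 4 * pz * (1 - pz) * np.sin(p1.real + stokes_phase(delta))**2
\end{verbatim}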
We now consider the merging point ($d=0$) and study both the inter-band probability $P_2$ for the motion in the direction where two Dirac cones are hit ($x$ direction) and the probability $P_1$ for the motion in the perpendicular direction, in which a single cone is hit ($y$ direction) \cite{xy}. In the present paper, we concentrate on $P_2$ -- which is called $P(d,k)$ -- as $P_1$ is simply given by the usual LZ formula and was studied in detail in Ref. \cite{Lim}. The merging point is special in the sense that the spectrum is gapless and features a single contact point, which disperses linearly in the $p_y$ direction and quadratically in the $p_x$ direction \cite{Dietl}. The LZ formula gives $P_1=\exp[-\pi (p_x^2/(2m_*))^2/(\hbar c_y F)]$. In the coherent case, the numerical solution of section VII gives $P_2=P(d=0,k)$ where $k= c_y p_y (2m_*)^{1/3}/(\hbar F)^{2/3}$ as plotted in Fig. \ref{fig:atmerging} (see the continuous blue line) and, in the incoherent case, the modified St\"uckelberg formula eq. (\ref{eq:incohmodstuck}) gives $P_2\approx 2\exp(-\frac{\Gamma(1/4)^2}{3\sqrt{\pi}}k^{3/2})[1-\exp(-\frac{\Gamma(1/4)^2}{3\sqrt{\pi}}k^{3/2})]$. The probability $P_1$ depends on $p_x$ and varies between 0 and 1, whereas $P_2$ depends on $p_y$ (i.e. on $k$) and varies between 0 and $\sim 0.55$ (coherent case, see Fig. \ref{fig:atmerging}(a)) or $0.5$ (incoherent case). The ratio $P_2/P_1$ can therefore take any positive value depending on the relevant $p_x$ and $p_y$ values. The latter depend on the size of the atomic cloud and on the way the averaging over the atomic cloud is done. For example, for a single atom $p_x=p_y=0$, giving $P_1=1$ and $P_2=0$ so that $P_2/P_1=0$. In particular, there is no reason for this ratio to take the simple value 0.5 \cite{Tarruell}. We have performed averaging over various atomic cloud sizes comparable to that of the ETH Z\"urich experiment and find that $\langle P_2 \rangle / \langle P_1 \rangle$ can vary between 0 and $\sim 0.7$. One very striking experimental fact remains to be explained: the agreement is actually obtained with the {\it incoherent} inter-band probability (see e.g. Fig. \ref{fig:modstuckcont}(b)) rather than with the coherent probability (see e.g. Fig. \ref{fig:num3dplot}). In other words, St\"uckelberg oscillations (interferences) are not observed in the experiment, whereas they are predicted. Here we would like to discuss this specific point in more detail. The absence of interferences could be due to (i) decoherence, (ii) blurring or (iii) washing out because of some averaging process. (i) Decoherence is unlikely in a cold atom experiment with almost {\it non-interacting} fermions. We estimate the decoherence time due to spontaneous emission following Ref. \cite{Kolovsky}. It is roughly given by $1/\tilde{\gamma} = (\delta/\Omega)^2/\gamma\sim 10^3$ s where $\gamma \sim 6$ MHz is the natural line width for the relevant transition of $^{40}$K, $\delta \sim 108$ THz is the detuning and $\Omega \sim 1$ GHz is the Rabi frequency estimated from $\hbar \Omega^2/\delta \sim E_R$, where $E_R\sim 4.4$ kHz is the recoil energy. This is much longer than the experimental time, therefore ruling out decoherence as a possible mechanism to explain the absence of interferences. (ii) Blurring of the interferences could also occur because of the detection process using a finite pixel size. We checked that possibility and found that the pixel size is small enough that it should allow experimentalists to resolve the interferences.
(iii) We are left with the possibility of washing out of the oscillations due to several averaging processes. We included averaging over a two-dimensional atomic distribution in reciprocal space, which only resulted in slightly smoothing the oscillations (compare Fig. \ref{fig:num3dplot} here and Fig. 4(d) in \cite{Lim}). However, the atomic cloud in the experiment was actually not two- but three-dimensional, even though the optical lattice was two-dimensional. The atomic gas was indeed confined by an anisotropic three-dimensional harmonic trap, but very far from the regime where one of the directions of motion would be frozen. This means that the system is best seen as a bunch of parallel one-dimensional tubes, each tube corresponding to a single site of a two-dimensional honeycomb-like lattice. The atoms hop in a kind of tight-binding lattice in the $xy$ plane (except for a weak harmonic trap $m\omega_x^2 x^2/2 + m \omega_y^2 y^2/2$) and are almost free to move in the $z$ direction (except for a weak harmonic trap $m\omega_z^2 z^2/2$). The period of the harmonic motion in the $z$ direction $2\pi/\omega_z$ is very long compared to the time an atom spends in the St\"uckelberg interferometer $\sim 2\sqrt{|d|} t_{car}$, where $t_{car}=(2m^* \hbar)^{1/3}/F^{2/3}$. One can therefore think that an atom moves in the interferometer at an almost constant $z$. However, because of the finite laser waist, the laser intensities are inhomogeneous, so that atoms at different $z$ experience a slightly different optical lattice. In other words, the parameters $d$ and $k$ of the universal hamiltonian are slightly $z$-dependent. As seen in Fig. \ref{fig:num3dplot} for example, the interferences in the inter-band probability $P$ are essentially a function of $d$ (and not so much of $k$), with a fringe spacing of roughly $\delta d \sim 1$ (which is the same as saying that $\delta \Delta_*\sim 0.04 E_R$ \cite{Lim}). From the experimental conditions of Ref. \cite{Tarruell}, we estimate a laser waist of $\sim 150$ microns and a cloud radius of $\sim 30$ microns in the $z$ direction (half of the tube's length) so that the parameter $d$ varies by roughly $0.7$ between the center and the edge of the atomic cloud. As this is comparable to the spacing between a dark and a bright fringe, it should be enough to wash out the oscillations. In the experiment, the inter-band probability is automatically averaged over the third spatial direction, i.e. along the axis of the tubes. Therefore, we think that the averaging over the third spatial direction could be responsible for the absence of the oscillations in the inter-band probability. An alternative explanation for the disappearance of the oscillations was very recently proposed in Ref. \cite{Uehlinger}. It is based on the spatial inhomogeneity of the applied force in the 2D plane, which also leads to averaging and washing out of the probability fringes. By breaking the inversion symmetry of the lattice, it is also possible to give a mass to the Dirac fermions, i.e. to gap the Dirac cones when $d<0$ \cite{Tarruell}. Such a situation is easily incorporated in our theory by a simple mapping $k\to \sqrt{k^2+g^2}$. The hamiltonian (\ref{dimensionlessh}) becomes $H=[t^2+d]\sigma_z+k\sigma_x+g\sigma_y$ where $g$ is the (dimensionless) mass gap induced by inversion symmetry breaking. The inter-band transition probability $\mathcal{P}(d,k,g)$ when $g\neq 0$ is simply related to the probability $P(d,k)$ at $g=0$ by $\mathcal{P}(d,k,g)=\mathcal{P}(d,\sqrt{k^2+g^2},0)=P(d,\sqrt{k^2+g^2})$.
This mapping is easily found by looking at the coupled differential equations (\ref{eq:diababasis}), in which $H_{12}=k$ becomes $H_{12}=k-ig=\sqrt{k^2+g^2}e^{i\gamma}$ where $\gamma \equiv \textrm{Arg }(k-ig)$. The phase $\gamma$ is time independent and can be gauged away so that only the modulus of $k-ig$ matters and $H_{12}$ becomes $\sqrt{k^2+g^2}$. \section{Conclusion} Inspired by a recent experiment probing the merging transition of Dirac cones via Bloch-Zener oscillations of ultracold fermionic atoms \cite{Tarruell,Lim}, we have studied inter-band tunneling for a quadratic band crossing. The latter problem depends on two dimensionless parameters, which are the merging gap $d$ and the perpendicular gap $k$. We computed the probability $P$ for a particle to tunnel from the lower to the upper band as a function of $k$ and $d$. Qualitatively, the probability oscillates as a function of $d$ in the gapless phase and decays exponentially in the gapped phase. The oscillations are a result of St\"uckelberg interferences. As a function of $k$, the probability shows quite an unusual non-monotonic behavior: $P$ vanishes exponentially in the adiabatic/semiclassical limit (large $k$), which is the expected tunneling behavior in the large gap limit, but it also vanishes in the opposite diabatic/sudden limit (small $k$) as a result of a special symmetry. Indeed, when $k=0$, the conservation of the pseudo-spin $\sigma_z$ implies that $P$ vanishes. When $k\neq 0$, this symmetry is broken and, quite counter-intuitively, the opening of a gap first leads to a quadratic increase of the probability to tunnel between the bands. In addition, when $k\gg 1$, there are oscillations of $P$ (as a function of both $k$ and $d$) whatever the sign of $d$. These are due to interferences between two poles in the complex time plane. The latter exist not only in the presence of Dirac points (gapless phase) but also in the gapped phase (in which case the bands do cross but at times with a finite imaginary part). The probability $P$ of inter-band tunneling was calculated using different methods. To summarize: the numerically exact solution of the time-dependent Schr\"odinger equation is given in section VI. We also used approximate analytical techniques to compute $P$: for small $k\ll 1$ and arbitrary $d$, we used diabatic perturbation theory, see eq. (\ref{eq:airy}). For negative $d\ll -1$ and small $k\ll -d$, we employed the St\"uckelberg approach, see eq. (\ref{LZS}). And for large $k\gg 1$, we used adiabatic perturbation theory, see eq. (\ref{eq:adiabaproba}). For intermediate $k$'s, we have no exact analytical prediction but an approximate modified St\"uckelberg formula, see eq. (\ref{eq:modstuckcont}), that compares well to the numerics in the whole negative $d$ region and also for large $k$ and positive $d$ (adiabatic regime). Using the tools we have developed, it should be possible to compute the inter-band tunneling probability for many two-band hamiltonians. {\it Note added}: After completion of the present work, we became aware of related articles in the context of atomic collisions, in which a parabolic level crossing problem was studied, see Ref. \cite{Suominen}. The specific case $d=0$ (exactly at the merging transition) was also very recently analyzed in \cite{LehtoSuominen}, where it is called parabolic level glancing. \begin{acknowledgments} We thank Fr\'ed\'eric Jean Marcel Pi\'echon for many useful discussions. We acknowledge support from the Nanosim Graphene project under grant No. ANR-09-NANO-016-01.
\end{acknowledgments}
{ "timestamp": "2012-11-30T02:01:52", "yymm": "1210", "arxiv_id": "1210.3703", "language": "en", "url": "https://arxiv.org/abs/1210.3703" }
\section{Introduction} \indent Semiconductor nanowires grown by the vapour-liquid-solid (VLS) method \cite{dayeh2009,plante2009,schroer2010, joyce2010, dick2010} are the subject of active study, with many potential applications ranging from nanoscale circuits \cite{xiang2006} and gas sensors \cite{du2009} to high-efficiency solar cells \cite{garnett2011,hochbaum2010, lapierre2011a, lapierre2011b}. In particular, InAs nanowires form Ohmic contacts easily \cite{suyatin2007}, and can be grown with low structural defect densities \cite{schroer2010}, giving rise to high electron mobilities \cite{ford2009}, though still low compared to those of high-quality bulk InAs \cite{rode1971}. The quasi-one-dimensional nature of electron transport at low temperatures \cite{blomers2011}, together with a spin-orbit coupling $\sim 40$ times larger than in GaAs, makes InAs an attractive material for the development of spintronic devices such as electron spin qubits in gate-defined quantum dots \cite{nadjperge2010, baugh2010, schroer2011}. Although transport in InAs nanowires is well-studied \cite{dayeh2010}, the detailed role played by surface states and the surface potential \cite{wieder1974, watkins1995, schrieffer1955, affentauschegg2001} with regard to the electron mobility is not well understood. \\ \indent In this paper, we present electron mobility measurements on low defect density InAs nanowire field-effect transistors (FETs) that show a characteristic temperature dependence. The mobility peaks in the range $3,000-20,000$ cm$^2$V$^{-1}$s$^{-1}$ near $40$ K, with a positive slope at lower temperatures and a negative slope at higher temperatures. Even though acoustic phonon scattering produces a temperature dependence \cite{madelung1964} consistent with the data above $\sim 50$ K, the estimated mobility is much too large (by 2-3 orders of magnitude) to explain our observations. A similar argument excludes optical phonon scattering as a dominant mechanism in this temperature range (it might dominate at even higher temperatures). We expect this to remain true even in quasi-one-dimensional systems, where phonon scattering is moderately enhanced due to a larger available phase space for scattering \cite{Bruus1993}. Furthermore, our experimental results are obtained on nanowires with low stacking fault densities, which we confirm using transmission electron microscopy to inspect devices after transport measurements. This excludes stacking faults or twinning defects from explaining the qualitative temperature dependence of mobility. On the other hand, the nanowire geometry suggests that a surface scattering mechanism should be dominant. Surface states are known to be present at densities $\sim 10^{11}-10^{12}$ cm$^{-2}$ eV$^{-1}$ and to act as electron donors. We argue that these positively charged surface states should be more effective at scattering electrons than surface roughness (charge neutral defects), and therefore limit the mobility. Our numerical simulations show that surface charges at the known densities will indeed lead to scattering rates that produce mobilities of the correct order. We find that the decrease in mobility with temperature above $\sim 50$ K can be explained by an increase in the number of ionized surface states due to thermal activation. Consistent with this picture, chemical treatment of the nanowire surface is seen to have a strong effect on the temperature-dependent mobility.
Surface roughness scattering, on the other hand, should produce a weaker temperature dependence than what we observe \cite{nag1980}. These results underscore the need for tailored surface passivation techniques \cite{tilburg2010, Haapamaki2012} to reduce the density of surface scatterers and smooth the local electronic potential, leading to increased carrier mobility and more ideal devices for a wide range of quantum transport, nanoscale circuitry and optoelectronics applications. \\ \begin{figure*}[t] \includegraphics[width= 16cm]{fig1.png} \caption{(a) Low and (b) high magnification bright-field TEM images of an InAs nanowire grown by GS-MBE at 0.5 $\mu$m/hr. Scale bars are 500 nm in (a) and 5 nm in (b). The inset in (b) shows the selected area diffraction pattern along the $\left[2\bar{1}\bar{1}0\right]$ zone axis indicating a pure wurtzite crystal structure. A majority of wires grown under these conditions had low stacking fault densities $<1~\mu$m$^{-1}$.} \label{fig1} \end{figure*} \section{Nanowire growth by gas-source MBE} \indent InAs nanowires were grown in a gas source molecular beam epitaxy (GS-MBE) system using Au seed particles. A 1 nm Au film was heated to form nanoparticles on a GaAs (111)B substrate. For nanowire growth, In atoms were supplied as monomers from an effusion cell, and As$_2$ dimers were supplied from an AsH$_3$ gas cracker operating at 950$^{\circ}$C. Nanowire growth proceeded at a substrate temperature of 420$^{\circ}$C, an In impingement rate of 0.5 $\mu$m/hr, and a V/III flux ratio of 4. The nanowires grew in random orientations with respect to the GaAs (111)B substrate, possibly due to the large lattice mismatch strain between InAs and GaAs. Transmission electron microscopy (TEM) analysis, shown in figure \ref{fig1}(a), indicated a Au nanoparticle at the end of each nanowire (darker contrast at the left end), consistent with the VLS process. Most nanowires had a rod-shaped morphology with negligible tapering and a diameter ($\sim20-80$ nm) that was roughly equal to the Au nanoparticle diameter at the top of each nanowire, indicating minimal sidewall deposition. \\ \indent A common occurrence in III-V nanowires is the existence of stacking faults whereby the crystal structure alternates between zincblende and wurtzite, or exhibits twinning, along the nanowire length. Joyce et al. \cite{joyce2010} and Dick et al. \cite{dick2010} have shown that growth parameters in metalorganic chemical vapour deposition (MOCVD) have profound effects on the InAs nanowire crystal phase. Zincblende, wurtzite, or mixed zincblende/wurtzite nanowires were formed by simply tuning the temperature and V/III ratio. We have found that for GS-MBE grown InAs nanowires, stacking faults can be nearly eliminated and pure wurtzite structures can be realized at a sufficiently low growth rate of $\sim 0.5~\mu$m/hr. At higher growth rates, but otherwise identical growth conditions, the InAs nanowires exhibited a much larger fraction of stacking faults on average. For example, TEM analysis of InAs nanowires grown at a rate of 1 $\mu$m/hr exhibited an average linear density of stacking faults $\approx 1~\mu$m$^{-1}$. As with GaAs nanowires \cite{fortuna2008,shtrikman2009,joyce2010}, the density of faults diminished dramatically when the growth rate was reduced.
Selected area electron diffraction for a typical nanowire (inset of figure \ref{fig1}(b)) confirms the pure wurtzite crystal structure and the absence of stacking faults.\\ \begin{figure*}[t] \includegraphics[width= 16cm]{fig2.png} \caption{(a-c) Conductance versus backgate voltage for devices $1-3$ at selected temperatures. $D$ is the nanowire diameter and $L$ the FET channel length (device 1 is tapered with an average nanowire diameter $\langle D\rangle=71$ nm). The tangent lines drawn on the $T=122$ K and $T=60$ K traces in (a) indicate the maximum slopes corresponding to peak field-effect mobility. The pinchoff threshold voltage is defined as the intercept between this tangent line and the $G=0$ axis. (d-f) The pinchoff threshold voltages versus temperature extracted from the conductance measurements. In (d), data are shown for device 1 before and after an ammonium sulfide treatment was applied to the FET channel (the data in (a) correspond to the untreated case). The empirical fits in (d-f) are of the form $V_t = V_0 + V_1 e^{-E_a/kT}$, as described in the text.} \label{fig2} \end{figure*} \section{Mobility in field-effect transistors} \indent Field-effect transistors (FETs) were fabricated by mechanically depositing as-grown nanowires on a $175$ nm thick SiO$_2$ layer above an n$^{+}$-Si substrate that functions as a backgate, and writing source/drain contacts for selected wires using electron-beam lithography (a schematic device layout is shown in supplementary figure 1). This was followed by an ammonium sulfide etching and chemical passivation process to remove the native oxide and prevent regrowth \cite{suyatin2007} prior to evaporation of Ni/Au contacts. This process yields devices with contact resistance that is small compared to the channel resistance \cite{suyatin2007}. Channel lengths ranged from 0.7 to 3 $\mu$m. Transport measurements were carried out in He vapour in an Oxford continuous flow cryostat from 10 K to room temperature. Bias and gate voltages were applied using a high-resolution home-built voltage source, and a DL Instruments current preamplifier was used to measure DC current at a noise floor $\sim 0.5$ pA$/\sqrt{\mathrm{Hz}}$. All devices tested at room temperature displayed fully Ohmic I-V characteristics, with resistances typically in the range of $10-200$ k$\Omega$. Gate sweeps were performed at a rate between 3 mV/s (lower temperatures) and 10 mV/s (higher temperatures). Earlier work reported that a sweep rate of 7 mV/s led to very small hysteresis and therefore minimal interface capacitance \cite{dayeh2007b}. Under these conditions, we observe a shift with respect to gate voltage of less than 50 mV upon changing sweep direction, and no observable change in the shape of the conductance curve. Note that FET devices with channel lengths greater than $\sim 200$ nm are known to be in the diffusive transport regime \cite{Zhou2006}.\\ \indent The gate capacitance per unit length was calculated using the expression \cite{wunnicke2006, ford2009, tilburg2010} \begin{equation} \label{eqn3} C'_g = 2\pi \epsilon_0 \epsilon_r/\cosh^{-1}\left(\frac{R+t_{ox}}{R}\right) \end{equation} where $R$ is the nanowire radius, $\epsilon_0$ is the electric constant, $\epsilon_r=3.9$ is the relative dielectric constant and $t_{ox}$ is the thickness of the SiO$_2$ layer. For the devices studied here, TEM analysis indicated $t_{ox}=175$ nm.
The equation above assumes that the nanowire is embedded in SiO$_2$; to compensate for the fact that the nanowire actually sits atop the SiO$_2$ and is surrounded by vacuum ($\epsilon_r=1$), it was shown by Wunnicke \cite{wunnicke2006} that a modified dielectric constant $\epsilon'_r = 2.25$ can be used. Our numerical simulations, comparing the pinchoff threshold voltages of the FET device calculated with and without SiO$_2$ embedding, confirmed that this is a suitable correction factor. The capacitances based on equation~\ref{eqn3} are listed in table~\ref{smtable2}. \begin{table*}[t!] \begin{tabular}{c|ccc} \hline device $\#$ & $D$ (nm) & ~~ $L$ ($\mu$m)~~ & ~~$C'_g$ (aF$\cdot \mu$m$^{-1}$) ~~\\ \hline\hline 1 & 71 & 2.95 & 50.76 \\ 2 & 50 & 0.97 & 45.21\\ 3 & 35 & 0.77 & 40.52\\ \hline \end{tabular} \caption{Diameters ($D$) and channel lengths ($L$), measured by AFM and TEM, and calculated capacitance per unit length ($C'_g$) for the three main FET devices investigated. Uncertainties in diameter are $\pm 2$ nm (for tapered device 1, $D$ is the average diameter).} \label{smtable2} \end{table*} \subsection{Results} \indent We investigated 10 devices to varying levels of detail, and found qualitatively similar results. Here we will focus on three representative devices, denoted 1, 2 and 3, with nanowire diameters $D =$ 71, 50 and 35 nm, respectively. The nanowires in devices 2 and 3 were untapered, whereas the nanowire in device 1 was tapered, with its diameter varying linearly from 53 nm to 90 nm across the FET channel (average diameter $\langle D \rangle =$ 71 nm). TEM analysis was carried out on selected devices after transport studies were complete to check for the presence of stacking fault defects. Devices 1 and 3 were found to have zero and one fault, respectively, whereas a fourth device ($D =$ 55 nm) with low mobility was found to have an atypically large fault density (see section~\ref{faults} below). TEM analysis was not performed on device 2. Transport data for an additional high mobility device with $D =$ 50 nm are shown in supplementary figure 3. The channel of device 1 was subjected to an ammonium sulfide etching and passivation treatment, similar to that carried out prior to contacting, after the initial set of transport measurements was completed. Subsequent transport measurements were taken several days later, likely after the native oxide had partially or fully regrown.\\ \indent Figure~\ref{fig2}(a-c) shows conductance $G=I_{sd}/V_{sd}$, where $I_{sd}$ and $V_{sd}$ are the source-drain current and bias, respectively, versus backgate voltage $V_g$ for devices 1, 2 and 3 at selected temperatures. The bias is set to $V_{sd}=1$ mV (similar results are obtained at higher bias). For all three devices, the maximum transconductance $\left(\frac{dI_{sd}}{dV_g}\right)_{max}$ is seen to decrease as temperature is raised above $\sim 30-50$ K. Figures~\ref{fig2}(d-f) show the pinchoff threshold voltages $V_t$ corresponding to the data in figures~\ref{fig2}(a-c), where $V_t$ is defined as the intercept between the maximum slope tangent line and the $G=0$ axis. $V_t$ typically shifts toward more positive gate voltages as temperature decreases, and saturates below $\sim 50$ K. All temperature sweeps reported here were from low to high temperature. We fit the pinchoff threshold data to an empirical function based on thermal activation $V_t = V_0 + V_1 e^{-E_a/kT}$, where $k$ is the Boltzmann constant, typically yielding an $E_a\sim 5-30$ meV.
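As a quick consistency check on table~\ref{smtable2} (our own illustration, not part of the original analysis), the sketch below evaluates equation~\ref{eqn3} with the modified dielectric constant $\epsilon'_r = 2.25$ and $t_{ox}=175$ nm; it reproduces the quoted $C'_g$ values.
\begin{verbatim}
# Gate capacitance per unit length from the embedded-wire formula,
# with the Wunnicke correction eps_r' = 2.25 for a wire on top of SiO2.
import numpy as np
from scipy.constants import epsilon_0

def C_per_length(D_nm, t_ox_nm=175.0, eps_r=2.25):
    R = D_nm / 2.0
    c = 2.0 * np.pi * epsilon_0 * eps_r / np.arccosh((R + t_ox_nm) / R)
    return c * 1e12  # F/m -> aF/um

for D in (71, 50, 35):
    print(D, "nm:", round(C_per_length(D), 2), "aF/um")
# -> 50.76, 45.21 and 40.52 aF/um, matching the table
\end{verbatim}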
Note that for device 1 in figure~\ref{fig2}(d) we also plot the $V_t$ measured after the chemical treatment was applied to the FET channel. $V_t$ shifted considerably to more positive gate voltage post-treatment, and also showed a weaker temperature dependence. This suggests that the surface potential and density of conduction electrons in the nanowire are controlled in large part by the surface chemistry \cite{du2009}. Post-passivation conductance versus gate voltage curves for device 1 are given in supplementary figure 2. \\ \begin{figure*}[!t] \includegraphics[width= 16cm]{fig3.png} \caption{(a) Comparison of the field-effect and effective mobilities for device 2 at $T=40$ K. (b) The temperature dependence of effective mobility for device 2 at different values of gate voltage relative to $V_{peak}$, the gate voltage at which peak mobility occurs. The values at $V_{peak}$ are shown by black dots, at $V_{peak}+0.25$ V by red dots, etc. The mobility at $V_{peak}+0.5$ V (green stars) is near the crossover point between the two slopes seen in the effective mobility in the left panel.} \label{fig3} \end{figure*} \begin{figure*}[!t] \includegraphics[width= 17cm]{fig4.png} \caption{(a) Experimental peak effective mobilities versus temperature for devices $1-3$ (diameters 71, 50, and 35 nm, respectively). The empirical fitting function described in the text (solid lines) is given by $\mu = AT^x (1+Be^{-E_a/kT})^{-2}$, where $x$, $A$, $B$ and $E_a$ are fitting parameters given in the main text. (b) Comparison of peak effective mobilities versus temperature for device 1 before and after an ammonium sulfide etching and passivation treatment was applied to the FET channel. The fitting function is of the same form. For comparison, the pinchoff threshold voltages before and after treatment are shown in figure~\ref{fig2}(d).} \label{fig4} \end{figure*} \indent From the measured conductance versus backgate voltage curves, both the field-effect mobility and the effective mobility \cite{ford2009} may be extracted. The field-effect mobility is a lower bound on the effective mobility, and is defined as \begin{equation} \mu_{\text{fe}} = q^{-1}\frac{d\sigma}{dn} = \frac{L}{C'_g}\frac{dG}{dV_g}, \label{fem} \end{equation} where $\sigma$ is the conductivity, $n$ is the electron concentration, $q$ is the electron charge, $C'_g$ is the gate capacitance per unit length and $L$ is the channel length. Equation~\ref{fem} only strictly holds at peak mobility, where $\frac{d\mu_{\text{fe}}}{dn}=0$. The effective mobility is defined as \begin{equation} \mu_{\text{eff}} = \frac{LG}{C'_g(V_g-V_t)}, \label{effm} \end{equation} where $V_t$ is the pinchoff threshold voltage defined previously, and the expression only holds for $V_{sd}\ll V_g-V_t$. The two mobility measures are compared in figure~\ref{fig3}(a) for device 2 at 40 K. The effective mobility is typically a smoother function of $V_g$, and $\mu_{\text{eff}}\geq\mu_{\text{fe}}$ for all of our data. Two regimes can be clearly seen in $\mu_{\text{eff}}$: the slope $|\frac{d\mu_{\text{eff}}}{dV_g}|$ is larger from $V_g=-0.25$ V to $V_g=+0.25$ V than at more positive gate voltages. In figure~\ref{fig3}(b) we show the effective mobility versus temperature for device 2 at different values of gate voltage relative to the position of peak effective mobility ($V_{peak}$). The data shown are for $V_g=V_{peak}+\delta$, where the top curve (black dots) is for $\delta=0$, and the lower curves (red, green, blue) are for $\delta=0.25, 0.5, 1.0$ V, respectively.
The temperature dependence is most pronounced at peak mobility, but follows a similar trend for points on the high slope region of the effective mobility curve. At large positive gate voltages relative to $V_{peak}$, the mobility shows little to no dependence on temperature. We ascribe the gate dependence of effective mobility, which for all devices is qualitatively similar to that shown in figure~\ref{fig3}(a), mainly to a surface accumulation layer of electrons that forms as the gate is made more positive. This accumulation layer will act to screen the conduction electrons in the core of the nanowire, effectively reducing the gate capacitance to the core electrons and producing a smaller observed mobility, since we do not take this screening into account in equation~\ref{effm}. As the electron density in the accumulation layer becomes larger, it also dominates the device conductance and has a lower intrinsic mobility due to its proximity to the surface. The peak mobility, however, occurs close to pinchoff, where the accumulation layer should be absent or negligible. At peak mobility, the nanowire surface potential is close to the flat-band condition, and we would also expect little or no interface capacitance \cite{dayeh2007b} as long as the gate sweep is sufficiently slow. Hence, the peak mobility should be a good approximation to the intrinsic mobility of the conduction electrons in the bulk of the nanowire, so that is the quantity we focus on in the remainder of the paper. \\ \indent A possible source of systematic error in mobility is shielding due to Ohmic contacts \cite{Pitanti2012}, which can become large for short channel lengths. For our shortest channel length of 770 nm (device 3), the calculated mobility could be overestimated by up to a factor of two in the worst case. The shielding error should be negligible for device 1. This type of error is independent of temperature, and therefore does not affect the qualitative behaviour of mobility. Another concern is the dependence of the measured mobility on the bias voltage. We observe no difference, within statistical error, between mobilities measured at 1 mV and 10 mV bias. Recently, challenging Hall effect measurements were carried out \cite{Storm2012,Blomers2012} on InAs nanowires, showing that immobile interface charge accounts for an appreciable fraction of the total gate-induced charge, meaning that field-effect measurements tend to underestimate the true mobility. We argue that this mechanism would most strongly affect the mobility estimates in the device ``on'' state rather than at peak mobility, where the surface potential is nearly flat. Therefore we expect that the qualitative temperature dependence we measure reflects intrinsic behaviour and is not an artifact of interface capacitance effects. Temperature- and gate-dependent Hall measurements on our (relatively smaller-diameter) nanowires are desirable to confirm this, but are beyond the scope of this paper.\\ \indent Devices 1 and 3 show qualitatively similar behaviour to device 2, as shown in figure~\ref{fig4}(a). The maximum in mobility at around $T=50$ K is consistent with previous reports \cite{ford2009, Dhara2011}. At a given temperature, the mobility increases with nanowire diameter, as was also reported previously \cite{ford2009}. This is consistent with the mobility being dominated by surface charge scattering, as the overlap of the carrier distribution with the scattering potential becomes much stronger at smaller diameters \cite{Das2005}.
Note, however, that we have not examined enough devices to draw firm conclusions about diameter dependence on statistical grounds. Motivated by the hypothesis that surface scattering dominates the mobility, we fit the data in figure~\ref{fig4} to an empirical function of the form $\mu(T) \propto T^x N(T)^{-y}$, where $N(T)$ is the number of surface scatterers. This function does not result from an analytical solution of the surface scattering problem, which is in general too difficult to solve without resorting to numerics \cite{Das2005}. Rather, this function provides a good model for our data and is based on the following reasoning. For a fixed number of scatterers, the average mobility increases with temperature as $T^x$, where $x\sim 1$, since the carrier concentration increases with temperature, leading to an increase in the Fermi velocity, which reduces the scattering probability \cite{Das2005, nag1980}. On the other hand, an increase in the number of scatterers decreases mobility. In the limit of a low density of scatterers and a high probability of scattering per defect, scattering events can be treated as uncorrelated, and $\mu \propto N^{-1}$ (or equivalently, the scattering rate is proportional to the number of scatterers). However, for scattering from positively charged surface states, there is a high density of scatterers with a low probability of scattering per defect, leading to correlated scattering \cite{Evans2005} (see section below). Here, the electron wavefunction remains coherent while interacting with multiple surface charges simultaneously, which leads roughly to $\mu \propto N^{-2}$, since the scattering matrix element is roughly proportional to $N$, so the transition rate is proportional to $N^2$. We model $N(T)$ based on the thermal activation of surface donors: $N(T) \propto (1+Be^{-E_a/kT})$, where $B$ and $E_a$ are free parameters, similar to the expression used in figure~\ref{fig2} to model the pinchoff threshold voltages. \\ \indent The data in figure~\ref{fig4} are fit to $\mu = A T^x (1+Be^{-E_a/kT})^{-2}$. For $D=(71, 50, 35)$ nm, the fit parameters (excluding scaling factor $A$) are the following: $x=(1.0, 1.25, 0.67)$, $B=(13.4, 14.6, 3.0)$, and $E_a=(17.2, 15.1, 8.0)$ meV. We note that the data can be fit equally well to a functional form $\mu \propto N^{-1}$, albeit with different fit parameters; we chose the $N^{-2}$ form for consistency with the numerical modelling results in the next section. The $E_a$ values suggest thermal ionization of the surface donor states with activation energies in the range $8-20$ meV, consistent with the range of $E_a$ values obtained from fitting $V_t$ in figure~\ref{fig2}. The smaller value of $B$ for the 35 nm diameter nanowire is consistent with the weaker temperature dependence of its pinchoff threshold voltage in figure~\ref{fig2}(f), indicating a smaller number of thermally activated donor states relative to the larger diameter nanowires. Figure~\ref{fig4}(b) compares the data for device 1 before and after an ammonium sulfide etching and passivation treatment was applied to the FET channel. The best fit parameters in the latter case are $x=0.62$, $B=4.5$, $E_a=20.2$ meV. After the chemical treatment, the turnover in mobility broadens and shifts to higher temperatures. This is accompanied by a much weaker change in the pinchoff threshold voltage with temperature, shown in figure~\ref{fig2}(d).
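To illustrate the shape of this fitting function (a sketch only; the overall scale $A$ is not quoted above, so an arbitrary value is used):
\begin{verbatim}
# Empirical mobility fit mu(T) = A * T^x * (1 + B*exp(-Ea/kT))^(-2),
# evaluated with the device-2 parameters quoted in the text
# (x = 1.25, B = 14.6, Ea = 15.1 meV). The scale A is arbitrary here.
import numpy as np

KB_MEV = 0.08617  # Boltzmann constant in meV/K

def mu_fit(T, A, x, B, Ea_meV):
    return A * T**x / (1.0 + B * np.exp(-Ea_meV / (KB_MEV * T)))**2

T = np.linspace(10.0, 300.0, 2000)
mu = mu_fit(T, A=1.0, x=1.25, B=14.6, Ea_meV=15.1)
print("turnover near T =", round(T[np.argmax(mu)]), "K")  # close to 40 K
\end{verbatim}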
The smaller value of fit parameter $B$ after chemical treatment is consistent with the weaker temperature dependence of pinchoff threshold voltage after treatment. We note here that the detailed condition of the nanowire surface post-treatment is not known, and it is likely that the native oxide partially or fully regrew before or during the post-treatment transport measurements. The data are presented only to show that the nanowire transport properties are significantly altered by chemical removal of the oxide followed by unknown surface chemical processes; these processes evidently induce some change in the nature or density of surface states. The overall reduction in mobility is consistent with previous observations of low mobility in nanowires exposed to wet etching conditions \cite{Storm2012, Dhara2011}, which could be due to changes in surface states, increased surface roughness, or a combination of the two. \\ \section{Numerical modelling} \indent We carried out numerical modelling of the nanowire transistor to test whether scattering from charged surface states can account for the magnitude and temperature dependence of the experimental mobilities. The nanowire transistor was simulated using a finite-element method implemented in the COMSOL\textsuperscript{\textregistered} multiphysics package. The model consisted of a 1 $\mu$m long, 50 nm diameter nanowire atop a 175 nm thick SiO$_2$ layer with an underlying backgate. The layer above the SiO$_2$ that embeds the nanowire is vacuum, with $\epsilon_r=1$, and we take $\epsilon_{r}=15.15$ for the InAs nanowire. Given the low effective mass of electrons in InAs, we used a self-consistent Poisson-Schrodinger solver \cite{Datta2005} to calculate the electrostatic potential and charge distribution in the nanowire so that quantum confinement is properly taken into account. The model assumes that the conduction electron concentration at zero gate voltage is due to a surface density of positively charged donor states, $\sigma^{+}_{ss}\sim 10^{11}-10^{12}$ cm$^{-2}$, an input parameter that is allowed to vary with temperature.\\ \indent Consider a Cartesian coordinate system with $z$ aligned with the nanowire axis and radial coordinates $(x,y)$. The potential $V(x,y,z)$ that is a solution to the Poisson equation is nearly independent of the axial coordinate $z$, so we solve the Schrodinger equation in a two-dimensional cross-section of the nanowire to obtain the radial eigenstates $\psi_i(x,y)$. The electron density as a function of the radial coordinates $n(x,y)$ is calculated from these solutions as \begin{equation} n(x,y)=\sum_i n_{i}(x,y) = \sum_i|\psi_i(x,y)|^2\int_{E_i}^{\infty}f(E)g(E-E_i)dE \label{nxy} \end{equation} where $g(E-E_i)=\frac{L}{\pi\hbar} \sqrt{\frac{2m^*}{E-E_i}}$ is the one-dimensional (1D) density of states, $f(E)$ is the Fermi-Dirac distribution, and $E_i$ and $\psi_i(x,y)$ are the energy and wavefunction of the $i^{th}$ eigenstate, respectively. The average electron concentration is obtained by integrating over the radial coordinates and dividing by the volume $\pi R^2 L$. A change of variables $E \rightarrow (E-E_i)/kT$ leads to a compact form: \begin{equation} \langle n \rangle = \frac{\sqrt{2m^*k_BT}}{\pi^2 \hbar R^2}\sum_{i} F_{-1/2}\left(\frac{E_F-E_i}{k_BT}\right), \label{density} \end{equation} where $E_F$ is the quasi-Fermi level, $F_{-1/2}$ is the Fermi-Dirac integral of order $-1/2$, and $m^*$ is $0.023$ times the electron mass.
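To make equation~\ref{density} concrete, the sketch below (our illustration, with hypothetical subband energies in place of the Schrodinger-Poisson eigenvalues) evaluates the subband sum; the unnormalized Fermi-Dirac integral convention follows from the change of variables described above.
\begin{verbatim}
# <n> = sqrt(2 m* kB T)/(pi^2 hbar R^2) * sum_i F_{-1/2}(eta_i), with
# eta_i = (E_F - E_i)/(kB T) and the unnormalized integral
# F_{-1/2}(eta) = integral_0^inf x^(-1/2) / (1 + exp(x - eta)) dx.
import numpy as np
from scipy.integrate import quad
from scipy.special import expit
from scipy.constants import hbar, k as kB, m_e, e

M_STAR = 0.023 * m_e  # InAs effective mass

def F_minus_half(eta):
    val, _ = quad(lambda x: expit(eta - x) / np.sqrt(x), 0.0, np.inf)
    return val

def avg_density(E_sub_meV, EF_meV, T, R):
    """Average density <n> in m^-3; energies in meV, T in K, R in m."""
    pref = np.sqrt(2.0 * M_STAR * kB * T) / (np.pi**2 * hbar * R**2)
    return pref * sum(F_minus_half((EF_meV - Ei) * 1e-3 * e / (kB * T))
                      for Ei in E_sub_meV)

# Hypothetical subband minima for a 50 nm wire (illustration only):
n_avg = avg_density([10.0, 35.0], EF_meV=12.0, T=100.0, R=25e-9)
print("%.2e m^-3 = %.2e cm^-3" % (n_avg, n_avg * 1e-6))
\end{verbatim}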
The Fermi energy $E_F$ is determined by the net conduction electron concentration at zero gate voltage. Figure~\ref{fig5}(a) shows the values of $\sigma^{+}_{ss}(T)$ used in the simulations, and the resulting average conduction electron density $\langle n \rangle$ versus temperature. We chose a function $\sigma^{+}_{ss}(T) = \sigma_0+\sigma_1e^{-E_a/kT}$ to model the thermal activation of surface donor states, where $\sigma_0 = 1.7\times10^9$ cm$^{-2}$, $\sigma_1 = 9.8\times10^{10}$ cm$^{-2}$ and $E_a = 6.7$ meV for the curve in figure~\ref{fig5}(a). These values were chosen so that the simulated electron density at zero gate voltage would roughly match the experimentally measured carrier density of device 2 at peak mobility. Note that peak mobility occurred at negative gate voltages in the real device, so the actual densities of surface donor states are likely larger than the values used in simulation. For simplicity, the simulations were carried out at zero gate voltage, so that the wavefunction is modelled as radially symmetric, unperturbed by a gate-induced field. \\ \begin{figure*}[!t] \includegraphics[width= 15cm]{fig5.png} \caption{(a) The values of surface donor density, $\sigma^{+}_{ss}(T)$, used as inputs for the numerical simulation of a 50 nm diameter nanowire are shown on the right vertical axis. The functional form, described in the text, models a simple thermal activation of donors. The resulting average conduction electron density $\langle n \rangle$ is shown on the left axis. The $\sigma^{+}_{ss}(T)$ values were chosen to produce $\langle n \rangle$ at $V_g=0$ similar in magnitude to the values observed experimentally for device 2 at peak mobility. (b) Fermi wavenumbers $k_1,\ldots,k_6$ of the first six radial subbands calculated from the Schrodinger-Poisson solutions for inputs $\sigma^{+}_{ss}(T)$. $\langle k \rangle$ is the average value over thermal occupation, and is proportional to the average electron velocity.} \label{fig5} \end{figure*} \indent Mobility is calculated using a multi-subband momentum relaxation time approximation \cite{Ferry1997}. We define three-dimensional eigenstates $|m,k\rangle = \psi_m(x,y) e^{i k z}/\sqrt{L}$, where $m$ is the radial subband index and $k$ is the axial wavenumber. The transition probability $T_{k,k'}^{mn}$ between the states $|m,k\rangle$, $|n,k'\rangle$ is calculated using Fermi's golden rule: \begin{equation} T^{mn}_{k,k'}=\frac{2\pi}{\hbar}|M^{mn}_{k,k'}|^2\delta(E_k-E_{k'}) \end{equation} where $M^{mn}_{k,k'}$ is the scattering matrix element $\langle k,m|V_C|k',n\rangle$ resulting from the Coulomb interaction potential $V_C$ of charged surface impurities. In our numerical simulations, $V_C$ is obtained directly from the Poisson solver, and this takes into account both screening and dielectric mismatch effects \cite{Jena2007, Salfi2012}. In the absence of these effects, $V_C$ would be analytically expressed as a sum over unscreened point-charge potentials. In a cylindrical coordinate system $(r, \theta, z)$ where $r$ and $z$ are the radial and axial coordinates, \begin{equation} V_C=\sum_i V_{C,i} = \frac{e^2}{4\pi\epsilon_0\epsilon_r}\sum_i \left(r^2 + (D/2)^2 - rD\cos{(\theta-\theta_i)} + (z-z_i)^2 \right)^{-1/2} \end{equation} where $V_{C,i}$ is the potential due to a single impurity located at $\mathbf{r}_i = (D/2, \theta_i, z_i)$.
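For orientation, the unscreened sum above is simple to evaluate directly; a sketch follows (our illustration; the $V_C$ actually used in the calculation comes from the Poisson solver and includes screening and dielectric mismatch).
\begin{verbatim}
# Unscreened point-charge sum for surface impurities at (D/2, theta_i, z_i);
# the relative angle theta - theta_i enters via the law of cosines.
import numpy as np
from scipy.constants import e, epsilon_0

def V_C_unscreened(r, theta, z, impurities, D, eps_r=15.15):
    """Potential energy (J) at (r, theta, z); impurities is a list of
    (theta_i, z_i) pairs on the wire surface of radius D/2 (SI units)."""
    pref = e**2 / (4.0 * np.pi * epsilon_0 * eps_r)
    total = 0.0
    for th_i, z_i in impurities:
        dist = np.sqrt(r**2 + (D / 2)**2
                       - r * D * np.cos(theta - th_i) + (z - z_i)**2)
        total += pref / dist
    return total

# 50 random impurities on a 50 nm diameter, 1 um long wire:
rng = np.random.default_rng(0)
imps = list(zip(rng.uniform(0, 2 * np.pi, 50),
                rng.uniform(-0.5e-6, 0.5e-6, 50)))
print(V_C_unscreened(0.0, 0.0, 0.0, imps, D=50e-9) / e * 1e3, "meV")
\end{verbatim}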
With the numerically-derived $V_C$ that includes screening effects, we find that the value of $M^{mn}_{k,k'}$ for a single positively charged surface impurity is on the order of $10^{-2}$ meV or less. Its smallness is due to the vanishing of $|\psi|^2$ at the surface, the large dielectric constant for InAs, screening effects, and the fact that the scattering potential is attractive. In this case, treating scattering from single impurities independently and incoherently adding their rates can only lead to the observed mobilities if the surface impurity charge densities are unreasonably high, $N \sim 10^{13}$ cm$^{-2}$. At such densities, the mean separation between scatterers is too small for the picture of uncorrelated scattering to be valid. On the other hand, for a $V_C$ that is the collective potential corresponding to a random distribution of many scatterers over the length of the nanowire, we are able to obtain the observed mobilities at impurity densities $N(T)\sim \sigma^{+}_{ss}(T)$ (see figure~\ref{fig6}). This approach justifies the empirical expression $\mu \propto N^{-2}$ used in the previous section to fit the experimental data, since the scattering matrix element $M^{mn}$ now roughly scales with $N$, rather than being independent of $N$ in the picture of uncorrelated single-defect scattering.\\ \indent The scattering matrix element is given by \begin{equation} M^{mn}_{k,k'}= \int_{0}^{D/2} \int_{0}^{2\pi} \int_{-L/2}^{L/2} r \psi_m(r,\theta) V_C \psi^*_n(r,\theta)e^{-i(k-k')z}dz d\theta dr \label{element} \end{equation} where $V_C$ is the total potential corresponding to a set of impurities. The integral in equation~\ref{element} has no straightforward analytical solution, so it is generally solved numerically \cite{Das2005}. The geometry for simulating correlated scattering is indicated schematically in figure~\ref{fig6}(a), and the Poisson solution $V_C$ obtained for a random impurity distribution is shown in figure~\ref{fig6}(b). The relaxation rate in subband $m$ due to scattering into subband $n$ is calculated as \begin{equation} 1/\tau^{mn}(k)=\sum_{k'}(1-\cos{\phi}) T^{mn}_{k,k'} \label{rate} \end{equation} where $\phi$ is the angle of deflection between the incoming wave vector $k$ and the outgoing wave vector $k'$. The values of $k'$ are given by energy conservation, $E_m+\hbar^2k^2/2m^{*}=E_n+\hbar^2k'^2/2m^{*}=E_F$. In a 1D geometry, only backscattering events contribute to electron relaxation rates. When the electron concentration permits the occupation of multiple subbands, the relaxation rate in the $m^{th}$ subband is obtained as $1/\tau_m(k)=\sum_{n} 1/\tau^{mn}(k)$, where $k$ is the initial momentum. At low temperatures, it is valid to only consider the relaxation time for an electron with Fermi wavenumber $k_F$. Making this approximation, we substitute the Fermi wavenumber in each subband for $k$. The average relaxation time is given by $\tau=\sum_i \tau_i n_i/n$, where $n_i$ is the population of the $i^{th}$ subband, leading to an average electron mobility $\mu=e\tau/m^*$. Figure~\ref{fig5}(b) shows the Fermi wavenumbers of the first few radial subbands calculated from the Schrodinger-Poisson solutions for input donor densities $\sigma^{+}_{ss}(T)$. The first excited subband appears near 40 K, producing a dip in the average wavenumber $\langle k_F \rangle$. The sharp drop in Fermi velocity as temperature is lowered below 40 K strongly increases the ionized impurity scattering rate, which causes a drop in mobility.
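The final population-weighted average is simple enough to spell out. A sketch of that last step follows (with placeholder relaxation times and populations; the real $\tau_i$ come from equation~\ref{rate}).
\begin{verbatim}
# Subband-averaged mobility mu = e*tau/m*, with tau = sum_i tau_i n_i / n.
import numpy as np
from scipy.constants import e, m_e

M_STAR = 0.023 * m_e  # InAs effective mass

def average_mobility(taus, populations):
    """taus: per-subband relaxation times (s); populations: densities n_i."""
    taus = np.asarray(taus, dtype=float)
    n_i = np.asarray(populations, dtype=float)
    tau_avg = np.sum(taus * n_i) / np.sum(n_i)
    return e * tau_avg / M_STAR  # m^2 V^-1 s^-1

# Illustrative numbers only (two occupied subbands):
mu = average_mobility(taus=[2e-13, 0.8e-13], populations=[1.0, 0.35])
print(mu * 1e4, "cm^2/Vs")  # order 10^4 cm^2/Vs, as observed
\end{verbatim}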
\\ \indent We performed the scattering calculations in two ways: (i) calculating integrals $M^{mn}_{k,k'}$ for the electron wavefunction and scattering potential over the entire length of the $L=1$ $\mu$m nanowire, and (ii) restricting the problem to a subsection of the nanowire of length $l < L$. Method (ii) is motivated by the fact that the experimentally observed mobilities suggest a mean free path $l_{mf} \sim 100-200$ nm or less \cite{dayeh2007c}, so that on average, we expect an electron traversing the nanowire to experience several uncorrelated scattering events. In the latter picture, the scattering rate $\tau^{-1}$ is calculated from the $T^{mn}_{k,k'}$ for the electron wavefunction restricted to a length $l$ comparable to the mean free path, and the scattering rate for the entire length of the nanowire is $L/l$ times this rate. On the other hand, the probability for the electron to be in any one subsection is $l/L$, so these factors cancel. The only difference between the two cases is that the 1D density of states $g$, which appears in the evaluation of equation~\ref{rate}, is proportional to the subsection length $l$. Hence, for an electron treated quantum mechanically on a length scale $l$ (but classically on larger length scales), the density of states to scatter into is lower than if the wavefunction were spread across length $L$, increasing the calculated mobility. Therefore a factor $L/l$ larger density of scatterers is required in calculation (ii) relative to (i) in order to produce the same calculated mobility. \\ \begin{figure*}[!t] \includegraphics[width= 15cm]{fig6.png} \caption{(a) Geometry used for calculating scattering from a random distribution of surface charges for a nanowire of total length $L =$ 1 $\mu$m and diameter $D = $50 nm. The total scattering rate is obtained by calculating the scattering matrix elements over the entire nanowire in method (i), or by calculating the matrix elements over a subsection of length $l$ and incoherently adding the rates from all $L/l$ sections in method (ii). (b) Poisson potential $V_C$ corresponding to the surface charge distribution in (a), projected onto a plane along the axis of the nanowire. (c) Comparison of the experimental mobilities (device 2) and the mobilities calculated using method (i) (the results using method (ii) are nearly identical). (d) The densities of surface charges $N(T)$ that produce the calculated mobilities in (c) for both methods. The subsection lengths $l$ used in method (ii), loosely identified with the mean free path, are shown on the right axis.} \label{fig6} \end{figure*} \indent The results of these calculations are shown in figure~\ref{fig6}: (d) shows the density of scatterers $N$ obtained by calculations (i) and (ii) that reproduce the experimental mobilities. In calculation (ii), a variable subsection length $l$ was chosen such that $N(T)\approx \sigma^{+}_{ss}(T)$; these $l$ values are plotted on the right axis. The calculated mobilities from (ii) are shown in figure~\ref{fig6}(c) in comparison with the experimental values. A three-fold increase of $N$ over the range 40-150 K is able to explain the observed decrease in mobility with temperature for both calculation methods. Furthermore, the density of scatterers is nearly a perfect match to the assumed ionized surface donor density for method (ii). It is reasonable to expect that the increase of $N$ with temperature results from the thermally activated ionization of surface donor states.
Confinement also plays a role in this temperature dependence, since higher radial subbands contribute to a larger electron concentration near the surface, with a correspondingly increased scattering rate. However, for a fixed $N$, this confinement effect is too small to cause a negative slope of the mobility-versus-temperature curve. We find that interband scattering plays a very limited role, giving at most a correction of order $10\%$ to the scattering rates. As expected, the positive slope of mobility below 40 K follows the behaviour of the average Fermi velocity (figure~\ref{fig5}(b)) over the same temperature range, where only the lowest radial subband is occupied. Overall, the simulation results confirm that scattering from charged surface states at densities typical of InAs can explain the magnitude and temperature dependence of the experimental mobilities. \\ \begin{figure*}[!t] \includegraphics[width= 14cm]{fig7.png} \caption{Stacking fault density and reduced mobility. Peak field-effect mobilities (left) and post-measurement TEM images (right) for device 3 ($D = $35 nm) and a low-mobility $D = $55 nm nanowire FET device. Stacking faults are indicated by the red arrows; at least $7$ faults can be seen in the $D = $55 nm nanowire, compared to only one visible fault in the $D = $35 nm nanowire. The nanowires are imaged along the [$2 \bar{1} \bar{1} 0$] zone axis so that all planar defects will be visible. The solid lines show power law fits to $T^{-0.4}$ and $T^{-0.3}$ for the 35 nm and 55 nm devices, respectively. No faults were observed along the entire channel for device 1 (average diameter 71 nm).} \label{fig7} \end{figure*} \section{Structural defects and mobility} \label{faults} \indent Finally, we studied the relationship between structure and mobility by performing post-measurement TEM on selected devices; this was motivated by the observation that a fraction of devices displayed significantly lower mobilities than were typical for a given nanowire diameter. A Focused Ion Beam (FIB) was used to remove devices from the substrate, after which they were placed on a holey carbon TEM grid for inspection. Indeed, it was observed that a $55$ nm diameter nanowire with low mobility $\sim 1,000$ cm$^2$/Vs had a high linear density of stacking faults, at least $\sim$($70$ nm)$^{-1}$, as shown in figure \ref{fig7}. In contrast, the highest mobility device we measured, device 1, had no visible faults along the entire channel length. Device 3 ($D=35$ nm) was found to have only one visible fault, as shown in figure \ref{fig7}, and better mobility than the $D=55$ nm device, despite having a smaller diameter. The magnitude and temperature dependence of mobility appear to be greatly reduced in the $D=55$ nm device due to the high density of stacking faults. Wurtzite InAs has a $\sim 20\%$ larger bandgap than zincblende InAs \cite{bao2009}, so that for electrons, stacking faults correspond to potential wells that may be as deep as $\sim 70$ meV. Since these are planar defects, the reflection coefficient for an incoming plane wave can be a sizable fraction of unity. On the other hand, we cannot obtain theoretical mobilities as low as $\sim 1,000$ cm$^2$/Vs from a simple 1D model of square well potentials at the linear defect density observed here. It is possible that the longer zincblende sections may contain bound states that trap electrons \cite{Wallentin2012}, leading to Coulomb scattering.
Gap states that trap charges locally can arise at dislocations \cite{Ebert2001}; however, there are no mechanisms within the VLS growth method through which dislocations could form for the bare (111) oriented InAs nanowires studied here. A stacking fault is simply a rotation of the tetrahedral coordination for one monolayer, which leaves the lattice four-fold covalently bonded and free of distortion. Further investigation is required to clarify the origin of the surprisingly low mobilities seen here. Importantly, the low fault densities observed in devices 1 and 3, together with the characteristic mobility temperature dependence in figure~\ref{fig4}, rule out the possibility of stacking faults being responsible for the turnover in mobility below 50 K. \\ \section{Discussion} \indent While the data and modelling in sections 3 and 4 are consistent with a dominant role of positively charged surface states as scatterers, it is also possible that negatively charged impurities, such as native oxide charge traps \cite{Salfi2011}, might play a role. Negative charges produce stronger scattering potentials \cite{Salfi2012}, so that a relatively small number of impurities could limit the electron mobility. On the other hand, we observe that the pinchoff threshold voltage shifts to more positive values as temperature is reduced, but more positive gate voltages should lead to \emph{higher} occupation of negative traps. Furthermore, if oxide charge traps limited mobility, then we would expect much higher mobilities in epitaxial core-shell nanowires, where the oxide surface is $10-20$ nm away from the core. Somewhat higher mobilities were observed in those nanowires \cite{tilburg2010}, but only by a factor $\sim 1.4$ compared to the best results with unpassivated nanowires reported elsewhere \cite{ford2009} and in the present work. We suspect this improvement in mobility is due to passivation of surface states rather than moving oxide charge traps further away from the channel. Further experiments on chemically and epitaxially passivated nanowires may test this hypothesis. A related concern is the possibility of scattering due to electrostatic fields from trapped charges in the underlying SiO$_2$ substrate. This cannot be firmly ruled out from the present data, but could be addressed by future experiments on suspended FET devices. Surface roughness scattering might also limit the mobility, and it is not clear from the literature what temperature dependence to expect, although there is some indication it should be weak \cite{nag1980}. From high-resolution TEM we estimate a typical roughness less than 2-3 monolayers for these nanowires. We expect surface roughness to play a more significant role in the low mobility of the accumulation layer than in limiting the mobility of the bulk conduction electrons. Especially at low temperature and close to pinchoff, the electron distribution is predominantly in the center of the nanowire, with vanishing probability at the surface. Hence, Coulomb scattering should dominate over neutral defects like surface roughness if the density of surface charges is sufficiently high ($\sim10^{11} - 10^{12}$ cm$^{-2}$). At low temperatures we must also consider the Coulomb interaction between electrons that form `puddles' in a disordered potential, i.e. charging effects. This might provide an alternate explanation for the observed mobility drop below 50 K.
However, we have recently observed an opposite trend in InAs-In$_{0.8}$Al$_{0.2}$As core-shell nanowires \cite{Holloway2013a}, in which the mobility continues to increase as temperature is lowered, despite the fact that strong, qualitatively similar Coulomb oscillations appear below $\sim 10$ K in both types of nanowire. We ascribe the difference in mobility behaviour to a reduction of InAs surface states by the epitaxial passivation. \\ \section{Conclusion} \indent In conclusion, our data and numerical simulations support the hypothesis that ionized impurity scattering by charged surface states dominates the peak electron mobility in low defect density InAs nanowires across a wide range of temperatures. Transport measurements show a ubiquitous turnover in the temperature-dependent mobility below $\sim$50 K. The behaviour above 50 K can be explained by a thermally activated increase in the number of ionized scatterers. These results on pure InAs nanowires provide a benchmark to compare with the transport behaviour of heteroepitaxial core-shell nanowires or nanowires with stable chemical passivation. Additionally, post-transport TEM measurements show that a high stacking fault density, observed in a small fraction of these nanowires, leads to sharply reduced mobilities and a weaker temperature dependence. \\ \ack{We thank the Canadian Centre for Electron Microscopy, Julia Huang and Fred Pearson for technical help with FIB and TEM; the Centre for Emerging Device Technologies and Sharam Tavakoli for assistance with MBE; Om Patange and David G. Cory for use of AFM; the QNC Nanofabrication facility and its supporting agencies. This work benefitted from discussions with Milad Khoshnegar, Daryoush Shiri and Mohammad Ansari. This research was supported by NSERC, the Ontario Research Fund, the Canada Foundation for Innovation and Industry Canada. G. W. H. acknowledges a WIN Fellowship.} \section*{References} \bibliographystyle{iopart-num}
{ "timestamp": "2013-05-27T02:00:23", "yymm": "1210", "arxiv_id": "1210.3665", "language": "en", "url": "https://arxiv.org/abs/1210.3665" }
\section{Introduction} \label{sec:Introduction} \def\D{\mathrm{d}} \def\E{\mathrm{e}} \def\I{\mathrm{i}} While the LHC methodically examines the energy scale of the electroweak theory and above, it is time to recall the two criteria for evaluating a physical theory mentioned by A. Einstein~\cite{Einstein:1949}. The first point of view is obvious: a theory must not contradict empirical facts; it is called the ``external confirmation''. The test of this criterion, both for the standard model and for its various extensions, is now under way at the LHC. The second point of view, called the ``inner perfection'' of the theory, may be very important for refining the search area for new physics. All existing experimental data in particle physics are in good agreement with the standard model predictions. However, there exist problems which cannot be resolved within the standard model, so it is obviously not a complete or final theory. It is unquestionable that the standard model should be the low-energy limit of some higher symmetry. The question is what this symmetry could be. And the main question is what the mass scale of this symmetry restoration is. A gloomy prospect is the restoration of this higher symmetry at once, on a very high mass scale, the so-called gauge desert. A concept of consecutive symmetry restoration is much more attractive. In this case it looks natural to suppose a correspondence between the hierarchy of symmetries and the hierarchy of the mass scales of their restoration. Now we are on the first step of some stairway of symmetries, and we try to guess what the next one could be. If we consider some well-known higher symmetries from this point of view, two questions are pertinent. First, isn't supersymmetry~\cite{Nilles:1984}, as the symmetry of bosons and fermions, higher than the symmetry within the fermion sector, namely, the quark-lepton symmetry~\cite{Pati:1974}, or the symmetry within the boson sector, namely, the left-right symmetry~\cite{Lipmanov:1967,Lipmanov:1968a,Lipmanov:1968,Beg:1977}? Second, wouldn't the supersymmetry restoration be connected with a higher mass scale than the others? The recent searches for supersymmetry carried out at the Tevatron and LHC colliders~\cite{Portell_Bueso:2011} have shown no significant deviations from the standard model predictions; the vast parameter space available for supersymmetry has been substantially reduced, and the most probable scenarios predicted by electroweak precision tests are now excluded or constrained by the new stringent limits. We would like to analyse the possibility that the quark-lepton symmetry is the next step beyond the standard model. Along with the ``inner perfection'' argument for this theory, there exists direct evidence in its favor. The puzzle of fermion generations is recognized as one of the most outstanding problems of present particle physics, and may be the main justification for the need to go beyond the standard model. Namely, the cancellation of triangle axial anomalies, which is necessary for the standard model to be renormalizable, requires that fermions be grouped into generations. This association provides the equation $\sum_f \, T_{3f} \, Q_f^2 = 0$, where the summation is taken over all fermions of a generation, both quarks of three colors and leptons, $T_{3f}$ is the third component of the weak isospin, and $Q_f$ is the electric charge of a fermion.
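This cancellation is elementary to check by direct substitution; the following short script (an illustrative sketch added here, not part of the original analysis) verifies the condition with exact rational arithmetic for a single generation.
\begin{verbatim}
# Check of the anomaly-cancellation condition sum_f T3_f Q_f^2 = 0
# for one fermion generation: three colors of up- and down-type
# quarks plus the neutrino and the charged lepton.
from fractions import Fraction as F

fermions = [  # (T3, Q, multiplicity)
    (F(1, 2),  F(2, 3),  3),  # up-type quark, 3 colors
    (F(-1, 2), F(-1, 3), 3),  # down-type quark, 3 colors
    (F(1, 2),  F(0),     1),  # neutrino
    (F(-1, 2), F(-1),    1),  # charged lepton
]
total = sum(n * t3 * q**2 for t3, q, n in fermions)
assert total == 0  # 2/3 - 1/6 + 0 - 1/2 = 0
\end{verbatim}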
Due to this equation, the divergent axial-vector part of the triangle $Z \gamma \gamma$ diagram with a fermion loop vanishes. The model in which the combination of quarks and leptons into generations looked the most natural, proposed by J.C. Pati and A. Salam~\cite{Pati:1974}, was based on the quark-lepton symmetry. The lepton number was treated in the model as the fourth color. As the minimal gauge group realizing this symmetry, one can consider the semi-simple group $SU(4)_V \otimes SU(2)_L \otimes G_R$. To begin with, one can take the group $U(1)_R$ as $G_R$. The fermions were combined into the fundamental representations of the $SU(4)_V$ subgroup, the neutrinos with the \emph{up} quarks and the charged leptons with the \emph{down} quarks: \begin{equation} \left ( \begin{array}{c} u^1 \\ u^2 \\ u^3 \\ \nu \end{array} \right )_i \, , \qquad \left ( \begin{array}{c} d^1 \\ d^2 \\ d^3 \\ \ell \end{array} \right )_i \, , \qquad i=1,2,3 \dots \, (?) \,, \label{eq:q} \end{equation} where the superscripts 1,2,3 number colors and the subscript $i$ numbers fermion generations, i.e. $u_i$ denotes $u, c, t, \dots$ and $d_i$ denotes $d, s, b, \dots$. The left-handed fermions form fundamental representations of the $SU(2)_L$ subgroup: \begin{equation} \left ( \begin{array}{c} u^c \\ d^c \end{array} \right )_L \, , \qquad \left ( \begin{array}{c} \nu \\ \ell \end{array} \right )_L \, . \label{eq:d} \end{equation} One should keep in mind that, when considering the mass eigenstates, it is necessary to take into account the mixing of the fermion states~\eqref{eq:q}, \eqref{eq:d}, to be analysed below. Let us recall that such an extension of the standard model has a number of attractive features. \begin{enumerate} \item As mentioned above, a definite quark-lepton symmetry is necessary in order that the standard model be renormalizable: the cancellation of triangle anomalies requires that fermions be grouped into generations. \item There is no proton decay, because the lepton charge, treated as the fourth color, is strictly conserved. \item The rigid assignment of quarks and leptons to the representations~\eqref{eq:q} leads to a natural explanation of the fractional quark hypercharge. Indeed, the traceless 15-th generator $T_{15}^V$ of the $SU(4)_V$ subgroup can be represented in the form \begin{equation} T_{15}^V=\sqrt{\frac{3}{8}} \; \text{diag}\left(\frac{1}{3}\,,\,\frac{1}{3}\,,\,\frac{1}{3}\,,\,-1\right) =\sqrt{\frac{3}{8}} \; Y_V \,. \label{eq:T15} \end{equation} It is remarkable that the values of the standard model hypercharge of the left-handed quarks and leptons combined into the $SU(2)_L$ doublets turn out to be placed on the diagonal. Let us call it the vector hypercharge, $Y_V$, and assume that it belongs to both the left- and right-handed fermions. \item Let us suppose that $G_R = U(1)_R$. The well-known values of the standard model hypercharge of the left- and right-handed, \emph{up} and \emph{down} quarks and leptons are: \begin{equation} Y_{SM} \; = \; \left \{ \begin{array}{c} \left (\begin{array}{c} \frac{1}{3} \\ \\ \frac{1}{3} \end{array} \right ) \quad \mbox{for} \; q_L ; \\ \\ \left (\begin{array}{c} \frac{4}{3} \\ \\ -\frac{2}{3} \end{array} \right ) \quad \mbox{for} \; q_R ; \end{array} \begin{array}{c} \left (\begin{array}{c} - 1 \\ \\ - 1 \end{array} \right ) \quad \mbox{for} \; \ell_L \\ \\ \left (\begin{array}{c} 0 \\ \\ - 2 \end{array} \right ) \quad \mbox{for} \; \ell_R \end{array} \right \}.
\label{eq:Y} \end{equation} Then, from the equation $Y_{SM} = Y_V + Y_R$, taking Eq.~\eqref{eq:T15} into account, one obtains that the values of the right hypercharge $Y_R$ turn out to be equal to $\pm 1$ for the \emph{up} and \emph{down} fermions respectively, both quarks and leptons. It is tempting to interpret this circumstance as an indication that the right hypercharge is twice the third component of the right-handed isospin. Thus, the subgroup $G_R$ may be $SU(2)_R$. \end{enumerate} ``Under these circumstances one would be surprised if Nature had made no use of it'', as P.~Dirac wrote on another occasion~\cite{Dirac:1931}. The most exotic object of the Pati--Salam type symmetry is the charged and colored gauge $X$ boson named the leptoquark. Its mass $M_X$ should be the scale of the breaking of $SU(4)_V$ to $SU(3)_c$. Bounds on the vector leptoquark mass are obtained both directly and indirectly, see Ref.~\cite{Beringer:2012}. The direct search\cite{Aaltonen:2008_LQ} for vector leptoquarks using $\tau^+ \tau^- b \bar b$ events in $p \bar p$ collisions at $E_{cm} = 1.96$ TeV has provided a lower mass limit at a level of 250--300 GeV, depending on the coupling assumed. Much more stringent indirect limits are calculated from the bounds on the leptoquark-induced four-fermion interactions, which are obtained from low-energy experiments. There is an extensive series of papers where such indirect limits on the vector leptoquark mass were estimated, see e.g. Refs.~\cite{Shanker:1982,Deshpande:1983,Leurer:1994b,Davidson:1994,Valencia:1994,Kuznetsov:1994, Kuznetsov:1995,Smirnov:1995a,Smirnov:1995b,Smirnov:2007,Smirnov:2008}. The most stringent bounds\cite{Beringer:2012} were obtained from the data on the $\pi \to e \nu$ decay and from the upper limits on the $K_L^0 \to e \mu$ and $B^0 \to e \tau$ decays. However, those estimations were not comprehensive, because the phenomenon of mixing in the lepton-quark currents was not considered there. It will be shown that such mixing inevitably occurs in the theory. An important part of the model under consideration is its scalar sector, which also contains exotic objects such as scalar leptoquarks. We do not address here the scalar sector, which could be much more ambiguous than the gauge one. Such an analysis can be found e.g. in Refs.~\cite{Smirnov:2007,Smirnov:2008,Leurer:1994a}. The paper is organized as follows. In Sec.~\ref{sec:Mixing}, it is argued that three types of fermion mixing inevitably arise at the loop level if initially fermions are taken without mixing. The effective four-fermion Lagrangian caused by the leptoquark interactions with quarks and leptons is presented in Sec.~\ref{sec:Lagrangian}. In Sec.~\ref{sec:Constraints}, we update the constraints on the parameters of the scheme which were obtained in our recent paper~\cite{Kuznetsov:2012} on the basis of data from different low-energy processes which are strongly suppressed or forbidden in the standard model. The constraint on the vector leptoquark mass is updated in Sec.~\ref{sec:Update} on the basis of new data from the CMS and LHCb Collaborations on the rare decays $B^0_{d,s} \to \mu^+ \mu^-$~\cite{CMS:2012,LHCb:2012,Gushchin:2012}. \section{The third type of fermion mixing} \label{sec:Mixing} As a result of the Higgs mechanism in the Pati--Salam model, fractionally charged colored gauge $X$ bosons, the vector leptoquarks, appear. Leptoquarks are responsible for transitions between quarks and leptons.
The scale of the breakdown of the $SU(4)_V$ symmetry to $SU(3)_c$ is the leptoquark mass $M_X$. The three fermion generations are grouped into the following $\{4,2\}$ representations of the $SU(4)_V\otimes SU(2)_L$ group: \begin{equation} \begin{pmatrix} u^c & d^c\\ \nu & \ell \end{pmatrix}_i~\left(i=1,2,3\right), \label{eq:mixing} \end{equation} where $c$ is the color index, omitted in what follows. It is known that there exists mixing of quarks in the weak charged currents, described by the Cabibbo--Kobayashi--Maskawa matrix. Therefore, at least one of the states in \eqref{eq:mixing}, $u$ or $d$, is not diagonal in mass. It can easily be seen that, because of mixing that arises at the loop level, none of the components is generally a mass eigenstate. As usual, we assume that all the states in \eqref{eq:mixing}, with the exception of $d$, are initially diagonal in mass. This leads to nondiagonal transitions $\ell \to X + d (s,b) \to \ell^\prime$ through a quark-leptoquark loop, see Fig.~1. As this diagram is divergent, the corresponding counterterm should exist at the tree level. This means that the lepton states $\ell$ in \eqref{eq:mixing} are not the mass eigenstates, and there is mixing in the lepton sector. Other nondiagonal transitions arise in a similar way. Hence, in order that the theory be renormalizable, it is necessary to introduce all kinds of mixing even at the tree level. \begin{figure} \begin{center} \includegraphics*[width=0.3\textwidth]{X_loop.eps} \caption{Feynman diagram illustrating the appearance of fermion mixings.} \end{center} \label{fig:X_loop} \end{figure} As all the fermion representations are identical, they can always be regrouped in such a way that one state is diagonal in mass. The most natural way is to diagonalize the charged leptons. In this case, the fermion representations can be written in the form \begin{equation} \begin{pmatrix} u & d\\ \nu & \ell \end{pmatrix}_{\ell}= \begin{pmatrix} u_e & d_e\\ \nu_e & e \end{pmatrix},~ \begin{pmatrix} u_\mu & d_\mu\\ \nu_\mu & \mu \end{pmatrix},~ \begin{pmatrix} u_\tau & d_\tau\\ \nu_\tau & \tau \end{pmatrix}. \label{eq:repr1} \end{equation} Here, the quark and neutrino subscripts $\ell=e,\mu,\tau$ label the states which are not mass eigenstates and which enter the same representation as the charged lepton $\ell$: \begin{equation} \nu_{\ell}=\sum_i {\cal K}_{{\ell} i}\nu_i \,, \quad u_{\ell}=\sum_p {\cal U}_{{\ell} p}u_p\,, \quad d_{\ell}=\sum_n {\cal D}_{{\ell} n}d_n \,. \label{eq:repr2} \end{equation} Here, ${\cal K}_{{\ell}i}$ is the unitary Pontecorvo--Maki--Nakagawa--Sakata leptonic mixing matrix. The matrices ${\cal U}_{{\ell}p}$ and ${\cal D}_{{\ell}n}$ are the unitary mixing matrices in the interactions of leptoquarks with the \emph{up} and \emph{down} fermions respectively, both quarks and leptons. The states $\nu_i,~u_p$ and $d_n$ are the mass eigenstates: \begin{equation} \begin{aligned} &\nu_i=\left(\nu_1,\nu_2,\nu_3\right),\\ &u_p=\left(u_1,u_2,u_3\right)=\left(u,c,t\right),\\ &d_n=\left(d_1,d_2,d_3\right)=\left(d,s,b\right). \end{aligned} \label{eq:repr3} \end{equation} Thus, there are generally three types of mixing in this scheme.
In our notation, the well-known Lagrangian describing the interaction of the charged weak currents with $W$ bosons takes the form \begin{eqnarray} {\cal L}_W &=& \frac{g}{2\sqrt{2}}\left[\left(\bar{\nu}_{\ell} O_\alpha \ell \right)+\left(\bar{u}_{\ell} O_\alpha d_{\ell} \right)\right] W^\dagger_\alpha + \text{h.c.} \nonumber\\ &=&\frac{g}{2\sqrt{2}}\left[{\cal K}^*_{\ell i}\left(\bar{\nu}_i O_\alpha \ell \right) + {\cal U}^*_{\ell p} {\cal D}_{\ell n} \left(\bar{u}_p O_\alpha d_n \right)\right]W^\dagger_\alpha + \text{h.c.}, \label{eq:Lagr_W} \end{eqnarray} where $g$ is the coupling constant of the $SU(2)_L$ group and $O_\alpha=\gamma_\alpha\left(1-\gamma_5\right)$. It follows that the standard Cabibbo--Kobayashi--Maskawa matrix is $V={\cal U}^\dagger {\cal D}$. This is the only available information about the matrices ${\cal U}$ and ${\cal D}$ of mixing in the leptoquark sector. The matrix ${\cal K}$, describing mixing in the lepton sector, is the subject of intensive experimental studies. Following the spontaneous breakdown of the $SU(4)_V$ symmetry to $SU(3)_c$ on the scale of $M_X$, six massive vector bosons, forming three charged colored leptoquarks, decouple from the 15-plet of gauge fields. The interaction of these leptoquarks with fermions has the form \begin{equation} {\cal L}_X=\frac{g_S\left(M_X\right)}{\sqrt{2}} \left[ {\cal D}_{\ell n} \left(\bar{\ell} \gamma_\alpha d^c_n\right) + \left({\cal K}^\dagger {\cal U}\right)_{ip} \left(\bar{\nu}_i \gamma_\alpha u^c_p \right) \right] X^c_\alpha + \text{h.c.}, \label{eq:LagDen} \end{equation} where the color superscript $c$ is written explicitly once again. The coupling constant $g_S\left(M_X\right)$ is expressed in terms of the strong-interaction constant $\alpha_S$ on the scale of the leptoquark mass $M_X$ as $g_S^2\left(M_X\right)/4\pi=\alpha_S\left(M_X\right)$. \section{Effective Lagrangian with allowance for QCD corrections} \label{sec:Lagrangian} If the momentum transfer satisfies the condition $q^2\ll M_X^2$, the Lagrangian \eqref{eq:LagDen} leads to an effective four-fermion vector-vector interaction between quarks and leptons. By applying the Fierz transformation, we can isolate the lepton and quark currents (scalar, pseudoscalar, vector and axial-vector) in the effective Lagrangian. In constructing the effective Lagrangian of the leptoquark interactions, it is necessary to take into account the QCD corrections, which can easily be estimated, see e.g. Refs.~\cite{Vainstein,Vysotskii}. In the case under study, we can use the leading-logarithm approximation because $\ln\left(M_X/\mu\right)\gg1$, where $\mu\sim1~\text{GeV}$ is the typical hadronic scale. As a result of taking the QCD corrections into account, the scalar and pseudoscalar coupling constants acquire the enhancement factor \begin{equation} Q\left(\mu\right) = \left(\frac{\alpha_S\left(\mu\right)}{\alpha_S\left(M_X\right)}\right)^{4/\bar{b}}, \label{eq:EnFact} \end{equation} where $\alpha_S\left(\mu\right)$ is the strong-interaction constant on the scale $\mu$, $\bar{b}=11-\frac{2}{3}\,\bar{n}_f$, and $\bar{n}_f$ is the mean number of quark flavors on the scales $\mu^2\leq q^2\leq M_X^2$; for $M_X^2\gg m_t^2$, we have $\bar{b}\simeq7$. In what follows, we investigate the contribution of the leptoquark interaction Lagrangian \eqref{eq:LagDen} to low-energy processes and find constraints on the parameters of the scheme from the available experimental data.
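To give a feeling for the size of this enhancement, the sketch below evaluates $Q(\mu)$ with one-loop running of $\alpha_S$; the value of $\alpha_S$ at the hadronic scale and the choice $M_X \sim 40$ TeV are illustrative assumptions of ours, not inputs of the analysis.
\begin{verbatim}
# Rough size of the enhancement factor Q(mu) in the
# leading-logarithm approximation, with b_bar = 7.
import math

alpha_MZ, M_Z = 0.118, 91.19   # alpha_S(M_Z), M_Z in GeV
alpha_mu = 0.5                 # assumed alpha_S at mu ~ 1 GeV
M_X = 4.0e4                    # GeV; of the order of the bounds below
b_bar = 7.0

# one-loop running: 1/alpha(M_X) = 1/alpha(M_Z) + b/(2 pi) ln(M_X/M_Z)
alpha_MX = 1.0 / (1.0 / alpha_MZ
                  + b_bar / (2.0 * math.pi) * math.log(M_X / M_Z))
Q = (alpha_mu / alpha_MX) ** (4.0 / b_bar)
print(alpha_MX, Q)   # roughly alpha_S(M_X) ~ 0.066 and Q ~ 3
\end{verbatim}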
As the analysis shows, the most stringent constraints on the vector-leptoquark mass $M_X$ and on the elements of the mixing matrix ${\cal D}$ follow from the data on rare $\pi$- and $K$-meson decays. Possible constraints on the masses and coupling constants of vector leptoquarks from experimental data on rare $\pi$ and $K$ decays were analyzed in Refs.~\cite{Shanker:1982,Deshpande:1983,Leurer:1994b,Davidson:1994,Valencia:1994,Kuznetsov:1994, Kuznetsov:1995,Smirnov:1995a,Smirnov:1995b,Smirnov:2007,Smirnov:2008}. One approach\cite{Shanker:1982,Leurer:1994b,Davidson:1994} was based on using phenomenological model-independent Lagrangians describing the interactions of leptoquarks with quarks and leptons. The Pati--Salam quark-lepton symmetry was considered in Refs.~\cite{Deshpande:1983,Valencia:1994,Kuznetsov:1994,Kuznetsov:1995,Smirnov:1995a, Smirnov:1995b,Smirnov:2007,Smirnov:2008}. QCD corrections were included in the analysis in Refs.~\cite{Valencia:1994,Kuznetsov:1994,Kuznetsov:1995}. The authors of Ref.~\cite{Valencia:1994} considered the possibility of mixing in the quark-lepton currents, but they analyzed only specific cases in which each charged lepton is associated with one quark generation. In our notation, this corresponds to the matrices ${\cal D}$ that are obtained from the unit matrix by making all possible permutations of columns. In the description of the $\pi$- and $K$-meson interactions, it is sufficient to retain only the scalar and pseudoscalar coupling constants in the effective Lagrangian. Indeed, these couplings are more significant in the amplitudes because they are enhanced, first, by the QCD corrections and, second, by the smallness of the current-quark masses arising in the denominators of the amplitudes. The corresponding part of the effective Lagrangian can be represented as \begin{eqnarray} \Delta{\cal L}_{\pi,K} &=& -\frac{2\pi\alpha_S\left(M_X\right)}{M_X^2} \, Q \left(\mu\right) \left[{\cal D}_{\ell n} \left( {\cal U}^\dagger {\cal K}\right)_{pi} \left(\bar{\ell} \gamma_5 \nu_i \right) \left(\bar{u}_p\gamma_5d_n\right) + \text{h.c.} - \left( \gamma_5 \to 1 \right)\right] \nonumber\\ &-&\frac{2\pi\alpha_S\left(M_X\right)}{M_X^2} \, Q \left(\mu\right) \bigg[{\cal D}_{\ell n} {\cal D}^*_{\ell^\prime n^\prime} \left(\bar{\ell} \gamma_5 \ell^\prime \right) \left( \bar{d}_{n^\prime} \gamma_5 d_n \right) \nonumber\\ &+& \left({\cal K}^\dagger {\cal U}\right)_{ip}\left({\cal U}^\dagger {\cal K}\right)_{p^\prime i^\prime}\left(\bar{\nu}_i\gamma_5\nu_{i^\prime}\right)\left(\bar{u}_{p^\prime}\gamma_5u_p\right)-\left(\gamma_5\to1\right) \bigg]. \label{eq:LagPiK} \end{eqnarray} This Lagrangian contributes to the rare $\pi$, $K$, $\tau$ and $B$ decays, which are strongly suppressed or forbidden in the standard model. For the $\tau$ and $B$ decays, this Lagrangian is not sufficient, and a part with the product of axial-vector currents should be added. \section{Constraints on the parameters of the scheme from low-energy processes} \label{sec:Constraints} In our recent paper~\cite{Kuznetsov:2012}, we performed a detailed analysis of a large set of experimental data on different low-energy processes which are strongly suppressed or forbidden in the standard model, and constraints on the vector leptoquark mass were obtained. In Table~\ref{tab:1}, the most stringent constraints of Ref.~\cite{Kuznetsov:2012} are summarized.
All the constraints involve the elements of the unknown unitary mixing matrix ${\cal D}$: \begin{equation} {\cal D}_{\ell n} = \begin{pmatrix} {\cal D}_{e d} & {\cal D}_{e s} & {\cal D}_{e b}\\[2mm] {\cal D}_{\mu d} & {\cal D}_{\mu s} & {\cal D}_{\mu b}\\[2mm] {\cal D}_{\tau d} & {\cal D}_{\tau s} & {\cal D}_{\tau b} \end{pmatrix}. \label{Ddef} \end{equation} The possibility that the constraints on the vector leptoquark mass $M_X$ are much weaker than the numbers in Table~\ref{tab:1} was analysed in Ref.~\cite{Kuznetsov:2012}. The case was considered in which the elements ${\cal D}_{ed}$ and ${\cal D}_{es}$ are small enough to eliminate the strongest restriction, arising from the limit on the decays $K^0_L \to e^\pm \mu^\mp$. For evaluation, these elements were taken to be zero. Given the unitarity of the matrix ${\cal D}$, this meant that ${\cal D}_{eb} = 1$, and ${\cal D}_{\mu b} = {\cal D}_{\tau b} = 0$. The remaining $(2 \times 2)$-matrix was parameterized by one angle. The insertion of a phase factor allowed one to eliminate the restriction arising from the limit on $Br(K^0_L \to \mu^+ \mu^-)$, which contains the real part of a product of ${\cal D}$ matrix elements. The ${\cal D}$ matrix was taken in the form: \begin{equation} {\cal D}_{\ell n} \simeq \begin{pmatrix} 0 & 0 & 1~\\[2mm] \cos \varphi & ~\text{i} \sin \varphi~ & 0~\\[2mm] ~\text{i} \sin \varphi & \cos \varphi & 0~ \end{pmatrix}. \label{eq:Dfin} \end{equation} \begin{table}[ht] \caption{Constraints on the leptoquark mass and on the elements of the ${\cal D}$ matrix from experimental data on rare decays.} \begin{center} {\begin{tabular}{lcl} \\ \hline \\ Experimental limit & Ref. & Bound \\ \\ \hline \\ \bigskip $Br(K^0_L \to e^\pm \mu^\mp) < 4.7 \times 10^{-12}$ & \cite{AMBROSE98B} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|{\cal D}_{ed} {\cal D}^*_{\mu s}+{\cal D}_{es} {\cal D}^*_{\mu d}|^{1/2}$}} \, > \, 2100~\textrm{TeV}$ \\ \bigskip $Br(K^0_L \to \mu^+ \mu^-) = (6.84\pm0.11) \times 10^{-9}$ & \cite{AMBROSE00,ALEXOPOULOS04} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|\text{Re}({\cal D}_{\mu d} {\cal D}^*_{\mu s})|^{1/2}$}} \, > \, 1100~\textrm{TeV}$ \\ \bigskip $Br(B^0\to e^+ \mu^-)<6.4\times10^{-8}$ & \cite{Aaltonen:2009} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|{\cal D}_{\mu d} {\cal D}_{e b}|^{1/2}$}} \, > \, 55~\textrm{TeV}$ \\ \bigskip $Br(B^0\to\mu^+\mu^-)<1.5\times10^{-8}$ & \cite{Aaltonen:2008} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|{\cal D}_{\mu d} {\cal D}_{\mu b}|^{1/2}$}} \, > \, 79~\textrm{TeV}$ \\ \bigskip $Br(B^0_s\to e^+ \mu^-)<2.0\times10^{-7}$ & \cite{Aaltonen:2009} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|{\cal D}_{\mu s} {\cal D}_{e b}|^{1/2}$}} \, > \, 41~\textrm{TeV}$ \\ \bigskip $Br(B^0_s\to\mu^+\mu^-)<4.2\times10^{-8}$ & \cite{Abazov:2010} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|{\cal D}_{\mu s} {\cal D}_{\mu b}|^{1/2}$}} \, > \, 61~\textrm{TeV}$ \\ \hline \end{tabular} \label{tab:1}} \end{center} \end{table} The constraints on the vector leptoquark mass and the angle $\varphi$ arising from Table~\ref{tab:1} took the form: \newline i) $B^0\to e^+ \mu^-$ \begin{equation} M_X > 55~\textrm{TeV} \, |\cos \varphi|^{1/2} \,, \label{fin3} \end{equation} ii) $B^0_s\to e^+ \mu^-$ \begin{equation} M_X > 41~\textrm{TeV} \, |\sin \varphi|^{1/2} \,.
\label{fin4} \end{equation} Combining these constraints, the limit on the vector leptoquark mass was obtained~\cite{Kuznetsov:2012}: \begin{equation} M_X > 38~\textrm{TeV} \,. \label{eq:finMX} \end{equation} \section{Different mixings for left-handed and right-handed fermions} \label{sec:Different} We have considered the possibility that the quark-lepton symmetry is the next step beyond the standard model. Then the left-right symmetry, which is believed to exist in Nature, should be restored at a higher mass scale. But this means that the left-right symmetry should already be broken at the scale $M_X$. It is therefore worthwhile to consider the matrices ${\cal D}^{(L)}, {\cal U}^{(L)}$ and ${\cal D}^{(R)}, {\cal U}^{(R)}$, which in the general case are different for left-handed and right-handed fermions. This possibility and some of its consequences were also considered in Refs.~\cite{Smirnov:1995a,Smirnov:1995b,Smirnov:2007,Smirnov:2008}. The interaction Lagrangian of leptoquarks with fermions then takes, instead of Eq.~\eqref{eq:LagDen}, the form: \begin{eqnarray} {\cal L}_X &=& \frac{g_S\left(M_X\right)}{2 \sqrt{2}} \bigg[ {\cal D}^{(L)}_{\ell n} \left(\bar{\ell} O_\alpha d_n \right) + {\cal D}^{(R)}_{\ell n} \left(\bar{\ell} O_\alpha^\prime d_n \right) \nonumber\\ &+& \left( {\cal K}^{(L)\dagger} {\cal U}^{(L)} \right)_{ip} \left(\bar{\nu}_i O_\alpha u_p \right) + \left( {\cal K}^{(R)\dagger} {\cal U}^{(R)} \right)_{ip} \left(\bar{\nu}_i O_\alpha^\prime u_p \right) \bigg] X_\alpha + \text{h.c.}, \label{Lagr_LR} \end{eqnarray} where $O_\alpha=\gamma_\alpha\left(1-\gamma_5\right)$, $O_\alpha^\prime=\gamma_\alpha\left(1+\gamma_5\right)$. The constraints on the model parameters from experimental data on rare $\pi$ and $K$ decays in the case of different mixings take the forms presented in Table 5 of Ref.~\cite{Kuznetsov:2012}. If one wishes to reduce the limits on $M_X$ presented there from thousands and hundreds to tens of TeV by varying the elements of the ${\cal D}^{(L)}$ and ${\cal D}^{(R)}$ matrices, the elements ${\cal D}^{(L)}_{e d}$ and ${\cal D}^{(R)}_{e d}$ should be taken small in any case. If one takes them to be zero for evaluation, the strongest restriction, from the limit on $Br(K^0_L \to e^\pm \mu^\mp)$, acquires the form: \begin{equation} \frac{M_X} {\left(\left|{\cal D}^{(L)}_{es} {\cal D}^{(R)}_{\mu d}\right|^2 + \left|{\cal D}^{(R)}_{es} {\cal D}^{(L)}_{\mu d}\right|^2 \right)^{1/4}} > 1770~\textrm{TeV} \,. \label{LR_2} \end{equation} There are two possibilities to eliminate this bound, which we call the symmetric and the asymmetric cases. \emph{The symmetric case} is realized when both of the matrices ${\cal D}^{(L)}$ and ${\cal D}^{(R)}$ are taken in the form of Eq.~\eqref{eq:Dfin} with the angles $\varphi_L$ and $\varphi_R$. In this case the restriction from the limit on $Br(K^0_L \to \mu^+ \mu^-)$ takes the form: \begin{equation} M_X > 780~\textrm{TeV} \, |\sin \left( \varphi_L - \varphi_R \right)|^{1/2} \,. \label{LR_3} \end{equation} To eliminate this bound, the angles should be close to each other or differ by $\pi$; in either case we come back to the result~\eqref{eq:finMX}. \emph{The asymmetric case} is realized when the matrices are taken in the form: \begin{equation} {\cal D}^{(L)}_{\ell n} \simeq \begin{pmatrix} ~0 & \cos \chi_L & ~\sin \chi_L~\\[2mm] ~0 & - \sin \chi_L & ~\cos \chi_L~\\[2mm] ~1 & 0 & 0 \end{pmatrix} , \quad {\cal D}^{(R)}_{\ell n} \simeq \begin{pmatrix} ~0~~ & 0~~ & 1~~\\[2mm] ~0~~ & 1~~ & 0~~\\[2mm] ~1~~ & 0~~ & 0~~ \end{pmatrix} .
\label{LR_4} \end{equation} As the analysis shows~\cite{Kuznetsov:2012}, the most stringent constraints arise from the following limits on the branching ratios of the processes: \newline i) $B^0_s \to \mu^+ \mu^-$ \begin{equation} M_X > 51~\textrm{TeV} \, |\cos \chi_L|^{1/2} \,, \label{LR_6} \end{equation} ii) $B^0_s \to e^+ \mu^-$ \begin{equation} M_X > 41~\textrm{TeV} \, |\sin \chi_L|^{1/2} \,. \label{LR_7} \end{equation} From these constraints, the limit on the vector leptoquark mass from low-energy processes in the case of different mixing matrices for left-handed and right-handed fermions was obtained~\cite{Kuznetsov:2012}; it coincides, to good accuracy, with the limit \eqref{eq:finMX} obtained in the left-right-symmetric case: \begin{equation} M_X > 38~\textrm{TeV} \,. \label{LR_8} \end{equation} \section{Updated constraints from the LHC data} \label{sec:Update} The update of the constraint on the vector leptoquark mass is based on new data from the CMS and LHCb Collaborations on the rare decays $B^0_{d,s} \to \mu^+ \mu^-$~\cite{CMS:2012,LHCb:2012,Gushchin:2012}, which are presented in Table~\ref{tab:2}. \begin{table}[ht] \caption{Constraints on the model parameters from new data of the CMS and LHCb Collaborations on the rare decays $B^0_{d,s} \to \mu^+ \mu^-$ (90\% C.L.).} \begin{center} {\begin{tabular}{lcl} \\ \hline \\ Experimental limit & Ref. & Bound \\ \\ \hline \\ \bigskip $Br(B^0\to\mu^+\mu^-)<1.4\times10^{-9}$ & CMS~\cite{CMS:2012} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|{\cal D}_{\mu d} {\cal D}_{\mu b}|^{1/2}$}} \, > \, 143~\textrm{TeV}$ \\ \bigskip $Br(B^0_s\to\mu^+\mu^-)<6.4\times10^{-9}$ & CMS~\cite{CMS:2012} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|{\cal D}_{\mu s} {\cal D}_{\mu b}|^{1/2}$}} \, > \, 98~\textrm{TeV}$ \\ \bigskip $Br(B^0\to\mu^+\mu^-)<0.81\times10^{-9}$ & LHCb~\cite{LHCb:2012} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|{\cal D}_{\mu d} {\cal D}_{\mu b}|^{1/2}$}} \, > \, 164~\textrm{TeV}$ \\ \bigskip $Br(B^0_s\to\mu^+\mu^-)<3.8\times10^{-9}$ & LHCb~\cite{LHCb:2012} & $\frac{\mbox{\footnotesize $M_X$}} {\mbox{\footnotesize $|{\cal D}_{\mu s} {\cal D}_{\mu b}|^{1/2}$}} \, > \, 112~\textrm{TeV}$ \\ \hline \end{tabular} \label{tab:2}} \end{center} \end{table} These new data improve the constraints obtained in the asymmetric case~\eqref{LR_4}; namely, the data of the LHCb Collaboration on the decay $B^0_s \to \mu^+ \mu^-$ provide, instead of~\eqref{LR_6}: \begin{equation} M_X > 94~\textrm{TeV} \, |\cos \chi_L|^{1/2} \,. \label{LR_LHCb1} \end{equation} Combining this bound with Eq.~\eqref{LR_7}, one obtains the final limit on the vector leptoquark mass in the case of different mixing matrices for left-handed and right-handed fermions: \begin{equation} M_X > 41~\textrm{TeV} \,. \label{LR_LHCb2} \end{equation} \section{Conclusion} Thus, the detailed analysis of the available experimental data on rare decays yields constraints on the vector leptoquark mass that always involve the elements of the unknown mixing matrix ${\cal D}$. Combining the strongest constraints from the experimental data on low-energy processes, presented in Tables~\ref{tab:1} and~\ref{tab:2}, we have obtained, in the case of identical mixings for left-handed and right-handed fermions, the following lowest limit on the vector leptoquark mass: $M_X > 38~\textrm{TeV}$.
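Each combined limit above results from minimizing, over the unknown mixing angle, the stronger of two angle-dependent bounds. As a numerical sanity check, a few lines (a sketch of ours, using only the coefficients of the bounds quoted above) reproduce the quoted values:
\begin{verbatim}
# Combined mass limit from a pair of bounds of the form
#   M_X > a |cos phi|^{1/2}  and  M_X > b |sin phi|^{1/2}:
# the weakest simultaneous bound over the angle phi.
import math

def combined_limit(a, b, steps=200000):
    grid = (0.5 * math.pi * k / steps for k in range(steps + 1))
    return min(max(a * abs(math.cos(p)) ** 0.5,
                   b * abs(math.sin(p)) ** 0.5) for p in grid)

print(combined_limit(55.0, 41.0))  # ~38 TeV (symmetric case)
print(combined_limit(94.0, 41.0))  # ~41 TeV (asymmetric case, LHCb data)
\end{verbatim}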
The lowest limit obtained in the asymmetric case~\eqref{LR_4} of different mixing matrices for left-handed and right-handed fermions appears to be: $M_X > 41~\textrm{TeV}$. \section*{Acknowledgements} A.K. and N.M. express their deep gratitude to the organizers of the Seminar ``Quarks-2012'' for warm hospitality. We thank A.\,V.~Povarov and A.\,D.~Smirnov for useful discussions. The study was performed within the State Assignment for Yaroslavl University (Project No.~2.4176.2011), and was supported in part by the Russian Foundation for Basic Research (Project No.~11-02-00394-a).
{ "timestamp": "2012-10-16T02:01:43", "yymm": "1210", "arxiv_id": "1210.3697", "language": "en", "url": "https://arxiv.org/abs/1210.3697" }
\section{Introduction} \label{Section:Intro} The classical approach to analyzing the cellular downlink is to model the network as a lattice or regular grid of base stations \cite{lee:1986}. By using the geometry of the grid, which typically assumes a honeycomb-like arrangement of hexagons, along with models for the underlying propagation conditions and channel reuse strategies, performance metrics can be computed at various potential mobile locations. Often, the analysis focuses on the worst-case locations, which are at the cell edges. The hexagonal grid model was used to analyze the other-cell interference (OCI) of a power-controlled direct-sequence code-division multiple access (DS-CDMA) downlink in \cite{viterbi:1994}. Although conceptually simple and locally tractable, the grid assumption is a poor model for actual base-station deployments, which cannot follow a regular grid structure due to a variety of regulatory and physical constraints. Recently, a new approach to the analysis of cellular networks has been proposed in \cite{andrews:2011} and \cite{dec:2010}, which models the base-station locations as a realization of a random point process, thereby allowing the use of analytical tools from stochastic geometry \cite{stoyan:1996}. This approach can be used to determine the performance of a typical mobile user. By combining the effects of random base-station location, fading, and shadowing into a single random variable, performance metrics such as coverage probability (the complement of the outage probability) and average achievable rate can be determined. Under certain limitations, the performance metrics can be found in closed form with surprisingly simple expressions, and these expressions can provide insight into the influence of key network and channel parameters such as the path-loss exponent, the density of base stations, and the minimum signal-to-interference-and-noise ratio (SINR) required to achieve acceptable coverage. More recently, this approach has been extended to account for multi-tier heterogeneous networks \cite{dhillon:2012}. The grid-based and random-spatial approaches represent two extreme perspectives in modeling cellular networks. Whereas the grid-based approach overly constrains the base-station locations, a pure random-spatial approach is similarly unrealistic because it does not place enough constraints on the base-station placements. For instance, the spatial model in \cite{andrews:2011} assumes that the base stations are drawn from a two-dimensional Poisson point process (PPP) and that the network extends infinitely on the Euclidean plane. This is a poor model for several reasons. The first is that no network has an infinite area. The second is that any realization of a PPP could have an arbitrarily large number of base stations placed within a finite area. Finally, the pure PPP model does not permit a minimum separation between base stations, which is characteristic of actual macro-cellular deployments. In a typical modern cellular system, each base station is allocated a maximum total transmit power, which must be shared among the users in its cell through some resource allocation policy. Simple spatial models do not adequately capture the nuances of power allocation policies. For instance, even a simple equal-share power allocation policy is difficult to model analytically because the power allocated to a given user depends on the number of mobiles in the same cell.
As a complement to power control, systems often use rate control to ensure that the rate provided to each user is maximized under a constraint on outage probability. While average achievable rate is considered in \cite{andrews:2011}, it is computed under the assumption that the rate provided to each mobile perfectly adapts to the instantaneous SINR such that the outage is zero. This overly optimistic assumption is not realistic for current rate-control implementations, which adapt the modulation and coding scheme to provide a particular outage probability (typically 0.1) when averaged over the fading, but conditioned on the network realization. In this paper, we use a constrained random spatial approach to model the DS-CDMA downlink. The spatial model places a fixed number of base stations within a region of finite extent. The model enforces a minimum separation among the base stations. The model for base-station placement is a binomial point process (BPP) with repulsion, which we call a {\em uniform clustering} model. To facilitate the study of resource allocation policies that depend on the mobile locations, the mobiles are also placed according to a uniform clustering process with a higher density and smaller minimum separation than that used to place the base stations. The analysis in this paper is driven by a new closed-form expression, recently published in \cite{torrieri:2012}, for the {\em conditional} outage probability at each mobile, where the conditioning is with respect to the network realization. The approach involves drawing realizations of the network according to the desired spatial and shadowing models, and then computing the outage probability at each realized mobile location. Because the outage probability at each mobile is averaged over the fading, it can be found in closed form with no need to simulate the corresponding channels. A Nakagami-m fading model is assumed, which models a wide class of channels, and the Nakagami-m fading parameters do not need to be identical for all communication links. This is a useful feature that can be used to model situations where the base station serving a mobile is in the line of sight (LOS) while the interfering base stations are non-LOS. By averaging over many network realizations, the mean outage probability can be found for a typical mobile. However, characterizing only the average performance is of limited utility, and the approach presented in this paper allows the analysis to extend beyond the ergodic performance of the typical mobile user. For instance, by averaging over the mobiles that are farthest from their base station, the performance of cell-edge mobiles can be determined. More generally, the outage probability of each mobile can be constrained, and the statistics of the rate provided to each user can be determined under various power-control and rate-control policies. By plotting the complementary cumulative distribution function (ccdf) of the per-user rate, the variability in mobile performance can be visualized. The remainder of this paper is organized as follows. Section \ref{Section:SystemModel} presents a model of the network culminating in an expression for SINR. Section \ref{Section:Outage} provides an expression for the outage probability based on the analysis published in \cite{torrieri:2012}. Section \ref{Section:Policies} discusses policies for rate control and power control. 
A performance analysis is given in Section \ref{Section:Performance}, which compares power control with rate control on the basis of average rate, transmission capacity, and fairness. The Section also investigates the influence of the minimum base-station separation. Finally, the paper concludes in Section \ref{Section:Conclusion}. \section{Network Model} \label{Section:SystemModel} The network comprises $M$ cellular base stations $\{X_1, ..., X_M\}$ and $K$ mobiles $\{ Y_1, ..., Y_K\}$ placed on a disk of radius $r_{net}$ and area $A_{net} = \pi r^2_{net}$. The variable $X_i$ represents both the $i^{th}$ base station and its location, and similarly, $Y_j$ represents the $j^{th}$ mobile and its location. An {\em exclusion zone} of radius $r_{bs}$ surrounds each base station, and no other base stations are allowed within this zone. Similarly, an exclusion zone of radius $r_{m}$ surrounds each mobile, and no other mobiles are allowed within a placed mobile's exclusion zone. Fig. \ref{Figure:FigA} shows a portion of an example network with average number of mobiles per cell $K/M = 16$, a base-station exclusion radius $r_{bs} = 0.25$, and a mobile exclusion radius $r_m = 0.01$. The base station locations are given by the large filled circles, and the mobiles are dots. A Voronoi tessellation shows the cell boundaries that occur in the absence of shadowing. \begin{figure}[t] \centering \hspace{-0.25cm} \includegraphics[width=8.25cm]{FigA} \vspace{-0.25cm} \caption{ Close-up of an example network topology. Base stations are represented by large filled circles, and mobiles by small dots. Cell boundaries are indicated, and the minimum base-station separation is $r_{bs} = 0.25$. The average cell load is $K/M = 16$ mobiles. \label{Figure:FigA} } \vspace{-0.6cm} \end{figure} Each mobile connects to at most one base station. Let $\mathcal Y_i$ be the set of mobiles connected to base station $X_i$, and $K_i = | \mathcal Y_i |$ be the number of mobiles served by $X_i$. Furthermore, let $\mathsf{g}(j)$ be a function that returns the index of the base station serving $Y_j$ so that $Y_j \in \mathcal Y_i$ if $\mathsf{g}(j)=i$. If $Y_j$ cannot connect to any base station, which is possible when a cell runs out of available channels, then $\mathsf{g}(j) = 0$. The downlink signals use orthogonal DS-CDMA sequences with common spreading factor $G$. Because the sequences transmitted by a particular base station are orthogonal, the only source of intracell interference is due to multipath and the corresponding loss of orthogonality. However, if the ratio of the maximum-power and minimum-power multipath components is sufficiently small (e.g., less than about $0.1G$), then the multipath components will have negligible effect. For this reason, we neglect the intracell interference and assume that intercell interference is the only source of interference. The signal is transmitted by base station $X_i$ to mobile $Y_{j}$ with average power $P_{i,j}$. We assume that the base stations transmit with a common power $P_0$ such that \begin{eqnarray} \frac{1}{1-f_p} \sum_{j: Y_j \in \mathcal Y_i} P_{i,j} & = & P_0 \label{eqn:pwr_constraint} \end{eqnarray} for each $i$, where $f_{p}$ is the fraction of the base-station power reserved for pilot signals needed for synchronization and channel estimation. Power allocation strategies are considered later in this paper. Using spreading sequences with a spreading factor of $G$ directly reduces the power of the intercell interference. 
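Before turning to the interference terms, we note that the uniform clustering placement described above is straightforward to realize as sequential rejection sampling; the following minimal sketch (function names and the random-number interface are our own choices) generates a topology like that of Fig.~\ref{Figure:FigA}.
\begin{verbatim}
# Sketch of the uniform clustering model: candidate points are
# drawn uniformly on the disk of radius r_net and rejected if they
# fall inside the exclusion zone of a previously placed point.
import numpy as np

def uniform_clustering(n, r_net, r_excl, rng=None):
    rng = rng or np.random.default_rng()
    pts = []
    while len(pts) < n:
        r = r_net * np.sqrt(rng.uniform())      # uniform over the disk
        th = rng.uniform(0.0, 2.0 * np.pi)
        p = np.array([r * np.cos(th), r * np.sin(th)])
        if all(np.linalg.norm(p - q) >= r_excl for q in pts):
            pts.append(p)
    return np.array(pts)

# example with the radii used in this paper (M = 50, K/M = 16)
base_stations = uniform_clustering(50, r_net=2.0, r_excl=0.25)
mobiles = uniform_clustering(800, r_net=2.0, r_excl=0.01)
\end{verbatim}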
While the intracell sequences transmitted by a particular cell's base station are synchronous, the varying propagation delays from the other base stations cause the intercell interference to be asynchronous. Because of this asynchronism, the intercell interference is further reduced by the chip factor $h(\tau_{i,k})$, which is a function of the chip waveform and the timing offset $\tau_{i,k}$ at the mobile between the signal received from interfering base station $X_k$ and the signal received from serving base station $X_i$. When the timing offset $\tau_{i,k}$ is assumed to have a uniform distribution over the chip interval, the expected value of $h(\tau_{i,k})$ is 2/3 \cite{torrieri:2011}. It is assumed henceforth that $G/h(\tau_{i,k})$ is a constant equal to $G/h$ at each mobile in the network. The power of $X_i$ received at $Y_j$ also depends on the fading and path-loss models. We assume that the path loss has a power-law dependence on distance and is perturbed by shadowing. When accounting for fading and path loss, the despread instantaneous power of $X_i$ at mobile $Y_j$ is \begin{eqnarray} \rho_{i,j} & = & \begin{cases} {P}_{i,j} g_{i,j} 10^{\xi_{i,j}/10} f\left( ||X_{i}-Y_{j}||\right) & \mbox{if $\mathsf{g}(j) = i$} \vspace{0.2cm} \\ \left( \frac{h}{G} \right) {P}_{i,j} g_{i,j} 10^{\xi_{i,j}/10} f\left( ||X_{i}-Y_{j}||\right) & \mbox{if $\mathsf{g}(j) \neq i$} \end{cases} \nonumber \\ \label{eqn:power}% \end{eqnarray} where $g_{i,j}$ is the power gain due to fading, $\xi_{i,j}$ is a \textit{shadowing factor}, and $f(\cdot)$ is a path-loss function. The \{$g_{i,j}\}$ are independent with unit mean, and $g_{i,j}=a_{i,j}^{2}$, where $a_{i,j}$ is Nakagami with parameter $m_{i,j}$. While the $\{g_{i,j}\}$ are independent from mobile to mobile, they are not necessarily identically distributed, and each mobile can have a distinct Nakagami parameter $m_{i,j}$. When the channel between $X_i$ and $Y_j$ experiences Rayleigh fading, $m_{i,j}=1$ and $g_{i,j}$ is exponentially distributed. In the presence of log-normal shadowing, the $\{\xi_{i,j}\}$ are i.i.d. zero-mean Gaussian with variance $\sigma_{s}^{2}$. In the absence of shadowing, $\xi_{i,j}=0$. For $d \geq d_{0}$, the path-loss function is expressed as the attenuation power law \begin{equation} f\left( d\right) =\left( \frac{d}{d_{0}}\right) ^{-\alpha} \label{eqn:pathloss}% \end{equation} where $\alpha\geq2$ is the attenuation power-law exponent, and $d_{0}$ is assumed to be sufficiently large that the signals are in the far field. The base station $X_{\mathsf{g}(j)}$ that serves mobile $Y_j$ is selected to be the one with index \begin{eqnarray} \mathsf{g} (j) & = & \underset{i}{\operatorname{argmax}} \, \left\{ 10^{\xi_{i,j}/10} f\left( ||X_{i}-Y_{j}||\right) \right\} \label{eqn:connectivity} \end{eqnarray} which is the base station with the minimum path loss to $Y_j$. In the absence of shadowing, it will be the base station that is closest to $Y_j$. In the presence of shadowing, a mobile may actually be associated with a base station that is more distant than the closest one, if the shadowing conditions are sufficiently better. We assume a maximum of $G$ orthogonal spreading sequences per cell, and once $G$ users are connected to a base station, no more can be served. The instantaneous SINR at mobile $Y_j$ is \begin{eqnarray} \gamma_j & = & \frac{\rho_{\mathsf{g}(j),j}}{\displaystyle{\mathcal{N}} + \mathop{ \sum_{i=1}^M }_{ i \neq \mathsf{g}(j) } \rho_{i,j}} \label{eqn:SINR1} \end{eqnarray} where $\mathcal{N}$ is the noise power.
Substituting (\ref{eqn:power}) and (\ref{eqn:pathloss}) into (\ref{eqn:SINR1}) yields \vspace{-0.3cm} \begin{eqnarray} \gamma_j & = & \frac{g_{\mathsf{g}(j),j}\Omega_{\mathsf{g}(j),j}} {\displaystyle\Gamma^{-1} + \frac{h}{G} \mathop{ \sum_{i=1}^M }_{ i \neq \mathsf{g}(j) } g_{i,j}\Omega_{i,j}} \label{Equation:SINR2} \end{eqnarray} where $\Gamma=d_{0}^{\alpha}P_{0}/\mathcal{N}$ is the signal-to-noise ratio (SNR) at a mobile located at unit distance when fading and shadowing are absent, and \vspace{-0.2cm} \begin{eqnarray} \Omega_{i,j} & = & \frac{ P_{i,j}}{P_0} 10^{\xi_{i,j}/10} ||X_i-Y_j||^{-\alpha} \label{eqn:omega}% \end{eqnarray} is the normalized power of $X_i$ at receiver $Y_j$ before despreading. \section{Outage Probability} \label{Section:Outage} \label{Section:OutageProbability} Let $\beta_j$ denote the minimum SINR required by $Y_j$ for reliable reception and $\boldsymbol{\Omega }_j=\{\Omega_{1,j},...,\Omega _{M,j}\}$ represent the set of normalized despread base-station powers received by $Y_j$. An \emph{outage} occurs when the SINR falls below $\beta_j$. As discussed subsequently, there is a relationship between the SINR threshold and the supported {\em rate} of the transmission. Conditioning on $\boldsymbol{\Omega }_j$, the outage probability of mobile $Y_j$ is \begin{eqnarray} \epsilon_j & = & P \left[ \gamma_j \leq \beta_j \big| \boldsymbol \Omega_j \right]. \label{Equation:Outage1} \end{eqnarray} Because it is conditioned on $\boldsymbol{\Omega }_j$, the outage probability depends on the particular network realization, which has dynamics over timescales that are much slower than the fading. By defining a variable \vspace{-0.25cm} \begin{eqnarray} \mathsf Z_j & = & \beta_j^{-1} g_{\mathsf{g}(j),j} \Omega_{\mathsf{g}(j),j} - \frac{h}{G}\mathop{ \sum_{i=1}^M }_{ i \neq \mathsf{g}(j) } \Omega_{i,j} \label{eqn:z} \end{eqnarray} the conditional outage probability may be expressed as \begin{eqnarray} \epsilon_j & = & P \left[ \mathsf Z_j \leq \Gamma^{-1} \big| \boldsymbol \Omega_j \right] = F_{\mathsf Z_j} \left( \Gamma^{-1} \big| \boldsymbol \Omega_j \right) \label{Equation:OutageCDF} \end{eqnarray} which is the cumulative distribution function (cdf) of $\mathsf Z_j$ conditioned on $\boldsymbol \Omega_j$ and evaluated at $\Gamma^{-1}$. Define $\bar{F}_{\mathsf{Z}_j}(z | \boldsymbol \Omega_j) = 1 - F_{\mathsf{Z}_j}(z | \boldsymbol \Omega_j)$ to be the complementary cdf of $\mathsf{Z}_j$ conditioned on $\boldsymbol \Omega_j$. Restricting the Nakagami parameter $m_{\mathsf{g}(j),j}$ between mobile $Y_j$ and its serving base station $X_{\mathsf{g}(j)}$ to be integer-valued, the complementary cdf of $\mathsf{Z}_j$ conditioned on $\boldsymbol{\Omega}_j$ is proved in \cite{torrieri:2012} to be \begin{eqnarray} \bar{F}_{\mathsf Z_j}\left( z \big| \boldsymbol \Omega_j \right) & = & e^{-\beta_0 z } \sum_{n=0}^{m_0-1} {\left( \beta_0 z \right)}^n \sum_{k=0}^n \frac{ z^{-k} H_k ( \boldsymbol \Psi )}{ (n-k)! 
} \label{Equation:NakagamiConditional} \end{eqnarray} where $m_0 = m_{\mathsf{g}(j),j}$, $\Omega_0 = \Omega_{\mathsf{g}(j),j}$, $\beta_0 = \beta_j m_0/\Omega_0$, \begin{eqnarray} \Psi_i & = & \left( \frac{\beta_0 h \Omega_{i,j}}{G m_{i,j} } + 1 \right)^{-1}\hspace{-0.5cm} \label{Equation:Psi}\\ H_k ( \boldsymbol \Psi ) & = & \mathop{ \sum_{\ell_i \geq 0}}_{\sum_{i=0}^{M}\ell_i=k} \left( \mathop{ \prod_{i=1}^M }_{ i \neq \mathsf{g}(j) } G_{\ell_i} ( \Psi_i ) \right), \label{Equation:Hfunc} \end{eqnarray} the summation in (\ref{Equation:Hfunc}) is over all sets of nonnegative indices that sum to $k$, and \begin{eqnarray} G_\ell( \Psi_i ) & = & \frac{\Gamma( \ell + m_{i,j} ) } {\ell! \Gamma( m_{i,j} ) } \left( \frac{\Omega_{i,j}}{ m_{i,j} } \right)^{\ell} \Psi_i^{ m_{i,j} +\ell}. \label{Equation:Gfunc} \end{eqnarray} \section{Policies}\label{Section:Policies} A key consideration in the operation of the network is the manner in which the total power $P_0$ transmitted by a base station is shared by the mobiles it serves, which influences the rates provided to each user. This section discusses two options for allocating rate and power, {\em rate control} and {\em power control}. \subsection{Rate Control} A simple and efficient way to allocate $P_0$ is with an {\em equal-share} policy, which involves base station $X_i, i=\mathsf{g}(j)$, transmitting to mobile $Y_j$ with power \begin{eqnarray} P_{i,j} & = & \frac{P_0}{K_i(1-f_p)}. \label{Equation:Share} \end{eqnarray} Under this policy, the SINR will vary dramatically from mobile to mobile. If a common SINR threshold $\beta_j$ is used by all mobiles, then the outage probability will likewise be highly variable. Instead of using a fixed threshold, the threshold $\beta_j$ of mobile $Y_j$ can be selected such that the outage probability of mobile $Y_j$ satisfies the constraint $\epsilon_j = \hat{\epsilon}$. A constraint of $\hat{\epsilon}=0.1$ is typical and appropriate for modern systems that use a hybrid automatic repeat request (HARQ) protocol. For a given $\beta_j$, there is a corresponding transmission rate $R_j$ that can be supported. Let $R_j = C(\beta_j)$ represent the relationship between $R_j$, expressed in units of bits per channel use (bpcu), and $\beta_j$. The relationship depends on the modulation and coding schemes used, and typically only a discrete set of $R_j$ can be supported. While the exact dependence of $R_j$ on $\beta_j$ can be determined empirically through tests or simulation, we make the simplifying assumption when computing our numerical results that $C(\beta_j) = \log_2(1+\beta_j)$, corresponding to the Shannon capacity. This assumption is fairly accurate for capacity-approaching codes with a large number of possible rates, and it is therefore reasonable for modern cellular systems, which use turbo codes with a large number of available rates. With an equal power share, the number of mobiles $K_i$ in the cell is first determined, and then the power share given to each is found from (\ref{Equation:Share}). For each mobile in the cell, the corresponding $\beta_j$ that achieves the outage constraint $\epsilon_j = \hat{\epsilon}$ is found by inverting the outage probability expression given in Section \ref{Section:Outage}. Once $\beta_j$ is found, the corresponding $R_j$ is found by using the function $R_j = C(\beta_j)$. The rate $R_j$ is adapted by changing the number of information bits per channel symbol. The processing gain $G$ and symbol rate are held constant, so there is no change in bandwidth.
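This inversion can be sketched in a few lines; the following is an illustrative implementation (with our own function names, assuming integer $m_0$) of the conditional ccdf in (\ref{Equation:NakagamiConditional}), computing $H_k$ as a series coefficient by convolution, together with a bisection search for the threshold $\beta_j$.
\begin{verbatim}
# Conditional outage probability (closed form above) and the
# threshold inversion used by the rate-control policy.
import numpy as np
from math import exp, factorial, lgamma

def ccdf_Z(z, beta, m0, omega0, omega_int, m_int, h_over_G):
    """P[Z_j > z | Omega_j]; omega_int, m_int describe the interferers."""
    b0 = beta * m0 / omega0
    psi = [1.0 / (b0 * h_over_G * om / m + 1.0)
           for om, m in zip(omega_int, m_int)]
    # H_k = coefficient of x^k in prod_i sum_l G_l(Psi_i) x^l
    H = np.zeros(m0); H[0] = 1.0
    for om, m, p in zip(omega_int, m_int, psi):
        g = np.array([exp(lgamma(l + m) - lgamma(m) - lgamma(l + 1))
                      * (om / m) ** l * p ** (m + l) for l in range(m0)])
        H = np.convolve(H, g)[:m0]
    return exp(-b0 * z) * sum((b0 * z) ** n *
                              sum(z ** (-k) * H[k] / factorial(n - k)
                                  for k in range(n + 1))
                              for n in range(m0))

def threshold(eps_hat, m0, omega0, omega_int, m_int, h_over_G, Gamma):
    """Largest beta_j meeting the outage constraint (outage grows
    with beta, so simple bisection in log space suffices)."""
    outage = lambda b: 1.0 - ccdf_Z(1.0 / Gamma, b, m0, omega0,
                                    omega_int, m_int, h_over_G)
    lo, hi = 1e-9, 1e9
    for _ in range(100):
        mid = (lo * hi) ** 0.5
        lo, hi = (lo, mid) if outage(mid) > eps_hat else (mid, hi)
    return lo

# the rate then follows from the assumed Shannon mapping,
# R_j = log2(1 + beta_j)
\end{verbatim}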
Because this policy involves fixing the transmit power and then determining the rate for each mobile that satisfies the outage constraint, we refer to it as {\em rate control}. \subsection{Power Control} A major drawback of rate control is that the rates provided to the different users in the network will vary significantly. This variation will result in unfairness to some users, particularly those located at the edges of the cells. To ensure fairness, $R_j$ could be constrained to be the same for all $Y_j \in \mathcal Y_i$. The power transmitted to mobile $Y_j$ is then found such that the mobile has outage probability $\epsilon_j = \hat{\epsilon}$, while the total power provided to all mobiles in the cell satisfies the power constraint (\ref{eqn:pwr_constraint}). Because the power transmitted to each mobile is varied while holding the rate constant for all mobiles in the cell, we refer to this policy as {\em power control}. Note that, while the rate is the same for all users within a given cell, it may vary from cell to cell. In particular, the rate of a given cell is found by determining the value of $R$ that allows the outage constraint $\epsilon_j = \hat{\epsilon}$ and the power constraint (\ref{eqn:pwr_constraint}) to be simultaneously met. \section{Performance Analysis}\label{Section:Performance} Under outage constraint $\hat{\epsilon}$, the performance of a given network realization is largely determined by the set of achieved rates $\{R_j\}$ of the $K$ users in the network. Because the network realization is random, it follows that the set of rates is also random. Let the random variable $R$ represent the rate of an arbitrary user. The statistics of $R$ can be found for a given class of networks using a Monte Carlo approach as follows. Draw a realization of the network by placing $M$ base stations and $K$ mobiles within the disk of radius $r_{net}$ according to the uniform clustering model with minimum base-station separation $r_{bs}$ and minimum mobile separation $r_m$. Compute the path loss from each base station to each mobile, applying randomly generated shadowing factors if shadowing is present. Determine the set of mobiles associated with each base station. At each base station, apply the power allocation policy to determine the power it transmits to each mobile that it serves. By setting the outage equal to the outage constraint, invert the outage probability expression to determine the SINR threshold for each mobile in the cell. By applying the function $R_j=C(\beta_j)$, find the rate of the mobile. Repeat this process for a large number of networks, all with the same spatial constraints. Let $E[R]$ represent the mean value of the variable $R$, which can be found by numerically averaging the values of $R$ obtained using the procedure described in the previous paragraph. While $E[R]$ is a useful metric, it does not account for the loss in throughput due to the inability to successfully decode during an outage, and it does not account for the spatial density of transmissions. These considerations are taken into account by the \emph{transmission capacity}, defined as \cite{weber:2010} \begin{eqnarray} \tau & = & \lambda \left( 1 - \hat{\epsilon} \right) E[R] \label{eqn:tc} \end{eqnarray} where $\lambda = K/A_{net}$ is the density of transmissions in the network. Transmission capacity can be interpreted as the spatial intensity of transmissions; i.e., the rate of successful data transmission per unit area.
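A small illustrative helper (ours, not from the reference) shows how $\tau$ would be estimated from the Monte Carlo output:
\begin{verbatim}
# Transmission capacity tau = lambda (1 - eps_hat) E[R], with
# lambda = K / A_net and E[R] the average of the per-user rates
# pooled over realizations (unserved mobiles contribute R = 0).
import numpy as np

def transmission_capacity(rates, K, eps_hat, r_net):
    lam = K / (np.pi * r_net ** 2)   # density of transmissions
    return lam * (1.0 - eps_hat) * float(np.mean(rates))
\end{verbatim}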
As an example, consider a network with $M=50$ base stations placed in a network region of radius $r_{net} = 2$ with base-station exclusion zones of radius $r_{bs} = 0.25$. A variable number $K$ of mobiles are placed within the network using exclusion zones of radius $r_m = 0.01$. The outage constraint is set to $\hat{\epsilon} = 0.1$, and both power control and rate control are considered. A mobile in an overloaded cell is denied service. In particular, the $K_i - G$ mobiles whose path losses from the base station are greatest are denied service, in which case they do not appear in the set $\mathcal Y_i$ for any $i$, and their rates are set to $R_j = 0$. The SNR is set to $\Gamma = 10$ dB, the fraction of power devoted to pilots is $f_p = 0.1$, and the spreading factor is set to $G=16$ with chip factor $h=2/3$. The propagation environment is characterized by a path-loss exponent $\alpha = 3$, and the Nakagami factors are $m_{i,j} = 3$ for $i = \mathsf{g}(j)$ while $m_{i,j} = 1$ for $i \neq \mathsf{g}(j)$; i.e., the signal from the serving base station experiences milder fading than the signals from the interfering base stations. This is a realistic model because the signal from the serving cell is likely to be LOS, while the interfering cells typically are not. \begin{figure}[t] \centering \includegraphics[width=8.75cm]{FigB} \vspace{-0.55cm} \caption{ Transmission capacity of the Example as a function of $K/M$ with rate control and power control. \label{Figure:FigB} } \vspace{-0.65cm} \end{figure} Fig. \ref{Figure:FigB} shows the transmission capacity, as a function of the ratio $K/M$, of rate control and power control in an unshadowed environment as well as in the presence of shadowing with $\sigma_s = 8$ dB. The figure shows that the transmission capacity under rate control is higher than it is under power control. This disparity occurs because mobiles that are close to the base station can be allocated extremely high rates under rate control, while under power control mobiles close to the base station must be allocated the same rate as mobiles at the edge of the cell. As the network becomes denser ($K/M$ increases), shadowing actually improves the performance with rate control, while it degrades the performance with power control. This is because shadowing can sometimes cause the signal power of the base station serving a mobile to increase, while the powers of the interferers are reduced. The effect of favorable shadowing is equivalent to the mobile being located closer to its serving base station. When this occurs with rate control, the rate is increased, sometimes by a very large amount. While shadowing does not induce extremely favorable conditions very often, when it does, the improvement in rate is significant enough to cause the average to increase. On the other hand, a single user with favorable shadowing conditions operating under power control will not improve the average rate, because all mobiles in a cell receive the same rate. \begin{figure}[t] \centering \includegraphics[width=8.75cm]{FigC} \vspace{-0.55cm} \caption{ Average rate of the Example as a function of $K/M$ in the presence of shadowing with rate control and power control. For rate control, the averaging is done over all mobiles and over just the cell-edge mobiles. With power control, all mobiles in a cell are given the same rate.
\label{Figure:FigC} } \vspace{-0.5cm} \end{figure} While rate control offers a higher {\em average} rate than power control, the rates it offers are much more variable. This behavior can be seen in Fig. \ref{Figure:FigC}, which compares the rates of all users against those located at the cell edges. In particular, the figure shows the rate averaged across all mobiles for both power control and rate control, as well as the rate averaged across just the cell-edge mobiles for rate control, where the cell-edge mobiles are defined to be the 5 percent of the mobiles that are furthest from their serving base station. The average rate of cell-edge mobiles is not shown for power control because each cell-edge mobile has the same rate as all the mobiles in the same cell. As seen in the figure, the performance of cell-edge mobiles is worse with rate control than it is with power control. The fairness of a particular power allocation policy can be further visualized by plotting the ccdf of $R$, which is the probability that $R$ exceeds a threshold $r$; i.e., $P[R>r]$. Fig. \ref{Figure:FigD} shows the ccdf of $R$ for the Example with shadowing ($\sigma_s = 8$ dB) and either rate control or power control. Two system loads are considered: a lightly loaded system $(K/M=4)$ and a moderately-loaded system $(K/M=12)$. The ccdf curves for power control are steeper than they are for rate control, indicating less variability in the provided rates. The lower variability in rate corresponds to improved fairness. For instance, with a load $K/M = 4$, almost all (99.9\%) users are provided with rates of at least $r=0.5$ under power control, while with rate control only 96\% of the users are provided with rates of at least $r=0.5$, implying that a significant fraction of the users are provided with lower rates. \begin{figure}[t] \centering \includegraphics[width=8.75cm]{FigD} \vspace{-0.55cm} \caption{ Ccdf of $R$ with either rate control or power control for a lightly loaded system $(K/M=4)$ and a moderately-loaded system $(K/M=12)$. \label{Figure:FigD} } \end{figure} \balance \begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=8.75cm]{FigF} \caption{ Transmission capacity for rate and power control as a function of $r_{bs}$ for $K/M=8$ in mixed fading and shadowing ($\sigma_s$ = 8 dB). \label{Figure:FigF} } \end{figure} A key feature of the proposed analysis is that it permits a minimum spacing of $r_{bs}$ around each base station. The dependence of the transmission capacity on $r_{bs}$ is shown in Fig. \ref{Figure:FigF} for both rate control and power control with a system load of $K/M = 8$. Curves are shown for two values of the path-loss exponent: $\alpha = 3$ and $\alpha = 4$. Except for the values of $r_{bs}$ and $\alpha$, the parameters are the same as in the Example with shadowing. Increasing $r_{bs}$ improves the transmission capacity. The improvement is slightly more pronounced for rate control than for power control, and is more pronounced for the higher value of $\alpha$. \section{Conclusion} \label{Section:Conclusion} This paper has presented a powerful new approach for modeling and analyzing the cellular downlink. Unlike simple spatial models, the model in this paper allows constraints to be placed on the distance between base stations, the geographic footprint of the network, and the number of base stations and mobiles. The analysis features a flexible channel model, accounting for path loss, shadowing, and Nakagami-m fading with non-identical parameters.
The proposed analytical approach provides a way to compare various access and resource allocation techniques. As a specific application of the model, the paper models a direct-sequence CDMA network, analyzes it using realistic network parameters, and compares the performance of two resource allocation policies. While this paper only considers resource allocation policies that implement either rate control or power control, other policies that fall between these two extremes could be considered. While in this paper mobiles in overloaded cells were denied service, the work can be extended to analyze reselection schemes, whereby mobiles in overloaded cells attempt to connect to another nearby base station serving an underloaded cell. This work could be extended to analyze the uplink and to model other types of access, such as orthogonal frequency-division multiple access (OFDMA). It could furthermore be extended to handle sectorized cells and coordinated multipoint strategies involving transmissions from multiple base stations. \bibliographystyle{ieeetr}
{ "timestamp": "2012-10-16T02:01:06", "yymm": "1210", "arxiv_id": "1210.3667", "language": "en", "url": "https://arxiv.org/abs/1210.3667" }
\section{Introduction} The Whipple 10\,m $\gamma$-ray telescope is located at the Fred Lawrence Whipple Observatory in southern Arizona. Until June 2011, it operated in the range $0.3 - 10$ TeV and pioneered the Imaging Atmospheric \v{C}erenkov Telescope (IACT) technique for the detection of VHE $\gamma$-rays. The telescope was of Davies-Cotton design, consisting of a reflector and a camera at the focal plane to record the $\gamma$-ray images. The reflector was composed of 248 tessellated hexagonal mirrors mounted on a spherical surface with a total reflecting area of $\sim75$ m$^2$. The last camera in operation consisted of 379 PMTs and had a field of view of $\sim2.6^\circ$ with an angular resolution of $0.117^\circ$, as described in~\cite{whipple_specs}. The Crab Nebula, detected by the Whipple 10\,m telescope, was the first TeV source to be discovered~\citep{whipple_crab}. It has since been considered a standard candle of VHE $\gamma$-ray astronomy, as it is for X-ray and lower-energy $\gamma$-ray astronomy~\citep[e.g.,][]{x-ray_crab}. As the brightest source in the VHE sky, it was ideal for this role. In 2011, both AGILE~\citep{agile} and \emph{Fermi}-LAT~\citep{fermi_flare} reported the discovery of flaring activity in the Crab Nebula at MeV - GeV energies. The flares occur at a frequency of $\sim1-2$ per year, and have been observed to last between 4 and 15 days. ARGO-YBJ~\citep{argo} also reported enhanced flux at GeV - TeV energies from the Crab Nebula. These developments have motivated this search for VHE variability on short timescales, for which the atmospheric \v{C}erenkov technique is particularly well suited. \section{Analysis and results} A data set of Crab Nebula observations taken with the Whipple 10\,m telescope was compiled from the last 10 years ($2001 - 2011$). Data were taken in 28-minute observations with a standard experimental setup under good weather conditions with an elevation angle $>55^\circ$. Motivated by the flaring timescales reported by \emph{Fermi}-LAT and AGILE, the Whipple data set was searched for short-term variability on timescales of 7 and 14 days, as well as a shorter timescale of 1 day. A sliding window algorithm was developed to perform this analysis. The window is shifted along the data set night-by-night for each season. The significance of the emission in the window is calculated using the search window as the \texttt{ON} region and the rest of the season as the \texttt{OFF} region. A high significance value would indicate the presence of a flare. \begin{SCfigure} \centering \includegraphics{all_1-day_sigs_bw.eps} \caption{1-day search window significances for the 10 years of Whipple data analysed. No significant periods of elevated emission are present in this data set.} \label{sigs} \end{SCfigure} Figure~\ref{sigs} shows the window significances for a search timescale of 1 day for the 10 observing seasons. The highest significance detected in a 1-day window was $3.42\sigma$ pre-trials, corresponding to a post-trials probability of $0.078$ ($\sim2.07\sigma$)~\citep[e.g.,][]{trials}. Both 7- and 14-day search windows yielded lower post-trials maximum significances. Thus, there is no evidence for strong VHE flaring activity on these timescales in the current data set.
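To illustrate the sliding-window search, the following Python sketch computes a significance for every window position in a single season of nightly data. It is a minimal sketch under simplifying assumptions: the per-night event counts and exposures are hypothetical inputs, and the statistic is the simple Gaussian-limit ON/OFF excess significance rather than the exact prescription used in the analysis.
\begin{verbatim}
import numpy as np

def window_significances(counts, exposures, w):
    # For each w-night window, use the window as the ON region and the
    # rest of the season as the OFF region; return the simple ON/OFF
    # excess significance (N_on - alpha*N_off)/sqrt(N_on + alpha^2*N_off).
    counts = np.asarray(counts, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    sigs = []
    for i in range(len(counts) - w + 1):
        n_on = counts[i:i + w].sum()
        t_on = exposures[i:i + w].sum()
        n_off = counts.sum() - n_on
        t_off = exposures.sum() - t_on
        alpha = t_on / t_off               # ON/OFF exposure ratio
        excess = n_on - alpha * n_off
        var = n_on + alpha**2 * n_off      # Poisson variance of the excess
        sigs.append(excess / np.sqrt(var))
    return np.array(sigs)

# Example: one 30-night season with a constant true rate (no flare),
# so the window significances fluctuate around zero.
rng = np.random.default_rng(0)
expo = rng.uniform(0.5, 1.5, size=30)      # nightly exposures
cnts = rng.poisson(10.0 * expo)            # nightly event counts
print(window_significances(cnts, expo, w=7).round(2))
\end{verbatim}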
\begin{SCfigure} \centering \includegraphics{sigs_1-day_all.eps} \caption{Distribution of 1-day window significances, obtained by histogramming the data shown in Figure~\ref{sigs} and fitting with a Gaussian.} \label{datasigdist} \end{SCfigure} Figure~\ref{datasigdist} shows the distribution of window significances obtained with a 1-day search timescale for the 10-year data set. A similar distribution was created for each search window timescale, and fit with a Gaussian. The variances of the Gaussian fits to the significance histograms are not consistent with 1.0, indicating that the observed variations are not solely due to statistical fluctuations. Randomising the dates of the observations and reanalysing the ``shuffled'' data preserves the width of the distribution, showing that the width is independent of the temporal configuration of the data. \section{Simulations} A Monte Carlo simulation was developed to test whether the widths of the significance histograms are consistent with statistical fluctuations, given the overlapping search windows. 18,000 nights of observations were simulated with the same source sampling as the real data and analysed. It was found that significance histograms produced with the three search timescales from the simulated data have variances very close to 1.0. 25,000 individual data sets, equivalent in length and sampling to the observational data, were then simulated and analysed, and a distribution of the variances was produced. A variance of 1.3 (as seen in the real data, see Figure~\ref{datasigdist}) was observed in only two cases. This clearly points to a non-statistical source of the broadness of the data distributions. Varying observation angles and atmospheric changes are likely the main contributing factors. The simulation was adapted to simulate a single flare of known length and emission within an otherwise standard data set. The data sampling was adjusted to ensure one simulated run per night for the duration of the flare, while still maintaining random sampling in the rest of the data set. This idealised scenario of full sampling of the flare provides the means to put an upper limit on the level of flaring activity that would be detected. The simulation was run 600 times for two different flare emission levels. In both cases, a medium flare duration of 5 days was used, with flare emission levels of $2\times$ and $1.5\times$ the average Crab Nebula flux. Figure~\ref{flares} shows typical data sets obtained for both flare emission levels. For a 7-day window, it was found that the $2\times$ flare was detected above $5\sigma$ post-trials in $69\%$ of the data sets, and even when not detected it was always clearly visible by eye. The $1.5\times$ flare was only detected once post-trials, and was generally impossible to pick out by eye. \begin{figure} \centering \includegraphics{sim_flares.eps} \caption{Window significances for simulated flares of 5-day duration: the left panel shows a flare with a $50\%$ increase in emission over average levels and the right panel shows a flare with a $100\%$ increase in emission over average levels. While the flare in the left panel cannot easily be seen, the flare in the right panel is clearly visible.} \label{flares} \end{figure} \section{Conclusion} No significant flaring activity has been found in this 10-year archival data set from the Whipple 10\,m telescope.
The recent model of Bednarek and Idec~\citep{bednarek} predicts TeV flux variability of the order of $\sim10\%$ above 1 TeV, on the same timescales as that observed at GeV energies. The Monte Carlo simulations indicate that flares would need to be of the order of $\sim100\%$ in order to be significantly detected, and so this model cannot be constrained with the current data set. However, this work will be expanded by extending the Whipple archival data set to include earlier epochs, which could potentially double the data set. VERITAS data will also be added to the study, considerably augmenting the data set from 2007 onwards and potentially providing the sensitivity to constrain the emission model. A search for a long-term decline in the TeV Crab Nebula flux, similar to that seen at keV energies~\citep{crab_decline}, will be undertaken. This is complicated by the fact that the Crab Nebula has been used as a calibration source for IACTs since the founding of this field. \section{Acknowledgements} \small This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland (SFI 10/RFP/AST2748) and by STFC in the U.K. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument. Anna O'Faol\'ain de Bhr\'oithe acknowledges the support of the Irish Research Council ``Embark Initiative''.
{ "timestamp": "2012-10-16T02:02:04", "yymm": "1210", "arxiv_id": "1210.3723", "language": "en", "url": "https://arxiv.org/abs/1210.3723" }
\section*{Description} A liquid drop of radius $R$ is suspended in another liquid. This drop is placed between two parallel plates separated by a distance $H$ and subjected to a simple shear flow. Both plates move with a speed $u_w$ in opposite directions, producing a shear rate of $\dot{\gamma}=\frac{2u_w}{H}$. The liquids have equal densities $\rho$ and viscosities $\mu$. The interfacial tension between the liquids is $\sigma$. The behaviour of the system is determined by viscous, capillary, and inertial forces. The capillary number $\mathrm{Ca} \equiv \frac{\mu\dot{\gamma}R}{\sigma}$ is the ratio between viscous and capillary forces. The Reynolds number based on the shear rate and drop radius is $\mathrm{Re}\equiv\frac{\rho\dot{\gamma}R^2}{\mu}$. The simulations were performed using the free-energy lattice Boltzmann method. With this diffuse-interface method, topological changes of the interface do not require reconstruction of a mesh after breakup or coalescence of the droplets. However, a large number of lattice nodes is required to resolve both the drops and the thin film between them before they coalesce. To perform simulations with a resolution high enough to capture the transition between coalescence and sliding after a collision, we use multiple GPUs in parallel. The simulations visualized in this video were performed using nine NVIDIA Tesla M2070 GPUs. The domain size was $1024 \times 512 \times 512$ nodes, and the initial radius of the spherical drop was 100 nodes. The simulations mimic experimental work on droplet collisions in shear flow. The first simulation involves drop breakup at $\mathrm{Ca}=0.2$ and $\mathrm{Re}=10$. The shear is stopped at $\dot{\gamma}t=22.2$, which is before the droplet breaks, to control the number of droplets that form. Due to inertia, the droplet continues to stretch and reaches a maximum elongation before shrinking back. As it retracts, a thin neck suddenly forms. Capillary forces dominate and the drop breaks into two daughter droplets. If the shear were stopped later, several additional satellite droplets would also form. The final horizontal droplet separation is $\frac{\Delta x}{2R}=1.77$ and the vertical offset is $\frac{\Delta y}{2R}=0.65$. Starting from the final state of the first simulation, two further numerical experiments were performed. To observe collisions between the two droplets, we reverse the shear flow and consider two shear rates. The new capillary and Reynolds numbers were computed using the radius of the smaller droplets. In the first collision simulation, $\mathrm{Ca}=0.24$ and $\mathrm{Re}=9.6$. Under these conditions, the droplets slide without coalescing. Droplets, whether physical or simulated, do not coalesce unless the capillary number is below a critical value that separates the regions of coalescence and non-coalescence. In the second case, we repeat the numerical experiment with a lower shear rate, at which $\mathrm{Ca}=0.08$ and $\mathrm{Re}=3.2$. This time the drops coalesce, because the collision is sufficiently slow to allow the film between them to drain. \subsection*{Acknowledgment} This research has been enabled by the use of computing resources provided by WestGrid and Compute/Calcul Canada. \end{document}
{ "timestamp": "2012-10-16T02:01:15", "yymm": "1210", "arxiv_id": "1210.3674", "language": "en", "url": "https://arxiv.org/abs/1210.3674" }
\section{Conclusion} We have introduced a model, set in a two-dimensional framework, that describes the effect of a line of fast diffusion on the overall propagation of a species that diffuses with a different constant and reproduces outside this line. We have found that this model conserves the population in the absence of reproduction and mortality, and that it preserves order. Then, we have shown that, owing to the exchanges taking place between the line and the plane, there is an asymptotic speed of spreading, which is the invasion velocity along the line. We have computed the global asymptotic speed of spreading along the line. This is achieved with exponential solutions of the linearised system and compactly supported sub-solutions. The asymptotic speed is derived from an algebraic system. When $D$, the diffusion on the road, is less than or equal to $2d$, where $d$ is the diffusion in the field, the road has no effect at all: the propagation takes place at the classical KPP invasion speed. In contradistinction with this case, when $D$ is larger than $2d$, the fast diffusion on the road enhances the propagation, leading to a speed higher than the KPP speed. Lastly, this invasion speed is shown to behave like $\sqrt{D}$ for large values of $D$. \begin{appendix} \numberwithin{equation}{section} \setcounter{equation}{0} \section*{Appendix} \setcounter{equation}{0} \section{Existence result for the Cauchy problem}\label{sec:ex} \noindent{\sc Proof of the existence part of Proposition \ref{pro:Cauchy}.} We prove the result for an initial datum $(u_0,v_0)$ which is locally H\"older continuous, together with its derivatives up to order 2, and satisfies the compatibility condition $$ -d\partial_y v_0(x,0)=\mu u_0(x)- v_0(x,0). $$ The regularity of the initial datum is therefore inherited by the solution of the Cauchy problem for all time $t\geq0$. The case of a merely continuous initial datum can then be handled by a standard regularization technique (see, e.g., \cite{Lady}). We will obtain a solution to \eqref{E}-\eqref{IC} as the limit of a subsequence of solutions $((u_n,v_n))_n$ of the following problems: \begin{equation}\label{un} \begin{cases} \partial_t u_n-D \Delta_x u_n-q\cdot\nabla_x u_n+\mu u_n= v_{n-1}(x,0,t) & x\in\mathbb R^N,\ t>0\\ u_n|_{t=0}=u_0 & \text{in }\mathbb R^N,\\ \end{cases}\end{equation} \begin{equation}\label{vn} \begin{cases} \partial_t v_n-d\Delta v_n-r\cdot\nabla v_n=f(v_n) & (x,y)\in\Omega,\ t>0\\ v_n(x,0,t)-d\partial_y v_n(x,0,t)=\mu u_n(x,t) & x\in\mathbb R^N,\ t>0\\ v_n|_{t=0}=v_0 & \text{in }\Omega, \end{cases}\end{equation} starting from $v_0$. \noindent Step 1. {\em Solvability of \eqref{un}, \eqref{vn}.}\\ We say that a function $w(z,t)$ has admissible growth in $z$ if it satisfies $|w(z,t)|\leq\beta e^{\sigma|z|^2}$, for some $\sigma,\beta>0$. It is well known that the linear Cauchy problem is uniquely solvable in the class of functions with admissible growth in the space variable. If $v_{n-1}$ is a continuous function with admissible growth, then problem \eqref{un} admits a unique classical solution $u_n$ with admissible growth. In order to solve \eqref{vn}, notice that it can be reduced to a homogeneous system by replacing $v_n$ with $v_n-v_0-\mu (u_n-u_0)$. It then follows from the standard parabolic theory that it admits a unique classical solution with admissible growth. Let $((u_n,v_n))_n$ denote the family of solutions constructed in this way, starting from $v_0$. \noindent Step 2.
{\em $L^\infty$ estimates.}\\ We show, with a recursive argument, that $$\forall n\in\mathbb N,\quad 0\leq u_n\leq \frac1\mu H, \quad0\leq v_n\leq H,\quad \text{with }H:=\max\left(\mu\|u_0\|_\infty,\|v_0\|_\infty,1\right).$$ The property trivially holds for $n=0$. Assume that it holds for some value $n-1$. Since $0$ and $\displaystyle\frac1\mu H$ are respectively a sub- and a supersolution of \eqref{un}, the comparison principle yields $0\leq u_n\leq\frac1\mu H$. It then follows that $H$ is a supersolution of \eqref{vn}, whence $0\leq v_n\leq H$. \noindent Step 3. {\em $W^{2,1}_p$ estimates.}\\ By Step 2 we know that $0\leq v_{n-1}\leq H$. Thus, applying the local boundary estimates to \eqref{un} we infer that, for any given $\rho,T>0$ and $1<p<\infty$, $$\|u_n\|_{W^{2,1}_p(B_{\rho+1}\times(0,T))}\leq C H,$$ where $B_\rho$ denotes the $N$-dimensional ball of radius $\rho$ and centre $0$ and $C$ is a constant only depending on $N$, $D$, $q$, $\mu$, $\rho$, $T$, $p$ and $\|u_0\|_{W^2_p(B_{\rho+2})}$ (and not on $n$). Set $Q_\rho:=B_\rho\times(0,\rho)$. Since $0\leq v_n\leq H$ too, the estimates yield \[\begin{split} \|v_n\|_{W^{2,1}_p(Q_\rho\times(0,T))} &\leq C' \left(1+\|v_n\|_{L^\infty(Q_{\rho+1}\times(0,T))}+ \|u_n\|_{W^{2,1}_p(B_{\rho+1}\times(0,T))}\right)\\ &\leq C'(1+H+CH), \end{split}\] with $C'$ only depending on $N$, $d$, $r$, $\mu$, $\rho$, $T$, $p$, $\|f\|_\infty$, $\|u_0\|_{W^2_p(B_{\rho+1})}$ and $\|v_0\|_{W^2_p(Q_{\rho+1})}$. This shows that the sequences $(u_n)_n$ and $(v_n)_n$ are uniformly bounded in $W^{2,1}_p(B_\rho\times(0,T))$ and $W^{2,1}_p(Q_\rho\times(0,T))$ respectively. \noindent Step 4. {\em Existence of a solution.}\\ Now that we know that $(u_n)_n$ and $(v_n)_n$ are uniformly bounded in compact sets with respect to the $W^{2,1}_p$ norm, taking $p>N+1$ and using the Morrey inequality, we infer that this is also true with respect to the $C^{\alpha}$ norm, for some $0<\alpha<1$. Then, by the Schauder estimates, the time derivative and the space derivatives up to order $2$ are uniformly H\"older continuous in compact sets too. As a consequence, $((u_n,v_n))_n$ converges (up to subsequences) in $C^{2,1}_{loc}$ to some $(u,v)$. Passing to the limit as $n\to\infty$ in \eqref{un}, \eqref{vn} we eventually find that $(u,v)$ satisfies \eqref{E}-\eqref{IC}. From Step 2 we know that $u$ and $v$ are bounded and nonnegative. \hfill$\square$\\ \setcounter{equation}{0} \section{The equation $h^L(c,\beta)=0$ }\label{sec:Rouche} In this section we show in detail that, for $\xi>0$ small enough, equation \eqref{e4.8} admits two solutions, close to $\tau_+$ and~$\tau_-$ respectively. We recall that $\tau_\pm=\pm i\sqrt{(e/a)\xi}+O(\xi)$ are the roots of the trinomial $g(\tau):=a\tau^2+d\xi\tau+e\xi$. Let us focus on $\tau_+$, the other case being analogous. Let $B$ be the ball of radius $A\xi$ centred at $\tau_+$, with $A$ large, to be adjusted later. For $\tau\in\partial B$, we have $$|g(\tau)|=a|\tau-\tau_+|\, |\tau-\tau_-| \geq aA\xi(|\tau_+-\tau_-|- A\xi)=2aA\sqrt{e/a}\,\xi^{3/2}+O(\xi^2).$$ On the other hand, we have $|\varphi(\tau,\xi)|\leq C\xi^{3/2}+O(\xi^2)$. We can therefore choose $A$ large enough and then $\xi$ small enough in such a way that $|g|>|\varphi|$ on $\partial B$. Since $g$ and $\varphi$ are holomorphic, by Rouch\'e's theorem the equation \eqref{e4.8} has the same number of solutions in $B$ as $g=0$, that is, exactly one. Notice that such a solution has positive imaginary part proportional to $\sqrt{\xi}$ and real part of order $\xi$.
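For the reader's convenience, we record how the expansion of $\tau_\pm$ used above follows from the quadratic formula (here $a,e>0$, consistently with the formula $\tau_\pm=\pm i\sqrt{(e/a)\xi}+O(\xi)$): the discriminant of $g$ is $d^2\xi^2-4ae\xi=-4ae\xi+O(\xi^2)$, whence $$ \tau_\pm=\frac{-d\xi\pm\sqrt{d^2\xi^2-4ae\xi}}{2a}=\pm i\sqrt{(e/a)\xi}+O(\xi), \qquad |\tau_+-\tau_-|=2\sqrt{(e/a)\xi}+O(\xi), $$ the latter being the estimate on $|\tau_+-\tau_-|$ used in the lower bound for $|g|$ above.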
\end{appendix} \section*{Acknowledgements} This study was supported by the French ``Agence Nationale de la Recherche'' through the project PREFERED (ANR 08-BLAN-0313). H.B. was also supported by an NSF FRG grant DMS-1065979. L.R. was partially supported by the Fondazione CaRiPaRo Project ``Nonlinear Partial Differential Equations: models, analysis, and control-theoretic problems''.
{ "timestamp": "2012-10-17T02:09:50", "yymm": "1210", "arxiv_id": "1210.3721", "language": "en", "url": "https://arxiv.org/abs/1210.3721" }
\subsubsection*{\cref{#1}}} \newcommand{\fullsoln}[2]{\medbreak\subsubsection*{\fullcref{#1}{#2}}} \crefformat{chapter}{Lecture~#2#1#3} \crefmultiformat{chapter}{Lectures~#2#1#3}{ and~#2#1#3}{, #2#1#3}{} \crefname{exer}{exercise}{exercises} \Crefname{exer}{Exercise}{Exercises} \crefformat{exer}{Exercise~#2#1#3} \crefformat{exers}{Exercise~#2#1#3} \crefformat{egs}{Example~#2#1#3} \crefformat{unfortunate}{Fact~#2#1#3} \crefformat{page}{page~#2#1#3} \numberwithin{equation}{chapter} \renewcommand{\thesection}{\thelecture\Alph{section}} \renewcommand{\thesubsection}{\thesection(\alph{subsection})} \newcommand{\csee}[1]{(see \cref{#1})} \newcommand{\fullcsee}[2]{(see \cref{#1}\pref{#1-#2})} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \renewcommand{\natural}{\mathbb{N}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{H}}{\mathbb{H}} \newcommand{\mathbb{T}}{\mathbb{T}} \newcommand{\mathbf{G}}{\mathbf{G}} \newcommand{\power}[1]{2^{#1}} \newcommand{\mathrel{\lower0.5pt\hbox{\LARGE$\twoheadrightarrow$}}}{\mathrel{\lower0.5pt\hbox{\LARGE$\twoheadrightarrow$}}} \newcommand{\stackrel{?}{\implies}{}}{\stackrel{?}{\implies}{}} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\triangleleft}{\triangleleft} \newcommand{\cong}{\cong} \newcommand{\stackrel{.}{\supset}{}}{\stackrel{.}{\supset}{}} \newcommand{\rank_{\rational}}{\rank_{\mathbb{Q}}} \newcommand{\rank_{\real}}{\rank_{\mathbb{R}}} \def\midline#1{\setbox0\hbox{\kern1pt $#1$\kern1pt }\mathord{\hbox to 0pt{\kern1pt $#1$\hss}\vrule width\wd0 height2.25pt depth -1.75pt}} \def\midline{p}{\midline{p}} \DeclareMathOperator{\Homeo}{Homeo} \DeclareMathOperator{\Isom}{Isom} \DeclareMathOperator{\Diff}{Diff} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\Perm}{Perm} \DeclareMathOperator{\SL}{SL} \DeclareMathOperator{\PSL}{PSL} \DeclareMathOperator{\SO}{SO} \DeclareMathOperator{\SU}{SU} \DeclareMathOperator{\Sp}{Sp} \DeclareMathOperator{\Prob}{Prob} \DeclareMathOperator{\Stab}{Stab} \DeclareMathOperator{\rank}{\mathrm{rank}} \DeclareMathOperator{\NH}{Near} \DeclareMathOperator{\QM}{Quasi} \newcommand{\overline{U}}{\overline{U}} \newcommand{\underline{V}}{\underline{V}} \newcommand{H_b}{H_b} \newcommand{C_b}{C_b} \newcommand{Z_b}{Z_b} \newcommand{B_b}{B_b} \newcommand{\widetilde}{\widetilde} \newcommand{\mathord{\rm Id}}{\mathord{\rm Id}} \cornersize{0.5} \setlength\fboxsep{2pt} \newcommand{\heis}[1]{\lower0.75pt\hbox{\ovalbox{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\not \mkern-1mu \Gamma}{\not \mkern-1mu \Gamma} \newcommand{\pref}[1]{{\upshape(}\ref{#1}{\upshape)}} \newcommand{\fullcref}[2]{\cref{#1}{\upshape(}\ref{#1-#2}{\upshape)}} \makeatletter \newcommand{\noprelistbreak}{\@nobreaktrue\nopagebreak\smallskip} \makeatother \renewcommand{\labelitemii}{$\circ$} \newcommand{\zz}[1]{\hbox to 0pt{#1\hss}} \newcommand{\hintit}[1]{\textsf{\smaller{\upshape[}#1{\upshape]}}} \newcommand{\hint}[1]{\hintit{\emph{Hint:} #1}} \begin{document} \frontmatter \tableofcontents \mainmatter \LectureSeries[Some arithmetic groups that do not act on $S^1$]% {Some~arithmetic~groups that do~not~act~on~the~circle \author{Dave Witte Morris}} \address{Department of Mathematics and Computer Science, University of Lethbridge, Lethbridge, Alberta, T1K~3M4, Canada} \email{Dave.Morris@uleth.ca} \section*{Abstract} The group $\SL(3,\mathbb{Z})$ cannot act (faithfully) on the circle (by homeomorphisms). We will see that many other arithmetic groups also cannot act on the circle. 
The discussion will involve several important topics in group theory, such as ordered groups, amenability, bounded generation, and bounded cohomology. \Cref{LOLect} provides an introduction to the subject, and uses the theory of left-orderable groups to prove that $\SL(3,\mathbb{Z})$ does not act on the circle. \Cref{BddGenLect} discusses bounded generation, and proves that groups of the form $\SL \bigl( 2, \mathbb{Z}[\alpha] \bigr)$ do not act on the real line. \Cref{AmenLect,BddCohoLect} are brief introductions to amenable groups and bounded cohomology, respectively. They also explain how these ideas can be used to prove that actions on the circle have finite orbits. An appendix provides hints or references for all of the exercises. These notes are slightly expanded from talks given at the Park City Mathematics Institute's Graduate Summer School in July 2012. The author is grateful to the PCMI staff for their hospitality, the organizers for the invitation to take part in such an excellent conference, and the students for their energetic participation and helpful comments that made the course so rewarding (and improved these notes). \lecture{Left-orderable groups and a proof for $\SL(3,\mathbb{Z})$} \label{LOLect} \section{Introduction} In Geometric Group Theory (and many other fields of mathematics), one of the main methods for understanding a group is to look at the spaces it can act on. (For example, speakers at this conference have discussed actions of groups on $\delta$-hyperbolic spaces, $\mathrm{CAT}(0)$ cube complexes, Euclidean buildings, and other spaces of geometric interest.) In these lectures, we consider only very simple spaces, namely, the real line~$\mathbb{R}$ and the circle~$S^1$. Also, we consider only a single, very interesting class of groups, namely, the arithmetic groups. More precisely, the topic of these lectures is: \begin{mainques} \label{MainQues} Let\/ $\Gamma$ be $\SL(n, \mathbb{Z})$, or some other arithmetic group. \begin{enumerate} \item Does there exist a faithful action of~$\Gamma$ on~$\mathbb{R}$? \item Does there exist a faithful action of~$\Gamma$ on~$S^1$? \end{enumerate} All actions are assumed to be continuous, so the questions ask whether there exists a faithful homomorphism $\phi \colon \Gamma \to \Homeo(X)$, where $X = \mathbb{R}$ or~$S^1$. (Recall that a homomorphism is \emph{faithful} if its kernel is trivial.) \end{mainques} A fundamental theorem in the subject tells us that the two seemingly different questions in \cref{MainQues} are actually the same for most arithmetic groups (if, as is usual in Geometric Group Theory, we ignore the very minor difference between a group and its finite-index subgroups): \begin{thm}[Ghys \cite{GhysCercle}, Burger-Monod \cite{BurgerMonod-BddCohoLatts}] \label{GhysFP} Let\/ $\Gamma = \SL(n,\mathbb{Z})$, or some other irreducible arithmetic group, such that no finite-index subgroup of\/~$\Gamma$ is isomorphic to a subgroup of\/ $\SL(2,\mathbb{R})$. Then: \begin{align*} \text{some } &\text{finite-index subgroup of\/~$\Gamma$ has a faithful action on\/~$\mathbb{R}$} \\& \iff \text{ some finite-index subgroup of\/~$\Gamma$ has a faithful action on~$S^1$.} \end{align*} \end{thm} \begin{proof} ($\Rightarrow$) Suppose $\dot\Gamma$ is a finite-index subgroup of~$\Gamma$ that acts on~$\mathbb{R}$. Then $\dot\Gamma$ also acts on the one-point compactification of~$\mathbb{R}$, which is homeomorphic to~$S^1$. (Note that this argument is elementary and very general. 
It is the opposite direction of the theorem that requires assumptions on~$\Gamma$, and sometimes requires passage to a finite-index subgroup.) ($\Leftarrow$) Suppose $\dot\Gamma$ is a finite-index subgroup of~$\Gamma$ that acts on~$S^1$. A major theorem proved independently by Ghys \cite{GhysCercle} and Burger-Monod \cite{BurgerMonod-BddCohoLatts} tells us that the action must have a finite orbit. (We will say a bit about the proof of this theorem in \cref{AmenLect,BddCohoLect}.) This means that a finite-index subgroup $\ddot\Gamma$ of~$\dot\Gamma$ has a fixed point in~$S^1$. Let $p$ be a point in~$S^1$ that is fixed by~$\ddot\Gamma$. Then $\{p\}$ is a $\ddot\Gamma$-invariant subset, so its complement is also invariant. This implies that $\ddot\Gamma$ acts on $S^1 \smallsetminus \{p\}$, which is homeomorphic to~$\mathbb{R}$. \end{proof} Thus, in most cases, it does not matter which of the two versions of \cref{MainQues} we consider. For now, let us look at actions on~$\mathbb{R}$. \begin{assump} To avoid minor complications, we will assume, henceforth, that $$ \text{all actions are orientation-preserving.} $$ This means that an action of~$\Gamma$ on~$X$ is a faithful homomorphism $\phi \colon \Gamma \to \Homeo_+(X)$, where $\Homeo_+(X)$ is the group of \emph{orientation-preserving} homeomorphisms of~$X$. Since $\Homeo_+(X)$ is a subgroup of index~$2$ in the group of all homeomorphisms, this is just another example of ignoring the difference between a group and its finite-index subgroups. \end{assump} \begin{rem} The expository paper \cite{Morris-CanLatt} covers the main topics of these lectures in somewhat more depth. See \cite{GhysCircleSurvey} and \cite{Navas-GrpsDiffeos} for introductions to the general theory of group actions on the circle (not just actions of arithmetic groups), and see \cite{Morris-IntroArithGrps} for an introduction to arithmetic groups. \end{rem} \section{Examples} The following result provides an obstruction to the existence of an action on~$\mathbb{R}$. \begin{lem} \label{TorsionNoAct} If a group has a nontrivial element of finite order, then the group does not have a faithful action on\/~$\mathbb{R}$. \end{lem} \begin{proof} It suffices to show that every nontrivial element~$\varphi$ of $\Homeo_+(\mathbb{R})$ has infinite order. Since $\varphi$ is nontrivial, there is some $p \in \mathbb{R}$, such that $\varphi(p) \neq p$. Assume, without loss of generality, that $\varphi(p) > p$. The fact that $\varphi$ is an orientation-preserving homeomorphism of~$\mathbb{R}$ implies that it is an increasing function: $$ x > y \implies \varphi(x) > \varphi(y) .$$ Therefore (letting $x = \varphi(p)$ and $y = p$), we have $\varphi^2(p) > \varphi(p)$. In fact, by induction, we have $$ \varphi^n(p) > \varphi^{n-1}(p) > \cdots > \varphi(p) > p ,$$ so $\varphi^n(p) > p$ for every $n > 0$. This implies $\varphi^n(p) \neq p$, so $\varphi^n$ is not the identity map. Since $n$~is arbitrary, this means that $\varphi$ has infinite order. \end{proof} \begin{cor} \label{SLnZNoActBcsTorsion} If $n \ge 2$, then $\SL(n,\mathbb{Z})$ does not have a faithful action on~$\mathbb{R}$. \end{cor} \begin{proof} It is easy to find a nontrivial element of finite order in $\SL(n,\mathbb{Z})$. For example, the matrix {\smaller$\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$} is in $\SL(2,\mathbb{Z})$ and has order~$2$; for larger~$n$, it can be padded with an identity block to give an element of order~$2$ in $\SL(n,\mathbb{Z})$. \end{proof} It is not difficult to show that every arithmetic group has a finite-index subgroup that has no elements of finite order \cite[Lem.~4.19, p.~232]{PlatonovRapinchukBook}.
This means that \cref{TorsionNoAct} does not provide any obstruction at all to the existence of actions of sufficiently small finite-index subgroups of~$\Gamma$. For example: \begin{eg} It is well known that some finite-index subgroups of $\SL(2,\mathbb{Z})$ are free groups. (In fact, every torsion-free subgroup is free \cite[Eg.~1.5.3, p.~11, and Prop.~18, p.~36]{Serre-Trees}.) Any such subgroup has \emph{many} faithful actions on~$\mathbb{R}$: \end{eg} \begin{exer} \label{FreeGrpActsOnR} Show that every finitely generated free group has a faithful action on~$\mathbb{R}$. \end{exer} Here is a much less trivial class of arithmetic groups that act on~$\mathbb{R}$: \begin{thm}[Agol and Boyer-Rolfsen-Wiest] \label{ArithSL2CActs} If\/ $\Gamma$ is any arithmetic subgroup of $\SL(2,\mathbb{C})$, then some finite-index subgroup of\/~$\Gamma$ has a faithful action on\/~$\mathbb{R}$. \end{thm} \begin{proof} A very recent and very important theorem of Agol \cite{AgolHaken} tells us there is a finite-index subgroup~$\dot\Gamma$ of~$\Gamma$, such that there is a surjective homomorphism $\varphi \colon \dot\Gamma \mathrel{\lower0.5pt\hbox{\LARGE$\twoheadrightarrow$}} \mathbb{Z}$. Since $\mathbb{Z}$ has an obvious nontrivial action on~$\mathbb{R}$ (by translations), this implies that $\dot\Gamma$ also acts nontrivially on~$\mathbb{R}$ (by translations). However, additional effort is required to obtain an action that is faithful. A classic theorem of Burns-Hale \cite{BurnsHale} provides a cohomological condition that implies the existence of a faithful action: $$ \begin{matrix} \text{$\dot\Gamma$ has a faithful action on~$\mathbb{R}$ if $H^1(\Lambda; \mathbb{R})$ is nonzero} \\ \text{for every finitely generated, nontrivial subgroup~$\Lambda$ of~$\dot\Gamma$} \end{matrix} $$ (see \fullcref{LOExers}{BurnsHale}). Agol's theorem tells us $H^1(\dot\Gamma; \mathbb{R})$ is nonzero, which establishes the hypothesis for the special case where $\Lambda = \dot\Gamma$. By using $3$-manifold topology and a fairly simple argument about Euler characteristics, a theorem of Boyer-Rolfsen-Wiest \cite[Thms.~3.1 and 3.2]{BoyerRolfsenWiest} promotes this nonvanishing to obtain the condition for all~$\Lambda$, and thereby yields a faithful action on~$\mathbb{R}$. \end{proof} \begin{egs} \label{EgNotAct} We have seen that some finite-index subgroups of $\SL(2,\mathbb{Z})$ have actions on~$\mathbb{R}$. To obtain arithmetic groups that do \emph{not} act on~$\mathbb{R}$ (even after passing to a finite-index subgroup), we need a bigger group. \begin{enumerate} \item \label{EgNotAct-SLn} One approach would be to take larger matrices (not just $2 \times 2$). Later in this lecture, we will see that this works: if $n \ge 3$, then no finite-index subgroup of $\SL(n,\mathbb{Z})$ has a faithful action on~$\mathbb{R}$. \item \label{EgNotAct-SL2O} Another possible approach would be to keep the same size of matrix, but enlarge the ring of coefficients: instead of only the ordinary ring of integers~$\mathbb{Z}$, consider a ring of algebraic integers~$\mathcal{O}$. \Cref{BddGenLect} outlines a proof that this approach also works: if $\alpha$ is a real algebraic integer that is irrational (for example, we could take $\alpha = \sqrt{2}$), then no finite-index subgroup of $\SL \bigl( 2, \mathbb{Z}[\alpha] \bigr)$ acts faithfully on~$\mathbb{R}$.
\end{enumerate} \end{egs} \section{The main conjecture} \label{LOMainConjSect} In the spirit of \cref{EgNotAct}, it is conjectured that every ``irreducible'' arithmetic group that acts on~$\mathbb{R}$ is contained in a very small Lie group, like $\SL(2,\mathbb{C})$: \begin{conj} \label{NoActConj} If\/ $\Gamma$ is an ``irreducible'' arithmetic group, then $$ \text{$\Gamma$ does not have a faithful action on\/~$\mathbb{R}$} $$ unless $\Gamma$ is an arithmetic subgroup of a ``very small'' Lie group. \end{conj} For the interested reader, the remainder of this \namecref{LOMainConjSect} makes the conjecture more precise. However, we will only look at examples of arithmetic groups, not delving deeply into their theory, so, for our purposes, a vague understanding of the conjecture is entirely sufficient. \begin{defn} Saying that $\Gamma$ is \emph{irreducible} means that no finite-index subgroup of~$\Gamma$ is a direct product $\Gamma_1 \times \Gamma_2$ (where $\Gamma_1$ and~$\Gamma_2$ are infinite). \end{defn} The following simple observation shows that the problem reduces to this case. \begin{exer} \label{DirProdFaithful} Show that the direct product $\Gamma_1 \times \Gamma_2$ has a faithful action on~$\mathbb{R}$ if and only if $\Gamma_1$ and~$\Gamma_2$ both have faithful actions on~$\mathbb{R}$. \end{exer} Technically speaking, instead of saying that the Lie group is ``very small\zz,'' we should say that it is a semisimple Lie group whose ``real rank'' is only~$1$. In other words, up to finite index, it belongs to one of the four following families of groups (up to local isomorphism): \begin{itemize} \item $\SO(1,n)$ (the isometry group of hyperbolic $n$-space $\mathbb{H}^n$), or \item $\SU(1,n)$ (the isometry group of complex hyperbolic $n$-space), or \item $\Sp(1,n)$ (the isometry group of quaternionic hyperbolic $n$-space), or \item $F_{4,1}$ (the isometry group of the hyperbolic plane over the octonions, also known as the ``Cayley plane''). \end{itemize} Since $\SL(2,\mathbb{C})$ is locally isomorphic to $\SO(1,3)$, this list does include the examples in \cref{ArithSL2CActs}. \begin{rem} \Cref{NoActConj} applies only to actions on~$\mathbb{R}$, not actions on~$S^1$, because some arithmetic groups of large real rank do act on the circle. Namely, if $G$ is a semisimple Lie group that has $\SL(2,\mathbb{R})$ as one of its simple factors, then every arithmetic subgroup of~$G$ acts on the circle (by linear-fractional transformations). However, it is conjectured that these are the only such arithmetic groups of large real rank \cite[p.~200]{GhysCercle}. \end{rem} \section{Left-invariant total orders} The following \lcnamecref{ActIffLO} translates \cref{NoActConj} into a purely algebraic question about the existence of a certain structure on the group~$\Gamma$. \begin{defn} Let $\Gamma$ be a group. \noprelistbreak \begin{itemize} \item A \emph{total order} on a set~$\Omega$ is a transitive, antisymmetric binary relation~$\prec$ on~$\Omega$, such that, for all $a,b \in \Omega$, we have $$ \text{either \ $a \prec b$ \ or \ $a \succ b$ \ or \ $a = b$} .$$ \item When $\prec$ is a total order on a group~$\Gamma$, we can ask that the order structure be compatible with the group multiplication: $\prec$ is \emph{left-invariant} if, for all $a,b,c \in \Gamma$, we have $$ a \prec b \iff ca \prec cb .$$ \end{itemize} See \cite{KopytovMedvedev} for more about the theory of left-invariant total orders. \end{defn} \begin{exer} \label{ActIffLO} Let $\Gamma$ be a countable group.
Then $$ \text{$\Gamma$ has a faithful action on~$\mathbb{R}$ $\iff$ $\exists$ a left-invariant total order~$\prec$ on~$\Gamma$.} $$ \end{exer} \begin{proof}[Hint] ($\Rightarrow$) If no nontrivial element of~$\Gamma$ fixes~$0$, then we may define $$ a \prec b \iff a(0) < b(0) ,$$ and this is a left-invariant total order. (Recall that each element of~$\Gamma$ acts on~$\mathbb{R}$ via an increasing function, so if $a(0) < b(0)$, then $c \bigl( a(0) \bigr) < c \bigl( b(0) \bigr)$. If $a(0) = b(0)$, the tie can be broken by choosing some other $p \in \mathbb{R}$ and comparing $a(p)$ with $b(p)$.) ($\Leftarrow$) Note that $\Gamma$ acts faithfully (by left translation) by automorphisms of the ordered set $(\Gamma,\prec)$, which is isomorphic (as an ordered set) to a subset of $(\mathbb{Q},<)$. If it is isomorphic to all of $(\mathbb{Q},<)$, then $\Gamma$ acts on the Dedekind completion, which is homeomorphic to~$\mathbb{R}$. There is actually no loss of generality in assuming that $(\Gamma,\prec) \cong (\mathbb{Q},<)$, because $(\Gamma, \prec)$ can be replaced with a left-invariant ordering of $\Gamma \times \mathbb{Q}$ that is order-isomorphic to $(\mathbb{Q}, <)$. \end{proof} Therefore, \cref{NoActConj} can be restated as follows: \begin{conj}[Algebraic version of the conjecture] If\/ $\Gamma$ is an irreducible arithmetic group, then $$ \text{$\Gamma$ does not have a left-invariant total order} $$ unless $\Gamma$ is an arithmetic subgroup of a ``very small'' Lie group. \end{conj} \begin{exer} \label{ProdPos} Suppose $\prec$ is a left-invariant total order on~$\Gamma$ (and $e$ is the identity element of~$\Gamma$). Show that if $a,b \in \Gamma$ with $a,b \succ e$, then $ab \succ e$ and $a^{-1} \prec e$. \end{exer} \section{$\SL(3,\mathbb{Z})$ does not act on the line} We can now prove \fullcref{EgNotAct}{SLn}: \begin{thm}[Witte \cite{Witte-QrankAct1mfld}] \label{SLnZNotLO} If\/ $\Gamma$ is a finite-index subgroup of\/ $\SL(n,\mathbb{Z})$, with $n \ge 3$, then there does not exist a left-invariant total order on\/~$\Gamma$. \end{thm} The proof is based on understanding a certain famous subgroup~$H$ of $\SL(3,\mathbb{Z})$: \begin{notation} \label{HeisNotn} Let $H$ be the \emph{discrete Heisenberg group}, which means $$H = \begin{bmatrix} 1 & \mathbb{Z} & \mathbb{Z} \\ &1 & \mathbb{Z} \\ & & 1 \end{bmatrix} \subset \SL(3,\mathbb{Z}) .$$ For convenience, let us also fix names for some particular elements of~$H$: $$ x = \begin{bmatrix} 1 & 1 & 0 \\ &1 & 0 \\ & & 1\end{bmatrix} , \qquad y = \begin{bmatrix} 1 & 0 & 0 \\ &1 & 1 \\ & & 1\end{bmatrix} , \qquad z = \begin{bmatrix} 1 & 0 & 1 \\ &1 & 0 \\ & & 1\end{bmatrix} . $$ (Note that $\{ x, y \}$ is a generating set for~$H$.) \end{notation} \begin{exers} \label{HeisExers} \ \noprelistbreak \begin{enumerate} \item \label{HeisExers-z=[x,y]} Show $z = [x,y] \in Z(H)$, where \begin{itemize} \item $[x,y] = x^{-1} y^{-1} x y$ is the \emph{commutator} of~$x$ and~$y$, and \item$Z(H) = \{\, a \in H \mid ah = ha, \forall h \in H\,\}$ is the \emph{center} of~$H$. \end{itemize} \item \label{HeisExers-[xyyk]} Show $x^k y^\ell = y^\ell x^k z^{k \ell}$ for $k,\ell \in \mathbb{Z}$. \item \label{HeisExers-orderable} {\rm(optional)} Show $H$ has a left-invariant total order.
\\ \hint{If $N$ is a normal subgroup of~$\Gamma$, such that $N$ and $\Gamma/N$ each have a left-invariant total order, then $\Gamma$ has a left-invariant total order \fullcsee{LOExers}{extension}.} \end{enumerate} \end{exers} \begin{notation} Suppose $\prec$ is a left-invariant total order on a group~$\Gamma$. For $a,b \in \Gamma$, we write $a \ll b$ if $a$ is \emph{infinitely smaller} than~$b$. I.e., $$a \ll b \iff a^n \prec |b|, \ \forall n \in \mathbb{Z} , \quad \text{where $|b| = \begin{cases} b & \text{if $b \succeq e$} , \\ b^{-1} & \text{if $b \prec e$} . \end{cases}$} $$ \end{notation} Here is the key fact that will be used in the proof: \begin{lem}[Ault \cite{Ault-RONilpGrps}, Rhemtulla \cite{Rhemtulla-ROGrps}] \label{LOHeis} If $\prec$ is any left-invariant total order on~$H$, then either $z \ll x$ or $z \ll y$. \end{lem} \begin{proof} Assume, for simplicity, that $x,y,z \succ e$. (This actually causes no loss of generality, since there is no harm in replacing some or all of $x$, $y$, and~$z$ by their inverses. This is because we can retain the relation $[x,y] = z$ by interchanging $x$ and~$y$ if necessary, since $[y,x] = z^{-1}$.) From \fullcref{HeisExers}{[xyyk]}, we have \begin{align} \label{LOHeisPfQuadratic} y^n x^n y^{-n} x^{-n} = z^{-n^2} .\end{align} Note that the exponent of~$z$ is quadratic in~$n$ (and negative). Now suppose $z \not\ll x$ and $z \not\ll y$. Then there exist $p,q \in \mathbb{Z}$, such that $z^p \succ x$ and $z^q \succ y$. Therefore $$e \ \prec \ x^{-1}z^p, \ y^{-1} z^q, \ x, \ y ,$$ so, for all $n \in \mathbb{Z}^+$, we have \begin{align*} e &\prec y ^n \, x^n \, (y^{-1}z^q)^n \, (x^{-1} z^p)^n && \text{(\cref{ProdPos})} \\& = y^n \, x^n \, y^{-n} \, x^{-n} \, z^{qn+pn} && \text{(since $z \in Z(H)$)} \\&= z^{-n^2} z^{(p+q)n} && \text{\pref{LOHeisPfQuadratic}} \\&= z^{\text{(linear)} - \text{(quadratic)}} \\&= \ z^{\text{negative}} && \text{(if $n$ is sufficiently large)} \\&\prec e && \text{(since $z \succ e$)} . \end{align*} This is a contradiction. \end{proof} \begin{proof}[Proof of \cref{SLnZNotLO}] Suppose there is a left-invariant total order on $\Gamma = \SL(3,\mathbb{Z})$. (For simplicity, we are writing the proof as if $\Gamma$ is the entire group $\SL(3,\mathbb{Z})$, and we leave it as an exercise for the reader to modify the proof to work for finite-index subgroups.) In \cref{HeisNotn}, we gave names to three particular elements of~$\Gamma$ that have a single off-diagonal~$1$. For this proof, we actually want to name all six such elements: we call them $\heis1$, $\heis2$, $\heis3$, $\heis4$, $\heis5$, $\heis6$, where $$\SL(3,\mathbb{Z}) = \begin{bmatrix}* & \heis 1 & \heis2 \\[2pt] \heis4 & * & \heis3 \\[2pt] \heis5 & \heis6 & * \end{bmatrix} .$$ Thus, for example, $x = \heis1$, $y = \heis3$, and $z= \heis2$, so $\left\langle \heis1, \heis2, \heis3 \right\rangle$ is the Heisenberg group. Actually, there are six copies of the Heisenberg group in $\Gamma$ (see \fullcref{SL3ZPfExers}{Heis}): \begin{align} \label{SixHeisInSL3} \begin{matrix} \bigl\langle \heis 1, \heis2, \heis3 \bigr\rangle, & \bigl\langle \heis2, \heis3, \heis4 \bigr\rangle, & \bigl\langle \heis 3, \heis4, \heis5 \bigr\rangle \\[\medskipamount] \bigl\langle \heis4, \heis5, \heis6 \bigr\rangle, & \bigl\langle \heis5, \heis6, \heis1 \bigr\rangle, & \bigl\langle \heis 6,\heis 1, \heis2 \bigr\rangle . \end{matrix} \end{align} Since $\bigl\langle \heis1, \heis2, \heis3 \bigr\rangle$ is a Heisenberg group, \cref{LOHeis} tells us that either $\heis2 \ll \heis1$ or $\heis2 \ll \heis3$.
Assume, without loss of generality, that $$\heis2 \ll \heis3 .$$ Since $\bigl\langle \heis2, \heis3, \heis4 \bigr\rangle$ is also a Heisenberg group, \cref{LOHeis} tells us that either $\heis3 \ll \heis2$ or $\heis3 \ll \heis4$. However, we know $\heis2 \ll \heis3$, which implies $\heis3 \not\ll \heis2$. So we must have $$\heis3 \ll \heis4 .$$ Continuing in this way, using the other Heisenberg groups in succession, we have $$ \heis 2 \ll \heis3 \ll \heis4 \ll \heis5 \ll \heis6 \ll \heis1 \ll \heis2 .$$ By transitivity, this implies $\heis2 \ll \heis2$, which is a contradiction. \end{proof} \begin{exers} \label{SL3ZPfExers} \ \noprelistbreak \begin{enumerate} \item \label{SL3ZPfExers-Heis} Verify that each of the subgroups listed in \pref{SixHeisInSL3} is isomorphic to the Heisenberg group. More precisely, for $1 \le k \le 6$ (reading the labels modulo~$6$), verify that there is an isomorphism $\varphi \colon \left\langle \, \ovalbox{$k-1$} \, , \ovalbox{$k$} \, , \ovalbox{$k+1$} \,\right\rangle \to H$, such that $$ \varphi \left( \ovalbox{$k-1$} \right) = x , \quad \varphi \left( \ovalbox{$k$} \right) = z , \text{\quad and}\quad \varphi \left( \ovalbox{$k+1$} \right) = y . $$ \item \label{SL3ZPfExers-FinInd} The given proof of \cref{SLnZNotLO} is incomplete, because it assumes that $\Gamma$ is all of $\SL(3,\mathbb{Z})$. (And we already knew from \cref{SLnZNoActBcsTorsion} that $\SL(3,\mathbb{Z})$ has no faithful action on~$\mathbb{R}$.) Modify the proof so it is valid when $\Gamma$ is a finite-index subgroup of $\SL(3,\mathbb{Z})$. \end{enumerate} \end{exers} \section{Comments on other arithmetic groups} \begin{rems} \ \noprelistbreak \begin{enumerate} \item The proof of \cref{SLnZNotLO} can be adapted to show that finite-index subgroups of the group $\Sp(4,\mathbb{Z})$ do not have left-invariant total orders \cite{Witte-QrankAct1mfld}. \item From \fullcref{LOExers}{subgrp}, we see that if $\Gamma$ contains a finite-index subgroup of either $\SL(3,\mathbb{Z})$ or $\Sp(4,\mathbb{Z})$, then $\Gamma$ does not have a left-invariant total order. In the terminology of arithmetic groups, this exactly means \cite{Witte-QrankAct1mfld}: $$ \qquad \text{if $\rank_{\rational} \Gamma \ge 2$, then $\Gamma$ does not have a left-invariant total order} .$$ \item The argument of \cref{SLnZNotLO} relies on the existence of Heisenberg groups in~$\Gamma$, so it does not apply to $\SL \bigl( 2, \mathbb{Z}[\alpha] \bigr)$. (Heisenberg groups are nonabelian nilpotent groups, but every nilpotent subgroup of $\SL(2,\mathbb{C})$ is abelian.) We will use a quite different argument to discuss these groups in \cref{BddGenLect}. \item Another important case in which the argument of \cref{SLnZNotLO} cannot be applied is when $G/\Gamma$ is compact. This is because every nilpotent subgroup of $\Gamma$ is virtually abelian. \end{enumerate} \end{rems} \begin{open} \label{CocpctOpen} Find an arithmetic group\/~$\Gamma$, such that $G/\Gamma$ is compact, and no finite-index subgroup of\/~$\Gamma$ has a faithful action on\/~$\mathbb{R}$. \end{open} Most large arithmetic groups have Kazhdan's Property~$(T)$. (We refer the reader to E.\,Breuillard's lectures in this volume for further discussion of property~$(T)$.) Therefore, a negative answer to the following well-known question would be a major advance toward settling \cref{CocpctOpen} (and many other interesting cases of \cref{NoActConj}): \begin{open} Does there exist an infinite group with Kazhdan's Property\/~$(T)$ that has a faithful action on\/~$\mathbb{R}$ or~$S^1$?
\end{open} The answer is negative for actions on the circle if we require the group to act by homeomorphisms with continuous second derivatives: \begin{thm}[Navas \cite{Navas-ActKazhdan}] Infinite groups with Kazhdan's Property\/~$(T)$ do not have faithful, $C^2$\!~actions on $S^1$. \end{thm} \begin{exers} \label{LOExers} A group that has a left-invariant total order is said to be \emph{left-orderable}. \noprelistbreak \begin{enumerate} \itemsep=\smallskipamount \item \label{LOExers-subgrp} Show that every subgroup of a left-orderable group is left-orderable. \item \label{LOExers-abelian} Show torsion-free, \emph{abelian} groups are left-orderable. \item \label{LOExers-extension} Show that if $N$ is a normal subgroup of~$\Gamma$, such that $N$ and~$\Gamma/N$ are left-orderable, then $\Gamma$ is left-orderable. \hint{Compare $a$ with~$b$ in $\Gamma/N$, and use the order on~$N$ to break ties.} \item \label{LOExers-nilpotent} Show torsion-free, \emph{nilpotent} groups are left-orderable. \item \label{LOExers-solvable} {(harder)} Show that some torsion-free, \emph{solvable} group is \emph{not} left-orderable. \item \label{LOExers-Exps} Show that $\Gamma$ is left-orderable if and only if, for every finite sequence $g_1,\ldots,g_n$ of nontrivial elements of~$\Gamma$, there exist $\epsilon_1,\ldots,\epsilon_n \in \{\pm1\}$, such that the semigroup generated by $\{g_1^{\epsilon_1},\ldots,g_n^{\epsilon_n}\}$ does not contain~$e$. \item \label{LOExers-locally} Show \emph{locally} left-orderable $\implies$ left-orderable. \\ \hintit{A group is said to \emph{locally} have a certain property if all of its \emph{finitely generated} subgroups have the property. So the exercise asks you to show that if every finitely generated subgroup of~$\Gamma$ is left-orderable, then $\Gamma$ is left-orderable.} \item \label{LOExers-residually} Show \emph{residually} left-orderable $\implies$ left-orderable. \\ \hintit{$\Gamma$ is said to \emph{residually} have a certain property if, for every $g \in \Gamma$, there exists a group~$H$ with the property, and a homomorphism $\varphi \colon \Gamma \to H$, such that $\varphi(g) \neq e$.} \item \label{LOExers-BurnsHale} (Burns-Hale \cite[Cor.~2]{BurnsHale}) Show that if $H^1(\Lambda; \mathbb{R}) \neq 0$ for every nontrivial, finitely generated subgroup~$\Lambda$ of~$\Gamma$, then $\Gamma$ is left-orderable. \end{enumerate} \end{exers} \lecture{Bounded generation and a proof for $\SL \bigl( 2, \mathbb{Z}[\alpha] \bigr)$} \label{BddGenLect} In \cref{LOLect}, we showed that finite-index subgroups of $\SL(3, \mathbb{Z})$ do not have faithful actions on~$\mathbb{R}$. In this lecture, we prove the same conclusion for appropriate groups of $2 \times 2$ matrices. \begin{notation} Throughout this lecture, $\alpha$ is an algebraic integer that is real and irrational. (Actually, we do not need to require $\alpha$ to be real unless it satisfies a quadratic equation with rational coefficients.) \end{notation} \begin{thm}[Lifschitz-Morris \cite{LMActOnLine}] \label{SL2ANoAct} If\/ $\Gamma$ is a finite-index subgroup of\/ $\SL \bigl( 2, \mathbb{Z}[\alpha] \bigr)$, then\/ $\Gamma$ does not have a faithful action on\/~$\mathbb{R}$. \end{thm} The proof has two ingredients: \textit{bounded generation} and \textit{bounded orbits}. Both are with respect to \emph{unipotent subgroups}. \begin{notation} Let $\overline{U} =$ {\smaller$ \begin{bmatrix}1 & * \\ 0 & 1\end{bmatrix}$} and $\underline{V} =$ {\smaller $ \begin{bmatrix} 1 & 0 \\ * & 1 \end{bmatrix}$}.
These are ``unipotent'' subgroups of $\SL \bigl( 2, \mathbb{Z}[\alpha] \bigr)$. \end{notation} \begin{rem} Any subgroup of $\SL \bigl( 2, {\ast} \bigr)$ that is conjugate to a subgroup of~$\overline{U}$ is said to be \emph{unipotent}, but we do not need any unipotent subgroups other than $\overline{U}$ and~$\underline{V}$. \end{rem} \section{What is bounded generation?} Now that we know what unipotent subgroups are, let us see what ``bounded generation'' means. \begin{recall} A basic theorem of undergraduate linear algebra says that every invertible matrix can be reduced to the identity matrix by row operations (or by column operations, if you prefer those). Also, since performing a row operation (or column operation) is the same as multiplying by an ``elementary matrix\zz,'' this implies the important fact that every invertible matrix is a product of elementary matrices. In other words, $$ \text{\it the elementary matrices generate the group of all invertible matrices.} $$ \end{recall} However, in your undergraduate course, the scalars were assumed to be in a \emph{field} (probably either $\mathbb{R}$ or~$\mathbb{C}$), but our matrices have their entries in a ring of integers (namely, either $\mathbb{Z}$ or $\mathbb{Z}[\alpha]$), which is not a field. Fortunately, this is not a problem: \begin{eg} \label{RowReduceEg} The matrix {\smaller$\begin{bmatrix} 13 & 31 \\ 5 & 12 \\ \end{bmatrix}$} is a fairly typical element of $\SL(2,\mathbb{Z})$. Let us see that it can be reduced to the identity matrix, by using only \emph{integer} row operations. More precisely, the only allowable operation is adding an integer ($\mathbb{Z}$) multiple of one row to another row. (In linear algebra, a few additional operations are usually allowed, such as multiplying a row by a scalar, but we will not permit those operations.) Using $\leadsto$ to denote applying a row operation, we see that the matrix can indeed be reduced to the identity: $$\begin{bmatrix} 13 & 31 \\ 5 & 12 \\ \end{bmatrix} \leadsto \begin{bmatrix} 3 & 7 \\ 5 & 12 \\ \end{bmatrix} \leadsto \begin{bmatrix} 3 & 7 \\ 2 & 5 \\ \end{bmatrix} \leadsto \begin{bmatrix} 1 & 2 \\ 2 & 5 \\ \end{bmatrix} \leadsto \begin{bmatrix} 1 & 2 \\ 0 & 1 \\ \end{bmatrix} \leadsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} . $$ \end{eg} Here is the general case: \begin{prop} \label{RowReduceSL2Z} Every matrix in $\SL(2,\mathbb{Z})$ can be reduced to the identity matrix by integer row operations. \end{prop} \begin{proof}[Idea of proof] Essentially, we apply the Euclidean Algorithm: \begin{itemize} \item Choose the nonzero entry of smallest absolute value in the first column, and use a row operation to subtract an appropriate integer multiple of it from the other entry in the first column. Now it is the other entry that is the smallest (in absolute value) in the first column. \item By repeating this process, we will eventually reach a situation with only one nonzero entry in the first column. Since the determinant is~$1$, this entry must be a unit in the ring~$\mathbb{Z}$, which means that the entry is either $1$ or~$-1$. By performing just a few more row operations, we can assume it is~$1$, and that it is in the top-left corner. \item Now the matrix is upper triangular, with a $1$ in the top-left corner. Since the determinant is~$1$, there must also be a~$1$ in the bottom right corner. So one additional row operation yields the identity matrix.
Here is another way of saying the same thing: \begin{cor} \label{UVGenSL2Z} $\overline{U}$ and~$\underline{V}$ generate $\SL(2,\mathbb{Z})$. \end{cor} \begin{proof} Adding an integer multiple of one row to another row is the same as multiplying on the left by a matrix in either $\overline{U}$ or~$\underline{V}$. \end{proof} In \cref{RowReduceEg}, we used only a few (namely, $5$) row operations to reduce the matrix to the identity by using the procedure outlined in the proof of \cref{RowReduceSL2Z}. However, it is easy to construct examples of matrices for which this procedure will use an arbitrarily large number of steps. It would be much better to have a more clever algorithm that can reduce every matrix to the identity in no more than, say, $1000$ row operations. Unfortunately, this is \emph{impossible:} \begin{unfortunate}[see \fullcref{BddGenExers}{SL2Z}] \label{ReduceSL2ZUnbdd} For every $c$, there is a matrix in $\SL(2,\mathbb{Z})$ that cannot be reduced to the identity with fewer than~$c$ integer row operations. \end{unfortunate} Here is another way of saying this: \cref{UVGenSL2Z} tells us that every element of $\SL(2,\mathbb{Z})$ can be written as a word in the elements of~$\overline{U}$ and~$\underline{V}$. What \cref{ReduceSL2ZUnbdd} tells us is that there is no uniform bound on the length of the word: some elements of $\SL(2,\mathbb{Z})$ require a word of length more than a hundred, others require length more than a million, others require length more than a trillion, and so on. In other words, if $g \in \SL(2,\mathbb{Z})$, then \cref{UVGenSL2Z} tells us that, for some~$n$, there are sequences $\{u_i\}_{i=1}^n \subset \overline{U}$ and $\{v_i\}_{i=1}^n \subset \underline{V}$, such that $$ g = u_1 v_1 u_2 v_2 \cdots u_n v_n .$$ However, \cref{ReduceSL2ZUnbdd} tells us that there is no uniform bound on~$n$ that is independent of~$g$. That is, although $\overline{U}$ and~$\underline{V}$ \emph{generate} $\SL(2,\mathbb{Z})$, they do not \emph{boundedly} generate $\SL(2,\mathbb{Z})$. Here is the official definition: \begin{defn} Suppose $X_1,\ldots,X_k$ are subgroups of~$\Gamma$. We say $X_1,\ldots,X_k$ \emph{boundedly generate}~$\Gamma$ if there is some~$n$, such that $$ \Gamma = (X_1 X_2 \cdots X_k)^n .$$ In other words, for every $g \in \Gamma$, there exist sequences $\{x_{i,j}\}_{j=1}^n \subseteq X_i$, such that $$ g = x_{1,1} x_{2,1} \cdots x_{k,1} \, x_{1,2} x_{2,2} \cdots x_{k,2} \, \cdots \, x_{1,n} x_{2,n} \cdots x_{k,n} .$$ The key point (which is what distinguishes this from just saying the subgroups \emph{generate}~$\Gamma$) is that there is an upper bound on~$n$ that is independent of~$g$. (Then, since $x_{i,j}$ is allowed to be the identity element, we can take the same value of~$n$ for all~$g$.) \end{defn}
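\begin{eg} Perhaps the simplest illustration of the definition is abelian: in $\mathbb{Z}^2$ (written additively), the cyclic subgroups $X_1 = \langle (1,0) \rangle$ and $X_2 = \langle (0,1) \rangle$ boundedly generate, with $n = 1$, since every element of $\mathbb{Z}^2$ can be written as $x_1 + x_2$ with $x_i \in X_i$. By contrast, \cref{ReduceSL2ZUnbdd} tells us that no such uniform bound exists for the subgroups $\overline{U}$ and~$\underline{V}$ of $\SL(2,\mathbb{Z})$. \end{eg}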
\begin{exers} \label{BddGenExers} When $\Gamma$ is boundedly generated by cyclic subgroups (i.e., $\Gamma = H_1 H_2 \cdots H_n$, with each $H_i$ cyclic), we usually just say $\Gamma$ is \emph{boundedly generated}. \noprelistbreak \begin{enumerate} \item \label{BddGenExers-quotient} Assume $\Gamma$ is boundedly generated (by cyclic subgroups), and $N$ is a normal subgroup of~$\Gamma$. Show that $\Gamma/N$ is boundedly generated (by cyclic subgroups). \item \label{BddGenExers-modnth} Assume $\Gamma$ is boundedly generated (by cyclic subgroups), and $n \in \mathbb{Z}^+$. Show $\bigl\langle\, g^n \mid g \in \Gamma \,\bigr\rangle$ has finite index in~$\Gamma$. \item \label{BddGenExers-FinInd} Let $\dot\Gamma$ be a finite-index subgroup of~$\Gamma$. Show that $\Gamma$ is boundedly generated (by cyclic subgroups) if and only if $\dot\Gamma$~is boundedly generated (by cyclic subgroups). \item \label{BddGenExers-SL2Z} Prove \cref{ReduceSL2ZUnbdd}. \hint{The free group~$F_2$ is not boundedly generated by cyclic subgroups \fullcsee{QuasiMExers}{F2NotBddGen}. You may assume this fact (without proof).} \item \label{BddGenExers-VariablePowers} (harder) Assume $\Gamma$ is boundedly generated (by cyclic subgroups), and $n$ is any function from~$\Gamma$ to~$\mathbb{Z}^+$. Show $\bigl\langle\, g^{n(g)} \mid g \in \Gamma \,\bigr\rangle$ has finite index in~$\Gamma$. \end{enumerate} \end{exers} \section{Bounded generation of $\SL \bigl( 2, \mathbb{Z}[\alpha] \bigr)$} \label{BddGenSL2ASect} Although there is no bound on the number of $\mathbb{Z}$ operations needed to reduce a $2 \times 2$ matrix to the identity, we can find a bound if we allow slightly more scalars in our operations: \begin{thm}[Carter-Keller-Paige \cite{CKP,Morris-CKP}] \label{SL2OBddGen} The subgroups $\overline{U}$ and~$\underline{V}$ boundedly generate\/ $\SL\bigl(2,\mathbb{Z} {[\alpha]} \bigr)$. \end{thm} In other words, there is some~$n$, such that $$ \SL\bigl(2,\mathbb{Z} {[\alpha]} \bigr) = (\overline{U} \underline{V})^n = \overline{U} \, \underline{V} \, \overline{U} \, \underline{V} \, \cdots\, \overline{U} \, \underline{V} .$$ \begin{rem} The proof of the general case of \cref{SL2OBddGen} is nonconstructive, so it does not provide an explicit bound on~$n$. In cases where a bound is known, it depends on~$\alpha$ and can be arbitrarily large if the algebraic integer $\alpha$ is very complicated. However, it is believed that there should be a uniform bound that is independent of~$\alpha$. In fact, we will see below that if certain number-theoretic conjectures are true, then fewer than 10 row operations should always suffice. \end{rem} The known proofs of \cref{SL2OBddGen} are long and complicated, so we will not try to explain them. Instead, we will give a very short and simple proof that relies on an unproved conjecture in Number Theory. \begin{defn} Let $r$ and~$q$ be nonzero integers. We say $r$ is a \emph{primitive root} modulo~$q$ if $$\{\, r, r^2, r^3, \ldots \,\} \mod q \ = \ \{1,2,3, \ldots, q-1\} .$$ \end{defn} \begin{eg} $3$ is a primitive root modulo~$7$, because $$ 3, \ 3^2 \equiv 2, \ 3^3 \equiv 6, \ 3^4 \equiv 4, \ 3^5 \equiv 5, \ 3^6 \equiv 1 $$ is a list of all the nonzero residues modulo~$7$. (By contrast, $2$ is \emph{not} a primitive root modulo~$7$: its powers are $2, 4, 1, 2, 4, 1, \ldots$, which yield only half of the nonzero residues.) \end{eg} Although it is still an open problem, we will assume: \begin{conj}[Artin's Conjecture] Let $r \in \mathbb{Z}$, such that $r \neq -1$ and $r$ is not a perfect square. Then there exist infinitely many primes~$q$, such that $r$ is a primitive root modulo~$q$. \end{conj} \begin{rem} Although this conjecture is still an open problem, it is implied by a certain generalization of the Riemann Hypothesis \cite{Hooley-ArtinConj}. \end{rem} Dirichlet proved there are infinitely many primes in any (appropriate) arithmetic progression \cite[p.~61]{Serre-CourseArith}, and we will assume a stronger form of Artin's Conjecture that says $q$ can be chosen to be in any such arithmetic progression: \begin{conj} \label{ArtinArithProgConj} Let $a,b,r \in \mathbb{Z}$, such that \begin{itemize} \item $\gcd(a,b) = 1$, and \item $r \neq -1$ and $r$~is not a perfect $m$th power for any $m > 1$.
\end{itemize} Then there exist infinitely many primes~$q$ in the arithmetic progression $\{a + k b\}_{k \in \mathbb{Z}}$, such that $r$ is a primitive root modulo~$q$. \end{conj} \begin{rem} We assume $r$ is not a perfect $m$th power in order to avoid the following obstruction: if $m > 1$, $a \equiv 1 \pmod{m}$, $m \mid b$, and $r$~is a perfect $m$th power, then $r$ is not a primitive root modulo any prime in the arithmetic progression $\{a + k b\}_{k = 0}^\infty$. \end{rem} To avoid the need for any Algebraic Number Theory, we will prove a slight variation of \cref{SL2OBddGen} that replaces the algebraic number~$\alpha$ with a rational number, namely, $1/p$: \begin{thm}[Carter-Keller-Paige \cite{CKP,Morris-CKP}] If $p$ is any prime, then every matrix in $\SL \bigl( 2, \mathbb{Z}[1/p] \bigr)$ can be reduced to the identity matrix with a bounded number of $\mathbb{Z}[1/p]$ row operations. \end{thm} \begin{proof} Let $\begin{bmatrix} a & c \\ b & d \end{bmatrix} \in \SL \bigl( 2, \mathbb{Z}[1/p] \bigr)$. Assume, for simplicity, that $a,b,c,d \in \mathbb{Z}$. We explain how to reduce this matrix to the identity with only five row operations: $$ \begin{bmatrix} a & c \\ b & d \end{bmatrix} \stackrel{\textstyle 1}{\leadsto} \begin{bmatrix} q & * \\ b & d \end{bmatrix} \stackrel{\textstyle 2}{\leadsto} \begin{bmatrix} q & * \\ p^\ell & * \end{bmatrix} \stackrel{\textstyle 3}{\leadsto} \begin{bmatrix} 1 & * \\ p^\ell & * \end{bmatrix} \stackrel{\textstyle 4}{\leadsto} \begin{bmatrix} 1 & * \\ 0 & * \end{bmatrix} \stackrel{\textstyle 5}{\leadsto} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} .$$ 1) Letting $r = p$ in \cref{ArtinArithProgConj}, we know there is some $k \in \mathbb{Z}$, such that $q = a + kb$ is prime, and $p$~is a primitive root modulo~$q$. Our first row operation adds $k$~times the second row to the first row. 2) Now, since $p$ is a primitive root modulo~$q$, we know there exists $\ell \in \mathbb{Z}^+$, such that $p^\ell \equiv b \pmod{q}$. This means there exists $k' \in \mathbb{Z}$, such that $p^\ell = b + k' q$. Our second row operation adds $k'$~times the first row to the second row. 3) Every element of $\mathbb{Z}[1/p]$ is a multiple of $p^\ell$ (since $p$ is a unit in this ring). Therefore, our third row operation can add anything at all to the top-left entry, so we can change this matrix entry to anything we want. We change it to a~$1$ (by subtracting $(q-1)p^{-\ell}$ times the second row from the first row). 4) Now, the fourth row operation can use the~$1$ in the top-left corner to kill the bottom-left entry by subtracting $p^\ell$ times the first row from the second row. 5) Since the original matrix {\smaller[2]$\begin{bmatrix} a & c \\ b & d \end{bmatrix}$} is in $\SL \bigl( 2, \mathbb{Z}[1/p] \bigr)$, and the row operations we applied do not affect the determinant, we know that the determinant of this matrix is~$1$. Therefore, the bottom-right entry must be~$1$. So the fifth row operation can kill the top-right entry. \end{proof}
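\begin{eg} For a concrete instance of these five steps (the specific numbers here are chosen only for illustration), take $p = 2$ and the matrix $\begin{bmatrix} 3 & 1 \\ 5 & 2 \end{bmatrix} \in \SL \bigl( 2, \mathbb{Z}[1/2] \bigr)$. We can take $k = 2$, so that $q = 3 + 2 \cdot 5 = 13$ is prime and $2$~is a primitive root modulo~$13$; then $2^9 = 512 \equiv 5 \pmod{13}$, so $\ell = 9$ and $k' = 39$. The five row operations are: $$ \begin{bmatrix} 3 & 1 \\ 5 & 2 \end{bmatrix} \stackrel{\textstyle 1}{\leadsto} \begin{bmatrix} 13 & 5 \\ 5 & 2 \end{bmatrix} \stackrel{\textstyle 2}{\leadsto} \begin{bmatrix} 13 & 5 \\ 512 & 197 \end{bmatrix} \stackrel{\textstyle 3}{\leadsto} \begin{bmatrix} 1 & 49/128 \\ 512 & 197 \end{bmatrix} \stackrel{\textstyle 4}{\leadsto} \begin{bmatrix} 1 & 49/128 \\ 0 & 1 \end{bmatrix} \stackrel{\textstyle 5}{\leadsto} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} . $$ \end{eg}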
\begin{rem}[Carter-Keller \cite{CarterKeller-BddElemGen,CarterKeller-ElemExp}] \label{SL3ZBddGen} If $n \ge 3$, then every matrix in $\SL(n,\mathbb{Z})$ can be reduced to the identity by using no more than $\frac{1}{2}(3n^2 - n) + 36$ integer row operations. Thus, although the Euclidean Algorithm will use an unbounded number of row operations, Carter and Keller showed that the extra freedom provided by larger matrices can be exploited to find a different algorithm that uses only a bounded number. \end{rem} \section{Bounded orbits and a proof for $\SL \bigl( 2 , \mathbb{Z}[\alpha] \bigr)$} We have seen that the subgroups $\overline{U}$ and~$\underline{V}$ boundedly generate $\SL \bigl( 2 , \mathbb{Z}[\alpha] \bigr)$. The other ingredient in our proof of \cref{SL2ANoAct} is that these subgroups have bounded orbits: \begin{thm}[Lifschitz-Morris \cite{LMActOnLine}] \label{BddOrbs} Let\/ $\Gamma$ be a finite-index subgroup of\/ $\SL \bigl( 2 , \mathbb{Z}[\alpha] \bigr)$. If\/ $\Gamma$ acts on\/~$\mathbb{R}$, then every $\overline{U}$-orbit is a bounded set, and every $\underline{V}$-orbit is a bounded set. \end{thm} Before discussing the proof of this theorem, let us explain how it is used: \begin{proof}[Proof of \cref{SL2ANoAct}] Suppose $\Gamma$ has a nontrivial action on~$\mathbb{R}$. (This will lead to a contradiction.) Pretend, for simplicity, that $\Gamma$ is all of $\SL \bigl( 2 , \mathbb{Z}[\alpha] \bigr)$, instead of being a finite-index subgroup. \setcounter{step}{0} \begin{step} \label{AssumeNoFP} We may assume the action has no fixed points. \end{step} Let $F$ be the set of fixed points of the $\Gamma$-action on~$\mathbb{R}$. Then $F$ is obviously a closed set, so its complement is open. (Also, the complement is nonempty, because the $\Gamma$-action is nontrivial.) Thus, if we let $I$ be a connected component of the complement, then $I$ is an open interval in~$\mathbb{R}$. It is easy to see that $I$ is $\Gamma$-invariant (because, by definition, the endpoints of~$I$ are fixed points for~$\Gamma$). So $\Gamma$ acts on~$I$. By definition, $I$ is contained in the \emph{complement} of the set of fixed points, so the $\Gamma$-action on~$I$ has no fixed points. Since $I$ is homeomorphic to~$\mathbb{R}$, this provides an action of~$\Gamma$ on~$\mathbb{R}$ with no fixed points. \begin{step} \label{OrbitsBdd} We show that every $\Gamma$-orbit on~$\mathbb{R}$ is bounded. \end{step} Fix some $p \in \mathbb{R}$. \begin{itemize} \item From \cref{BddOrbs}, we know that the $\underline{V}$-orbit of~$p$ is bounded, so it has a finite infimum~$a_1$ and a finite supremum~$b_1$. Thus, $\underline{V} p$ is contained in the compact interval $[a_1,b_1]$. \item From \cref{BddOrbs}, we know that the $\overline{U}$-orbit of~$a_1$ has a finite infimum~$a_2$, and the $\overline{U}$-orbit of~$b_1$ has a finite supremum~$b_2$. Since every element of~$\Gamma$ acts via an order-preserving homeomorphism of~$\mathbb{R}$, we know that $\overline{U}\underline{V} p \subseteq [a_2,b_2]$. \item Continuing in this way, we see that $(\overline{U}\underline{V})^n p$ is contained in a compact interval $[a_{2n}, b_{2n}]$ for every $n \in \mathbb{Z}^+$. \end{itemize} However, since $\overline{U}$ and~$\underline{V}$ boundedly generate~$\Gamma$ (see \cref{SL2OBddGen}), we know there is some~$n$, such that $(\overline{U}\underline{V})^n = \Gamma$. Therefore, the $\Gamma$-orbit of~$p$ is a bounded set. \begin{step} We obtain a contradiction. \end{step} Fix some $p \in \mathbb{R}$. From \cref{OrbitsBdd}, we know $\Gamma p$ is a bounded set, so it has a finite supremum~$b$. Since $\Gamma p$ is a $\Gamma$-invariant set, and $b$~is a point that is defined from this set, we know that the point~$b$ must be fixed by~$\Gamma$. This contradicts \cref{AssumeNoFP}, which tells us that there are no fixed points.
\end{proof} Instead of actually proving \cref{BddOrbs}, we will prove a simpler version that replaces $\alpha$ with the rational number $1/p$ (much as in \cref{BddGenSL2ASect}): \begin{thm}[Lifschitz-Morris \cite{LMActOnLine}] Let\/ $\Gamma = \SL \bigl( 2 , \mathbb{Z}[1/p] \bigr)$ {\upshape(}or a finite-index subgroup{\upshape)}, where $p$~is prime. If\/ $\Gamma$ acts on\/~$\mathbb{R}$, then every $\overline{U}$-orbit is a bounded set, and every $\underline{V}$-orbit is a bounded set. \end{thm} \begin{proof}[Idea of proof] Pretend, for simplicity, that $\Gamma$ is all of $\SL \bigl( 2 , \mathbb{Z}[1/p] \bigr)$, instead of being a finite-index subgroup. For $u,v \in \mathbb{Z}[1/p]$, let $$\overline u = \begin{bmatrix}1 & u \\ 0 & 1 \end{bmatrix} \in \overline{U} \text{\quad and\quad} \underline v = \begin{bmatrix}1 & 0 \\ v & 1\end{bmatrix} \in \underline{V} . $$ Also let $$ \midline{p} = \begin{bmatrix}p & 0 \\ 0 & 1/p\end{bmatrix} \in \Gamma .$$ A simple calculation (namely, $\midline{p} \, \overline{u} \, \midline{p}^{-1} = \overline{p^2 u}$ and $\midline{p} \, \underline{v} \, \midline{p}^{-1} = \underline{v/p^2}$) shows \begin{align} \label{CommRelns} \text{$\midline{p}^{n} \overline{u} \midline{p}^{-n} \to \overline{\infty}$ \quad and\quad $\midline{p}^{n} \underline{v} \midline{p}^{-n} \to \underline{0}$ \qquad as $n \to \infty$} . \end{align} Suppose some $\overline{U}$-orbit is not a bounded set. Then either it is not bounded above, or it is not bounded below. Assume, without loss of generality, that it is not bounded above. Since $\underline{V}$ is conjugate to~$\overline{U}$, this implies that some $\underline{V}$-orbit is also not bounded above. In fact, it can be shown that there is a single point $x \in \mathbb{R}$, such that \begin{itemize} \item both the $\overline{U}$-orbit and the $\underline{V}$-orbit of~$x$ are not bounded above, and \item $\midline{p}$ fixes~$x$. \end{itemize} Fix $\overline u \in \overline{U}$ with $\overline u (x) > x$. Since the $\underline{V}$-orbit of~$x$ is not bounded above, we can choose $\underline v \in \underline{V}$ with $\overline u (x) < \underline v(x)$. Then, since $\midline{p}$ is order-preserving, we have \begin{align*} \midline{p}^n \bigl( \overline{u} (x) \bigr) < \midline{p}^n \bigl( \underline{v}(x) \bigr) . \end{align*} However, as $n \to \infty$, we have \begin{align*} \midline{p}^n \bigl( \overline{u} (x) \bigr) &= (\midline{p}^n \overline{u} \midline{p}^{-n})(x) && \text{($\midline{p}$ fixes $x$)} \\&\to \overline{\infty}(x) && \text{(\ref{CommRelns})} \\&\to \infty && \text{($\overline{U}$-orbit is not bounded above)} \\ \intertext{and} \midline{p}^n \bigl( \underline{v} (x) \bigr) &= (\midline{p}^n \underline{v} \midline{p}^{-n})(x) \to \underline{0}(x) . \end{align*} Therefore $\infty$ is less than the finite number~$\underline{0}(x)$. This is a contradiction. \end{proof} \begin{exer} \label{BddGenIsom} Assume $\Gamma$ is boundedly generated (by cyclic subgroups). Show that if $\Gamma$ acts by \emph{isometries} on a metric space~$X$, and every cyclic subgroup has a bounded orbit on~$X$, then every $\Gamma$-orbit on~$X$ is bounded. \end{exer} \section{Implications for other arithmetic groups of higher rank} \Cref{SLnZNotLO,SL2ANoAct} each provide examples of arithmetic groups that cannot act on the line. In both cases, the proof was based on unipotent elements, which means that they apply only to arithmetic groups that are \emph{not} cocompact (cf. \cref{CocpctOpen}).
To complete the treatment of such groups, it will suffice to consider only a few more examples: \begin{thm}[Chernousov-Lifschitz-Morris \cite{ChernousovLifschitzMorris-AlmMin}] If\/ $\Gamma$ is any noncocompact, irreducible arithmetic group, and\/ $\rank_{\real} \Gamma > 1$, then\/ $\Gamma$ contains a finite-index subgroup of either\/ $\SL \bigl(2, \mathbb{Z}[\alpha] \bigr)$ {\upshape(}for some~$\alpha${\upshape)} or a noncocompact arithmetic subgroup of either\/ $\SL(3,\mathbb{R})$ or\/ $\SL(3,\mathbb{C})$. \end{thm} Combining this with \cref{SL2ANoAct} establishes the following observation: \begin{cor} Proving the following very special case would establish \cref{NoActConj} under the additional assumption that\/ $\Gamma$ is not cocompact. \end{cor} \begin{conj} \label{LattSL3NoAct} Noncocompact arithmetic subgroups of\/ $\SL(3,\mathbb{R})$ and\/ $\SL(3,\mathbb{C})$ have no faithful action on\/~$\mathbb{R}$. \end{conj} One possible approach is to use bounded generation: \begin{thm}[Lifschitz-Morris \cite{LMActOnLine}] Let\/ $\Gamma$ be a noncocompact arithmetic subgroup of\/ $\SL(3,\mathbb{R})$ or\/ $\SL(3,\mathbb{C})$. If some finite-index subgroup of\/~$\Gamma$ is boundedly generated by unipotent subgroups, then\/ $\Gamma$ does not have a faithful action on\/~$\mathbb{R}$. \end{thm} This implies that \cref{LattSL3NoAct} is a consequence of the following fundamental conjecture in the theory of arithmetic groups: \begin{conj}[Rapinchuk, 1989] \label{RapinchukConj} If\/ $\Gamma$ is a noncocompact, irreducible arithmetic group, and\/ $\rank_{\real} \Gamma > 1$, then\/ $\Gamma$ contains a finite-index subgroup that is boundedly generated by unipotent subgroups. \end{conj} In fact, to establish \cref{LattSL3NoAct} (and therefore also the entire noncocompact case of \cref{NoActConj}), it would suffice to prove the special case of \cref{RapinchukConj} in which $\Gamma$ is an arithmetic subgroup of either $\SL(3,\mathbb{R})$ or $\SL(3,\mathbb{C})$. \lecture{What is an amenable group?} \label{AmenLect} Amenability is a very fundamental notion in group theory --- there are literally dozens of different definitions that single out exactly the same class of groups. We will discuss just a few of these many viewpoints, and, for simplicity, we will restrict our attention to \emph{discrete} groups that are countable, ignoring the important applications of this notion in the theory of topological groups. Much more information can be found in the monographs \cite{PatersonBook} and~\cite{PierBook}. \section{Ponzi schemes} Let us begin with an amusing example that illustrates one of the many definitions. \begin{eg} \label{PonziFreeGrp} Consider the free group $F_2 = \langle a,b \rangle$, and let us assume that every element of the group starts with \$1. Thus, if $f_0(g)$ denotes the amount of money possessed by element~$g$ at time $t = 0$, then $$ \text{$f_0(g) = \$1$, \quad for all~$g \in F_2$.} $$ Now, everyone will pass their dollar to the person next to them who is closer to the identity. (That is, if $g = x_1x_2\cdots x_n$ is a reduced word, with $x_i \in \{a^{\pm1}, b^{\pm1} \}$ for each~$i$, then $g$~passes its dollar to $g' = x_1x_2\cdots x_{n-1}$. The identity element has nowhere to pass its dollar, so it keeps the money it started with.) Then, letting $f_1$ denote the amount of money possessed now (at time $t = 1$), we have $$ \text{$f_1(g) = \$3$ \quad for all~$g$ \ (except that $f_1(e) = \$5$)}. $$ (Each nonidentity element passes its own dollar away, but receives a dollar from each of its three one-letter extensions; the identity keeps its own dollar and receives one from each of $a^{\pm1}$ and~$b^{\pm1}$.) Thus, everyone has more than doubled their money.
Furthermore, this result was achieved by moving the money only a bounded distance. \end{eg} Such an arrangement is called a \emph{Ponzi scheme} on the group~$F_2$: \begin{defn} A \emph{Ponzi scheme} on a group~$\Gamma$ is a function $M \colon \Gamma \to \Gamma$, such that: \begin{enumerate} \item $\# M^{-1}(g) \ge 2$ for all $g \in \Gamma$ $$ \text{(\emph{everyone doubles their money if each $g$ passes its dollar to $M(g)$}),} $$ and \item there is a finite subset~$S$ of~$\Gamma$, such that $M(g) \in gS$ for all $g \in \Gamma$ $$ \text{(\emph{money moves only a bounded distance}).} $$ \end{enumerate} \end{defn} From \cref{PonziFreeGrp}, we know there is a Ponzi scheme on the free group~$F_2$. However, not all groups have a Ponzi scheme: \begin{exer} \label{AbelNoPonzi} Show that there does not exist a Ponzi scheme on the abelian group~$\mathbb{Z}^n$. \\ \hint{Any group with a Ponzi scheme must have exponential growth, because $f_t(g)$ is exponentially large, but the money moves only a linear distance.} \end{exer} More generally, we will see later that no solvable group has a Ponzi scheme (even though solvable groups can have exponential growth). This is because solvable groups are ``amenable\zz,'' and the nonexistence of a Ponzi scheme can be taken as the definition of amenability: \begin{thm}[{}{Gromov \cite[p.~328]{Gromov-MetricStructures}}] \label{PonziIffNotAmenThm} There exists a Ponzi scheme on\/~$\Gamma$ if and only if\/ $\Gamma$~is not amenable. \end{thm} This theorem provides a nice description of what it means for a group to \emph{not} be amenable, but it does not directly provide any positive information about a group that \emph{is} amenable. Most of the other definitions we discuss are better for that. \section{Almost-invariant subsets} Instead of using Ponzi schemes, we will adopt the following definition: $$ \text{$\Gamma$ is amenable $\iff$ $\Gamma$ has \emph{almost-invariant} finite subsets.} $$ To see what this means, let us consider an example: \begin{eg} Let $\Gamma = \mathbb{Z}^2 = \langle a, b \rangle$, where $a = (1,0)$ and $b = (0,1)$. If we let $F$ be a large ball in~$\Gamma$, then $F$ is very close to being invariant under the left-translation by~$a$ and~$b$: $$ \text{ $\#(F \cap aF) > (1-\epsilon) \, \#F$ \ and \ $\#(F \cap bF) > (1-\epsilon) \, \#F$ } , $$ where $\epsilon$ can be as small as we like, if we take~$F$ to be sufficiently large. (For instance, if $F = \{0,1,\ldots,N\}^2$, then $\#(F \cap aF) = N(N+1) > (1-\epsilon)(N+1)^2$ whenever $N > 1/\epsilon - 1$.) We say that $F$ is ``almost invariant:'' \end{eg} \begin{defn} Let $\Gamma$ be a group, and fix a finite subset~$S$ of~$\Gamma$ and some $\epsilon > 0$. A finite, nonempty subset~$F$ of~$\Gamma$ is \emph{almost invariant} if $$ \text{$\#(F \cap aF) > (1-\epsilon) \, \#F$, \quad $\forall a \in S$} .$$ \end{defn} \begin{defn} \label{AmenDefn} $\Gamma$ is \emph{amenable} if and only if $\Gamma$ has almost-invariant finite subsets (for all finite~$S$ and all $\epsilon > 0$). \end{defn} \begin{exers} \label{FolnerEx} Use \cref{AmenDefn} to show: \begin{enumerate} \item \label{FolnerEx-Free} The free group $F_2$ is \emph{not} amenable. \hint{If $F$ is almost invariant, then the first letter of most of the words in~$F$ must be \emph{both~$a$ and~$b$}.} \item \label{FolnerEx-union} If $\Gamma$ is amenable, $S$ is a finite subset of~$\Gamma$, and $\epsilon > 0$, then there exists a finite subset~$F$ of~$\Gamma$, such that $\#(SF) < (1 + \epsilon) \, \#F$, where $$SF = \{\, sf \mid s \in S, f \in F\,\} .$$ \item \label{FolnerEx-QI} Amenability is invariant under quasi-isometry. \\
(This means that you should assume $\Gamma_1$ is quasi-isometric to~$\Gamma_2$, and prove that $\Gamma_1$ is amenable if and only if $\Gamma_2$ is amenable.) \hint{Fix $c > 1$. Show $\Gamma$ is not amenable iff it has a finite subset~$S$, such that $\#(SF) \ge c \cdot \#F$ for every finite subset~$F$ of~$\Gamma$.} \item \label{FolnerEx-noPonzi} Amenable groups do not have Ponzi schemes. \item \label{FolnerEx-Ponzi} Nonamenable groups have Ponzi schemes. \hint{Suppose $A_1,\ldots,A_n$ are finite sets, and $a_1,\ldots,a_n \in \natural$. Show (by induction on $a_1 + \cdots + a_n$) that if $\# \bigcup_{i \in I} A_i \ge \sum_{i \in I} a_i$ for every $I \subseteq \{1,\ldots,n\}$, then there exist $A_i' \subseteq A_i$, such that $\#A_i' = a_i$ and $A_1',\ldots,A_n'$ are pairwise disjoint.} \end{enumerate} \end{exers} \begin{term} \ \noprelistbreak \begin{enumerate} \item Since the notion of ``almost invariant'' depends on the choice of $S$ and~$\epsilon$, many authors say that $F$ is ``$(S,\epsilon)$-invariant\zz.'' \item An almost-invariant set can also be called a ``F\o lner set\zz.'' More precisely, a sequence $\{F_n\}$ of nonempty, finite subsets of~$\Gamma$ is said to be a \emph{F\o lner sequence} if, for every finite subset~$S$ of~$\Gamma$, every $\epsilon > 0$, and every sufficiently large~$n$, the set~$F_n$ is $(S,\epsilon)$-invariant. (The set~$F_n$ is often called a ``F\o lner set\zz.'') Thus, \cref{AmenDefn} can be restated as saying that $\Gamma$ is amenable if and only if it has a F\o lner sequence. \end{enumerate} \end{term} \section{Average values and invariant measures} \label{AvgValSect} In many situations, it is difficult to directly employ the almost-invariant sets provided by \cref{AmenDefn}. This \lcnamecref{AvgValSect} provides some consequences that are often easier to apply. For example, every bounded function on~$\Gamma$ has an average value: \begin{defn} A \emph{mean} on $\ell^\infty(\Gamma)$ is a linear functional $A \colon \ell^\infty(\Gamma) \to \mathbb{C}$, such that $A(\varphi)$ satisfies two axioms that would be expected of the average value of~$\varphi$: \begin{itemize} \item \emph{the average value of a constant function is that constant} $$\text{$A(c) = c$ \quad if $c$ is a constant} ,$$ and \item \emph{the average value of a positive-valued function cannot be negative} $$ \text{$A(\varphi) \ge 0$ \quad if $\varphi \ge 0$} .$$ \end{itemize} The mean is \emph{left-invariant} if $A \bigl( \varphi^g \bigr) = A \bigl( \varphi \bigr)$ for all $\varphi \in \ell^\infty(\Gamma)$ and all $g \in \Gamma$, where $\varphi^g(x) = \varphi(gx)$. \end{defn} \begin{prop} \label{AmenAvgVal} $\Gamma$ is amenable if and only if there exists a left-invariant mean on $\ell^\infty(\Gamma)$. \end{prop} \begin{proof}[Proof ($\Rightarrow$)] Choose a sequence~$\{F_n\}_{n=1}^\infty$ of almost-invariant sets with $\epsilon \to 0$ as $n \to \infty$, and let $$A_n(\varphi) = \frac{1}{\#F_n}\sum_{x \in F_n} \varphi(x) .$$ That is, $A_n(\varphi)$ is the average value of~$\varphi$ on the finite set~$F_n$, so $A_n$ is obviously a mean on $\ell^\infty(\Gamma)$. Since the set $F_n$ is almost invariant, the mean~$A_n$ is close to being left-invariant. To obtain perfect left-invariance, we take a limit: $$ A(\varphi) = \lim_{k \to \infty} A_{n_k}(\varphi) ,$$ where $\{n_k\}$ is a subsequence chosen so that the limit exists.
However, if we choose different subsequences for different functions~$\varphi$, then the limit may not be linear or left-invariant --- we need to be consistent in our choice of $A(\varphi)$ for all~$\varphi$. This can be accomplished in various ways: \begin{itemize} \item (logician's approach) An \emph{ultrafilter} on~$\natural$ tells us which subsequences are ``good'' and which are ``bad\zz.'' So the choice of an ultrafilter easily leads to a consistent value for $A(\varphi)$. \item (analyst's approach) Define a linear functional~$A_0$ that \begin{itemize} \item takes the value~$1$ on the constant function~$1$, and \item is $0$ on every function of the form $\varphi^g - \varphi$. \end{itemize} Then the \emph{Hahn-Banach Theorem} tells us that $A_0$ extends to a linear functional defined on all of~$\ell^\infty(\Gamma)$. \item (other viewpoints) Use \emph{Zorn's Lemma}, \emph{Tychonoff's Theorem}, or some other version of the \emph{Axiom of Choice}. \qedhere \end{itemize} \end{proof} The following consequence is very important in the theory of group actions: \begin{cor} \label{ActHasInvtMeas} Suppose \noprelistbreak \begin{itemize} \item $\Gamma$ is amenable, and \item $\Gamma$ acts on a compact metric space~$X$ {\upshape(}by homeomorphisms\/{\upshape)}. \end{itemize} Then there exists a $\Gamma$-invariant Borel probability measure~$\mu$ on~$X$. \end{cor} \begin{proof} Fix a basepoint $x_0 \in X$. For any $f \in C(X)$, we can define a function $\varphi_f \colon \Gamma \to \mathbb{C}$ by restricting~$f$ to the $\Gamma$-orbit of~$x_0$. More precisely, $$ \varphi_f(g) = f( gx_0) .$$ Since $f$ is continuous and~$X$ is compact, we know that $f$ is bounded. So $\varphi_f$ is also bounded. Therefore, \cref{AmenAvgVal} tells us that it has an average value $A(\varphi_f)$, which we call $\mu(f)$. Since $A$ is a mean, it is easy to see that $\mu$ is a positive linear functional of finite norm (in fact, $\|\mu\| = 1$). So the Riesz Representation Theorem tells us that $\mu$ is given by a Borel measure on~$X$. Since $A$ is left-invariant and $A(1) = 1$, we see that $\mu$ is $\Gamma$-invariant and $\mu(X) = 1$, so $\mu$ is a $\Gamma$-invariant probability measure. \end{proof} \begin{rem} The converse is true: if every $\Gamma$-action on every compact metric space has a $\Gamma$-invariant probability measure, then $\Gamma$ is amenable. So this is another possible choice for the definition of amenability. \end{rem} \Cref{BddCohoLect} will discuss the ``bounded cohomology group'' $H_b^n(\Gamma;V)$, which is defined exactly like the usual group cohomology, except that all cochains are required to be bounded functions. This notion provides another definition of amenability: \begin{thm}[B.\,E.\,Johnson \cite{Johnson-CohoBanach}] $\Gamma$ is amenable if and only if $H_b^n(\Gamma;V) = 0$ for every\/ $\Gamma$-module~$V$ that is the dual of a Banach space. \end{thm} \begin{proof}[Proof of $(\Rightarrow)$] Recall that if $\Gamma$ is a finite group, and $V$~is a $\Gamma$-module (such that multiplication by the scalar $|\Gamma|$ is invertible), then one can prove $H^n(\Gamma;V) = 0$ by averaging: for an $n$-cocycle $\alpha \colon \Gamma^n \to V$, define $$\overline{\alpha}(g_1,\ldots,g_{n-1}) = \frac{1}{|\Gamma|} \sum_{g \in \Gamma} \alpha(g_1,\ldots,g_{n-1}, g) .$$ Then $\overline{\alpha}$ is an $(n-1)$-cochain, and $\delta \overline{\alpha} = \pm\alpha$. So $\alpha$ is a coboundary, and is therefore trivial in cohomology.
Since $\Gamma$ is amenable, we can do exactly this kind of averaging for any \emph{bounded} cocycle. See \cref{Hbb(amen;R)=0} for more details. \end{proof} When $\Gamma$ is amenable, \cref{AmenAvgVal} allows us to take the average value of the characteristic function of any subset of~$\Gamma$. This leads to von\,Neumann's original definition of amenability \cite{vonNeumann-AllgemeineMasses}: \begin{cor} \label{VonNeumann} $\Gamma$ is amenable if and only if there exists a finitely additive, translation-invariant probability measure that is defined on all of the subsets of\/~$\Gamma$. \end{cor} More precisely, if we let $\power{\Gamma}$ be the collection of all subsets of~$\Gamma$, then the conclusion means there is a function $\mu \colon \power{\Gamma} \to [0,1]$, such that: \begin{itemize} \item $\mu(X_1 \cup X_2) = \mu(X_1) + \mu(X_2)$ if $X_1$ and~$X_2$ are disjoint, \item $\mu(\Gamma) = 1$, and \item $\mu(gX) = \mu(X)$ for all $g \in \Gamma$ and $X \subseteq \Gamma$. \end{itemize} This definition was motivated by von\,Neumann's interest in the famous \emph{Banach-Tarski paradox}. The subjects are connected via the following notion: \begin{defn} A \emph{paradoxical decomposition} of~$\Gamma$ is a representation $$ \Gamma = \left( \coprod_{i=1}^m A_i \right) \coprod \left( \coprod_{j=1}^n B_j \right) \qquad \text{(disjoint unions)} ,$$ such that, for some $g_1,\ldots,g_m,h_1,\ldots,h_n \in \Gamma$, we have $$\Gamma = \bigcup_{i=1}^m g_i A_i = \bigcup_{j=1}^n h_j B_j .$$ \end{defn} \begin{exers} \label{VonNeumannEx} \ \noprelistbreak \begin{enumerate} \item \label{VonNeumannEx-noParadox} Show that if $\Gamma$ is amenable, then $\Gamma$ does not have a paradoxical decomposition. \item \label{VonNeumannEx-free} Find an explicit paradoxical decomposition of a free group. \item \label{VonNeumannEx-Paradox} Show that if $\Gamma$ is not amenable, then $\Gamma$ has a paradoxical decomposition. \\ \hint{There exists a Ponzi scheme.} \end{enumerate} \end{exers} \begin{rems} \ \noprelistbreak \begin{enumerate} \item von\,Neumann used the German word ``messbar'' (which can be translated as ``measurable''), not the currently accepted term ``amenable\zz,'' and his condition was not proved to be equivalent to \cref{AmenDefn} until much later (by F\o lner \cite{Folner-GrpsBanachMean}). \item See \cite{Wagon-BanachTarski} for much more about the Banach-Tarski paradox, paradoxical decompositions, and the relevance of amenability. \end{enumerate} \end{rems} We have now seen several proofs that the existence of almost-invariant sets implies some other notion that is equivalent to amenability. Here is a proof that goes the other way. \begin{prop} If there is an invariant mean~$A$ on $\ell^\infty(\Gamma)$, then $\Gamma$ has almost-invariant finite sets. \end{prop} \begin{proof}[Idea of proof] The dual of~$\ell^1(\Gamma)$ is~$\ell^\infty(\Gamma)$, so $\ell^1(\Gamma)$ is dense in the dual of $\ell^\infty(\Gamma)$, in an appropriate weak topology. Hence, there is a sequence $\{f_n\} \subset \ell^1(\Gamma)$, such that $f_n \to A$. Since $A$~is invariant, we may choose some large~$n$ so that $f_n$ is close to being invariant. Then, for an appropriate $c> 0$, the finite set $\{\, x \mid |f_n(x)| > c \,\}$ is almost invariant. \end{proof} \section{Examples of amenable groups} Much of the following exercise can be proved fairly directly by using almost-invariant sets, but it will be much easier to use other characterizations of amenability for some of the parts.
\begin{exers} \label{EgAmenEx} Show that all groups of the following types are amenable: \begin{enumerate} \item \label{EgAmenEx-finite} finite groups \item \label{EgAmenEx-cyclic} cyclic groups \item \label{EgAmenEx-product} $\text{amenable} \times \text{amenable}$ \hintit{I.e., if $\Gamma_1$ and~$\Gamma_2$ are amenable, then $\Gamma_1 \times \Gamma_2$ is amenable.} \item \label{EgAmenEx-abelian} abelian groups \item \label{EgAmenEx-extension} amenable by amenable \qquad \hintit{I.e., if there is a normal subgroup~$N$ of~$\Gamma$, such that $N$ and $\Gamma/N$ are amenable, then $\Gamma$ is amenable.} \item \label{EgAmenEx-solv} solvable groups \item \label{EgAmenEx-subgrp} subgroups of amenable groups \item \label{EgAmenEx-quotient} quotients of amenable groups \item \label{EgAmenEx-locally} locally amenable groups \qquad \hintit{I.e., if every \emph{finitely generated} subgroup of~$\Gamma$ is amenable, then $\Gamma$ is amenable.} \item \label{EgAmenEx-limit} direct limits of amenable groups \qquad \hintit{I.e., if $\mathcal{A}$ is a collection of amenable groups that is totally ordered under inclusion, then $\bigcup \mathcal{A}$ is amenable.} \item \label{EgAmenEx-subexp} groups of subexponential growth \qquad \hintit{I.e., if there is a finite generating set~$S$ of~$\Gamma$, such that $\lim_{n \to \infty} (\# S^n)/e^{\epsilon n} = 0$ for every $\epsilon > 0$, then $\Gamma$ is amenable.} \end{enumerate} \end{exers} \begin{rems} \label{AmenClasses} \ \noprelistbreak \begin{enumerate} \item \label{AmenClasses-elem} From \cref{EgAmenEx}, we see that any group obtained from finite groups and abelian groups by repeatedly taking extensions, subgroups, quotients, and direct limits must be amenable. These ``obvious'' examples of amenable groups are said to be \emph{elementary amenable}. \item \label{AmenClasses-Grigorchuk} The so-called ``Grigorchuk group'' is an example of a group with subexponential growth that is not elementary amenable \cite{GrigorchukPak-Intermediate}. \item \label{AmenClasses-Basilica} A group is said to be \emph{subexponentially amenable} if it can be constructed from groups of subexponential growth by repeated application of extensions, subgroups, quotients, and direct limits. The ``Basilica group'' is an example of an amenable group that is not subexponentially amenable \cite{BartholdiVirag-AmenRandomWalks}. \end{enumerate} Thus, the following obvious inclusions are proper: $$ \begin{matrix} \hfill \{\text{finite}\} \subsetneq \\[2pt] \{\text{abelian}\} \subsetneq\{\text{solvable}\} \subsetneq \end{matrix} \left\{ \begin{matrix} \text{elementary} \\ \text{amenable} \end{matrix} \right\} \subsetneq \left\{ \begin{matrix} \text{subexponentially} \\ \text{amenable} \end{matrix} \right\} \subsetneq \left\{ \begin{matrix} \text{amenable} \end{matrix} \right\} . $$ \end{rems} \begin{rem} By combining \fullcref{FolnerEx}{Free} with \fullcref{EgAmenEx}{subgrp}, we see that if $\Gamma$ has a nonabelian free subgroup, then $\Gamma$ is not amenable. The converse is often called the ``von\,Neumann Conjecture\zz,'' but it was shown to be false in 1980 when Ol'shanskii proved that the ``Tarski monster'' is not amenable. This is a group in which every element has finite order, so it certainly does not contain free subgroups \cite{OlshanskiiSapir-TorsionByCycic}. N.\,Monod \cite{Monod-PiecewiseProj} has recently constructed counterexamples that are much less complicated.
\end{rem} \begin{warn} In the theory of topological groups, it is \emph{not} true that every subgroup of an amenable group is amenable --- only the \emph{closed} subgroups need to be amenable. In particular, many amenable topological groups contain nonabelian free subgroups (but such subgroups cannot be closed). \end{warn} \section{Applications to actions on the circle} We are discussing amenability in these lectures because it plays a key role in the proofs of Ghys \cite{GhysCercle} and Burger-Monod \cite{BurgerMonod-BddCohoLatts} that large arithmetic groups must always have a finite orbit when they act on the circle (cf.\ \cref{GhysFP}). Here is a much simpler example of the connection between amenability and finite orbits: \begin{prop} \label{AmenOnCircle} Suppose \begin{itemize} \item $\Gamma$ is amenable, and \item $\Gamma$ acts on $S^1$ {\upshape(}by orientation-preserving homeomorphisms\/{\upshape)}. \end{itemize} Then either \begin{enumerate} \item the abelianization of\/~$\Gamma$ is infinite, or \item \label{AmenOnCircle-FinOrb} the action has a finite orbit. \end{enumerate} \end{prop} \begin{proof} From \cref{ActHasInvtMeas}, we know there is a $\Gamma$-invariant probability measure~$\mu$ on~$S^1$. \setcounter{case}{0} \begin{case} Assume $\mu$ has an atom. \end{case} This assumption means there exists some point $p \in S^1$ that has positive measure: $\mu \bigl( \{p\} \bigr) > 0$. Since $\mu$ is $\Gamma$-invariant, every point in the orbit of~$p$ must have the same measure. However, since $\mu$ is a probability measure, we know that the sum of the measures of these points is finite. Therefore the orbit of~$p$ must be finite (since the sum of infinitely many copies of the same positive number is infinite). \begin{case} Assume $\mu$ has no atoms. \end{case} Assume, for simplicity, that the support of~$\mu$ is all of~$S^1$. (That is, no nonempty open interval has measure~$0$.) Then the assumption of this case implies that, after a continuous change of coordinates, the measure~$\mu$ is simply the Lebesgue measure on~$S^1$. Since $\Gamma$ preserves this measure (and is orientation-preserving), this implies that $\Gamma$ acts on the circle by rotations. Since the group of rotations is abelian, we conclude that the abelianization of~$\Gamma$ is infinite. (Or else the image of~$\Gamma$ in the rotation group is finite, which means that every orbit is finite.) \end{proof} \begin{rem} If we assume that $\Gamma$ is infinite and finitely generated, then the conclusion of the \lcnamecref{AmenOnCircle} can be strengthened: it can be shown that the abelianization of~$\Gamma$ is infinite \cite{Morris-AmenOnLine}, so there is no need for alternative~\pref{AmenOnCircle-FinOrb}. \end{rem} Large arithmetic groups always have finite abelianization, so it might seem that the theorem of Ghys and Burger-Monod could be obtained directly from \cref{AmenOnCircle}. Unfortunately, that is not possible, because arithmetic groups are not amenable (since they contain free subgroups). Instead, Ghys's proof is based on the following more sophisticated observations: \begin{prop} \label{AmenOnConvex} Suppose \begin{itemize} \item $\Gamma$ is amenable, \item $\Gamma$ acts by {\upshape(}continuous{\upshape)} linear maps on a locally convex vector space~$V$, and \item $C$ is a nonempty, compact, convex, $\Gamma$-invariant subset of~$V$. \end{itemize} Then $\Gamma$ has a fixed point in~$C$. 
\end{prop} \begin{proof} $\Gamma$ acts on the compact set~$C$ by homeomorphisms, so \cref{ActHasInvtMeas} provides a $\Gamma$-invariant probability measure~$\mu$ on~$C$. Let $p$ be the center of mass of~$\mu$. Then $p$ is fixed by~$\Gamma$, since $\mu$ is $\Gamma$-invariant. Also, since $C$ is convex, we know $p \in C$. \end{proof} \begin{rem} \Cref{AmenOnConvex} has a converse: if $\Gamma$ has a fixed point in every nonempty, compact, convex, $\Gamma$-invariant set, then $\Gamma$ is amenable. So this fixed-point property provides yet another possible definition of amenability. \end{rem} \begin{cor}[{}{Furstenberg}] \label{FurstenbergLemma} Suppose \begin{itemize} \item $\Gamma$ is an arithmetic subgroup of\/ $\SL(3,\mathbb{R}) = G$, \item $\Gamma$ acts on~$S^1$ {\upshape(}by homeomorphisms{\upshape)}, \item $P = \text{\smaller[2] $\begin{bmatrix} * & * & * \\ & * & * \\ & & * \end{bmatrix}$} \subset G$, and \item $\Prob(S^1) = \{ \text{probability measures on~$S^1$} \}$, with the natural weak topology. \end{itemize} Then there exists a\/ $\Gamma$-equivariant measurable function $\overline\psi \colon G/P \to \Prob(S^1)$. \end{cor} \begin{proof} Let $$ \mathcal{C} = \bigl\{\, \text{measurable $\Gamma$-equivariant}\ \psi \colon G \to \Prob(S^1) \,\bigr\} $$ (where functions that differ only on a set of measure~$0$ are identified). It is easy to see that $\mathcal{C}$ is convex. Then, since the Banach-Alaoglu Theorem tells us that weak$^*$-closed, convex, bounded sets are compact, we see that $\mathcal{C}$ is compact in an appropriate weak topology. Also, $P$ acts continuously on~$\mathcal{C}$, via $\psi^p(g) = \psi(gp)$. We know that solvable groups are amenable \fullcsee{EgAmenEx}{solv}. Although we have only been considering discrete groups, the same is true for topological groups in general. So $P$ is amenable (because it is solvable). Therefore (a generalization of) \cref{AmenOnConvex} tells us that $P$ has a fixed point in~$\mathcal{C}$. This means there is a $\Gamma$-equivariant map $\psi \colon G \to \Prob(S^1)$, such that $\psi(gp) = \psi(g)$ (a.e.). Ignoring a minor issue about sets of measure~$0$, this implies that $\psi$ factors through to a well-defined $\Gamma$-equivariant function $\overline\psi \colon G/P \to \Prob(S^1)$. \end{proof} We omit the proof of the main step in Ghys's argument: \begin{thm}[Ghys \cite{GhysCercle}] \label{GhysConstant} The function~$\overline\psi$ provided by \cref{FurstenbergLemma} is constant {\upshape(}a.e.{\upshape)}. \end{thm} From this, it is easy to complete the proof: \begin{cor}[Ghys \cite{GhysCercle}] If\/ $\Gamma$ is any arithmetic subgroup of\/ $\SL(3,\mathbb{R})$, then every action of\/~$\Gamma$ on the circle has a finite orbit. \end{cor} \begin{proof} From \cref{GhysConstant}, we know there is a constant function $\overline\psi \colon G/P \to \Prob(S^1)$ that is $\Gamma$-equivariant (a.e.). \begin{itemize} \item Since $\overline\psi$ is constant, its range is a single point~$\mu$ (a.e.). \item Since $\overline\psi$ is $\Gamma$-equivariant, its range is a $\Gamma$-invariant set. \end{itemize} So $\mu$ is $\Gamma$-invariant. Since $\mu \in \Prob(S^1)$, the proof of \cref{AmenOnCircle} shows that either \begin{enumerate} \item the abelianization of~$\Gamma$ is infinite, or \item the action has a finite orbit. \end{enumerate} Since the abelianization of every arithmetic subgroup of $\SL(3,\mathbb{R})$ is finite, we conclude that there is a finite orbit, as desired.
\end{proof} \begin{rem} See \cite{GhysCircleSurvey} for a nice exposition of Ghys's proof for the special case of lattices in $\SL(n,\mathbb{R})$. (A slightly modified proof of the general case that reduces the amount of case-by-case analysis is in \cite{WitteZimmer-ActOnCircle}.) A quite different (and very interesting) version of the proof is in \cite{BaderFurmanShaker}. \end{rem} \lecture{Introduction to bounded cohomology} \label{BddCohoLect} M.\,Burger and N.\,Monod \cite{BurgerMonod-BddCohoLatts,BurgerMonod-ContBddCoho} developed sophisticated machinery to calculate bounded cohomology groups, and used it to prove that actions of arithmetic groups on the circle have finite orbits. (See \cite{Monod-ContBddCoho} for an exposition.) We will discuss only some elementary aspects of bounded cohomology, and describe how it is related to actions on the circle, without explaining the fundamental contributions of Burger-Monod. See \cite{Monod-Invitation} for a more comprehensive introduction to bounded cohomology and its applications. (Almost all of the information in this lecture can be found there.) The widespread interest in this subject was inspired by a paper of Gromov \cite{Gromov-VolBddCoho}. \section{Definition} \begin{recall} For a discrete group~$\Gamma$, the cohomology group $H^n(\Gamma; \mathbb{R})$ is defined as follows. \begin{itemize} \item Any function $c \colon \Gamma^n \to \mathbb{R}$ is an \emph{$n$-cochain}, and the set of these cochains is denoted $C^n(\Gamma)$. \item A certain \emph{coboundary operator} $\delta_n \colon C^n(\Gamma) \to C^{n+1}(\Gamma)$ is defined. Here are the definitions for the smallest values of~$n$: \begin{align*} \delta_0 c \, (g_1) &= 0 && \text{for $c \in \mathbb{R}$}, \\ \delta_1c \, (g_1,g_2) &= c(g_1 g_2) - c(g_1) - c(g_2) && \text{for $c \colon \Gamma \to \mathbb{R}$}. \end{align*} \item Then $$H^n(\Gamma; \mathbb{R}) = \frac{\ker \delta_n}{\mathop{\mathrm{Image}} \delta_{n-1}} = \frac{\text{$n$-cocycles}}{\text{$n$-coboundaries}} = \frac{Z^n(\Gamma)}{B^n(\Gamma)} .$$ \end{itemize} (Note that, for simplicity, we take the coefficients to be~$\mathbb{R}$, not a general $\Gamma$-module.) \end{recall} \begin{defn} The \emph{bounded cohomology} group $H_b^n(\Gamma;\mathbb{R})$ is defined in exactly the same way as $H^n(\Gamma;\mathbb{R})$, except that all cochains are required to be \emph{bounded} functions. \end{defn} \begin{eg} $H_b^0(\Gamma;\mathbb{R})$ and $H_b^1(\Gamma;\mathbb{R})$ are very easy to compute: \begin{itemize} \item It is easy to check that $$H^0(\Gamma;\mathbb{R}) = \{\, \text{$\Gamma$-invariants in $\mathbb{R}$} \,\} = \mathbb{R} = \{\, \text{the set of constants} \,\} .$$ \item The same calculation shows that $H_b^0(\Gamma;\mathbb{R})$ is the set of \emph{bounded} constants. Then, since it is obvious that every constant is a bounded function, we have $H_b^0(\Gamma;\mathbb{R}) = \mathbb{R}$. \item It is easy to check that $H^1(\Gamma;\mathbb{R}) = \{\, \text{homomorphisms $\Gamma \to \mathbb{R}$} \,\}$. \item The same calculation shows that $H_b^1(\Gamma;\mathbb{R})$ is the set of \emph{bounded} homomorphisms into~$\mathbb{R}$. Since a homomorphism $c$ into~$\mathbb{R}$ can never be bounded unless it is trivial (if $c(g) \neq 0$, then $|c(g^n)| = n \, |c(g)|$ is unbounded), this means $H_b^1(\Gamma;\mathbb{R}) = \{0\}$. \end{itemize} Thus, $H_b^0(\Gamma;\mathbb{R})$ and $H_b^1(\Gamma;\mathbb{R})$ give no information at all about~$\Gamma$. (One of them is always~$\mathbb{R}$, and the other is always~$\{0\}$.) \end{eg} So $H_b^n(\Gamma;\mathbb{R})$ is only interesting when $n \ge 2$.
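As a routine sanity check on these sign conventions (using the formula for~$\delta_2$ that appears in the proof of \cref{Hbb(amen;R)=0} below), one can verify directly that $\delta_2 (\delta_1 c) = 0$ for every $c \colon \Gamma \to \mathbb{R}$: \begin{align*} \delta_2(\delta_1 c) \, (g_1,g_2,x) &= \delta_1 c \, (g_1,g_2) - \delta_1 c \, (g_1,g_2 x) + \delta_1 c \, (g_1 g_2,x) - \delta_1 c \, (g_2,x) \\ &= \bigl[ c(g_1 g_2) - c(g_1) - c(g_2) \bigr] - \bigl[ c(g_1 g_2 x) - c(g_1) - c(g_2 x) \bigr] \\ &\qquad + \bigl[ c(g_1 g_2 x) - c(g_1 g_2) - c(x) \bigr] - \bigl[ c(g_2 x) - c(g_2) - c(x) \bigr] \\ &= 0 . \end{align*}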
The groups $H_b^n(\Gamma;\mathbb{R})$ with $n \ge 2$ are not easy to calculate: \begin{eg} \label{BddCoho(F2)} For the free group~$F_2$, we have: $$ H_b^n(F_2;\mathbb{R}) = \begin{cases} \text{$\infty$-dimensional} & n = 2,3 \\ \hfil \langle\text{\it open problem}\,\rangle & n > 3. \end{cases} $$ \end{eg} \begin{open} Find some countable group\/~$\Gamma$, such that you can calculate $H_b^n(\Gamma;\mathbb{R})$ for all~$n$ {\upshape(}and $H_b^n(\Gamma;\mathbb{R}) \neq 0$ for some~$n${\upshape)}. \end{open} Bounded cohomology is easy to calculate for amenable groups: \begin{prop}[B.\,E.\,Johnson \cite{Johnson-CohoBanach}] \label{Hbb(amen;R)=0} If\/ $\Gamma$ is amenable, then $H_b^n(\Gamma;\mathbb{R}) = 0$ for all~$n$. \end{prop} \begin{proof} From \cref{AmenAvgVal}, we know there is a left-invariant mean $$A \colon \ell^\infty(\Gamma; \mathbb{R}) \to \mathbb{R} .$$ Any element of $H_b^n(\Gamma;\mathbb{R})$ is represented by a bounded function $c \colon \Gamma^n \to \mathbb{R}$, such that $\delta_n c = 0$. To simplify the notation, let us assume $n = 2$. For each $g \in \Gamma$, we can define a bounded function $c_g \colon \Gamma \to \mathbb{R}$ by $c_g(x) = c(g,x)$. Then, by defining $ \overline c(g) = A(c_g) \in \mathbb{R} $, we have $\overline c \colon \Gamma \to \mathbb{R}$. (Note that $\overline{c}$ is bounded, since $|A(c_g)| \le \|c\|_\infty$.) Now, for $g_1,g_2,x \in \Gamma$, we have $$ 0 = \delta_2 c \, (g_1,g_2, x) = c(g_1,g_2) - c(g_1, g_2 x) + c(g_1g_2, x) - c(g_2, x) .$$ Applying~$A$ to both sides (considered as functions of~$x$), and recalling that $A$ is left-invariant, we obtain $$ 0 = c(g_1,g_2) - \overline{c}(g_1) + \overline{c}(g_1g_2) - \overline{c}(g_2) ,$$ so $c = -\delta_1 \overline{c} \in \mathrm{Image}(\delta_1)$. Therefore $[c] = 0$ in $H_b^2(\Gamma;\mathbb{R})$. \end{proof} \begin{rems} \ \noprelistbreak \begin{enumerate} \item The bounded cohomology $H_b^n(X;\mathbb{R})$ of a topological space~$X$ is defined by stipulating that a cochain in $C^n(X)$ is \emph{bounded} if it is a bounded function on the space of singular $n$-simplices. \item (Brooks \cite{Brooks-RemBddCoho}, Gromov \cite{Gromov-VolBddCoho}) $H_b^n(X;\mathbb{R}) = H_b^n \bigl( \pi_1(X) ; \mathbb{R} \bigr)$. \item Forgetting that the cochains are bounded yields a comparison homomorphism $H_b^n(\Gamma;\mathbb{R}) \to H^n(\Gamma;\mathbb{R})$. It is very interesting to find situations in which this map is an isomorphism. \item (Thurston) If $M$ is a closed manifold of negative curvature, then the comparison map $H_b^n(M;\mathbb{R}) \to H^n(M;\mathbb{R})$ is \emph{surjective} for $n\ge 2$. However, it can fail to be injective. \end{enumerate} \end{rems} \section{Application to actions on the circle} \begin{defn} If $\rho \colon \Gamma \to \Homeo^+(S^1)$ is a homomorphism, then, for each $g \in \Gamma$, covering-space theory tells us that $\rho(g)$ can be lifted to a homeomorphism~$\widetilde g$ of the universal cover, which is~$\mathbb{R}$. However, the lift is not unique --- two lifts can differ by an element of $\pi_1(S^1) = \mathbb{Z}$. Specifically, if $\widehat g$ is another lift of~$\rho(g)$, then $$\exists n \in \mathbb{Z}, \ \forall t \in \mathbb{R}, \ \widehat g(t) = \widetilde g(t) + n . $$ Therefore, for any $g,h \in \Gamma$, there exists $c(g,h) \in \mathbb{Z}$, such that $$ \forall t \in \mathbb{R}, \ \widetilde g \bigl( \widetilde h(t) \bigr) = \widetilde{gh}(t) + c(g,h) , $$ because $\widetilde g \widetilde h$ and $\widetilde{gh}$ are two lifts of~$gh$.
It is easy to verify that $c$ is a $2$-cocycle: $$ \text{$c(h,k) - c(gh,k) + c(g,hk) - c(g,h) = 0$ for $g,h,k \in \Gamma$} ,$$ and that choosing a different lift~$\widetilde g$ only changes~$c$ by a coboundary. Therefore, $c$ determines a well-defined cohomology class $\alpha\in H^2(\Gamma;\mathbb{Z})$, which is called the \emph{Euler class} of~$\rho$. \end{defn} \begin{exer} \label{EulerTrivial} Show that the Euler class of~$\rho$ is trivial if and only if $\rho$ lifts to a homomorphism $\widetilde\rho \colon \Gamma \to \Homeo^+(\mathbb{R})$. \end{exer} \begin{rem} The Euler class can also be defined more naturally, by noting that if we let $\widetilde H$ be the set consisting of all possible lifts of all elements of $\Homeo^+(S^1)$, then we have a short exact sequence $$ \{e\} \to \mathbb{Z} \to \widetilde H \to \Homeo^+(S^1) \to \{e\} $$ with $\mathbb{Z}$ in the center of~$\widetilde H$. Any such central extension is determined by a well-defined cohomology class $\alpha_0 \in H^2 \bigl( \Homeo^+(S^1);\mathbb{Z} \bigr)$, and the Euler class is obtained using the homomorphism~$\rho$ to pull this class back to~$\Gamma$. \end{rem} \begin{exer} \label{EulerIsBdd} Choose a basepoint in~$\mathbb{R}$ (say, $0$), and assume the lift $\widetilde g$ is chosen with $0 \le \widetilde g(0) < 1$ for all $g \in \Gamma$. Show $c$ is bounded. \end{exer} \begin{defn} Although we only defined bounded cohomology with real coefficients, the same definition can be applied with~$\mathbb{Z}$ in place of~$\mathbb{R}$. Therefore, if we choose $c$ as in \cref{EulerIsBdd}, then it represents a bounded cohomology class $[c] \in H_b^2(\Gamma; \mathbb{Z})$, which is called the \emph{bounded Euler class} of the action. \end{defn} \begin{rem} It can be shown that the bounded Euler class is a well-defined invariant of the action (independent of the choice of basepoint, etc.). \end{rem} \begin{prop}[Ghys \cite{Ghys-GrpsEtBddCoho}] \label{BddEulerClassFP} The bounded Euler class is trivial if and only if\/ $\Gamma$ has a fixed point in~$S^1$. \end{prop} \begin{proof} ($\Leftarrow$) We may assume the fixed point is the basepoint~$\overline{0} \in S^1$. Then we may choose $\widetilde g$ with $\widetilde g(0) = 0$. So $c(g,h) = 0$ for all $g,h$. ($\Rightarrow$) We have $c(g,h) = \varphi(gh) - \varphi(g) - \varphi(h)$ for some bounded $\varphi \colon \Gamma \to \mathbb{Z}$. Letting $\widehat g (t) = \widetilde g(t) + \varphi(g)$, we have \begin{itemize} \item $\widehat g \, \widehat h = \widehat{gh}$, so $\widehat \Gamma$ is a lift of~$\Gamma$ to $\Homeo^+(\mathbb{R})$, and \item $|\widehat g(0)| \le |\widetilde g(0)| + |\varphi(g)| \le 1 + \|\varphi\|_\infty$. \end{itemize} Hence, the $\widehat\Gamma$-orbit of~$0$ is a bounded set, so it has a supremum in~$\mathbb{R}$. This supremum is a fixed point of~$\widehat\Gamma$, so its image in~$S^1$ is a fixed point of~$\Gamma$. \end{proof} \begin{cor} If $H_b^2(\Gamma;\mathbb{Z}) = 0$, then every orientation-preserving action of~$\Gamma$ on~$S^1$ has a fixed point. \qed \end{cor} The following result is easier to apply, because it uses real coefficients for the cohomology, instead of integers: \begin{cor} \label{H2RVanishFP} If \begin{itemize} \item $H_b^2(\Gamma;\mathbb{R}) = 0$, \item $H^1(\Gamma;\mathbb{R}) = 0$, and \item $\Gamma$ is finitely generated, \end{itemize} then every orientation-preserving action of\/~$\Gamma$ on~$S^1$ has a finite orbit. 
\end{cor} \begin{proof} The short exact sequence $$ 0 \to \mathbb{Z} \to \mathbb{R} \to \mathbb{T} \to 0$$ yields a long exact sequence of bounded cohomology: $$ H_b^1(\Gamma;\mathbb{T}) \to H_b^2(\Gamma;\mathbb{Z}) \to H_b^2(\Gamma;\mathbb{R}) .$$ By assumption, the group at the right end is~$0$, so the map on the left is surjective. Therefore, the bounded Euler class is the coboundary of some (bounded) $1$-cocycle $\alpha \colon \Gamma \to \mathbb{T}$. I.e., $\alpha$ is a homomorphism to $\mathbb{T}$. Since $H^1(\Gamma;\mathbb{R}) = 0$ (and $\Gamma$ is finitely generated), the abelianization of~$\Gamma$ is finite, so the image of~$\alpha$ is finite; hence, $\alpha$ is trivial on some finite-index subgroup~$\Gamma'$ of~$\Gamma$. Then the bounded Euler class $\delta_1 \alpha$ is trivial on~$\Gamma'$, so \cref{BddEulerClassFP} tells us that $\Gamma'$ has a fixed point~$p$. Since $\Gamma'$ has finite index, we see that the $\Gamma$-orbit of~$p$ is finite. \end{proof} \begin{thm}[Ghys \cite{GhysCercle}, Burger-Monod \cite{BurgerMonod-BddCohoLatts}] If\/ $\Gamma$ is any arithmetic subgroup of\/ $\SL(n,\mathbb{R})$, with $n \ge 3$, then every action of\/~$\Gamma$ on~$S^1$ has a finite orbit. \end{thm} \begin{proof}[Outline of Burger-Monod proof] Burger and Monod showed (in a much more general setting) that the comparison map $H_b^2(\Gamma;\mathbb{R}) \to H^2(\Gamma;\mathbb{R})$ is injective. Since it is known that $H^2(\Gamma;\mathbb{R}) = 0$ (if $n$~is sufficiently large), we conclude that $H_b^2(\Gamma;\mathbb{R}) = 0$. The other hypotheses of \cref{H2RVanishFP} are well known to be true. \end{proof} \section{Computing $H_b^2(\Gamma;\mathbb{R})$} To calculate $H_b^2(\Gamma;\mathbb{R})$, we would like to understand the kernel of the comparison map $H_b^2(\Gamma;\mathbb{R}) \to H^2(\Gamma;\mathbb{R})$. For this, we introduce some notation: \begin{defn} \ \noprelistbreak \begin{itemize} \item A function $\alpha \colon \Gamma \to \mathbb{R}$ is a: \begin{itemize} \item \emph{quasimorphism} if $\alpha(gh) - \alpha(g) - \alpha(h)$ is bounded (as a function of $(g,h) \in \Gamma \times \Gamma$); \item \emph{near homomorphism} if it is within a bounded distance of a homomorphism. \end{itemize} \item We use $\QM(\Gamma,\mathbb{R})$ and $\NH(\Gamma,\mathbb{R})$ to denote the space of quasimorphisms and the space of near homomorphisms, respectively. \end{itemize} Note that $\NH(\Gamma,\mathbb{R}) \subset \QM(\Gamma,\mathbb{R})$. \end{defn} \begin{prop} \label{KernelComparison} The kernel of the comparison map $H_b^2(\Gamma;\mathbb{R}) \to H^2(\Gamma;\mathbb{R})$ is $$ \frac{\QM(\Gamma,\mathbb{R})}{\NH(\Gamma,\mathbb{R})} .$$ \end{prop} \begin{proof} Let $c$ be a bounded $2$-cocycle, such that $c$ is trivial in $H^2(\Gamma;\mathbb{R})$. Then $c = \delta_1 \alpha$, for some $\alpha \colon \Gamma \to \mathbb{R}$. Thus, for all $g,h \in \Gamma$, we have $$ | \alpha(gh) - \alpha(g) - \alpha(h)| = | \delta_1 \alpha \, (g,h)| = |c(g,h)| \le \| c\|_\infty .$$ So $\alpha$ is a quasimorphism. This establishes that $\QM(\Gamma,\mathbb{R})$ maps onto the kernel of the comparison map, via $\alpha \mapsto \delta_1 \alpha$. Now suppose $\alpha \in \QM(\Gamma,\mathbb{R})$, such that $\delta_1 \alpha$ is trivial in $H_b^2(\Gamma;\mathbb{R})$. The triviality of $\delta_1 \alpha$ means there is a bounded function $c \colon \Gamma \to \mathbb{R}$, such that $\delta_1 \alpha = \delta_1 c$. Then $\delta_1(\alpha - c) = 0$, so $\alpha - c$ is a homomorphism.
Since $c$ is bounded, this means that $\alpha$ is within a bounded distance of a homomorphism; i.e., $\alpha \in \NH(\Gamma,\mathbb{R})$. \end{proof} \begin{eg}[Brooks \cite{Brooks-RemBddCoho}] We can construct many quasimorphisms on the free group~$F_2$: \begin{itemize} \item As a warm-up, recall that there is an obvious homomorphism $\varphi_a$, defined by letting $\varphi_a(x)$ be the (signed) number of occurrences of~$a$ in the reduced representation of~$x$. For example, $$\varphi_a(a^2 b a^{3} b^2 a b^{-3} a^{-7} b^2) = 2 + 3 + 1 - 7 = -1 .$$ There is an analogous homomorphism~$\varphi_b$, and every homomorphism $F_2 \to \mathbb{R}$ is a linear combination of these two. \item Similarly, for any nontrivial reduced word~$w$, we can let $\varphi_{w}(x)$ be the (signed) number of disjoint occurrences of~$w$ in the reduced representation of~$x$. For example, $$\varphi_{ab}(a^2 b a^{3} b^2 a b^{-3} a^{-7} b^2) = 1 + 1 -1 = 1 .$$ This is a quasimorphism. \end{itemize} \end{eg}
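For instance, take $w = a^2$: then $\varphi_{a^2}(a^3) = 1$, because the two occurrences of~$a^2$ in~$a^3$ overlap and only disjoint occurrences are counted, while $\varphi_{a^2}(a^4) = 2$. This small computation also exhibits a nonzero defect: $\varphi_{a^2}(a \cdot a) = 1$, whereas $\varphi_{a^2}(a) + \varphi_{a^2}(a) = 0$, so $\varphi_{a^2}$ is not a homomorphism.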
\begin{exer} \label{WordQuasi} Verify that $\varphi_{w}$ is a quasimorphism, for any reduced word~$w$. \end{exer} With these quasimorphisms in hand, it is now easy to prove a fact that was mentioned in \cref{BddCoho(F2)}: \begin{exer} \label{H2b(free)} Show that $H_b^2(F_2;\mathbb{R})$ is infinite-dimensional. \hint{Verify that $\varphi_{a^k}$ ($k \ge 2$) is not within a bounded distance of the linear span of $\{\varphi_b, \varphi_a, \varphi_{a^{k+1}}, \varphi_{a^{k+2}}, \varphi_{a^{k+3}}, \ldots\}$, by finding a word~$x$, such that $\varphi_{a^k}(x)$ is large, but the others vanish on~$x$.} \end{exer} \begin{exers} \label{QuasiMExers}\ \noprelistbreak \begin{enumerate} \item \label{QuasiMExers-KernelFD} Show that if $\Gamma$ is boundedly generated (by cyclic subgroups), then the kernel of the comparison map $H_b^2(\Gamma;\mathbb{R}) \to H^2(\Gamma;\mathbb{R})$ is finite-dimensional. \hint{Every quasimorphism $\mathbb{Z} \to \mathbb{R}$ is a near homomorphism.} \item \label{QuasiMExers-F2NotBddGen} Show the free group~$F_2$ is not boundedly generated (by cyclic subgroups). \item \label{QuasiMExers-commutator} Show that every quasimorphism is bounded on the set of commutators $\{x^{-1} y^{-1} x y\}$. \item \label{QuasiMExers-amenable} Show that if $\Gamma$ is an amenable group, and the abelianization of~$\Gamma$ is finite, then $\Gamma$ does not have unbounded quasimorphisms. \\ \hint{Amenable groups have vanishing bounded cohomology.} \item \label{QuasiMExers-SL3ZNoQuasi} Show that $\SL(3,\mathbb{Z})$ has no unbounded quasimorphisms. \\ \hint{Use \cref{SL3ZBddGen}.} \end{enumerate} \end{exers} \begin{appendix} \chapter*{Hints for the exercises} \soln{FreeGrpActsOnR} Every finitely generated free group is a subgroup of the free group $F_2 = \langle a, b \rangle$, so we need only consider this one example. Choose any faithful action of~$F_2$ on the circle. (For example, use the \emph{Ping-Pong Lemma} \cite[Lem.~2.3.9]{Navas-GrpsDiffeos}, which provides a sufficient condition for two homeomorphisms to generate a free group, or note that $F_2$ is a subgroup of $\PSL(2,\mathbb{R})$, which acts faithfully on the circle by linear-fractional transformations.) Since $F_2$ is free, we can lift this to an action on the line (simply by choosing any lift of the two generators $a$ and~$b$). Since it projects down to a faithful action on the circle, this action on the line must also be faithful. \soln{DirProdFaithful} Since any open interval is homeomorphic to~$\mathbb{R}$, we may let $\Gamma_1$ act on the open interval $(0,1)$ (fixing all points in the complement), and let $\Gamma_2$ act on the open interval $(1,2)$ (fixing all points in the complement). These actions commute, so they define an action of $\Gamma_1 \times \Gamma_2$. \soln{ActIffLO} Details are in \cite[Thm.~6.8]{GhysCircleSurvey}. \soln{ProdPos} By left-invariance, we have $$\text{ $ab = a \cdot b \succ a \cdot e = a \succ e $ \quad and \quad $e = a^{-1} \cdot a \succ a^{-1} \cdot e = a^{-1}$ } .$$ \fullsoln{HeisExers}{z=[x,y]} Straightforward matrix multiplication verifies that $z = [x,y]$ and that $z$ commutes with both~$x$ and~$y$. \fullsoln{HeisExers}{[xyyk]} Since $z = [x,y]$, we have $xy = yxz$. By induction on~$k$ (and using the fact that $z$ commutes with~$x$), then $x^k y = yx^k z^k$. By induction on~$\ell$ (and using the fact that $z$ commutes with~$y$), then $x^k y^\ell = y^\ell x^k (z^k)^\ell$ for $k,\ell \in \mathbb{Z}^+$. \fullsoln{HeisExers}{orderable} To apply \fullcref{LOExers}{extension}, note that $H$ has a chain of normal subgroups $$ \{e\} \triangleleft \langle z \rangle \triangleleft \langle z,x \rangle \triangleleft H ,$$ and each quotient is isomorphic to~$\mathbb{Z}$ (hence, has an obvious left-invariant order). \fullsoln{SL3ZPfExers}{Heis} Either calculate that $\left[\ovalbox{$k-1$}, \ovalbox{$k+1$}\right] = \ovalbox{$k$}$, and that $\ovalbox{$k$}$ commutes with both $\ovalbox{$k-1$}$ and $\ovalbox{$k+1$}$, or observe that some permutation matrix conjugates the ordered triple $\left( \ovalbox{$k-1$} \, , \ovalbox{$k$} \, , \ovalbox{$k+1$} \right)$ to $(x,z,y)$. \fullsoln{SL3ZPfExers}{FinInd} Details are in \cite[\S3]{Witte-QrankAct1mfld}. \fullsoln{LOExers}{subgrp} The restriction of a left-invariant total order is a left-invariant total order. \fullsoln{LOExers}{abelian} We may assume $\Gamma$ is finitely generated \fullcsee{LOExers}{locally}, so $\Gamma \cong \mathbb{Z} \times \cdots \times \mathbb{Z}$. Now use \cref{DirProdFaithful} or \fullcref{LOExers}{extension}. \fullsoln{LOExers}{extension} Let $\prec_*$ and $\prec^*$ be left-invariant total orders on~$N$ and $\Gamma/N$, respectively. Then we define $$ g \prec h \quad \iff \quad \begin{matrix} \text{$gN \prec^* hN$ \quad or} \\[5pt] \text{$gN = hN$ \ and \ $h^{-1} g \prec_* e$} . \end{matrix} $$ The left-invariance of $\prec_*$ and $\prec^*$ implies the left-invariance of~$\prec$. \fullsoln{LOExers}{nilpotent} Recall that the \emph{ascending central series} $$ \{e\} = Z_0 \triangleleft Z_1 \triangleleft \cdots \triangleleft Z_c = \Gamma $$ is defined inductively by $Z_i/Z_{i-1} = Z \bigl( \Gamma/ Z_{i-1} \bigr)$. Fix $i \ge 2$ and let $\overline\Gamma = \Gamma/Z_{i-2}$. For any nontrivial $z \in \overline{Z_i}$, there exists $g \in \Gamma$, such that $[\overline{g}, z]$ is a nontrivial element of~$\overline{Z_{i-1}}$, which is torsion-free (by induction). Since $[\overline{g}, z^n] = [\overline{g}, z]^n$, this implies that $Z_i/Z_{i-1}$ is torsion-free. It is also abelian, so we can apply \fullcref{LOExers}{abelian} and \fullcref{LOExers}{extension}. \fullsoln{LOExers}{solvable} This is not at all obvious, but specific examples are given on page~52 of \cite{KopytovMedvedev}. Here is the general philosophy. Assume $G$ is a nontrivial, finitely generated, left-orderable group. If $G$ is solvable (or, more generally, if $G$ is ``amenable''), then it can be shown that the abelianization of~$G$ is infinite \cite{Morris-AmenOnLine}.
So any torsion-free solvable group with finite abelianization provides an example. \fullsoln{LOExers}{Exps} ($\Rightarrow$) Choose $\epsilon_i$ so that $g_i^{\epsilon_i} \succ e$. Then every element of the semigroup is $\succ e$. ($\Leftarrow$) The condition implies it is possible to choose a semigroup~$P$ in~$\Gamma$, such that $e \notin P$ and, for every nonidentity element~$g$ of~$\Gamma$, either $g \in P$ or $g^{-1} \in P$. Define $x \prec y \iff x^{-1} y \in P$. Details can be found in \cite[Thm.~3.1.1, p.~45]{KopytovMedvedev}. \fullsoln{LOExers}{locally} Use \fullcref{LOExers}{Exps}. \fullsoln{LOExers}{residually} Let $g_1,\ldots,g_n$ be nontrivial elements of~$\Gamma$. For each~$i$, there is a left-orderable group~$H_i$, and a homomorphism $\varphi_i \colon \Gamma \to H_i$, such that $\varphi_i(g_i) \neq e$. For the resulting homomorphism~$\varphi$ into $H_1 \times \cdots \times H_n$, we have $\varphi(g_i) \neq e$ for all~$i$. Now apply \fullcref{LOExers}{Exps}. \fullsoln{LOExers}{BurnsHale} Given nontrivial elements $g_1,\ldots,g_n$ of~$\Gamma$, the assumption provides a nontrivial homomorphism $\rho \colon \langle g_1,\ldots,g_n \rangle \to \mathbb{R}$. Assume $\rho$ is trivial on $g_1,\ldots,g_k$, and nontrivial on the rest. By induction on~$n$, we can choose $\epsilon_1,\ldots,\epsilon_k \in \{\pm1\}$, such that the semigroup generated by $\{g_1^{\epsilon_1},\ldots,g_k^{\epsilon_k}\}$ does not contain~$e$. For $i > k$, choose $\epsilon_i$ so that $\rho(g_i^{\epsilon_i}) > 0$. Then the semigroup generated by $g_1^{\epsilon_1},\ldots,g_n^{\epsilon_n}$ does not contain~$e$, so \fullcref{LOExers}{Exps} applies. (Details can be found on page~50 of \cite{KopytovMedvedev}.) \fullsoln{BddGenExers}{quotient} If $\Gamma = H_1 \cdots H_n$, then $\Gamma/N = \overline{H_1} \cdots \overline{H_n}$, where $\overline{H_i}$ is the image of~$H_i$ in $\Gamma/N$. \fullsoln{BddGenExers}{modnth} Call the subgroup~$N$. Then $N$~is a normal subgroup, so \fullcref{BddGenExers}{quotient} tells us that $\Gamma/N$ is boundedly generated by cyclic groups. However, every element of $\Gamma/N$ has finite order, so all of these cyclic groups are finite. \fullsoln{BddGenExers}{FinInd} ($\Leftarrow$) For a normal subgroup~$N$ of~$\Gamma$, it is easy to see that if $N$ and $\Gamma/N$ are boundedly generated, then $\Gamma$ is boundedly generated. Also note that finite groups are (obviously) boundedly generated. ($\Rightarrow$) To present the main idea with a minimum of notation, let us assume $\Gamma = H K$ is the product of just two cyclic groups. Let $\dot K = K \cap \dot\Gamma$, and let $\{k_1,\ldots,k_n\}$ be a set of coset representatives for $\dot K$ in~$K$. There exists a finite-index subgroup~$\dot H$ of~$H$, such that the conjugate $\dot H^{k_j}$ is contained in~$\dot\Gamma$ for every~$j$. Let $\{h_1,\ldots,h_m\}$ be a set of coset representatives for $\dot H$ in~$H$. Given $g \in \dot\Gamma$, we may choose $h \in H$ and $k \in K$, such that $$g = h k = (h_i \dot h) (k_j \dot k) = (h_i k_j) \dot h^{k_j} \dot k .$$ Therefore, if we let $\ell_1,\ldots,\ell_r$ be a list of the elements in $\dot\Gamma \cap \{h_i k_j\}$, then $$ \dot\Gamma = \langle \ell_1 \rangle \cdots \langle \ell_r \rangle \ \dot H^{k_1} \cdots \dot H^{k_n} \ \dot K .$$ \fullsoln{BddGenExers}{SL2Z} Let $\Gamma$ be a free subgroup of finite index in $\SL(2,\mathbb{Z})$. Then \fullcref{QuasiMExers}{F2NotBddGen} tells us that $\Gamma$ is not boundedly generated, so \fullcref{BddGenExers}{FinInd} implies that $\SL(2,\mathbb{Z})$ also is not boundedly generated by cyclic groups.
Since $\overline{U}$ and~$\underline{V}$ are cyclic, this completes the proof. \fullsoln{BddGenExers}{VariablePowers} The argument is somewhat similar to \fullcref{BddGenExers}{FinInd}($\Rightarrow$). Let $\dot\Gamma$ be the subgroup that is under consideration, and let us assume, for simplicity, that $\Gamma = HK$ is the product of just two cyclic groups. Let $\dot K = K \cap \dot\Gamma$, and let $\{k_1,\ldots,k_n\}$ be a set of coset representatives for $\dot K$ in~$K$. The key point is to observe that, by definition, $\dot\Gamma$ contains a finite-index subgroup of each~$H^{k_j}$, so we may choose a finite-index subgroup~$\dot H$ of~$H$, such that $\dot H^{k_j}$ is contained in~$\dot\Gamma$ for every~$j$. Let $\{h_1,\ldots,h_m\}$ be a set of coset representatives for $\dot H$ in~$H$. For $g \in \Gamma$, we have $$g = h k = (h_i \dot h) (k_j \dot k) = (h_i k_j) \dot h^{k_j} \dot k \in (h_i k_j) \dot\Gamma .$$ Therefore, $\{ h_i k_j \}$ contains a set of coset representatives, so the index of~$\dot\Gamma$ is at most~$mn$. \soln{BddGenIsom} The Triangle Inequality implies that \emph{every} orbit of every cyclic group is bounded. Now, for any $x \in X$, any $R \in \mathbb{R}^+$, and any cyclic subgroup~$H_i$ of~$\Gamma$, this implies there exists $r_i \in \mathbb{R}^+$, such that $H_ix$ is contained in the ball $B_{r_i}(x)$. By the Triangle Inequality, we have $H_i \cdot B_R(x) \subseteq B_{R+r_i}(x)$. By induction, if $\Gamma = H_1 \cdots H_n$, then $\Gamma x \subseteq B_{r_1 + \cdots + r_n}(x)$. \soln{AbelNoPonzi} Suppose $M$ is a Ponzi scheme on~$\Gamma$, and $M(g) \in gS$ for all~$g$. Let $k$ be the maximum word length of an element of~$S$. Then, since $M$ is (at least) 2-to-1, we know that the ball of radius $r + k$ has at least twice as many elements as the ball of radius~$r$. So $\Gamma$ has exponential growth. \fullsoln{FolnerEx}{Free} Assume, without loss of generality, that (at least) $3/4$ of the elements of~$F$ do \emph{not} start with~$a^{-1}$. Then $3/4$ of~$aF$ starts with~$a$ and $3/4$ of~$baF$ starts with $b$. If $F$ is almost invariant, this implies that almost half of~$F$ starts with both~$a$ and~$b$. \fullsoln{FolnerEx}{union} Let $n = \#S$, and choose $F$ so that $\#(F \cap aF) > \bigl( 1 - (\epsilon/n) \bigr) \#F$ for all $a \in S$. Then $\#(aF \smallsetminus F) < (\epsilon/n) \, \#F$ for all $a \in S$, so $\#(SF) - \#F < n \cdot (\epsilon/n) \, \#F = \epsilon \, \#F$. \fullsoln{FolnerEx}{QI} It will suffice to prove the hint, since it gives a condition that is invariant under quasi-isometry. Suppose $\Gamma$ is not amenable. Then there exist $S$ and~$\epsilon$, such that $\#(SF) \ge (1 + \epsilon) \#F$, for every finite subset~$F$ of~$\Gamma$. Choosing $n$~large enough that $(1 + \epsilon)^n > c$, we have $\#(S^nF) \ge c \cdot \#F$. The other direction is immediate from \fullcref{FolnerEx}{union}. \fullsoln{FolnerEx}{noPonzi} Suppose $M$ is a Ponzi scheme on~$\Gamma$, and $M(g) \in gS$ for all~$g$. From \fullcref{FolnerEx}{union} (and replacing $F$ with~$F^{-1}$ to convert left-translations into right-translations), we know there is a finite set~$F$, such that $\#(F S^{-1}) < 2 \cdot \#F$. This is impossible, since $M$ is (at least) 2-to-1 and $M^{-1}(F) \subseteq F S^{-1}$. \fullsoln{FolnerEx}{Ponzi} In the special case where $a_i = 1$ for all~$i$, the hint is known as \emph{Hall's Marriage Theorem}, and can be found in many combinatorics textbooks. The general case is proved similarly.
Since there are only finitely many possibilities for each set~$A_i'$, a standard diagonalization argument shows that the result is also valid for an infinite sequence $A_1,A_2,\ldots$ of finite sets and an infinite sequence $\{a_i\} \subseteq \mathbb{N}$. From the hint to \fullcref{FolnerEx}{QI}, there exists $S \subset \Gamma$, such that $\#(FS^{-1}) \ge 2 \cdot \#F$ for every finite $F \subset \Gamma$. For $y \in \Gamma$, let $A_y = y S^{-1}$ and $a_y = 2$. Then there exists $A_y' \subseteq A_y$, such that $\#A_y' = 2$ and the sets $\{A_y'\}_{y \in \Gamma}$ are pairwise disjoint. Define $M(g) = y$ for all $g \in A_y'$; then $M(g) \in gS$, because $g \in yS^{-1}$ implies $y \in gS$. \fullsoln{VonNeumannEx}{noParadox} Let $$ a = \sum_{i = 1}^m \mu(A_i) \text{\qquad and\qquad} b = \sum_{j = 1}^n \mu(B_j) ,$$ where $\mu$ is a finitely additive, translation-invariant probability measure on~$\Gamma$. Then, since $A_1,\ldots,A_m,B_1,\ldots,B_n$ are pairwise disjoint, we have $$a + b = \mu(\Gamma) = 1 .$$ On the other hand, since $\Gamma = \bigcup_{i=1}^m g_i A_i $, we have $$ 1 = \mu(\Gamma) = \mu \left( \bigcup_{i=1}^m g_i A_i \right) \le \sum_{i = 1}^m \mu( g_i A_i ) = \sum_{i = 1}^m \mu( A_i ) = a ,$$ and, similarly, $b = 1$. This is a contradiction. \fullsoln{VonNeumannEx}{free} Let $A_1$, $A_2$, $B_1$, and~$B_2$ be the reduced words that start with $a$, $a^{-1}$, $b$, or~$b^{-1}$, respectively. (Also add $e$ to one of these sets.) Then $$ F_2 = a^{-1} A_1 \cup a A_2 = b^{-1} B_1 \cup b B_2 .$$ \fullsoln{VonNeumannEx}{Paradox} Let $M$ be a Ponzi scheme, and choose $S$ such that $M(g) \in Sg$. Let $A$ contain a single element of $M^{-1}(x)$, for every $x \in \Gamma$, and let $B$ be the complement of~$A$. Then, for $s \in S$, let $ A_s = \{\, g \in A \mid M(g) = sg \,\}$ and $ B_s = \{\, g \in B \mid M(g) = sg \,\}$. By construction, these sets are pairwise disjoint, and we have $\Gamma = \bigcup_{s \in S} s A_s = \bigcup_{s \in S} s B_s$. \fullsoln{EgAmenEx}{finite} This is obvious from almost any characterization of amenability. For example, letting $F = \Gamma$ yields a nonempty, finite set that is invariant, not merely almost-invariant. \fullsoln{EgAmenEx}{cyclic} By \fullcref{EgAmenEx}{finite}, it suffices to consider the infinite cyclic group~$\mathbb{Z}$. A long interval $\{0,1,2,\ldots,n\}$ is almost-invariant. \fullsoln{EgAmenEx}{product} The Cartesian product of two almost-invariant sets is almost-invariant. \fullsoln{EgAmenEx}{abelian} We may assume $\Gamma$ is finitely generated (by \fullcref{EgAmenEx}{locally}), so it is a direct product of finitely many cyclic groups. \fullsoln{EgAmenEx}{extension} This is difficult to do with almost-invariant sets, because multiplication by an element of $G/N$ will act by conjugation on~$N$, which may cause distortion. It is perhaps easiest to apply \cref{AmenOnConvex}. Since $N$ is amenable, it has fixed points in~$C$. The set $C^N$ of such fixed points is a closed, convex subset. Also, it is $\Gamma$-invariant (because $N$ is normal). So $\Gamma$ acts on~$C^N$. Since $N$ is trivial on this set, the action factors through to $\Gamma/N$, which must have a fixed point. This is a fixed point for~$\Gamma$. \fullsoln{EgAmenEx}{solv} By definition, a solvable group is obtained by repeated extensions of abelian groups, so this follows from repeated application of \fullcref{EgAmenEx}{extension}. \fullsoln{EgAmenEx}{subgrp} This is another case that is difficult to do with almost-invariant sets.
Instead, note that if there is a Ponzi scheme on some subgroup of~$\Gamma$, then it could be reproduced on all of the cosets, to obtain a Ponzi scheme on all of~$\Gamma$. This establishes the contrapositive. \fullsoln{EgAmenEx}{quotient} This is immediate from \cref{ActHasInvtMeas} (or \cref{AmenOnConvex}), because any action of $\Gamma/N$ is also an action of~$\Gamma$. It also follows easily from \cref{VonNeumann}, since any subset of $\Gamma/N$ pulls back to a subset of~$\Gamma$. \fullsoln{EgAmenEx}{locally} Given $S$ and~$\epsilon$, let $H$ be the subgroup generated by~$S$, so $H$ is finitely generated. If $H$ is amenable, then it contains an almost-invariant set, which is also an $(S,\epsilon)$-invariant set in~$G$. \fullsoln{EgAmenEx}{limit} This is immediate from \fullcref{EgAmenEx}{locally}, because any finite subset of $\bigcup \mathcal{A}$ must be contained in one of the sets in~$\mathcal{A}$. \fullsoln{EgAmenEx}{subexp} See the hint to \fullcref{FolnerEx}{QI}. \soln{EulerTrivial} ($\Leftarrow$)~By assumption, we may choose the lifts in such a way that $\widetilde g \widetilde h = \widetilde{gh}$ for all $g$ and~$h$. So $c = 0$. ($\Rightarrow$) We have $c(g,h) = \varphi(gh) - \varphi(g) - \varphi(h)$ for some $\varphi \colon \Gamma \to \mathbb{Z}$. Then, letting $\widetilde\rho(g) (t) = \widetilde g(t) + \varphi(g)$, we have $\widetilde\rho( g ) \, \widetilde\rho( h ) = \widetilde\rho(gh)$, so $\widetilde\rho$ is a homomorphism. \soln{EulerIsBdd} We have $c(g,h) = \widetilde g \bigl( \widetilde h(0) \bigr) - \widetilde{gh}(0)$. Note that \begin{itemize} \item $0 \le \widetilde{gh}(0) < 1$, and \item $ 0 \le \widetilde g(0) \le \widetilde g \bigl( \widetilde h(0) \bigr) < \widetilde g(1) = \widetilde g(0) + 1 < 1 + 1 = 2$, \end{itemize} so both terms on the right-hand side are bounded. \soln{WordQuasi} In fact $\varphi_w(xy)$ never differs by more than~$1$ from $\varphi_w(x) + \varphi_w(y)$. There is a difference only if some occurrence of~$w$ (or~$w^{-1}$) overlaps the boundary between $x$ and~$y$, and there cannot be two such occurrences that are disjoint. \soln{H2b(free)} Let $x = (a^kbab^{-1})^n(a^{-(k-1)}b^2a^{-1}b^{-1}a^{-1}b^{-1})^n$. \fullsoln{QuasiMExers}{KernelFD} Write $\Gamma = H_1 \cdots H_r$. Any quasimorphism $\varphi$ on~$\Gamma$ is determined, up to bounded error, by its restriction to the cyclic subgroups $H_1,\ldots,H_r$. Also, it is not difficult to show that every quasimorphism $\mathbb{Z} \to \mathbb{R}$ is a near homomorphism. (Or this can be deduced from \cref{Hbb(amen;R)=0} and \cref{KernelComparison}.) So the restriction of~$\varphi$ to each~$H_i$ is a near homomorphism. Since the homomorphisms from~$\mathbb{Z}$ to~$\mathbb{R}$ form a one-dimensional space, we conclude that the dimension of $\QM(\Gamma,\mathbb{R})/\ell^\infty(\Gamma,\mathbb{R})$ is at most~$r$. \fullsoln{QuasiMExers}{F2NotBddGen} Compare \cref{H2b(free)} with \fullcref{QuasiMExers}{KernelFD}. \fullsoln{QuasiMExers}{commutator} We have \begin{align*} \varphi ( x^{-1} y^{-1} x y ) &= \varphi ( x^{-1} y^{-1} ) + \varphi( x y ) \pm C = \varphi ( x^{-1} ) + \varphi( y^{-1} ) + \varphi( x ) + \varphi( y ) \pm 3C \\&= \varphi ( x^{-1} x ) + \varphi( y^{-1} y ) \pm 5C = 2\varphi ( e ) \pm 5C , \end{align*} so $| \varphi ( x^{-1} y^{-1} x y )| \le 2 |\varphi ( e )| + 5C$. \fullsoln{QuasiMExers}{amenable} Let $\varphi \colon \Gamma \to \mathbb{R}$ be a quasimorphism. From \cref{Hbb(amen;R)=0} and \cref{KernelComparison}, we see that $\varphi$ is within bounded distance of a homomorphism.
However, since the abelianization of~$\Gamma$ is finite, there are no nontrivial homomorphisms $\Gamma \to \mathbb{R}$. Therefore $\varphi$ is bounded. \fullsoln{QuasiMExers}{SL3ZNoQuasi} Every elementary matrix is a commutator (recall that $[x^k, y] = z^k$), so \fullcref{QuasiMExers}{commutator} implies that $\varphi$ is bounded on the set of elementary matrices. Since every element of $\SL(3,\mathbb{Z})$ is the product of a bounded number of these elementary matrices (see \cref{SL3ZBddGen}), a simple estimate shows that $\varphi$ is bounded. \end{appendix}
{ "timestamp": "2012-10-16T02:01:14", "yymm": "1210", "arxiv_id": "1210.3671", "language": "en", "url": "https://arxiv.org/abs/1210.3671" }
\section{Introduction} The purpose of these notes is to derive a bound on the volume of a tubular neighborhood of a real algebraic variety in terms of the degrees of the defining polynomials. The problem is stated in probabilistic terms, namely, as the probability that a random point, uniformly distributed in a ball, falls within a certain neighborhood of the variety. \begin{theorem}\label{th:main} Let $V$ be the zero-set of multivariate polynomials $f_1,\dots,f_s$ in $\mathbb{R}^n$ of degree at most $D$. Assume $V$ is a complete intersection of dimension $m=n-s$. Let $x$ be uniformly distributed in a ball $B^n(p,\sigma)$ of radius $\sigma$ around $p\in \mathbb{R}^n$. Then \begin{equation*} \mathop{\mathbf P}\{{\sf dist}(x,V)\leq \varepsilon\}\leq 4 \ \sum_{i=0}^{m}\binomial{n}{s+i} \ \left(\frac{2D\varepsilon}{\sigma}\right)^{s+i}\left(1+\frac{\varepsilon}{\sigma}\right)^{m-i}. \end{equation*} If the polynomials $f_1,\dots,f_s$ are homogeneous and $p=0$, then \begin{equation*} \mathop{\mathbf P}\{{\sf dist}(x,V)\leq \varepsilon\}\leq 2 \ \sum_{i=0}^{m}\binomial{n}{s+i} \ \left(\frac{2D\varepsilon}{\sigma}\right)^{s+i}. \end{equation*} \end{theorem} The second of the stated equations is commonly attributed to A.~Ocneanu~\cite[Theorem 4.3]{demm:88}, though a proof has not been published and does not seem to be available. From the proof of Theorem~\ref{th:main} we also get the following corollary, conjectured by J.~Demmel~\cite[(4.15)]{demm:88}. \begin{corollary}\label{co:asympt} For compact $V$ and small enough $\varepsilon$ we have \begin{equation*} \mathop{\mathbf P}\{{\sf dist}(x,V)\leq \varepsilon\} = {\mathsf{vol}}_{n-s}(V) \cdot \varepsilon^s\cdot \frac{n\Gamma(n/2)}{\pi^{(n-s)/2}s\Gamma(s/2)}+o(\varepsilon^{s}). \end{equation*} \end{corollary} Theorem~\ref{th:main} can be adapted to a spherical setting without too much difficulty, thus generalizing the results of~\cite{bucl:08} to higher codimension, but for the sake of brevity such a generalization is omitted in these notes.
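To illustrate the shape of these bounds, consider a single homogeneous equation, i.e., $s=1$, $m=n-1$, and $p=0$. By the binomial theorem, the second bound then sums to a closed form: \begin{equation*} \mathop{\mathbf P}\{{\sf dist}(x,V)\leq \varepsilon\}\leq 2 \ \sum_{i=0}^{n-1}\binomial{n}{1+i} \ \left(\frac{2D\varepsilon}{\sigma}\right)^{1+i} = 2\left[\left(1+\frac{2D\varepsilon}{\sigma}\right)^{n}-1\right], \end{equation*} which is of order $4nD\varepsilon/\sigma$ as $\varepsilon/\sigma\to 0$.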
\subsection{History and applications} In 1840, J. Steiner~\cite{stei:40} showed that the volume of an $\varepsilon$-neighborhood of a convex body in $\mathbb{R}^3$ can be written as a polynomial in~$\varepsilon$. This result has become a staple of integral geometry and was the starting point of a myriad of generalizations in multiple directions. One such generalization is a celebrated result by H. Weyl~\cite{weyl:39}, who showed that for $\varepsilon$ small enough, the volume of an $\varepsilon$-neighborhood around a compact Riemannian submanifold of $\mathbb{R}^n$ is given by a polynomial whose degree is the dimension of the manifold. Weyl's tube formula became an important ingredient in Allendoerfer and Weil's proof of the Gauss-Bonnet Theorem for hypersurfaces. For more on Weyl's tube formula and its ramifications, see~\cite{gray:04}. Bounds on the volume of tubes around real varieties in terms of degrees have previously been given by R.~Wongkew~\cite{wong:93}, although without explicit constants. Tube formulae came onto the radar of numerical analysis through the work of S. Smale~\cite{smal:81}, E. Kostlan~\cite{kost:85}, J. Renegar~\cite{rene:87}, and J. Demmel~\cite{demm:88}, among others, who were interested in the probabilistic analysis of condition numbers. It has been observed (see, e.g.,~\cite{kaha:72,demm:87} and the references there) that the condition number of many numerical computation problems can be bounded by the inverse distance to a set of ill-posed inputs. In particular, if one can describe the set of ill-posed inputs as a subset of an algebraic variety, then a bound on the relative volume of its neighborhood in terms of the degree of the variety directly translates into a result on the probability distribution of condition numbers. The results of Demmel~\cite{demm:88} have been partially extended to the setting of {\em smoothed analysis} on the sphere in~\cite{bucl:08}, by studying tubular neighborhoods of hypersurfaces intersected with spherical caps. For a comprehensive survey of these ideas we refer to~\cite{buer:10}. Recently, a consequence of the degree bound derived in this article has been used in the study of embeddings of simplicial complexes into Euclidean space~\cite[Prop 3.10]{grgu:12}. Other notable fields in which tube formulae have been used extensively include statistics~\cite{adta:07} and the probabilistic analysis of convex optimization~\cite{ambu:11,almt:13,mctr:13}. The main purpose of the current article is to fill a gap in the literature by making available a complete and rigorous derivation of the real degree bounds used in~\cite{demm:88}. \subsection{Main ideas} The proof of Theorem~\ref{th:main} is based on three main ingredients: Weyl's tube formula, an integral-geometric kinematic formula, and B\'ezout-type bounds on the degree of Gauss maps. In what follows, let $V$ be a complete intersection of dimension $m=n-s$. First, based on Weyl's tube formula, a bound is derived in terms of {\em integrals of absolute curvature}: \begin{equation*} {\mathsf{vol}}_n \ T(V,\varepsilon) \leq \varepsilon^s \sum_{i=0}^{m} \frac{1}{s+i} \ |K_{i}|(V) \ \varepsilon^{i}. \end{equation*} The highest order term $|K_m|(V)$ is intimately related to the {\em generalized Gauss map} of $V$, and can in fact be expressed in terms of the {\em degree} of this map. Using standard B\'ezout-type arguments it is possible to bound the degree of the Gauss map in terms of the degrees of the defining polynomials. The lower-order invariants $|K_i|(V)$ can then be related to the highest order invariants $|K_i|(V\cap L)$ of an intersection with a random affine subspace by means of {\em Crofton's Formula} from integral geometry: \begin{equation*} |K_i|(V)\leq 2\binomgr{n}{s+i}\int_{L\in \mathcal{E}_{s+i}^n}|K_i|(V\cap L) \ d\lambda_{s+i}^n, \end{equation*} where $\mathcal{E}_{s+i}^n$ denotes the space of $(s+i)$-dimensional affine subspaces of $\mathbb{R}^n$, equipped with a suitable measure $\lambda_{s+i}^n$. One can then apply the degree bounds in lower dimension. Obviously, some care has to be taken when implementing these ideas in detail. \subsection{Outline} Section~\ref{se:prel} gives a review of the necessary concepts of Riemannian geometry in Euclidean space. In Section~\ref{se:tubes}, Weyl's tube formula and results from integral geometry are presented in a slightly generalized form to suit our purposes. At the beginning of Section~\ref{se:degree}, the tube formula is reformulated in terms of the degrees of a generalized Gauss map. Up to this point, everything is based on compact Riemannian manifolds. Systems of polynomial equations enter when bounding the degrees of the generalized Gauss map, leading to the proof of Theorem~\ref{th:main}. The appendix is devoted to a complete proof of Weyl's tube formula in Euclidean space. \subsection{Notation and terminology}\label{se:notation} We write $B^n(p,\sigma)$ for the solid closed ball in $\mathbb{R}^n$ with center $p$ and radius $\sigma>0$, and $S^{n-1}(p,\sigma)$ for its boundary, and set $S^{n-1}:=S^{n-1}(0,1)$ and $B^n:=B^n(0,1)$.
We write ${\mathsf{vol}}_n\ M$ for the $n$-dimensional Lebesgue-measure of a measurable set $M\subseteq \mathbb{R}^n$, and often drop the subscript and simply write ${\mathsf{vol}} \ M$. For an $m$-dimensional Riemannian manifold $M$, when we write ${\mathsf{vol}}_m \ M = {\mathsf{vol}} \ M$ we mean $\int_M\omega_M$, with $\omega_M$ the volume form associated to the Riemannian structure (see Section~\ref{sec:integration}). Whenever we say manifold, we mean smooth manifold. Throughout this paper we denote by ${\mathcal O}_{n-1}:=2\pi^{n/2}/\Gamma\left(\frac{n}{2}\right)$ the $(n-1)$-dimensional volume of the unit sphere $S^{n-1}$ in $\mathbb{R}^{n}$, and $\omega_n:={\mathcal O}_{n-1}/n$ the $n$-dimensional volume of the solid unit ball in $\mathbb{R}^n$. The {\em flag coefficients} are defined as \begin{equation}\label{eq:bingr} \binomgr{n}{k}:=\binomial{n}{k}\frac{\omega_n}{\omega_k \ \omega_{n-k}} \end{equation} for $n\geq 0$ and $k\geq 0$. They appear naturally in the study of invariant measures on Grassmannians~\cite{klro:97}. \section{Preliminaries}\label{se:prel} We assume familiarity with the basic notions of Riemannian geometry, as described for example in~\cite{doca:92}. The purpose of most of this section is to introduce notation and terminology. \subsection{Riemannian manifolds in $\mathbb{R}^n$}\label{sse:riemannian} Given a Riemannian manifold $M$ of dimension $m$, we denote by $TM$ its tangent bundle, by $C(M)$ the ring of smooth functions on $M$, and by $\mathcal{X}(M)$ the $C(M)$-module of tangent vector fields on $M$. For $p\in M$ we write $T_pM$ for the tangent space at $p$. If $v\in T_p\mathbb{R}^n$ and $f\in C(\mathbb{R}^n)$, then $v(f)$ denotes the directional derivative of $f$ in direction $v$ at $p$. In this article we are only concerned with submanifolds $M$ of Euclidean space $\mathbb{R}^n$. As such, each $T_pM$ can be identified with a subspace of $T_p\mathbb{R}^n\cong \mathbb{R}^n$ in the obvious manner. Let $N M := \{ (p,v)\in T\mathbb{R}^n \mid p\in M, v \perp T_pM\}$ be the normal bundle to $M$ in $\mathbb{R}^n$ and denote by $N_pM$ the fiber of $N M$ over $p\in M$, i.e., the normal space to $M$ at $p$ in $\mathbb{R}^n$. Let $Y\in \mathcal{X}(\mathbb{R}^n)$ be a smooth vector field. For $v\in T_p\mathbb{R}^n$ denote by $\nabla_vY:=v(Y)$ the covariant derivative of $Y$ along $v$ at $p$. The covariant derivative satisfies $v(\langle Y,Z\rangle) =\langle\nabla_vY,Z\rangle+\langle Y,\nabla_vZ\rangle$. In particular, for orthogonal fields $Y$ and $Z$ we have $\langle \nabla_vY,Z\rangle=-\langle Y,\nabla_vZ\rangle$. For $v\in N_p M$ and $X,Y\in \mathcal{X}(M)$, the second fundamental form $S_v(X,Y)$ of $X$ and $Y$ along $v$ is the symmetric, bilinear map $T_pM\times T_pM\rightarrow \mathbb{R}$ defined by \begin{equation*} S_v(X,Y):=\langle \nabla_{X(p)}Y,v\rangle, \end{equation*} where we assume the vector fields $X,Y$ to be extended to a neighborhood of $M$ in $\mathbb{R}^n$ for this definition to make sense. Given a normal vector field $Z$ on $M$ we have $S_{Z(p)}(X,Y)=-\langle Y,\nabla_{X(p)}Z\rangle$ (since $X,Y$ are orthogonal to $Z$). Given an orthonormal frame field $(E_1,\dots,E_m)$ on $U\subset M$, we will on occasion use the matrix $S(Z)$ with entries in $C(M)$ that represents this bilinear form with respect to that frame field. Its values at $p\in U$ are given by the entries \begin{equation}\label{eq:second} S_{ij}(Z)(p)=S_{Z(p)}(E_i,E_j)=\langle \nabla_{E_i(p)}E_j,Z\rangle=-\langle E_j,\nabla_{E_i(p)}Z\rangle.
\end{equation} Note that we can also talk about $S(v)$ for fixed $v\in N_pM$. Then we have $S(v)=S(Z)(p)$ for any normal vector field such that $Z(p)=v$. \subsubsection{A note on integration and orientation}\label{sec:integration} Given a Riemannian manifold $M\subseteq \mathbb{R}^n$, we denote by $\omega_M$ the natural volume form on $M$ associated to the Riemannian metric. Thus if $U\subseteq M$ is an oriented coordinate neighborhood and $x^1,\dots,x^m\colon U\rightarrow \mathbb{R}^m$ are orthonormal coordinates (so that the tangent vectors $\partial/\partial x^1,\dots,\partial/\partial x^m$ form a positively oriented, orthonormal basis at each $p\in U$), then $\omega_M=dx^1\wedge \cdots \wedge dx^m$ on $U$. All volume forms are densities (unsigned forms), though we will occasionally locally represent them as differential forms in an oriented coordinate neighborhood $U\subseteq M$ without always stating this explicitly. Given a map $f$ from a manifold of the same dimension to $M$, $f^*\omega_M$ denotes the pull-back volume form. \subsubsection{Curvature Invariants}\label{sec:curvinv} In this section we introduce the curvature invariants $K_0(M),\dots,K_m(M)$ associated to a compact Riemannian manifold $M$ in terms of the second fundamental form. These invariants are key components in Weyl's formula (Section~\ref{sec:weyl}) for the volume of tubes around $M$. Let $M$ be an $m$-dimensional compact Riemannian manifold, $U'\subseteq \mathbb{R}^n$ open and $U=U'\cap M$. Let $(E_1,\dots,E_n)$ be an orthonormal frame field on $U'$, such that $E_1(p),\dots,E_m(p)$ form an oriented basis of $T_pM$ for all $p\in U$. For $v\in N_pM$ let $S(v)$ denote the $m\times m$ matrix of the second fundamental form at $p$ along $v$ with respect to the frame field, as defined in~(\ref{eq:second}). For $0\leq i\leq m$ let $\psi_i\colon N_p M\rightarrow \mathbb{R}$ be the homogeneous polynomial of degree $i$ defined by \begin{equation*} \det(\mathrm{Id}-tS(v))=\sum_{i=0}^m t^i \psi_i(v). \end{equation*} Note that the $\psi_i(v)$ are, up to sign, the coefficients of the characteristic polynomial of $S(v)$. More precisely, we have \begin{equation*} \psi_i(v)=(-1)^i\sigma_i(\kappa_1(v),\dots,\kappa_m(v)), \end{equation*} where the $\kappa_i(v)$ are the eigenvalues of $S(v)$, i.e., the principal curvatures, and $\sigma_i$ denotes the $i$-th elementary symmetric function. In particular, $\psi_m(v)=(-1)^m\det S(v)$. These quantities are, up to orientation, independent of the particular orthonormal frame used to define the matrix $S(v)$. For $p\in M$, set $S(N_pM):=\{v\in N_pM \mid \|v\|=1\}$ and denote by $S(NM)$ the corresponding normal sphere bundle. At a point $p\in M$ define \begin{equation*} I_i(p)=\int_{v\in S(N_pM)} \psi_i(v)\; \omega_{S(N_pM)}. \end{equation*} The $I_i(p)$ are polynomial invariants of the second fundamental form in the sense of~\cite{howa:93}. If $v=\sum_{j=1}^su^jE_{m+j}(p)$, $s=n-m$, then the $I_i(p)$ are integrals over all $u\in S^{s-1}$ of homogeneous polynomials of degree $i$ in $u^1,\dots,u^s$. From this it follows that $I_i(p)=0$ for $i$ odd. The {\em integrals of curvature} are defined as \begin{equation}\label{eq:defki} K_i(M):=\int_M I_i(p)\; \omega_M=\int_{S(N M)}\psi_i(v)\; \omega_{S(N M)}. \end{equation} It is easy to see that $K_0(M)={\mathcal O}_{s-1}{\mathsf{vol}}_m \ M$. Less trivial is the fact that $K_m(M)={\mathcal O}_{n-1}\ \chi(M)$, where $\chi(M)$ is the Euler characteristic of $M$. 
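For instance, let $M=S^1\subseteq \mathbb{R}^2$ be the unit circle, so that $m=s=1$. Over each $p\in S^1$ the normal sphere bundle consists of the two unit normals $\pm\nu(p)$, with $\nu$ the outer normal, and the second fundamental form is the $1\times 1$ matrix $S(\pm\nu)=\mp 1$. Hence $I_0(p)=2$ and $I_1(p)=\psi_1(\nu)+\psi_1(-\nu)=0$, so that $K_0(S^1)={\mathcal O}_0\,{\mathsf{vol}}_1\ S^1=4\pi$ and $K_1(S^1)=0={\mathcal O}_1\,\chi(S^1)$, confirming both identities in this simple case.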
That $K_m(M)={\mathcal O}_{n-1}\,\chi(M)$ is a consequence of the generalized Gauss-Bonnet Theorem (see~\cite{gray:04} for a discussion of this result and its relation to Weyl's tube formula). The {\em integrals of absolute curvature}, suggested by Peter B\"urgisser~\cite{bucl:08}, are defined as \begin{equation}\label{eq:defabski} |K_i|(M):=\int_{S(N M)}|\psi_i(v)|\; \omega_{S(N M)}. \end{equation} These are important for extending Weyl's tube formula to an inequality for the volume of $\varepsilon$-tubes that is valid for arbitrary $\varepsilon$. Clearly, the definition is also valid for an open subset $U\subset M$, or an open subset of $M\backslash \partial M$ if $M$ is a compact Riemannian manifold with boundary. \subsubsection{The degree} Let $f\colon M\rightarrow P$ be a smooth map of compact Riemannian manifolds. By Sard's Theorem~\cite[\S 2]{miln:97} almost all $q\in P$ are regular values. The preimage $f^{-1}(q)$ is either empty or a finite set with locally constant cardinality~\cite{miln:97} as $q$ varies among regular values. For measurable $h\colon P\rightarrow \mathbb{R}$ we have \begin{equation}\label{eq:coarea} \int_{p\in M}h\circ f(p) \ f^*\omega_P = \int_{q\in P} h(q)\ \#f^{-1}(q) \ \omega_P, \end{equation} where $\#f^{-1}(q)$ denotes the cardinality of the preimage of $q$. Recall (Section~\ref{sec:integration}) that we are dealing with unsigned forms, i.e., $f^*\omega_P = |\det(Df)|\,\omega_M$; otherwise we would have to count the points in the fiber with signs. We define the maximum degree of $f$ to be the maximum cardinality of the preimage of a regular value under $f$: \begin{equation*} \operatorname{mdeg} f := \max_{q\in \operatorname{reg} P} \#f^{-1}(q). \end{equation*} With this definition we have \begin{equation}\label{eq:degree} \int_{M}\ f^*\omega_P\leq \operatorname{mdeg} f \ \int_{P}\ \omega_P. \end{equation} This notion of degree differs from the usual one from differential topology (see~\cite[\S 5]{miln:97}), which takes into account orientation. \subsubsection{Transversality} The intersection of two manifolds $M$ and $P$ in $\mathbb{R}^n$ of dimensions $m$ and $\ell$ with $m+\ell\geq n$ is called {\em transversal} at $p\in M\cap P$, if $\dim(T_pM\cap T_pP)=m+\ell-n$. The intersection is called transversal if it is transversal at every $p\in M\cap P$. In that case, $M\cap P$ is a smooth $(m+\ell-n)$-dimensional manifold. Recall that $B^n(p,\sigma)$ denotes the closed ball of radius $\sigma$ around $p$ in $\mathbb{R}^n$, and $S^{n-1}(p,\sigma)=\partial B^n(p,\sigma)$ is its boundary. The following lemma is a standard application of Sard's Lemma; the proof is omitted. By ``almost all'' we mean ``up to a set of measure zero''. \begin{lemma}\label{le:transverse} Let $M$ be a Riemannian manifold. For almost all $\sigma>0$ the intersection $B^n(p,\sigma)\cap M$ is a Riemannian manifold with boundary. In particular, $S^{n-1}(p,\sigma)\cap M$ is a smooth Riemannian manifold of codimension one in $M$. \end{lemma} \section{Geometry of tubes and integral geometry}\label{se:tubes} \subsection{Weyl's tube formula}\label{sec:weyl} References for the content of this section are~\cite{weyl:39,gray:04}. Let $M\subseteq \mathbb{R}^n$ be a Riemannian submanifold of dimension $m< n$, possibly with boundary, and denote by $s:=n-m$ the codimension of $M$ in $\mathbb{R}^n$.
The (closed) {\em tube} of radius $\varepsilon$ around $M$ in $\mathbb{R}^n$ is defined to be the set \begin{equation}\label{eq:tube} T(M,\varepsilon) := \left\{p\in \mathbb{R}^n \bigg| \begin{array}{ll} \exists \text{ line segment of length }\leq \varepsilon \text{ from } p \\ \text{ meeting } M \text{ orthogonally}\end{array}\right\}. \end{equation} For compact manifolds this coincides with the $\varepsilon$-neighborhood of $M$ in $\mathbb{R}^n$, though in general this need not be the case. \begin{figure}[h] \begin{center} \includegraphics[width=0.8\textwidth]{tubes.pdf} \end{center} \caption{Tube around an open [left] and a closed [right] interval.} \end{figure} In his influential paper~\cite{weyl:39}, Weyl derived the expression \begin{equation*} {\mathsf{vol}} \ T(M,\varepsilon) = {\mathcal O}_{s-1} \ \varepsilon^s \sum_{\stackrel{i=0}{i \text{ even}}}^{m} \frac{(i-1)(i-3)\cdots 1}{(s+i)(s+i-2)\cdots s} \ \mu_{i}(M)\ \varepsilon^{i} \end{equation*} for the volume of a tube of radius $\varepsilon$ around $M$, provided $\varepsilon$ is small enough. The $\mu_i(M)$ are the {\em curvature invariants} of $M$. The deeper part of Weyl's work consists of showing that these invariants are intrinsic, that is, they only depend on the curvature tensor of $M$ and not on the particular embedding of $M$ in $\mathbb{R}^n$. We will not need this feature here, however, and will be happy with expressing these invariants in terms of the second fundamental form. The $\mu_i(M)$ are just a different normalization of the invariants $K_i(M)$ introduced in Section~\ref{sec:curvinv}, namely \begin{equation}\label{eq:muk} K_i(M)={\mathcal O}_{s-1}\frac{(i-1)(i-3)\cdots 1}{(s+i-2)(s+i-4)\cdots s} \mu_i(M) \end{equation} for $i$ even. Note that Corollary~\ref{co:asympt} follows immediately from Weyl's tube formula, using that $K_0(M)={\mathcal O}_{s-1}{\mathsf{vol}}_m\ M$. Note that the $K_i(M)$ are no longer independent of the embedding, since the codimension enters the formula. We will need a slight variation of Weyl's tube formula that works for arbitrary $\varepsilon$. \begin{theorem}\label{thm:tubeineq} Let $M\subseteq \mathbb{R}^n$ be an oriented, compact, $m$-dimensional Riemannian manifold, possibly with boundary, and assume $s:=n-m>0$. Let $U\subseteq M\backslash \partial M$ be an open subset of $M$. Then for all $\varepsilon>0$ we have \begin{equation}\label{eq:weyl} {\mathsf{vol}} \ T(U,\varepsilon) \leq \varepsilon^s \sum_{i=0}^{m} \frac{1}{s+i} \ |K_{i}|(U) \ \varepsilon^{i}. \end{equation} \end{theorem} The proof, given in the appendix, is along the lines of~\cite{weyl:39,gray:04}. \begin{example} Let $M=S^m$ be the $m$-dimensional unit sphere in $\mathbb{R}^n$. Along the lines of the proof of Theorem~\ref{thm:tubeineq} we can derive \begin{equation*} {\mathsf{vol}} \ T(S^m,\varepsilon) = 2 \ \omega_{m+1}\ \varepsilon^s \sum_{\stackrel{i=0}{i \text{ even}}}^{m} \frac{\omega_{s+i}}{\omega_{i+1}}\binomial{m+1}{i+1}\ \varepsilon^{i} \end{equation*} for small $\varepsilon$ (recall from Section~\ref{se:notation} the definition of $\omega_n$ and ${\mathcal O}_n$). From this we get \begin{equation*} K_i(S^m)=\frac{2{\mathcal O}_m{\mathcal O}_{s+i-1}}{{\mathcal O}_i}\binomial{m}{i} \end{equation*} for $i$ even. Note that $K_0(S^m)={\mathcal O}_m{\mathcal O}_{n-m-1}$ and that $K_m(S^m)=2{\mathcal O}_{n-1}$ for $m$ even and $K_m(S^m)=0$ for $m$ odd, in accordance with the Euler characteristic for spheres.
Some special cases for the volume of tubes: \begin{enumerate} \item Setting $m=n-1$ we get ${\mathsf{vol}} \ T(S^{n-1},\varepsilon)=\omega_n \ [(1+\varepsilon)^n-(1-\varepsilon)^n]$, as was to be expected. \item For $m=0$ we have ${\mathsf{vol}} \ T(S^0,\varepsilon)=2\ \varepsilon^n\ \omega_n$. \item For $m=1$, $n=3$ we get the volume of the solid torus ${\mathsf{vol}} \ T(S^1,\varepsilon)=2\pi^2\varepsilon^2$. \end{enumerate} \end{example} \subsection{Integral geometry} In order to obtain upper bounds for the integrals of absolute curvature $|K_i|(M)$, we will first derive bounds for $|K_m|(M)$ using the generalized Gauss map, and the case where $0\leq i<m$ is then handled by relating the $i$-th curvature invariants $K_i(M)$ to the curvature invariants $K_i(M\cap L)$ of the intersection of $M$ with a random affine space of dimension $s+i$. Formulae relating invariant measures of a set to its intersection with random affine spaces are known by the name of {\em Crofton formulae} and play a central role in integral geometry. For an introduction to integral geometry and geometric probability we refer to~\cite{klro:97, shwe:08}. The version of Crofton's formula involving Weyl's curvature invariants is due to Chern~\cite{cher:66} and Federer~\cite{fede:59}, see also~\cite[15.95b]{sant:76}. Let $\mathcal{E}_k^{n}$ be the set of $k$-dimensional affine spaces in $\mathbb{R}^n$ and $\mathbb{G}(n,k)$ the Grassmannian of $k$-dimensional linear subspaces of $\mathbb{R}^n$. We can identify $\mathcal{E}_k^n$ with the subset of those $(V,p)\in \mathbb{G}(n,k)\times \mathbb{R}^n$ such that $p\perp V$, the one-to-one correspondence being given by $(V,p)\mapsto p+V$~\cite[Chapter 6]{klro:97}. Let $\nu_k^n$ denote the $O(n)$-invariant measure on $\mathbb{G}(n,k)$ induced by the identification $\mathbb{G}(n,k)=O(n)/\bigl(O(k)\times O(n-k)\bigr)$, normalized such that \begin{equation*} \nu_k^n(\mathbb{G}(n,k))=\frac{{\mathcal O}_{n-1}\cdots {\mathcal O}_{n-k}}{{\mathcal O}_{k-1}\cdots {\mathcal O}_0}, \end{equation*} see also~\cite[3.2]{bucl:09} for a discussion. The product measure $\nu_{k}^n\times \omega_{\mathbb{R}^n}$ gives rise to an invariant measure $\overline{\lambda}_k^n$ on $\mathcal{E}_k^n$, defined by \begin{equation*} \int_{V\in \mathbb{G}(n,k)}\left(\int_{q\in V^{\perp}}f(q+V)\ \omega_{V^{\perp}}\right)\ d\nu_k^n=\int_{L\in \mathcal{E}_k^n} f(L)\ d\overline{\lambda}_k^n \end{equation*} for a measurable function $f$ on $\mathcal{E}_k^n$. In particular, setting \begin{equation*} f(L)=\begin{cases} 1 & \text{if } L\cap B^n(p,\sigma)\neq \emptyset,\\ 0 & \text{else,} \end{cases} \end{equation*} we get \begin{align*} \overline{\lambda}_{k}^n(\{L\in \mathcal{E}_{k}^n \mid L\cap B^n(p,\sigma)\neq\emptyset\})&=\int_{L\in \mathcal{E}_k^n} f(L)\ d\overline{\lambda}_k^n\\ &=\int_{V\in \mathbb{G}(n,k)}\left(\int_{q\in V^{\perp}}f(q+V)\ \omega_{V^{\perp}}\right)\ d\nu_k^n\\ &=\omega_{n-k}\sigma^{n-k}\nu_k^n(\mathbb{G}(n,k)). \end{align*} In the following we use the renormalized measure $\lambda_k^n=\nu_k^n(\mathbb{G}(n,k))^{-1} \ \overline{\lambda}_k^n$, so that \begin{equation*} \lambda_{k}^n(\{L\in \mathcal{E}_{k}^n \mid L\cap B^n\neq\emptyset\})=\omega_{n-k}. \end{equation*} The following theorem is merely a reformulation of~\cite[15.95b]{sant:76} with a different normalization of the measure, and after simplifying the constants.
Note also that with the parameters chosen here, it makes no difference whether we formulate this theorem with $\mu_i(M)$ or with $K_i(M)$, since $M\cap L$ has generically the same codimension $s$ in $L$ as $M$ in $\mathbb{R}^n$. Recall the definition~(\ref{eq:bingr}) of the flag coefficients in Section~\ref{se:notation}. \begin{theorem}[Crofton's Theorem]\label{thm:crofton} \begin{equation}\label{eq:crofton} K_i(M)=\binomgr{n}{s+i}\int_{L\in \mathcal{E}_{s+i}^n}K_i(M\cap L) \ d\lambda_{s+i}^n. \end{equation} \end{theorem} Crofton's Theorem leads to a bound on integrals of absolute curvature. \begin{theorem}\label{thm:croftonabs} Let $M$ be a compact Riemannian submanifold of $\mathbb{R}^n$ of dimension $m<n$, and let $i\leq m$. Then \begin{equation}\label{eq:croftonabs} |K_i|(M)\leq 2\binomgr{n}{s+i}\int_{L\in \mathcal{E}_{s+i}^n}|K_i|(M\cap L) \ d\lambda_{s+i}^n. \end{equation} \end{theorem} \begin{Proof} Let $M_+$ and $M_-$ denote the parts of $M$ on which $I_i(p)$ is positive and negative, respectively. Then $|K_i|(M)=|K_i(M_+)|+|K_i(M_-)|$. \end{Proof} \section{Degree bounds}\label{se:degree} \subsection{Degree of the Gauss map} In this section we interpret the expected value of the highest curvature invariant as the degree of a generalized Gauss map. Let $S(N M)$ denote the normal sphere bundle over $M$. Note that $S(N M)$ is a manifold of dimension $n-1$. \begin{definition} Let $M\subseteq \mathbb{R}^n$ be a compact $m$-dimensional Riemannian manifold. The {\em generalized Gauss map} of $M$ is defined as \begin{equation*} \gamma\colon S(N M)\rightarrow S^{n-1}, \quad (p,v)\mapsto v. \end{equation*} \end{definition} The generalized Gauss map on a compact manifold can be shown to be surjective. Note that for almost all $w\in S^{n-1}$, the map $h(p,v)=\langle v,w\rangle$ is a Morse function whose non-degenerate critical points are those $(p,v)$ such that $v=w$. Recall now the definition of the maximum degree of a map and the bound~(\ref{eq:degree}). \begin{lemma}\label{le:curvedegree} Let $M\subseteq \mathbb{R}^n$ be a compact Riemannian manifold of dimension $m$. Then \begin{equation}\label{eq:gauss} |K_m|(M)=\int_{v\in S^{n-1}} \# \gamma^{-1}(v) \ \omega_{S^{n-1}} \leq {\mathcal O}_{n-1} \ \operatorname{mdeg} \gamma . \end{equation} \end{lemma} \begin{Proof} We need to show that \begin{equation}\label{eq:pullback} \gamma^*\omega_{S^{n-1}}=|\det S(v)| \ \omega_{S(N M)} \end{equation} holds on $S(N M)$. Once this is shown, the claim of the lemma follows from the definition~(\ref{eq:defabski}) of the $|K_i|$, namely, \begin{equation*} |K_m|(M)=\int_{(p,u)\in S(N M)} |\det S(u)|\ \omega_{S(N M)}. \end{equation*} Let $x^1,\dots,x^m\colon U\rightarrow \mathbb{R}^m$ be orthonormal coordinates on an open set $U\subset M$. Let $(E_1,\dots,E_n)$ be an orthonormal frame field defined in a neighborhood of $U$ in $\mathbb{R}^n$, such that on $U$ we have $E_i=\partial/\partial x^i$ for $1\leq i\leq m$. The frame field $E_1,\dots,E_n$ gives a local trivialization of the sphere bundle \begin{align*} U\times S^{s-1}&\rightarrow S(N M)\\ (p,u)&\mapsto \left(p,\sum_{i=1}^{s}u^i E_{m+i}\right). \end{align*} An orthonormal coordinate system $y^1,\dots,y^{s-1}$ for $S^{s-1}$ thus gives rise to orthonormal coordinates $x^1,\dots,x^m,y^1,\dots,y^{s-1}$ on $S(N M)$. With $\omega_M=dx^1\wedge \cdots \wedge dx^{m}$ and $dy=dy^1\wedge \cdots \wedge dy^{s-1}$ we have \begin{equation} \omega_{S(N M)}=\omega_M\wedge dy. \end{equation} Similarly we have $\omega_{S^{n-1}}=E_1^*\wedge \cdots \wedge E_m^*\wedge dy^1\wedge \cdots \wedge dy^{s-1}$.
Let $\phi$ be such that $\gamma^*\omega_{S^{n-1}}=\phi(p,v) \ \omega_{S(N M)}$ as a differential form. Then \begin{align*} \phi(p,v)&=\gamma^*\omega_{S^{n-1}}\left( \frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^m},\frac{\partial}{\partial y^1},\dots,\frac{\partial}{\partial y^{s-1}}\right)\\ &=\omega_{S^{n-1}}\left(\gamma_*\frac{\partial}{\partial x^1},\dots,\gamma_*\frac{\partial}{\partial x^m},\gamma_*\frac{\partial}{\partial y^1},\dots,\gamma_*\frac{\partial}{\partial y^{s-1}}\right). \end{align*} Note that \begin{equation*} \gamma_*\frac{\partial}{\partial x^i}=\sum_{\ell=1}^s u^{\ell}\frac{\partial}{\partial x^i}E_{m+\ell}(p), \quad \gamma_*\frac{\partial}{\partial y^j}=\sum_{\ell=1}^s \frac{\partial}{\partial y^j}u^{\ell} \ E_{m+\ell}(p), \end{equation*} from which we obtain \begin{equation*} \phi(p,v) = \omega_{M}\left(\frac{\partial}{\partial x^1} \gamma,\dots,\frac{\partial}{\partial x^m} \gamma\right) \cdot dy\left(\frac{\partial}{\partial y^1} \gamma,\dots,\frac{\partial}{\partial y^{s-1}}\gamma\right). \end{equation*} A direct calculation shows that \begin{equation*} \langle\frac{\partial}{\partial x^i}\gamma,E_j\rangle =-S_{ij}(v). \end{equation*} From this it follows that \begin{equation*} \omega_{M}\left(\frac{\partial}{\partial x^1} \gamma,\dots,\frac{\partial}{\partial x^m} \gamma \right)=(-1)^m\det S(v). \end{equation*} Clearly \begin{equation*} dy\left(\frac{\partial}{\partial y^1} \gamma,\dots,\frac{\partial}{\partial y^{s-1}} \gamma \right)=1 \end{equation*} from which the claim follows for $M$ without boundary. \end{Proof} The statement of Lemma~\ref{le:curvedegree} also holds if $M$ is replaced by an open subset $U\subset M\backslash \partial M$, for a compact manifold with boundary $M$. We omit the details. For an affine subspace $L\in \mathcal{E}_{s+i}^n$ in general position, the intersection $M\cap L$ is either empty or an $i$-dimensional submanifold of $L\cong \mathbb{R}^{s+i}$. In the latter case we can define the degree of $M$ with respect to $L$ as the degree of the Gauss map of $M\cap L$ in $L$, that is, \begin{equation*} \operatorname{mdeg}(M;L):= \operatorname{mdeg} \gamma|_{M\cap L}\leq \max_{v\in S^{s+i-1}}\#\gamma|_{M\cap L}^{-1}(v). \end{equation*} Define the $i$-th degree $\operatorname{mdeg}_i(M)$ of $M$ to be the maximum of $\operatorname{mdeg}(M;L)$ over all $L\in \mathcal{E}_{s+i}^n$ that intersect $M$: \begin{equation*} \operatorname{mdeg}_i(M):=\sup_{L\in\mathcal{E}_{s+i}^{n}}\operatorname{mdeg}(M;L). \end{equation*} Before dealing with polynomial equations, we give a bound on the volume of an $\varepsilon$-tube around $M$ by a function of the $i$-th degrees of $M$ and of $\varepsilon$. \begin{theorem}\label{thm:degbound} Let $M$ be a Riemannian manifold of dimension $m$ in $\mathbb{R}^n$ and set $s=n-m$. Assume $M$ is contained in a ball $B^n(p,\sigma)$ of radius $\sigma$. Then for $\varepsilon>0$ we have \begin{equation*} {\mathsf{vol}} \ T(M,\varepsilon)\leq 2 \omega_n\ \varepsilon^s \ \sum_{i=0}^{m} \binomial{n}{s+i}\ \operatorname{mdeg}_i(M) \ \sigma^{m-i} \varepsilon^i, \end{equation*} with equality for $\varepsilon$ small enough. \end{theorem} \begin{Proof} In light of the tube formula, Theorem~\ref{thm:tubeineq}, we aim to bound the integrals of absolute curvature $|K_i|$ of $M$.
By Crofton's formula~(\ref{eq:croftonabs}) and the degree bound, Lemma~\ref{le:curvedegree}, we have {\small \begin{align*} \frac{1}{s+i} |K_i|(M)&\leq \frac{2}{s+i}\binomgr{n}{s+i}\int_{L\in\mathcal{E}_{s+i}^{n}} |K_i|(M\cap L)d\lambda_{s+i}^n\\ &\leq\frac{2 \omega_n}{(s+i)\omega_{s+i}\omega_{m-i}}\binomial{n}{s+i}{\mathcal O}_{s+i-1}\int_{L\in\mathcal{E}_{s+i}^{n}}\operatorname{mdeg}(M;L)d\lambda_{s+i}^n. \end{align*}} Since $M\subset B^n(p,\sigma)$, we only need to worry about those $L$ that intersect this ball. By our normalization, \begin{equation*} \lambda_{s+i}^n(\{L\in \mathcal{E}_{s+i}^n\mid L\cap B^n(p,\sigma)\neq \emptyset\})=\sigma^{m-i}\omega_{m-i}, \end{equation*} and we can bound the right-most integral above as \begin{equation*} \int_{L\in\mathcal{E}_{s+i}^{n}}\operatorname{mdeg}(M;L)d\lambda_{s+i}^n\leq \sigma^{m-i}\omega_{m-i}\cdot \operatorname{mdeg}_i(M). \end{equation*} Plugging these bounds into the tube formula~(\ref{eq:weyl}) and simplifying the constants, the claim follows. \end{Proof} \subsection{Complete intersections} Let $f_1,\dots,f_s\in \mathbb{R}[X_1,\dots,X_n]$ be polynomials such that their common zero set $V$ is a complete intersection, i.e., for every $p\in V$ the gradients $\nabla f_1,\dots,\nabla f_s$ are linearly independent. The gradients determine an orientation of $V$. \begin{lemma}\label{le:compint} Let $V$ be a complete intersection defined as the zero-set of polynomials $f_1,\dots,f_s$ of degree at most $D$. Then the degree of the generalized Gauss map $\gamma \colon S(NV)\rightarrow S^{n-1}$ is bounded by \begin{equation*} \operatorname{mdeg} \gamma \leq (2D)^n. \end{equation*} \end{lemma} \begin{Proof} We assume $V$ is compact; the general case can be handled with some care. Let $f=\sum_{i=1}^s f_i^2$, so that in particular, $Z(f)=Z(f_1,\dots,f_s)$. Let $\delta>0$ be such that $\delta$ is a regular value of $f\colon \mathbb{R}^n\rightarrow \mathbb{R}$ and set $f_\delta=f-\delta$. Then $V_\delta=Z(f_\delta)$ is a hypersurface with associated Gauss map $\gamma_\delta(x)=\nabla f_\delta(x)/\|\nabla f_\delta(x)\|$. By a standard argument using B\'ezout's Theorem (cf.~\cite{miln:64}), the degree of $\gamma_\delta$ is bounded by $(2D)^n$. We next argue that this bound also applies to the cardinality of $\gamma^{-1}(v)$. In fact, we can find a regular value $\delta>0$ of $f$ such that, for all $(p_i,v_i)\in \gamma^{-1}(v)$ and some disjoint neighborhoods $U_i$ of $p_i$ in $\mathbb{R}^n$, there exist $q_i\in U_i$ such that $f(q_i)=\delta$ and $\gamma_\delta(q_i)=v$. It follows that the number of points in the preimage $\gamma^{-1}(v)$ is also bounded by $(2D)^n$. \end{Proof}
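As a toy illustration, take $V=\{x_1^2+x_2^2=1\}\subseteq \mathbb{R}^2$, so that $s=1$ and $D=2$. Each direction $v\in S^1$ is attained by the generalized Gauss map exactly twice: by the outer normal at one point of the circle and by the inner normal at the antipodal point. Hence $\operatorname{mdeg}\gamma=2$, well below the bound $(2D)^n=16$; the virtue of the lemma is not sharpness, but that it depends on the defining equations only through $n$ and $D$.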
Now everything is in place for the proof of the main bound. \Proofof{Theorem~\ref{th:main}} Set $M':=V\cap B^n(p,\sigma+\varepsilon)$. For almost all $\sigma$, $M'$ will be a smooth compact Riemannian manifold with smooth $(m-1)$-dimensional boundary $\partial M'$ (Lemma \ref{le:transverse}). Moreover, \begin{equation*} T(V,\varepsilon)\cap B^n(p,\sigma)\subseteq T(M'\backslash\partial M',\varepsilon)\cup T(\partial M',\varepsilon). \end{equation*} Note that this inclusion would not hold if we had defined $M'$ by intersecting $V$ with $B^n(p,\sigma)$, as $V$ need not intersect that ball at all. We can then apply Theorem~\ref{thm:degbound} to $M'\backslash \partial M'$ and to $\partial M'$. Since $\operatorname{mdeg}_i(M'\backslash \partial M')\leq \operatorname{mdeg}_i(V)$, it remains to bound $\operatorname{mdeg}_i(V)$. To bound the degree of $V\cap L$, after a change of coordinates we can assume that $L$ is given by $x_{s+i+1}=0,\dots,x_{n}=0$. The $f_i$ can therefore be seen as polynomials in $s+i$ variables, denoted by $\overline{x}$. The claim now follows from Lemma~\ref{le:compint}. The boundary $\partial M'$ is defined by the same set of polynomials $f_1,\dots,f_s$ as $V$, with the additional constraint of lying on the sphere $\|x-p\|^2=(\sigma+\varepsilon)^2$. We can therefore apply the same degree bounds, with the exponents increased by one, to this set. Note that if $V$ is homogeneous, we can define $M'$ by intersecting with $B^n(p,\sigma)$ rather than $B^n(p,\sigma+\varepsilon)$. We also have $T(V,\varepsilon)\cap B^n(p,\sigma)\subseteq T(M'\backslash \partial M',\varepsilon)$, which accounts for the factor of $2$ instead of $4$ and the simpler form in the second equation in Theorem~\ref{th:main}. Dividing the resulting expressions by ${\mathsf{vol}}\ B^n(p,\sigma)=\omega_n\sigma^n$ gives the desired bounds. \endProofof \section*{Appendix} In this appendix we give a proof of the tube formula, Theorem~\ref{thm:tubeineq}. \Proofof{Theorem~\ref{thm:tubeineq}} We prove the first inequality and point out on the way how the equality for small $\varepsilon$ is obtained. We restrict to compact manifolds $M$ without boundary; extending the argument to the slightly more general case in the statement of the theorem causes no problems. Consider the surjective map \begin{align*} f\colon S(N M)\times [0,\varepsilon] & \rightarrow T(M,\varepsilon)\subseteq \mathbb{R}^n\\ (p,v,t) &\mapsto p+tv \end{align*} of compact manifolds. For $(p,v)\in S(NM)$ the critical radius is defined as \begin{equation*} \rho_M(p,v)=\sup \{ t \mid \mathrm{dist}(p+tv,M)=t\}, \end{equation*} and set $\rho_M=\inf_{(p,v)\in S(NM)}\rho_M(p,v)$. The map $f$ is injective if $\varepsilon \leq \rho_M$. By Sard's Theorem the set of critical values of $f$ has Lebesgue measure zero and the fibers of $f$ at regular values are finite and locally constant~\cite[\S 1]{miln:97}. Given the natural volume form $\omega_{\mathbb{R}^n}$ on $\mathbb{R}^n$ we thus have, by~(\ref{eq:coarea}), \begin{equation}\label{eq:trans} {\mathsf{vol}}\ T(M,\varepsilon)\leq \int_{p\in T(M,\varepsilon)}\# f^{-1}(p) \ \omega_{\mathbb{R}^n}=\int_{S(N M)\times (0,\varepsilon)}f^*\omega_{\mathbb{R}^n}, \end{equation} with equality if $\varepsilon\leq \rho_M$. Recall that we are dealing with unsigned forms. The problem reduces to evaluating the right-hand side. We claim that \begin{equation}\label{eq:toshow} f^*\omega_{\mathbb{R}^n}=t^{s-1}\,\bigl|\det(\mathrm{Id}-tS(v))\bigr|\ \omega_{S(N M)}\wedge dt. \end{equation} Assuming this to hold for the moment, the claimed inequality for the volume of tubes follows by integrating \begin{align*} \int_{S(N M)\times (0,\varepsilon)}f^*\omega_{\mathbb{R}^n} &=\int_{S(N M)} \left(\int_{0}^{\varepsilon} t^{s-1}|\det(\mathrm{Id}-tS(v))| \ dt \right) \omega_{S(N M)}\\ &\leq \int_{S(N M)} \left(\int_{0}^{\varepsilon}t^{s-1}\sum_{i=0}^{m}t^i|\psi_i(v)| \ dt\right)\omega_{S(N M)}\\ &=\sum_{i=0}^m \left(\int_{0}^{\varepsilon}t^{s-1+i}\ dt\right) \left(\int_{S(N M)}|\psi_i(v)|\ \omega_{S(N M)}\right) \\ &= \sum_{i=0}^{m} \frac{1}{s+i} \ \varepsilon^{s+i} \ |K_i|(M). \end{align*} It therefore remains to prove (\ref{eq:toshow}). Note that if $\varepsilon<\rho_M$, then the map $f$ is injective and, with the right choice of orientation, the determinant $\det(\mathrm{Id}-tS(v))$ is always positive.
We can therefore omit the absolute value and obtain an equality with the integrals of curvature. Let $(x^1,\dots,x^m)\colon U\rightarrow \mathbb{R}^m$ be orthonormal coordinates on $U=U'\cap M$, where $U'\subseteq \mathbb{R}^n$ is open. Let $(E_1,\dots,E_n)$ be an orthonormal frame field on $U'$ such that $E_i:=\frac{\partial}{\partial x^i}$ on $U\subseteq M$ for $1\leq i\leq m$. Set $\omega_M:=E_1^*\wedge\cdots \wedge E_m^*$ and $\omega_N:=E_{m+1}^*\wedge \cdots \wedge E_n^*$ ($E_i^*$ denoting the dual of $E_i$). We then have $\omega_{\mathbb{R}^n}=\omega_M\wedge \omega_N$, and for the restriction to $M$, $\omega_M|_{TM}=dx^1\wedge \cdots \wedge dx^m$. The frame field also gives a local trivialization of the sphere bundle \begin{align*} U\times S^{s-1}&\rightarrow S(N M)\\ (p,u)&\mapsto \left(p,\sum_{i=1}^{s}u^i E_{m+i}(p)\right). \end{align*} An orthonormal coordinate system $y^1,\dots,y^{s-1}$ for $S^{s-1}$ then gives rise to orthonormal coordinates $(x^1,\dots,x^m,y^1,\dots,y^{s-1},t)$ on $S(N M)\times (0,\varepsilon)$. Setting $dx=dx^1\wedge \cdots \wedge dx^{m}$ and $dy=dy^1\wedge \cdots \wedge dy^{s-1}$ we have \begin{equation}\label{eq:wedge1} \omega_{S(N M)}\wedge dt=dx\wedge dy\wedge dt. \end{equation} Let $\phi(p,v,t)$ be such that $f^*\omega_{\mathbb{R}^n}=\phi(p,v,t) \ \omega_{S(N M)}\wedge dt$ as differential forms. By Equation (\ref{eq:wedge1}) we obtain \begin{align*} \phi(p,v,t)&=f^*\omega_{\mathbb{R}^n}\left( \frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^m},\frac{\partial}{\partial y^1},\dots,\frac{\partial}{\partial y^{s-1}},\frac{\partial}{\partial t}\right)\\ &=\omega_{\mathbb{R}^n}\left(f_*\frac{\partial}{\partial x^1},\dots,f_*\frac{\partial}{\partial x^m},f_*\frac{\partial}{\partial y^1},\dots,f_*\frac{\partial}{\partial y^{s-1}},f_*\frac{\partial}{\partial t}\right). \end{align*} We next observe that, using the definition of $f$, \begin{align*} f_*\frac{\partial}{\partial x^i}&=\frac{\partial}{\partial x^i}p+t\sum_{\ell=1}^s u^{\ell}\frac{\partial}{\partial x^i}E_{m+\ell}(p),\\ f_*\frac{\partial}{\partial y^j}&=t\sum_{\ell=1}^s \frac{\partial u^{\ell}}{\partial y^j} \ E_{m+\ell}(p),\\ f_*\frac{\partial}{\partial t}&=v. \end{align*} In particular, $f_*(T_vS^{s-1}\times T_t\mathbb{R})\subseteq N_pM=(T_pM)^{\perp}$, so that \begin{equation*} \phi(p,v,t)=\omega_{M}\left(\frac{\partial}{\partial x^1} f,\dots,\frac{\partial}{\partial x^m} f\right) \cdot \omega_{N}\left(\frac{\partial}{\partial y^1} f,\dots,\frac{\partial}{\partial y^{s-1}} f,\frac{\partial}{\partial t} f\right). \end{equation*} A straightforward calculation shows that \begin{equation*} \left\langle\frac{\partial}{\partial x^i}f,E_j\right\rangle=\left\langle E_i+t\frac{\partial}{\partial x^i}Z,E_j\right\rangle=\delta_{ij}-tS_{ij}(v), \end{equation*} where $Z$ is a normal vector field with $Z(p)=v$. From this it follows that \begin{equation*} \omega_{M}\left(\frac{\partial}{\partial x^1} f,\dots,\frac{\partial}{\partial x^m} f\right)=\det (\mathrm{Id}-tS(v)). \end{equation*} Similarly one obtains \begin{equation*} \omega_{N}\left(\frac{\partial}{\partial y^1} f,\dots,\frac{\partial}{\partial y^{s-1}} f,\frac{\partial}{\partial t} f\right)=t^{s-1}. \end{equation*} This completes the proof of the claimed inequality. \endProofof \bibliographystyle{plain}
{ "timestamp": "2013-10-01T02:04:25", "yymm": "1210", "arxiv_id": "1210.3742", "language": "en", "url": "https://arxiv.org/abs/1210.3742" }
{ "timestamp": "2012-10-16T02:01:27", "yymm": "1210", "arxiv_id": "1210.3687", "language": "es", "url": "https://arxiv.org/abs/1210.3687" }
\section{Software Diversity based Adaptation (SDA)} $\mathbf{Definitions}$: Define the original network and the network after edges are cut as $G_0$ and $G_1$, respectively. Define the network $G=(V,E)$ after the edges $$\{(i,s)\mid s \in S\wedge (i,s)\notin E\}$$ are added as $G^{(i,S)}$, where $S \subseteq V$. Define self node $i$'s $k$-hop ego network $N_i^{k}$ in a network $G$ as the subgraph generated by node $i$'s $k$-hop neighborhood in $G$. In a network $G=(V,E)$, we call the ego network $N_i^{k}$ after the edges $$\{(i,s)\mid s \in S\wedge (i,s)\notin E\}$$ are added self node $i$'s potential $k$-hop ego network $N_{(i,S)}^{k}$ in the network $G^{(i,S)}$, where $S \subseteq V$. Now, we can choose the candidates for the recovery phase (i.e., adding back the number of edges lost) in a self node's ego network $N_i^{k_{max}}$ in $G_1$. For the next step, we still need to choose a software diversity metric that partially captures the security aspect of an ego network. Define the software diversity $SD_{k,l}^{i}$ as a function of self node $i$'s $k$-hop ego network $N_i^{k}$ in $G_1$, i.e., $$SD_{k,l}^{i}:= f(N_i^{k})=\prod_{j=1}^l (1-P^j_s(N_i^k)),$$ where $P^j_s(N_i^{k})$ is the probability of node $i$ being attacked via the $j$-th shortest path from a randomly selected entry node in $N_i^{k}$. Here, $l$ is the search threshold of $SD_{k,l}^{i}$. Note: in $P^j_s$, the index $j$ ranges up to the constant $l$, and the probability is defined under the assumption that the attacker attacks each node only once. Define the software diversity $SD_{k,l}^{(i,S)}$ as a function of self node $i$'s potential $k$-hop ego network $N_{(i,S)}^{k}$ in $G_1^{(i,S)}$, i.e., $$SD_{k,l}^{(i,S)}:= f(N_{(i,S)}^{k})=\prod_{j=1}^l (1-P^j_s(N_{(i,S)}^{k})),$$ where $S$ is a subset of nodes in $N_i^{k}$ and $P^j_s$ is the probability of node $i$ being attacked via the $j$-th shortest path from a randomly selected entry node in $N_{(i,S)}^{k}$. Here, $l$ is the search threshold of $SD_{k,l}^{(i,S)}$. Given a network $G=(V,E)$, suppose for any node $i\in V$ that $v_i$ is given as the vulnerability of node $i$ and $sv_i$ is given as the software version of node $i$; we define the edge vulnerability matrix of $G$ to be $\mathcal{EV}^G$, where \begin{equation} \mathcal{EV}^G_{ij}=\begin{cases} v_i, & sv_i\neq sv_j\\ 1, & sv_i=sv_j \end{cases}. \end{equation} Given a hop distance $k$ and a search threshold $st$, the $i$-th entry of the estimated path vulnerability vector $V^{st}_{k}$ is the sum of products of node vulnerabilities over at most $st$ paths ending at node $i$ of length at most $k$.
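To make the metric concrete, the following is a minimal Python sketch of $SD_{k,l}^{i}$ using \texttt{networkx}. It is our own illustration, not the implementation behind the experiments: it assumes that $P^j_s$ is estimated as the average, over the entry nodes of the ego network, of the probability of compromising every node along the $j$-th shortest simple path, taken here as the product of per-node vulnerabilities; the function and variable names are ours.

\begin{verbatim}
import itertools
import networkx as nx

def path_compromise_prob(path, vuln):
    # Assumption: each node after the entry node must be compromised
    # independently, so the path probability is the product of the
    # per-node vulnerabilities vuln[v] in [0, 1].
    p = 1.0
    for v in path[1:]:
        p *= vuln[v]
    return p

def software_diversity(G, i, k, l, vuln):
    # Sketch of SD_{k,l}^i = prod_{j=1}^l (1 - P_s^j(N_i^k)).
    ego = nx.ego_graph(G, i, radius=k)      # k-hop ego network N_i^k
    entries = [v for v in ego if v != i]
    if not entries:
        return 1.0
    # Up to l shortest simple paths from each possible entry node to i.
    paths = {}
    for s in entries:
        gen = nx.shortest_simple_paths(ego, s, i)
        paths[s] = list(itertools.islice(gen, l))
    sd = 1.0
    for j in range(l):
        # P_s^j: entry node selected uniformly at random; entry nodes
        # with fewer than j+1 paths contribute probability 0.
        probs = [ps[j] for ps in paths.values() if len(ps) > j]
        p_sj = sum(path_compromise_prob(p, vuln) for p in probs) / len(entries)
        sd *= 1.0 - p_sj
    return sd
\end{verbatim}

A call such as \texttt{software\_diversity(G, i, k=1, l=1, vuln)} then reproduces the special case $k=l=1$ used later in the sensitivity analysis.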
\begin{algorithm}[!th] \small{ \caption{Software Diversity based Adaptation (SDA)} \label{algo:ERA} \begin{algorithmic}[1] \State{$\mathbf{DN} \leftarrow$ a vector of length $N$ counting the number of disconnected edges per node, initialized to 0} \State{$\mathbf{A} \leftarrow$ an adjacency matrix for a given network} \State{$\mathcal{EV} \leftarrow$ an edge vulnerability matrix for a given network} \State{$k \leftarrow$ hop distance of ego networks} \State{$l \leftarrow$ search threshold of the software diversity metric} \\ \State{{\bf Step 1:} Remove edges between two connected nodes when they use the same software package} \For{$i:=1$ to $N$} \For{$j:=1$ to $N$} \If{$(a_{ij} > 0) \wedge s_i == s_j$} \State{$a_{ij} = 0; a_{ji} = 0$} \State{$\mathbf{DN}(i) = \mathbf{DN}(i) + 1$} \Comment{counting \# of removed edges per node $i$} \EndIf \EndFor \EndFor \\ \State{{\bf Step 2:} Add edges between two disconnected nodes when they do not use the same software package} \State{step $\leftarrow$ 1} \State{$\mathbf{SD} \leftarrow$ estimated software diversity vector, initialized as $SD^{i}_{k,l}$ for its $i$-th entry} \State{$\mathbf{VUL} \leftarrow$ estimated path vulnerability vector $V^{st}_{k-1}$} \While{step $\leq$ $sum(\mathbf{DN})$} \State{$i \leftarrow$ a randomly chosen node in the network} \State{$\mathbf{filter} \leftarrow$ a subset of node $i$'s ego network $N_i^{k_{max}}$, generated with the following upper bound $\phi$} \State{$\phi \leftarrow$ an upper bound of $length(\mathbf{filter})$} \State{$\mathbf{candidate} \leftarrow$ a set of edges $e_{jk}$ where $s_j \neq s_k \wedge a(j, k)==0 \wedge (na_j \cdot na_k > 0)$ for nodes $j,k$ in $\mathbf{filter}$, generated with the following upper bound $\psi$} \State{$\psi \leftarrow$ an upper bound of $length(\mathbf{candidate})$} \State{$\mathbf{effect} \leftarrow$ store the edges' information and the absolute difference of $\mathbf{SD}$ caused by adding the edge} \If{$length(\mathbf{candidate})\neq 0$} \For{$e:=1$ to $length(\mathbf{candidate})$} \State{$\mathbf{effect}(e) \leftarrow$ store the edge $e$'s information and the sum of the estimated absolute difference $\Delta\mathbf{SD}$ of $\mathbf{SD}$ caused by adding $e$, i.e., $\Delta\mathbf{SD}_i=\mathbf{SD}_i*\mathcal{EV}_{ij}*\mathbf{VUL}_{j}$ for $e_{ij}$} \EndFor \State{$r,s \leftarrow$ nodes $r,s$ such that $\mathbf{effect}(e_{rs})$ gives the minimum absolute difference of $\mathbf{SD}$} \State{$a_{rs}=1;a_{sr}=1$} \State{$\mathbf{SD}_r=\mathbf{SD}_r-\Delta\mathbf{SD}_r;\mathbf{SD}_s=\mathbf{SD}_s-\Delta\mathbf{SD}_s$} \State{step = step + 1} \EndIf \EndWhile \end{algorithmic} } \end{algorithm} \section{Algorithms} The following algorithms are referenced from the main paper: \begin{itemize} \item {\bf Algorithm~\ref{algo:SDA-step-1}} ($\mathbf{SDBA}$): Remove edges between two nodes with the same software package, which is Step 1 of the SDA algorithm. \item {\bf Algorithm~\ref{algo:SDA_pv}} ($\mathbf{GenPV}$): Generate path vulnerabilities. This algorithm returns a vector $\mathbf{PV}$ that contains, for each node $i$, the maximum attack path vulnerability, where each attack path is a disjoint shortest path from node $i$ to a node at most $(k-1)$ hops away. \item {\bf Algorithm~\ref{algo:SDA_set_threshold}} ($\mathbf{SetEAB}$): Set edge adaptations budget.
This algorithm returns $T^{global}$, the total number of edges to be adapted, based on a given fraction of edges to be adapted, $\rho$, and a vector of the number of edges to be adapted per node, $\mathbf{T}^{local}$. \item {\bf Algorithm~\ref{algo:SDA_candidate_add}} ($\mathbf{GEAC}$): Generate edge addition candidates. This algorithm returns a vector of edge candidates to be added based on the lowest software diversity reduction. \item {\bf Algorithm~\ref{algo:SDA_candidate_remove}} ($\mathbf{GERC}$): Generate edge removal candidates. This algorithm returns a vector of edge candidates to be removed based on the highest software diversity increase. \item {\bf Algorithm~\ref{algo:SDA_rank_and_adapt}} ($\mathbf{AdaptNT}$): Adapt network topology. Based on the above two algorithms, $\mathbf{GEAC}$ and $\mathbf{GERC}$, $T^{global}$ edges are adapted (removed or added), where each node adapts its edges according to $\mathbf{T}^{local}$ from $\mathbf{SetEAB}$ above. \item {\bf Algorithm~\ref{algo:random}} ({\bf Random-A}): This algorithm is one of the baseline schemes compared with the proposed SDA-based schemes. It uses the same procedure in Step 1 as the SDA (i.e., $\mathbf{SDBA}$ in Algorithm~\ref{algo:SDA-step-1}). In Step 2, edges are added at random, up to the number of edges lost in Step 1, subject only to the condition that the two nodes use different software packages. \item {\bf Algorithm~\ref{algo:epidemic-attacks}}: This algorithm describes how the epidemic attacks are modeled in this work. \end{itemize} \begin{algorithm}[!th] \small{ \caption{Software Diversity-based Basic Adaptation ($\mathbf{SDBA}$)} \label{algo:SDA-step-1} \begin{algorithmic}[1] \State{$N \leftarrow$ The total number of nodes in a network} \State{$\mathbf{DN} \leftarrow$ A vector containing the number of removed edges per node} \State{$\mathbf{A} \leftarrow$ An adjacency matrix for a given network with element $a_{ij}$ for $i, j = 1, \ldots, N$} \State{$\mathbf{S} \leftarrow$ A vector of software packages installed over nodes with element $s_i$ for $i = 1, \ldots, N$} \State{$\mathbf{A}' \leftarrow$ An adjacency matrix after edges are adapted.} \\ \State{$\mathbf{SDBA} (N, \mathbf{DN}, \mathbf{A}, \mathbf{S})$} \Comment{Remove edges between two nodes with the same software package.} \For{$i:=1$ to $N$} \For{$j:=1$ to $N$} \If{$(a_{ij} > 0) \wedge s_i == s_j$} \State{$a_{ij} = 0$} \State{$a_{ji} = 0$} \State{$\mathbf{DN}(i) = \mathbf{DN}(i) + 1$} \State{$\mathbf{DN}(j) = \mathbf{DN}(j) + 1$} \EndIf \EndFor \EndFor \State{$\mathbf{return}\ \mathbf{A}'$} \end{algorithmic} } \end{algorithm} \begin{algorithm}[!th] \small{ \caption{Generate Path Vulnerabilities ($\mathbf{GenPV}$)} \label{algo:SDA_pv} \begin{algorithmic}[1] \State{$\mathbf{A} \leftarrow$ An adjacency matrix for a given network with element $a_{ij}$ for $i, j = 1, \ldots, N$} \State{$\mathbf{S} \leftarrow$ A vector of software packages installed over nodes with element $s_i$ for $i = 1, \ldots, N$} \State{$\mathbf{APV} \leftarrow$ A vector of vulnerabilities of attack paths $apv_{ij}$ from node $i$ to node $j$ where $j = 1, \ldots, 20$} \Comment{For simplicity, consider at most 20 shortest disjoint attack paths reachable to node $i$} \State{$k \leftarrow$ A hop distance given in a node's local network} \State{$\mathbf{PV} \leftarrow$ A vector of maximum path vulnerabilities for all nodes $i$, where $pv_i$ is the maximum attack path vulnerability over paths within $(k-1)$-hop distance from node $i$}
\Comment{One hop is counted from a node to any neighboring node, and thus $k$ is decremented by 1.} \State{$\mathbf{PV} =\mathbf{GenPV} (\mathbf{A},\mathbf{APV},\mathbf{S},k)$} \For{$i:=1$ to $N$} \State{$pv_i=\max\limits_{j} apv_{ij}^{k-1}$} \EndFor \State{$\mathbf{return}\ \mathbf{PV}$} \end{algorithmic} } \end{algorithm} \begin{algorithm}[!th] \small{ \caption{Set Edge Adaptations Budget ($\mathbf{SetEAB}$)} \label{algo:SDA_set_threshold} \begin{algorithmic}[1] \State{$\mathbf{A} \leftarrow$ An adjacency matrix for a given network with element $a_{ij}$ for $i, j = 1, \ldots, N$} \State{$\mathbf{DN} \leftarrow$ A vector containing the number of removed edges per node} \State{$\rho \leftarrow$ A threshold referring to the fraction of edges to be adapted} \State{$\kappa_i \leftarrow$ Node $i$'s degree} \State{$T^{global} \leftarrow$ The total number of edges to be adapted} \State{$\mathbf{T}^{local} \leftarrow$ A vector of the number of edges that are allowed to be adapted per node, with element $T^{local}_i$ for node $i$, $i = 1, \ldots, N$} \\ \State{$\mathbf{T}^{local}, T^{global} = \mathbf{SetEAB} (\mathbf{DN},\mathbf{A},\rho)$} \State{$T^{global}= |\rho \frac{sum(\mathbf{DN})}{2}|$} \State{$\kappa=\frac{sum(\mathbf{A})+T^{global}}{N}$} \Comment{The expected average degree after $T^{global}$ edges are adapted} \If{$\rho>0$} \State{$T^{local}_i \leftarrow \max(0, \kappa - \kappa_i)$} \Comment{$T^{local}_i$ captures the number of edges that can be added} \State{$N_{HD}=\sum_{i=1}^N \max(0, \kappa_i-\kappa)-T^{global}$} \Else \State{$T^{local}_i \leftarrow \max(0, \kappa_i - \kappa)$} \Comment{$T^{local}_i$ captures the number of edges that can be removed} \State{$N_{HD}=\sum_{i=1}^N \max(0, \kappa-\kappa_i)-T^{global}$} \EndIf \Comment{Positive $N_{HD}$ means that more nodes have higher (or lower) degrees than the expected average degree, while negative $N_{HD}$ implies that fewer nodes have higher (lower) degrees than the expected average degree after edge adaptations.
Hence, when $N_{HD}$ is positive, edge adaptations should be restricted further.} \While{$N_{HD}>0$} \For{$i:=1$ to $N$} \If{$T^{local}_i>0 \wedge N_{HD}>0$} \State{$T^{local}_i=T^{local}_i-1$} \State{$N_{HD}=N_{HD}-1$} \EndIf \EndFor \EndWhile \State{$\mathbf{return}\ T^{global}, \mathbf{T}^{local}$} \end{algorithmic} } \end{algorithm} \begin{algorithm}[!th] \small{ \caption{Generate Edge Addition Candidates ($\mathbf{GEAC}$)} \label{algo:SDA_candidate_add} \begin{algorithmic}[1] \State{$N \leftarrow$ The total number of nodes in a network} \State{$\mathbf{A} \leftarrow$ An adjacency matrix for a given network with element $a_{ij}$ for $i, j = 1, \ldots, N$} \State{$\mathbf{A^*}\leftarrow \mathbf{(A+I)}^{2k}$ where $a^*_{ij}$ is 1 when nodes $i$ and $j$ belong to each other's local network or its neighbor's local networks; 0 otherwise.} \State{$\mathbf{SD} \leftarrow$ A vector of software diversity values, $sd_i$ for all nodes $i = 1, \ldots, N$} \State{$\mathbf{SV} \leftarrow$ A vector of the vulnerabilities associated with software packages} \State{$\mathbf{S} \leftarrow$ A vector of software packages installed over nodes with element $s_i$ for $i = 1, \ldots, N$} \State{$\mathbf{PV} \leftarrow \mathbf{GenPV} (\mathbf{A},\mathbf{APV},\mathbf{S},k)$ } \State{$\mathbf{DN} \leftarrow$ A vector containing the number of removed edges per node} \State{$\mathbf{T}^{local}, T^{global} \leftarrow \mathbf{SetEAB} (\mathbf{DN}, \mathbf{A}, \rho)$} \\ \State{$\mathbf{add\_candidate}$ = $\mathbf{GEAC}$($\mathbf{A},\mathbf{SD},\mathbf{SV},\mathbf{S},\mathbf{PV}, \mathbf{T}^{local}$)} \State{$r \leftarrow$ counter initialized at 0.} \For{$i:=1$ to $N$} \For{$j:=i+1$ to $N$} \If{$a^*_{ij}>0 \wedge a_{ij}==0 \wedge s_i\neq s_j$} \State{$\mathbf{sd\_diff\_sum}(r)= (sd_i - sd'_i) +(sd_j - sd'_j)$} \\ \Comment{Sum of the improved software diversity values of nodes $i$ and $j$, where the expected software diversity values of nodes $i$ and $j$ after edge addition adaptations are $sd'_i = sd_i (1-sv_{s_i} pv_j), sd'_j = sd_j (1-sv_{s_j} pv_i)$ based on Eq.~(3) of the main paper (i.e., the software diversity metric)} \State{$r = r+1$} \EndIf \EndFor \EndFor \State{Rank $\mathbf{sd\_diff\_sum}$ in ascending order and capture the top $\mathbf{T}^{local}$ edges in $\mathbf{add\_candidate}$} \State{$\mathbf{return}\ \mathbf{add\_candidate}$} \end{algorithmic} } \end{algorithm} \begin{algorithm}[!th] \small{ \caption{Generate Edge Removal Candidates ($\mathbf{GERC}$)} \label{algo:SDA_candidate_remove} \begin{algorithmic}[1] \State{$N \leftarrow$ The total number of nodes in a network} \State{$\mathbf{A} \leftarrow$ An adjacency matrix for a given network with element $a_{ij}$ for $i, j = 1, \ldots, N$} \State{$\mathbf{A^*}\leftarrow \mathbf{(A+I)}^{2k}$ where $a^*_{ij}$ is 1 when nodes $i$ and $j$ belong to each other's local network or its neighbor's local networks; 0 otherwise.} \State{$\mathbf{SD} \leftarrow$ A vector of software diversity values, $sd_i$ for all nodes $i = 1, \ldots, N$} \State{$\mathbf{SV} \leftarrow$ A vector of the vulnerabilities associated with software packages} \State{$\mathbf{S} \leftarrow$ A vector of software packages installed over nodes with element $s_i$ for $i = 1, \ldots, N$} \State{$\mathbf{PV} \leftarrow \mathbf{GenPV} (\mathbf{A},\mathbf{APV},\mathbf{S},k)$ } \State{$\mathbf{DN} \leftarrow$ A vector containing the number of removed edges per node} \State{$\mathbf{T}^{local}, T^{global} \leftarrow \mathbf{SetEAB} (\mathbf{DN}, \mathbf{A}, \rho)$} \\
\State{$\mathbf{remove\_candidate}$ = $\mathbf{GERC}(\mathbf{A},\mathbf{SD},\mathbf{SV},\mathbf{S},\mathbf{PV},\mathbf{T}^{local}$)} \State{$r \leftarrow$ counter initialized at 0.} \For{$i:=1$ to $N$} \For{$j:=i+1$ to $N$} \If{$a_{ij}>0$} \State{$\mathbf{sd\_diff\_sum}(r)= (sd'_i - sd_i) +(sd'_j - sd_j)$} \\ \Comment{Sum of the improved software diversity values of nodes $i$ and $j$, where the expected software diversity values of nodes $i$ and $j$ after edge removal adaptations are $sd'_i = sd_i/(1-sv_{s_i} pv_j), sd'_j = sd_j/(1-sv_{s_j} pv_i)$ based on Eq.~(3) of the main paper (i.e., the software diversity metric)} \State{$r = r+1$} \EndIf \EndFor \EndFor \State{Rank $\mathbf{sd\_diff\_sum}$ in descending order and capture the top $\mathbf{T}^{local}$ edges in $\mathbf{remove\_candidate}$} \State{$\mathbf{return}\ \mathbf{remove\_candidate}$} \end{algorithmic} } \end{algorithm} \begin{algorithm}[!th] \small{ \caption{Adapt Network Topology ($\mathbf{AdaptNT}$): adapt edges based on Algorithms~\ref{algo:SDA_candidate_add} and~\ref{algo:SDA_candidate_remove}} \label{algo:SDA_rank_and_adapt} \begin{algorithmic}[1] \State{$N\leftarrow\#$ of nodes in a given network} \State{$\mathbf{DN} \leftarrow$ a vector with \# of disconnected edges per node} \State{$\mathbf{A} \leftarrow$ an adjacency matrix for a given network} \State{$\mathbf{SV} \leftarrow$ a constant vulnerability vector for software packages} \State{$\mathbf{S} \leftarrow$ a software package vector for a given network} \State{$k \leftarrow$ hop distance of local networks} \State{$l \leftarrow$ search threshold of \# of attack paths} \State{$\rho \leftarrow$ threshold of the fraction of edges adapted} \State{$\mathbf{candidate} \leftarrow$ A vector of edge candidates, either $\mathbf{add\_candidate}$ or $\mathbf{remove\_candidate}$ from Algorithms~\ref{algo:SDA_candidate_add} and~\ref{algo:SDA_candidate_remove}} \State{$\mathbf{T}^{local}, T^{global} = \mathbf{SetEAB} (\mathbf{DN},\mathbf{A},\rho)$ based on Algorithm~\ref{algo:SDA_set_threshold}} \State{$\mathbf{A}' \leftarrow$ An adjacency matrix after edges are adapted.} \\ \State{$\mathbf{A}' = \mathbf{AdaptNT} (\mathbf{A},\mathbf{candidate},\mathbf{T}^{local},T^{global},\rho)$} \For{$(i,j, \mathbf{sd\_diff\_sum})$ in $\mathbf{candidate}$} \If{$T^{local}_i T^{local}_j > 0$} \If{$\rho>0$} \State{$a_{ij}=a_{ji}=1$} \Else \State{$a_{ij}=a_{ji}=0$} \EndIf \State{$T^{local}_i=T^{local}_i-1$} \State{$T^{local}_j=T^{local}_j-1$} \State{$T^{global}=T^{global}-1$} \EndIf \EndFor \For{$(i,j, \mathbf{sd\_diff\_sum})$ in $\mathbf{candidate}$} \If{$T^{global}>0$} \If{$\rho>0 \wedge a_{ij}==0$} \State{$a_{ij}=a_{ji}=1$} \State{$T^{global}=T^{global}-1$} \Else \If{$\rho<0 \wedge a_{ij}>0$} \State{$a_{ij}=a_{ji}=0$} \State{$T^{global}=T^{global}-1$} \EndIf \EndIf \EndIf \EndFor \State{$\mathbf{return}\ \mathbf{A'}$} \end{algorithmic}} \end{algorithm} \begin{algorithm}[!th] \small{ \caption{Random Adaptation (Random-A)} \label{algo:random} \begin{algorithmic}[1] \State{$N \leftarrow$ The total number of nodes in a network} \State{$\mathbf{DN} \leftarrow$ a vector with \# of disconnected edges per node} \State{$\mathbf{A} \leftarrow$ an adjacency matrix for a given network} \State{$\mathbf{S} \leftarrow$ A vector of software packages installed over nodes with element $s_i$ for $i = 1, \ldots, N$} \State{$\mathbf{SV} \leftarrow$ A vector of the vulnerabilities associated with software packages} \State{$k \leftarrow$ A hop distance given in a node's local network} \State{$l \leftarrow$ A maximum number of attack paths considered for estimating a node's software diversity}
\State{$\rho \leftarrow$ A threshold referring to the fraction of edges to be adapted} \State{$\mathbf{A'} \leftarrow$ An adjacency matrix after edges are adapted.}\\ \State{$\mathbf{A'}=\mathbf{Random}$-$\mathbf{A}(N, \mathbf{DN}, \mathbf{A}, \mathbf{S}, \mathbf{SV}, k, l, \rho)$} \State{{\bf Step 1:} $\mathbf{A'} = \mathbf{SDBA} (N, \mathbf{DN}, \mathbf{A}, \mathbf{S})$} \Comment{Remove edges between two nodes with the same software package (see Algorithm~\ref{algo:SDA-step-1}). $\mathbf{DN}$ is a vector of edges that are removed in Step 1.} \\ \State{{\bf Step 2:} Add random edges between two disconnected nodes if their software packages are different} \For{$i:=1$ to $N$} \State{$\mathbf{candidate} \leftarrow$ a vector of nodes that can be connected with node $i$ where $s_i \neq s_j \wedge a_{ij}==0 \wedge (na_i \cdot na_j > 0)$ for node $j$} \State{$\mathbf{visited} \leftarrow$ a vector with $length(\mathbf{candidate})$} \For{$j:=1$ to $\mathbf{DN}(i)$} \State{$r \leftarrow$ A random integer selected from $[1, length(\mathbf{candidate})]$.} \If{$\mathbf{visited}(r)==0 \wedge \mathbf{DN}(r)>0$} \State{$a_{ir} = 1$} \State{$a_{ri} = 1$} \State{$\mathbf{DN}(i)=\mathbf{DN}(i)-1$} \State{$\mathbf{DN}(r)=\mathbf{DN}(r)-1$} \State{$\mathbf{visited}(r)=1$} \Else \State{$j=j-1$} \EndIf \If{$\mathrm{sum}(\mathbf{visited}) == \mathrm{length}(\mathbf{visited})$} \\ \Comment{all candidate nodes are selected} \State{$break$} \EndIf \EndFor \EndFor \State{$\mathbf{return}\ \mathbf{A'}$} \end{algorithmic} } \end{algorithm} \begin{algorithm}[th!] \small{ \caption{Epidemic Attacks} \label{algo:epidemic-attacks} \begin{algorithmic}[1] \State{{\bf Input:}} \State{$\mathbf{A}\leftarrow$ an adjacency matrix} \State{$\mathbf{\sigma}_i\leftarrow$ attacker $i$'s vector of exploitable software packages} \State{$N_s\leftarrow$ the number of software packages available} \State{$\gamma\leftarrow$ an intrusion detection probability} \State{$\mathbf{node} \leftarrow$ nodes' attributes, defined in Eq.~(1) of the main paper.}
\State{$\mathbf{S} \leftarrow$ a software package vector for a given network} \State{$\mathbf{SV} \leftarrow$ A vector of the vulnerabilities associated with software packages} \Procedure{performEpidemicAttacks}{$\mathbf{A}$, $\mathbf{node}$, $\mathbf{SV}$, $\sigma_j$, $N_s$, $\gamma$} \State{$spreadDone$ $\leftarrow 0$} \State{$\mathbf{spread}$: a list of length $N$, initialized at 0} \Comment{To check if node $i$ attempted to compromise its direct neighbors} \While{$spreadDone == 0$} \For{$i:=1$ to $N$} \Comment{check if $i$ is an attacker} \If{$na_i>0$}\Comment{If $i$ is an active attacker} \State{$r_1 \leftarrow$ a random real number in $[0, 1]$ based on a uniform distribution} \If{$nc_i>0$} \If{$r_1 > \gamma\wedge\mathbf{spread}(i)<2$} \State{$\mathbf{spread}(i) = \mathbf{spread}(i)+1$} \For{$j:=1$ to $N$} \If{$a_{ij}> 0 \; \wedge na_j > 0 \wedge nc_j == 0$} \Comment{if $j$ is susceptible} \If{$\mathbf{\sigma}_i$ includes $s_j$} \Comment{$i$ knows $s_j$'s vulnerability} \State{$nc_j=1$} \Comment{$j$ is compromised by $i$} \Else \State{$r_2 \leftarrow$ a random real number in $[0, 1]$} \State{$d \leftarrow sv_{s_j}$} \If{$r_2 < d$} \State{$nc_j=1$} \Comment{$j$ is compromised by $i$} \State{$\sigma_i(s_j)=1$} \Comment{$i$ learned $s_j$'s vulnerability} \EndIf \EndIf \EndIf \EndFor \Else \State{$na_i=0$} \Comment{$i$ is detected and deactivated for infecting behavior} \State{$a_{ij}=0, a_{ji}=0$} \Comment{disconnecting all edges connected to $i$} \EndIf \Else \If{$r_1 > \gamma$} \State{$na_i=0$} \State{$a_{ij}=0, a_{ji}=0$} \EndIf \EndIf \EndIf \For{$k:=1$ to $N$} \If{$na_k > 0 \wedge nc_k > 0 \wedge \mathbf{spread}(k)< 2$} \Comment{each node has two chances to compromise each of its direct neighbors} \State{$spreadDone=0$} \State{$break$} \Else \State{$spreadDone=1$} \EndIf \EndFor \EndFor \EndWhile \EndProcedure \end{algorithmic} } \end{algorithm} \newpage \section{Real Network Topologies Used} \label{sec:network-topologies} This work develops a topology-aware notion of software diversity, and hence it makes sense to study the effect of different network topologies. To meet this objective, we considered three different network datasets, each with a different level of network density. A dense network (DN) is generated from a Facebook ego network, which is a connected component of one of the 10 ego networks provided by SNAP~\cite{snapnets}. This network has a density of approximately $0.05$ with a mean degree near $52$. A visualization of this network is shown in Fig.~\ref{fig_network_topology} (a) and its degree distribution is shown in Fig.~\ref{fig_network_topology} (d). The distribution is right-skewed despite the high mean. A medium dense network (MN) is generated from a subset of the Enron email dataset on SNAP~\cite{snapnets}. This medium dense network is the largest connected component of the induced subgraph of nodes ranked between 501 and 1500 by degree. This was done to generate a network of a size similar to the dense network, starting from an original network with lower density. The density is $0.016$ with a mean degree near $16$. A visualization of this network is shown in Fig.~\ref{fig_network_topology} (b) and its degree distribution is shown in Fig.~\ref{fig_network_topology} (e). The distribution is close in shape to a binomial distribution. A sparse network (SN) is generated from an observation of the Internet at the autonomous systems level~\cite{snapnets}. Edges are removed from the recorded data to ensure a simple network.
The density of this network is approximately $0.003$ and the mean node degree is approximately $4.4$. A visualization of this network is shown in Fig.~\ref{fig_network_topology} (c) and its degree distribution is shown in Fig. \ref{fig_network_topology} (f). The distribution skews right and is close to a power-law shape. \begin{figure*}[!htb] \centering \subfigure[Dense network (DN) with 1033 nodes and 26747 edges]{ \includegraphics[width=0.3\textwidth, height=0.2\textwidth]{figures/dense_topo.png}} \subfigure[Medium dense network (MN) with 985 nodes and 7994 edges]{ \includegraphics[width=0.3\textwidth, height=0.2\textwidth]{figs/fig1/mn.png}} \subfigure[Sparse network (SN) with 1476 nodes and 3254 edges]{ \includegraphics[width=0.3\textwidth, height=0.2\textwidth]{figures/sparse_topo.png}} \subfigure[DN's degree distribution]{ \includegraphics[width=0.3\textwidth, height=0.2\textwidth]{figs/fig1/dn_degree.png}} \subfigure[MN's degree distribution]{ \includegraphics[width=0.3\textwidth, height=0.2\textwidth]{figs/fig1/mn_degree.png}} \subfigure[SN's degree distribution]{ \includegraphics[width=0.3\textwidth, height=0.2\textwidth]{figs/fig1/sn_degree.png}} \caption{The network topologies used for our experimental study and their degree distributions.} \label{fig_network_topology} \end{figure*} \section{Experiments Under a Random Network} The work also considers a random network topology generated by the Erd{\"o}s-R{\'e}nyi (ER) random graph model $G(n,p)$, where $n$ is the number of nodes and $p$ is the connection probability between any pair of nodes~\cite{Newman10}. This model is ideal for studying the effect of network density in a general manner by varying the connection probability $p$, since the network has density approximately $p$ for large $n$. It is used here for a sensitivity analysis and a comparative analysis of the methods considered in the paper. \begin{figure*}[!ht] \centering \subfigure[Erd{\"o}s-R{\'e}nyi random network (ER) with 1000 nodes and 12473 edges using $p=0.025$]{ \includegraphics[width=0.3\textwidth, height=0.2\textwidth]{figures/er_topo.png}} \subfigure[ER's degree distribution]{ \includegraphics[width=0.3\textwidth, height=0.2\textwidth]{figs/fig1/er_degree.png}} \caption{The network topology of a random network used for our experimental study and its degree distribution.} \label{fig_er_topology} \end{figure*} \subsection{Visualization of an ER Network} A visualization of the example ER network for $n=1000$ and $p=0.025$ is shown in Fig.~\ref{fig_er_topology} (a). This realization of the ER model has density $0.02497$, which is very close to $p$, and has a mean node degree of $24.946$, which is close to the expected value $p(n-1)$. The degree distribution of this network realization, shown in Fig.~\ref{fig_er_topology} (b), is very close to a binomial distribution. \subsection{Identification of an Optimal Fraction of Edges to be Adapted ($\rho$)} We conducted a sensitivity analysis of the effect of the fraction of edges to be adapted ($\rho$) for the random network. Fig.~\ref{fig_er_sensitivity} shows the size of the giant component ($S_g$) and the fraction of compromised nodes ($P_c$) versus varying $\rho$. We used $P_a=0.1$, $N_s=5$, and $p=0.025$ for the corresponding simulations. We observe that $S_g$ attains its maximum when $\rho=-0.6$, which will be used as the optimal $\rho$ for the comparative analysis conducted in Section~\ref{Comparative Analysis}.
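As a quick sanity check on the ER setup above, the following sketch (our own illustration, not the paper's experiment code) generates a $G(n,p)$ network with \texttt{networkx} and verifies that the empirical density is close to $p$ and the mean degree close to $p(n-1)$:

\begin{verbatim}
import networkx as nx

n, p = 1000, 0.025
G = nx.gnp_random_graph(n, p, seed=42)       # Erdos-Renyi G(n, p)

density = nx.density(G)                      # should be close to p
mean_degree = 2 * G.number_of_edges() / n    # should be close to p*(n-1)

print(f"density = {density:.5f} (expected ~ {p})")
print(f"mean degree = {mean_degree:.3f} (expected ~ {p * (n - 1):.3f})")
\end{verbatim}

With other seeds one obtains realizations comparable to the one reported above (density $0.02497$, mean node degree $24.946$).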
\begin{figure}[!ht] \centering \subfigure[Random Network]{ \includegraphics[width=0.3\textwidth, height=0.22\textwidth]{figs/fig8/er.png}} \caption{Effect of the threshold of the fraction of edges adapted ($\rho$) on network connectivity ($S_g$) and security vulnerability ($P_c$).} \label{fig_er_sensitivity} \end{figure} \subsection{Comparative Performance Analysis} \label{Comparative Analysis} This section details our comparative analysis of the six schemes introduced in Section 5.2 under a random network. \subsubsection{Effect of Network Density Under a Random Network} Fig.~\ref{fig_p_er} shows how the network density, controlled by the ER connectivity parameter $p$, affects the performance of the six schemes with respect to the four performance metrics listed in Section 5.1 of the main paper. Overall, as the node connection probability $p$ increases, the network is exposed to higher vulnerability with respect to epidemic attacks because the attackers have more neighbors and, hence, more potential nodes to compromise. Since the number of software packages available, $N_s$, is limited to 5, as density increases an attacker has a better chance of coming into contact with a neighboring node that has a software package shared or previously learned by the attacker. Higher network connectivity naturally leads to higher vulnerability to epidemic attacks: as more nodes are connected, the attacks exhibit a greater impact. After attacks on the original or adapted network, the resulting outcome is a less connected network with a smaller giant component. This post-attack network also has a lower software diversity value, as fewer nodes are considered in the attack paths used to estimate the software diversity, as shown in Eq. (3) in Section 4.1. Therefore, when $p$ is low, no significant performance differences among the schemes are observed. On the other hand, as $p$ increases, schemes with lower adapted network density (i.e., SDA with $\rho = 0$ and SDA with optimal $\rho = -0.6$) outperform the other counterparts (i.e., No-A, Random-A, Random-Graph-C and SDA with $\rho = 1$). Overall, the best performing method with respect to these three metrics is SDA with $\rho = -0.6$, which also shows significant resilience across all four performance metrics. The performance order in the fraction of compromised nodes ($P_c$), the size of the giant component ($S_g$) and software diversity ($SD$) is: SDA with $\rho = -0.6 \geq$ SDA with $\rho = 0 \geq$ SDA with $\rho = 1 \approx$ Random-A $\geq$ Random-Graph-C $\approx$ No-A. Although Random-Graph-C exhibits slightly better performance than No-A, both schemes perform nearly identically. This implies that the contribution of shuffling is minimal under the random network model. With respect to the defense cost ($D_c$), the overall performance order follows: SDA with $\rho = 0$ $\approx$ No-A $\geq$ SDA with $\rho = 1$ $\approx$ Random-A $\geq$ SDA with $\rho = -0.6 \geq$ Random-Graph-C. Random-Graph-C incurs the highest cost since the shuffling cost and the cost caused by the IDS are combined when calculating the defense cost using Eq.~(11) in Section 5.1 of the main paper. Although adaptation schemes incur higher cost than No-A as nodes become more connected with higher $p$, SDA with $\rho = 0$ generates significantly lower cost than the other counterparts. This is because well-adapted network topologies are less vulnerable to epidemic attacks.
In these networks, significantly fewer nodes become compromised and, hence, detected by the IDS, which disconnects all the edges from the detected, compromised nodes. On the other hand, non-adapted or poorly adapted network topologies result in more actions by the IDS, which will disconnect all the edges of nodes that are detected as compromised to protect the network itself. \subsubsection{Effect of the Fraction of Initial Seeding Attackers ($P_a$) under a Random Network} Fig.~\ref{fig_p_a_er} demonstrates how different levels of attack density impact the performance of all compared schemes with respect to the performance metrics in Section 5.1 under the random network topology. The overall trends of the performance as more seeding attackers are added to the network are as follows. An increase in the fraction of seeding attackers decreases software diversity and the size of the giant component, while at the same time increasing the fraction of compromised nodes and defense costs. This latter effect is because more nodes become compromised and accordingly more site percolation-based adaptations are required by the IDS (i.e., disconnecting all edges to a detected, compromised node). Also, across the range of the attack density and with respect to all metrics except defense cost ($D_c$), the overall performance order clearly follows: SDA with $\rho = -0.6 \geq$ SDA with $\rho = 0 \geq$ SDA with $\rho = 1 \approx$ Random-A $\geq$ Random-Graph-C $\approx$ No-A. As for defense cost ($D_c$), the overall performance order follows: SDA with $\rho = 0 \approx$ No-A $\geq$ SDA with $\rho = 1 \approx$ Random-A $\geq$ SDA with $\rho = -0.6 \geq$ Random-Graph-C. \begin{figure*}[!ht] \centering \subfigure{ \includegraphics[width=0.65\textwidth, height=0.02\textwidth]{figs/fig2/legend.png}} \setcounter{subfigure}{0} \subfigure[Fraction of compromised nodes ($P_c$)]{ \includegraphics[width=0.26\textwidth, height=0.19\textwidth]{figs/er/p/P_c.png}} \subfigure[Size of the giant component ($S_g$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/p/S_g.png}} \subfigure[Software diversity ($SD$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/p/SD.png}}\hspace{-0.85em} \subfigure[Defense cost ($D_c$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/p/D_c.png}}\hspace{-0.85em} \caption{Effect of the connection probability between two nodes ($p$) under random networks.} \label{fig_p_er} \end{figure*} \begin{figure*}[!ht] \centering \subfigure{ \includegraphics[width=0.65\textwidth, height=0.02\textwidth]{figs/fig2/legend.png}}\vspace{-1em} \setcounter{subfigure}{0} \subfigure[Fraction of compromised nodes ($P_c$)]{ \includegraphics[width=0.25\textwidth, height=0.19\textwidth]{figs/er/p_a/P_c.png}} \subfigure[Size of the giant component ($S_g$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/p_a/S_g.png}}\hspace{-0.85em} \subfigure[Software diversity ($SD$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/p_a/SD.png}} \subfigure[Defense cost ($D_c$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/p_a/D_c.png}}\hspace{-0.85em} \caption{Effect of varying the fraction of seeding attacks ($P_a$) under a random network.} \label{fig_p_a_er} \end{figure*} \begin{figure*}[!ht] \centering \subfigure{ \includegraphics[width=0.65\textwidth, height=0.02\textwidth]{figs/fig2/legend.png}}\vspace{-1em} \setcounter{subfigure}{0} \subfigure[Fraction of compromised nodes ($P_c$)]{
\includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/n_s/P_c.png}}\hspace{-0.85em} \subfigure[Size of the giant component ($S_g$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/n_s/S_g.png}} \subfigure[Software diversity ($SD$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/n_s/SD.png}} \subfigure[Defense cost ($D_c$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/er/n_s/D_c.png}}\hspace{-0.85em} \caption{Effect of the number of software packages $(N_s)$ under a random network.} \label{fig_n_s_er} \end{figure*} \subsubsection{Effect of the Number of Software Packages ($N_s$) Under a Random Network} Fig.~\ref{fig_n_s_er} shows how a different number of available software packages ($N_s$) impacts the performance in terms of the metrics in Section 5.1 under the random network model. Similar to the previous results displayed in Figs.~\ref{fig_p_er} and \ref{fig_p_a_er}, the degree of software diversity is well aligned with the size of the giant component. But, noticeably, SDA with optimal $\rho = -0.6$ performs the best, with high resiliency even under a very small $N_s$. In fact, this scheme shows steady performance in terms of all the metrics except software diversity ($SD$) across the range of $N_s$ considered in this work. This is due to the low network density and our intelligent edge adaptation. On the other hand, the other five schemes exhibit improved performance when $N_s$ is increased because the availability of more software packages can significantly increase the degree of software diversity of each node. Unlike the previous results shown in Figs.~\ref{fig_p_er} (d) and \ref{fig_p_a_er} (d), Random-Graph-C attains its maximum defense cost when $N_s=4$. This is due to two conflicting factors: on the one hand, a node has more options to choose from when more software packages are available, i.e., a much lower chance of choosing the same software package it currently has, thereby increasing the likelihood of shuffling and the shuffling cost. On the other hand, the network becomes more secure when more software packages are available, which results in lower IDS cost. \section{Identification of Optimal $k$ under a Random Network} \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth, height=0.22\textwidth]{figs/er/k=1.png} \caption{Effect of varying $k$ (a maximum hop distance) on network connectivity ($S_g$) and security vulnerability ($P_c$) under a random network.} \label{fig_er_sensitivity_k} \end{figure} Fig.~\ref{fig_er_sensitivity_k} shows the sensitivity analysis when varying $k$, the hop distance used in the software diversity metric of the SDA, with $l=1$ and $\rho=-0.6$. We observe that both $S_g$ and $P_c$ are not sensitive to $k$ varying from $1$ to $5$. Thus, we use $k=1$ in our experiments to minimize computational complexity. \section{Identification of Optimal $l$ under a Random Network} \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth, height=0.22\textwidth]{figs/er/l=1.png} \caption{Effect of varying $l$ (a maximum number of attack paths) on network connectivity ($S_g$) and security vulnerability ($P_c$) under a random network.} \label{fig_er_sensitivity_l} \end{figure} Fig.~\ref{fig_er_sensitivity_l} shows the sensitivity analysis under varying $l$, the number of attack paths used in estimating the software diversity metric of the SDA, when $k=1$ and $\rho=-0.6$. We observe that both $S_g$ and $P_c$ are not sensitive to varying $l$ from $1$ to $9$.
Thus, we set $l=1$ in our experiments to minimize computational complexity. \bibliographystyle{plain} \section{Introduction} \label{sec:introduction} \subsection{Motivation} Inspired by the close relationship between the diversity of species and the resilience of ecosystems~\cite{Walker99}, information and software assurance research has evolved to include the concept of {\em software diversity} for enhanced security~\cite{Hole15, Hole15-antifragile, Larsen14, Larsen15, Donnell04}. Due to the dominant trend of deploying software monocultures for efficiency and effectiveness of service provision, attackers have been granted a significant advantage: the intelligence needed to exploit a single software vulnerability enables them to efficiently compromise other homogeneous system components, such as operating systems, software packages, and/or hardware packages~\cite{Yang16}. To deny this advantage, the concept of diversity has been applied in the cybersecurity literature~\cite{Knight16}. Randomization of software features has been used to thwart cyber attacks by increasing an attacker's uncertainty about a target system whose critical information was previously known. The concept of moving target defense (MTD)~\cite{Hong16, Manadhata11} has been proposed to change the attack surface in order to increase uncertainty and confusion for attackers, and software diversity-based security mechanisms have also been used as part of MTD techniques. Research has shown that software diversity is closely related to enhancing the immunization of a computer system, halting multiple simultaneous outbreaks of malware infections with heterogeneous and sparse spreading patterns~\cite{Newman10}. Hence, the rationale that software diversity reduces malware spreading is quite well known, and its effectiveness has been validated to some extent~\cite{Hole15, Hole15-antifragile}. This underlying philosophy encompasses a simple principle: {\em software polyculture} enhances security~\cite{Hole15}. Thanks to Internet access, which enables the distribution of individualized software, and cloud computing, which provides the computational power to perform diversification, massive-scale software diversity is becoming a realistic and practical approach to enhance security~\cite{Larsen14}. Although the benefit of software diversity seems obvious, the secure and transparent implementation of automatic software diversity is highly challenging~\cite{Larsen15}. In addition, no prior work has considered software diversity metrics as the basis for adapting a network topology to balance network connectivity and system security, where each node's software vulnerability is incorporated into estimating its software diversity. In this work, we are interested in developing a software diversity metric that measures a node's software diversity based on the software vulnerabilities of intermediate nodes on the attack paths reachable to the node. \vspace{-2mm} \subsection{Research Problem} In this work, we develop a software diversity metric for measuring a network topology in terms of minimizing security vulnerabilities against epidemic attacks (e.g., malware/virus spreading) while maintaining a sufficient level of network connectivity to provide seamless service availability.
The proposed software diversity metric can be used to decide which two nodes should be disconnected or connected in order to construct an improved network topology meeting these two goals: minimizing security vulnerability and maximizing network connectivity. However, identifying the optimal network topology has exponential solution complexity~\cite{Yang08}. In this work, we propose a heuristic method called software diversity-based adaptation (SDA) to generate a better network topology that is resilient against epidemic attacks with sufficiently high network connectivity and an acceptable deployment cost. We leverage percolation theory~\cite{Newman10}, which has been used to describe the process or paths of a liquid passing through a medium. We use this theory to model and analyze attack processes and defense or recovery processes by using site or bond percolation. {\em Site percolation} (i.e., removing a node)~\cite{Newman10} is used to model an attacker's behavior in compromising another node: the node being percolated is the node being compromised (infected) by the attacker, leading to the disconnection of all edges around the node to reflect its failure or its being detected by an intrusion detection system (IDS). {\em Bond percolation} is used to adjust edges between nodes such that connected nodes with high security vulnerability (e.g., two connected nodes with the same software package installed, or a neighbor node with high software vulnerability) are disconnected, while disconnected nodes with low or no security vulnerability (e.g., two disconnected nodes using different software packages with low software vulnerability) can be connected in a given network. \begin{comment} \subsubsection{Research Questions} We aim to answer the following {\bf key research questions}: \begin{itemize} \item Is our proposed software diversity metric a good indicator to represent network resilience in both security and network connectivity in the presence of epidemic attacks? \item What are the key impacts from varying the key design parameters in terms of the attack density (or strength), the number of software packages available, the software diversity level, or the network adaptation deployment cost when the proposed scheme is compared against comparable counterpart baseline schemes? \item Which adaptation strategies perform the best among all the schemes considered and under what circumstances (e.g., network density, attack density, or availability of software packages)? \item What are the key factors that maximize network resilience in security and network connectivity while incurring acceptable deployment cost? \end{itemize} \end{comment} \vspace{-2mm} \subsection{Key Contributions} We make the following {\bf key contributions} in this work: \begin{itemize} \item This work is the first to take a multidisciplinary approach, combining software diversity from computer science to enhance cybersecurity with percolation-theoretic network resilience techniques to study the effect of interconnectivity on network connectivity under epidemic attacks. Specifically, we develop network adaptation strategies that determine whether to add or remove edges between two nodes in a given network, aiming to minimize network vulnerabilities against epidemic attacks while maintaining maximum network connectivity.
Given that each node is installed with a set of software (we call it a `software package'), we investigate network resilience and vulnerability depending on how the network topology is connected under epidemic attackers who can exploit vulnerabilities based on their knowledge of software vulnerabilities. \item We develop a novel software diversity metric that measures a node's software diversity level, representing both the vulnerabilities of attack paths reachable to the node and the network connectivity. To minimize the computational complexity of estimating a node's software diversity based on attack path vulnerability, we introduce a node's local network based only on the node's $k$-hop neighbors. This approach allows us to provide a lightweight method to compute each node's software diversity. To prove the effectiveness of this software diversity metric, we use it as the criterion to determine whether to add/remove an edge between two nodes. \item Although most software diversity-based network topology adaptations are studied by shuffling the types of software packages~\cite{Yang08, Yang16}, our work takes one step further by changing the network topology, which proves much more effective than its software shuffling counterpart (e.g., graph coloring) in reducing vulnerability to epidemic attacks while maximizing network connectivity. In addition, our proposed software diversity-based network adaptations are lightweight, showing acceptable operational cost while achieving minimum security vulnerability and maximum network connectivity, which opens the door to applicability in resource-constrained, contested network environments. \item We validate the outperformance of the proposed SDA strategy by conducting a comprehensive comparative performance analysis with the following six schemes (see Section \ref{subsec:comparing_schemes}): non-adaptation, random adaptation, graph-coloring, and three variants of the proposed SDA strategies. We analyze the effect of key design parameters, such as network density, attack density, and the number of software packages available, on four performance metrics (see Section~\ref{subsec:metrics}): the size of a giant component, the fraction of undetected compromised nodes, software diversity levels, and defense cost (i.e., shuffling plus network topology adaptation costs). We validate the outperformance of our SDA scheme on three real network topologies covering dense (high), medium dense, and sparse (low) networks~\cite{snapnets}. Further, to profoundly understand the effect of various network characteristics, we conduct a sensitivity analysis under a random graph using the Erd{\"o}s-R{\'e}nyi (ER) network model and analyze the results. Due to space constraints, we placed these results for the ER network in Sections C.2--C.3 of the appendix file. \end{itemize} We discuss the answers to the research questions in Section~\ref{sec:exp-result-analysis} and conclude in Section~\ref{sec:conclusion}. \section{Background \& Related Work} \label{sec:related_work} This section provides an overview of related work and background literature on the percolation theory studied for network resilience in Network Science and the software diversity-based approaches studied for system security in Computer Science. \subsection{Percolation Theoretic Network Resilience} \label{subsec:percolation_theory} Percolation theory has been used extensively to investigate network resilience (or robustness) in Network Science.
{\em Site percolation} and {\em bond percolation} are commonly used to select a node or an edge to remove or add, modeling the choice of nodes to immunize in the context of epidemics on networks, such as disease transmission, computer malware/virus spreading, or behavior propagation (e.g., product adoption)~\cite{Dezso02, Newman10}. Recently, percolation theory was leveraged to develop software diversity techniques, particularly to solve a software assignment problem~\cite{Yang08, Yang16}, because how nodes are connected matters in propagating malware infection, and choosing nodes or edges to add or remove follows exactly the concept of site or bond percolation~\cite{Newman10}. The origins of percolation theory lie in the mathematical formalization of statistical physics research on the flow of liquid through a medium~\cite{Grimmett1997}. Percolation theory has been applied extensively to networks to study connectivity, robustness~\cite{Barabasi2016, Newman10}, reliability~\cite{Li15}, and epidemics~\cite{cardy1985epidemic, moore2000epidemics}. The percolation process was studied in computer science under the notion of ``network resilience''~\cite{Colbourn87, Najjar90, Sterbenz10}, independently of its development in the statistical physics literature. More recent developments in the physics literature have profoundly influenced studies in computer science. These contributions have incorporated the recognition that networks do not have a purely random structure, and that failures of nodes, whether from attacks or due to dependent correlations, are not uniformly random~\cite{albert2000error}. Hence, significant interest has developed in removal processes that model targeted attacks on the network using a centrality metric. In the network science domain, the degree of network resilience is commonly measured by the size of the giant component (i.e., the largest connected component in a given network), which gives a clear sense of how connected the network remains even after a certain number of nodes or edges are removed. Percolation theory has been used to model various processes on networks in the context of network failures or attacks, e.g., connectivity, routing, and epidemic spreading~\cite{Colbourn87, Najjar90}. \subsection{Software Diversity-based Cybersecurity} \label{subsec:software_diversity} Many approaches have been explored to validate the usefulness of software or network diversity for ensuring network security. \citet{Chen16-safety} investigated the usefulness of software diversity to enhance security. \citet{Huang14, Huang17} solved a software assignment problem by isolating nodes with the same software to minimize the effect of epidemic worm attacks. \citet{Franz10} proposed an approach to introduce compiler-generated software diversity into a large-scale network, aiming to create hurdles for attackers and eliminate any advantage of knowing the vulnerabilities of a single software package. \citet{Homescu17} presented large-scale automated software diversification to mitigate the vulnerabilities exposed by software monoculture. \citet{Yang08, Yang16} proposed a software diversity technique to combat sensor worms by solving a software assignment problem, given a limited number of software versions available. The authors used percolation theory to model the design features of software diversity to defend against sensor worms.
\citet{Zhang01} developed a resilient system based on heterogeneous networking at a time when a single solution was commonly adopted to increase interoperability. Recently, {\em network diversity} was proposed as a security metric to measure network resilience against zero-day attacks~\cite{Zhang16}. Inspired by these network diversity metrics~\cite{Zhang16}, \citet{Li18} further developed the network model and diversity metric based on vulnerability similarity, configuration constraints, and multi-label hosts. \citet{hosseini18} mathematically analyzed malware propagation in an epidemic model over a network with six different types of nodes. They proved a positive correlation between network security and the degree of network diversity. \citet{prieto2019} proposed an optimal software assignment algorithm with multiple software packages to enhance network resilience under attacks. Although the above works discussed the concept of software diversity for ensuring system security, their aim was to solve a software assignment problem by shuffling different types of software packages among nodes without changing the network topology. Unlike the software assignment approach, we aim to generate an optimal network topology that is resilient against epidemic attacks while maximizing network connectivity. The proposed software diversity metric is designed for each node to decide whether to add or remove an edge based on the vulnerabilities of the attack paths reachable to the node~\cite{Chen07, Keramati13}. \section{System Model} \label{sec:system_model} This section discusses our system model in terms of the network model, the node model, the attack model, and the defense model. \subsection{Network Model} \label{subsec:network_model} In this work, we are concerned with a special distributed network environment where each node is governed by one of a set of regional coordinators. Examples include a software-defined network (SDN) where each node can be instructed by the SDN controller it belongs to~\cite{Kreutz15}, an edge computing Internet-of-Things (IoT) system with some high-computing edge nodes available to perform demanding tasks~\cite{Li18-edge}, and a hierarchical mobile ad hoc network with decentralized controllers in charge of governing the nodes under their control~\cite{Cho08-decen-manet}. Periodic information exchange between nodes and the regional coordinators is required to ensure seamless operation of the system. However, since each node's software diversity value, which is used to make decisions on edge adaptation (i.e., adding/removing edges), is computed locally by each node, a regional coordinator only needs to rank the software diversity values of the neighbor nodes around a target node and inform the target node of which edges to add/remove based on the estimated ranks. Moreover, this ranking operation is performed only periodically by a regional coordinator and does not require high communication overhead for each node to communicate with the regional coordinator. We model the network as a temporal, undirected network whose topology evolves due to node failures or nodes being compromised by attackers. In addition, the network may change its topology when adaptation strategies are performed by connecting two nodes or by disconnecting all the edges associated with compromised nodes to mitigate the spread of infection over the network.
We denote the nodes in the network by $i = 1, \ldots, N$, each characterized by a set of attributes as shown in Section~\ref{subsec:node_model}. An edge between nodes can be on or off depending on the dynamics caused by node failures, node recovery, or edge adaptations (i.e., an edge can be added or removed). We maintain an adjacency matrix $\mathbf{A}$ in order to keep track of the connectivity (i.e., edges) between nodes, where $a_{ij}=1$ indicates that there exists an edge between nodes $i$ and $j$ while $a_{ij}=0$ indicates that no edge exists. In order for each node to efficiently estimate its software diversity by considering the vulnerabilities of the attack paths reachable to it, each node only considers neighboring nodes within $k$-hop distance from itself. We call this a node's {\em $k$-hop local network}. This local network is used by each node to estimate its software diversity value by considering the vulnerabilities of the attack paths available within its local network. Although we keep $k$ sufficiently small (e.g., 1 or 2), this does not underestimate the vulnerabilities of possible attack paths, because using a smaller $k$ takes the conservative perspective that an attacker is already quite close, i.e., within the local network. For example, if an attacker wants to compromise a particular target node, it may try multiple attack paths, where each attack path has a set of intermediate nodes. When an attack path is long, the vulnerability of the target node is low, as the attacker needs to compromise all the intermediate nodes in order to finally compromise the target node. However, when the path length is small, the attacker is close to the target node, and the attack vulnerability is not necessarily decreased. We assume that the software packages installed in each node and the associated vulnerability information are given to the regional coordinator in the initial network deployment period. In addition, we assume that each node is well informed about the software vulnerability information associated with the software packages installed in the neighboring nodes in its $k$-hop local network. We assume that the changes of network topology are mainly made by node failures or network adaptations in this work. Adding or removing an edge between two nodes requires secure communications between them. Even if two nodes are within wireless range of each other, they are not logically connected unless they share a secret key for secure communications. In this work, the optimal network topology we generate, which is resilient against epidemic attacks with maximum network connectivity, is a logical network topology. \subsection{Node Model} \label{subsec:node_model} Each node $i$ is characterized by the following attributes: \begin{itemize} \item Node $i$'s activity status, denoted by $na_i$, indicating whether it is alive ($na_i=1$) or failed ($na_i=0$); \item Node $i$'s compromise status, denoted by $nc_i$, indicating whether the node is compromised ($nc_i=1$) or non-compromised ($nc_i=0$); \item Node $i$'s installed software package, representing the diversified package or version of the same software providing the same functionality. In this work, we adopt the well-known software diversification approach called {\em N-version programming}~\cite{Avizienis77, Avizienis85}.
Under this concept, a piece of software has multiple independently developed implementations that provide the same functionality but naturally contain different bugs or vulnerabilities. Following this concept, we model node $i$'s installed software package by $s_i$, an integer in $[1, N_s]$, where $N_s$ is the limited number of software packages available; \item Node $i$'s degree of software diversity, $sd_i$, whose physical meaning is how different node $i$'s software package is from those of its neighbors. The details of the computation of node $i$'s software diversity are elaborated in Eq.~\eqref{eq:metric_sd}; and \item Node $i$'s software vulnerability, denoted by $sv_i$, derived from the software package $s_i$ it is installed with. \end{itemize} Based on the above five attributes, node $i$ is characterized by: \begin{equation} \label{eq:node_attributes} \mathbf{node} (i) = [na_i, nc_i, s_i, sd_i, sv_i]. \end{equation} If attacker $j$ targets vulnerable node $i$ (i.e., a node that has not been compromised before), which is one of its direct neighbors, the probability that node $j$ infects node $i$, denoted by $\beta_{ji}$, is the probability that node $j$ can exploit the vulnerability of node $i$'s software package, $s_i$. We estimate this probability based on node $i$'s vulnerability to node $j$ by~\cite{Hole15}: \begin{equation} \label{eq:exploit_rho} \beta_{ji} = \begin{cases} 1 & \quad \text{if } \sigma_j (s_i) > 0 \text{;} \\ sv_{i} & \quad \text{otherwise,} \end{cases} \end{equation} where $\sigma_j$ is a binary vector indicating the software packages whose security vulnerabilities attacker $j$ has learned. For example, if attacker $j$ knows the vulnerabilities of software packages 1 and 3 among 5 available packages, this is denoted by $\sigma_j=[1, 0, 1, 0, 0]$. In this case, the sum of $\sigma_j$ indicates the number of software packages whose vulnerabilities attacker $j$ knows and can therefore exploit. Note that $\sigma_j$ is dynamic: after node $j$ compromises node $i$, it learns $s_i$'s vulnerability via reconnaissance, even if their installed software packages are different, i.e., $s_i \neq s_j$. Here $sv_{i}$ refers to the vulnerability of software package $s_i$, which can be estimated based on the degree of a Common Vulnerabilities and Exposures (CVE) entry with a Common Vulnerability Scoring System (CVSS) severity score~\cite{CVSS2018, CVE2018}. A node's mean vulnerability is simply obtained as the scaled mean across its multiple vulnerabilities in $[0, 10]$, where the maximum vulnerability score in CVSS is 10; we normalize the value to a real number in $[0, 1]$. \subsection{Attack Model} \label{subsec:attack_model} This work deals with two stages of attack behaviors: an outside attacker before a node is compromised, and an inside attacker after the node is compromised but undetected. \vspace{1mm} \noindent {\bf (1) Node Compromise by Epidemic Attacks}: We consider the so-called {\em epidemic attack}, which describes an attacker's infection behavior based on an epidemic model, namely the SIR (Susceptible-Infected-Removed) model~\cite{Newman10}. That is, an outside attacker can compromise the nodes directly connected to itself, its direct neighbors, without access rights to their settings or files. Typical example scenarios include the spread of malware or viruses; for instance, botnets can spread malware via mobile devices.
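Since the success of each infection attempt is governed by Eq.~\eqref{eq:exploit_rho}, we include a minimal Python sketch of it here; the package vulnerability values and the encoding of $\sigma_j$ as a 0/1 list are illustrative assumptions, not values or data structures prescribed by our design.

\begin{verbatim}
# Minimal sketch of the infection probability beta_{ji} in
# Eq. (exploit_rho). The vulnerability values SV are illustrative
# assumptions, not measured data.
N_S = 5                                # number of software packages
SV = [0.41, 0.35, 0.48, 0.22, 0.16]    # sv per package (assumed)

def beta(sigma_j, s_i):
    """Probability that attacker j infects direct neighbor i.

    sigma_j[s-1] > 0 means attacker j has learned the vulnerability
    of software package s; packages are indexed 1..N_S.
    """
    if sigma_j[s_i - 1] > 0:
        return 1.0          # known vulnerability: exploit succeeds
    return SV[s_i - 1]      # otherwise: succeed with probability sv_i

def learn(sigma_j, s_i):
    """After compromising node i, attacker j learns s_i's flaws."""
    sigma_j[s_i - 1] = 1

sigma_j = [1, 0, 1, 0, 0]   # j knows the flaws of packages 1 and 3
print(beta(sigma_j, 3))     # 1.0
print(beta(sigma_j, 2))     # 0.35
\end{verbatim}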
A mobile device can be misused by mobile malware, such as a Trojan horse, acting as a bot client to receive commands and control from a remote server~\cite{Mavoungou16}. Further, worm-like attacks are popular in wireless sensor networks, where a sensor worm attacker sends a message exploiting a software vulnerability in order to cause a crash or take control of sensor nodes~\cite{Yang08, Yang16}. Attacker $j$ can compromise its direct neighbor $i$ when node $i$ uses a software package that attacker $j$ can exploit because the attacker knows the vulnerability of the software package. This happens when $s_i$ is the same as $s_j$, or when attacker $j$ learned $s_i$'s vulnerability in the past (i.e., $\mathbf{\sigma}_j(s_i)> 0$). We assume that attacker $j$ knows the vulnerability of its own installed software package, $s_j$. Attacker $j$ can also learn the vulnerabilities of other software packages, although it needs to commit more time and resources to obtain their security vulnerability information. Node $i$'s vulnerability to attacker $j$ under these two cases is reflected in Eq.~\eqref{eq:exploit_rho}. When node $i$ is compromised, node $i$'s status changes from `susceptible' to `infected', indicating that node $i$ is now an attacker. Then, node $i$ can infect other nodes and learn their software vulnerabilities, which were previously unknown to it. The attack procedures are described in Algorithm~8 of the appendix file. \vspace{1mm} \noindent {\bf (2) Malicious Behavior of Compromised Nodes Undetected by the IDS}: Although an intrusion detection system (IDS) is assumed to be in place in this work (see Section~\ref{subsec:defense_model} below), an attacker may not be detected by the IDS, and such an inside attacker can perform malicious behaviors such as packet dropping attacks (e.g., gray or black hole attacks), data exfiltration attacks, or denial-of-service (DoS) attacks to compromise the security goals in terms of loss of confidentiality, integrity, and availability~\cite{Do15, Wood02}. \subsection{Defense Model} \label{subsec:defense_model} We assume that the system is equipped with an IDS, which detects infected (i.e., compromised) nodes. An infected node $i$ is detected by the IDS with probability $\gamma$, which represents the removal probability in the SIR model. The response to a detected node is performed by disconnecting all the edges connected to the detected attacker, which corresponds to removing the node from the system based on the concept of {\em site percolation}. Note that the development of an IDS is beyond the scope of this work. We simply characterize the IDS by a false positive probability and a false negative probability, both of which have the value $1-\gamma$. \section{Software Diversity based Adaptation Algorithm Design} \label{sec:software_diversity_strategies} In this section, we describe our proposed software diversity based adaptation (SDA) algorithm design in detail. SDA uses software diversity as the key determinant to select edges to percolate (i.e., add or remove), both to mitigate the spread of compromised nodes by epidemic attackers and to maximize network connectivity for network resilience. \subsection{Software Diversity Metric} \label{subsec:diversity_vulnerability} A node's vulnerability is commonly computed based on its installed software package~\cite{Hole15, Hole15-antifragile}.
However, if a node is connected, directly or indirectly, with many other nodes, its potential vulnerability is not simply restricted to the vulnerability of its own software package. We use a broader concept of node vulnerability by embracing the vulnerabilities of the attack paths reachable to each node. To better capture the relationship between node vulnerability and network topology, we utilize the attack paths an attacker can take to successfully compromise a target node. That is, in order to compromise the target node, the attacker needs to compromise all intermediate nodes on the attack path. Hence, we estimate each node's software diversity value, which refers to the probability that the node is robust against the vulnerabilities of the attack paths reachable to it. \begin{figure*}[th!] \centering \subfigure[Software diversity estimation of node $i$]{ \includegraphics[width =0.45\textwidth, height = 0.35\textwidth]{./figs/sd-metric.png}} \subfigure[Edge adaptations based on software diversity differences]{ \includegraphics[width=0.45\textwidth, height = 0.35\textwidth]{./figs/sda-adaptation.png}} \caption{Example of the software diversity-based adaptation strategies: (a) The estimation of node $i$'s software diversity value; and (b) The edge adaptation based on the software diversity difference in Eqs.~\eqref{eq:sd-diff-add} and~\eqref{eq:sd-diff-remove}.} \label{fig:sd-overview} \end{figure*} To this end, we consider the shortest paths (i.e., paths of at most $k$-hop distance) from boundary nodes (i.e., nodes on the boundary of a target node's local network) to the target node as attack paths. In addition, to reduce the complexity of measuring each node's software diversity, we use a limited number of attack paths, denoted by $l$, where each path has at most $k$-hop distance. Target node $i$'s software diversity, denoted by $sd_i (k, l)$, is obtained by: \begin{equation} \label{eq:metric_sd} sd_i (k, l):=\prod_{j \in \mathbf{ap}_i}^l (1-apv^k_{ij}), \end{equation} where $\mathbf{ap}_i$ is the set of at most $l$ attack paths available to node $i$, ranked by highest vulnerability, and $apv^k_{ij}$ is the vulnerability of an attack path from node $j$ to node $i$ with maximum hop distance $k$. In order to consider the maximum number of nodes associated with the attack paths, we consider disjoint attack paths from the boundary nodes (i.e., the $j$'s) to node $i$.
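For concreteness, the following is a minimal Python sketch of Eq.~\eqref{eq:metric_sd} using the NetworkX library. The path-vulnerability model (the product of the $sv$ values of the nodes an attacker must compromise along the path) is one plausible instantiation assumed purely for illustration, and path disjointness is omitted for brevity.

\begin{verbatim}
# Minimal sketch of the software diversity metric sd_i(k, l) in
# Eq. (metric_sd). The path-vulnerability model below is an assumed
# instantiation; disjointness of attack paths is omitted for brevity.
import networkx as nx

def path_vulnerability(path, sv):
    """apv of one attack path: probability that the attacker
    compromises every node on the path."""
    apv = 1.0
    for v in path:
        apv *= sv[v]
    return apv

def software_diversity(G, i, sv, k=2, l=1):
    """sd_i(k, l): robustness of node i against its l most
    vulnerable attack paths of at most k hops."""
    # Boundary nodes of node i's k-hop local network.
    dist = nx.single_source_shortest_path_length(G, i, cutoff=k)
    boundary = [j for j, d in dist.items() if d == k]
    apvs = []
    for j in boundary:
        path = nx.shortest_path(G, source=j, target=i)
        apvs.append(path_vulnerability(path[:-1], sv))  # exclude i
    apvs.sort(reverse=True)          # rank by highest vulnerability
    sd = 1.0
    for apv in apvs[:l]:             # keep at most l attack paths
        sd *= 1.0 - apv
    return sd

G = nx.erdos_renyi_graph(20, 0.2, seed=3)
sv = {v: 0.3 for v in G}             # uniform sv, for illustration
print(software_diversity(G, 0, sv, k=2, l=2))
\end{verbatim}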
\subsection{Software Diversity based Bond Percolation for Network Adaptation} \begin{algorithm}[!th] \small{ \caption{Software Diversity-based Adaptation (SDA)} \label{algo:SDA} \begin{algorithmic}[1] \State{$N \leftarrow$ The total number of nodes in a network} \State{$\mathbf{DN} \leftarrow$ A vector containing the number of removed edges per node} \State{$\mathbf{A} \leftarrow$ An adjacency matrix for a given network with element $a_{ij}$ for $i, j = 1, \ldots, N$} \State{$\mathbf{S} \leftarrow$ A vector of software packages installed over nodes with element $s_i$ for $i = 1, \ldots, N$} \State{$\mathbf{SV} \leftarrow$ A vector of the vulnerabilities associated with software packages} \State{$\mathbf{SD} \leftarrow$ A vector of software diversity values, $sd_i$ for all nodes $i = 1, \ldots, N$} \State{$\mathbf{PV} \leftarrow$ A vector of maximum path vulnerabilities for all nodes $i$ where $pv_i$ refers to the maximum attack path vulnerability over paths of at most $(k-1)$-hop distance from node $i$} \State{$k \leftarrow$ A hop distance given in a node's local network} \State{$l \leftarrow$ A maximum number of attack paths considered for estimating a node's software diversity} \State{$\rho \leftarrow$ A threshold referring to the fraction of edges to be removed when $\rho<0$ and added when $\rho>0$} \State{$\mathbf{A}' \leftarrow$ An adjacency matrix after edges are adapted in Step 1.} \State{$\mathbf{A}'' \leftarrow$ An adjacency matrix after edges are adapted in Step 2.} \\ \State{$\mathbf{A}''= \mathbf{SDA} (N, \mathbf{DN}, \mathbf{A}, \mathbf{S}, \mathbf{SV}, k, l, \rho)$} \\ \State{{\bf Step 1:} $\mathbf{A}' = \mathbf{SDBA} (N, \mathbf{DN}, \mathbf{A}, \mathbf{S})$} \Comment{Remove edges between two nodes with the same software package based on Algorithm~1 of the appendix file.} \\ \State{{\bf Step 2:} Add/remove edges locally based on the ranks of the software diversity differences estimated in Eqs.~\eqref{eq:sd-diff-add} and~\eqref{eq:sd-diff-remove}} (Algorithms~4 and 5 of the appendix file) \State{$\mathbf{A^*}\leftarrow \mathbf{(A'+I)}^{2k}$ where $a^*_{ij}$ is 1 when nodes $i$ and $j$ belong to each other's local network or their neighbors' local networks; 0 otherwise.} \State{$\mathbf{SD} \leftarrow$ A vector of software diversity where each element, $sd_i (k, l)$, refers to node $i$'s software diversity value when at most $l$ attack paths are considered, each of at most $k$-hop length.} \State{$\mathbf{PV} \leftarrow$ A vector of estimated attack path vulnerabilities associated with each node.} \State{$\mathbf{candidate}\leftarrow$ A set of edge candidates} \Comment{Algorithms~4 and 5 of the appendix file.} \State{$\mathbf{T}^{local}, T^{global} = \mathbf{setEAB} (\mathbf{DN},\mathbf{A}',\rho)$}\Comment{Set edge adaptations budget based on Algorithm~3 of the appendix file.} \If{$\rho>0$} \State{$\mathbf{candidate} = \mathbf{GEAC}(\mathbf{A}',\mathbf{SD},\mathbf{SV},\mathbf{S},\mathbf{PV},\mathbf{T}^{local}$)} \\ \Comment{Algorithm~4 in the appendix file.} \Else \State{$\mathbf{candidate}$ = $\mathbf{GERC}(\mathbf{A}',\mathbf{SD},\mathbf{SV},\mathbf{S},\mathbf{PV},\mathbf{T}^{local}$)} \\ \Comment{Algorithm~5 in the appendix file.} \EndIf \State{$\mathbf{A}''=\mathbf{AdaptNT}(\mathbf{A}',\mathbf{candidate}, \mathbf{T}^{local},T^{global},\rho$)} \\ \Comment{Algorithm~6 in the appendix file.} \State{$\mathbf{return}\ \mathbf{A}''$} \end{algorithmic} } \end{algorithm} The design objective of SDA is to decide which edges to add or remove
in order to maximize the size of the giant component (i.e., the largest network cluster in a network) for maintaining network connectivity and to minimize the fraction of nodes compromised by epidemic attacks, with minimum defense cost as defined in Section \ref{subsec:metrics}. We have two tasks to determine which edges to remove or add, as follows: \begin{enumerate} \item Estimate the gain or loss resulting from removing or adding an edge, based on the difference between a node's current software diversity value and its expected software diversity value after the edge adaptation is made between nodes $i$ and $j$. To determine if adding an edge between nodes $i$ and $j$ is beneficial, we compute the software diversity difference by: \begin{equation} SD_{\mathrm{diff}}^{A} (i, j) = (sd_i - sd'_i) + (sd_j - sd'_j), \label{eq:sd-diff-add} \end{equation} where we write $sd_i = sd_i (k, l)$ and $sd_j = sd_j (k, l)$ for simplicity, with $sd_i (k, l)$ and $sd_j (k, l)$ defined in Eq.~\eqref{eq:metric_sd}. $sd'_i$ and $sd'_j$ are the expected software diversity values of nodes $i$ and $j$ after the edge is added. The most promising candidate edge to be added is the edge with the lowest $SD_{\mathrm{diff}}^A (i, j)$.
$sd'_i$ is simply obtained by \begin{equation} sd'_i = sd_i (1-sv_{i} pv_j), \end{equation} where $sv_{i}$ is the software vulnerability of the software package installed in node $i$ (i.e., $s_i$) and $pv_j$ is the attack path vulnerability from node $j$ to the boundary node in node $j$'s local network. That is, $sv_{i} pv_j$ is the same as $apv_{ij}$ in Eq.~\eqref{eq:metric_sd} (where we omit $k$ for simplicity). $sd'_j$ is obtained similarly. To determine if removing the edge between nodes $i$ and $j$ is beneficial, we compute the software diversity difference by: \begin{equation} SD_{\mathrm{diff}}^{R} (i, j) = (sd'_i - sd_i) + (sd'_j - sd_j), \label{eq:sd-diff-remove} \end{equation} where $sd'_i$ is computed by: \begin{equation} sd'_i = sd_i/(1-sv_{i} pv_j). \end{equation} Here the division by $(1-sv_{i} pv_j)$ represents the extent to which the vulnerability is reduced by removing the edge between nodes $i$ and $j$, based on Eq.~\eqref{eq:metric_sd}. $sd'_j$ is obtained similarly to $sd'_i$ above. The most promising candidate edge to be removed is the edge with the highest $SD_{\mathrm{diff}}^R (i, j)$; a sketch of this gain/loss computation is given at the beginning of Section~\ref{sec:exp-setup}. See Algorithm~4 (Generate Edge Addition Candidates, GEAC) and Algorithm~5 (Generate Edge Removal Candidates, GERC) of the appendix file, which provide the details on generating edge candidates for edge addition and removal, respectively. \item Estimate how many edges each node can adapt, i.e., remove or add. Based on the rationale that high centrality (e.g., high degree) nodes may expose high vulnerability in terms of security and network connectivity, we minimize the difference between the maximum degree and the minimum degree by adding more edges to nodes with lower degree while deleting edges from nodes with higher degree. Based on this principle, we develop a heuristic method to estimate how many edges should be adapted per node. See Algorithm~3 (Set Edge Adaptations Budget, SetEAB) of the appendix file for details. \end{enumerate} Algorithm~\ref{algo:SDA} describes our proposed software diversity-based adaptation (SDA) algorithm in detail. It executes Algorithm~1 of the appendix file in Step 1 on line 16 to remove edges between two nodes with the same software package. It then makes the decision to add/remove edges locally in Step 2 based on the ranks of the software diversity differences estimated via Eqs.~\eqref{eq:sd-diff-add} and~\eqref{eq:sd-diff-remove}, with the objective to best satisfy both security (i.e., minimum or no impact from epidemic attacks) and performance (i.e., a sufficient level of network connectivity) requirements. Fig.~\ref{fig:sd-overview} illustrates the SDA algorithm execution with an example network where distinct software packages are marked with distinct colors. Fig.~\ref{fig:sd-overview} (a) illustrates how node $i$ estimates its software diversity value when $k=l=2$. Fig.~\ref{fig:sd-overview} (b) illustrates how node $i$ determines whether to add or remove edges based on the software diversity differences, $SD_{\mathrm{diff}}^{A}$ and $SD_{\mathrm{diff}}^{R}$, computed via Eqs.~\eqref{eq:sd-diff-add} and~\eqref{eq:sd-diff-remove}, respectively. \section{Experimental Setup} \label{sec:exp-setup} In this section, we describe the performance metrics, the counterpart baseline schemes against which our proposed SDA algorithm (i.e., Algorithm~\ref{algo:SDA}) is compared, and the simulation environment setup for performance evaluation.
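Before describing the metrics, we give a minimal Python sketch of the edge-adaptation gain/loss estimation of Eqs.~\eqref{eq:sd-diff-add} and~\eqref{eq:sd-diff-remove}, as exercised in our simulations; the per-node values $sd$, $sv$, and $pv$ are assumed to be precomputed as described in Section~\ref{sec:software_diversity_strategies}, and the dictionary layout is an assumption of the sketch.

\begin{verbatim}
# Sketch of Eqs. (sd-diff-add) and (sd-diff-remove); sd, sv, pv are
# dictionaries of precomputed per-node values (an assumed layout),
# and we assume sv[i] * pv[j] < 1 so the division is well defined.
def sd_after_add(sd_i, sv_i, pv_j):
    """Expected diversity of node i after adding edge (i, j)."""
    return sd_i * (1.0 - sv_i * pv_j)

def sd_after_remove(sd_i, sv_i, pv_j):
    """Expected diversity of node i after removing edge (i, j)."""
    return sd_i / (1.0 - sv_i * pv_j)

def sd_diff_add(sd, sv, pv, i, j):
    """Diversity loss from adding edge (i, j); lower is better."""
    return ((sd[i] - sd_after_add(sd[i], sv[i], pv[j]))
            + (sd[j] - sd_after_add(sd[j], sv[j], pv[i])))

def sd_diff_remove(sd, sv, pv, i, j):
    """Diversity gain from removing edge (i, j); higher is better."""
    return ((sd_after_remove(sd[i], sv[i], pv[j]) - sd[i])
            + (sd_after_remove(sd[j], sv[j], pv[i]) - sd[j]))
\end{verbatim}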
\subsection{Performance Metrics} \label{subsec:metrics} We use the following performance metrics: \begin{itemize} \item {\bf Software diversity ($SD$)}: This metric measures the mean software diversity over all nodes in a network. Since node $i$'s software diversity, $sd_i$, is computed based on Eq.~\eqref{eq:metric_sd}, the mean software diversity over all nodes in the network is obtained by: \begin{equation} \label{eq:metric_average_sd} SD = \frac{\sum_{i=1}^N sd_i}{N}. \end{equation} Recall that $k$ determines node $i$'s local network and is thus the maximum possible hop distance from node $i$ to any other node in its local network. Higher software diversity is more desirable to ensure high system security. \item {\bf Size of the giant component ($S_g$)}: This metric captures the degree of network connectivity among the non-compromised (uninfected), active nodes in a network. $S_g$ is computed by: \begin{equation} \label{eq:metric_giant_component} S_g = \frac{N_g}{N}, \end{equation} where $N$ is the total number of nodes in the network and $N_g$ is the number of nodes in the giant component. Higher $S_g$ is more desirable, implying higher network resilience in the presence of epidemic attacks. \item {\bf Fraction of compromised nodes ($P_c$)}: This metric measures the fraction of nodes compromised by epidemic attacks over the total number of nodes in a network. It includes both currently infected nodes (not detected by the IDS) and removed nodes (previously infected and detected by the IDS). $P_c$ is computed by: \begin{equation} \label{eq:metric_vulnerable_component} P_c = \frac{N_c}{N}, \end{equation} where $N_{c}$ represents the total number of compromised nodes after epidemic attacks on a network (i.e., the original network under No-Adaptation and an adapted network under all adaptation schemes). See Section \ref{subsec:comparing_schemes} for a listing of the counterpart baseline schemes against which our proposed SDA algorithm is compared for a comparative performance analysis. \item {\bf Defense cost ($D_c$)}: This metric measures the defense cost associated with the following defense strategies employed by an adaptation scheme: (1) edge adaptations (i.e., adding or removing edges) to isolate attackers (or compromised nodes) detected by the IDS; (2) edge adaptations to maximize software diversity by each node based on the value of the software diversity metric in Eq.~\eqref{eq:metric_sd}; and (3) shuffling operations, counted as the fraction of nodes whose software package is randomly shuffled over the total number of nodes. $D_c$ is computed by: \begin{equation} \label{eq:metric_dc} D_c = \frac{\mathrm{sum}(|\mathbf{A}-\mathbf{B}|)}{\mathrm{sum}(\mathbf{A}+\mathbf{B})} + \frac{N_{SF}}{N}. \end{equation} In the first term, the numerator measures the edge differences between the adjacency matrix $\mathbf{B}$ of the original network and the adjacency matrix $\mathbf{A}$ of the adjusted network after edge adaptations are made; the denominator is the entrywise sum of the two matrices. In the second term, $N_{SF}$ is the number of nodes whose software packages are shuffled and $N$ is the total number of nodes. Note that when a node's software package is shuffled but stays the same as its original software package, it is excluded from counting toward $N_{SF}$. The shuffling cost is estimated only for schemes that shuffle software packages, such as random graph coloring, which is compared against our proposed SDA scheme in this work. Lower defense cost is more desirable.
\end{itemize} \subsection{Counterpart Baseline Schemes for Performance Comparison} \label{subsec:comparing_schemes} In this work, we compare the performance of our proposed SDA scheme against the No-adaptation (No-A), Random adaptation (Random-A), and Random graph coloring (Random-Graph-C) counterpart baseline schemes for a comparative performance analysis. Our SDA scheme uses the software diversity metric in Eq.~\eqref{eq:metric_sd} to select an edge to remove or add based on the concept of bond percolation, as discussed in Section \ref{subsec:percolation_theory}. To be specific, SDA first removes all edges between two connected nodes with the same software package, as shown in Step 1 of Algorithm~\ref{algo:SDA} (i.e., executing Algorithm~1 in the appendix file). Then SDA decides on a set of edges to be added or removed given $\rho$ (the fraction of edges to be added if $\rho>0$ or removed if $\rho<0$), as shown in Step 2 of Algorithm~\ref{algo:SDA}. The effect of $\rho$ on performance will be analyzed in Section~\ref{subsec:results_sensitivity} to identify the optimal $\rho$ value that best balances security and network connectivity. We experiment with various $\rho$ values in the range $[-1, 1]$, where $-1$ means removing all edges (such that no edges exist in the network) and $1$ means fully restoring the edges removed in Step 1. For example, SDA with $\rho = 1$ fully restores the edges lost in Step 1, while SDA with $\rho = 0$ executes only Step 1 (removing edges between two nodes with the same software package). SDA with $\rho = 0.6$ restores only 60\% of the edges lost in Step 1, while SDA with $\rho = -0.6$ removes 60\% of the edges in the network after Step 1. Which edges to remove or add (see Step 2 of Algorithm~\ref{algo:SDA}) significantly affects network security and resilience. Below we briefly discuss the three counterpart baseline schemes compared against our proposed SDA schemes: \begin{itemize} \item {\bf No-adaptation (No-A)}: This represents the case in which no adaptation is applied, thus showing the effect of attacks on the performance of the original network. However, we allow an IDS to detect attackers. When the IDS detects compromised nodes with probability $\gamma$, all edges connected to the detected attacker are disconnected in order to isolate the attacker, ultimately mitigating the spread of compromised nodes in the network. Therefore, when No-A is used, the adaptation cost can be high because the number of edges disconnected is affected by the network topology, which is one of the key factors impacting the degree of network vulnerability. \item {\bf Random adaptation (Random-A)}: This scheme first removes edges between two nodes with the same software package (i.e., executing Algorithm~1 in the appendix file) and then randomly adds edges between nodes with different software packages (see Algorithm 7 in the appendix file). In this scheme, we add the same number of edges as were lost due to the execution of Step 1. \item {\bf Random graph coloring (Random-Graph-C)}: This scheme uses a simple rule for each node to shuffle its software package to the least common software package without changing the network topology. As a special case, when a node has many neighbors, it may choose the least common software package among those used by its neighbors. It may occur that a node shuffles to its original software package.
In such a case, when the shuffled software package is the same as the original software package, we do not count it toward the shuffling cost in Eq.~\eqref{eq:metric_dc}. We treat this scheme as an adaptation scheme because it also involves changing the configuration of a node's software by using a different implementation, although it does not make any change to the network topology. \end{itemize} The pseudocode for SDA is presented in Algorithm~\ref{algo:SDA} and that for Random-A is described in Algorithm 7 of the appendix file. In our experiment, we compare the performance of No-A, Random-A, Random-Graph-C, and three variants of SDA with three different thresholds $\rho$ in terms of the four performance metrics discussed in Section~\ref{subsec:metrics}. We treat Random-A, Random-Graph-C, and the SDA schemes as adaptation schemes, while No-A is treated as a baseline scheme without adaptation. \begin{table}[t!] \caption{Key design parameters, their meanings, and their default values.} \vspace{-2mm} \begin{tabular}{|P{1cm}|p{5.5cm}|P{0.9cm}|} \hline \multicolumn{1}{|c}{Param.} & \multicolumn{1}{|c}{Meaning} & \multicolumn{1}{|c|}{Value} \\ \hline $N$ & Total number of nodes in a network & 1000 \\ $p$ & Connection probability between pairs of nodes in an ER network & 0.025 \\ $\gamma$ & Intrusion detection probability & 0.95 \\ $k$ & The upper bound of hops considered in calculating software diversity $sd_i (k, l)$ & [1,2] \\ $l$ & The upper bound of the number of attack paths considered in calculating software diversity $sd_i (k, l)$ & 1 \\ $n_r$ & Number of simulation runs & 100 \\ $N_s$ & Number of software packages available & [3,7] \\ $P_a$ & Percentage of attackers in a network & [10,30\%]\\ $\rho$ & Threshold of the fraction of edges adapted & $[-1,1]$ \\ $\mathbf{SV}$ & \vspace{-10mm}A vector of vulnerabilities associated with software packages, selected uniformly at random from $(0, 0.5]$ (i.e., $U (0, 0.5]$). For the maximum of 7 different software packages, the corresponding vulnerabilities in $\mathbf{SV}$ are used. & \scalebox{0.75}{$ \left[\begin{array}{c} 0.41 \\ 0.35 \\ 0.48 \\ 0.22\\ 0.16 \\ 0.19 \\ 0.12 \end{array}\right]^T$} \\ \hline \end{tabular} \label{table:param_default_values} \end{table} \subsection{Environment Setup} \begin{figure*}[!ht] \centering \subfigure[Dense Network]{ \includegraphics[width=0.251\textwidth, height=0.19\textwidth]{figs/fig8/dn.png}} \subfigure[Medium Network]{ \includegraphics[width=0.251\textwidth, height=0.19\textwidth]{figs/fig8/mn.png}} \subfigure[Sparse Network]{ \includegraphics[width=0.251\textwidth, height=0.19\textwidth]{figs/fig8/sn.png}} \caption{Effect of $\rho$ (fraction of edges to be adapted) on the performance of SDA in terms of the size of the giant component ($S_g$) and the fraction of compromised nodes ($P_c$). The optimal $\rho$ for the SDA scheme with respect to $S_g$ and $P_c$ in dense, medium dense, and sparse networks is identified as $\rho = -0.6$, $\rho = -0.4$, and $\rho = 1$, respectively.} \label{fig_sensitivity} \end{figure*} \subsubsection{Parameters and Data Collection} Table~\ref{table:param_default_values} summarizes the key parameters, their meanings, and their default values used in this work. We report the average of the performance measures collected over 100 simulation runs. In the experiment, we examine the effect of the following key design parameters on performance: (1) the attack density (i.e., the percentage of attackers); and (2) the number of software packages available.
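To illustrate how the data are collected, the following is a simplified Python sketch of one simulation run over an ER network with the default parameters of Table~\ref{table:param_default_values}. It compresses the epidemic process of Algorithm~8 of the appendix file (each attacker acts for a single round before possible IDS removal here) and is an illustrative approximation, not the exact simulator.

\begin{verbatim}
# Simplified sketch of one run collecting S_g and P_c. This is an
# illustrative approximation of Algorithm 8 of the appendix file:
# each attacker acts for one round before possible IDS removal.
import random
import networkx as nx

def run_once(G, s, SV, p_a=0.1, gamma=0.95, seed=0):
    rng = random.Random(seed)
    G = G.copy()
    sigma = {v: set() for v in G}     # packages whose flaws v knows
    frontier = set(rng.sample(list(G), int(p_a * len(G))))
    compromised = set(frontier)
    while frontier:
        nxt = set()
        for j in frontier:
            for i in list(G.neighbors(j)):
                if i in compromised:
                    continue
                known = s[i] == s[j] or s[i] in sigma[j]
                beta = 1.0 if known else SV[s[i] - 1]
                if rng.random() < beta:
                    sigma[j].add(s[i])    # learned via reconnaissance
                    compromised.add(i)
                    nxt.add(i)
            if rng.random() < gamma:      # IDS detection: isolate j
                G.remove_edges_from(list(G.edges(j)))
        frontier = nxt
    healthy = G.subgraph(set(G) - compromised)
    comps = list(nx.connected_components(healthy)) or [set()]
    return max(len(c) for c in comps) / len(G), len(compromised) / len(G)

G = nx.erdos_renyi_graph(1000, 0.025, seed=1)
s = {v: random.Random(v).randint(1, 5) for v in G}   # assigned packages
SV = [0.41, 0.35, 0.48, 0.22, 0.16]
runs = [run_once(G, s, SV, seed=r) for r in range(100)]
print(sum(r[0] for r in runs) / len(runs),   # mean S_g
      sum(r[1] for r in runs) / len(runs))   # mean P_c
\end{verbatim}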
For the ER network, we also study the effect of the network connection probability on performance in Section C.3 of the appendix file. \subsubsection{Network Topology Datasets} We set up four different undirected networks to evaluate the proposed work: (1) a sparse network from an observation of the Internet at the autonomous systems level~\cite{snapnets}; (2) a medium dense network derived from an Enron email network~\cite{snapnets}; (3) a dense Facebook ego network~\cite{snapnets}; and (4) an Erd{\"o}s-R{\'e}nyi (ER) random network~\cite{Newman10}. The network topologies and their degree distributions are shown in Figs.~1 and 2 of the appendix file. Except for the medium dense network, we use the original network topologies. For the medium dense network, in order to derive a network of a size comparable with the other networks (the Enron email network has 36,692 nodes and 183,831 edges), we generate a network with 985 nodes and 7,994 edges via the following procedure (a sketch of which is given at the end of this section): (i) rank all nodes in the Enron email network by degree in descending order; (ii) take the induced subgraph on the nodes with ranks from 501 to 1500 and identify the medium dense network as its largest connected component. \subsubsection{Optimal Parameter Settings Used for SDA} \label{subsec:results_sensitivity} {\bf Fraction of edges to be adapted ($\rho$)}: We have conducted a sensitivity analysis of $\rho$ for the SDA scheme in terms of maximizing the size of the giant component ($S_g$) for network resilience without overly increasing the fraction of compromised nodes ($P_c$) for network security. As shown in Fig.~\ref{fig_sensitivity}, the optimal $\rho$ for the SDA scheme with respect to $S_g$ and $P_c$ in dense, medium dense, and sparse networks has been identified as $\rho = -0.6$, $\rho = -0.4$, and $\rho = 1$, respectively. Due to space constraints, the sensitivity analysis of $\rho$ for the ER random network is reported in Appendix C.2 of the appendix file, from which we observed that the optimal $\rho$ with respect to $S_g$ and $P_c$ for the ER random network is $-0.6$. In summary, the optimal values of $\rho$ are observed at $-0.6$, $-0.4$, $1$, and $-0.6$ for the dense, medium dense, sparse, and ER random networks, respectively. \vspace{1mm} \noindent {\bf The maximum number of attack paths ($l$) and the maximum hop distance in each attack path ($k$)}: The network type (i.e., dense, medium dense, sparse, or ER random) affects node density, which in turn can affect the optimal setting of $l$ and $k$ under which SDA best achieves both security (i.e., a low fraction of compromised nodes) and network resilience (i.e., a large giant component). We have conducted a sensitivity analysis of $l$ and $k$ on the performance of the SDA scheme in all four types of networks. Due to space constraints, we place the sensitivity analysis of $l$ and $k$ on the performance of SDA in Sections D and E of the appendix file. In summary, for the dense, medium dense, and ER random networks, we selected $k=1$ and $l=1$ to calculate the software diversity $sd_i (k, l)$ for each node in the network, because we observed no significant performance improvement with $k> 1$ and $l>1$. For the sparse network, we did not observe high sensitivity for $l> 1$; however, for $k$, we observed that SDA performs best when $k=2$ with $\rho=1$. Thus, we selected $k=2$ and $l=1$ for the sparse network.
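For reproducibility, the derivation of the medium dense network described in the network topology datasets above can be sketched as follows, using NetworkX; the file name of the SNAP edge list is an assumption.

\begin{verbatim}
# Sketch of the medium dense network derivation from the Enron email
# network; 'email-Enron.txt' is an assumed local copy of the SNAP
# edge list.
import networkx as nx

G = nx.read_edgelist("email-Enron.txt", nodetype=int)

# (i) Rank all nodes by degree in descending order.
ranked = sorted(G.nodes(), key=lambda v: G.degree(v), reverse=True)

# (ii) Induce the subgraph on the nodes ranked 501..1500 and keep
# its largest connected component as the medium dense network.
sub = G.subgraph(ranked[500:1500])
medium = sub.subgraph(max(nx.connected_components(sub), key=len))
print(medium.number_of_nodes(), medium.number_of_edges())
\end{verbatim}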
\section{Experimental Results and Analysis} \label{sec:exp-result-analysis} In this section, we present the experimental results for a comparative performance analysis of the proposed SDA scheme against the counterpart baseline schemes and provide physical interpretations of the results. In our experiment, we compare six schemes: (1) No-adaptation (No-A); (2) Random adaptation (Random-A); (3) Random graph coloring (Random-Graph-C); (4) SDA with $\rho=0$; (5) SDA with $\rho=1$; and (6) SDA with optimal $\rho$. See Section~\ref{subsec:comparing_schemes} for more detail on how each scheme is implemented. The sixth scheme, ``SDA with optimal $\rho$'', is network-type dependent. As discussed in Section \ref{subsec:results_sensitivity}, the optimal values of $\rho$ are observed at $-0.6$, $-0.4$, $1$, and $-0.6$ for the dense, medium dense, sparse, and ER random networks, respectively. Initially, a set of attackers is randomly and uniformly distributed over the network based on the percentage-of-attackers parameter $P_a$, and all such attackers perform epidemic attacks as described in Section~\ref{subsec:attack_model}. See Algorithm 8 of the appendix file for details on how the attackers perform epidemic attacks. Below we report the experimental results only under the dense, medium dense, and sparse networks. The experimental results under the ER random network are reported in Section C.3 of the appendix file due to space constraints. \begin{figure*}[!ht] \centering \subfigure{ \includegraphics[width=0.65\textwidth, height=0.02\textwidth]{figs/fig2/legend.png}}\vspace{-1em} \setcounter{subfigure}{0} \subfigure[Fraction of compromised nodes ($P_c$)]{ \includegraphics[width=0.25\textwidth, height=0.19\textwidth]{figs/fig2/P_c.png}} \subfigure[Size of a giant component ($S_g$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig2/S_g.png}}\hspace{-0.85em} \subfigure[Software diversity ($SD$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig2/SD.png}} \subfigure[Defense cost ($D_c$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig2/D_c.png}}\hspace{-0.85em} \caption{Effect of varying the fraction of attackers ($P_a$) under a dense network.} \label{fig_p_a_dn} \end{figure*} \begin{figure*}[!ht] \centering \subfigure{ \includegraphics[width=0.65\textwidth, height=0.02\textwidth]{figs/fig3/legend.png}}\vspace{-1em} \setcounter{subfigure}{0} \subfigure[Fraction of compromised nodes ($P_c$)]{ \includegraphics[width=0.25\textwidth, height=0.19\textwidth]{figs/fig3/P_c.png}}\hspace{-0.5em} \subfigure[Size of a giant component ($S_g$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig3/S_g.png}}\hspace{-0.85em} \subfigure[Software diversity ($SD$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig3/SD.png}} \subfigure[Defense cost ($D_c$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig3/D_c.png}}\hspace{-0.85em} \caption{Effect of the number of software packages ($N_s$) under a dense network.} \label{fig_n_s_dn} \end{figure*} \subsection{Comparative Performance Analysis under a Dense Network} \begin{figure*}[!ht] \centering \subfigure{ \includegraphics[width=0.65\textwidth, height=0.02\textwidth]{figs/fig4/legend.png}}\vspace{-1em} \setcounter{subfigure}{0} \subfigure[Fraction of compromised nodes ($P_c$)]{ \includegraphics[width=0.25\textwidth, height=0.19\textwidth]{figs/fig4/P_c.png}}\hspace{-0.5em} \subfigure[Size of a giant component ($S_g$)]{
\includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig4/S_g.png}}\hspace{-0.85em} \subfigure[Software diversity ($SD$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig4/SD.png}}\hspace{-0.85em} \subfigure[Defense cost ($D_c$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig4/D_c.png}}\hspace{-0.85em} \caption{Effect of varying the fraction of attackers ($P_a$) under a medium network.} \label{fig_p_a_mn} \end{figure*} \begin{figure*}[!ht] \centering \subfigure{ \includegraphics[width=0.65\textwidth, height=0.02\textwidth]{figs/fig5/legend.png}}\vspace{-1em} \setcounter{subfigure}{0} \subfigure[Fraction of compromised nodes ($P_c$)]{ \includegraphics[width=0.25\textwidth, height=0.19\textwidth]{figs/fig5/P_c.png}}\hspace{-0.5em} \subfigure[Size of a giant component ($S_g$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig5/S_g.png}}\hspace{-0.85em} \subfigure[Software diversity ($SD$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig5/SD.png}} \subfigure[Defense cost ($D_c$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig5/D_c.png}}\hspace{-0.85em} \caption{Effect of the number of software packages ($N_s$) under a medium network.} \label{fig_n_s_mn} \end{figure*} \label{subsec:results_dense} \subsubsection{Effect of Varying the Fraction of Initial Attacks ($P_a$)} Fig.~\ref{fig_p_a_dn} shows the effect of varying the attack density ($P_a$) on the performance of the six schemes in terms of the four metrics in Section~\ref{subsec:metrics} under the dense network, whose network topology and degree distribution are shown in Fig.~1 (a) of the appendix file. We observe that increasing the percentage of attackers ($P_a$) decreases the software diversity ($SD$) and the size of the giant component ($S_g$) while increasing the percentage of compromised nodes ($P_c$) and the defense cost ($D_c$). We note that when more nodes are compromised, the defense cost also increases, since more site percolation based adaptations need to be performed when compromised nodes are detected by the IDS (i.e., disconnecting all edges of a detected, compromised node). The overall performance order with respect to $P_c$ (representing network security) and $S_g$ (representing network connectivity and resilience) is observed as: SDA with optimal $\rho$ (set at $-0.6$) $\geq$ SDA with $\rho = 0 \geq \text{Random-Graph-C} \approx \text{No-A} \geq$ SDA with $\rho =1 \geq$ Random-A. It is apparent that the network density of a given network significantly affects both security and performance: SDA with $\rho=-0.6$ and SDA with $\rho=0$ have relatively fewer edges after adaptation and perform better than the other schemes in terms of $P_c$, $S_g$, and $SD$ (i.e., the average software diversity level), as shown in Figs.~\ref{fig_p_a_dn} (a)-(c). In Fig.~\ref{fig_p_a_dn} (d), SDA with optimal $\rho$ (set at $-0.6$) also shows significant resilience with a relatively low defense cost ($D_c$) as $P_a$ increases. The overall performance order of the other five schemes in $D_c$ is: Random-Graph-C $\geq$ Random-A $\geq$ SDA with $\rho =1 \geq$ SDA with $\rho =0 \geq$ No-A. Not only do the SDA schemes outperform the counterpart baseline schemes in $P_c$, $S_g$, and $SD$, but the defense cost of the SDA schemes is also significantly lower than that of Random-Graph-C and comparable with that of Random-A (e.g., for SDA with optimal $\rho =-0.6$) and No-A (e.g., for SDA with $\rho =0$).
This is a significant merit, as SDA-based schemes outperform the counterpart baseline schemes with relatively low defense cost. \subsubsection{Effect of Varying the Number of Software Packages ($N_s$)} Fig.~\ref{fig_n_s_dn} shows the effect of varying the number of software packages available ($N_s$) on the performance of the six schemes with respect to the metrics defined in Section~\ref{subsec:metrics} under the dense network. We observe that increasing the number of software packages available ($N_s$) increases the software diversity ($SD$) and the size of the giant component ($S_g$) while decreasing the percentage of compromised nodes ($P_c$) and the defense cost ($D_c$). Note that based on the concept of $N$-version programming, the number of software packages ($N_s$) here refers to the number of versions implemented for the same piece of software. Hence, as $N_s$ increases, the software diversity strength increases, resulting in a decrease of the percentage of nodes compromised by attacks, an increase of the network connectivity, and a decrease of the defense cost because fewer nodes are compromised. The overall performance order in $P_c$, $S_g$, and $SD$ is very similar to what we observed in Fig.~\ref{fig_p_a_dn}, with SDA with optimal $\rho=-0.6$ outperforming all other schemes. For $D_c$, SDA with optimal $\rho=-0.6$ generates a defense cost comparable to that generated by Random-A and in between those generated by No-A (lowest cost) and Random-Graph-C (highest cost). \subsection{Comparative Performance Analysis Under a Medium Network} \label{subsec:results_mn} \begin{figure*}[!ht] \centering \subfigure{ \includegraphics[width=0.6\textwidth, height=0.02\textwidth]{figs/fig6/legend.png}}\vspace{-1em} \setcounter{subfigure}{0} \subfigure[Fraction of compromised nodes ($P_c$)]{ \includegraphics[width=0.25\textwidth, height=0.19\textwidth]{figs/fig6/P_c.png}}\hspace{-0.5em} \subfigure[Size of a giant component ($S_g$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig6/S_g.png}}\hspace{-0.5em} \subfigure[Software diversity ($SD$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig6/SD.png}} \subfigure[Defense cost ($D_c$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig6/D_c.png}}\hspace{-0.85em} \caption{Effect of varying the fraction of attackers ($P_a$) under a sparse network.} \label{fig_p_a_sn} \end{figure*} \begin{figure*}[!ht] \centering \subfigure{ \includegraphics[width=0.6\textwidth, height=0.02\textwidth]{figs/fig7/legend.png}}\vspace{-1em} \setcounter{subfigure}{0} \subfigure[Fraction of compromised nodes ($P_c$)]{ \includegraphics[width=0.25\textwidth, height=0.19\textwidth]{figs/fig7/P_c.png}}\hspace{-0.4em} \subfigure[Size of a giant component ($S_g$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig7/S_g.png}}\hspace{-0.85em} \subfigure[Software diversity ($SD$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig7/SD.png}} \subfigure[Defense cost ($D_c$)]{ \includegraphics[width=0.24\textwidth, height=0.19\textwidth]{figs/fig7/D_c.png}}\hspace{-0.85em} \caption{Effect of the number of software packages ($N_s$) under a sparse network.} \label{fig_n_s_sn} \vspace{-3mm} \end{figure*} \subsubsection{Effect of Varying the Fraction of Initial Attacks ($P_a$)} Fig.~\ref{fig_p_a_mn} demonstrates the effect of varying the percentage of initial attacks on the metrics defined in Section~\ref{subsec:metrics} under the medium network, whose network topology
and degree distribution are shown in Fig.~1 (b) of the appendix file. Similar to Fig.~\ref{fig_p_a_dn}, Fig.~\ref{fig_p_a_mn} shows that increasing the percentage of attackers ($P_a$) decreases the software diversity ($SD$) and the size of the giant component ($S_g$) while increasing the percentage of compromised nodes ($P_c$) and the defense cost ($D_c$). The overall performance order in terms of $P_c$ (representing network security) and $S_g$ (representing network connectivity and resilience) is: SDA with optimal $\rho = -0.4 \geq$ SDA with $\rho =0 \geq$ SDA with $\rho =1 \approx$ Random-A $\geq$ Random-Graph-C $\approx$ No-A. In terms of $SD$ (software diversity), a similar performance order is observed, except that Random-Graph-C has a higher $SD$ than No-A. These results demonstrate that the SDA schemes are clearly more effective than traditional software shuffling schemes that do not change the network topology (e.g., Random-Graph-C). For the defense cost ($D_c$), the overall performance order (the lower the cost the better) is: Random-Graph-C $\geq$ SDA with optimal $\rho =-0.4 \geq$ Random-A $\approx$ SDA with $\rho =1 \geq$ SDA with $\rho =0 \geq$ No-A. Again, these results support the claim that SDA-based schemes incur relatively low cost while outperforming all counterpart baseline schemes in $P_c$, $S_g$, and $SD$. \subsubsection{Effect of Varying the Number of Software Packages ($N_s$)} Fig.~\ref{fig_n_s_mn} shows the effect of $N_s$ on performance under the medium network. We again observe that increasing the number of software packages available ($N_s$) increases the software diversity ($SD$) and the size of the giant component ($S_g$) while decreasing the percentage of compromised nodes ($P_c$) and the defense cost ($D_c$). The overall performance order is the same as that in Fig.~\ref{fig_p_a_mn}, with SDA with optimal $\rho=-0.4$ outperforming all other schemes in terms of $SD$, $S_g$, and $P_c$, and performing comparably to Random-A in terms of $D_c$. By comparing Fig.~\ref{fig_n_s_mn} (for the medium dense network) with Fig.~\ref{fig_n_s_dn} (for the dense network), we also observe that SDA with optimal $\rho$ is more effective in the dense network. We attribute this to node density: SDA is more effective when there are many connections between nodes in the network, allowing SDA to decide which edges to add or remove so as to effectively maximize the software diversity ($SD$) and the size of the giant component ($S_g$), thereby minimizing the percentage of compromised nodes ($P_c$). \subsection{Comparative Performance Analysis Under a Sparse Network} \label{subsec:results_sparse} \subsubsection{Effect of Varying the Fraction of Initial Attacks ($P_a$)} Fig.~\ref{fig_p_a_sn} shows the effect of varying the initial attack density ($P_a$) on the performance of the five schemes with respect to the four performance metrics discussed in Section \ref{subsec:metrics} under the sparse network, whose network topology and degree distribution are shown in Fig.~1 (c) of the appendix file. Unlike in the cases of the medium and dense networks, the SDA with optimal $\rho$ scheme in the sparse network coincides with the SDA with $\rho=1$ scheme, which restores all edges lost in Step 1 (i.e., $\rho =1$). Therefore, we only show comparative experimental results for the five schemes. In the sparse network, the degrees of most nodes are very small, implying that nodes are minimally connected, with most nodes having only 1--3 neighbors.
This means that the network itself is relatively much less vulnerable to epidemic attacks, because the attackers inherently cannot reach many nodes to compromise due to network sparsity. On the other hand, it also means that with a higher percentage of attackers, the damage from each attack success (i.e., failing or compromising a node) is more detrimental: it results in a much smaller giant component, representing significantly lower network resilience (or availability), which greatly hinders the provision of continuous services due to a lack of available paths from a source to a destination. This trend can be clearly observed in the sharp decrease in the size of the giant component ($S_g$) under high attack density (i.e., $P_a=0.24$) when compared to the corresponding results under the medium network (i.e., Fig.~\ref{fig_p_a_mn} (b)). A more interesting result is that the overall performance trend does not follow the previous results shown under the dense network (i.e., Fig.~\ref{fig_p_a_dn}) and the medium network (i.e., Fig.~\ref{fig_p_a_mn}), which have a sufficiently larger number of edges than the sparse network. The performance order in $S_g$ is: SDA with optimal $\rho = 1 \geq$ Random-A $\geq$ Random-Graph-C $\approx$ No-A $\geq$ SDA with $\rho =0$. Since the original network itself is sparsely connected, SDA with $\rho=0$ is not as effective as in our previous results for $S_g$ under the dense network (see Fig.~\ref{fig_p_a_dn}) and the medium network (see Fig.~\ref{fig_p_a_mn}). SDA with optimal $\rho =1$, with all edges lost in Step 1 restored, performs best in $S_g$. This result is reasonable because the sparse network does not need to disconnect more edges: it is already sparse enough and significantly less vulnerable to epidemic attacks. The overall performance with respect to $P_c$ is very similar among all five schemes, with slightly better results for the two SDA schemes. Similar to the result shown for the dense network, Random-Graph-C exhibits the same level of performance as No-A in $P_c$ and $S_g$, but with a higher software diversity ($SD$); this indicates the advantage of topology-aware adaptation in a sparse network. For the defense cost ($D_c$), the performance order is: Random-Graph-C $\geq$ Random-A $\geq$ SDA with optimal $\rho =1 \geq$ SDA with $\rho =0 \geq$ No-A. It is interesting to observe that all SDA-based schemes incur a lower defense cost than Random-A and Random-Graph-C, possibly due to fewer compromised nodes in the system and thus less frequent IDS interventions. \subsubsection{Effect of Varying the Number of Software Packages ($N_s$)} Fig.~\ref{fig_n_s_sn} shows the effect of varying the number of software packages available ($N_s$) on the performance of the five schemes under the sparse network. As expected, as $N_s$ increases, $SD$ (software diversity) increases and $D_c$ (defense cost) decreases. As $N_s$ increases, $S_g$ (the size of the giant component) also increases for all schemes except the SDA with optimal $\rho=1$ scheme. The reason is that when $\rho=1$, SDA restores all edges removed in Step 1 (see Step 1 in Algorithm~\ref{algo:SDA}). When $N_s$ is higher, fewer edges are removed in Step 1 because of the smaller probability that two neighboring nodes have the same software package.
Consequently, when $N_s$ is higher, the same smaller number of edges is added back in Step 2 (see Step 2 in Algorithm~\ref{algo:SDA}), so the size of the giant component in the adapted topology is not necessarily larger than when $N_s$ is lower. By comparing Fig.~\ref{fig_n_s_sn} (for the sparse network) with Fig.~\ref{fig_n_s_mn} (for the medium dense network) and Fig.~\ref{fig_n_s_dn} (for the dense network), we notice that SDA with optimal $\rho$ is most effective in the dense network. We conclude that our proposed SDA algorithm is most effective in a dense network, in which SDA can decide which edges among many to add or remove so as to effectively maximize the software diversity ($SD$) and the size of the giant component ($S_g$) while minimizing the percentage of compromised nodes ($P_c$). \section{Conclusions} \label{sec:conclusion} \subsection{Summary} In this section, we summarize the contributions of this work: \begin{itemize} \item We proposed a software diversity metric based on the vulnerabilities of the attack paths reachable to each node. We called this scheme `software diversity-based adaptation' (SDA) and used it to adapt edges to generate a resilient network topology that can minimize security vulnerability while maximizing network resilience (or connectivity) to provide seamless services under epidemic attacks. \item We conducted extensive simulation experiments to demonstrate the performance of the proposed SDA scheme compared against existing counterpart baseline schemes (i.e., random adaptation, random graph coloring, and no adaptation). Via these experiments, we found that our proposed SDA scheme outperforms the counterpart baseline schemes in terms of the fraction of nodes compromised by epidemic attacks, the size of the giant component, and the level of software diversity. In addition, we analyzed the defense cost associated with each scheme and showed that the proposed SDA scheme incurs a defense cost comparable to existing counterparts. \item We also identified the optimal settings for executing SDA to meet the imposed performance goals. This allows each node to efficiently compute its software diversity value and use it for adapting edges to maximize its software diversity, thereby minimizing security vulnerability while maximizing network connectivity. \item We conducted an extensive simulation study with four different real networks in order to investigate the effect of network density on the optimal setting of SDA under which it can best achieve the dual goals of security (i.e., minimum vulnerability) and performance (i.e., maximum network connectivity). \item We effectively incorporated techniques from percolation theory in the network science domain into software diversity-based security analysis in the computer science domain. Specifically, from the computer science perspective, the proposed software diversity metric uses attack path vulnerabilities, which are derived from the software vulnerabilities of the intermediate nodes on the attack paths. From the network science perspective, this work adopts percolation theory to examine the effect of software diversity-based edge adaptation on network resilience, measured by the size of the giant component.
Based on the rationale that network interconnectivity can increase both network vulnerability and network connectivity~\cite{Barabasi2016}, this work addressed this tradeoff in the context of cybersecurity, which has not been addressed in the literature. \end{itemize} \subsection{Key Findings} From our extensive simulation experiments, we obtained the following key findings: \begin{itemize} \item Overall, under epidemic attacks, more interconnectivity between nodes in a network introduces higher security vulnerability while yielding a larger giant component, implying higher network connectivity. In addition, when two nodes use the same software package and the vulnerability of that package is known to an attacker, the attacker gains a significant advantage. How nodes are connected to each other is highly critical in determining the network's vulnerability to epidemic attacks. \item Even if two network topologies have the same network density (i.e., the same number of edges), how nodes are connected to each other can vastly change the extent of the security vulnerability to epidemic attacks. It is even possible that a sparser network introduces more security vulnerability than a denser network, depending on how the nodes are connected to each other. \item It is not necessary to consider the entire network topology for each node to make effective edge adaptation decisions that minimize security vulnerability while maximizing network connectivity. Our SDA algorithm allows each node to make effective edge adaptation decisions in a lightweight manner. This is because edge adaptation decisions are determined based on a ranking of node software vulnerability values, which is more flexible than using a threshold, to achieve the dual goals of security and performance. \item Under the medium dense and dense networks, our SDA scheme significantly outperforms the existing baseline schemes. Under the sparse network, our SDA scheme still outperforms the other schemes, although the performance gap is less significant. We conclude that our SDA scheme is most effective in a dense network, in which SDA can effectively decide which edges among many existing connections to add or remove so as to maximize software diversity and the size of the giant component while minimizing the percentage of compromised nodes. \item Our proposed SDA scheme is highly resilient to harsh environments. Its performance gain relative to the baseline schemes increases as the environment becomes harsher, i.e., as the percentage of attackers increases or as the number of software packages decreases. This demonstrates the high resilience of the proposed SDA scheme in a highly disadvantageous environment. \end{itemize} \begin{comment} \subsection{Future Work} We plan to pursue the following future research directions: \begin{itemize} \item We will consider system dynamics derived from node mobility and node/edge adaptation upon learning the security status of nodes, which have not been considered in the present work. We will link our current software diversity-based network adaptation strategy to diversity-based network topology shuffling techniques in moving target defense. \item We will consider a multiagent deep reinforcement learning (mDRL) algorithm for each node to autonomously determine edge adaptation decisions, which can lead to an optimal, robust network topology that maximizes network resilience under attacks.
\item We will consider more sophisticated attack types, such as advanced persistent threat (APT) attacks, in which a highly intelligent attacker persistently conducts multi-staged attacks based on the cyber kill chain~\cite{okhravi2013survey}. \item We will consider targeted attacks based on various types of centrality metrics and investigate their effect on network resilience and system security. \end{itemize} \end{comment} \bibliographystyle{IEEETranSN}
{ "timestamp": "2020-07-17T02:21:49", "yymm": "2007", "arxiv_id": "2007.08469", "language": "en", "url": "https://arxiv.org/abs/2007.08469" }
\section{Introduction} \label{sec:1} At the intraseasonal to interannual time scales, the variability of the large-scale atmospheric circulation in the mid-latitudes of both hemispheres is dominated by the ``annular modes'', which are usually defined based on empirical orthogonal function (EOF) analysis of zonal-mean meteorological fields (e.g., \citealt{Kidson1988,ThompsonWallace1998,ThompsonWallace2000,Lorenz2001,Lorenz2003,ThompsonWoodworth2014,ThompsonLi2015}). The barotropic annular modes are often derived as the first (i.e., leading) EOF (EOF1) of zonal-mean zonal wind, which exhibits a dipolar meridional structure and describes a north-south meandering of the eddy-driven jet. Note that in this paper, the focus is on the barotropic annular modes, hereafter simply called annular modes (see \citealt{ThompsonWoodworth2014,ThompsonBarnes2014}, and \citealt{ThompsonLi2015} for discussions about the ``baroclinic annular modes''). The second EOF of zonal-mean zonal wind (EOF2) has a tripolar meridional structure centered on the jet, describing a strengthening and weakening of the eddy-driven jet (i.e., jet pulsation). By construction, EOF1 and EOF2 (and any two EOFs) are orthogonal and their associated time series (i.e., principal components, PCs), sometimes called zonal indices, are independent at zero time lag. The persistence of the annular mode (EOF1) and its underlying dynamics have been the subject of extensive research and debate in the past three decades \citep[e.g.,][]{Robinson1991,Branstator1995,FeldsteinLee1998,Robinson2000,Lorenz2001,Lorenz2003,Gerber2007,Gerber2008,ChenPlumb2009,SimpsonShepherd2013,Zurita2014,Nie2014,Byrne2016,MaHassanzadehKuang2017,HassanzadehKuang2019}. Many of the aforementioned studies have pointed to a positive eddy-zonal flow feedback mechanism as the source of the persistence: The zonal wind and temperature anomalies associated with the annular mode (EOF1) modify the generation and/or propagation of the synoptic eddies in the quasi-steady limit (greater than 7 days) in such a way that the resulting eddy fluxes reinforce the annular mode (see \citet{HassanzadehKuang2019} and the discussion and references therein). Most notably, \cite{Lorenz2001} developed a linear eddy-zonal flow feedback model (LH01 model hereafter) for the annular modes by regressing the anomalous eddy momentum flux divergence onto the zonal index of EOF1 ($z_1$) and interpreting correlations between $z_1(t)$ and the regressed momentum flux divergence ($m_1(t)$) at long lags (greater than 7 days) as evidence for eddy-zonal flow feedbacks, i.e., feedbacks of EOF1 onto itself. \citet{Lorenz2001} developed a similar model, separately, for EOF2 and found positive and weak eddy-zonal flow feedbacks for EOF1 and EOF2, respectively, consistent with the longer persistence of EOF1 compared to EOF2. Such single-EOF eddy-zonal flow feedback models have been used in most of the subsequent studies of the annular modes \citep[e.g.,][]{Lorenz2003, SimpsonShepherd2013,Lorenz2014,Robert2017,MaHassanzadehKuang2017,BoljkaShepherd2018,HassanzadehKuang2019,Lindgren2020}. While EOF1 and EOF2 are independent at zero lag, a few previous studies have pointed out that these two EOFs can be correlated at long lags (e.g., greater than 10 days), and that in fact the combination of these two leading EOFs represents coherent meridional propagation of the zonal-mean flow anomalies.
Such propagating regimes have been observed in both hemispheres in reanalysis data \citep[e.g.,][]{Feldstein1998,FeldsteinLee1998,SheshadriPlumb2017}. Anomalous zonal wind typically emerges in low latitudes and migrates mainly poleward over a few months, although non-propagating behavior can also appear in some instances (see Fig.~1 of \citealt{SheshadriPlumb2017} and Fig.~\ref{fig:6} in this paper). Similar behaviors have also been reported in general circulation models (GCMs) (e.g., \citealt{JamesDodd1996,SonLee2006,SonLee2008,SheshadriPlumb2017}). \cite{SonLee2006} found that the leading mode of variability in an idealized dry GCM can be either the propagating or the non-propagating regime depending on the details of the thermal forcing imposed in the model. They also found that unlike in the non-propagating regimes, the $z_1$ and $z_2$ of the propagating regimes are strongly correlated at long lags, peaking around $50$~days (see their Fig.~3; also Fig.~\ref{fig:4}b of the present paper). Furthermore, \cite{SonLee2006} reported that non-propagating regimes are often characterized by a single time-mean jet with a dominant EOF1 (in terms of the explained variance), while the propagating regimes are characterized by a double time-mean jet in the mid-latitudes with the variance associated with EOF2 being at least half of the variance of EOF1. In addition, \cite{SonLee2008} found the $e$-folding decorrelation time scale of $z_1$ in the propagating regime to be much shorter than that of the non-propagating regime. The long $e$-folding decorrelation time scales of the annular modes in the non-propagating regime were attributed to an unrealistically strong positive EOF1-onto-EOF1 feedback, while the reason behind the reduction in the persistence of the annular modes in the propagating regime remained unclear. More recently, \citet{SheshadriPlumb2017} presented further evidence for the existence of propagating and non-propagating regimes and strong lagged correlations between $z_1$ and $z_2$ in reanalysis data of the Southern Hemisphere (SH) and in idealized GCMs. Moreover, they elegantly showed, using a principal oscillation patterns (POP) analysis \citep{Hasselmann1988,Penland1989}, that EOF1 and EOF2 are in fact manifestations of a single, decaying-oscillatory coupled mode of the dynamical system. Specifically, they found that EOF1 and EOF2 are, respectively, the real and imaginary parts of a single POP mode, which describes the dominant aspects of the spatio-temporal evolution of zonal wind anomalies. \citet{SheshadriPlumb2017} also showed that in the propagating regime, the auto-correlation functions of $z_1$ and $z_2$ decay non-exponentially. Given the above discussion, a single-EOF model is not enough to describe a propagating regime because EOF1 and EOF2 in this regime are strongly correlated at long lags and because the auto-correlation functions of the associated PCs do not decay exponentially (but rather show some oscillatory behavior too). From the perspective of eddy-zonal flow feedbacks, one may wonder whether, in the propagating regime, there are cross-EOF feedbacks in addition to the previously studied feedback of EOF1 (EOF2) onto EOF1 (EOF2). In cross-EOF feedbacks, EOF1 (EOF2) changes the eddy forcing of EOF2 (EOF1) in the quasi-steady limit. Therefore, there is a need to extend the single-EOF model of LH01 and build a model that includes, at a minimum, both leading EOFs and accounts for their cross feedbacks.
The objective of the current study is to develop such a model and to use it to estimate the effects of the cross-EOF feedbacks on the variability of propagating annular modes. The paper is structured as follows: Section~\ref{sec:2} compares the characteristics of $z_1$, $z_2$, $m_1$, and $m_2$ for the non-propagating and propagating annular modes in reanalysis and idealized GCMs. In Section~\ref{sec:3}, we develop a linear eddy-zonal flow feedback model that accounts for cross-EOF feedbacks, validate the model using synthetic data from a stochastic prototype, discuss the key properties of the analytical solution of this model, and apply this model to data from reanalysis and an idealized GCM. The paper ends with concluding remarks in Section~\ref{sec:4}. \section{Propagating annular modes in an idealized GCM and reanalysis} \label{sec:2} In this section, we will examine the basic characteristics and statistics of propagating annular modes in an idealized GCM (the dry dynamical core) and reanalysis. We focus on the southern annular mode, which makes it easier to compare the results of the reanalysis and the idealized aquaplanet GCM simulations. We will start with the idealized GCM to demonstrate the characteristics of the propagating and non-propagating annular modes. \subsection{An idealized GCM: The dry dynamical core} \label{sec:21} We use the Geophysical Fluid Dynamics Laboratory (GFDL) dry dynamical core GCM. The GCM is run with a flat, uniform lower boundary (i.e., an aquaplanet) with T63 spectral resolution and 40 evenly spaced sigma levels in the vertical for 50,000-day integrations after spin-up. The physics of the model is based on \cite{Held_Suarez1994}, an idealized configuration for generating a realistic global circulation with minimal parameterization \citep{Held2005,Jeevanjee2017}. All diabatic processes are represented by Newtonian relaxation of the temperature field toward a prescribed equilibrium profile, and Rayleigh friction is included in the lower atmosphere to mimic the interactions with the boundary layer. \begin{figure} \centering \includegraphics[width=19pc,angle=0,trim={2cm 1cm 1cm 2cm},clip]{cross_cor_wind_recons.pdf} \caption{One-point lag-correlation maps of the vertically averaged zonal-mean zonal wind anomalies $\langle\overline{u}\rangle$, reconstructed from projections onto the two leading EOFs of $\langle\overline{u}\rangle$ for the (a) non-propagating regime and (b) propagating regime in two setups of an idealized GCM. The base latitude is at 30$^{\circ}$S and the contour interval is 0.1. Regions enclosed by contour lines denote values significant at the 95\% level.} \label{fig:1} \end{figure} The non-propagating and propagating regimes are produced in two slightly different setups of this model. For the setup with the non-propagating regime, we use the standard configuration of \cite{Held_Suarez1994}, which employs an analytical profile approximating a troposphere in unstable radiative-convective equilibrium and an isothermal stratosphere for Newtonian relaxation. For the setup with the propagating regime, we follow an approach similar to the one used by \cite{SheshadriPlumb2017}. In this setup, for the equilibrium temperature profile in the troposphere and stratosphere, we use the perpetual-solstice version of the equilibrium temperature specifications used in \cite{LubisHuang2018}, calculated from a rapid radiative transfer model (RRTM), with winter conditions in the SH.
As will be seen later, these choices result in a large-scale circulation with reasonable annular mode time scales in the SH. In Fig.~\ref{fig:1}, we show, following \citet{SonLee2006}, the one-point lag-correlation maps for the vertically averaged zonal-mean zonal wind anomalies $\langle\overline{u}\rangle$ reconstructed from projections onto the two leading EOFs of $\langle\overline{u}\rangle$ for the two setups (hereafter, angle brackets and overbars denote the vertical and zonal averages, respectively). The anomalies are defined as the deviations from the time mean. The non-propagating and propagating regimes are clearly seen in Figs.~\ref{fig:1}a and \ref{fig:1}b, respectively. In the latter, the propagating anomalies emerge in low latitudes and propagate generally poleward over the course of 3-4 months. In contrast, the non-propagating regime is characterized by persistent zonal flow anomalies in the mid-latitudes (Fig.~\ref{fig:1}a). To understand the relationship between zonal-mean zonal wind and eddy forcing in the non-propagating and propagating annular modes, the vertically averaged zonal-mean zonal wind anomalies ($\langle\overline{u}\rangle$) and the vertically averaged zonal-mean eddy momentum flux convergence anomalies ($ \langle\overline{F}\rangle$) are projected onto the leading EOFs of $\langle\overline{u}\rangle$ following \citet{Lorenz2001}. The time series of the zonal index ($z$) and eddy forcing ($m$) associated with EOF1 and EOF2 are formulated as: \begin{equation} z_{1,2}(t) = \frac{ \bf{\langle\overline{u} \rangle} \mathit{(t)}\; \bf{We_\mathrm{{1,2}}}}{\sqrt{\bf{e}^{T}_\mathrm{{1,2}}\bf{We_\mathrm{{1,2}}}}}, \end{equation} \begin{equation} m_{1,2}(t) = \frac{ \bf{\langle\overline{F} \rangle} \mathit{(t)}\; \bf{We_\mathrm{{1,2}}}}{\sqrt{\bf{e}^{T}_\mathrm{{1,2}}\bf{We_\mathrm{{1,2}}}}}, \end{equation} where $z_{1,2}$ ($m_{1,2}$) denotes the component of the field $\langle\overline{u}\rangle$ ($ \langle\overline{F}\rangle$) that projects onto the latitudinal structure of the two leading EOFs. $\bf{\langle\overline{u} \rangle} \mathit{(t)}$ and $\bf{\langle\overline{F} \rangle} \mathit{(t)}$ are $\langle \overline{u} \rangle (\phi,t)$ and $\langle \overline{F} \rangle (\phi,t)$ with their latitude dimension vectorized, $\bf{W}$ is a diagonal matrix whose elements are the $\cos(\phi)$ weighting used when defining the EOF structure $\bf{e}$, and $\phi$ is latitude \citep{SimpsonShepherd2013,MaHassanzadehKuang2017}. Here, the vertically averaged zonal-mean eddy momentum flux convergence $ \langle\overline{F}\rangle$ is calculated in spherical coordinates as: \begin{equation} \langle\overline{F} \rangle (\phi,t) =- \frac{1}{\cos^2\phi} \frac{\partial (\langle\overline{u'v'}\cos^2 \phi \rangle )}{a\partial \phi }, \end{equation} where $u'$ and $v'$ are deviations of zonal wind and meridional wind from their respective zonal means, and $a$ is Earth's radius. Figure~\ref{fig:2} shows the lagged-correlation analysis between $z$ and $m$ in the GCM setup with the non-propagating regime. The auto-correlation of $z_1$, as discussed in past studies \citep[e.g.,][]{ChenPlumb2009,MaHassanzadehKuang2017}, has a noticeable shoulder at around 5-day lags and shows an unrealistically persistent annular mode, well separated from the faster-decaying $z_2$, consistent with the considerable difference in the contribution of the two EOFs to the total variance (60.2\% versus 19.2\%). The $e$-folding decorrelation time scales of $z_1$ and $z_2$ are $64.5$ and $4.8$ days, respectively.
The strong, positive $m_1z_1$ cross-correlations and the insignificant $m_2z_2$ cross-correlations at large positive lags suggest the existence of a positive eddy-zonal flow feedback for EOF1 (from EOF1) but not for EOF2 (see \citet{SonLee2008} and \citet{MaHassanzadehKuang2017}). Figure~\ref{fig:2}b shows that the $z_1z_2$ cross-correlations are weak at positive and negative lags, which, consistent with the one-point lag-correlation map of Fig.~\ref{fig:1}a and with Fig.~\ref{fig:3} (shown later), is indicative of a non-propagating regime, as reported previously for a similar setup \citep{SonLee2006,SonLee2008}. The $m_1z_2$ and $m_2z_1$ cross-correlations are small and often insignificant, suggesting the absence of cross-EOF feedbacks in the non-propagating regime (Figs.~\ref{fig:2}e-f). Altogether, the above analysis shows that for the non-propagating regime, single-EOF reduced-order models such as LH01 are sufficient. \begin{figure} \centering \includegraphics[width=18.5pc,angle=0,trim={3cm 4.5cm 3cm 5.5cm},clip]{gfdl_crossfeedback_Npropagating.pdf} \caption{Lagged-correlation analysis of the GCM setup with the non-propagating regime. (a) Auto-correlation of $z_1$ (blue) and $z_2$ (red), (b) cross-correlation $z_1z_2$, (c) cross-correlation $m_1z_1$, (d) cross-correlation $m_2z_2$, (e) cross-correlation $m_1z_2$, and (f) cross-correlation $m_2z_1$. The two leading EOFs contribute 60.2\% and 19.2\%, respectively, to the total variance. The $e$-folding decorrelation time scales of $z_1$ and $z_2$ are $64.5$ and $4.8$ days, respectively. Grey shading represents the 5\% significance level according to the test of Bartlett (Appendix A).} \label{fig:2} \end{figure} The weak cross-correlations between $z_1$ and $z_2$ in the GCM with the non-propagating regime (Fig.~\ref{fig:2}b) can also be seen by regressing the zonal-mean zonal wind anomalies on the zonal indices at 0- and 20-day time lags. Figures~\ref{fig:3}a and \ref{fig:3}b show the wind anomalies regressed on $z_1$ and $z_2$ at lag 0, yielding approximately the EOF1 and EOF2 patterns, respectively. Twenty days after $z_1$ leads the zonal wind anomalies, the anomalies do not drift poleward or decay, but rather persist (Fig.~\ref{fig:3}d). In contrast, 20 days after $z_2$ leads the zonal wind anomalies, the anomalies decay and disappear (Fig.~\ref{fig:3}c). These observations are consistent with the long and short persistence of $z_1$ and $z_2$, respectively, with the weak cross-correlations of $z_1$ and $z_2$ at positive or negative lags, and, as will become clear below, with the non-propagating nature of this setup. \begin{figure} \centering \includegraphics[width=18.5pc,angle=0,trim={1cm 10cm 1cm 1cm},clip]{reg_z1z2_nonpropagating.pdf} \caption{Anomalous zonal-mean zonal wind ($\bar{u}$) regressed onto $z_1$ and $z_2$ in the GCM setup with the non-propagating regime: (a, b) simultaneous, (c) $z_2$ leads by 20 days, and (d) $z_1$ leads by 20 days. The contours are the climatological zonal-mean zonal wind with an interval of 5 ms$^{-1}$.} \label{fig:3} \end{figure} Figure~\ref{fig:4} shows the lagged-correlation analysis between $z$ and $m$ in the GCM setup with the propagating regime. The auto-correlation of $z_1$, its persistence compared to that of $z_2$, and the variance explained by the two EOFs (40.4\% versus 32.5\%) are much more similar to what is observed in the SH (shown later in Fig.~\ref{fig:7}). The $e$-folding decorrelation time scales of $z_1$ and $z_2$ are $14.1$ and $9.2$ days, respectively.
Figure~\ref{fig:4}b shows that $z_1$ and $z_2$ are strongly correlated at long lags, peaking at around $\pm 20$ days. This behavior, along with the one-point lag-correlation map of Fig.~\ref{fig:1}b and the regression map of wind anomalies (Fig.~\ref{fig:5}, shown later), suggests the existence of a propagating regime, as noted by a few previous studies (e.g., \citealt{SonLee2006,SheshadriPlumb2017}). It should be noted that \citet{SonLee2006} proposed a rule of thumb based on the ratio of the explained variance of EOF2 to that of EOF1: a non-propagating (propagating) regime exists if the ratio is smaller (larger) than 0.5. The regimes of our two setups are consistent with this rule of thumb, as the ratios are $\sim 0.3$ and $\sim 0.8$ in our non-propagating and propagating regimes, respectively. \begin{figure} \centering \includegraphics[width=18.5pc,angle=0,trim={3cm 4.5cm 3cm 5.7cm},clip]{gfdl_crossfeedback_propagating.pdf} \caption{Lagged-correlation analysis of the GCM setup with the propagating regime. (a) Auto-correlation of $z_1$ (blue) and $z_2$ (red), (b) cross-correlation $z_1z_2$, (c) cross-correlation $m_1z_1$, (d) cross-correlation $m_2z_2$, (e) cross-correlation $m_1z_2$, and (f) cross-correlation $m_2z_1$. The two leading EOFs contribute 40.4\% and 32.5\%, respectively, to the total variance. The $e$-folding decorrelation time scales of $z_1$ and $z_2$ are $14.1$ and $9.2$ days, respectively. Grey shading represents the 5\% significance level according to the test of Bartlett (Appendix A).} \label{fig:4} \end{figure} Furthermore, Fig.~\ref{fig:4}c shows that the $m_1z_1$ cross-correlations are positive at long positive lags (5-20 days) and then negative but small. Fig.~\ref{fig:4}d indicates small and negative cross-correlations between $z_2$ and $m_2$ at time scales longer than 20 days. Overall, the shapes of the $m_1z_1$ and $m_2z_2$ cross-correlation functions are similar between the non-propagating and propagating regimes, although the $m_1z_1$ cross-correlations are larger and more persistent in the non-propagating regime. In contrast, the $m_1z_2$ and $m_2z_1$ cross-correlations are substantially different between the two regimes (Figs.~\ref{fig:4}e-f). There are statistically significant and large positive $m_1z_2$ cross-correlations at large positive lags ($>$ 5~days) and statistically significant and large negative $m_2z_1$ cross-correlations at positive lags up to 30~days. Note that, as emphasized in the figures, positive lags here mean that $z_2$ ($z_1$) is leading $m_1$ ($m_2$). Therefore, these cross-correlations, as discussed later, indicate the existence of cross-EOF feedbacks in the propagating regime. \begin{figure} \centering \includegraphics[width=18.5pc,angle=0,trim={1cm 10cm 1cm 1cm},clip]{reg_z1z2_propagating.pdf} \caption{Anomalous zonal-mean zonal wind ($\bar{u}$) regressed onto $z_1$ and $z_2$ in the GCM setup with the propagating regime: (a, b) simultaneous, (c) $z_2$ leads by 20 days, and (d) $z_1$ leads by 20 days. The contours are the climatological zonal-mean zonal wind with an interval of 5 ms$^{-1}$.} \label{fig:5} \end{figure} Figure~\ref{fig:5} shows the anomalous zonal-mean zonal wind regressed on $z_1$ and $z_2$ at 0- and 20-day time lags in the GCM setup with the propagating regime. Figures~\ref{fig:5}a and \ref{fig:5}b show the wind anomalies regressed on $z_1$ and $z_2$ at lag 0, again yielding approximately the EOF1 and EOF2 patterns, respectively.
As shown in Fig.~\ref{fig:5}c, 20 days after $z_2$ leads the zonal wind anomalies, the anomalies have drifted poleward and project strongly onto the structure of wind anomalies associated with EOF1 (Figs.~\ref{fig:5}a,c, pattern correlation = 0.93). This is consistent with the positive correlation of $z_1z_2$ at lag +20 days when $z_1$ leads $z_2$ (Fig.~\ref{fig:4}b). Likewise, 20 days after $z_1$ leads the zonal wind anomalies, the anomalies (of Fig.~\ref{fig:5}a) have drifted poleward and project strongly onto the structure of anomalies associated with EOF2, but with an opposite sign (Figs.~\ref{fig:5}b,d, pattern correlation = -0.85). This is consistent with the negative correlation of $z_1z_2$ when $z_2$ leads $z_1$ by 20 days (Fig.~\ref{fig:4}b). Overall, these results suggest the existence of cross-EOF feedbacks in the propagating annular mode. In Section~3, we will develop a model to quantify these four feedbacks and understand the effects of their magnitudes and signs on the variability (e.g., persistence) of $z_1$ and $z_2$. But first, we will examine the variability and characteristics of $z$ and $m$ in reanalysis. In particular, we will see that the $z$ and $m$ cross-correlations in the GCM's propagating regime closely resemble those in the SH reanalysis data. \subsection{Reanalysis} \label{sec:22} We use the 1979-2013 data from the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERA-Interim; \citealt{Dee2011}). Zonal and meridional wind components $(u,v)$ are 6-hourly, on a 1.5$^{\circ}$ latitude $\times$ 1.5$^{\circ}$ longitude grid, and on 21 vertical levels between 1000 and 100 hPa. Anomalies used for computing correlations and EOF analyses are defined as the deviations from the climatological seasonal cycle. The mean seasonal cycle is defined as the annual average and the first four Fourier harmonics of the 35-yr daily climatology. \begin{figure} \centering \includegraphics[width=19pc,angle=0,trim={4cm 8.5cm 3cm 1cm},clip]{cross_cor_wind_recons_ERAI.pdf} \caption{One-point lag-correlation maps of the vertically averaged (1000-100 hPa) zonal-mean zonal wind anomalies ($\langle\overline{u}\rangle$) from year-round ERA-Interim data in the Southern Hemisphere. (a) Total anomaly field and (b) reconstruction from projections onto the two leading EOFs of $\langle\overline{u}\rangle$. The base latitude is at 30$^{\circ}$S and the contour interval is 0.1. Regions enclosed by contour lines denote values significant at the 95\% level according to the $t$-test.} \label{fig:6} \end{figure} Figure~\ref{fig:6} shows a one-point lag-correlation map of the vertically averaged zonal-mean zonal wind $\langle\overline{u}\rangle$ in the SH, where the base latitude is 30$^{\circ}$S. Comparing this figure with Fig.~\ref{fig:1}, it can be seen that there is an indication of poleward-propagating anomalies in the SH, which appear in low latitudes and migrate poleward over the course of 2-3 months (Fig.~\ref{fig:6}a). However, the poleward-propagating signals are not as clear as those observed in the GCM setup with the propagating regime (Fig.~\ref{fig:1}b, or Fig. 2 of \citealt{SonLee2006}). This is consistent with previous studies (e.g., \citealt{Feldstein1998,FeldsteinLee1998,SheshadriPlumb2017}) showing that both propagating and non-propagating anomalies exist in all seasons in the SH, which somewhat obscures the propagating signals.
Reconstructions based on the projections onto the two leading EOFs of zonal-mean zonal wind further show that most of the mid-latitude SH wind variability can be explained by the two leading EOF modes (Fig.~\ref{fig:6}b). The ratio of the fractional variance of EOF2 (23.2\%) to that of EOF1 (45.1\%) is 0.51, which is right at the boundary of the rule of thumb. Overall, as already pointed out by \cite{SheshadriPlumb2017}, a propagating annular mode exists in the SH and is largely explained by the two leading EOF modes. Figure~\ref{fig:7}a shows the auto-correlations of $z_1$ and $z_2$. Consistent with \citet{Lorenz2001}, the estimated decorrelation time scales of these two PCs are 10.3 and 8.1 days, respectively. Figure~\ref{fig:7}b depicts the cross-correlation $z_1z_2$, showing statistically significant and relatively strong correlations that peak around $\pm 10$~days. As discussed in earlier studies, such lagged correlations are a signature of the propagating annular modes \citep{FeldsteinLee1998,SonLee2006,SonLee2008,SheshadriPlumb2017}, implying that the period of the poleward propagation is about 20-30 days in the SH (Fig.~\ref{fig:7}b), consistent with \cite{SheshadriPlumb2017} and with Fig.~\ref{fig:6}. \begin{figure} \centering \includegraphics[width=19pc,angle=0,trim={3cm 4.5cm 3cm 5cm},clip]{cross_feedback_era.pdf} \caption{Lagged-correlation analysis for the Southern Hemisphere, calculated from year-round ERA-Interim data. (a) Auto-correlations of $z_1$ (blue) and $z_2$ (red), (b) cross-correlation $z_1z_2$, (c) cross-correlation $m_1z_1$, (d) cross-correlation $m_2z_2$, (e) cross-correlation $m_1z_2$, and (f) cross-correlation $m_2z_1$ at different lags. The two leading EOFs contribute 45.1\% and 23.2\% of the total variance, respectively. The $e$-folding decorrelation time scales of $z_1$ and $z_2$ are $10.3$ and $8.1$ days, respectively. Grey shading represents the 5\% significance level according to the test of Bartlett (Appendix A).} \label{fig:7} \end{figure} To understand the effects of $z_1$ and $z_2$ on $m_1$ and $m_2$, we also examine the cross-correlations between $z$ and $m$ at different lags (Figs.~\ref{fig:7}c-f). The shape and magnitude of the $m_1z_1$ and $m_2z_2$ cross-correlations (Figs.~\ref{fig:7}c-d) are similar to those originally shown by \citet{Lorenz2001} (see their Figs.~5 and 13a) and later by many others using different reanalysis products and time periods. As discussed in \citet{Lorenz2001}, the statistically significant positive $m_1z_1$ cross-correlations at long positive lags ($\sim 8-20$~days) and the insignificant $m_2z_2$ cross-correlations for time scales longer than $\sim 5$~days indicate that a positive eddy-zonal flow feedback exists only for EOF1, but not for EOF2 (also see \citet{Byrne2016} and \citet{MaHassanzadehKuang2017}). We emphasize that this positive feedback is from EOF1 onto itself. To see if there are cross-EOF feedbacks, in Figs.~\ref{fig:7}e-f we plot the $m_1z_2$ and $m_2z_1$ cross-correlations at different lags. The $m_1z_2$ cross-correlations show statistically significant positive correlations at large positive lags, signifying that a cross-EOF feedback, i.e., $z_2$ modifying $m_1$, is present. Note that the magnitude of the $m_1z_2$ cross-correlations at positive lags is overall larger than that of the $m_1z_1$ cross-correlations (Fig.~\ref{fig:7}c). There are also statistically significant but negative $m_2z_1$ correlations at large positive lags, again suggesting the existence of a cross-EOF feedback, i.e., $z_1$ modifying $m_2$.
These results indicate that, in the presence of the propagating regime in the SH, there are indeed cross-EOF feedbacks; however, these feedbacks have so far been ignored in previous studies and reduced-order models of the SH extratropical large-scale circulation. \section{Eddy-zonal flow feedbacks in the propagating annular modes: Model and quantification} \label{sec:3} In this section, an eddy-zonal flow feedback model that accounts for the coupling of the two leading EOFs and their feedbacks, including the cross-EOF feedbacks, will be introduced. Then this model will be validated using synthetic data from a simple stochastic prototype, and from its analytical solution, we will derive conditions for the existence of the propagating regime. Finally, we will use this model to estimate the feedback strengths of the propagating annular modes in data from the reanalysis (SH) and the idealized GCM. \subsection{Developing an eddy-zonal flow feedback model for propagating annular modes} \label{sec:31} With the same notation as in \citet{Lorenz2001}, the time series of the zonal indices ($z_{1}$ and $z_{2}$) and eddy forcing ($m_{1}$ and $m_{2}$) associated with the two leading EOFs are calculated by projecting the vertically averaged zonal-mean zonal wind $\langle\overline{u}\rangle$ and eddy momentum flux convergence $\langle\overline{F}\rangle$ anomalies onto the patterns of the first and second EOFs of $\langle\overline{u}\rangle$ (see Eqs.~(1)-(2)). Equations for the tendencies of $z_1$ and $z_2$ can then be formulated as: \begin{equation} \frac{dz_1}{dt}=m_1 -\frac{z_1}{\tau_1}, \label{eq:z1} \end{equation} \begin{equation} \frac{dz_2}{dt}=m_2 -\frac{z_2}{\tau_2}, \label{eq:z2} \end{equation} where $t$ is time and the last term on the right-hand side of each equation represents damping (mainly due to surface friction) with time scale $\tau$. As discussed in \citet{Lorenz2001}, Eqs.~(4)-(5) can be interpreted as the zonally and vertically averaged zonal momentum equation: \begin{equation} \frac{\partial \langle\overline{u}\rangle }{\partial t } = - \frac{1}{\cos^2\phi} \frac{\partial (\langle\overline{u'v'}\cos^2 \phi \rangle )}{a\partial \phi } - D , \end{equation} projected onto EOF1 and EOF2, respectively. In the above equation, $D$ includes the effects of surface drag and is modeled as Rayleigh drag in Eqs.~(4)-(5). Assuming a linear representation for the feedback of an EOF onto itself, \citet{Lorenz2001} and later studies wrote $m_1(t)=\tilde{m}_1(t)+b_1 z_1(t)$ and $m_2(t)=\tilde{m}_2(t)+b_2 z_2(t)$, where $b_1$ and $b_2$ are the feedback strengths (with $b_j>0$ implying a positive feedback that prolongs the persistence of $z_j$). $\tilde{m}$ is the random, zonal flow-independent component of the eddy forcing that drives the high-frequency variability of $z$ \citep{Lorenz2001,MaHassanzadehKuang2017}. Here, to account for the cross-EOF feedbacks, i.e., the effect of $z_2$ on $m_1$ and of $z_1$ on $m_2$, we extend the LH01 model and write \begin{equation} m_1=\tilde{m}_1+b_{11}z_1+b_{12}z_2,\label{eq:m1} \end{equation} \begin{equation} m_2=\tilde{m}_2+b_{21}z_1+b_{22}z_2. \label{eq:m2} \end{equation} With $j,k=1,2$, $b_{jk}$ is the strength of the linearized feedback of $z_k$ onto $z_j$ through modifying $m_j$ in the quasi-steady limit; thus the cross-EOF feedbacks are represented by the terms involving $b_{12}$ and $b_{21}$.
To find the values of $b_{jk}$, we can use the lagged-regression method of \citet{SimpsonShepherd2013}, which assumes that $reg_l(\tilde{m}_j,z_j)=\sum_t \tilde{m}_j(t+l)\,z_j(t) \approx 0$ at large positive lags $l$. By lag-regressing each term in Eq.~(\ref{eq:m1}) onto $z_1$ and then onto $z_2$, we find \begin{equation} \begin{bmatrix} reg_l(z_1, z_1) & reg_l(z_2, z_1) \\ reg_l(z_1, z_2) & reg_l(z_2, z_2) \end{bmatrix} \begin{bmatrix} b_{11}\\ b_{12} \end{bmatrix} = \begin{bmatrix} reg_l(m_1, z_1)\\ reg_l(m_1, z_2) \end{bmatrix} \label{eq:b11} \end{equation} and similarly, from Eq.~(\ref{eq:m2}) we find \begin{equation} \begin{bmatrix} reg_l(z_1, z_1) & reg_l(z_2, z_1) \\ reg_l(z_1, z_2) & reg_l(z_2, z_2) \end{bmatrix} \begin{bmatrix} b_{21}\\ b_{22} \end{bmatrix} = \begin{bmatrix} reg_l(m_2, z_1)\\ reg_l(m_2, z_2) \end{bmatrix}, \label{eq:b22} \end{equation} where $reg_l(a,b)=\sum_t a(t+l)\,b(t)$ and we assumed $reg_l(\tilde{m}_j,z_k) \approx 0$ for $j,k=1,2$. \begin{table} \caption{Prescribed and estimated feedback strengths (in day$^{-1}$) in synthetic data for the case without cross-EOF feedbacks. The imposed damping rates of friction are $\tau_1$=$\tau_2$= $8$~days. The values of $b$ and $\tau$ are motivated by the observed ones; see Table~\ref{tab:4}.} \begin{center} \begin{tabular}{ccccc} \topline Feedback & $b_{11} $ & $b_{12}$ & $b_{21}$ & $b_{22}$ \\ \midline Prescribed & 0.040 & 0.000 & 0.000 & 0.000 \\ Estimated (Eqs.~(\ref{eq:b11})-(\ref{eq:b22})) & 0.042 & 0.001 & -0.0006 & 0.0005\\ \botline \label{tab:1} \end{tabular} \end{center} \end{table} Note that if one attempts to find $b_{11}$ using a single-EOF approach such as LH01, then, from Eq.~(\ref{eq:m1}), one would be implicitly assuming that $reg_l(\tilde{m}_1+b_{12}z_2,z_1)=reg_l(\tilde{m}_1,z_1)+b_{12} reg_l(z_2,z_1) \approx b_{12} reg_l(z_2,z_1)$ is zero. However, as shown earlier, in the propagating regime, the $z_1z_2$ cross-correlations can be large at long lags, and as discussed below, the range of time lags that needs to be used in Eqs.~(\ref{eq:b11})-(\ref{eq:b22}) and the lags at which the $z_1z_2$ cross-correlations peak are often comparable. Consequently, if $b_{12} \neq 0$, the key assumption of the statistical methods developed to quantify eddy-zonal flow feedbacks \citep{Lorenz2001,SimpsonShepherd2013,MaHassanzadehKuang2017} is violated. Therefore, the $b_{jk}$ should be determined together by solving the systems of equations (\ref{eq:b11})-(\ref{eq:b22}). The basic assumptions of our model, Eqs.~(\ref{eq:z1})-(\ref{eq:b22}), are similar to those of the LH01 model: i) a linear representation of the feedbacks is sufficient, and ii) the eddy forcing $m$ does not have long-term memory independent of the variability in the jet (represented by $z$). The second assumption means that at sufficiently large positive lags (beyond the time scales over which there is significant auto-correlation in $\tilde{m}$) the feedback component of the eddy forcing will dominate the $m_jz_k$ cross-correlations \citep{Lorenz2001,ChenPlumb2009,SimpsonShepherd2013,MaHassanzadehKuang2017}, i.e., $reg_l(\tilde{m}_j,z_k) \approx 0$ at ``large-enough'' positive lags. Note that one cannot use a lag that is too long because then even $reg_l(z_j,z_j)$ would be small and inaccurately estimated. To find the appropriate lags to use, one must look for non-zero $m_jz_k$ cross-correlations at positive lags beyond an eddy lifetime.
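To make this procedure concrete, the following minimal Python sketch (an illustration only, not the code used for the results of this paper; the function names \texttt{lagged\_cov} and \texttt{estimate\_feedbacks} are ours) solves the two $2\times 2$ systems of Eqs.~(\ref{eq:b11})-(\ref{eq:b22}) for daily time series of $z_{1,2}$ and $m_{1,2}$ and averages the estimates over a window of positive lags (the choice of this window is discussed below):

\begin{verbatim}
import numpy as np

def lagged_cov(a, b, lag):
    # reg_l(a, b) = sum_t a(t + lag) * b(t), for daily series and lag >= 0
    n = len(a) - lag
    return np.sum(a[lag:] * b[:n])

def estimate_feedbacks(z1, z2, m1, m2, lags=range(8, 21)):
    # Solve the two 2x2 lagged-regression systems at each lag l and
    # average over the lag window; with time in days, b_jk are in 1/day.
    estimates = []
    for l in lags:
        Z = np.array([[lagged_cov(z1, z1, l), lagged_cov(z2, z1, l)],
                      [lagged_cov(z1, z2, l), lagged_cov(z2, z2, l)]])
        b11, b12 = np.linalg.solve(Z, [lagged_cov(m1, z1, l),
                                       lagged_cov(m1, z2, l)])
        b21, b22 = np.linalg.solve(Z, [lagged_cov(m2, z1, l),
                                       lagged_cov(m2, z2, l)])
        estimates.append([b11, b12, b21, b22])
    return np.mean(estimates, axis=0)  # (b11, b12, b21, b22)
\end{verbatim}

Note that the same coefficient matrix appears in both systems, so Eqs.~(\ref{eq:b11}) and (\ref{eq:b22}) differ only in their right-hand sides.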
In this study, the strengths of the individual feedbacks are averaged over positive lags of 8 to 20 days for both the GCM and the reanalysis (e.g., \citealt{SimpsonShepherd2013,Burrows2016}). We choose this range in order to avoid the high-frequency variability at short lags (indicated by the impulsive and oscillatory character of the $\tilde{m}$ auto-correlation) and the strong damping at very long lags. In the following section, we will present a proof of concept for this eddy-zonal flow feedback model using synthetic data obtained from a simple stochastic prototype and show that, using Eqs.~(\ref{eq:b11})-(\ref{eq:b22}), the prescribed feedbacks can be accurately backed out. \subsection{Validation using synthetic data from a simple stochastic prototype} \label{sec:32} We begin by constructing a simple stochastic system to produce synthetic time series $z$ and $m$ in the presence or absence of cross-EOF feedbacks. The equations of this system are the same as Eqs.~(\ref{eq:z1})-(\ref{eq:z2}) and (\ref{eq:m1})-(\ref{eq:m2}). Following \citet{SimpsonShepherd2013}, we generate synthetic time series of the random component of the eddy forcing $\widetilde{m}_{1,2}$ using a second-order autoregressive (AR2) noise process: \begin{equation} \tilde{m}_{1} (t)=0.6\tilde{m}_{1} (t-2)-0.3\tilde{m}_{1} (t-1)+\epsilon_{1} (t), \label{eq:stoch1} \end{equation} \begin{equation} \tilde{m}_{2}(t)=0.6\tilde{m}_{2}({t-2})-0.3\tilde{m}_{2}(t-1)+\epsilon_{2}(t), \label{eq:stoch2} \end{equation} where $t$ denotes time (in days) and $\epsilon$ is white noise distributed uniformly between -1 and +1. Synthetic time series of $z_{j}$ and $m_{j}$ are produced by numerically integrating Eqs.~(\ref{eq:z1})-(\ref{eq:z2}), (\ref{eq:m1})-(\ref{eq:m2}), and (\ref{eq:stoch1})-(\ref{eq:stoch2}) forward in time with two different sets of prescribed $b_{jk}$. In the first set, there is no cross-EOF feedback, i.e., $b_{12}=b_{21}=0$ (Table~\ref{tab:1}). In the second set, $b_{11}$ and $b_{22}$ are the same as in the first set, but there are cross-EOF feedbacks, i.e., $b_{12}, b_{21} \neq 0$ (Table~\ref{tab:2}). For both sets, we assume $\tau_1=\tau_2=8$~days. The values of $b$ and $\tau$ are chosen to be consistent with the values observed in the SH (see Table~\ref{tab:4}). \begin{table} \caption{Prescribed and estimated feedback strengths (in day$^{-1}$) in synthetic data for the case with cross-EOF feedbacks. The imposed damping rates of friction are $\tau_1$=$\tau_2$= $8$~days. The values of $b$ and $\tau$ are motivated by the observed ones; see Table~\ref{tab:4}.} \begin{center} \begin{tabular}{ccccc} \topline Feedback & $b_{11} $ & $b_{12}$ & $b_{21}$ & $b_{22}$ \\ \midline Prescribed & 0.040 & 0.060 & -0.025 & 0.000\\ Estimated (Eqs.~(\ref{eq:b11})-(\ref{eq:b22})) & 0.043 & 0.067 & -0.026 & -0.002\\ \botline \label{tab:2} \end{tabular} \end{center} \end{table} Spectral analysis of $z_{1,2}$ and $m_{1,2}$ shows that the synthetic data indeed have characteristics similar to those of the observed SH. For example, for the case with cross-EOF feedbacks (Fig.~\ref{fig:8}), we find that, consistent with observations (see Fig.~4 of \citet{Lorenz2001} or Fig.~3 of \citet{MaHassanzadehKuang2017}), the time scales of $z_1$ and $z_2$ are much longer (i.e., slower variability) than those of $m_1$ and $m_2$, and the power spectra of $z$ can be interpreted, to first order, as a reddening of the power spectra of the eddy forcing $m$ \citep{Lorenz2001,MaHassanzadehKuang2017}.
The power spectra of the eddy forcings $m_1$ and $m_2$ have in general a broad maximum centered at low and synoptic frequencies, consistent with observations. Given that the synthetic data mimic the key characteristics of the observed annular modes, we use this idealized framework to validate the lagged-correlation approach of Eqs.~(\ref{eq:b11})-(\ref{eq:b22}) for quantifying eddy-zonal flow feedbacks. \begin{figure} \centering \includegraphics[width=18.5pc,angle=0,trim={0cm 0cm 0cm 0cm},clip]{simple_model_spectra.pdf} \caption{Spectra of $z_{1,2}$ and $m_{1,2}$ from the synthetic data with cross-EOF feedbacks. Black lines show the power spectra of (a) $z_1$, (b) $z_2$, (c) $m_1$, and (d) $m_2$. The red-noise spectra are indicated by the smooth solid red curves, and the smooth dashed blue lines are the 5\% and 95\% a priori confidence limits.} \label{fig:8} \end{figure} Figure~\ref{fig:9} shows the lagged-correlation analysis of the synthetic data without cross-EOF feedbacks. It is clearly seen that the only noticeable cross-correlations are those of $m_1z_1$, and there are no statistically significant cross-correlations in $z_1z_2$, $m_1z_2$, and $m_2z_1$ at any lag, consistent with a non-propagating regime and the absence of cross-EOF feedbacks (Fig.~\ref{fig:2}). Using Eqs.~(\ref{eq:b11})-(\ref{eq:b22}) and lags $l$=8-20 days, we can closely estimate the prescribed feedback parameters, i.e., $b_{11}=0.04$~day$^{-1}$ and $b_{22}=b_{12}=b_{21}=0$ (see Table~\ref{tab:1}). \begin{figure} \centering \includegraphics[width=18.5pc,angle=0,trim={3cm 4.4cm 3cm 5.7cm},clip]{cross_feedback_nonpropagating.pdf} \caption{Lagged-correlation analysis of the synthetic data without cross-EOF feedbacks. (a) Auto-correlation of $z_1$ (blue) and $z_2$ (red), (b) cross-correlation $z_1z_2$, (c) cross-correlation $m_1z_1$, (d) cross-correlation $m_2z_2$, (e) cross-correlation $m_1z_2$, and (f) cross-correlation $m_2z_1$. The $e$-folding decorrelation time scales of $z_1$ and $z_2$ are $18.6$ and $9.2$ days, respectively. Grey shading represents the 5\% significance level according to the test of Bartlett (Appendix A).} \label{fig:9} \end{figure} Figure~\ref{fig:10} shows the lagged-correlation analysis of the synthetic data with cross-EOF feedbacks. First, we see that there are statistically significant and often large cross-correlations in $z_1z_2$, $m_1z_1$, $m_1z_2$, and $m_2z_1$, with the shapes of the cross-correlation functions not that different from those of the SH reanalysis and the idealized GCM setup with the propagating regime (Figs.~\ref{fig:4} and \ref{fig:7}). The positive $m_1z_1$ and near-zero $m_2z_2$ cross-correlations at large positive lags signify a positive $z_1$-onto-$z_1$ feedback through $m_1$, but no $z_2$-onto-$z_2$ feedback through $m_2$, consistent with the prescribed positive value of $b_{11}$ and with $b_{22}=0$. In addition, Figs.~\ref{fig:10}e-f show that there are statistically significant and large correlations in $m_1z_2$ and $m_2z_1$ at positive lags, consistent with the introduction of cross-EOF feedbacks by setting $b_{12}=0.06$~day$^{-1}$ and $b_{21}=-0.025$~day$^{-1}$. The $m_1z_2$ cross-correlations at positive lags are higher than those of $m_1z_1$ (note that $b_{12}/b_{11} \approx 1.5$), and the sign of the $m_2z_1$ cross-correlations is opposite to that of the $m_1z_2$ cross-correlations (note that $b_{12}b_{21} <0$).
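For reference, such synthetic time series can be generated in a few lines. The sketch below (again an illustration under stated assumptions: a simple 1-day forward-Euler step is assumed, rather than the exact integration scheme used in this study) steps Eqs.~(\ref{eq:z1})-(\ref{eq:z2}) forward with the eddy forcing of Eqs.~(\ref{eq:m1})-(\ref{eq:m2}) and the AR2 noise of Eqs.~(\ref{eq:stoch1})-(\ref{eq:stoch2}):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

def ar2_noise(n):
    # AR2 random eddy forcing with uniform [-1, 1] innovations
    mt = np.zeros(n)
    eps = rng.uniform(-1.0, 1.0, size=n)
    for t in range(2, n):
        mt[t] = 0.6 * mt[t - 2] - 0.3 * mt[t - 1] + eps[t]
    return mt

def simulate(b11, b12, b21, b22, tau1=8.0, tau2=8.0, n=50000, dt=1.0):
    # Forward-Euler integration of the two-EOF model (time in days)
    mt1, mt2 = ar2_noise(n), ar2_noise(n)
    z1, z2 = np.zeros(n), np.zeros(n)
    m1, m2 = np.zeros(n), np.zeros(n)
    for t in range(n - 1):
        m1[t] = mt1[t] + b11 * z1[t] + b12 * z2[t]       # eddy forcing of z1
        m2[t] = mt2[t] + b21 * z1[t] + b22 * z2[t]       # eddy forcing of z2
        z1[t + 1] = z1[t] + dt * (m1[t] - z1[t] / tau1)  # tendency of z1
        z2[t + 1] = z2[t] + dt * (m2[t] - z2[t] / tau2)  # tendency of z2
    return z1, z2, m1, m2

# Case of Table 2: cross-EOF feedbacks of opposite signs
z1, z2, m1, m2 = simulate(b11=0.040, b12=0.060, b21=-0.025, b22=0.0)
\end{verbatim}

Passing the output of this sketch to the \texttt{estimate\_feedbacks} sketch of Section~3\ref{sec:31} should approximately recover the prescribed $b_{jk}$, mirroring Tables~\ref{tab:1} and \ref{tab:2}.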
Using Eqs.~(\ref{eq:b11})-(\ref{eq:b22}) and lags $l$=8-20 days, we can again closely estimate the prescribed feedback parameters, including the strengths of the cross-EOF feedbacks (see Table~\ref{tab:2}). \begin{figure} \centering \includegraphics[width=18.5pc,angle=0,trim={3cm 4.4cm 3cm 5.7cm},clip]{cross_feedback_propagating.pdf} \caption{Lagged-correlation analysis of the synthetic data with cross-EOF feedbacks. (a) Auto-correlation of $z_1$ (blue) and $z_2$ (red), (b) cross-correlation $z_1z_2$, (c) cross-correlation $m_1z_1$, (d) cross-correlation $m_2z_2$, (e) cross-correlation $m_1z_2$, and (f) cross-correlation $m_2z_1$. The $e$-folding decorrelation time scales of $z_1$ and $z_2$ are $13.9$ and $6.5$ days, respectively. Grey shading represents the 5\% significance level according to the test of Bartlett (Appendix A).} \label{fig:10} \end{figure} The above analyses validate the approach of Eqs.~(\ref{eq:b11})-(\ref{eq:b22}) for quantifying the feedback strengths $b_{jk}$ in data from both propagating and non-propagating regimes. Furthermore, a closer examination of the $z_1$ and $z_2$ auto-correlations in Figs.~\ref{fig:9}a and \ref{fig:10}a shows that both $z_1$ and $z_2$ in the case without cross-EOF feedbacks are more persistent than those in the case with cross-EOF feedbacks; e.g., the $e$-folding decorrelation time scale of $z_1$ is $18.6$~days in Fig.~\ref{fig:9}a while it is $13.9$~days in Fig.~\ref{fig:10}a. This observation might be counter-intuitive because both cases have the same $b_{11}>0$ while the case with cross-EOF feedbacks also has $b_{12}>0$, which might seem like another positive feedback that should further prolong the persistence of $z_1$. Finally, we notice that $b_{12}b_{21}<0$ in Table~\ref{tab:2} and in the SH reanalysis and idealized GCM setup with the propagating regime (Tables~\ref{tab:4} and \ref{tab:5}). Synthetic data generated with the same parameters as in Table~\ref{tab:2} but with the sign of $b_{21}$ flipped result in cross-correlation functions that are vastly different from those of Fig.~\ref{fig:10} and from what is seen in the SH reanalysis and idealized GCM. Inspired by these observations, we next examine the analytical solution of the deterministic version of Eqs.~(\ref{eq:z1})-(\ref{eq:z2}) and (\ref{eq:m1})-(\ref{eq:m2}) to better understand the impacts of the strengths and signs of $b_{jk}$ on the variability, and in particular the persistence, of $z_1$ and $z_2$. \subsection{Analytical solution of the two-EOF eddy-zonal flow feedback model} \label{sec:33} We focus on the deterministic (i.e., $\tilde{m}_j=0$) version of Eqs.~(\ref{eq:z1})-(\ref{eq:z2}) and (\ref{eq:m1})-(\ref{eq:m2}), which can be re-written as the following system of ordinary differential equations (ODEs): \begin{equation} \dot{\mathbf{z}}= \mathbf{A}\mathbf{z}, \end{equation} where \begin{equation} \mathbf{z}=\begin{bmatrix} z_1 \\ z_2 \end{bmatrix},\; \; \; \; \mathrm{and} \; \; \; \; \mathbf{A}=\begin{bmatrix} b_{11}-\frac{1}{\tau_1} & b_{12} \\ b_{21} & b_{22}-\frac{1}{\tau_2} \end{bmatrix}.
\end{equation} The solution to this system is \begin{equation} \mathbf{z}(t)=e^{\mathbf{A} t} \mathbf{z}(0) = \left[\mathbf{V} e^{\boldsymbol{\Lambda} t} \mathbf{V}^{-1} \right] \mathbf{z}(0),\label{eq:sol} \end{equation} where $\mathbf{V}$ and $\boldsymbol{\Lambda}$ are the eigenvector and eigenvalue matrices of $\mathbf{A}$: \begin{equation} \mathbf{V} = \left[\mathbf{v}_1 \;\; \mathbf{v}_2 \right] = \begin{bmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{bmatrix}, \; \; \; \; \mathrm{and} \; \; \; \; \boldsymbol{\Lambda} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}. \end{equation} To find the eigenvalues $\lambda$, we set the determinant of $\mathbf{A}-\lambda\mathbf{I}$ (where $\mathbf{I}$ is the identity matrix) equal to zero and solve the resulting quadratic equation to obtain: \begin{equation} \begin{split} \lambda_{1,2}=- \frac{1}{2} \left ( \frac{1}{\tau_1}+\frac{1}{\tau_2}-b_{11}-b_{22}\right ) \pm \\ \frac{1}{2} \sqrt{\left \{ \left (\frac{1}{\tau_1}-\frac{1}{\tau_2} \right ) - \left ( b_{11}-b_{22} \right ) \right \}^2+ 4 b_{12} b_{21} } , \end{split} \end{equation} which, in the limit of $\tau_1 \approx \tau_2 = \tau$ (reasonable given their estimated values in Tables~\ref{tab:4} and \ref{tab:5}), simplifies to: \begin{equation} \lambda_{1,2}=- \frac{1}{2} \left ( \frac{2}{\tau}-b_{11}-b_{22}\right ) \pm \frac{1}{2} \sqrt{ \left ( b_{11}-b_{22} \right )^2 + 4 b_{12} b_{21}}.\label{eq:lambda} \end{equation} The solution (Eq.~(\ref{eq:sol})) can be re-written as \begin{equation} \mathbf{z}=c_1 e^{\lambda_1 t}\mathbf{v}_1+c_2 e^{\lambda_2 t}\mathbf{v}_2, \end{equation} where $c_1$ and $c_2$ depend on the initial condition. This system has a decaying-oscillatory solution, i.e., is in the propagating regime, if and only if the eigenvalues (Eq.~(\ref{eq:lambda})) have non-zero imaginary parts, which requires, as a necessary and sufficient condition: \begin{equation} \left ( b_{11}-b_{22} \right )^2 < -4 b_{12} b_{21}. \label{eq:condition1} \end{equation} Equation~(\ref{eq:condition1}) also implies that a necessary condition for the existence of propagating regimes is \begin{equation} b_{12} b_{21}<0. \label{eq:condition2} \end{equation} Thus, non-zero cross-EOF feedbacks of opposite signs are an essential component of the propagating regime dynamics. The propagating regimes in the stochastic prototype (Table~\ref{tab:2}), SH reanalysis (Table~\ref{tab:4}), and idealized GCM (Table~\ref{tab:5}) satisfy the conditions of Eqs.~(\ref{eq:condition1})-(\ref{eq:condition2}), while the non-propagating regimes (Tables~\ref{tab:1} and \ref{tab:5}) do not. In the non-propagating regime, $\lambda_{1,2}=-\sigma_{1,2} < 0$ and $\mathbf{v}_{1,2}$ are real, and in this regime, $z_{1,2}$ simply decay exponentially according to \begin{equation} \mathbf{z}=c_1 e^{(-\sigma_1 t)}\mathbf{v}_1+c_2 e^{(- \sigma_2 t)}\mathbf{v}_2. \end{equation} In the propagating regime, $\lambda_{1,2}=-\sigma \pm i \omega $ and $\mathbf{v}_{1,2}$ are complex, where \begin{eqnarray} \sigma &=& \frac{1}{2} \left ( \frac{1}{\tau_1}+\frac{1}{\tau_2}-b_{11}-b_{22}\right ), \label{eq:sigma}\\ \omega &=& \frac{1}{2} \sqrt{\left \{ \left (\frac{1}{\tau_1}-\frac{1}{\tau_2} \right ) - \left ( b_{11}-b_{22} \right ) \right \}^2+ 4 b_{12} b_{21} }. \label{eq:omega} \end{eqnarray} In this regime, $z_{1,2}$ decay and oscillate according to \begin{equation} \mathbf{z}=c_1 e^{(-\sigma t)}e^{(i \omega t)} \mathbf{v}_1+c_2 e^{(-\sigma t)}e^{(- i \omega t)}\mathbf{v}_2.
\end{equation} Realizing that in this case $v_{11}=v_{12}$ are real, $v_{21}=v^*_{22}$, and $c_{1}=c^*_{2}=c$, where $^*$ denotes the complex conjugate, we can re-write the above equation as \begin{eqnarray} z_1&=&\left[c \, e^{( i \omega t)} v_{11} + c^* e^{(- i \omega t)} v_{11} \right] e^{(-\sigma t)} , \label{eq:z1SOL}\\ z_2&=&\left[c \, e^{( i \omega t)} v^*_{22}+ c^* e^{(- i \omega t)} v_{22} \right] e^{(-\sigma t)}.\label{eq:z2SOL} \end{eqnarray} These equations show that $z_1$ and $z_2$ have the same decay rate ($\sigma$) but different oscillatory components with frequency $\omega$. These results are consistent with the POP analysis of \citet{SheshadriPlumb2017}, who showed that EOF1 and EOF2 are, respectively, the real and imaginary parts of a single decaying-oscillatory POP mode (see their Section~4b). As a result, the two modes have the same decay rate and frequency, but their auto-correlation functions decay at different rates and they have strong lagged cross-correlations because the oscillations are out of phase. A key contribution of our work is to find the decay rate $\sigma$ and frequency $\omega$ as functions of $b_{jk}$ and $\tau_j$ (Eqs.~(\ref{eq:sigma})-(\ref{eq:omega})). To understand the effects of the feedback strengths $b_{jk}$ on the persistence of $z_j$, we compute the analytical solutions for 5 systems that have the same $b_{11}>0$ and $b_{22}=0$ (Table~\ref{tab:3}): in EXP1, there is no cross-EOF feedback ($b_{12}=b_{21}=0$), while in EXP2-EXP5, $b_{12}>0$ and $b_{21}<0$, with their magnitudes doubled from one experiment to the next. Figure~\ref{fig:11} shows the auto-correlation functions of $z_{1}$ and their $e$-folding decorrelation time scales for EXP1-EXP5. EXP1, corresponding to the non-propagating regime, has the slowest-decaying auto-correlation function, i.e., the longest $e$-folding decorrelation time scale (Figs.~\ref{fig:11}a,b). EXP2-EXP5, which all satisfy the condition of Eq.~(\ref{eq:condition1}), have faster-decaying auto-correlation functions, i.e., shorter $e$-folding decorrelation time scales, consistent with our earlier results from the idealized GCM and the stochastic prototype (Figs.~\ref{fig:4} and \ref{fig:10}). As discussed above, in the propagating regime, the eigenvectors and the corresponding eigenvalues are complex and thus $z_{1,2}$ do not just decay exponentially, but rather show some oscillatory characteristics too (Fig.~\ref{fig:11}a, Eqs.~(\ref{eq:z1SOL})-(\ref{eq:z2SOL})). Since the frequency of these oscillations, $\omega$ (Eq.~(\ref{eq:omega})), increases as the cross-EOF feedback strengths increase, shorter time scales of $z_{1}$ are expected in the experiments with stronger $b_{12}b_{21}$ (Fig.~\ref{fig:11}b). \begin{figure*}[t] \centering \includegraphics[width=29pc,angle=0,trim={2cm 15cm 1cm 4cm},clip]{solution_z1_prop_nprop.pdf} \caption{Auto-correlation functions of ${z_1}$ (a) and their corresponding $e$-folding decorrelation time scales (b) from the analytical solutions for the experiment with no cross-EOF feedback (EXP1) and the experiments with increasing cross-EOF feedback strength (EXP2-EXP5). The prescribed feedback strengths $b_{jk}$ are shown in Table~\ref{tab:3}.} \label{fig:11} \end{figure*} \begin{table} \caption{Prescribed feedback strengths (in day$^{-1}$) used to analyze the impact of cross-EOF feedbacks on the decorrelation time scales of $z_1$ and $z_2$.
The imposed damping rates of friction are $\tau_1$=$\tau_2$= $8$~days.} \begin{center} \begin{tabular}{ccccc} \topline Feedback & $b_{11} $ & $b_{12}$ & $b_{21}$ & $b_{22}$ \\ \midline EXP1 & 0.040 & 0.000 & 0.000 & 0.000\\ EXP2 & 0.040 & 0.060 & -0.025 & 0.000\\ EXP3 & 0.040 & 0.120 & -0.050 & 0.000\\ EXP4 & 0.040 & 0.240 & -0.100 & 0.000\\ EXP5 & 0.040 & 0.480 & -0.200 & 0.000\\ \botline \end{tabular} \label{tab:3} \end{center} \end{table} The dependence of the $e$-folding decorrelation time scales of $z_1$ and $z_2$ on the feedback strengths, and in particular on the cross-EOF feedback strengths, is further evaluated in Fig.~\ref{fig:12}. In Fig.~\ref{fig:12}a, it is clearly seen that the impact of increasing $b_{11}>0$ in the propagating regime (filled symbols) is to increase the persistence, i.e., the decorrelation time scale, of $z_1$, consistent with strengthening the positive eddy-zonal flow feedback ($z_1$-onto-$z_1$ through $m_1$). However, when the feedback is further increased to twice the control value, the condition of Eq.~(\ref{eq:condition1}) for the existence of a decaying-oscillatory solution is no longer satisfied, and consistent with this, we see that the system undergoes a transition to the non-propagating regime. Further increasing $b_{11}$ leads to a substantially more persistent $z_1$ and a less persistent $z_2$. Note that in non-propagating regimes with $b_{12}b_{21} \neq 0$, the decay of $z_2$ depends on $b_{11}$ too (see Eq.~(\ref{eq:lambda})). \begin{figure*}[t] \centering \includegraphics[width=19pc,angle=90,trim={1cm 1cm 1cm 1cm},clip]{eigen_tau_all_theory.pdf} \caption{The computed $e$-folding decorrelation time scale (day) of $z_1$ (blue circles) and $z_2$ (red squares) as a function of feedback strengths (day$^{-1}$). The impact of varying (a) $b_{11}$, (b) $b_{12}$, and (c) $b_{12}$ and $b_{21}$ on the decorrelation time scale (the $y$-axis) while all other $b_{jk}$ are kept the same. The $x$-axis shows the value of the varied $b_{jk}$ as a fraction of its value in EXP2 (Table~\ref{tab:3}); the vertical dashed line indicates the control values. (d) The impact of varying $b_{11}$ in EXP1 (Table~\ref{tab:3}). Filled symbols indicate that the parameters satisfy the condition for propagating regimes, i.e., the existence of decaying-oscillatory solutions (Eq.~(\ref{eq:condition1})).} \label{fig:12} \end{figure*} Figure~\ref{fig:12}b shows that in the propagating regime, unlike increasing $b_{11}>0$, increasing $b_{12}>0$ leads to a reduction in the persistence of $z_1$. This is the counter-intuitive behavior we observed earlier in the stochastic prototype (Section~3\ref{sec:32}). We now understand that this is because increasing $b_{12}$ increases the frequency $\omega$ of the oscillation in the system, resulting in a reduction in the decorrelation time scale of $z_1$ (and $z_2$); also see Fig.~\ref{fig:11}. This impact is even more pronounced when both cross-EOF feedbacks $b_{12}$ and $b_{21}$ are increased (Fig.~\ref{fig:12}c), leading to even shorter decorrelation time scales. Because a positive $b_{12}$ decreases the persistence of $z_1$, we do not refer to it as a ``positive feedback''. To understand this behavior, we have to keep in mind that in the eddy forcing of $z_1$ ($z_2$), i.e., $m_1$ in Eq.~(\ref{eq:m1}) ($m_2$ in Eq.~(\ref{eq:m2})), $b_{12}>0$ ($b_{21}<0$) is the coefficient of $z_2$ ($z_1$).
When $z_2$ leads $z_1$, they are negatively correlated (Figs.~\ref{fig:4}b, \ref{fig:7}b, and \ref{fig:10}b); thus, $z_2$ multiplied by $b_{12}>0$ reduces the forcing $m_1$ of $z_1$, decreasing the persistence of $z_1$. Similarly, when $z_1$ leads $z_2$, they are positively correlated; thus, $z_1$ multiplied by $b_{21}<0$ reduces $m_2$ and thus the persistence of $z_2$. Finally, for the sake of completeness, we also examine the effect of increasing $b_{11}$ in the absence of cross-EOF feedback (Fig.~\ref{fig:12}d). As expected, increasing $b_{11}$ increases the persistence of $z_1$ and has no impact on the persistence of $z_2$, as $z_1$ and $z_2$ are now completely decoupled. \subsection{Quantifying eddy-zonal flow feedbacks in reanalysis and idealized GCM} \label{sec:34} The results of Sections~3\ref{sec:32} and 3\ref{sec:33} show the importance of carefully quantifying and interpreting the eddy-zonal flow feedbacks, including the cross-EOF feedbacks, to understand the variability of the zonal-mean flow. Table~\ref{tab:4} presents the feedback strengths obtained from applying (\ref{eq:b11})-(\ref{eq:b22}) with $l=8-20$ days to the year-round SH reanalysis data. We find $b_{11} = 0.038 $~day$^{-1}$, a positive feedback from $z_1$ onto $z_1$, consistent with the findings of \citet{Lorenz2001} in their pioneering work. This estimate of $b_{11}$ is slightly higher than what we find using the single-EOF approach ($b_{11} = 0.035 $~day$^{-1}$), which is the same as what \citet{Lorenz2001} found using their spectral cross-correlation method. We also find non-zero cross-EOF feedbacks: $b_{12}=0.059$~day$^{-1}$ and $b_{21}=-0.020$~day$^{-1}$. We also estimate $b_{22}=0.017$~day$^{-1}$, which is slightly higher than what the single-EOF approach yields (Table~\ref{tab:4}). The estimated feedback strengths and frictional damping time scales ($\tau$) in Table~\ref{tab:4} satisfy the condition for the propagating regime (Eq.~\ref{eq:condition1}). It should be noted that we also extended our approach to include the leading 3 EOFs and quantified the 9 feedback strengths; however, we found the effects of EOF3 on EOF1 and EOF2 to be negligible, which suggests that a two-EOF model (\ref{eq:b11})-(\ref{eq:b22}) is sufficient to describe the current SH large-scale circulation (not shown). \begin{table} \caption{Feedback strengths (in day$^{-1}$) estimated for year-round ERA-Interim reanalysis. The frictional damping time scales are estimated as $\tau_1=8.3$~days and $\tau_2=8.4$~days following the methodology in Appendix~A of \citet{Lorenz2001}.} \begin{center} \begin{tabular}{ccccc} \topline Feedback & $b_{11} $ & $b_{12}$ & $b_{21}$ & $b_{22}$ \\ \midline Eqs.~(\ref{eq:b11})-(\ref{eq:b22}) & 0.038 & 0.059 & -0.020 & 0.017 \\ LH01 & 0.035 & - & - & 0.002 \\ \botline \end{tabular} \label{tab:4} \end{center} \end{table} Table~\ref{tab:5} presents the feedback strengths obtained from applying (\ref{eq:b11})-(\ref{eq:b22}) with $l=8-20$ days to the two setups of the idealized GCM. In the non-propagating regime, we find $b_{11} = 0.133$~day$^{-1}$, a small $b_{22}$, and negligible $b_{12}$ and $b_{21}$, indicating the absence of cross-EOF feedbacks, consistent with insignificant $m_1z_2$ and $m_2z_1$ cross-correlations (Figs.~\ref{fig:2}e-f). The values of $b_{jk}$ do not satisfy the condition for the propagating regime, which is consistent with the weak cross-correlation between $z_1$ and $z_2$ at long lags (Fig.~\ref{fig:2}b). 
These results suggest that a strong $z_1$-onto-$z_1$ feedback dominates the dynamics of the annular mode in this setup (the standard Held-Suarez configuration), which leads to an unrealistically persistent annular mode, similar to what is seen in Fig.~\ref{fig:12}d, and consistent with the findings of previous studies \citep{SonLee2006,SonLee2008,MaHassanzadehKuang2017}. Using the linear response function (LRF) of this setup, \citet{HassanzadehKuang2016,HassanzadehKuang2019} showed that this eddy-zonal flow feedback is due to enhanced low-level baroclinicity (as proposed by \citet{Robinson2000} and \citet{Lorenz2001}) and estimated, from a budget analysis, that the positive feedback increases the persistence of the annular mode by a factor of two. In the propagating regime, we find $b_{11} = 0.101$~day$^{-1}$, which is slightly lower than the $b_{11}$ of the non-propagating regime. However, in the propagating regime, we also find strong cross-EOF feedbacks, $b_{12}=0.075$~day$^{-1}$ and $b_{21}=-0.043$~day$^{-1}$, as well as $b_{22}=0.023$~day$^{-1}$. These feedback strengths satisfy the condition for the propagating regime, consistent with the strong cross-correlation between $z_1$ and $z_2$ at long lags (Fig.~\ref{fig:4}b). Comparing the two rows of Table~\ref{tab:5} and Figs.~\ref{fig:2}a and \ref{fig:4}a with Table~\ref{tab:4} and Fig.~\ref{fig:7}a suggests that while it is true that the $b_{11}$ of the idealized GCM's non-propagating regime is larger than that of the SH reanalysis (by a factor of 3.5), the unrealistic persistence of $z_1$ in this setup (time scale $\approx 65$~days) compared to that of the reanalysis (time scale $\approx 10$~days; compare Figs.~\ref{fig:2}a and \ref{fig:7}a) could be, at least partially, due to the absence of cross-EOF feedbacks (and thus oscillations), which, as we showed earlier in Section~3\ref{sec:33}, reduce the persistence of the annular modes. The GCM setup with the propagating regime has a $b_{11}$ that is around 2.7 times larger than that of the SH reanalysis, yet their $z_1$ $e$-folding decorrelation time scales are comparable (14~days vs. 10~days). \begin{table} \caption{Feedback strengths (in day$^{-1}$) estimated for the idealized GCM setups with non-propagating and propagating regimes. The estimated frictional damping time scales are $\tau_1$=7.4 days and $\tau_2$=7.6 days for the GCM setup with the non-propagating regime, and $\tau_1$=7.1 days and $\tau_2$=7.4 days for the GCM setup with the propagating regime (estimated using the methodology in Appendix~A of \citet{Lorenz2001}).} \begin{center} \begin{tabular}{ccccc} \topline Feedback & $b_{11} $ & $b_{12}$ & $b_{21}$ & $b_{22}$ \\ \midline Non-propagating & 0.133 & 0.003 & 0.002 & 0.021 \\ Propagating & 0.101 & 0.075 & -0.043 & 0.023 \\ \botline \end{tabular} \label{tab:5} \end{center} \end{table} These findings show the importance of quantifying and examining cross-EOF feedbacks to fully understand the dynamics and variability of the annular modes and to better evaluate how well GCMs simulate the extratropical large-scale circulation. \section{Concluding remarks} \label{sec:4} The low-frequency variability of the extra-tropical large-scale circulation is often studied using a reduced-order model of the leading EOF of zonal-mean zonal wind. The key component of this model (LH01) is an internal-to-troposphere eddy-zonal flow interaction mechanism which leads to a positive feedback of EOF1 onto itself, thus increasing the persistence of the annular mode \citep{Lorenz2001}. 
However, several studies have shown that under some circumstances, strong couplings exist between EOF1 and EOF2 at some lag times, resulting in decaying-oscillatory, or propagating, annular modes (e.g., \citealt{SonLee2006,SonLee2008,SheshadriPlumb2017}). In the current study, following the methodology of \citet{Lorenz2001} and using data from the SH reanalysis and two setups of an idealized GCM that produce circulations with a dominant non-propagating or propagating regime, we first show strong cross-correlations between EOF1 (EOF2) and the eddy forcing of EOF2 (EOF1) at long lags, suggesting that cross-EOF feedbacks might exist in the propagating regimes. These findings together demonstrate that there is a need to extend the single-EOF model of LH01 and build a model that includes, at a minimum, both leading EOFs and accounts for their cross feedbacks. With similar assumptions and simplifications as used in \citet{Lorenz2001}, we have developed a two-EOF model for propagating annular modes (consisting of a system of two coupled ODEs, Eqs.~(\ref{eq:z1})-(\ref{eq:z2}) with (\ref{eq:m1})-(\ref{eq:m2})) that can account for the cross-EOF feedbacks. In this model, the strength of the feedback of the $k$th EOF onto the $j$th EOF is $b_{jk}$ ($j,k=1,2$). Using the analytical solution of this model, we derive conditions for the existence of the propagating regime based on the feedback strengths. It is shown that the propagating regime, which requires a decaying-oscillatory solution of the coupled ODEs, exists if and only if the criterion $\left ( b_{11}-b_{22} \right )^2 < -4 b_{12} b_{21}$ is satisfied, which in particular requires the cross-EOF feedbacks to have opposite signs ($b_{12} b_{21} <0$). This criterion shows that non-zero cross-EOF feedbacks are essential components of the propagating regime dynamics. Using this model, the idealized GCM, and a stochastic prototype, we further show that cross-EOF feedbacks play an important role in controlling the persistence of the propagating annular modes (i.e., the $e$-folding decorrelation time scale of the zonal index, $z_j$) by setting the frequency of the oscillation $\omega $ (Eq.~(\ref{eq:omega})). Therefore, in this regime, the persistence of the annular mode (EOF1) depends not only on the feedback of EOF1 onto itself, but also on the cross-EOF feedbacks. We find that, as a result of the oscillation, the stronger the cross-EOF feedbacks, the less persistent the annular mode. Applying the coupled-EOF model to the reanalysis data shows the existence of strong cross-EOF feedbacks in the current SH extratropical large-scale circulation. Annular modes have been found to be too persistent compared to observations in GCMs including the IPCC AR4 and CMIP5 models \citep{Gerber2007,GerberPolvani2008,Bracegirdle2020}. This long persistence has often been attributed to a too-strong positive EOF1-onto-EOF1 feedback in the GCMs. The dynamics and strength of this feedback depend on factors such as the mean flow and surface friction \citep{Robinson2000,Lorenz2001,ChenPlumb2009,HassanzadehKuang2019}. External (to the troposphere) influences, e.g., from the stratospheric polar vortex, have also been suggested to affect the persistence of the annular modes \citep{Byrne2016,Saggioro2019}. Our results show that the cross-EOF feedbacks play an important role in the dynamics of the annular modes, and in particular, that their absence or weak amplitudes can increase the persistence, offering another explanation for the too-persistent annular modes in GCMs. 
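To make this criterion concrete, a minimal numerical sketch is given below (hypothetical code, not part of our analysis pipeline). It assumes that the deterministic part of the two-EOF model reduces to $\dot{z}_j=-z_j/\tau_j+\sum_k b_{jk}z_k$, which is consistent with the criterion quoted above when $\tau_1=\tau_2$, and extracts the decay rate and oscillation period numerically from the eigenvalues, equivalently to evaluating Eqs.~(\ref{eq:sigma})-(\ref{eq:omega}) under these assumptions; the inputs are the reanalysis estimates of Table~\ref{tab:4}.
\begin{verbatim}
import numpy as np

# Year-round SH reanalysis estimates (Table 4); tau in days, b_jk in 1/day.
tau = np.array([8.3, 8.4])
b = np.array([[0.038, 0.059],
              [-0.020, 0.017]])

A = b - np.diag(1.0 / tau)       # assumed linear operator of dz/dt = A z
lam = np.linalg.eigvals(A)       # eigenvalues lambda = -sigma +/- i*omega

print("(b11-b22)^2 < -4*b12*b21:",
      (b[0, 0] - b[1, 1])**2 < -4.0 * b[0, 1] * b[1, 0])
if np.abs(lam.imag[0]) > 0:      # propagating (decaying-oscillatory) regime
    print("decay rate sigma (1/day):", -lam.real[0])
    print("oscillation period (days):", 2.0 * np.pi / np.abs(lam.imag[0]))
\end{verbatim}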
Overall, our findings demonstrate that to fully understand the dynamics of the large-scale extratropical circulation and the reason(s) behind the too-persistent annular modes in GCMs, the coupling of the leading EOFs and the cross-EOF feedbacks should be examined using models such as the one introduced in this study. An important next step is to investigate the underlying dynamics of the cross-EOF feedbacks. So far we have pointed out that cross-EOF feedbacks are essential components of the propagating annular modes; however, the propagation itself is likely essential for the existence of cross-EOF feedbacks. In fact, our preliminary results show that the cross-EOF feedbacks result from the out-of-phase oscillations of EOF1 (north-south jet displacement) and EOF2 (jet pulsation) leading to an orchestrated combination of equatorward propagation of wave activity (a baroclinic process) and nonlinear wave breaking (a barotropic process), which altogether act to reduce the total eddy forcings (not shown). In ongoing work, we aim to explain and quantify the dynamics of the propagating annular modes using the LRF framework of \citet{hassanzadeh2016linear2,HassanzadehKuang2016} and the finite-amplitude wave-activity framework \citep{NakamuraZhu2010,LubisHuang2018,LubisHuangNakamura2018}, which have proven useful in understanding the dynamics of the non-propagating annular modes \citep{Nie2014,MaHassanzadehKuang2017,HassanzadehKuang2019}. \acknowledgments We thank Aditi Sheshadri, Ding Ma, and Orli Lachmy for insightful discussions. This work is supported by National Science Foundation (NSF) Grant AGS-1921413. Computational resources were provided by XSEDE (allocation ATM170020), NCAR's CISL (allocation URIC0004), and the Rice University Center for Research Computing.
{ "timestamp": "2020-07-20T02:02:31", "yymm": "2007", "arxiv_id": "2007.08589", "language": "en", "url": "https://arxiv.org/abs/2007.08589" }
\section{Introduction}\label{sec:introduction} As cosmological datasets increase in quantity and quality, so does our capacity to use them to pin down the properties of our universe~\cite{Scott:2018adl}. The error bars on the measurements of cosmological parameters have narrowed over recent years and discrepancies between datasets (or ``tensions'') have begun to emerge. Whilst this is most stark when examining differing observations of the Hubble parameter between early and late time cosmological probes~\cite{2014MNRAS.440.1138E, 2018JCAP...09..025M, 2019ApJ...876...85R}, other more minor tensions arguably exist in clustering parameters between weak lensing and the cosmic microwave background (CMB)~\cite{2019arXiv190105289D, 2020A&A...633A..69H} and in cosmic curvature between the CMB and CMB lensing/Baryon Acoustic Oscillations~\cite{2019arXiv190809139H,2020NatAs...4..196D, 2020MNRAS.496L..91E}. When a substantial tension occurs, it may indicate either a systematic error in how either or both of the datasets have been gathered and analysed, or more excitingly may hint at evidence for new physics if extensions or modifications to our concordance model can bring the inferred parameters back into alignment. In the case of the ``Hubble tension'' where a single obvious cosmological parameter such as the present day expansion rate $H_0$ is discrepant by $\sim5\sigma$, there is little doubt that something is fundamentally wrong. The other tensions are more subtle, in that they are only visible in complicated combinations of the parameters. As shown by \cref{fig:parameters}, in modern cosmology, error bars on the parameters of our universe are represented by high-dimensional Bayesian probability distributions. Visualising a ``distance'' between these degrees of belief is challenging, and in recent years a good deal of theory has been developed for defining a variety of metrics of discrepancy~\cite{2017PhRvD..95l3535C, 2019arXiv190910991L}. The latest Atacama Cosmology Telescope (ACT) data release 4~\cite{2020JCAP...12..047A,2020JCAP...12..045C,2020JCAP...12..046N} represents the most recently acquired CMB data, with two other measurements of the CMB power spectrum across a wide range of multipoles being provided by the \textit{Planck} satellite~\cite{2020A&A...641A...5P,2020A&A...641A...6P}, and the South Pole Telescope (SPT)~\cite{2018ApJ...852...97H}. By eye it is clear that in the ACT data some parameters such as the spectral tilt of the primordial power spectrum $n_s$ are mildly discrepant, but it is always possible in a high dimensional parameter space that such discrepancies occur by chance and are unremarkable. In this letter we discuss how this tension is quantified rigorously using the global Suspiciousness statistic~\citep[][henceforth H19]{2019PhRvD.100d3504H}, and find that ACT is in mild-to-moderate tension with \textit{Planck} and SPT, at a similar or greater level to that found in weak lensing data. We place ACT's own global tension analysis in the context of the tensions literature, and extend it by considering SPT data and further emphasise the perils of focussing too closely on lower-dimensional views onto the cosmological constraints. \section{Methodology}\label{sec:methodology} Quantifying tension between high dimensional posterior distributions is a non-trivial problem, even under the approximation of a Gaussian distribution. 
This has led to a large number of papers describing methods to quantify tension in high dimensional problems~\citep[for reviews, see][]{2017PhRvD..95l3535C, 2019arXiv190910991L}. Working in a Bayesian framework, as most cosmological analyses do, arguably the most natural way to quantify tension is using the Bayes Ratio~\citep{2006PhRvD..73f7302M}, defined as the ratio of the probability that the two datasets are described by a single set of parameters, to the probability that they are described by separate sets of parameters \begin{equation} \label{eq:bayesr} R = \frac{P(A, B)}{P(A) P(B)} = \frac{\mathcal{Z}_{AB}}{\mathcal{Z}_A \mathcal{Z}_B}, \end{equation} where $P$ represents a probability, we have omitted the dependence of both probabilities on an underlying model, such as $\Lambda$CDM, and $\mathcal{Z}$ is the Bayesian Evidence. Furthermore, we have assumed that both data sets are independent, an assumption that we comment on further below. High values of $R$ correspond to concordance, and low values are indicative of discordance, with $R$ often interpreted on a Jeffreys' scale~\citep{jeffreys1939theory, 2018PhRvD..98d3526A}. The main issue with this tension metric, in particular for the analysis of cosmological data sets, is that it is easily proven that $R$ is proportional to the prior volume of the shared parameters. Therefore, $R$ cannot be used for analyses that use deliberately flat and wide uninformative priors, such as the analyses of \textit{Planck}, the Dark Energy Survey~\citep[DES,][]{2018PhRvD..98d3526A}, the Kilo Degree Survey~\citep[KiDS,][]{2020A&A...633A..69H}, ACT, SPT, etc.\ without the arbitrary width of this prior affecting the tension assessment. A more detailed discussion of this point can be found in H19. Motivated by this, H19 defined a new statistic, the {\it Suspiciousness}, which keeps all the desired properties of~\cref{eq:bayesr} but corrects for this undesired dependence on the prior volume. To do so, we divide the Bayes Ratio into two components: Information and Suspiciousness. The Information is defined as: \begin{equation} \label{eq:i} \log I = \mathcal{D}_A + \mathcal{D}_B - \mathcal{D}_{AB}, \end{equation} where $\mathcal{D}$ is the Kullback-Leibler divergence~\cite{KL}. The Information contains the dependence on the prior volume; by removing it, we obtain a statistic that does not depend on the prior, is composed of well-defined Bayesian and information-theoretic quantities, and is therefore covariantly insensitive to reparameterisation of the parameter space. We thus define the Suspiciousness as: \begin{equation} S = \frac{R}{I}. \end{equation} In the language of priors, the Suspiciousness may be interpreted as the most cautious Bayes Ratio $R$, corresponding to the narrowest possible priors that do not significantly alter the shape of the posteriors~\citep{2020MNRAS.496.4647L}. A significant innovation to the field which we highlight here, first noted in appendix F.3 of~\citep{2021A&A...646A.140H} and explored in detail in~\citep{2021arXiv210211511H}, is that since ${\log \mathcal{Z} = \langle \log \mathcal{L} \rangle_\mathcal{P} - \mathcal{D}}$, the suspiciousness can be computed from MCMC chains via \begin{equation} \log S = \langle \log \mathcal{L}_{AB} \rangle_{\mathcal{P}_{AB}} - \langle \log \mathcal{L}_A \rangle_{\mathcal{P}_{A}} - \langle \log \mathcal{L}_B \rangle_{\mathcal{P}_{B}}. 
\label{eqn:chain_def} \end{equation} This observation means that so long as one has posterior samples for each of the datasets run separately and in combination, one may compute the suspiciousness without explicitly computing the Bayesian evidence. However, it should be noted that in non-CMB applications only a portion of the parameters are constrained, resulting in hypersurface-like posteriors which are extremely challenging for traditional posterior samplers, but present little challenge for nested samplers. If, as derived in H19, the posteriors may be approximated as Gaussian in the cosmological parameters (an approximation which is reasonably justified, as shown by \cref{fig:parameters}), with $d$-dimensional means $\mu$ and covariances $\Sigma$, then the Suspiciousness is: \begin{align} \log S &= \frac{d}{2} -\frac{\chi^2}{2}, \label{eqn:logS}\\ \chi^2 &= {(\mu_A-\mu_B)}^{T}{(\Sigma_{A}+\Sigma_{B})}^{-1}(\mu_A-\mu_B). \label{eqn:chi2} \end{align} This may be turned into a tension probability via the survival function of the chi-squared distribution \begin{equation} p = \int\limits_{\chi^2}^\infty \frac{x^{d/2-1}e^{-x/2}}{2^{d/2}\Gamma(d/2)} \d{x}, \label{eqn:p} \end{equation} and calibrated as a $\sigma$-tension by analogy with the Gaussian case using the inverse of the complementary error function: \begin{equation} \sigma(p) = \sqrt{2}\,\mathrm{erfc}^{-1}(p). \label{eqn:sigma} \end{equation} Note that, while several methods to quantify tension have been proposed in recent years, they are often built to recover \cref{eqn:p} and \cref{eqn:sigma} in the case of Gaussian posterior distributions. Therefore, if this work were performed using tension metrics such as Monte-Carlo Parameter Shifts~\cite{2020PhRvD.101j3527R}, Parameter Shifts in Update Form~\cite{2019PhRvD..99d3506R}, or EigenTension~\cite{2020MNRAS.499.4638P}, we would expect to obtain very similar, if not the same, results under the Gaussian approximation used in this work. This is also equivalent to the multivariate measure of tension used in the ACT paper~\cite{2020JCAP...12..047A}. It should be noted that alternative measures of tension have also been defined and explored that are specialised for the case when two datasets are correlated~\citep{2019MNRAS.484.3126K, 2020PhRvD.101j3527R}. In particular, \cite{2020MNRAS.496.4647L} extended the formalism described in this section to the case of correlated data sets. Applying this to the case of CMB datasets such as \textit{Planck}, ACT, SPT and WMAP (which are correlated by virtue of their measuring the same sky) will form the subject of a future paper. \vspace{-20pt} \section{Data}\label{sec:data} In this work we analyse the three latest CMB data sets: \textit{Planck}, SPT and ACT. As with all cosmological analyses, when considering combining or comparing them at the likelihood level we implicitly assume that the datasets are independent, even though this may not strictly be true. Examining the effect of relaxing this assumption will form the subject of future work. It should also be noted that the prior treatment for $\tau$ differs across the three collaborations; one of the aims of future work will be to treat this in a consistent manner for all three cases. The ACT analysis uses a CMB-derived prior for $\tau$, so there is correlation between the posteriors for the $\tau$ parameter. 
A more complete analysis could adjust the tension in either direction, since correlations in $\tau$ act to reduce the dimensionality to less than $d=6$, increasing the tension~\cite{2019PhRvD.100d3504H,2019PhRvD.100b3512H,2019MNRAS.484.3126K}, but since $\tau$ is a degeneracy-breaking parameter it can have dramatic effects in moving the relative locations of posteriors, increasing or reducing tension. \subsection{Planck} \vspace{-10pt} The \textit{Planck} mission~\cite{2020A&A...641A...1P} was a space observatory that measured the CMB for four years between 2009 and 2013. \textit{Planck} observed the sky in nine frequencies, between 30 and 857 GHz, with the goal of detecting both temperature and polarization anisotropies, and accurately removing foreground effects. \textit{Planck} measured the power spectrum of temperature anisotropies at multipoles $ \ell \in (2, 2508)$, and of E-mode polarization at multipoles $\ell \in (2, 1996)$, providing the most powerful constraints on the parameters of the $\Lambda$CDM cosmological model to date. Beyond the already-mentioned tensions in $H_0$, cosmic curvature, and with weak lensing, the most puzzling aspect of the \textit{Planck} analysis is arguably the $A_{\rm L}$ parameter\footnote{Often known as the `lensing parameter' or $A_{\rm lens}$, but we refrain from using these names as we believe they can be misleading}. $A_{\rm L}$ was introduced for internal consistency checks~\citep{2008PhRvD..77l3531C}, and can smooth the peaks of the \textit{Planck} power spectrum. \textit{Planck}~\citep{2020A&A...641A...1P} reports a value $A_{\rm L} = 1.180 \pm 0.065$ for the combination of temperature and polarization, meaning that the data seem to prefer more smoothing of the peaks than the best-fit $\Lambda$CDM cosmology provides. While it has been discussed that this could be caused by a statistical fluctuation, especially since the significance is lower for different versions of the likelihood~\citep{2019arXiv191000483E}, it has also been hypothesised that it could be a hint of new physics \cite{2020NatAs...4..196D}, although no theoretical model that produces this effect exists in the literature. It is important to point out that, while this effect is similar to that of CMB lensing, \textit{Planck} lensing measurements \cite{2020A&A...641A...8P} are compatible with $A_{\rm L} = 1$. Throughout this letter we use the \textit{Planck} legacy archive chains derived using the baseline TTTEEE+low$\ell$+lowE+lensing, and have confirmed that our conclusions are insensitive to excluding the lensing portion of the likelihood. \vspace{-10pt} \subsection{South Pole Telescope} \vspace{-10pt} We make use of the South Pole Telescope measurements of temperature and polarization from the 500 square degree analysis of their SPTpol instrument~\cite{2018ApJ...852...97H}. This analysis used data at 150 GHz to produce power spectra for the E-mode polarization (EE) and the temperature-E-mode cross-spectrum (TE). It should be noted that the TE and EE spectra have been identified as being in disagreement, so strictly this internal combination should also be viewed with suspicion. The main advantage of SPTpol with respect to \textit{Planck} is its higher resolution, which allows it to measure much smaller scales, covering a multipole range $\ell \in (50,8000)$. However, because of its smaller sky coverage, SPTpol cannot obtain information on large scales, and as a consequence produces parameter constraints that are weaker than those from \textit{Planck}. 
SPT~\citep{2018ApJ...852...97H} reports constraints that differ from \textit{Planck}'s, in particular when only SPTpol's high multipoles are used, but the significance of this reported discrepancy is not quantified. \textit{Planck}~\cite{2020A&A...641A...1P} used a parameter difference statistic, and found no evidence for statistical inconsistencies between the two analyses. Curiously, performing an $A_{\rm L}$ analysis on SPTpol yields a value lower than one, $A_{\rm L} = 0.81 \pm 0.14$. \vspace{-10pt} \subsection{Atacama Cosmology Telescope} \vspace{-10pt} Finally, we use the Atacama Cosmology Telescope (ACT) posterior samples\footnote{\url{phy-act1.princeton.edu/public/zatkins/ACTPol_lcdm_1.txt}} from Data Release 4 (DR4), which used 6000 square degrees at 98 and 150 GHz to produce power spectra for temperature and polarization extending to $\ell = 4000$. Their results by eye appear to be in tension with \textit{Planck}, and ACT~\citep{2020JCAP...12..047A} report a global tension with \textit{Planck} consistent with that recovered in this paper. \begin{figure*} \centerline{% \includegraphics{parameters.pdf} } \caption{Measurements of the six parameters of the concordance $\Lambda$CDM model using data from the South Pole Telescope (SPT, blue), the Atacama Cosmology Telescope (ACT, orange) and the \textit{Planck} satellite (green). Plots along the diagonal show one-dimensional marginalised probability distributions normalised to equal height, those below the diagonal show iso-probability contours containing $68\%$ and $95\%$ of the 2d marginal probability mass, and those above the diagonal show samples drawn from the full probability distribution. Of the six cosmological parameters, ACT visually stands out in tension from the other two most clearly in the $n_s-\Omega_bh^2$ plane. We can artificially emphasise this further by computing and plotting the linear combination coordinate of maximum tension between ACT and \textit{Planck} ${t=-\Omega_b h^2 + 0.022 \Omega_c h^2 + 34\theta_{MC} -0.092 \tau + 0.05 {\rm{ln}}(10^{10} A_s) + 0.067 n_s}$, which by construction will have a tension of $\chi=4.15\sigma$. Marginalised plots can therefore over-emphasise tension by ignoring the other active coordinates, but the headline statistics in \cref{tab:tension} are derived from considering the entire distribution as a whole. Plot produced with \texttt{anesthetic}~\cite{2019JOSS....4.1414H}\label{fig:parameters}} \vspace{30pt} \end{figure*} \section{Results}\label{sec:results} Our results are summarised in \cref{tab:tension} and \cref{fig:parameters}. When the Suspiciousness tension quantification technique is applied to the ACT data products in comparison with the \textit{Planck} baseline, we find a tension probability of ${p=0.86\%}$, with a corresponding Gaussian-calibrated tension of $2.63\sigma$. This level of discrepancy is generally termed mild-to-moderate, and is comparable with some of the larger tensions found between weak lensing and CMB data~\citep[H19,][]{2020A&A...633A..69H}. The degree of discrepancy between \textit{Planck} and ACT is consistent with the level of tension reported by ACT~\cite{2020JCAP...12..047A}. It is important to note that a global tension quantification such as Suspiciousness does not depend on any specific direction choice in parameter space, nor on the choice of parameters. It also naturally takes into account the fact that, with $d=6$ parameters, it is not improbable that some would be in strong marginal tension by chance. 
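For concreteness, the headline numbers above can be reproduced from the Gaussian expressions in \cref{eqn:logS,eqn:chi2,eqn:p,eqn:sigma} with a few lines of code. The sketch below is purely illustrative (standard \texttt{numpy}/\texttt{scipy} routines only; the means and covariances are assumed to have been estimated from the public chains), and the final function is the chain-based estimator of \cref{eqn:chain_def}:
\begin{verbatim}
import numpy as np
from scipy import stats
from scipy.special import erfcinv

def gaussian_tension(mu_A, cov_A, mu_B, cov_B):
    # Gaussian Suspiciousness: chi^2, log S, tension probability p,
    # and Gaussian-equivalent sigma, as in the equations above.
    d = len(mu_A)
    delta = np.asarray(mu_A) - np.asarray(mu_B)
    chi2 = delta @ np.linalg.solve(cov_A + cov_B, delta)
    log_S = d / 2.0 - chi2 / 2.0
    p = stats.chi2.sf(chi2, df=d)
    sigma = np.sqrt(2.0) * erfcinv(p)
    return chi2, log_S, p, sigma

def suspiciousness_from_chains(logL_A, logL_B, logL_AB):
    # Chain-based estimator: posterior-averaged log-likelihoods from
    # equal-weight MCMC samples of each separate and combined run.
    return np.mean(logL_AB) - np.mean(logL_A) - np.mean(logL_B)
\end{verbatim}
Given means and covariances matching the public chains, \texttt{gaussian\_tension} reproduces the first row of \cref{tab:tension} ($\chi^2=17.2$, $p=0.86\%$, $2.63\sigma$).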
One can make the point about marginal tensions explicit by computing an artificial parameter $t$, defined as the linear combination of the other parameters $\theta$ which maximises tension. In the Gaussian case this ``maximum tension parameter'' may be computed as \begin{equation} t \propto {(\mu_A-\mu_B)}^{T} {\left(\Sigma_A + \Sigma_B\right)}^{-1} \theta, \label{eqn:tmax} \end{equation} and by construction will have a one-dimensional marginalised tension of $\chi$, which in the case of consistency takes the value $\chi\sim\sqrt{d}\pm 1/\sqrt{2}$. Maximum tension coordinates will be discussed in greater detail in an upcoming work~\cite{liam}. Marginalised one and two-dimensional projections of the posteriors and the \textit{Planck}-ACT tension coordinate are summarised in \cref{fig:parameters}. It is easy for the eye to be drawn to certain projections where the marginalised tension is large, but as the maximum tension coordinate demonstrates, these can be misleading. We emphasise that the Suspiciousness synthesises all of the posterior information correctly into a single intuitive summary statistic. Comparing ACT with SPT~\cite{2013PhRvD..88b3501D}, we find a slightly lower mild-to-moderate tension of $2.37\sigma$ ($p=1.8\%$). Interestingly, comparing SPT with \textit{Planck} we find no significant evidence for tension ($p=16.8\%$), in contradiction with some of the historical literature~\cite{2018ApJ...852...97H}, and in agreement with \cite{2020A&A...641A...1P}. Since SPT and \textit{Planck} are consistent, we may confidently combine these datasets. In the absence of a full pipeline run, we combine the Gaussian posterior approximations using Eqs.~(14)--(20) from H19. This \textit{Planck}+SPT combination is in $2.79\sigma$ tension with ACT ($p=0.52\%$), well into the ``moderate'' regime. Since ACT is in mild-to-moderate tension with both \textit{Planck} and SPT, we should be suspicious of combining it with either, but when we do, as in the final two rows of \cref{tab:tension}, we find no significant evidence for tension, although the tension is still higher than when comparing \textit{Planck} and SPT. In \cref{tab:tension} we also report the $\chi^2$ values for each data combination, and the Suspiciousness $\log S$ for reference. As $\log S$ can be regarded as the most conservative value $\log R$ can take by adjusting priors, it is interesting that all values are negative, reflecting the fact that all of the tension probabilities are a little low; one would traditionally expect $p$ to be uniformly distributed in a frequentist sense, and in general for $d=6$ one would expect positive values of $\log S$ 58\% of the time. In \cref{tab:true_tension} we compare the full non-Gaussian tension evaluated using \cref{eqn:chain_def} and find the Gaussian approximation to be a slight underestimate of the tension. \begin{table} \begin{tabular}{ccccc} Dataset combination & $\chi^2$ & $p$ & tension & $\log S$ \\ \hline \hline ACT vs \textit{Planck} & $17.2$ & $0.86\%$ & $2.63\sigma$ & $-5.60$ \\ ACT vs SPT & $15.4$ & $1.77\%$ & $2.37\sigma$ & $-4.68$ \\ \textit{Planck} vs SPT & $9.1$ & $16.82\%$ & $1.38\sigma$ & $-1.55$ \\ ACT vs \textit{Planck}+SPT & $18.4$ & $0.52\%$ & $2.79\sigma$ & $-6.22$ \\ \hline ACT+SPT vs \textit{Planck} & $12.2$ & $5.81\%$ & $1.90\sigma$ & $-3.09$ \\ ACT+\textit{Planck} vs SPT & $10.3$ & $11.09\%$ & $1.59\sigma$ & $-2.17$ \\ \end{tabular} \caption{Global tensions between CMB datasets. 
For each pairing of datasets we report the $\chi^2$ value calculated using \cref{eqn:chi2}, the corresponding tension probability $p$ from \cref{eqn:p} that such datasets would be this discordant by (Bayesian) chance, a conversion into a Gaussian-equivalent tension using \cref{eqn:sigma}, and finally the Suspiciousness from \cref{eqn:logS}. Addition signs in the left column indicate combining the datasets at the likelihood level, and combinations below the line should be viewed with suspicion on account of their discordance reported above the line.\label{tab:tension}} \end{table} \begin{table} \begin{tabular}{cccc} ACT vs \textit{Planck} tension metric & $p$ & tension & $\log S$ \\ \hline True Suspiciousness \cref{eqn:chain_def} & $0.57\%$ & $2.76\sigma$ & $-6.10$ \\ Gaussian approximation \cref{eqn:logS} & $0.86\%$ & $2.63\sigma$ & $-5.60$ \\ \end{tabular} \caption{We compare the tension computed using the full non-Gaussian expression from \cref{eqn:chain_def} with the tension computed via the Gaussian approximation. Note that in both cases, for this application, since all parameters are well-constrained, all that is required are publicly available MCMC chains.\label{tab:true_tension}} \end{table} \vspace{-10pt} \section{Conclusions}\label{sec:conclusions} \vspace{-10pt} In general the cause of a tension can be one of three things: (a) a statistical fluctuation, (b) systematics in at least one of the experiments, or (c) evidence for new physics. Given that we confidently launch manned space missions with higher failure rates than these tensions\footnote{``Bet someone's life'' probabilities can be computed using data from \url{http://www.spacelaunchreport.com/logyear.html}}, as Bayesians we should be very concerned that our CMB measurements are in this much disagreement, and should view statistical fluctuations at this level as a very unsatisfactory explanation. The general view (or hope) of many members of the cosmological community at the moment is that the cause of all of these tensions is likely a combination of (b) and (c), and before anyone can claim any kind of new physics we need to get a stronger handle on the systematics in many of our cosmological probes. As mentioned earlier, this analysis can and will be improved by using a full pipeline of evidences and KL divergences computed using nested sampling~\cite{Skilling}, as well as by using techniques that are specialised for dealing with correlated datasets. However, we would like to draw practitioners' attention in particular to \cref{eqn:chain_def}, which allows them to compute the Suspiciousness using only MCMC chains. In this letter we do not seek to pass judgement on any of the \textit{Planck}, ACT, or SPT analyses. Indeed, it could be argued that given the quality of all three analyses, it is more likely that these discrepancies indicate a problem with the underlying cosmology rather than with any of the independent pipelines. Combined with the many other tensions emerging between other datasets, the discrepancy quantified in this work lends credence to the possibility that before long we may yet see a paradigm shift in our understanding of the universe. \section{Acknowledgements} \begin{acknowledgements} WH thanks Gonville \& Caius College for their support via a Research Fellowship. PL thanks STFC \& UCL for their support via a STFC Consolidated Grant. Many thanks are accorded to Lukas Hergt for invaluable contributions to the \texttt{anesthetic} package, and to Erminia Calabrese and Daan Meerburg for comments on an early draft. 
\end{acknowledgements} \bibliographystyle{unsrtnat}
{ "timestamp": "2021-04-20T02:34:25", "yymm": "2007", "arxiv_id": "2007.08496", "language": "en", "url": "https://arxiv.org/abs/2007.08496" }
\section{Introduction}\label{sec:introduction}} \begin{figure*}[hbt!] \begin{centering} \includegraphics[width=2\columnwidth]{fig/framework-01.pdf} \par\end{centering} \caption{\label{fig:framework} The inference process of our proposed framework. We extract features of each labeled and unlabeled instance, train a linear classifier with the support set, provide pseudo-labels for the unlabeled instances, and use ICI to select the most trustworthy subset to expand the support set. This process is repeated until all the unlabeled data are included in the support set.} \end{figure*} \IEEEPARstart{H}{umans} \revise{are able to efficiently perform visual recognition by learning from a single example or a single exposure.} For example, children have no problem forming the concept of ``giraffe'' by only taking a glance at a picture in a book~\cite{wang2020generalizing}, or hearing its description as looking like a deer with a long neck~\cite{zhang2017learning}. In contrast, \revise{the most successful recognition systems, deep learning based in particular}~\cite{krizhevsky2012imagenet,simonyan2014very,he2016deep,huang2017densely} still highly rely on an avalanche of labeled training data. \revise{This is problematic. It inevitably} increases the burden of rare data collection (\textit{e.g.}~accident data in the autonomous driving scenario) and expensive data annotation (\textit{e.g.}~disease data for medical diagnosis), and more fundamentally limits their scalability to open-ended learning of the long-tail categories in the real world. Motivated by these observations, there has been a recent resurgence of research interest in few-shot learning~\cite{finn2017model,snell2017prototypical,sung2018learning,vinyals2016matching}. It aims to recognize new objects with extremely limited training data for each category. \revise{ To address this issue, the key idea is to train the model by transferring knowledge from a disjoint but relevant dataset. Typically, the model trained on the \emph{source/base} dataset, which includes many labeled instances, is expected to be well generalizable to the \emph{target/novel} dataset with only scarce labeled data. } A key challenge for few-shot learning is how to transfer the learned knowledge to new tasks. The simplest strategy is fine-tuning~\cite{yosinski2014transferable}, which utilizes the limited training instances to update the learned models. In practice, this inevitably causes severe overfitting, as one or a few instances are insufficient to model the data distributions of the novel classes. Data augmentation and regularization techniques~\cite{chen2019image,chen2019multi} can alleviate overfitting in such a limited-data regime, but they do not solve it. Several recent efforts leverage the learning-to-learn, or meta-learning~\cite{lemke2015metalearning}, paradigm by simulating the few-shot scenario in the training process~\cite{vinyals2016matching, snell2017prototypical, oreshkin2018tadam, sung2018learning, sung2017learning, finn2017model, li2017meta, nichol2018first, rusu2018meta}. However, Chen \textit{et al.}~\cite{DBLP:journals/corr/abs-1904-04232} empirically argue that such a learning paradigm often results in inferior performance compared to a simple baseline with a linear classifier coupled with a deep feature extractor. This phenomenon is also verified in~\cite{Liu_2020_CVPR_Workshops}. 
In real-world applications, unlabeled instances are easier and cheaper to obtain than labeled instances, which usually require expensive human annotation. Potentially we could utilize the unlabeled instances to alleviate the data-scarcity problem and help learn the few-shot model. Specifically, two types of strategies resort to modeling the data distribution of the novel categories beyond traditional \emph{inductive} few-shot learning: (i) semi-supervised few-shot learning (SSFSL)~\cite{liu2018learning,ren2018meta,sun2019learning} supposes that we can utilize unlabeled data to help learn the model; furthermore, (ii) transductive inference~\cite{joachims1999transductive} for few-shot learning (TFSL)~\cite{liu2018learning,qiao2019transductive} assumes we can access all the test data, rather than evaluating them one by one in the inference process. In other words, the few-shot learning model can utilize the data distributions of the testing examples. Self-taught learning~\cite{self-taught-learning} is one of the most straightforward ways to leverage the information of unlabeled data. Typically, a trained classifier infers the pseudo-labels of unlabeled data, which are further taken to update the classifier. \revise{ Nevertheless, the inferred pseudo-labels may be very noisy; the wrongly labeled instances may jeopardize the performance of the classifier. } It is thus essential to investigate the labeling confidence of each unlabeled instance. To this end, we present a statistical approach, dubbed Instance Credibility Inference (ICI), to exploit the distribution support of unlabeled instances for few-shot learning. Specifically, we first train a simple linear classifier (\textit{e.g.}~logistic regression, or a linear support vector machine) with the labeled few-shot examples and use it to infer the pseudo-labels for the unlabeled instances. \revise{ The credibility of each pseudo-labeled instance is measured by the proposed ICI. Then the most trustworthy subset can be selected and added to the support set. } The simple classifier thus can be progressively updated (re-trained) by the expanded support set and further infer pseudo-labels for the unlabeled data. This process is repeated until all the unlabeled instances are iteratively selected to expand the support set, \textit{i.e.}~the pseudo-label of each unlabeled instance has converged. The schematic illustration is shown in Fig.~\ref{fig:framework}, and a code sketch of this loop is given below. Basically, we re-purpose the standard self-taught learning algorithm with our proposed ICI algorithm. How can we select the pseudo-labeled data and exclude the wrongly predicted samples, \textit{i.e.}, exclude the noise introduced by the self-taught learning strategy? Our intuition is that the credibility criterion can rely solely neither on the manifold structure of the feature space (\textit{e.g.}~instances that are close to labeled instances under a certain distance metric) nor on the label space (\textit{e.g.}~the prediction score provided by the classifier). Instead, we propose to solve the hypothesis of (generalized) linear models (\textit{i.e.}~linear regression or logistic regression) by progressively increasing the sparsity of the data-dependent incidental parameter~\cite{fan2018partial} until it vanishes. Thus we can credit each pseudo-labeled instance by the sparsity of the corresponding incidental parameter. 
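For intuition, the overall inference loop of Fig.~\ref{fig:framework} can be written schematically as follows (a hypothetical sketch, not our exact implementation: \texttt{rank\_by\_credibility} stands in for the ICI ranking developed in the Methodology section, and the per-iteration expansion size is illustrative):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def ici_inference(X_s, y_s, X_u, rank_by_credibility, per_iter=5):
    # Schematic self-taught loop of Fig. 1: train a simple linear
    # classifier, pseudo-label, keep the most credible instances,
    # expand the support set, and repeat until the pool is empty.
    pool = list(range(len(X_u)))
    while pool:
        clf = LogisticRegression(max_iter=1000).fit(X_s, y_s)
        pseudo = clf.predict(X_u[pool])
        # indices into `pool`, ordered from most to least trustworthy
        order = list(rank_by_credibility(X_s, y_s, X_u[pool], pseudo))
        take = order[:per_iter]
        X_s = np.vstack([X_s, X_u[[pool[i] for i in take]]])
        y_s = np.concatenate([y_s, pseudo[take]])
        pool = [p for i, p in enumerate(pool) if i not in set(take)]
    return LogisticRegression(max_iter=1000).fit(X_s, y_s)
\end{verbatim}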
We prove that under the conditions of restricted eigenvalue, irrepresentability, and large error, our proposed method is able to collect \emph{all} the correctly-predicted pseudo-labeled instances. We conduct extensive experiments on major few-shot learning benchmark datasets to validate the effectiveness of our proposed algorithm. \noindent \textbf{Contributions.} The contributions of this work are as follows. \noindent (i) We present a statistical approach, dubbed Instance Credibility Inference (ICI), to exploit the distribution support of unlabeled instances for few-shot learning. Specifically, our model iteratively selects the pseudo-labeled instances according to their credibility measured by the proposed ICI for classifier training. \noindent (ii) We re-purpose the standard self-taught learning algorithm~\cite{self-taught-learning} with our proposed ICI. To measure the credibility of each pseudo-labeled instance, we solve the LM/GLM hypothesis by increasing the sparsity of the incidental parameter~\cite{fan2018partial} and regard the sparsity level as the credibility of each pseudo-labeled instance. \noindent (iii) Under the conditions of restricted eigenvalue, irrepresentability, and large error, we can prove that our method collects \emph{all} the correctly-predicted pseudo-labeled instances. \noindent (iv) Extensive experiments under two few-shot settings show \revise{the effectiveness of our approach} on four widely used few-shot learning benchmark datasets including \textit{mini}ImageNet, \textit{tiered}ImageNet, CIFAR-FS, and CUB. \noindent \textbf{Extensions.} A preliminary version of this work was published in~\cite{wang2020instance}. We have extended our conference version as follows. \noindent (i) We provide a theoretical analysis of ICI to answer the question of \textit{under what conditions can ICI find \textbf{all} the correctly-predicted instances}? \noindent (ii) We show that our ICI can be extended to generalized linear models, in particular, a \emph{logistic regression model with sparse incidental parameters}. In particular, our experiments show the effectiveness of such a logistic regression model with sparsity regularization for ICI. \section{Related work} \subsection{Semi-supervised learning} Semi-supervised learning (SSL) aims to improve the learning performance with both labeled and unlabeled instances. Basic assumptions in semi-supervised learning include the continuity, cluster, and manifold assumptions. Conventional approaches focus on finding decision boundaries with both labeled and unlabeled data~\cite{vapnik1998statistical,bennett1999semi,joachims1999transductive}, and on avoiding learning ``wrong'' knowledge from the unlabeled data~\cite{li2014towards} based on specific hypotheses. Recently, deep semi-supervised learning models have used consistency regularization~\cite{conf/iclr/LaineA17}, moving-average techniques~\cite{tarvainen2017mean}, and adversarial perturbation regularization~\cite{miayto2016virtual} to train the model with a large amount of unlabeled data. \revise{ The task of semi-supervised few-shot learning is an extension of SSL to the setting of few-shot learning, where only limited labeled target instances are available. Critically, as explained in~\cite{ren2018meta}, vanilla SSL is solved in the standard supervised learning setting, whilst SSFSL addresses a transfer learning task. 
} \subsection{Self-taught learning} Self-taught learning~\cite{self-taught-learning}, also known as self-training~\cite{NoisyStudent}, is a traditional semi-supervised strategy of utilizing unlabeled data to improve the performance of classifiers~\cite{amini2002semi,grandvalet2005semi}. Self-taught learning algorithms often start by training an initial recognition model and inferring the pseudo-labels of unlabeled instances; the pseudo-labeled instances are then taken to re-train the recognition model with specific strategies~\cite{lee2013pseudo}. Deep learning based self-taught learning strategies include (i) directly training the neural network with both labeled instances and pseudo-labeled instances~\cite{lee2013pseudo}, (ii) utilizing mix-up images between labeled instances and pseudo-labeled instances to synthesize training instances with less noise~\cite{arazo2019pseudo}, (iii) utilizing indirect ways to infer the pseudo-labels of unlabeled instances (for example, using label propagation constructed on the nearest-neighbor graph and selecting the trustworthy subset based on entropy~\cite{iscen2019label}), and (iv) introducing inductive bias (\textit{e.g.}~adding a cluster assumption on the feature space and re-weighting the pseudo-labeled instances based on this assumption~\cite{shi2018transductive}). One of the key points in self-taught learning algorithms is how to reduce the noise introduced by the imperfect recognition models. Different from previous works, we measure the credibility of each pseudo-labeled instance by a statistical algorithm. Only the most trustworthy subset is employed to re-train the recognition model jointly with the labeled instances. \subsection{Learning with noisy labels} \revise{ There are many works on learning with noisy labels~\cite{angluin1988learning}. Noisy labels indicate that the provided label may not be the true class of the instance. Such noise may come from annotation errors, mismatches of the search engine, or pseudo-labels in the self-taught learning process. Typical approaches in learning with noisy labels~\cite{song2020learning} include robust loss functions~\cite{ghosh2017robust}, robust architectures~\cite{goldberger2016training}, robust regularization~\cite{jenni2018deep}, loss adjustment~\cite{chang2017active, ICML2019_UnsupervisedLabelNoise}, and sample selection~\cite{song2020robust}. } \revise{Sample selection aims to find a clean subset of the noisy dataset to prevent the negative impact of noise. In deep learning based approaches, a popular assumption is that when the network is under-fitted, the losses of noisy samples are larger than those of clean samples. O2u-net~\cite{huang2019o2u} cyclically changes the learning rate of the network to satisfy the under-fitting condition, measures the loss of each sample, and excludes the noisy subset. ODD~\cite{song2020robust} uses a large learning rate to exclude the samples with higher losses.} \revise{However, almost all of these algorithms are based on the inherent assumption that a large number of training samples are accessible. Further, they mainly focus on the standard supervised learning setting. In contrast, SSFSL focuses on transfer learning tasks. } \subsection{Few-shot learning} Few-shot learning aims to recognize novel visual categories from very few labeled examples. Recent efforts mainly follow the meta-learning strategy. That is, by simulating the few-shot scenario in the training process, algorithms learn to learn with limited data. 
\revise{ We can roughly categorize existing works on few-shot learning into the following groups. (i) Learning robust and discriminative distance metrics, including weighted nearest neighbor classifiers (\textit{e.g.}~Matching Network~\cite{vinyals2016matching}), finding a robust prototype for each class (\textit{e.g.}~Prototypical Network~\cite{snell2017prototypical}), learning task-dependent metrics (\textit{e.g.}~TADAM~\cite{oreshkin2018tadam}), and learning parameterized metrics via neural networks~\cite{sung2018learning}. (ii) Finding optimal initialization parameters that can rapidly adapt to a specific task, including Meta-Critic~\cite{sung2017learning}, MAML~\cite{finn2017model}, Meta-SGD~\cite{li2017meta}, Reptile~\cite{nichol2018first}, and LEO~\cite{rusu2018meta}. (iii) Data augmentation strategies, which aim to alleviate the problem of limited data by directly synthesizing new data at the image level~\cite{chen2019image} or the feature level~\cite{chen2019multi}. Additionally, SNAIL~\cite{mishra2018a} utilizes sequence modeling to create a new framework. The proposed statistical algorithm is orthogonal and potentially beneficial to these algorithms -- it is always worth increasing the training set by utilizing the unlabeled data with confidently predicted labels. } \subsection{Few-shot learning with unlabeled data} \revise{ Recent works~\cite{hou2019cross,hu2020exploiting,lichtenstein2020tafssl,yang2020dpgn,hu2020leveraging,kye2020transductive} start to tackle few-shot learning with additional unlabeled instances. Compared with the traditional inductive setting, algorithms trained with unlabeled instances have the chance to handle a more trustworthy empirical distribution. Ren~\emph{et al.}~\cite{ren2018meta} utilized the unlabeled data to refine the prototype of each class. Liu~\emph{et al.}~\cite{liu2018learning} utilized a label propagation strategy to transfer labels based on the relative distances among labeled and unlabeled data. DPGN~\cite{yang2020dpgn} adopts contrastive comparisons to produce distribution representations.} \revise{Self-taught learning is also utilized in SSFSL. For example, LST~\cite{sun2019learning} uses the self-taught learning strategy in the transductive inference setting and trains the model in a meta-learning manner. CAN~\cite{hou2019cross} uses self-taught learning to train the model repeatedly within a specifically designed network. TAFSSL~\cite{lichtenstein2020tafssl} reduces the dimension of sample features to obtain a simpler manifold and constructs a specific self-taught learning algorithm based on the low-dimensional manifold. Compared with those algorithms, our approach is much simpler and theoretically guaranteed. Unlike previous meta-learning algorithms, which usually have pre-training, meta-training, and meta-testing processes~\cite{sun2019learning}, our approach only modifies the inference process. } \subsection{Incidental parameters} The incidental parameters problem~\cite{neyman1948consistent} has been tackled by penalized estimation algorithms~\cite{fan2010selective}. It assumes the existence of sparse data-dependent parameters in the estimation models. For example, the linear regression model with incidental parameters follows $y_i=x_i^{\top}\beta^{*}+\gamma_i^{*}+\varepsilon_i$, where $\left(x_i,y_i\right)$ denotes the data input, $\beta^{*}$ denotes the traditional coefficients, $\varepsilon_i$ denotes the random noise, and $\gamma_i^{*}$ is the introduced data-dependent incidental parameter. 
Prior works address this problem by estimating coefficients which are robust against the incidental parameters~\cite{neyman1948consistent, kiefer1956consistency, basu2011elimination, moreira2008maximum, fan2018partial}. Fu~\emph{et al.}~\cite{fu2015robust} introduce the incidental parameter in a robust ranking task. In this paper, we propose to solve the few-shot learning problem based on the intuition that the incidental parameters indicate the credibility of pseudo-labeled instances. We do so by utilizing a weak estimation of the coefficients to enlarge the influence of the incidental parameters and transform a \revise{``\emph{generalized linear model with incidental parameters}'' into an ordinary ``\emph{generalized linear}'' model } whose coefficients are the former incidental parameters. Then we estimate the incidental parameters along the regularization path to obtain the credibility of the corresponding instance. We further provide the theoretical properties of ICI. \section{Methodology} \subsection{Problem formulation} Here we define the few-shot learning problem mathematically. We are provided a base category set and a novel category set, denoted as $\mathcal{C}_{base}$ and $\mathcal{C}_{novel}$, respectively. The two category sets have no common category\footnote{Note that here and below we ignore another validation set for model selection since we could regard it as the novel set that is accessible in the training process.}, \textit{i.e.}, $\mathcal{C}_{base}\bigcap\mathcal{C}_{novel}=\emptyset$. Within each category set, we have a corresponding dataset, denoted as $\mathcal{D}_{base}=\left\{ \left(\bm{I}_{i},y_{i}\right),y_{i}\in\mathcal{C}_{base}\right\}$ and $\mathcal{D}_{novel}=\left\{ \left(\bm{I}_{i},y_{i}\right),y_{i}\in\mathcal{C}_{novel}\right\}$, respectively. With the above notations, few-shot learning algorithms aim to train on $\mathcal{D}_{base}$ and acquire the capacity of rapidly \revise{adapting} to $\mathcal{D}_{novel}$ with access to only one or a few labeled instances per class. For evaluation, we adopt the standard \emph{$c$-way-$s$-shot} classification as in \cite{vinyals2016matching} on $\mathcal{D}_{novel}$. Specifically, in each episode, we randomly sample $c$ classes to construct our category pool $\mathcal{C}$, that is, $\mathcal{C}\sim \mathcal{C}_{novel},\left|\mathcal{C}\right|=c$; and $s$ and $q$ labeled images per class are randomly sampled in $\mathcal{C}$ to construct the support set $\mathcal{S}$ and the query set $\mathcal{Q}$, respectively. Thus we have $\left|\mathcal{S}\right|=c\times s$ and $\left|\mathcal{Q}\right|=c\times q$. The classification accuracy is averaged over the query sets $\mathcal{Q}$ of many meta-testing episodes. In addition, we have unlabeled data of novel categories $\mathcal{U}_{novel}=\left\{ \bm{I}_{u}\right\} $. \subsection{Self-taught learning from unlabeled data} We recap the self-taught learning formalism~\cite{self-taught-learning} for tackling the few-shot learning problem with unlabeled data. Particularly, denote $f\left(\cdot\right)$ as the feature extractor trained on $\mathcal{D}_{base}$. In one episode, one can train a supervised classifier $g\left(\cdot\right)$ on the support set $\mathcal{S}$, and pseudo-label the unlabeled data, $\hat{y}_{i}=g\left(f\left(\bm{I}_{u}\right)\right)$, with corresponding confidence $p_{i}$. The most confident unlabeled instances are further taken as additional data of the corresponding classes in the support set $\mathcal{S}$. Thus we obtain the updated supervised classifier $g\left(\cdot\right)$. 
In this way, the few-shot classifier acquires additional training instances, and thus its performance can be improved. However, directly utilizing self-taught learning in few-shot cases is problematic. Particularly, the supervised classifier $g\left(\cdot\right)$ is trained on only a few instances. The unlabeled instances with high confidence may not be correctly categorized, and the classifier may then be updated with wrongly labeled instances. Even worse, one cannot assume that the unlabeled instances follow the same class labels or generative distribution as the labeled data. Noisy instances or outliers may also be utilized to update the classifiers. To address this, we propose a systematic algorithm, Instance Credibility Inference (ICI), to reduce the noise. \subsection{Instance credibility inference (ICI)} To measure the credibility of predicted labels over unlabeled data, we introduce a linear model hypothesis that regresses each instance from the feature space to the label space. Particularly, we are given $n$ instances of $c$ classes, $\mathcal{S}=\left\{ \left(\bm{I}_{i},y_{i},\bm{x}_{i}\right),y_{i}\in\mathcal{C}_{novel}\right\} $, where $y_i$ is the ground truth when $\bm{I}_{i}$ comes from the support set, or the pseudo-label when $\bm{I}_{i}$ comes from the unlabeled set; \revise{$\bm{x}_{i}$ is the feature vector of instance $i$}. We employ a simple linear regression model to ``predict'' the class label, \begin{equation} \bm{y}_{i}=\bm{x}_{i}^{\top}\bm{\beta}^{*}+\bm{\gamma}_{i}^{*}+\bm{\varepsilon}_{i}\label{eq:lm}, \end{equation} where $\bm{\beta}^{*}\in\mathcal{\mathbb{R}}^{d\times c}$ is the coefficient matrix; $\bm{x}_{i}\in\mathcal{\mathbb{R}}^{d\times1}$; $\bm{y}_{i}$ is a $c$-dimensional one-hot vector denoting the class label of instance $i$; and \minor{ $\varepsilon_{ij}$ is independent sub-Gaussian noise with zero mean and variance bounded by $\sigma^2$ }. Note that to facilitate the computations, we employ Locally Linear Embedding (LLE)~\cite{roweis2000nonlinear} to reduce the dimension of the extracted feature $f(\bm{I}_{i})$ to $d$. Inspired by incidental parameters~\cite{fan2018partial}, we introduce $\gamma_{i,j}^{*}$ to amend the chance of instance $i$ belonging to class $j$. The larger the \revise{magnitude} $\left|\gamma_{i,j}^{*}\right|$, the higher the difficulty in attributing instance $i$ to class $j$. Considering the linear regression model for all instances, we solve the problem \begin{equation} \underset{\bm{\beta},\bm{\gamma}}{\mathrm{argmin}}\sum_{i=1}^{n}\left[\frac{1}{2}\left\Vert\bm{y}_{i}-\bm{x}_{i}^{\top}\bm{\beta}-\bm{\gamma}_{i}\right\Vert_{2}^{2}+\lambda R\left(\bm{\gamma}_{i}\right)\right]\label{eq:loss_func_instances}, \end{equation} where $R\left(\cdot\right)$ is the sparsity penalty, \textit{e.g.}, $R\left(\bm{\gamma}_i\right)=\sum_{j=1}^c\left|\bm{\gamma}_{i,j}\right|$. Re-writing Eq.~\eqref{eq:loss_func_instances} in matrix form, we are thus solving \begin{equation} \left(\hat{\bm{\beta}},\hat{\bm{\gamma}}\right)=\underset{\bm{\beta},\bm{\gamma}}{\mathrm{argmin}}\frac{1}{2}\left\Vert\bm{Y}-\bm{X}\bm{\beta}-\bm{\gamma}\right\Vert _{\operatorname{F}}^{2}+\lambda R\left(\bm{\gamma}\right)\label{eq:loss_func}, \end{equation} where $\left\Vert \cdot \right\Vert _{\operatorname{F}}^{2}$ denotes the Frobenius norm, and $\bm{Y}=[\bm{y}_{i}^{\top}]^{\top}\in\mathcal{\mathbb{R}}^{n\times c}$ and $\bm{X}=[\bm{x}_{i}]^{\top}\in\mathcal{\mathbb{R}}^{n\times d}$ indicate the label and feature inputs, respectively.
$\bm{\gamma}=[\bm{\gamma}^{\top}_{i}]^{\top}\in\mathcal{\mathbb{R}}^{n\times c}$ is the incidental matrix, and $\lambda$ is the coefficient of the penalty term $R\left(\cdot \right)$. To solve Eq.~\eqref{eq:loss_func}, we set the derivative with respect to $\bm{\beta}$ equal to $0$, which gives \begin{equation} \hat{\bm{\beta}}=\left(\bm{X}^{\top}\bm{X}\right)^{\dagger}\bm{X}^{\top}\left(\bm{Y}-\bm{\gamma}\right)\label{eq:beta}, \end{equation} \noindent where $\left(\cdot\right)^{\dagger}$ denotes the Moore-Penrose pseudo-inverse. Note that (i) we are interested in utilizing $\bm{\gamma}$ to measure the credibility of each instance along its regularization path, rather than in estimating $\hat{\bm{\beta}}$, since the linear regression model is generally not good enough for classification; and (ii) $\hat{\bm{\beta}}$ also relies on the estimation of $\bm{\gamma}$. We therefore substitute Eq.~\eqref{eq:beta} into Eq.~\eqref{eq:loss_func} and solve \begin{equation} \underset{\bm{\gamma}\in\mathbb{R}^{n\times c}}{\mathrm{argmin}}\frac{1}{2}\left\Vert \bm{Y}-\bm{H}\left(\bm{Y}-\bm{\gamma}\right)-\bm{\gamma}\right\Vert _{\operatorname{F}}^{2}+\lambda R\left(\bm{\gamma}\right), \end{equation} where $\bm{H}=\bm{X}\left(\bm{X}^{\top}\bm{X}\right)^{\dagger}\bm{X}^{\top}$. We further define $\tilde{\bm{X}}=\bm{I}-\bm{H}$ and $\tilde{\bm{Y}}=\tilde{\bm{X}}\bm{Y}$. Then the above equation simplifies to \begin{equation} \hat{\bm{\gamma}} = \underset{\bm{\gamma}\in\mathbb{R}^{n\times c}}{\mathrm{argmin}}\frac{1}{2}\left\Vert \tilde{\bm{Y}}-\tilde{\bm{X}}\bm{\gamma}\right\Vert _{\operatorname{F}}^{2}+\lambda R\left(\bm{\gamma}\right),\label{eq:penalty} \end{equation} which is a multi-response regression problem. Particularly, we regard $\hat{\bm{\gamma}}$ as a function of $\lambda$. As $\lambda$ changes from $0$ to $\infty$, the sparsity of $\hat{\bm{\gamma}}$ increases until all of its elements are forced to vanish. Further, we use the penalty $R\left(\bm{\gamma}\right)$ to encourage $\bm{\gamma}$ to vanish row by row, \textit{i.e.}, instance by instance. For example, $R\left(\bm{\gamma}\right)=\sum_{i=1}^n\sum_{j=1}^c\left|\bm{\gamma}_{i,j}\right|$ or $R\left(\bm{\gamma}\right)=\sum_{i=1}^n\left\Vert\bm{\gamma}_{i}\right\Vert_2$. Moreover, the penalty \revise{tends to} first vanish the rows of $\bm{\gamma}$ corresponding to instances with the lowest deviations, indicating less discrepancy between the prediction and the ground truth. Hence we can rank the pseudo-labeled data by the \emph{smallest} $\lambda$ value at which the corresponding $\hat{\gamma}_i$ vanishes. As shown in the toy example of Figure~\ref{fig:illu}, the $\hat{\bm{\gamma}}$ value of the instance denoted by the red line vanishes first, and thus it is deemed the most trustworthy sample by our algorithm. We seek the best subset by checking the regularization path, \textit{i.e.}~$\hat{\bm{\gamma}}(\lambda)$ as $\lambda$ varies, which can be readily computed by the block coordinate descent algorithm implemented in Glmnet~\cite{simon2013blockwise}. Specifically, we can find $\lambda_{max}=\underset{i}{\max}\left\Vert\tilde{\bm{X}}_{\cdot i}^{\top}\tilde{\bm{Y}}\right\Vert _{2}/n$ to guarantee \revise{that the solution of Eq.~\eqref{eq:penalty} is identically $0$}. We then take a list of $\lambda$s from $0$ to $\lambda_{max}$, solve Eq.~\eqref{eq:penalty} for each $\lambda$, and obtain the regularization path of $\bm{\gamma}$ along the way.
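The ranking step can be sketched as follows (a minimal, illustrative implementation assuming \texttt{numpy} and scikit-learn's \texttt{MultiTaskLasso}, whose $\ell_{2,1}$ penalty matches the row-wise choice $R(\bm{\gamma})=\sum_{i=1}^n\left\Vert\bm{\gamma}_{i}\right\Vert_2$ up to a rescaling of $\lambda$; the grid size and tolerance are arbitrary choices):
\begin{verbatim}
# Rank instances by the smallest lambda at which their row of
# gamma vanishes along the regularization path of Eq. (6).
import numpy as np
from sklearn.linear_model import MultiTaskLasso

def ici_rank(X, Y, n_lambdas=50, tol=1e-8):
    """X: (n, d) reduced features; Y: (n, c) one-hot (pseudo-)labels.
    Returns instance indices, most trustworthy first."""
    n = X.shape[0]
    H = X @ np.linalg.pinv(X.T @ X) @ X.T        # hat matrix
    X_tilde = np.eye(n) - H
    Y_tilde = X_tilde @ Y
    lam_max = max(np.linalg.norm(X_tilde[:, i] @ Y_tilde)
                  for i in range(n)) / n
    lams = np.geomspace(1e-4 * lam_max, lam_max, n_lambdas)
    vanish_at = np.full(n, np.inf)
    for lam in lams:                             # ascending lambdas
        gamma = MultiTaskLasso(alpha=lam, fit_intercept=False,
                               max_iter=5000).fit(X_tilde, Y_tilde).coef_.T
        zero_rows = np.linalg.norm(gamma, axis=1) < tol
        newly = zero_rows & np.isinf(vanish_at)
        vanish_at[newly] = lam                   # first lambda of vanishing
    return np.argsort(vanish_at)                 # smaller = more credible
\end{verbatim}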
\begin{figure} \begin{centering} \includegraphics[width=0.8\columnwidth]{fig/illu.pdf} \par\end{centering} \caption{\label{fig:illu} Regularization path of $\lambda$ on ten samples. The red line corresponds to the most trustworthy sample suggested by our ICI algorithm.} \end{figure} \subsection{Extension to logistic regression\label{sec:extension-lr}} \begin{figure*}[!ht] \includegraphics[width=1\textwidth]{fig/qualitative-images-v2.pdf} \caption{\label{fig:qualitative-images} \revise{New images selected per class in each iteration of an inference episode on \textit{mini}ImageNet. The averaged test accuracy is on the left, while the test accuracy of each class is listed at the bottom of the corresponding images in each iteration. In each iteration, the correctly predicted instances of each class are placed on the left, and the incorrectly predicted ones on the right. For each class, we select at most $5$ images. Note that in some iterations the number of remaining unlabeled instances of a class is smaller than $5$; the missing images were incorrectly predicted as other classes. } } \end{figure*} In the above section, we developed ICI with a linear regression model. However, the basic idea -- measuring the credibility of a pseudo-labeled instance by the sparsity level of the corresponding incidental parameters along the regularization path -- is general and not limited to the linear regression model. To show this, in this section we extend ICI to generalized linear models, in particular the logistic regression model. Recall that we have $\bm{Y}=[\bm{y}_{i}^{\top}]^{\top}\in\mathcal{\mathbb{R}}^{n\times c}$ and $\bm{X}=[\bm{x}_{i}]^{\top}\in\mathcal{\mathbb{R}}^{n\times d}$ as our label matrix and feature matrix, respectively. We use $\bm{\beta}^{*}\in\mathcal{\mathbb{R}}^{d\times c}$ as the coefficient matrix and $\bm{\gamma}^{*}=\left[\bm{\gamma}_{i}\right]\in\mathcal{\mathbb{R}}^{n\times c}$ as the incidental matrix. Then our logistic model with incidental parameters can be formulated as \begin{equation} \bm{Y}_{i,j} = \frac{\exp \left(\bm{X}_{i\cdot}\bm{\beta}^{*}_{\cdot j}+\bm{\gamma}^{*}_{i,j}\right)}{\sum_{l=1}^{c}\exp \left(\bm{X}_{i\cdot}\bm{\beta}^{*}_{\cdot l}+\bm{\gamma}^{*}_{i,l}\right)}+\bm{\varepsilon}_{i,j}. \label{eq:logit-origin} \end{equation} This can be reformulated into a standard logistic regression model with sparsity regularization. Specifically, we define $\bar{\bm{X}}=\left(\bm{X}, \bm{I}\right)\in\mathbb{R}^{n\times(d + n)}$ and $\bar{\bm{\beta}}^{*}=\left(\bm{\beta}^{*}, \bm{\gamma}^{*}\right)^{\top}\in\mathbb{R}^{(d + n)\times c}$, in which $\bm{I}$ is the identity matrix. Then we have \begin{equation} \bar{\bm{X}}_{i\cdot}\bar{\bm{\beta}}^{*}_{\cdot j}= \left(\bm{X}_{i\cdot}, \bm{I}_{i\cdot}\right) \left(\bm{\beta}^{*}_{\cdot j}, \bm{\gamma}^{*}_{\cdot j}\right)^{\top}= \bm{X}_{i\cdot}\bm{\beta}^{*}_{\cdot j}+\bm{\gamma}^{*}_{i,j}. \end{equation} Hence we can reformulate Eq.~\eqref{eq:logit-origin} as \begin{equation} \bm{Y}_{i,j} = \frac{\exp \left(\bar{\bm{X}}_{i\cdot}\bar{\bm{\beta}}^{*}_{\cdot j}\right)} {\sum_{l=1}^{c}\exp \left(\bar{\bm{X}}_{i\cdot}\bar{\bm{\beta}}^{*}_{\cdot l}\right)}+\bm{\varepsilon}_{i,j}, \label{eq:logit-reformualted} \end{equation} which is exactly a logistic regression model.
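The reformulation amounts to appending an identity block to the design matrix, so the incidental parameters become ordinary coefficients of a sparse multinomial logistic regression. A minimal sketch (assuming scikit-learn; note that \texttt{LogisticRegression} with an $\ell_1$ penalty regularizes $\bm{\beta}$ and $\bm{\gamma}$ jointly, i.e., it corresponds to the special case $\lambda_1=\lambda_2$ of the penalized objective given next):
\begin{verbatim}
# Turn the incidental-parameter logistic model into a standard
# sparse multinomial logistic regression on [X, I].
import numpy as np
from sklearn.linear_model import LogisticRegression

def ici_logistic_gamma(X, y, lam):
    """X: (n, d) features; y: (n,) integer pseudo-labels;
    lam: sparsity strength. Returns the (n, c) incidental block."""
    n, d = X.shape
    X_bar = np.hstack([X, np.eye(n)])      # (n, d + n) augmented design
    clf = LogisticRegression(penalty="l1", C=1.0 / lam, solver="saga",
                             fit_intercept=False, max_iter=5000)
    clf.fit(X_bar, y)
    beta_bar = clf.coef_                   # (c, d + n)
    return beta_bar[:, d:].T               # incidental parameters gamma
\end{verbatim}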
Our objective is the penalized negative log-likelihood function: \begin{equation} \label{eq:logistic-regression} \begin{aligned} \underset{\bar{\bm{\beta}}=\left(\bm{\beta}, \bm{\gamma}\right)^{\top}}{\mathrm{argmin}} &- \frac{1}{n}\sum_{i=1}^n \left(\sum_{l=1}^{c}\bm{Y}_{i,l}\left(\bar{\bm{X}}_{i,\cdot}\bar{\bm{\beta}}_{\cdot,l}\right) -\log \left(\sum_{l=1}^{c}e^{ \bar{\bm{X}}_{i,\cdot}\bar{\bm{\beta}}_{\cdot,l}}\right)\right) \\ &+\lambda_1 R\left(\bm{\beta}\right) + \lambda_2 R\left(\bm{\gamma}\right). \end{aligned} \end{equation} The algorithm for solving Eq.~\eqref{eq:logistic-regression} is well established~\cite{zhu1997algorithm,fan2008liblinear,yu2011dual,simon2013blockwise}. Note that, unlike the linear regression version where we can calculate a closed-form solution for $\bm{\beta}$, here the penalty on $\bm{\beta}$ is necessary; otherwise we will not obtain a unique solution, \textit{i.e.}, the problem is ill-posed~\cite{tikhonov1977solutions}. For example, assume that we have a $\lambda_2$ large enough to force all elements of $\bm{\gamma}$ to vanish. Then the problem degenerates to normal logistic regression with coefficients $\bm{\beta}$. Suppose we have an optimal solution $\bm{\beta}^*$, and we replace the $k$-th row $\bm{\beta}^*_{k,\cdot}$ by $\bm{\beta}^*_{k,\cdot}+\varepsilon \bm{1}^\top$, where $\varepsilon$ is some scalar. Then we have \begin{equation} \hat{\bm{Y}}_{i,l\mid_{\bm{\beta}^{*}_{k,\cdot}+\varepsilon \bm{1}^\top}} =\frac{e^{\bm{X}_{i\cdot}\bm{\beta}^{*}_{\cdot l}+x_{i,k}\varepsilon}} {\sum_{m=1}^{c}e^{\bm{X}_{i\cdot}\bm{\beta}^*_{\cdot m}+x_{i,k}\varepsilon}} =\frac{e^{\bm{X}_{i\cdot}\bm{\beta}^*_{\cdot l}}} {\sum_{m=1}^{c}e^{\bm{X}_{i\cdot}\bm{\beta}^*_{\cdot m}}} =\hat{\bm{Y}}_{i,l\mid_{\bm{\beta}^{*}_{k,\cdot}}}, \end{equation} since the common factor $e^{x_{i,k}\varepsilon}$ cancels. Hence, to obtain a unique solution, we must place some penalty on $\bm{\beta}$. We use a partial Newton algorithm~\cite{simon2013blockwise} to solve this optimization problem. As in the linear regression model, we use a list of $\lambda$s to calculate the regularization path of $\bm{\gamma}$. \begin{algorithm} \textbf{Input}: support data $\left\{ \left(\bm{X}_{i},\bm{y}_{i}\right)\right\} _{i=1}^{c\times s}$, query data $\bm{X}_{t}=\left\{ \bm{X}_{j}\right\} _{j=1}^{M}$, unlabeled data $\bm{X}_{u}=\left\{ \bm{X}_{k}\right\} _{k=1}^{U}$ \textbf{Initialization}: support set $\left(\bm{X}_{s},\bm{Y}_{s}\right)=\left\{ \left(\bm{X}_{i},\bm{y}_{i}\right)\right\} _{i=1}^{c\times s}$, feature matrix $\bm{X}_{c\times s+U,d}=\left[\bm{X}_{s};\bm{X}_{u}\right]$, a classifier \textbf{Repeat:} Train the classifier using $\left(\bm{X}_{s},\bm{Y}_{s}\right)$; Get pseudo-labels $\bm{Y}_u$ for $\bm{X}_u$ by the classifier; Rank $\left(\bm{X},\bm{Y}\right)=\left(\bm{X},[\bm{Y}_s;\bm{Y}_u]\right)$ by ICI; Select a subset $\left(\bm{X}_{\mathrm{sub}},\bm{Y}_{\mathrm{sub}}\right)$ into $\left(\bm{X}_{s},\bm{Y}_{s}\right)$; \textbf{Until Converged.} \textbf{Inference:} Train the classifier using $\left(\bm{X}_{s},\bm{Y}_{s}\right)$; Get pseudo-labels $\bm{Y}_t$ for $\bm{X}_t$ by the classifier; \textbf{Output}: inference labels $\bm{Y}_{t}=\left\{ \hat{\bm{y}}_{j}\right\} _{j=1}^{M}$ \caption{\label{alg:Inference-process.}Inference process of our algorithm.} \end{algorithm} \subsection{Self-taught learning with ICI} The proposed ICI can thus be easily integrated to improve the self-taught learning algorithm.
Particularly, the initialized classifier predicts pseudo-labels for the unlabeled instances, and we further employ the ICI algorithm to select the most confident subset of unlabeled instances to update the classifier. The whole algorithm is iterated, as summarized in Algorithm~\ref{alg:Inference-process.}. We also show a qualitative result for an inference episode in Fig.~\ref{fig:qualitative-images}. Intuitively, ICI amounts to fitting a line using the observations $\left(\bm{x}_{i},\bm{y}_{i}\right)_{i=1}^{n}$, which contain outliers. \revise{Starting} from the labeled instances, we search for the most probable inliers among the pseudo-labeled instances in each iteration. When we solve for the line along the regularization path (from $\lambda_{max}$ to $\lambda_{min}$), the estimated line approaches the more linearly separable subset, resulting in $\left\Vert\bm{\gamma}_i\right\Vert=0$ for instances in this subset and $\left\Vert\bm{\gamma}_i\right\Vert>0$ for the others. We can then use the linearly separable subset to improve the linear classifier. Furthermore, the fitted line cannot provide the right labels for the outliers, hence the re-training and re-inference processes are essential to turn outliers into inliers. \section{Identifiability of ICI} In this part, we provide a theory for the identifiability of ICI with the linear regression model. Our theory is based on the model selection consistency of linear regression with $\ell_1$-sparsity regularization~\cite{zhao2006model,wainwright2009sharp}. Our purpose here is to answer the question: \textit{under which conditions can we find the correctly predicted instances}? Recall that our intuition is that $\bm{\gamma}_{i,j}$ can be regarded as a correction of the chance that instance $i$ belongs to class $j$. Suppose $\bm{\gamma}^*$ is the ground truth. If the pseudo-labeled instance $i$ is correctly predicted, then we have $\bm{\gamma}^*_{i,j}=0,\forall j \in \left\{ 1,\ldots,c\right\}$. On the contrary, if the instance is wrongly predicted, then we should have $\bm{\gamma}^*_{i,j}\neq0$ for some $j$. We start by reformulating the derivation from Eq.~\eqref{eq:loss_func} to Eq.~\eqref{eq:penalty} via another, decoupled representation of solving for $\bm{\beta}$ and $\bm{\gamma}$. \minor{ Recall that the linear regression model with incidental parameters is \begin{equation} \bm{Y}=\bm{X}\bm{\beta}^{*}+\bm{\gamma}^{*}+\bm{\varepsilon}, \end{equation} where $\bm{Y}\in\left\{ 0,1\right\} ^{n\times c},\bm{X}\in\mathbb{R}^{n\times d},\bm{\beta}^{*}\in\mathbb{R}^{d\times c},\bm{\gamma}^{*}\in\mathbb{R}^{n\times c},\bm{\varepsilon}\in\mathbb{R}^{n\times c}$. We are solving the problem \begin{equation} \underset{\bm{\beta},\bm{\gamma}}{\mathrm{argmin}}\frac{1}{2}\left\Vert \bm{Y}-\bm{X}\bm{\beta}-\bm{\gamma}\right\Vert _{\mathrm{F}}^{2}+\lambda\sum_{i=1}^{n}\sum_{j=1}^{c}\left|\gamma_{i,j}\right|. \end{equation} With this formulation, one can vectorize the problem and transfer it into the single-response regression case. Denote the vectorization operator for $\bm{A}\in\mathbb{R}^{m\times n}$ as $\mathrm{vec}\left(\bm{A}\right)\coloneqq\left(a_{1,1},\ldots,a_{m,1},a_{1,2},\ldots,a_{m,2},\ldots,a_{1,n},\ldots,a_{m,n}\right)^{\top}$; then \begin{equation} \mathrm{vec}\left(\bm{Y}\right)=\left(\bm{I}_{c}\otimes\bm{X}\right)\mathrm{vec}\left(\bm{\beta}^{*}\right)+\mathrm{vec}\left(\bm{\gamma}^{*}\right)+\mathrm{vec}\left(\bm{\varepsilon}\right), \end{equation} where $\otimes$ is the Kronecker product operator. }
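This identity can be checked numerically (a small sketch assuming \texttt{numpy}; the column-major $\mathrm{vec}(\cdot)$ above matches \texttt{ravel(order="F")}, and all sizes are arbitrary):
\begin{verbatim}
# Numerical check of vec(Y) = (I_c kron X) vec(beta) + vec(gamma)
# + vec(eps) for the column-stacking vectorization operator.
import numpy as np

n, d, c = 8, 3, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
beta = rng.normal(size=(d, c))
gamma = rng.normal(size=(n, c))
eps = rng.normal(size=(n, c))
Y = X @ beta + gamma + eps

vec = lambda A: A.ravel(order="F")       # column-major vec(.)
X_kron = np.kron(np.eye(c), X)           # I_c (x) X, shape (nc, dc)
assert np.allclose(vec(Y), X_kron @ vec(beta) + vec(gamma) + vec(eps))
\end{verbatim}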
\minor{ We denote $\vec{\bm{y}}=\mathrm{vec}\left(\bm{Y}\right)\in\left\{ 0,1\right\} ^{nc},\bm{X}_{\otimes}=\left(\bm{I}_{c}\otimes\bm{X}\right)\in\mathbb{R}^{nc\times dc},\vec{\bm{\beta}}=\mathrm{vec}\left(\bm{\beta}\right)\in\mathbb{R}^{dc},\vec{\bm{\gamma}}=\mathrm{vec}\left(\bm{\gamma}\right)\in\mathbb{R}^{nc},\vec{\bm{\varepsilon}}=\mathrm{vec}\left(\bm{\varepsilon}\right)\in\mathbb{R}^{nc}$. We are now solving the problem \begin{equation} \underset{\vec{\bm{\beta}},\vec{\bm{\gamma}}}{\mathrm{argmin}}\frac{1}{2}\left\Vert \vec{\bm{y}}-\bm{X}_{\otimes}\vec{\bm{\beta}}-\vec{\bm{\gamma}}\right\Vert _{\mathrm{2}}^{2}+\lambda\left\Vert \vec{\bm{\gamma}}\right\Vert _{1}. \end{equation} } We perform the singular value decomposition (SVD) of $\bm{X}_{\otimes}$ as $\bm{X}_{\otimes}=\bm{U}\bm{\Sigma} \bm{V}^{\top}$, where $\bm{U}\in\mathbb{R}^{nc\times nc},\ \bm{\Sigma}\in\mathbb{R}^{nc\times dc},\ \bm{V}\in\mathbb{R}^{dc\times dc}$. Recall that $d$ is the reduced dimension of the original features, hence we have $d\ll n$. Thus we can partition $\bm{U}$ into $\bm{U}=\left[\bm{U}_1,\bm{U}_2\right]$, where $\bm{U}_1$ is an orthogonal basis of the column space of $\bm{X}_{\otimes}$. Then we have $\bm{U}^\top\bm{U}=\bm{U}\bm{U}^\top=\bm{I}$ and $\bm{U}_2^\top \bm{X}_{\otimes}=0$. Hence \begin{equation} \label{eq:thm-loss} \begin{aligned} L\coloneqq&\left\Vert\vec{\bm{y}}-\bm{X}_{\otimes}\vec{\bm{\beta}}-\vec{\bm{\gamma}}\right\Vert _{2}^{2} =\left\Vert\bm{U}^\top\left(\vec{\bm{y}}-\bm{X}_{\otimes}\vec{\bm{\beta}}-\vec{\bm{\gamma}}\right)\right\Vert _{2}^{2}\\ =&\left\Vert\bm{U}_1^\top\vec{\bm{y}}-\bm{U}_1^\top\bm{X}_{\otimes}\vec{\bm{\beta}}-\bm{U}_1^\top\vec{\bm{\gamma}}\right\Vert _{2}^{2} +\left\Vert\bm{U}_2^\top\vec{\bm{y}}-\bm{U}_2^\top\vec{\bm{\gamma}}\right\Vert _{2}^{2}. \end{aligned} \end{equation} Again, we set the derivative with respect to $\vec{\bm{\beta}}$ equal to $0$, obtaining \begin{equation} \hat{\vec{\bm{\beta}}}=\left(\bm{X}_{\otimes}^{\top}\bm{X}_{\otimes}\right)^{\dagger}\bm{X}_{\otimes}^{\top}\left(\vec{\bm{y}}-\vec{\bm{\gamma}}\right). \label{eq:thm-beta} \end{equation} \revise{ Note that since $\partial L/\partial\minor{\hat{\vec{\bm{\beta}}}}=0$, we have \begin{equation} \bm{X}_{\otimes}^{\top}\bm{U}_{1}\left(\bm{U}_{1}^{\top}\vec{\bm{y}}-\bm{U}_{1}^{\top}\bm{X}_{\otimes}\minor{\hat{\vec{\bm{\beta}}}}-\bm{U}_{1}^{\top}\vec{\bm{\gamma}}\right)=0. \end{equation} Denote $\mathrm{rank}\left(\bm{X}_{\otimes}\right)=k$; then $\bm{X}_{\otimes}^{\top}\bm{U}_{1}\in\mathbb{R}^{dc\times k}$, $\bm{U}_{1}^{\top}\vec{\bm{y}}-\bm{U}_{1}^{\top}\bm{X}_{\otimes}\minor{\hat{\vec{\bm{\beta}}}}-\bm{U}_{1}^{\top}\vec{\bm{\gamma}}\in\mathbb{R}^{k\times 1}$, and $\mathrm{rank}\left(\bm{X}_{\otimes}^{\top}\bm{U}_{1}\right)=k$ by definition. Using Sylvester\textquoteright s rank inequality, we have \begin{equation} \begin{aligned} &\mathrm{rank}\left(\bm{X}_{\otimes}^{\top}\bm{U}_{1}\right)+\mathrm{rank}\left(\bm{U}_{1}^{\top}\vec{\bm{y}}-\bm{U}_{1}^{\top}\bm{X}_{\otimes}\minor{\hat{\vec{\bm{\beta}}}}-\bm{U}_{1}^{\top}\vec{\bm{\gamma}}\right)-k\\ \leq&\mathrm{rank}\left(\bm{X}_{\otimes}^{\top}\bm{U}_{1}\left(\bm{U}_{1}^{\top}\vec{\bm{y}}-\bm{U}_{1}^{\top}\bm{X}_{\otimes}\minor{\hat{\vec{\bm{\beta}}}}-\bm{U}_{1}^{\top}\vec{\bm{\gamma}}\right)\right)=0. \end{aligned} \end{equation} Hence \begin{equation} \mathrm{rank}\left(\bm{U}_{1}^{\top}\vec{\bm{y}}-\bm{U}_{1}^{\top}\bm{X}_{\otimes}\minor{\hat{\vec{\bm{\beta}}}}-\bm{U}_{1}^{\top}\vec{\bm{\gamma}}\right)=0.
\end{equation} Hence the first term of $L$ equals $0$, and we are now solving the problem } \begin{equation} L\left(\vec{\bm{\gamma}}\right)=\left\Vert\bm{U}_2^\top\vec{\bm{y}}-\bm{U}_2^\top\vec{\bm{\gamma}}\right\Vert _{2}^{2}+\lambda \left\Vert \vec{\bm{\gamma}}\right\Vert _{1}. \label{eq:another-penalty} \end{equation} Eq.~\eqref{eq:another-penalty} is equivalent to Eq.~\eqref{eq:penalty} but provides another interpretation: the incidental parameters (after a projection) seek a sparse approximation of $\bm{U}_2^\top\vec{\bm{y}}$. Based on this, we can answer the question: \textit{under which conditions can we recover the true support set of $\vec{\bm{\gamma}}$}? Formally, let $S=\mathrm{supp}\left(\vec{\bm{\gamma}}^*\right)$ and $\hat{S}=\mathrm{supp}\left(\hat{\vec{\bm{\gamma}}}\right)$, where $\vec{\bm{\gamma}}^*$ is the \revise{ground-truth} prediction error, $\hat{\vec{\bm{\gamma}}}$ is the estimator provided by our algorithm, \minor{ and $\mathrm{supp}\left(\vec{\bm{\gamma}}\right)=\{i\mid\vec{\bm{\gamma}}_{i}\neq 0\}$. Recall that our goal is to find the wrongly predicted instances. Hence we further define the ground-truth wrongly-predicted set $O=\left\{ i\vert\gamma_{i,j}^{*}\neq0,\textrm{ for some }j\in\left[c\right]\right\}$ and its estimator $\hat{O}=\left\{ i\vert\hat{\gamma}_{i,j}\neq0,\textrm{ for some }j\in\left[c\right]\right\}$. } For simplicity, \revise{ we denote $\vec{\bm{y}}_{u}=\bm{U}_2^\top \vec{\bm{y}}$ and $\tilde{\bm{U}}=\bm{U}_2^\top$. } Furthermore, denote $\tilde{\bm{U}}_S$ ($\tilde{\bm{U}}_{S^c}$) as the column vectors of $\tilde{\bm{U}}$ whose indices are in $S$ ($S^c$), respectively. We are solving the problem \begin{equation} \label{eq:thm-problem} \min_{\vec{\bm{\gamma}}} \left\Vert\vec{\bm{y}}_{u}-\tilde{\bm{U}}\vec{\bm{\gamma}}\right\Vert _{2}^{2}+\lambda \left\Vert \vec{\bm{\gamma}}\right\Vert _{1}. \end{equation} \minor{ Recall that the linear regression model states that, for the ground-truth values $\vec{\bm{\beta}}^{*}$ and $\vec{\bm{\gamma}}^{*}$, \begin{equation} \vec{\bm{y}}=\bm{X}_{\otimes}\vec{\bm{\beta}}^{*}+\vec{\bm{\gamma}}^{*}+\vec{\bm{\varepsilon}}, \end{equation} and hence \begin{equation} \tilde{\bm{U}}\vec{\bm{y}}=\tilde{\bm{U}}\left(\bm{X}_{\otimes}\vec{\bm{\beta}}^{*}+\vec{\bm{\gamma}}^{*}+\vec{\bm{\varepsilon}}\right). \end{equation} Since $\tilde{\bm{U}}\bm{X}_{\otimes}=\bm{U}_2^\top\bm{X}_{\otimes}=0$, we have \begin{equation} \vec{\bm{y}}_{u}=\tilde{\bm{U}}\vec{\bm{y}}=\tilde{\bm{U}}\vec{\bm{\gamma}}^{*}+\tilde{\bm{U}}\vec{\bm{\varepsilon}}=\tilde{\bm{U}}_{S}\vec{\bm{\gamma}}_{S}^{*}+\tilde{\bm{U}}\vec{\bm{\varepsilon}},\label{eq:y-ground-truth} \end{equation} where $\vec{\bm{\varepsilon}}$ is the sub-Gaussian noise assumed in the linear regression model. } Further, let $\mu_{\tilde{\bm{U}}}=\underset{i\in S^{c}}{\max}\left\Vert \tilde{\bm{U}}_{i}\right\Vert _{2}^{2}$. We make three assumptions: \noindent(C1: Restricted eigenvalue) \begin{equation} \lambda_{\min }\left(\tilde{\bm{U}}_{S}^{\top} \tilde{\bm{U}}_{S}\right)=C_{\min }>0. \end{equation} (C2: Irrepresentability) $\exists\ \eta\in\left(0,1\right]$, \begin{equation} \left\|\tilde{\bm{U}}_{S^{c}}^{\top} \tilde{\bm{U}}_{S}\left(\tilde{\bm{U}}_{S}^{\top} \tilde{\bm{U}}_{S}\right)^{-1} \right\|_{\infty} \leq 1-\eta.
\end{equation} (C3: Large error) \begin{equation} \vec{\bm{\gamma}}_{\min }:=\min _{i \in S}\left|\vec{\bm{\gamma}}_{i}^{*}\right|> h\left(\lambda, \eta, \tilde{\bm{U}}, \vec{\bm{\gamma}}^{*}\right), \end{equation} where \begin{equation} h\left(\lambda, \eta, \tilde{\bm{U}}, \vec{\bm{\gamma}}^{*}\right)=\frac{\lambda\eta}{\sqrt{C_{\min } \mu_{\tilde{\bm{U}}}}}+\lambda\left\|\left(\tilde{\bm{U}}_{S}^{\top} \tilde{\bm{U}}_{S}\right)^{-1} \operatorname{sign}\left(\vec{\bm{\gamma}}_{S}^{*}\right)\right\|_{\infty} \end{equation} \revise{and $\left\Vert \bm{A}\right\Vert_{\infty}\coloneqq\max_{i}\sum_j\left|A_{i,j}\right|$.} Based on these conditions, we provide the following theorem: \begin{thm}[\minor{Identifiability of ICI}] \label{thm:sufficiency} Let \begin{equation*} \lambda \geq \frac{2 \sigma \sqrt{\mu_{\tilde{\bm{U}}}}}{\eta } \sqrt{ \log cn}. \end{equation*} Then, with probability greater than \begin{equation*} 1-2 cn \exp \left\{-\frac{\lambda^{2} \eta^{2}}{2 \sigma^{2} \mu_{\tilde{\bm{U}}}}\right\} \geq 1-2 \left(cn\right)^{-1}, \end{equation*} Eq.~\eqref{eq:thm-problem} has a unique solution $\hat{\vec{\bm{\gamma}}}$ satisfying the following properties: \begin{enumerate} \item If C1 and C2 hold, the wrongly predicted instances indicated by ICI contain no false positives, i.e., $\hat{S}\subseteq S$ \minor{and hence $\hat{O}\subseteq O$}, and \begin{equation*} \left\|\hat{\vec{\bm{\gamma}}}_{S}-\vec{\bm{\gamma}}_{S}^{*}\right\|_{\infty} \leq h\left(\lambda, \eta, \tilde{\bm{U}}, \vec{\bm{\gamma}}^{*}\right); \end{equation*} \item If C1, C2, and C3 hold, ICI will identify all the correctly predicted instances, i.e., $\hat{S}= S$ \minor{and hence $\hat{O} = O$}~(in fact~$\mathrm{sign} \left(\hat{\vec{\bm{\gamma}}}\right)=\mathrm{sign} \left(\vec{\bm{\gamma}}^*\right)$). \end{enumerate} \end{thm} \minor{ \begin{rem*} Assumption C1 is necessary to ensure that there is a unique $\vec{\bm{\gamma}}^*$ satisfying model \eqref{eq:y-ground-truth}. Assumptions C1-C2 (C1-C3) are sufficient for $\hat{O}\subseteq O$ ($\hat{O} = O$), respectively. They are also necessary in the sense that, once violated, there are cases that fail the conclusion with non-vanishing probability. \end{rem*} } The proof is given in Appendix~\ref{appendix}. The theorem shows that our algorithm can find the correctly predicted pseudo-labeled instances under specific conditions. Practically, it may be hard to choose a reasonable $\lambda$ satisfying the three conditions, since we cannot know $\vec{\bm{\gamma}}_S^*$ in advance. Specifically, in both the semi-supervised and transductive few-shot learning tasks considered in this paper, one cannot assume knowledge of $\vec{\bm{\gamma}}_S^*$. Hence, we use the iterative strategy to search along the solution path and select the instances automatically. \mypar{Effectiveness of the identifiability in reality.} \minor{ It is desirable to check to what extent the assumptions hold in reality. To answer this question, we run 5-way-1-shot TFSL experiments on the \emph{mini}ImageNet dataset for 2000 episodes. } \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{fig/hist.pdf} \caption{\label{fig:hist-of-error} Histogram of errors in 2000 episodes. The x-axis is the value of the errors, while the y-axis is their count. } \end{figure} \mypar{Sub-Gaussian noise.} \minor{ We collect all the noise values over the 2000 episodes and visualize the histogram in Fig.~\ref{fig:hist-of-error}.
It can be seen that the noise can be approximated by a Gaussian mixture model, specifically a mixture of three Gaussian distributions. Hence the noise can be assumed to follow a sub-Gaussian distribution with bounded variance. Further, the magnitude of the sample mean of the noise is on the order of $10^{-19}$, which can be regarded as zero mean. } \begin{table}[H] \begin{centering} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccc} \toprule Satisfied Assumptions & None & C1 & C1 and C2 & All\tabularnewline \midrule Improved Episodes & $0$ & $424$ & $1035$ & $40$\tabularnewline Total Episodes & $0$ & $793$ & $1164$ & $43$\tabularnewline I/T & $-$ & $53.5\%$ & $88.9\%$ & $93.0\%$\tabularnewline \bottomrule \end{tabular*} \par\end{centering} \caption{\label{tab:Number-of-episodes}Number of episodes satisfying each assumption and whether the transductive inference improves the performance.} \end{table} \mypar{Assumptions C1-C3.} \minor{ In each episode, we test whether the assumptions are satisfied and count them in Table \ref{tab:Number-of-episodes}. We can see that: (i) In more than half of the episodes, assumptions C1-C2 are satisfied. By our theorem, in this case ICI has no false positive error; hence ICI will reduce the noise of the pseudo-labeled instances without eliminating the correctly predicted ones. Practically, most of these episodes ($\left(1035+40\right)/\left(1164+43\right)=89.0\%$) achieve better performance after transductive inference. (ii) When all the assumptions are satisfied, transductive inference improves performance at a high ratio ($93.0\%$). (iii) Even if C2-C3 are not satisfied, transductive inference still has a chance of improving the performance ($53.5\%$). One major reason is that our iterative update strategy helps reduce the noise. } \section{Experiments} \mypar{Datasets.} Our experiments are conducted on \revise{four} widely \revise{used} few-shot learning benchmark datasets, including \emph{mini}ImageNet~\cite{ravi2016optimization}, \emph{tiered}ImageNet~\cite{ren2018meta}, CIFAR-FS~\cite{bertinetto2018metalearning} and CUB~\cite{wah2011caltech}. \textbf{\emph{mini}}\textbf{ImageNet}\footnote{ \revise{https://github.com/gidariss/FewShotWithoutForgetting}} consists of $100$ classes with $600$ labeled instances \revise{per} category. We follow the split proposed by~\cite{ravi2016optimization}, using $64$ classes as the base set to train the feature extractor and $16$ classes as the validation set, and report performance on the novel set, which consists of $20$ classes. \textbf{\emph{tiered}}\textbf{ImageNet}\footnote{\revise{https://github.com/yaoyao-liu/meta-transfer-learning}} is a larger dataset compared \revise{to} \emph{mini}ImageNet, and its categories are selected \revise{from a} hierarchical structure to split the base and novel datasets semantically. We follow the split introduced in~\cite{ren2018meta}, with a base set of $20$ superclasses ($351$ classes), a validation set of $6$ superclasses ($97$ classes) and a novel set of $8$ superclasses ($160$ classes). Each class contains $1281$ images on average. \textbf{CUB}\footnote{\revise{http://www.vision.caltech.edu/visipedia/CUB-200-2011.html}} is a fine-grained dataset of $200$ bird categories with $11788$ images in total.
Following the previous \revise{few-shot} setting in~\cite{hilliard2018few}, we use \revise{$100$, $50$ and $20$ classes for the base, validation and novel sets, respectively.} To make a fair comparison \revise{in model training and testing}, we crop the bounding \revise{boxes} provided by~\cite{triantafillou2017few} \revise{for all the images in CUB.} \textbf{CIFAR-FS}\footnote{\revise{https://github.com/bertinetto/r2d2}} is a dataset derived from CIFAR-100~\cite{krizhevsky2009learning} \revise{with lower-resolution images.} It contains $100$ classes with $600$ instances in each class. We follow the common split given by~\cite{bertinetto2018metalearning}, using $64$ classes to construct the base set, $16$ for validation, and $20$ as the novel set. \mypar{Experimental setup.} \revise{We present the implementation details and experiment settings in the following.} \revise{Unless otherwise specified, our implementation details and experiment settings are the same as the default setting adopted by the majority of few-shot learning methods~\cite{ye2020fewshot, Liu2020E3BM,Zhang_2020_CVPR,lee2019meta,hilliard2018few} for a fair comparison.} \revise{As in}~\cite{oreshkin2018tadam,lee2019meta}, we \revise{employ} ResNet-12~\cite{DBLP:journals/corr/HeZRS15} with $4$ residual blocks as the feature extractor in our experiments. Each \revise{residual} block consists of three $3\times3$ convolutional layers, each of which is followed by a \revise{batch normalization} layer and a LeakyReLU~(0.1) activation. \revise{A $2\times2$ max-pooling layer is appended at the end of each block to downsample the spatial size.} The numbers of filters in the blocks are $64$, $128$, $256$ and $512$, respectively. Specifically, \revise{following}~\cite{lee2019meta}, we adopt \textit{Dropout}~\cite{JMLR:v15:srivastava14a} in the first two blocks to drop $10\%$ of the output, and adopt \textit{DropBlock}~\cite{ghiasi2018dropblock} in the latter two blocks to drop $10\%$ of the output at the channel level. Finally, an average-pooling layer is employed to produce the input feature embedding. \revise{ We use the baseline method R12-proto-ac introduced in~\cite{CAN} to train the backbone with the global and nearest neighbor classification losses. } \revise{SGD with momentum is adopted} as the optimizer to train the feature extractor \textit{from scratch}. The momentum factor and the strength of the $L_{2}$ weight decay are set to $0.9$ and $5e-4$, respectively. All \revise{input images} are resized to $84\times84$. \revise{ Our initial learning rate is set to $0.1$ and decays to $0.006,~0.0012$ and $0.00024$ after $60,~70$ and $80$ epochs, respectively. } The total number of training epochs \revise{is set to $90$}. In all of our experiments, we normalize the features with the $L_2$ norm and reduce the feature dimension to $d=5$ using LLE~\cite{roweis2000nonlinear} \revise{for the pre-processing part of ICI, while the classification part still uses the original features}. \revise{We use logistic regression as our basic classifier.} Our model and all baselines are evaluated over $2000$ episodes with $15$ test samples in each class.
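The ICI pre-processing step just described can be sketched as follows (assuming scikit-learn; the neighbour count is an illustrative choice not specified above):
\begin{verbatim}
# L2-normalize backbone features, then reduce them to d = 5 with
# LLE for the ICI ranking step (the classifier itself still uses
# the original features). n_neighbors is an illustrative choice.
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.preprocessing import normalize

def preprocess_for_ici(features, d=5, n_neighbors=10):
    """features: (n, 512) pooled embeddings -> (n, d) ICI inputs."""
    feats = normalize(features, norm="l2")
    lle = LocallyLinearEmbedding(n_components=d, n_neighbors=n_neighbors)
    return lle.fit_transform(feats)
\end{verbatim}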
\begin{table*}[!ht] \centering \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} p{0.7cm} l l l l l l l l l} \toprule \multirow{2}{*}{Setting} & \multirow{2}{*}{Model} & \multicolumn{2}{c}{\emph{mini}ImageNet} & \multicolumn{2}{c}{\emph{tiered}ImageNet} & \multicolumn{2}{c}{CIFAR-FS} & \multicolumn{2}{c}{CUB}\tabularnewline & &$1$shot & $5$shot & $1$shot & $5$shot & $1$shot & $5$shot & $1$shot & $5$shot\tabularnewline \midrule \multirow{11}{*}{In.} &Baseline$^{*}$~\cite{DBLP:journals/corr/abs-1904-04232} & $51.75$\ci{0.80} & $74.27$\ci{0.63} & - & - & - & - & $65.51$\ci{0.87} & $82.85$\ci{0.55}\tabularnewline & Baseline++$^{*}$~\cite{DBLP:journals/corr/abs-1904-04232}&$51.87$\ci{0.77} &$75.68$\ci{0.63} & - & - & - & - & $67.02$\ci{0.90} & $83.58$\ci{0.54}\tabularnewline & MatchingNet$^{*}$~\cite{vinyals2016matching}& $52.91^{\textcolor{black}{1}}$\ci{0.88} & $68.88^{\textcolor{black}{1}}$\ci{0.69} & - & - & - & - & $72.36^{\textcolor{black}{1}}$\ci{0.90} & $83.64^{\textcolor{black}{1}}$\ci{0.60}\tabularnewline & ProtoNet$^{*}$~\cite{snell2017prototypical}& $54.16^{\textcolor{black}{1}}$\ci{0.82} & $73.68^{\textcolor{black}{1}}$\ci{0.65} & - & - & $72.20^{\textcolor{black}{3}}$ & $83.50^{\textcolor{black}{3}}$ & $71.88^{\textcolor{black}{1}}$\ci{0.91} & $87.42^{\textcolor{black}{1}}$\ci{0.48}\tabularnewline & MAML$^{*}$~\cite{finn2017model}& $49.61^{\textcolor{black}{1}}$\ci{0.92}& $65.72^{\textcolor{black}{1}}$\ci{0.77} & - & - & - & - & $69.96^{\textcolor{black}{1}}$\ci{1.01} & $82.70^{\textcolor{black}{1}}$\ci{0.65}\tabularnewline &RelationNet$^{*}$~\cite{sung2018learning} & $52.48^{\textcolor{black}{1}}$\ci{0.86} & $69.83^{\textcolor{black}{1}}$\ci{0.68} & - & - & - & - & $67.59^{\textcolor{black}{1}}$\ci{1.02} & $82.75^{\textcolor{black}{1}}$\ci{0.58}\tabularnewline & adaResNet~\cite{munkhdalai2018rapid}& $56.88$ & $71.94$ & - & - & - & - & - & -\tabularnewline & TapNet~\cite{yoon2019tapnet} & $61.65$ & $76.36$ & $63.08$& $80.26$ & - & - & - & -\tabularnewline & CTM$^{\dag}$~\cite{li2019finding} & $64.12$ & $80.51$ & $68.41$ & $84.28$ & - & - & - & -\tabularnewline &MetaOptNet~\cite{lee2019meta}&$64.09$&$80.00$&$65.81$&$81.75$&$72.60$&$84.30$&-&-\tabularnewline \midrule \multirow{4}{*}{Tran.} &TPN~\cite{liu2018learning} & $59.46$ & $75.65$ & $58.68^{\textcolor{black}{4}}$ & $74.26^{\textcolor{black}{4}}$ & $65.89^{\textcolor{black}{4}}$ & $79.38^{\textcolor{black}{4}}$ & - & -\tabularnewline &TEAM$^{*}$~\cite{qiao2019transductive} & $60.07$ & $75.90$ & - & - & $70.43$ & $81.25$ & $80.16$ & $87.17$ \tabularnewline &CAN+T~\cite{hou2019cross} & $67.19$\ci{0.55} & $80.64$\ci{0.35} & $73.21$\ci{0.58} & $84.93$\ci{0.38} & - & - & - & - \tabularnewline &DPGN~\cite{yang2020dpgn} & $67.77$\ci{0.32} & $\textbf{84.60}$\ci{0.43} & $72.45$\ci{0.51} & $\textbf{87.24}$\ci{0.39} & $77.90$\ci{0.50} & $\textbf{90.20}$\ci{0.40} & $75.71$\ci{0.47} & $91.48$\ci{0.33} \tabularnewline \midrule \multirow{5}{*}{Semi.} &MSkM + MTL & $62.10^{\textcolor{black}{2}}$ & $73.60^{\textcolor{black}{2}}$ & $68.6^{\textcolor{black}{2}}$ & $81.00^{\textcolor{black}{2}}$ & - & - & - &- \tabularnewline &TPN + MTL & $62.70^{\textcolor{black}{2}}$ & $74.20^{\textcolor{black}{2}}$ & $72.10^{\textcolor{black}{2}}$ & $83.30^{\textcolor{black}{2}}$ & - & - & - & -\tabularnewline &MSkM~\cite{ren2018meta}&$50.40$ & $64.40$ & $52.40$ & $69.90$ & - & - & - & - \tabularnewline &TPN~\cite{liu2018learning}& $52.78$& $66.42$ & $55.70$ & $71.00$ & - & - & - & - \tabularnewline &LST~\cite{sun2019learning}& $70.10$& $78.70$ & $77.70$ & 
$85.20$ & - & - & - & - \tabularnewline \midrule \midrule \multirow{2}{*}{Tran.} &ICIC & $71.29$\ci{0.59} & $83.12$\ci{0.33} & $76.13$\ci{0.62} & $86.73$\ci{0.36} & $78.47$\ci{0.60} & $86.41$\ci{0.36} & $90.38$\ci{0.42} & $94.30$\ci{0.20}\tabularnewline &ICIR & $\textit{72.39}$\ci{0.62} & $83.27$\ci{0.33} & $77.48$\ci{0.62} & $86.84$\ci{0.36} & $79.19$\ci{0.63} & $86.66$\ci{0.36} & $ 90.89$\ci{0.43} & $94.36$\ci{0.20}\tabularnewline \midrule \multirow{2}{*}{ \parbox[t]{0.7cm}{ Semi. \\ 15/15}} &ICIC & $70.97$\ci{0.56} & $82.69$\ci{0.33} & $76.00$\ci{0.60} & $86.19$\ci{0.36} & $78.44$\ci{0.58} & $86.10$\ci{0.36} & $89.89$\ci{0.42} & $94.00$\ci{0.20} \tabularnewline &ICIR & $72.32$\ci{0.58} & $82.78$\ci{0.33} & $76.98$\ci{0.61} & $86.24$\ci{0.36} & $79.20$\ci{0.58} & $86.14$\ci{0.36} & $90.45$\ci{0.42} & $94.00$\ci{0.20} \tabularnewline \midrule \multirow{2}{*}{\parbox[t]{0.7cm}{ Semi. \\ 30/50}} &ICIC& $71.43$\ci{0.62} & $\textit{83.41}$\ci{0.35} & $\textit{78.01}$\ci{0.63} & $\textit{86.86}$\ci{0.37} & $\textit{80.25}$\ci{0.58} & $86.99$\ci{0.36} & $\textit{91.75}$\ci{0.39} & $\textit{94.42}$\ci{0.20}\tabularnewline &ICIR& $\textbf{73.12}$\ci{0.65} & $83.28$\ci{0.37} & $\textbf{78.99}$\ci{0.66} & $86.76$\ci{0.39} & $\textbf{80.74}$\ci{0.61} & $\textit{87.16}$\ci{0.36} & $\textbf{92.12}$\ci{0.40} & $\textbf{94.52}$\ci{0.20} \tabularnewline \bottomrule \end{tabular*} \caption{\label{fig:tfsl results} The averaged accuracies with $95\%$ confidence intervals over $2000$ episodes on several datasets. Results with $\left(\cdot\right)^{1}$ are reported in~\cite{DBLP:journals/corr/abs-1904-04232}, those with $\left(\cdot\right)^{\textcolor{black}{2}}$ in~\cite{sun2019learning}, and those with $\left(\cdot\right)^{\textcolor{black}{3}}$ in~\cite{lee2019meta}. $\left(\cdot\right)^{\textcolor{black}{4}}$ is our implementation with the official code of~\cite{liu2018learning}. Methods denoted by $\left(\cdot\right)^*$ use ResNet-18 with input size $224\times224$, while $\left(\cdot\right)^{\dag}$ denotes ResNet-18 with input size $84\times84$. Our method and the other alternatives use ResNet-12 with input size $84\times84$. \textbf{In.} and \textbf{Tran.} indicate the inductive and transductive settings, respectively. \textbf{Semi.} denotes the semi-supervised setting, where $(\cdot/\cdot)$ shows the number of unlabeled data available in the $1$-shot and $5$-shot experiments. ICIC indicates the logistic regression version of our model, and ICIR indicates the linear regression version; in both, logistic regression is used as the basic classifier. \revise{ In each column, the highest result is in bold, and the second highest result is in italics. } } \end{table*} \subsection{Semi-supervised few-shot learning} \mypar{Settings.} In the inference \revise{stage}, the unlabeled data from the corresponding category pool is utilized to help FSL. In our experiments, we report the following SSFSL settings: (1) we use $15$ unlabeled samples for each class, the same as TFSL, \revise{to compare the performance of ICI between the SSFSL and TFSL settings with the same number of unlabeled data;} (2) we use $30$ unlabeled samples in the $1$-shot task and $50$ unlabeled samples in the $5$-shot task, the same as current SSFSL approaches~\cite{sun2019learning}. We denote these as 15/15 and 30/50 in Table~\ref{fig:tfsl results}.
Note that CUB is a fine-grained dataset and does not have sufficient samples in each class, so in the latter setting we simply choose $5$ samples as the support set, $15$ as the query set and the \revise{remaining} samples as the unlabeled set (about $39$ samples on average) in the $5$-shot task. For all settings, we select $5$ samples for each class in each iteration. The process finishes when at most 15/15 or 25/45 unlabeled instances have been selected in total, respectively. \mypar{Competitors.} We compare our algorithm with \revise{existing} approaches in \revise{the SSFSL setting}. \revise{ TPN~\cite{liu2018learning} classifies query samples by propagating labels from the support set and the extra unlabeled set.} LST~\cite{sun2019learning} also uses the self-taught learning strategy to pseudo-label data and select confident ones, but it \revise{does so by episodically training a neural network for many iterations.} Other approaches include Masked Soft k-Means~\cite{ren2018meta} and the combinations of MTL with TPN and with Masked Soft k-Means reported by LST. \mypar{Results.} The results are shown in Table~\ref{fig:tfsl results}, denoted as Semi. in the first column. \revise{We can observe} that: (1) Comparing SSFSL with TFSL under the same number of unlabeled data, our SSFSL results drop only slightly or even beat the TFSL results, which indicates that the information we obtain from the unlabeled data is robust and that we can indeed approximate the true distribution with unlabeled data in practice. (2) The more unlabeled data we get, the better the performance: we can learn more knowledge from more unlabeled data almost consistently using a linear classifier (\textit{e.g.}, logistic regression). (3) \revise{Compared} to other SSFSL approaches, ICI also achieves varying degrees of improvement in almost all tasks and datasets. These results further \revise{verify the effectiveness of our approach.} \subsection{Transductive few-shot learning} \mypar{Settings.} In the transductive few-shot learning setting, \revise{one} has the chance to access \revise{all the} query data \revise{in one go} in the inference stage. Thus the unlabeled set and the query set are the same. In our experiments, we select $5$ instances for each class in each iteration and repeat our algorithm until all the query samples are included. \mypar{Competitors.} We compare ICI with current TFSL approaches. TPN~\cite{liu2018learning} constructs a graph and uses label propagation to transfer labels from support samples to query samples, learning its framework in a meta-learning way. TEAM~\cite{qiao2019transductive} utilizes class prototypes with a data-dependent metric to infer the labels of query samples. \revise{ CAN+T~\cite{hou2019cross} uses self-taught learning to repeatedly train the model within a specifically designed network. DPGN~\cite{yang2020dpgn} adopts contrastive comparisons to produce distribution representations. } \mypar{Results.} The results are shown in Table~\ref{fig:tfsl results}, denoted as Tran. in the first column. \revise{ Compared with current TFSL approaches, ICI is competitive, especially on the 1-shot tasks. Importantly, under the mild conditions of restricted eigenvalue, irrepresentability, and large error, our approach is theoretically guaranteed to collect the correctly predicted pseudo-labeled instances from the noisy pseudo-labeled set, and our ICIR results achieve very competitive performance on almost all datasets.
Essentially, our algorithm is theoretically grounded, and it is orthogonal to and potentially useful for the other state-of-the-art methods; exploring how to incorporate our algorithm into these competitors is left as future work. } \subsection{Ablation study\label{subsec:Ablation-Study}} \mypar{\revise{Visualization.}} We visualize the regularization path of $\gamma$ in one episode of the inference process in Fig.~\ref{fig:effective}, where red lines are correctly predicted instances and black lines are wrongly predicted ones. It is obvious that most of the correctly predicted instances lie in the lower-left part. Since ICI selects samples whose norm vanishes at a lower $\lambda$, it picks correctly predicted instances over wrongly predicted ones at a high ratio. \begin{figure}[ht] \begin{centering} \includegraphics[width=0.8\columnwidth]{fig/visual.pdf} \caption{\label{fig:effective}Regularization path of $\lambda$. Red lines are correctly predicted instances while black lines are wrongly predicted ones. ICI will choose instances in the lower-left subset.} \end{centering} \end{figure} \mypar{\revise{Comparison with baselines.}} To further show the effectiveness of ICI, we compare ICI with other sample selection strategies under the self-taught learning pipeline. \revise{ We consider the following baselines: (1) RA (random): select instances randomly. (2) NN (nearest-neighbor): select instances based on the distance between the pseudo-labeled instances and the labeled instances; we select the pseudo-labeled instances that are the nearest neighbors of labeled instances with the same (pseudo-)category. (3) CO (confidence): select instances based on the confidence given by the classifier, where the confidence is defined as the prediction scores/probabilities of the classifier. (4) CN (coefficient norm): select instances based on the proposed metric without considering the effect of $\gamma$, that is, based on the y-axis in Fig.~\ref{fig:effective} instead of the x-axis. In this part, we have $15$ unlabeled instances for each class and select $5$ of them by the different methods to re-train the classifier, for the Semi. and Tran. tasks on \emph{mini}ImageNet. From Table~\ref{tab:ablation}, we observe that ICI outperforms all the baselines in all settings. } \begin{table}[h] \centering \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccc} \toprule \multirow{2}{*}{Model}&\multicolumn{2}{c}{Tran.}&\multicolumn{2}{c}{Semi.}\tabularnewline &1shot&5shot&1shot&5shot\tabularnewline \midrule RA&$67.54$\ci{0.51}&$81.45$\ci{0.32}&$68.09$\ci{0.52}&$81.30$\ci{0.33}\tabularnewline NN&$69.80$\ci{0.53}&$82.12$\ci{0.32}&$69.99$\ci{0.52}&$81.96$\ci{0.33}\tabularnewline CO&$70.57$\ci{0.54}&$82.41$\ci{0.31}&$70.53$\ci{0.52}&$82.10$\ci{0.32}\tabularnewline CN&$67.44$\ci{0.53}&$81.44$\ci{0.33}&$67.87$\ci{0.52}&$81.49$\ci{0.34}\tabularnewline \midrule ICIR& $\bf71.19$\ci{0.58} & $\bf82.55$\ci{0.32} & $\bf71.25 $\ci{0.55} & $\bf82.32$\ci{0.32} \tabularnewline \bottomrule \end{tabular*} \caption{\label{tab:ablation} Comparison to baselines on \emph{mini}ImageNet under several settings.} \end{table} \revise{ The main reason why the confidence predicted by the classifier (i.e., the ``CO" results in Table~\ref{tab:ablation}) is not sufficient is that some highly confident predictions are actually wrong.
Taking the coefficient norm (CN) baseline as an example, the norm of the coefficient is directly the confidence score provided by the linear regression ``classifier'', where a small norm indicates a small error in fitting the corresponding sample. In our illustration of the regularization path (see Fig.~\ref{fig:effective}), the norms of some wrongly predicted instances (see the lowest black line, for example) vanish more slowly than those of the correctly predicted instances. This is a case where the confidence predicted by the classifier cannot exclude the noise but ICI still works very well. Most importantly, our x-axis method is theoretically guaranteed by Theorem~\ref{thm:sufficiency}; in contrast, there is no theoretical guarantee for the y-axis method or the other sample selection baselines. } \begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{fig/iter.pdf} \caption{\label{fig:iter-manner} Variation of accuracy as the number of selected samples increases over 2000 episodes on \emph{mini}ImageNet. ``ICI (\textit{n})'': select \textit{n} samples per class in each iteration.} \end{figure} \mypar{Effectiveness of iterative manner.} Our intuition is that the proposed ICI learns to generate a set of trustworthy unlabeled data for classifier training. \revise{ One basic baseline is simply running the algorithm once: selecting a subset, re-training the classifier, and ending the process. We argue that such a pipeline cannot sufficiently utilize the information provided by the pseudo-labeled instances. To verify this, we run experiments selecting different numbers of instances over different numbers of iterations in Figure~\ref{fig:iter-manner}. The results suggest that ICI obtains better accuracy with the iterative selection manner. } For example, selecting $6$ images over two iterations (ICI(3)) is superior to selecting $8$ images in one iteration (ICI(8)). \revise{To balance computational cost and performance, our experiments select $5$ images per iteration.} \begin{table}[h] \centering \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lccccc} \toprule Acc (\%)&0-10&10-20&20-30&30-40&40-50\tabularnewline \midrule b/t&0/0&0/0&1/2&7/16&91/133\tabularnewline \midrule \midrule Acc (\%)&50-60&60-70&70-80&80-90&90-100\tabularnewline \midrule b/t&312/446&526/663&464/544&154/191&3/5\tabularnewline \bottomrule \end{tabular*} \caption{\label{tab:stat} We run 2000 episodes, with each episode training an initial classifier. We denote ``Acc'' as the accuracy intervals, and ``b/t'' as the number of classifiers that experienced improvement vs. the total number of classifiers in this accuracy interval. } \end{table} \mypar{Robustness against the initial classifier.} What are the requirements for the initial linear classifier? Is it necessary for the accuracy of the initial linear classifier to be higher than $50\%$, or even higher? The answer is no. As long as the initial linear classifier can be trained, our method should work in theory. \revise{The influence of the initial classifier thus remains an open question for future work.} We briefly validate this in Table~\ref{tab:stat}: we run 2000 episodes, with each episode training an initial classifier with a different classification accuracy. Table~\ref{tab:stat} shows that most classifiers are improved by ICI regardless of their initial accuracy.
\begin{table}[H] \centering \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccc} \toprule \multirow{2}{*}{Model}&\multicolumn{2}{c}{Tran.}&\multicolumn{2}{c}{Semi.}\tabularnewline &1shot&5shot&1shot&5shot\tabularnewline \midrule kNN & $71.45$\ci{0.61}&$79.88$\ci{0.38}& $69.14$\ci{0.57} & $77.20$\ci{0.38} \tabularnewline SVM & $72.13$\ci{0.62} & $82.76$\ci{0.34}& $70.76$\ci{0.58} & $80.83$\ci{0.35}\tabularnewline LR & $72.39$\ci{0.62}& $83.27$\ci{0.33}& $72.32$\ci{0.58}& $82.78$\ci{0.33}\tabularnewline \bottomrule \end{tabular*} \caption{\label{tab:classifiers} Performance of ICI using different classifiers on \emph{mini}ImageNet under several settings.} \end{table} \mypar{Robustness against choices of classifiers.} \revise{ Naturally, our proposed ICI is orthogonal to the choice of classifier. To verify this, we select two other popular machine learning classifiers, the linear support vector machine and the k-nearest neighbor classifier, and run the SSFSL/TFSL 1-shot/5-shot tasks on the \emph{mini}ImageNet dataset. From the results listed in Table~\ref{tab:classifiers}, the performance on the 1-shot task is comparable, while on the 5-shot task LR is superior to the other two classifiers. Thus, one can select the classifier that best fits one's own task and still enjoy the improvements given by ICI. } \mypar{Influence of the reduced dimension.} In this part, we study the influence of the reduced dimension $d$ in our algorithm on $5$-way $1$-shot \textit{mini}ImageNet experiments. The results with reduced dimensions $2$, $5$, $10$, $20$, $50$, and without dimensionality reduction, \textit{i.e.}, $d=512$, are shown in Table~\ref{tab:reduced}. Our algorithm achieves better performance when the reduced dimension is much smaller than the number of instances (\textit{i.e.}, $d\ll n$), which is consistent with the theoretical property~\cite{fan2018partial}. Moreover, we observe that our model achieves the best accuracy of $72.39\%$ when $d=5$. Practically, we adopt $d=5$ in our model. \begin{table}[H] \begin{centering} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lclc} \toprule $d$ & Acc (\%)&Alg.&Acc (\%)\tabularnewline \cmidrule{1-2} \cmidrule{3-4} $2$ & $70.03$\ci{0.58}&Isomap~\cite{tenenbaum2000global} & $71.49$\ci{0.60}\tabularnewline $5$ & $\bf72.39$\ci{0.62}&PCA~\cite{tipping1999probabilistic} & $71.52$\ci{0.63}\tabularnewline $10$ & $71.80$\ci{0.61}&LTSA~\cite{zhang2004principal} & $70.10$\ci{0.59}\tabularnewline $20$ & $71.17$\ci{0.59}&MDS~\cite{borg2003modern} & $68.05$\ci{0.53}\tabularnewline $50$ & $69.30 $\ci{0.55}&LLE~\cite{roweis2000nonlinear} & $72.39 $\ci{0.62}\tabularnewline $512$ & $67.08$\ci{0.51}&SE~\cite{belkin2003laplacian} & $72.43$\ci{0.63} \tabularnewline \bottomrule \end{tabular*} \par\end{centering} \caption{\label{tab:reduced}Influence of the reduced dimension and of dimension reduction algorithms.} \end{table} \mypar{Influence of dimension reduction algorithms.} Furthermore, we study the robustness of ICI to different dimension reduction algorithms. We compare Isomap~\cite{tenenbaum2000global}, principal component analysis~\cite{tipping1999probabilistic} (PCA), local tangent space alignment~\cite{zhang2004principal} (LTSA), multi-dimensional scaling~\cite{borg2003modern} (MDS), locally linear embedding~\cite{roweis2000nonlinear} (LLE) and spectral embedding~\cite{belkin2003laplacian} (SE) on $5$-way $1$-shot \textit{mini}ImageNet experiments.
From Table~\ref{tab:reduced} we can observe that the performance of ICI is comparable across most of the dimensionality reduction algorithms (from LTSA at $70.10\%$ to SE at $72.43\%$), except for MDS ($68.05\%$). We adopt LLE for dimension reduction in our method. \begin{table}[H] \begin{centering} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lllcc} \toprule \multirow{2}{*}{Features} & \multirow{2}{*}{Backbone}& \multirow{2}{*}{Task} & \multicolumn{2}{c}{Accuracy}\tabularnewline & && Competitors & ICIR\tabularnewline \midrule \multirow{2}{*}{CAN~\cite{hou2019cross}} & \multirow{2}{*}{ResNet-12} &1-shot& $67.19$\ci{0.55} & $70.53$\ci{0.63} \tabularnewline & & 5-shot&$80.64$\ci{0.35} & $81.30$\ci{0.36}\tabularnewline \midrule \multirow{2}{*}{E$^{3}$BM~\cite{Liu2020E3BM}} & \multirow{2}{*}{WRN-28-10} &1-shot& $71.4$ & $71.39$\ci{0.63} \tabularnewline & & 5-shot& $81.2$ & $82.61$\ci{0.36}\tabularnewline \midrule \multirow{2}{*}{TAFSSL~\cite{lichtenstein2020tafssl}} & \multirow{2}{*}{DenseNet} &1-shot& $77.06$\ci{0.26} & $76.83$\ci{0.60} \tabularnewline & & 5-shot& $84.99$\ci{0.14}& $85.12$\ci{0.32} \tabularnewline \bottomrule \end{tabular*} \end{centering} \caption{\label{tab:backbone} Comparison under different backbones with exactly the same features.} \end{table} \mypar{Influence of the backbone.} \revise{ One might wonder how the backbone influences the performance of ICI. In this part, we select three competitors with different backbones, namely ResNet-12, WRN-28-10, and DenseNet. We use their pre-trained models to ensure that we are using exactly the same features in our experiments. The transductive few-shot learning results are listed in Table~\ref{tab:backbone}, from which we can see that ICI enjoys comparable or even better performance with different backbones using only a simple linear classifier. Hence the effectiveness of ICI does not depend on the choice of backbone. } \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{fig/alpha.pdf} \caption{\label{fig:lr-penalty} Validation accuracy with different $\alpha$s. } \end{figure} \mypar{Influence of the penalty of the logistic regression coefficient in ICI.} In Section~\ref{sec:extension-lr}, we have shown that the penalty on the logistic regression coefficient is necessary for a unique solution. However, this introduces the hyper-parameters $\lambda_1$ and $\lambda_2$, which we need to trade off. Note that we still aim to find the solution path of $\bm{\gamma}$, which is obtained by using a list of $\lambda_2$s. We set $\lambda_1 = \alpha\lambda_2$ for each solution point along the path and search for the best $\alpha$ based on the inference performance on the validation set. The results are shown in Fig.~\ref{fig:lr-penalty}, indicating that the performance is maximized when $\alpha$ is set around $0.5$. In our experiments, we use $\alpha=0.5$. \section{Conclusion} In this paper, we have proposed a statistical method, called Instance Credibility Inference (ICI), to exploit the distribution support of unlabeled instances for few-shot \revise{visual recognition}. The proposed ICI effectively selects the most trustworthy pseudo-labeled instances according to their credibility to augment the training set. To measure the credibility of each pseudo-labeled instance, we solve a regression hypothesis with incidental parameters, increase the sparsity of the incidental parameters along the regularization path, and rank the pseudo-labeled instances \revise{according to} their sparsity degree.
Theoretical analysis shows that under conditions of \textit{restricted eigenvalue, irrepresentability, and large error}, our ICI \revise{is able to find} all the correctly-predicted instances from the noisy pseudo-labeled set. Extensive experiments show that our simple approach achieves appealing performance on four widely used few-shot \revise{visual recognition} benchmark datasets including \textit{mini}ImageNet, \textit{tiered}ImageNet, CIFAR-FS, and CUB. \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-12T02:10:03", "yymm": "2007", "arxiv_id": "2007.08461", "language": "en", "url": "https://arxiv.org/abs/2007.08461" }
\section{Introduction} Dualities play an important role in our understanding of string theory. One of the best-understood dualities is T-duality, which relates string theories on backgrounds with $U(1)^{d}$ isometries, the backgrounds being related by $\mathrm{O}(d,d)$ transformations. These T-dualities are already visible in perturbative string theory, and are enlarged into U-dualities in the non-perturbative framework of M-theory \cite{Hull:1994ys,Witten:1995ex}. Generalisations of Abelian T-dualities exist for backgrounds with non-Abelian isometries, leading to non-Abelian T-duality (NATD) \cite{delaOssa:1992vci}, and for backgrounds without any isometries, called Poisson-Lie T-duality (PLTD) \cite{Klimcik:1995dy,Klimcik:1995ux}. Instead of the isometry algebra, PLTD is controlled by an underlying Drinfel'd double. Unlike Abelian T-duality, which is an equivalence between string theories on different backgrounds to all orders in the string coupling and string length, these generalised T-dualities are currently best understood at the supergravity level, and only to a limited extent beyond leading order in $\alpha'$ \cite{Hoare:2019mcc}; their status as true dualities of the string genus expansion remains doubtful \cite{Giveon:1993ai}. Nonetheless, NATD and PLTD have led to fruitful results. For example, NATD has been successfully used as a solution-generating mechanism of supergravity \cite{Sfetsos:2010uq}, leading to the discovery of new minimally supersymmetric AdS backgrounds starting with \cite{Itsios:2013wd} (see \cite{Thompson:2019ipl} for a review and further references). Moreover, there is a close connection between PLTD and the (modified) classical Yang-Baxter equation which controls integrable deformations of $\sigma$-models \cite{Klimcik:2002zj,Klimcik:2008eq}. The non-perturbative generalisation of Poisson-Lie T-duality to a U-duality version in M-theory, or more conservatively as a solution-generating mechanism of 11-dimensional supergravity, has long been an open problem, which was recently addressed in \cite{Sakatani:2019zrs,Malek:2019xrf} and further elaborated on in \cite{Sakatani:2020iad,Hlavaty:2020pfj,Blair:2020ndg}. Building on the interpretation of PLTD and Drinfel'd doubles within Double Field Theory (DFT) \cite{Hassler:2017yza,Demulder:2018lmj,Sakatani:2019jgu}, the works \cite{Sakatani:2019zrs,Malek:2019xrf} used Exceptional Field Theory (ExFT)/Exceptional Generalised Geometry to propose a natural generalisation of the Drinfel'd double for dualities along four spacetime dimensions. This ``Exceptional Drinfel'd Algebra'' (EDA) was shown to lead to a new solution-generating mechanism of 11-dimensional supergravity that suggests a notion of Poisson-Lie U-duality, as well as a generalisation of the classical Yang-Baxter equation. Other recent works \cite{Bakhmatov:2019dow,Bakhmatov:2020kul,Musaev:2020bwm} have considered closely related ideas, although the detailed relation between these approaches and the EDA is not completely apparent. In this paper, we will further develop the ideas of \cite{Sakatani:2019zrs,Malek:2019xrf} by constructing EDAs and Poisson-Lie U-duality amongst six directions. We choose six dimensions because important new features arise when dualities are considered in six directions: the 6-form can now completely wrap the six directions we are considering.
As a result, the $\frak{e}_{6(6)}$ algebra contains a generator, corresponding to a hexavector, which will generate new kinds of dualities and deformations that have no counterpart in PLTD, as we will see. The outline of the rest of this paper is as follows: in section \ref{s:LinCon} we describe the EDA from a purely algebraic perspective. In section \ref{s:GenFrame} we show how the EDA can be realised within exceptional generalised geometry as a Leibniz parallelisation of a particular type of group manifold $G$, that we will call a $(3,6)$-Nambu-Lie group. We then consider more closely the case of a coboundary EDA in section \ref{s:YB}, whose structure is governed by a generalisation of the Yang-Baxter equation. We provide a range of examples in section \ref{s:Examples} of EDAs both coboundary and otherwise, some of which have Drinfel'd doubles as subalgebras, and others which do not have such an interpretation. The aim of these examples is not to provide here a full classification, which could form an interesting investigation in its own right, but rather to highlight the various features that can arise. \section{The $E_{6(6)}$ EDA} \label{s:LinCon} Before specialising to the case of $E_{6(6)}$ we begin by presenting some generalities of the Exceptional Drinfel'd Algebra. The EDA, $\frak{d}_n$, is a Leibniz algebra which is a subalgebra of $E_{n(n)}$\footnote{In general one can allow for EDAs as subalgebras of $E_{n(n)} \times \mathbb{R}^+$, see \cite{Malek:2019xrf}. However, we will not deal with the extra $\mathbb{R}^+$ factor here.}, admitting a ``maximally isotropic'' subalgebra, as we will define shortly. In Table~\ref{t:Edd} we provide details of the representations of $E_{n(n)}$ inherited from the exceptional field theory (ExFT) approach to eleven-dimensional supergravity that are useful to the present construction. \renewcommand{\arraystretch}{1.1} \begin{table}[h]\centering \begin{tabular}{|c|c|c|c|c|c|c H c|} \hline $D$ & $E_{n(n)}$ & $H_n$ & $R_1$ & $R_2$ & $R_3$ &$R_4$ & $R_c$ & \Tstrut\Bstrut \\ \hline 7 & $\SL{5}$ & $\USp{4} / \mathbb{Z}_2$ & $\mbf{10}$ & $\obf{5}$ & $\mbf{5}$ &$\obf{10}$ & $\emptyset$ & \\ 6 & $\Spin{5,5}$ & $\USp{4}\times\USp{4} / \mathbb{Z}_2$ & $\mbf{16}$ & $\mbf{10}$ & $\obf{16}$ & $\mbf{45}$ & $\mbf{1}$ &\\ 5 & $\EG{6}$ & $\USp{8} / \mathbb{Z}_2$ & $\mbf{27}$ & $\obf{27}$ & $\mbf{78}$ &$\obf{351'}$ & $\mbf{27}$ &\\ \hline \end{tabular} \vskip-0.5em \captionof{table}{\small{The split real form of exceptional groups $E_{n(n)}$ with $D=11-n$, their maximal compact subgroups $H_n$ and representations $R_1 \dots R_4$ appearing in the tensor hierarchy of ExFT. In this work we will be mostly concerned with representations $R_1$ and $R_2$ which will be associated to the generalised tangent bundles $E$ and $N$ respectively.}} \label{t:Edd} \end{table} \renewcommand{\arraystretch}{1} We denote the generators of $\frak{d}_n$ by $\{ T_A \}$, with the index $A$ inherited from the $\bar{R}_1$ representation of $E_{n(n)}$ ExFT, and their product by \begin{align} T_A \circ T_B = X_{AB}{}^C\, T_C\,, \end{align} where the structure constants $X_{AB}{}^C$ are not necessarily antisymmetric in their lower indices. The product obeys the Leibniz identity, namely \begin{align} \label{eq:Leibniz} T_A \circ (T_B \circ T_C) = (T_A\circ T_B)\circ T_C + T_B\circ( T_A \circ T_C)\, , \end{align} which implies for the structure constants \begin{align}\label{eq:QC1} X_{AC}{}^D\,X_{BD}{}^E - X_{BC}{}^D\,X_{AD}{}^E + X_{AB}{}^D\,X_{DC}{}^E = 0 \,.
\end{align} Note that if the Leibniz algebra is a Lie algebra, i.e. the $X_{AB}{}^C$ are antisymmetric in their lower indices, then this reduces to the Jacobi identity. We place two further (linear) requirements on the EDA. Firstly, we demand that there is a maximal \underline{Lie} subalgebra $\frak{g}$ spanned by $\{T_a\}\subset \{ T_A \}$ obeying \begin{align}\label{eq:seccond} \frak{g} \otimes \frak{g}|_{\bar{R}_2} = 0 \,, \end{align} in which the representation $\bar{R}_2$ is given in Table~\ref{t:Edd}. We call such a subalgebra $\frak{g}$ maximally isotropic. We will be interested here in the case that $\dim \frak{g} = n$ as this is relevant to the M-theory context.\footnote{There is another inequivalent way to maximally solve the condition eq. \eqref{eq:seccond} with $\dim \frak{g} = n-1$ leading to a IIB scenario \cite{Blair:2013gqa,Hohm:2013vpa}.} Since $G =\exp \frak{g}$ acts adjointly on $\frak{d}_n$, it follows that $G$ should be endowed with a trivector and hexavector. We will further require that these objects give rise to a 3- and 6-bracket on $\frak{g}^*$, thereby imposing some further restrictions on the structure constants $X_{AB}{}^C$.\footnote{It is worth emphasising that these are impositions beyond simply demanding that $\frak{g}$ be a maximal isotropic subalgebra.} These additional requirements imply that the EDA can be given a geometrical realisation in terms of certain generalised frames whose action is mediated by the generalised Lie derivative \eqref{eq:gLie}, as we will show in section \ref{s:GenFrame}. Let us now discuss these restrictions in detail. \subsection{Linear Constraints} We now study in detail the consequences of the requirements of the maximally isotropic subalgebra $\frak{g}$ and its adjoint action. Since these constraints arise from placing requirements directly on the form of $X_{AB}{}^C$ we describe them as linear constraints; this is to be contrasted with quadratic constraints of the form $X^2=0$ that arise from the Leibniz identity. Firstly, since $\frak{g}$ is a Lie algebra, we immediately have \begin{equation} \begin{split} T_a \circ T_b &= f_{ab}{}^c\,T_c \,, \end{split} \end{equation} with $f_{ab}{}^c$ antisymmetric in $a$, $b$. Secondly, the adjoint action\footnote{To be more precise we inherit an action via the rack product: \begin{align*} g \cdot T_A \cdot g^{-1} \equiv g^{-1} \triangleright T_A \equiv T_A + h \circ T_A + \frac{1}{2} h \circ (h \circ T_A) + \cdots \qquad (g^{-1}\equiv e^h)\,. \end{align*}} of $g \in G = \exp{\frak{g}}$ on $\frak{d}_n$ implies that \begin{equation} g \cdot T_A \cdot g^{-1} = \left(A_g\right)_A{}^B T_B \,, \end{equation} with $\left(A_g\right)_A{}^B \in E_{6(6)}$ since $\mathfrak{g} \subset \frak{d}_n \subset \frak{e}_{6(6)}$. Let us denote the adjoint action of $g \in G$ on $\frak{g}$ by $a_g$. Then $g \cdot T_A \cdot g^{-1}$ takes the form: \begin{equation} \begin{split} \label{eq:adjoint} g \cdot T_a \cdot g^{-1} &= (a_g)_a{}^b\,T_b\,,\\ g \cdot T^{a_1a_2} \cdot g^{-1} &= - \lambda_g^{a_1a_2c}\,(a_g)_c{}^b\,T_b + (a_g^{-1})_{b_1}{}^{a_1}\,(a_g^{-1})_{b_2}{}^{a_2}\,T^{b_1b_2}\,,\\ g \cdot T^{a_1\ldots a_5} \cdot g^{-1} &= \bigl(\lambda_g^{a_1\ldots a_5c}+ 5\,\lambda_g^{[a_1a_2a_3}\,\lambda_g^{a_4a_5]c}\bigr)\,(a_g)_c{}^b\,T_b \\ &\quad - 10\,\lambda_g^{[a_1a_2a_3}\,(a_g^{-1})_{b_1}{}^{a_4}\,(a_g^{-1})_{b_2}{}^{a_5]}\,T^{b_1b_2} \\ &\quad + (a_g^{-1})_{b_1}{}^{[a_1}\ldots (a_g^{-1})_{b_5}{}^{a_5]}\,T^{b_1\ldots b_5}\,.
\end{split} \end{equation} Thus, $G$ admits a totally antisymmetric trivector $\lambda^{abc}$ and totally antisymmetric hexavector $\lambda^{a_1\ldots a_6}$ which control its adjoint action on the generators $T^{ab}$ and $T^{a_1 \ldots a_5}$. Equations \eqref{eq:adjoint} imply that $\left(\lambda_g\right)^{abc}$ and $\left(\lambda_g\right)^{a_1 \ldots a_6}$ vanish at the identity, i.e. \begin{equation} \label{eq:LambdaE} \left(\lambda_e\right)^{abc} = \left( \lambda_e\right)^{a_1 \ldots a_6} = 0 \,, \end{equation} and they inherit a group composition rule \begin{equation} \begin{split} \label{eq:LambdaComp} \lambda_{hg}^{a_1a_2a_3} &= \lambda_g^{a_1a_2a_3} + (a_g^{-1})_{c_1}{}^{a_1}\,(a_g^{-1})_{c_2}{}^{a_2}\,(a_g^{-1})_{c_3}{}^{a_3}\, \lambda_h^{c_1c_2c_3} \,, \\ \lambda_{hg}^{a_1\ldots a_6} &= \lambda_{g}^{a_1\ldots a_6} + (a_{g}^{-1})_{c_1}{}^{a_1}\ldots (a_g^{-1})_{c_6}{}^{a_6}\, \lambda_{h}^{c_1\ldots c_6} \\ &\quad + 10\,\lambda_g^{[a_1a_2a_3}\,(a_{g}^{-1})_{c_1}{}^{a_4}\,(a_{g}^{-1})_{c_2}{}^{a_5}\,(a_g^{-1})_{c_3}{}^{a_6]}\, \lambda_h^{c_1c_2c_3}\,, \end{split} \end{equation} for $g, h \in G$. Finally, we come to the second condition on the EDA, i.e. the existence of a 3- and 6-bracket on $\frak{g}^*$. This is equivalent to imposing the following differential conditions on $\lambda^{abc}$ and $\lambda^{a_1\ldots a_6}$: \begin{equation} \begin{split} \label{eq:dlambda} d\lambda^{a_1a_2a_3} &= r^b \left( f_b{}^{a_1a_2a_3} + 3\,f_{bc}{}^{[a_1} \, \lambda^{|c|a_2a_3]} \right) \,, \\ d\lambda^{a_1\ldots a_6} &= r^b \left( f_b{}^{a_1\ldots a_6} + 6\,f_{bc}{}^{[a_1}\, \lambda^{|c|a_2\ldots a_6]} + 10\, f_b{}^{[a_1a_2a_3}\, \lambda^{a_4a_5a_6]} \right) \,, \end{split} \end{equation} where $r = r^a\, T_a$ are the right-invariant 1-forms on $G$ obeying $dr^a = \frac{1}{2} f_{bc}{}^a r^b \wedge r^c$ and we have dropped the subscript $g$ on $\lambda^{(3)}$ and $\lambda^{(6)}$. The $f_b{}^{a_1 \ldots a_3}$ and $f_{b}{}^{a_1 \ldots a_6}$ are structure constants for a 3- and 6-bracket and are totally antisymmetric in their upper indices. In fact, as we will see in section \ref{s:QC}, the Leibniz identity implies further properties of the trivector and hexavector, in particular that they define a certain Nambu 3- and 6-bracket which are compatible with the Lie bracket on $G$. Therefore, it seems apt to call $G$ a (3,6)-Nambu-Lie Group. 
With the above conditions, the EDA takes the following form \begin{equation}\label{eq:TheEDA} \begin{split} T_a \circ T_b &= f_{ab}{}^c\,T_c \,, \\ T_a \circ T^{b_1b_2} &= f_a{}^{b_1b_2c}\,T_c + 2\,f_{ac}{}^{[b_1}\,T^{b_2]c}\,,\\ T_a \circ T^{b_1\ldots b_5} &= -f_a{}^{b_1\ldots b_5c}\,T_c + 10\,f_{a}{}^{[b_1b_2b_3}\,T^{b_4b_5]} - 5\,f_{ac}{}^{[b_1}\,T^{b_2\ldots b_5]c} \,, \\ T^{a_1a_2} \circ T_b &= -f_b{}^{a_1a_2c}\,T_c + 3\,f_{[c_1c_2}{}^{[a_1}\,\delta^{a_2]}_{b]}\,T^{c_1c_2}\,,\\ T^{a_1a_2} \circ T^{b_1b_2} &= -2\, f_c{}^{a_1a_2[b_1}\, T^{b_2]c} + f_{c_1c_2}{}^{[a_1}\,T^{a_2]b_1b_2c_1c_2}\,,\\ T^{a_1a_2} \circ T^{b_1\ldots b_5} &= 5\,f_c{}^{a_1a_2[b_1}\, T^{b_2\ldots b_5]c} \,, \\ T^{a_1\ldots a_5} \circ T_b &= f_b{}^{a_1\ldots a_5c}\,T_c - 10\,f_b{}^{[a_1a_2a_3}\,T^{a_4a_5]} -20\,f_c{}^{[a_1a_2a_3}\,\delta_b^{a_4}\,T^{a_5]c} \\ &\quad + 5\,f_{bc}{}^{[a_1}\,T^{a_2\ldots a_5]c} + 10\,f_{c_1c_2}{}^{[a_1}\,\delta^{a_2}_b\,T^{a_3a_4a_5]c_1c_2} \,, \\ T^{a_1\ldots a_5} \circ T^{b_1b_2} &= 2\,f_c{}^{a_1\ldots a_5[b_1}\,T^{b_2]c} - 10\,f_c{}^{[a_1a_2a_3}\, T^{a_4a_5]b_1b_2c}\,, \\ T^{a_1\ldots a_5} \circ T^{b_1\ldots b_5} &= -5\,f_c{}^{a_1\ldots a_5[b_1}\, T^{b_2\ldots b_5]c} \,. \end{split} \end{equation} \subsection{Leibniz identity constraints} \label{s:QC} We will now study the compatibility conditions between the Lie algebra $\frak{g}$, the 3-bracket and 6-bracket, as well as their appropriate ``closure'' conditions that are required for the EDA to satisfy the Leibniz identity of eq. \eqref{eq:Leibniz}. This yields a number of immediate constraints. In particular, we obtain the following fundamental identities, i.e. generalisations of Jacobi for higher brackets, \begin{align} \label{eq:FI-1} 0 &= 3 f_{[a_1 a_2}{}^c \,f_{a_3]c}{}^b \,, \\ \label{eq:FI-3} 0 & = f_a{}^{d c_1c_2}\, f_d{}^{b_1b_2b_3} - 3\, f_d{}^{c_1c_2[b_1}\,f_a{}^{b_2b_3]d} + f_{d_1d_2}{}^{[c_1}\, f_a{}^{c_2] b_1b_2b_3 d_1d_2} \,, \\ \label{eq:FI-6} 0 &= f_a{}^{d c_1\ldots c_5}\, f_d{}^{b_1\ldots b_6} - 6\,f_d{}^{c_1\ldots c_5[b_1} f_a{}^{b_2\ldots b_6]d} \,, \end{align} as well as compatibility conditions between the dual structure constants $f_b{}^{a_1a_2 a_3}$, $f_b{}^{a_1\ldots a_6}$ and the Lie algebra structure constants. These compatibility conditions take the form of cocycle conditions \begin{align} \label{eq:cocycle-3} 0 &= f_{a_1a_2}{}^c\,f_c{}^{b_1b_2b_3} + 6\, f_{c[a_1}{}^{[b_1|}\, f_{a_2]}{}^{c|b_2b_3]} \,, \\ \label{eq:cocycle-6} 0 &= f_{a_1a_2}{}^c\,f_c{}^{b_1\ldots b_6} +12\,f_{c[a_1}{}^{[b_1|}\,f_{a_2]}{}^{c|b_2\ldots b_6]} -20\, f_{[a_1}{}^{[b_1b_2b_3}\,f_{a_2]}{}^{b_4b_5b_6]} \,, \end{align} as well as the additional constraint \begin{equation} \label{eq:addcon} f_{d_1 d_2}{}^a f_{c}{}^{d_1 d_2 b} = 0 \,. \end{equation} If we only consider EDAs $\frak{d}_n$ with $n\leq6$, as we are doing here, the conditions given by the above eqs. \eqref{eq:FI-1}-\eqref{eq:addcon} are equivalent to imposing the Leibniz identity. This is because in $n \leq 6$, the fundamental identity for the six-bracket implies that $f_{b}{}^{a_1 \ldots a_6} = 0$. However, since the structure we are studying here will also exist for $n > 6$, we will keep the remaining discussion as dimension-independent as possible, whilst keeping in mind that for $n > 6$, the Leibniz identity will lead to further or modified compatibility conditions between $f_{ab}{}^c$, $f_b{}^{a_1 a_2 a_3}$ and $f_{b}{}^{a_1 \ldots a_6}$. These additional constraints will need to be studied using EDAs based on $E_{7(7)}$ and higher. 
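As a concrete sanity check on these conditions, the following toy \texttt{numpy} script (our own sketch, not code accompanying the paper) verifies the Jacobi identity \eqref{eq:FI-1}, the 3-bracket cocycle condition \eqref{eq:cocycle-3} and the constraint \eqref{eq:addcon} for a candidate $f_a{}^{b_1b_2b_3}$ built from a random trivector $\rho$ via the coboundary construction discussed below, taking $\frak{g}$ to be the Heisenberg algebra extended by a $u(1)$; for this particular algebra all three conditions hold identically.

\begin{verbatim}
# Toy check of eq. (FI-1), eq. (cocycle-3) and the extra constraint,
# with f the Heisenberg algebra + u(1) and f3 = d(rho3) a coboundary.
import numpy as np
from itertools import permutations
from math import factorial

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def asym(T, axes):
    """Antisymmetrise T over the listed axes (weight-one convention)."""
    out = np.zeros_like(T)
    for p in permutations(range(len(axes))):
        order = list(range(T.ndim))
        for k, ax in enumerate(axes):
            order[ax] = axes[p[k]]
        out += sign(p) * np.transpose(T, order)
    return out / factorial(len(axes))

n = 4
f = np.zeros((n, n, n))                  # f[a,b,c] = f_{ab}^c
f[0, 1, 2], f[1, 0, 2] = 1.0, -1.0       # Heisenberg: [T1, T2] = T3

rho = asym(np.random.default_rng(1).normal(size=(n, n, n)), [0, 1, 2])
# coboundary: f_a^{b1 b2 b3} = 3 f_{ac}^{[b1} rho^{|c| b2 b3]}
f3 = 3 * asym(np.einsum('acb,cde->abde', f, rho), [1, 2, 3])

# Jacobi identity: f_{[a1 a2}^c f_{a3] c}^b = 0
jac = asym(np.einsum('pqc,rcb->pqrb', f, f), [0, 1, 2])
# cocycle: f_{a1 a2}^c f_c^{b1 b2 b3} + 6 f_{c[a1}^{[b1|} f_{a2]}^{c|b2 b3]} = 0
coc = (np.einsum('pqc,cklm->pqklm', f, f3)
       + 6 * asym(asym(np.einsum('cab,dcef->adbef', f, f3), [0, 1]), [2, 3, 4]))
# extra constraint: f_{d1 d2}^a f_c^{d1 d2 b} = 0
add = np.einsum('dea,cdeb->cab', f, f3)

print(all(np.allclose(t, 0) for t in (jac, coc, add)))   # True
\end{verbatim}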
Before interpreting these constraints, we remark that the Leibniz identity ensures, much as the structure constants of a Lie algebra $\frak{g}$ are invariant under $G=\exp\frak{g}$ acting adjointly, that the EDA structure constants enjoy an invariance \begin{equation}\label{eq:adinvariance} X_{AB}{}^{D} (A_g)_D{}^C = (A_g)_A{}^D (A_g)_B{}^E X_{DE}{}^C \, . \end{equation} Substitution of eq. \eqref{eq:adjoint} here results in a variety of identities that we shall revisit later on. \subsubsection{Fundamental identities} Let us now introduce the 3-bracket $\left\{\, \right\}_3$ and 6-bracket $\left\{\, \right\}_6$ on $\frak{g}^*$ with structure constants $f_b{}^{a_1 a_2 a_3}$ and $f_b{}^{a_1 \ldots a_6}$, respectively, i.e. \begin{equation} \begin{split} \left\{ x,\, y,\, z \right\}_3 &= f_a{}^{b_1b_2b_3}\, x_{b_1}\, y_{b_2}\, z_{b_3} \,, \\ \left\{ u,\, v,\, w,\, x,\, y,\, z \right\}_6 &= f_a{}^{b_1\ldots b_6}\, u_{b_1}\, v_{b_2}\, w_{b_3}\, x_{b_4}\, y_{b_5}\, z_{b_6} \,. \end{split} \end{equation} The conditions \eqref{eq:FI-1} and \eqref{eq:FI-3} imply that the 3- and 6-brackets satisfy \begin{equation} \begin{split} \label{eq:FIa} \left\{ x_1,\, x_2,\, \left\{ x_3,\, x_4,\, x_5 \right\}_3 \right\}_3 &= \left\{ \left\{x_1,\, x_2,\,x_3 \right\}_3,\, x_4,\, x_5\right\}_3 + \left\{ x_3,\, \left\{x_1,\, x_2,\,x_4 \right\}_3,\, x_5\right\}_3 \\ & \quad + \left\{ x_3,\, x_4,\, \left\{x_1,\, x_2,\,x_5 \right\}_3 \right\}_3 \\ &- \left\{ \Delta(x_1), x_2, x_3, x_4, x_5 \right\}_6 + \left\{ \Delta(x_2), x_1, x_3, x_4, x_5 \right\}_6 \,, \\ \left\{ x_1,\, \ldots ,\, x_5,\, \left\{ y_1,\, \ldots ,\, y_6 \right\}_6 \right\}_6 &= \left\{ \left\{ x_1,\,\ldots ,\, x_5,\, y_1 \right\}_6,\, y_2,\, \ldots,\, y_6 \right\}_6 \\ & \quad + \left\{ y_1,\, \left\{ x_1,\, \ldots,\, x_5,\, y_2 \right\}_6,\, y_3,\, \ldots,\, y_6 \right\}_6 \\ & \quad + \left\{ y_1,\, y_2,\, \left\{ x_1,\, \ldots,\, x_5,\, y_3 \right\}_6,\, y_4,\, \ldots,\, y_6 \right\}_6 \\ & \quad + \left\{ y_1,\, \ldots,\, y_3,\, \left\{ x_1,\, \ldots,\, x_5,\, y_4 \right\}_6,\, y_5,\, y_6 \right\}_6 \\ & \quad + \left\{ y_1,\, \ldots,\, y_4,\, \left\{ x_1,\, \ldots,\, x_5,\, y_5 \right\}_6,\, y_6 \right\}_6 \\ & \quad + \left\{ y_1,\, \ldots,\, y_5,\, \left\{ x_1,\, \ldots,\, x_5,\, y_6 \right\}_6 \right\}_6 \,, \end{split} \end{equation} for all $x_1,\, \ldots,\, x_5,\, y_1,\, \ldots,\, y_6 \in \frak{g}^*$, and where we used the Lie bracket on $\frak{g}$ to define the $\textrm{ad}$-invariant co-product $\Delta$ on $\frak{g}^*$ \begin{equation} \begin{split} \label{eq:adcoproduct} \Delta: \frak{g}^* &\longrightarrow \frak{g}^* \wedge \frak{g}^* \,, \\ ad_x \Delta(y) &= \Delta(ad_x y) \,,\,\, \forall\; x \in \frak{g},\, y \in \frak{g}^* \,, \end{split} \end{equation} which is given, assuming a basis $\{T^a\}$ for $\frak{g}^*$, by \begin{equation} \begin{split} \label{eq:adcoproductind} \Delta(x_a\, T^a) = \frac12 f_{bc}{}^{a}\, x_a\, T^b \wedge T^c \,. \end{split} \end{equation} We see that the 6-bracket must satisfy the fundamental identity for Nambu 6-brackets, while the 3-bracket's fundamental identity is modified by the 6-bracket and the co-product defined by the structure constants of $\frak{g}$. \subsubsection{Compatibility conditions} The first set of compatibility conditions, eqs. \eqref{eq:cocycle-3} and \eqref{eq:cocycle-6}, between the 3- and 6-brackets and the Lie algebra $\frak{g}$ imply that $f_b{}^{a_1a_2a_3}$ defines a $\frak{g}$-cocycle and that $f_b{}^{a_1\ldots a_6}$ is an $f_3$-twisted $\frak{g}$-cocycle, as follows. 
$f_b{}^{a_1a_2a_3}$ and $f_b{}^{a_1\ldots a_6}$ define $\Lambda^3 \frak{g}$- and $\Lambda^6 \frak{g}$-valued 1-cochains \begin{equation} \begin{split} f_3: \frak{g} &\longrightarrow \Lambda^3 \frak{g} \,, \\ f_6: \frak{g} &\longrightarrow \Lambda^6 \frak{g} \,, \end{split} \end{equation} defined by \begin{equation} \begin{split} f_3(x) &= \frac1{3!} x^b\, f_b{}^{a_1a_2a_3}\, T_{a_1} \wedge T_{a_2} \wedge T_{a_3} \,,\; \forall\, x = x^a T_{a} \in \frak{g} \,, \\ f_6(x) &= \frac{1}{6!} x^b\, f_b{}^{a_1 \ldots a_6}\, T_{a_1} \wedge \ldots \wedge T_{a_6} \,,\; \forall\, x = x^a T_{a} \in \frak{g} \,. \end{split} \end{equation} Using the coboundary operator $d:\frak{g}^*\otimes\Lambda^p\frak{g} \longrightarrow \Lambda^2\frak{g}^*\otimes\Lambda^p\frak{g}$, for $p=3$ and $p=6$ here, \begin{align} df_3(x,y)&\equiv ad_x f_3(y) -ad_y f_3(x) - f_3([x,y])\,, \\ df_6(x,y)&\equiv ad_x f_6(y) -ad_y f_6(x) - f_6([x,y])\, , \end{align} the conditions \eqref{eq:cocycle-3} and \eqref{eq:cocycle-6} are more elegantly stated as \begin{align}\label{eq:cocycle-condensed} df_3(x,y) = 0\,,\qquad df_6(x,y) + f_3(x)\wedge f_3(y) = 0\,. \end{align} The coboundary operator is nilpotent with $d:\Lambda^p\frak{g}\longrightarrow\frak{g}^*\otimes\Lambda^p\frak{g}$ defined as \begin{equation} d\rho_p(x) = ad_x \rho_p \,, \end{equation} for all $x \in \frak{g}$ and $\rho_p \in \Lambda^p\frak{g}$. Therefore, the cocycle conditions \eqref{eq:cocycle-condensed} can be solved by the (twisted) coboundaries \begin{equation} \begin{split} f_3 &= d\rho_3 \,, \qquad f_6 = d\rho_6 + \frac12 \rho_3 \wedge d\rho_3 \,. \end{split} \end{equation} In components, these are equivalent to \begin{equation} \begin{split} f_a{}^{b_1b_2b_3} &= 3\,f_{ac}{}^{[b_1}\,\rho^{|c|b_2b_3]}\,, \\ f_a{}^{b_1\ldots b_6} &= 6\,f_{ac}{}^{[b_1|}\,\rho^{c|b_2\ldots b_6]} + 30\,f_{ac}{}^{[b_1}\,\rho^{|c|b_2b_3}\,\rho^{b_4b_5b_6]} \,. \label{eq:coboundary} \end{split} \end{equation} The coboundary case is related to a generalisation of Yang-Baxter deformations. The trivector $\rho^{a_1a_2a_3}$ and the hexavector $\rho^{a_1\ldots a_6}$ correspond to the M-theoretic analogue of the classical $r$-matrix. The equations corresponding to the classical Yang-Baxter equations for the $r$-matrices are implied by substituting the solutions \eqref{eq:coboundary} to the fundamental identities \eqref{eq:FI-3} and \eqref{eq:FI-6}. We will discuss this further in section \ref{s:YB}. Finally, the additional constraint \eqref{eq:addcon} implies that the ad-invariant co-product $\Delta$ on $\frak{g}^*$ \eqref{eq:adcoproduct} defines a commuting subspace of the 3-bracket: \begin{equation} \left\{ \Delta(x_1),\, x_2 \right\}_3 = 0 \,,\; \forall\, x_1, x_2 \in \frak{g}^*\,. \end{equation} \section{$E_{6(6)}$ EDA from generalised frame fields} \label{s:GenFrame} We now provide a geometric realisation of the $E_{6(6)}$ EDA by constructing a Leibniz parallelisation \cite{Grana:2008yw,Aldazabal:2011nj,Geissbuhler:2011mx,Berman:2012uy,Lee:2014mla,Hohm:2014qga} of the exceptional generalised tangent bundle \cite{Pacheco:2008ps,Berman:2010is,Berman:2011cg,Coimbra:2011ky,Berman:2012vc,Coimbra:2012af,Hohm:2013pua} \begin{align} E &\cong TM\oplus \Lambda^2 T^*M \oplus \Lambda^5 T^*M \, , \end{align} in which we identify the manifold $M= G= \exp \frak{g}$. We will also be interested in a second bundle \begin{align} N &\cong T^*M \oplus \Lambda^4 T^*M \oplus( T^*M\otimes \Lambda^6 T^*M) \,. 
\end{align} The action of sections of these bundles, \begin{align} V = v + \nu_2 + \nu_5\ \in \Gamma (E)\,, \quad W = w + \omega_2 + \omega_5\ \in \Gamma (E)\,, \quad \mathcal{X} &= \chi_1 + \chi_4 + \chi_{1,6}\ \in \Gamma(N)\,, \end{align} is mediated by the generalised Lie derivative \cite{Berman:2011cg,Coimbra:2011ky,Berman:2012vc} defined as \begin{align} \LL_{V} W &= [v,w] + \bigl(L_v \omega_2 - \imath_w d\nu_2\bigr) + \bigl( L_v \omega_5 - \imath_w d\nu_5 - \omega_2 \wedge d\nu_2 \bigr) \,, \label{eq:gLie} \\ \LL_V \mathcal{X} &= L_v \chi_1 + \bigl(L_v \chi_4 - \chi_1 \wedge d\nu_2\bigr) + \bigl(L_v \chi_{1,6} + j\chi_4 \wedge d\nu_2 + j\chi_1\wedge d \nu_5\bigr) \,. \end{align} We define \cite{Coimbra:2011ky} a symmetric bilinear map $\langle\cdot ,\,\cdot \rangle : E\times E\to N$ as \begin{align} \langle V,\,W \rangle &= (\imath_v \omega_2+\imath_w \nu_2) + (\imath_v \omega_5 - \nu_2\wedge \omega_2 + \imath_w\nu_5) + (j\nu_2\wedge \omega_5 + j\omega_2\wedge \nu_5)\,, \end{align} such that the generalised Lie derivative satisfies \begin{align} \langle \LL_U V,\,W\rangle + \langle V,\,\LL_U W\rangle = \LL_U \langle V,\,W \rangle \, \quad \forall U,V,W \in \Gamma(E). \label{eq:gL-bracket} \end{align} The parallelisation consists of a set of sections $E_A \in \Gamma(E)$ that: \begin{itemize} \item form a globally defined basis for $\Gamma(E)$; \item give rise to an $E_{6(6)}$ element\footnote{An extension of this setup is to allow $E_{A}{}^M$ to be elements of $E_{6(6)}\times \mathbb{R}^+$, though for simplicity in the presentation we shall demand no $\mathbb{R}^+$ weighting.}, $E_{A}{}^M$, whose matrix entries are the components of $E_A$; \item realise the algebra of the EDA through the generalised Lie derivative \begin{equation}\label{eq:framealg} {\cal L}_{E_A} E_B =- X_{AB}{}^C E_C \, , \end{equation} where the constants $X_{AB}{}^C$ are the same as those defined through the relations in eq. \eqref{eq:TheEDA} and obey the Leibniz identity. \end{itemize} The parallelisation can be directly constructed in terms of the right-invariant Maurer-Cartan one-forms on $G$, $r^a$, their dual vector fields $e_a$, and the trivector, $\lambda^{a_1 a_2 a_3}$, and hexavector, $\lambda^{a_1 \dots a_6}$. This can thus be thought of as a special example of the more general prescription of \cite{Inverso:2017lrz}, where we only make use of the aforementioned geometric data on the $(3,6)$-Nambu-Lie group $G$. Following the decomposition of the EDA generators we write $ {E}_{A} = \{ E_{a},\, E^{a_1 a_2} ,\, E^{a_1 \dots a_5} \}$ with \begin{align} \begin{split} E_a &= e_a\,, \qquad E^{a_1a_2} = - \lambda^{a_1a_2b}\,e_b + r^{a_1} \wedge r^{a_2}\,, \\ E^{a_1\ldots a_5} &= \bigl( \lambda^{a_1\ldots a_5b}+ 5\,\lambda^{[a_1a_2a_3}\,\lambda^{a_4a_5]b}\bigr)\,e_b - 10\,\lambda^{[a_1a_2a_3}\,r^{a_4} \wedge r^{a_5]} + r^{a_1} \wedge \ldots\wedge r^{a_5} \,. \end{split} \label{eq:frames} \end{align} It is straightforward, but indeed quite lengthy, to verify that these furnish the EDA algebra. A first check is to see that after using the identities \eqref{eq:dlambda} to evaluate derivatives we can go to the identity of $M$, where $\lambda^{a_1 a_2 a_3}$ and $\lambda^{a_1 \dots a_6}$ vanish. One then has to use the adjoint invariance conditions that follow from eq. \eqref{eq:adinvariance} to conclude that this holds away from the identity. If we specialise now to the case of $f_{b}{}^{a_1\dots a_6}=0$, which we recall is enforced for $n\leq 6$ by the fundamental identities, we quickly find that an immediate consequence of eq.
\eqref{eq:adinvariance} is that $d\lambda^{a_1\dots a_6} =0 $, and since $\lambda^{a_1\dots a_6} $ vanishes at the identity, it must be identically zero. The remaining adjoint invariance conditions can be combined to imply that \begin{align} \begin{split} f_{ab}{}^c\, \lambda^{abd} =0 \, , \quad f_{af}{}^{[b_1|}\,\lambda^{f|b_2b_3}\,\lambda^{b_4b_5b_6]} = 0\,, \quad f_a{}^{[b_1b_2b_3}\,\lambda^{b_4b_5b_6]} = 0 \, ,\\ f_d{}^{b_1b_2c}\, \lambda^{a_1a_2 d}- 3\,f_d{}^{a_1a_2 [c}\, \lambda^{b_1b_2] d} - 3\, f_{de}{}^{[b_1}\, \lambda^{b_2 c]d}\, \lambda^{a_1a_2e} - 3\, f_{de}{}^{[a_1}\, \lambda^{a_2]d[b_1}\, \lambda^{b_2c] e} =0\,. \end{split} \end{align} These conditions are sufficient to ensure that the frame algebra is obeyed. We also define the generalised frame field ${\cal E}_{\cal A}$, which is a section of $N$, through \begin{align} \langle E_A,\,E_B\rangle = \eta_{AB}{}^{{\cal C}}\,{\cal E}_{\cal C} \,, \label{eq:cE-def} \end{align} where $\eta_{AB}{}^{{\cal C}}$ is an invariant tensor of $E_{n(n)}$. For $E_{6(6)}$ this tensor is related to the symmetric invariant (see the appendix for details) such that the explicit form of ${\cal E}_{\cal A}$ has components \begin{align} \begin{split} {\cal E}^a &= r^a\,,\qquad {\cal E}^{a_1\ldots a_4} =4\,\lambda^{[a_1a_2a_3}\,r^{a_4]} + r^{a_1\ldots a_4} \,, \\ {\cal E}^{a',a_1\ldots a_6}&= (\lambda^{a_1\ldots a_6}\,r^{a'}-30\,\lambda^{a'[a_1a_2}\,\lambda^{a_3a_4a_5}\,r^{a_6]}) -15\,\lambda^{a'[a_1a_2}\,r^{a_3\ldots a_6]} + jr^{a'} r^{a_1\ldots a_6} \,. \end{split} \end{align} Here we denote $r^{a_1 \ldots a_m} = r^{a_1} \wedge \dots \wedge r^{a_m}$ and make use of the $j$-wedge contraction of \cite{Pacheco:2008ps} to deal with mixed symmetry fields.\footnote{For a $p+1$-form $\alpha$ and a $(n-p)$-form $\beta$, we define \begin{equation} \left( j\alpha \wedge \beta\right)_{i,i_1\ldots i_n} = \frac{n!}{p!(n-p)!} \alpha_{i[i_1 \ldots i_p} \beta_{i_{p+1} \ldots i_n]} \,. \end{equation}} One can now consider the action of the frame field $E_A$ on these ${\cal E}_{{\cal A}}$ and, by virtue of eq. \eqref{eq:cE-def}, again find that they furnish the EDA algebra, albeit in a different representation as described in the appendix. \subsection{Generalised Scherk-Schwarz reductions and the Embedding Tensor} The generalised frame field introduced above can be used as a compactification Ansatz within ExFT, in what is known as a generalised Scherk-Schwarz reduction. In this procedure all internal coordinate dependence is factorised into dressings given by the generalised frame. The algebra in eq. \eqref{eq:framealg} ensures that the dimensional reduction results in a lower dimensional gauged supergravity. The structure constants of the EDA determine the gauge group of this lower dimensional theory, and in such a context are known as the embedding tensor. To facilitate contact with the literature \cite{LeDiffon:2008sh} we can express this in terms of the $\overline{{\bf 27}}$ and ${\bf 351}$ representations of $E_{6(6)}$ as \begin{equation} X_{AB}{}^C= d_{AB D}Z^{CD} + 10 d_{ADS} d_{BRT} d^{CDR} Z^{ST} - \frac{3}{2} \vartheta_{[A} \delta_{B]}^C - \frac{15}{2} d_{AB D} d^{CDE}\vartheta_E \, . \end{equation} The components of the antisymmetric $Z^{AB}$ are determined to be \begin{align} \begin{split} Z^{ab}&=0\,,\\ Z_{a_1a_2}{}^b &=-\tfrac{5 }{2 \cdot 4!\sqrt{10}} \,f_d{}^{db c_1\ldots c_4 }\,\epsilon_{a_1a_2 c_1\ldots c_4 } \, (=0) \,, \\ Z_{a_1\ldots a_5 }{}^b &= - \tfrac{5}{2\sqrt{10}} \,f_d{}^{dbc}\,\epsilon_{a_1\ldots a_5 c} \,, \\ Z_{a_1 a_2\, , \, b_1 b_2} &= - \tfrac{5}{ 3!
\sqrt{10}} \,(f_{[a_1}{}^{c_1c_2c_3}\,\epsilon_{a_2]c_1c_2c_3 b_1 b_2 }- f_{[b_1}{}^{c_1c_2c_3}\,\epsilon_{b_2]c_1c_2c_3 a_1 a_2 } ) \,, \\ Z_{a_1 a_2 \, , \, b_1 \ldots b_5} &= - \tfrac{5}{2\sqrt{10}} \,(f_{a_1 a_2 }{}^c\,\epsilon_{c b_1 \ldots b_5}- 10 f_{[b_1 b_2}{}^c\,\epsilon_{b_3 b_4 b_5] c a_1 a_2}) \,, \\ Z_{a_1\dots a_5 \, , \, b_1 \dots b_5 } &= 0\,, \end{split} \end{align} and those of $\vartheta_A$ (sometimes called the trombone gauging) to be \begin{align} \vartheta_a{} = \frac{f_{ac}{}^c}{3} \,, \quad \vartheta^{a_1 a_2 } = - \frac{f_c{}^{ca_1 a_2}}{3} \,,\quad \vartheta^{a_1\ldots a_5 } = - \frac{f_c{}^{ca_1\ldots a_5}}{3} \, (= 0) \, . \end{align} \section{Yang-Baxter-ology} \label{s:YB} \subsection{EDA via $\rho$-twisting} In the context of DFT, Yang-Baxter deformations can be understood as the $\mathrm{O}(d,d)$ transformation, generated by a bivector, acting on a Drinfel'd double with vanishing dual structure constants \cite{Araujo:2017jkb,Sakamoto:2017cpu,Sakamoto:2018krs,Bakhmatov:2018apn,Bakhmatov:2018bvp,Catal-Ozer:2019tmm}. The bivector that generates the transformation is then related to the classical $r$-matrix, the dual structure constants are coboundaries and the requirement that the $\mathrm{O}(d,d)$ transformed algebra is a Drinfel'd double is precisely the classical Yang-Baxter equation. This suggests a natural generalisation of Yang-Baxter deformations to EDAs \cite{Sakatani:2019zrs,Malek:2019xrf}. We begin with an EDA $\widehat{\frak{d}}_6$ with only the structure constants, $f_{ab}{}^c$, corresponding to a maximally isotropic Lie subalgebra $\frak{g}$, non-vanishing and $f_a{}^{bcd}=f_a{}^{b_1\ldots b_6}=0$, i.e. \begin{equation} \begin{split} \hat{T}_a \circ \hat{T}_b &= f_{ab}{}^c\,\hat{T}_c \,,\\ \hat{T}_a \circ \hat{T}^{b_1b_2} &= 2\,f_{ac}{}^{[b_1}\,\hat{T}^{b_2]c}\,,\\ \hat{T}_a \circ \hat{T}^{b_1\ldots b_5} &= - 5\,f_{ac}{}^{[b_1}\,\hat{T}^{b_2\ldots b_5]c} \,,\\ \hat{T}^{a_1a_2} \circ \hat{T}_b &= 3\,f_{[c_1c_2}{}^{[a_1}\,\delta^{a_2]}_{b]}\,\hat{T}^{c_1c_2}\,,\\ \hat{T}^{a_1a_2} \circ \hat{T}^{b_1b_2} &= f_{c_1c_2}{}^{[a_1}\,\hat{T}^{a_2]b_1b_2c_1c_2}\,,\\ \hat{T}^{a_1a_2} \circ \hat{T}^{b_1\ldots b_5} &= 0 \,,\\ \hat{T}^{a_1\ldots a_5} \circ \hat{T}_b &= 5\,f_{bc}{}^{[a_1}\,\hat{T}^{a_2\ldots a_5]c} + 10\,f_{c_1c_2}{}^{[a_1}\,\delta^{a_2}_b\,\hat{T}^{a_3a_4a_5]c_1c_2} \,,\\ \hat{T}^{a_1\ldots a_5} \circ \hat{T}^{b_1b_2} &= 0\,,\\ \hat{T}^{a_1\ldots a_5} \circ \hat{T}^{b_1\ldots b_5} &=0 \,. \end{split} \end{equation} We denote the structure constants collectively as $\hat{X}_{AB}{}^C$\,. We now perform an $E_{6(6)}$ transformation of the above EDA by a trivector, $\rho^{abc}$, and hexavector, $\rho^{a_1\ldots a_6}$, which will play the analogue of the classical $r$-matrix. The corresponding $E_{6(6)}$ group element is given by \begin{align} \label{eq:YBtr} C_A{}^B\equiv \bigl(e^{\frac{1}{6!}\,\rho^{a_1\ldots a_6}\,R_{a_1\ldots a_6}}e^{\frac{1}{3!}\,\rho^{a_1a_2a_3}\,R_{a_1a_2a_3}}\bigr)_A{}^B\,, \end{align} in which the generators $R_{a_1a_2 a_3}$ and $R_{a_1\ldots a_6}$ are specified in the appendix. 
Explicitly we have that \begin{align} (C_A{}^B) = \begin{pmatrix} \delta_a^b & 0 & 0 \\ \frac{\rho^{b a_1a_2}}{\sqrt{2!}} & \delta^{a_1a_2}_{b_1b_2} & 0 \\ \frac{\tilde{\rho}^{b;a_1\cdots a_5}}{\sqrt{5!}} & \frac{20\,\delta_{b_1b_2}^{[a_1a_2} \rho^{a_3a_4a_5]}}{\sqrt{2!\,5!}} & \delta^{a_1\cdots a_5}_{b_1\cdots b_5} \end{pmatrix} \, , \end{align} where \begin{align} \tilde{\rho}^{b;a_1\ldots a_5} &\equiv \rho^{ba_1\ldots a_5}+5\,\rho^{b[a_1a_2}\,\rho^{a_3a_4a_5]} \,. \end{align} Equivalently, we twist the generators by the group element \eqref{eq:YBtr} resulting in \begin{equation} \begin{split} T_a &= \hat{T}_a\,,\qquad T^{a_1a_2} = \hat{T}^{a_1a_2} + \rho^{ba_1a_2}\,\hat{T}_b\,, \\ T^{a_1\ldots a_5} &= \hat{T}^{a_1\ldots a_5} + 10\,\rho^{[a_1a_2a_3}\,\hat{T}^{a_4a_5]} + \tilde{\rho}^{b;a_1\ldots a_5} \,\hat{T}_b\,. \end{split} \end{equation} For the twisted generators, we obtain $T_A \circ T_B =X_{AB}{}^C\,T_C$ with \begin{align} X_{AB}{}^C \equiv C_A{}^D \,C_B{}^E\,(C^{-1})_F{}^C\,\hat{X}_{DE}{}^F\,. \label{eq:cF-def} \end{align} We now require that the new algebra defines an EDA $\frak{d}_6$. This imposes conditions on $\rho^{abc}$ and $\rho^{a_1\ldots a_6}$ and we will interpret these as analogues of the classical Yang-Baxter equation. From the products $T_a\circ T^{b_1b_2}$ and $T_a\circ T^{b_1\ldots b_5}$\,, the dual structure constants are identified as \begin{align} f_a{}^{b_1b_2b_3} = 3\,f_{ac}{}^{[b_1}\,\rho^{|c|b_2b_3]}\,, \qquad f_a{}^{b_1\ldots b_6} = 6\,f_{ac}{}^{[b_1|}\,\tilde{\rho}^{c;|b_2\ldots b_6]} (=0)\,, \end{align} which take the form of (twisted) coboundaries \eqref{eq:coboundary} and the $(=0)$ holds for $\frak{d}_6$. The remaining products impose a number of conditions, of which the following is particularly intriguing \begin{equation} \begin{split} \label{eq:YB} 3\, f_{d_1d_2}{}^{[b_1}\,\rho^{b_2b_3]d_1}\,\rho^{a_1a_2d_2} &= f_{d_1d_2}{}^{[a_1}\,\tilde{\rho}^{a_2];b_1b_2b_3 d_1d_2} \,. \end{split} \end{equation} This is a natural generalisation of the classical Yang-Baxter (YB) equation which we will elaborate more on later. In general, we get a further set of conditions which are required to ensure that the new algebra defines an EDA $\frak{d}_6$. Mostly these additional conditions appear rather cumbersome but we note the requirement that \begin{equation} \begin{split} \label{eq:YB-2} \rho^{a_1 a_2 b}f_{a_1 a_2}{}^c &=0\,. \\ \end{split} \end{equation} With $f_a{}^{b_1\ldots b_6}=0$, the Bianchi identity for $f_{ab}{}^c$ together with the generalised Yang-Baxter equation eq. \eqref{eq:YB} and compatibility condition eq. \eqref{eq:YB-2} imply the fundamental identity for $f_{a}{}^{b_1\dots b_3}$. Indeed since the Leibniz identity \eqref{eq:Leibniz} is $E_{6(6)}$-invariant, it is guaranteed to hold for $X_{AB}{}^C$. Therefore, we see that the generalised Yang-Baxter equation \eqref{eq:YB} together with the other conditions obtained by imposing that the new algebra is an EDA imply that the new dual structure constants satisfy their fundamental identities \eqref{eq:FI-3} and \eqref{eq:FI-6} and the condition \eqref{eq:addcon}. In \cite{Bakhmatov:2019dow}, a different approach was taken using a generalisation of the open/closed string map to propose a generalisation of the classical YB equation for a trivector deformation of 11-dimensional supergravity. The approach of \cite{Bakhmatov:2019dow} is not limited to group manifolds, unlike the present case, but also only considers trivector deformations. 
However, when specialising \cite{Bakhmatov:2019dow} to group manifolds and considering our deformations with $\rho^{a_1\ldots a_6} = 0$, the resulting equation of \cite{Bakhmatov:2019dow} is different and, in particular, weaker than the YB equation we find here \eqref{eq:YB} with $\rho^{a_1\ldots a_6} = 0$, or indeed the $\SL{5}$ case discussed in \cite{Sakatani:2019zrs,Malek:2019xrf}. Indeed, as shown in \cite{Bakhmatov:2020kul} based on explicit examples, the proposed YB-like equation of \cite{Bakhmatov:2019dow} is not sufficient to guarantee a solution of the equations of motion of 11-dimensional supergravity, while our deformations subject to the above conditions preserve the equations of motion of 11-dimensional supergravity by construction. \subsection{Nambu 3- and 6-brackets from $\rho$-twisting} \label{s:YBBrackets} The trivector $\rho^{a_1a_2a_3}$ and hexavector $\rho^{a_1\ldots a_6}$ define 3- and 6-brackets via \eqref{eq:coboundary}. First define the maps \begin{equation} \rho_3: \frak{g}^* \wedge \frak{g}^* \longrightarrow \frak{g} \,, \qquad \tilde{\rho}_6: \Lambda^5 \frak{g}^* \longrightarrow \frak{g} \,, \end{equation} as \begin{equation} \begin{split} \rho_3(x_1,x_2) &= \rho^{abc}\, (x_1)_b\, (x_2)_c\, T_a \,, \\ \tilde{\rho}_6(x_1,\ldots, x_5) &= \tilde{\rho}^{a_1; a_2 \ldots a_6}\, (x_1)_{a_2} \ldots (x_5)_{a_6}\, T_{a_1} \,, \; \forall\; x_1,\, \ldots,\, x_5 \in \frak{g}^* \,. \end{split} \end{equation} This allows the generalised Yang-Baxter equation eq. \eqref{eq:YB} to be cast in a basis-independent way \begin{equation} \begin{split} \label{eq:YB-bi} x_1 \left( \left( ad_{\rho_3(y_1,y_2)} \rho_3 \right)(x_2,x_3) \right) + y_1 \left( \tilde{\rho}_6(\Delta(y_2),x_1,x_2,x_3) \right) - y_2 \left( \tilde{\rho}_6(\Delta(y_1),x_1,x_2,x_3) \right) &= 0 \,, \end{split} \end{equation} for all $y_1,\,y_2,\,x_1,\,x_2,\,x_3 \in \frak{g}^*$. Note that the first term in \eqref{eq:YB-bi} is automatically antisymmetric in $\left(x_1,\,x_2,\,x_3\right)$ due to the antisymmetry of $\rho^{a_1a_2a_3}$. Then, the associated 3- and 6-brackets are defined as \begin{equation} \begin{split} \label{eq:YBbrackets} \left\{ x_1,\,x_2,\,x_3\right\}_3 &= ad_{\rho_3(x_1,x_2)} x_3 + ad_{\rho_3(x_2,x_3)} x_1 + ad_{\rho_3(x_3,x_1)} x_2 \,, \\ \left\{ x_1,\, \ldots,\, x_6 \right\}_6 &= ad_{\tilde{\rho}_6(x_{1},\ldots, x_{5})} x_{6} + \text{cyclic permutations} \,, \end{split} \end{equation} for all $x_1,\, \ldots ,\, x_6 \in \frak{g}^*$. Alternatively, to make closer contact with the usual discussion of classical $r$-matrices in integrability, we can use the Cartan-Killing form on $\frak{g}$ to define 3- and 6-brackets on $\frak{g}$. For this, it is more convenient to define $\rho'_3$ and $\tilde{\rho}'_6$ as \begin{equation} \rho'_3: \frak{g} \wedge \frak{g} \longrightarrow \frak{g} \,, \qquad \tilde{\rho}'_6: \Lambda^5 \frak{g} \longrightarrow \frak{g} \,, \end{equation} with \begin{equation} \rho'_3(x_1,x_2) = \rho_3(\kappa(x_1),\kappa(x_2)) \,, \qquad \tilde{\rho}'_6(x_1,\ldots,x_5) = \tilde{\rho}_6(\kappa(x_1),\ldots,\kappa(x_5)) \,, \end{equation} and where $\kappa$ is the Cartan-Killing metric viewed as a map $\kappa: \mathfrak{g} \longrightarrow \mathfrak{g}^*$.
Now, the 3- and 6-brackets on $\frak{g}$ are defined as \begin{equation} \begin{split}\label{eq:YBbrackets2} \left\{x_1,\,x_2,\,x_3\right\}_3 &= \left[ x_1, \rho'_3(x_2,x_3) \right] + \left[ x_2,\rho'_3(x_3,x_1) \right] + \left[ x_3,\rho'_3(x_1,x_2) \right] \,, \\ \left\{ x_1,\, \ldots,\, x_6 \right\}_6 &= \left[ x_1, \tilde{\rho}'_6(x_{2},\ldots,x_6) \right] + \text{cyclic permutations} \,. \end{split} \end{equation} The generalised Yang-Baxter equation \eqref{eq:YB}, together with the other constraints required for the new algebra to define an EDA, such as \eqref{eq:YB-2}, implies that the 3- and 6-brackets defined above in \eqref{eq:YBbrackets} and \eqref{eq:YBbrackets2} satisfy their fundamental identities \eqref{eq:FI-3} and \eqref{eq:FI-6}. \subsection{The generalised YB equation} To understand better the generalised YB equation obtained above, let us adopt a tensor product notation $\rho_{124} = \rho^{abc} T_{a}\otimes T_{b} \otimes 1 \otimes T_c \otimes 1 $ etc., such that the indices denote the contracted slots in a tensor product of $\frak{g}$. Assuming that \eqref{eq:YB-2} holds and that $\rho^{a_1\ldots a_6}= 0$ we have that \eqref{eq:YB} becomes \begin{align}\label{eq:YBtensor} &[\rho_{123},\rho_{145}] + [\rho_{123},\rho_{245}] + [\rho_{123},\rho_{345}] \nonumber\\ &+ \frac{1}{2}\bigl( [\rho_{124}+\rho_{125},\rho_{345}] +[\rho_{234}+\rho_{235},\rho_{145}] +[\rho_{314}+\rho_{315},\rho_{245}] \bigr) = 0\,. \end{align} Introducing an (anti-)symmetrizer $\sigma_{[123],[45]}$ in the tensor product allows this equation to be concisely given as \begin{equation}\label{eq:YBtensor2} \sigma_{[123],[45]} [\rho_{123} +\rho_{234} , \rho_{145}] = 0 \, . \end{equation} Suppose that we have a preferred $q\in\frak{g}$ such that \begin{equation}\label{eq:rhotor} \rho_{123} = r_{ 12} \otimes q_{3 } + r_{ 23} \otimes q_{1} - r_{ 13} \otimes q_{2} \,,\quad r_{12} = \sum_{a,b\neq q} r^{ab}T_{a}\otimes T_b = - r_{21}\, , \end{equation} with $r_{12}$ neutral (i.e. $[r_{12},q_1]= 0$); then we find that eq. \eqref{eq:YBtensor} becomes \begin{align} {\tt YB}_{[12|4|} \otimes q_{3]} \otimes q_{5} - {\tt YB}_{[12|5|} \otimes q_{3]} \otimes q_{4} = 0\, , \end{align} in which \begin{equation}\label{eq:cYB} {\tt YB}_{123} = [r_{12}, r_{13}] + [r_{12}, r_{23}] + [r_{13}, r_{23}]\, , \end{equation} is the classical Yang-Baxter equation for $r$. Recall that the classical YB equation arises from the quantum one \begin{equation} \mathbb{R}_{13}\mathbb{R}_{12}\mathbb{R}_{32} = \mathbb{R}_{32}\mathbb{R}_{12}\mathbb{R}_{13} \end{equation} as the leading terms in the `classical' expansion $\mathbb{R}_{12} = 1 + \hbar\,r_{12} + O(\hbar^2)$. An obvious question is whether there is an equivalent `quantum' version of eq. \eqref{eq:YBtensor2}. We give here one proposal (with no claim of first-principles derivation or uniqueness) for such a starting point. Let us define\footnote{We use lower case Roman indices to denote tensor product locations.} in the semi-classical limit \begin{align}\label{eq:Rsemicl} \mathbb{R}_{i;jk} &= 1 + \hbar\, \rho_{ijk} + O(\hbar^2) \, , \\ \mathbb{R}_{ij;kl} &= 1 + \frac{\hbar}{4} \left( \rho_{ijk}+\rho_{ijl} - \rho_{ikl}-\rho_{jkl} \right)+ O(\hbar^2)\, . \end{align} Then eq. \eqref{eq:YBtensor} follows from \begin{equation}\label{eq:Rguess} \sigma_{[123],[45]} \mathbb{R}_{1;23} \mathbb{R}_{23;45} \mathbb{R}_{1;45} = \sigma_{[123],[45]} \mathbb{R}_{1;45} \mathbb{R}_{23;45} \mathbb{R}_{1;23} \, .
\end{equation} This viewpoint suggests that this may just represent a standard Yang-Baxter equation for the scattering of $\wedge^2\frak{g}$, $\wedge^2\frak{g}$ and $\frak{g}$ obtained by S-matrix fusion. Here we leave an exploration of this as an open direction; further work is required to understand which quantum R-matrices give rise under S-matrix fusion to an $\mathbb{R}_{i;jk}$ and $\mathbb{R}_{ij;kl}$ with the expansion \eqref{eq:Rsemicl}, and what the resultant $\rho_{ijk}$ are. Conversely, one might ask whether there exist solutions of \eqref{eq:Rguess} compatible with \eqref{eq:Rsemicl} that are not obtained from fusion. \begin{figure}[tbp!] \begin{center} \includegraphics[width=0.9\textwidth]{interaction_rev.pdf} \end{center} \vskip -0.6cm \caption{ A proposed schematic for the generalised Yang-Baxter equation. The red lines indicate anti-symmetrisation and the black circle is a contact term that gives rise in the semi-classical limit to a contribution involving $\rho_6$. } \label{fig:fusion} \end{figure} Restoring $\rho^{a_1\ldots a_6}$ we can amend this equation to \begin{equation} \sigma_{[123],[45]} [\rho_{123} +\rho_{234} , \rho_{145}] = \frac{1}{2}\left( \rho_{12345;5} +\rho_{12345;4} \right)\,, \end{equation} where, for example, \begin{equation} \rho_{12345;5} \equiv \rho^{abcdef}\,T_a\otimes T_b \otimes T_c\otimes T_d \otimes [T_e,T_f] \,. \end{equation} This is somewhat suggestive of a contact term in the YB relation that may lead to a quantum version of the form \begin{equation}\label{eq:Rguesswithrho6} \sigma_{[123],[45]} \mathbb{R}_{1;23} \mathbb{R}_{23;45} \mathbb{R}_{1;45} - \sigma_{[123],[45]} \mathbb{R}_{1;45} \mathbb{R}_{23;45} \mathbb{R}_{1;23} = \sigma_{[123],[45]} \mathbb{R}_{12345;5} \, . \end{equation} Pictographically this is indicated in figure \ref{fig:fusion}. \section{Examples} \label{s:Examples} In this section we wish to present a range of examples of the EDA, both of coboundary type and otherwise. We will give some broad general classes that correspond to embedding the algebraic structure underlying existing T-dualities of the type II theory. In addition, in the absence of a complete classification, we provide here a selection of specific examples. \subsection{Abelian} When the subalgebra $\{T_a\}$ is Abelian, arbitrary $\rho^{a_1 a_2 a_3 }$ and $\rho^{a_1 \ldots a_6}$ are solutions of the Yang-Baxter-like equations. However $f_a{}^{b_1b_2 b_3}=0$ and $f_a{}^{b_1 \ldots b_6}=0$, and the EDA is Abelian. \subsection{Semi-Abelian EDAs and Three Algebras} The algebraic structure corresponding to non-Abelian T-duality is a semi-Abelian Drinfel'd double, i.e. a double constructed from some $(n-1)$-dimensional Lie algebra (representing the non-Abelian isometry group of the target space) together with a $U(1)^{n-1}$ (or perhaps $\mathbb{R}_+^{n-1}$) factor. An analogue here would be to take $f_{ab}{}^c \neq 0 $ and $f_{a}{}^{b_1\dots b_3}=0$; this, however, is not especially interesting. More intriguing is to consider the analogue of the picture {\em after} non-Abelian T-dualisation has been performed, in which the $U(1)^{n-1}$ would be viewed as the physical space. This motivates the case of semi-Abelian EDAs with $f_{ab}{}^c = 0 $ but $f_a{}^{b_1 \dots b_3} \neq 0$. In this case the Leibniz identities reduce to the fundamental identities \begin{align} f_a{}^{d c_1c_2}\, f_d{}^{b_1b_2b_3} - 3\, f_d{}^{c_1c_2[b_1}\,f_a{}^{b_2b_3]d} =0\,,\qquad f_a{}^{b_1\cdots b_6}=0\,. \end{align} Each solution of this identity gives an EDA.
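As a quick illustration, the following \texttt{numpy} snippet (our own sketch, not code from the paper) verifies numerically that the totally antisymmetric symbol in four Euclidean dimensions, complemented with two $u(1)$ directions as discussed next, solves this fundamental identity.

\begin{verbatim}
# Sketch: check the semi-Abelian fundamental identity for
# f_a^{b1 b2 b3} = epsilon^{a b1 b2 b3} on R^4 plus two u(1)'s.
import numpy as np
from itertools import permutations
from math import factorial

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

n = 6
F = np.zeros((n, n, n, n))           # F[a,b1,b2,b3] = f_a^{b1 b2 b3}
for p in permutations(range(4)):     # epsilon on the first four directions
    F[p] = sign(p)

# lhs: f_a^{d c1 c2} f_d^{b1 b2 b3}
lhs = np.einsum('adce,dxyz->acexyz', F, F)
# rhs: 3 f_d^{c1 c2 [b1} f_a^{b2 b3] d}
S = np.einsum('dceb,axyd->acebxy', F, F)
rhs = np.zeros_like(S)
for p in permutations(range(3)):     # antisymmetrise over (b1, b2, b3)
    rhs += sign(p) * np.transpose(S, (0, 1, 2) + tuple(3 + k for k in p))
rhs *= 3 / factorial(3)

print(np.allclose(lhs, rhs))         # True: the identity is satisfied
\end{verbatim}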
To identify these, one can use existing classification efforts and considerations of three algebras that followed in light of their usage \cite{Bagger:2006sk} to describe theories of interacting multiple M2-branes. The first case to consider is that of the Euclidean three algebras, for which $f^{b_1\dots b_4}= f_{a}{}^{b_1\dots b_3} \delta^{ab_4} $ is totally antisymmetric. Here the fundamental identity is very restrictive and results in a unique possibility: the four-dimensional Euclidean three algebra \cite{Nagy:2007cle,Papadopoulos:2008sk,Gauntlett:2008uf}, whose structure constants are just the antisymmetric symbol, complemented with two $U(1)$ directions. Relaxing the requirement of a positive definite invariant inner product allows a wider variety \cite{Gomis:2008uv,Benvenuti:2008bt,Ho:2008ei,deMedeiros:2008bf,Ho:2009nk,deMedeiros:2009hf}. Dispensing with the requirement of an invariant inner product (which thus far appears unimportant for the EDA) allows non-metric three algebras \cite{Awata:1999dz, Gustavsson:2008dy,Gran:2008vi}.\footnote{In addition there are three algebra structures \cite{Bagger:2008se,Chen:2009cwa} in which $f_d{}^{abc}$ is not totally antisymmetric in its upper indices. These can be used to describe interacting 3d theories with lower supersymmetry. It is unclear if they could play a role in the context of EDAs.} \subsection{$r$-matrix EDAs} We now consider coboundary EDAs given in terms of an $r$-matrix as in eq.~\eqref{eq:rhotor} obeying the YB equation \eqref{eq:cYB}. Splitting the generators of $\frak{g}$ into $T_{\bar{a}}$ with $\bar{a} = 1,\dots, 5$ and $T_6$ (identified with the generator $q$ appearing in \eqref{eq:rhotor}) we have the non-vanishing components $\rho^{\bar{a}\bar{b} 6} = r^{\bar{a}\bar{b}}$. Furthermore the condition \eqref{eq:YB-2} requires that \begin{equation} \label{eq:rf} r^{\bar{a}\bar{b}} f_{\bar{a} \bar{b}}{}^6 = r^{\bar{a}\bar{b}} f_{\bar{a} \bar{b}}{}^{\bar{c}} = r^{\bar{a}\bar{b}} f_{\bar{a} 6 }{}^{\bar{c}} =r^{\bar{a}\bar{b}} f_{\bar{a} 6 }{}^6 =0 \, , \end{equation} in which the last two equalities match the statement that $r$ is neutral under $T_6$. In such a setup, the dual structure constants are specified as \begin{align} \begin{split} f_{\bar{a}}{}^{\bar{b}_1\bar{b}_2 6} = 2 f_{\bar{a}\bar{c}}{}^{[\bar{b}_1} r^{|\bar{c}|\bar{b}_2]} + f_{\bar{a}6}{}^6 r^{\bar{b}_1\bar{b}_2} \, , \quad f_{\bar{a}}{}^{\bar{b}_1\bar{b}_2\bar{b}_3} = 3 f_{\bar{a}6}{}^{[\bar{b}_1}r^{\bar{b}_2 \bar{b}_3]}\, , \quad f_{6}{}^{\bar{b}_1\bar{b}_2 6} = f_{6}{}^{\bar{b}_1\bar{b}_2 \bar{b}_3} = 0 \, . \end{split} \end{align} Assuming further that $\bar{\frak{g}} =\textrm{span}(T_{\bar{a}}) $ is a subalgebra of $\frak{g}$, $r^{\bar{a}\bar{b}}$ defines an $r$-matrix on $\bar{\frak{g}}$ obeying the YB equation. Consequently $\tilde{f}^{\bar{b}_1\bar{b}_2}{}_{\bar{a} }=- 2 f_{\bar{a}\bar{c}}{}^{[\bar{b}_1} r^{|\bar{c}|\bar{b}_2]} $ are the structure constants of a dual Lie algebra $\bar{\frak{g}}_R$ and $\bar{\frak{d}} = \bar{\frak{g}} \oplus \bar{\frak{g}}_R $ is a Drinfel'd double. Thus we have a family of embeddings of the Drinfel'd double into the EDA specified by $f_{\bar{a}6}{}^6$ and $f_{\bar{a}6}{}^{\bar{b}}$. When $\frak{g} = \bar{\frak{g}} \oplus u(1)$ is a direct sum (such that $f_{\bar{a}6}{}^6=f_{\bar{a}6}{}^{\bar{b}} = 0 $), this is precisely an example of the non-metric three algebra of \cite{Awata:1999dz, Gustavsson:2008dy,Gran:2008vi}.
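To make the $r$-matrix construction concrete, the following small \texttt{numpy} check (our own illustrative choice, not an example drawn from the text) confirms in components that the jordanian $r$-matrix $r = h \wedge e$ on the two-dimensional Borel subalgebra with $[h,e]=2e$ solves the classical Yang-Baxter equation \eqref{eq:cYB}.

\begin{verbatim}
# Sketch: component check of the classical Yang-Baxter equation for
# the jordanian r-matrix r = h ^ e, with [h, e] = 2e.
import numpy as np

n = 2                                # basis (h, e)
f = np.zeros((n, n, n))              # f[a,b,c] = f_{ab}^c
f[0, 1, 1], f[1, 0, 1] = 2.0, -2.0   # [h, e] = 2e

r = np.zeros((n, n))
r[0, 1], r[1, 0] = 1.0, -1.0         # r = h ^ e

# YB^{xyz} = f_{ac}^x r^{ay} r^{cz} + f_{bc}^y r^{xb} r^{cz}
#          + f_{bd}^z r^{xb} r^{yd}
yb = (np.einsum('acx,ay,cz->xyz', f, r, r)
      + np.einsum('bcy,xb,cz->xyz', f, r, r)
      + np.einsum('bdz,xb,yd->xyz', f, r, r))
print(np.allclose(yb, 0))            # True: r solves the CYBE
\end{verbatim}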
We emphasise though that not every (coboundary) double can be embedded in this way; one must still ensure that equation \eqref{eq:rf} holds. \subsection{Explicit examples} We now present a selection of explicit examples that illustrate coboundary and non-coboundary EDAs. \subsubsection{Trivial non-examples based on $SO(p,q)$} To illustrate that the EDA requirements are indeed quite restrictive we can first consider the case of $\frak{g} =\frak{so}(p,q)$ with $p+q=4$. A direct consideration of the Leibniz identities reveals that there is no non-zero solution for $f_a{}^{b_1\dots b_3}$ (in fact the cocycle conditions alone determine this). Equally the Leibniz identities admit only trivial solutions in the case of $\frak{iso}(p,q)= \frak{so}(p,q) \ltimes \mathbb{R}_{+}^{p+q}$ with $p+q =3$. \subsubsection{An example with both coboundary and non-coboundary solutions} We consider an indecomposable nilpotent Lie algebra $N_{6,22}$ of \cite{doi:10.1063/1.522992} specified by the structure constants\footnote{Here we introduced a parameter $c_0$ for convenience, which is 1 in \cite{doi:10.1063/1.522992}.} \begin{align} f_{12}{}^3 = 1\,,\quad f_{13}{}^5 = 1\,,\quad f_{15}{}^6 = c_0\,,\quad f_{23}{}^4 = 1\,,\quad f_{24}{}^5 = 1\,,\quad f_{34}{}^6 = c_0\,. \end{align} We find a family of solutions \begin{align} \begin{split} f_1{}^{356} &= d_1\,,\quad f_1{}^{456} = d_2\,,\quad f_2{}^{156} = d_3\,,\quad f_2{}^{346} = -d_3\,, \\ f_2{}^{356} &= d_4\,,\quad f_2{}^{456} = d_5\,,\quad f_3{}^{456} = -d_1 + d_3\,, \end{split} \end{align} which indeed satisfies the closure constraints. In particular, if we choose $c_0=0$, we can clearly see that this EDA contains a 10D Drinfel'd double $\{T_{\bar{a}},T^{\bar{a}6}\}$ ($\bar{a}=1,\dotsc,5$) as a Lie subalgebra. We can also find $\rho^{abc}$ by considering the coboundary Ansatz. Supposing $c_0\neq 0$, the general solution to the generalised Yang-Baxter equation and compatibility condition is \begin{align} \rho^{146} = e_1\,,\quad \rho^{156} = e_2\,,\quad \rho^{256} = e_3\,,\quad \rho^{346} = -e_2\,,\quad \rho^{356} = e_4\,,\quad \rho^{456} = e_5\,, \end{align} where $e_1=0$ or $e_3=0$\,. The corresponding structure constants are \begin{align} \begin{split} f_1{}^{356}&=e_3\,,\quad f_1{}^{456}=e_2\,,\quad f_2{}^{156}=e_1\,,\quad f_2{}^{346}=-e_1\,, \\ f_2{}^{356}&=-2 e_2\,,\quad f_2{}^{456}=e_4\,,\quad f_3{}^{456}=e_1-e_3\,. \end{split} \end{align} This means that only when the $d_i$ ($i=1,\dotsc,5$) take the form \begin{align} d_1=e_3\,,\quad d_2=e_2\,,\quad d_3=e_1\,,\quad d_4=-2e_2\,,\quad d_5=e_4 \,, \end{align} and satisfy $d_1=0$ or $d_3=0$, does the cocycle become a coboundary. \subsubsection{An example with $\rho^6$} In the previous example $\rho^6$ is absent. By considering a Lie algebra of the form $\mathfrak{g} =\mathfrak{g}_4\oplus u(1)\oplus u(1)$, where $\mathfrak{g}_4$ denotes a real 4D Lie algebra as classified in \cite{Popovych:2003xb}, one can construct a number of examples\footnote{In the notation of \cite{Popovych:2003xb}, examples with $\rho^6\neq 0$ are found when $\mathfrak{g}_4$ is one of the following: $A_{3,1}+\mathfrak{u}(1)$, $A_{3,4}^{-1}+\mathfrak{u}(1)$, $A_{3,5}^{0}+\mathfrak{u}(1)$, $A_{4,1}$, $A_{4,2}^{-2}$, $A_{4,5}^{a,b,-(a+b)}$, $A_{4,6}^{-2b,b}$.} (based on unimodular Lie algebras) that admit $\rho^6$. To illustrate this let us consider the case that $\mathfrak{g}_4=A_{4,1}$, specified by structure constants \begin{align} f_{24}{}^1 = 1\,,\quad f_{34}{}^2 = 1\, .
\end{align} We find the generalised Yang-Baxter and compatibility equations admit the following family of solutions: \begin{align} \begin{split} \rho^{123} &= d_1\,,\quad \rho^{125} = d_2\,,\quad \rho^{126} = d_3\,,\quad \rho^{135} = \frac{d_1 d_4 d_8}{2 d_0}\,,\quad \rho^{136} = \frac{d_1 d_5 d_8}{2 d_0}\,,\quad \rho^{145} = d_4\,, \\ \rho^{146} &= d_5\,,\quad \rho^{156} = d_6\,,\quad \rho^{256} = d_7\,,\quad \rho^{356} = d_8\,,\quad \rho^{456} = \frac{2 d_0}{d_1}\,,\quad \rho^{123456} = d_0\,. \end{split} \end{align} The corresponding dual structure constants are \begin{align} \begin{split} f_2{}^{156}&=\frac{2 d_0}{d_1}\,,\quad f_3{}^{125}=d_4\,,\quad f_3{}^{126}=d_5\,,\quad f_3{}^{256}=\frac{2 d_0}{d_1}\,, \\ f_4{}^{125}&=-\frac{d_1 d_4 d_8}{2 d_0}\,,\quad f_4{}^{126}=-\frac{d_1 d_5 d_8}{2 d_0}\,,\quad f_4{}^{156}=-d_7\,,\quad f_4{}^{256}=-d_8\,. \end{split} \end{align} \subsubsection{An $r$-matrix EDA} In order to find a non-trivial example of the $r$-matrix EDAs, we consider a solvable Lie algebra $N_{6,29}^{\alpha\beta}$ of \cite{doi:10.1063/1.528721} defined by structure constants: \begin{align} f_{13}{}^3 = 1\,,\quad f_{15}{}^5 = \alpha\,,\quad f_{16}{}^6 = 1\,,\quad f_{23}{}^3 = 1\,,\quad f_{24}{}^4 = 1\,,\quad f_{25}{}^5 = \beta\,,\quad f_{46}{}^3 = -1\,, \end{align} where $\alpha^2+\beta^2\neq 0$\,. This algebra contains a subalgebra generated by $\bar{\frak{g}} =\textrm{span}(T_{\bar{a}})$ ($\bar{a}=1,\dotsc,5$) and is non-trivial in the sense that it satisfies $f_{\bar{a}6}{}^{\bar{b}}\neq 0$ and $f_{\bar{a}6}{}^6\neq 0$. Supposing $\alpha\beta\neq 0$, we find the general solution for $\rho^{abc}$ is given by \begin{align} \rho^{135} = c_1\,,\quad \rho^{235} = -c_1\,,\quad \rho^{356} = c_2\,,\quad \rho^{345} = c_3\,, \end{align} where $c_1=0$ when $\alpha\neq \beta$\,. The corresponding dual structure constants are \begin{align} \begin{split} f_1{}^{135}&=(1+\alpha) c_1\,,\quad f_1{}^{235}=-(1+\alpha) c_1\,,\quad f_1{}^{345}=(1+\alpha) c_3\,,\quad f_1{}^{356}=(2+\alpha) c_2\,, \\ f_2{}^{135}&=(1+\beta) c_1\,,\quad f_2{}^{235}=-(1+\beta) c_1\,,\quad f_2{}^{345}=(2+\beta) c_3\,,\quad f_2{}^{356}=(1+\beta) c_2\,, \\ f_4{}^{345}&=-c_1\,,\quad f_6{}^{356}=-c_1\,. \end{split} \end{align} This solution contains an $r$-matrix EDA as a particular case $c_1=c_3=0$\,. \section{Conclusion and Outlook} In this work we have consolidated the exploration of exceptional Drinfel'd algebras introduced in \cite{Sakatani:2019zrs,Malek:2019xrf} extending the construction to the context of the $E_{6(6)} $ exceptional group. The algebraic construction here requires the introduction of a new feature: we have to consider not only a Lie algebra $\frak{g}$ together with a three-algebra specified by $f_3 \equiv f_a{}^{b_1\ldots b_3}$ as in \cite{Sakatani:2019zrs,Malek:2019xrf}, but we have to also include a six-algebra $f_6 \equiv f_a{}^{b_1\ldots b_6}$. The Leibniz identities that the EDA must obey enforce a set of fundamental (Jacobi-like) identities for the three- and six-algebra as well as some compatibility conditions. These compatibility conditions require that $f_3$ be a $\frak{g}$-cocycle and $f_6$ be an $f_3$-twisted $\frak{g}$-cocycle. In terms of the $\frak{g}$ coboundary operator $d$ this can be stated as \begin{equation} df_3 = 0 \, , \quad df_6 + f_3 \wedge f_3 = 0 \, . 
\end{equation} We can solve this requirement with a coboundary Ansatz, $f_3 = d\rho_3 $ and $f_6= d\rho_6 + \frac{1}{2} \rho_3 \wedge d\rho_3$, reminiscent of the way a Drinfel'd double can be constructed through an $r$-matrix. Indeed, we find a generalised version of the Yang-Baxter equation for $\rho_3$, concisely expressed as \begin{equation} \sigma_{[123],[45]} [\rho_{123} +\rho_{234} , \rho_{145}] = \frac{1}{2}\left( \rho_{12345;5} +\rho_{12345;4} \right)\, . \end{equation} We proposed a `quantum' relation from which this classical equation can be obtained. This feature, and the resultant interplay between one-, three-, and six-algebras, opens up many interesting avenues for further exploration. The construction of the EDA is closely motivated by considerations within exceptional generalised geometry. We have shown how the EDA can be realised as a generalised Leibniz parallelisation of the exceptional generalised tangent bundle of a group manifold $G$. The data required to construct this mean that $G$ is equipped with a 3-bracket and a 6-bracket, which invites the consideration of Nambu-Lie groups. We then turned to solving the various constraint equations that govern the structure of the EDA. The first thing to note is that, due to the dimension, the only solutions to the fundamental identities have vanishing $f_6$ (and consequently a trivial 6-bracket on $G$). We believe however that in higher dimensions this condition is less stringent and that there will be solutions for which the structure described above is exhibited in full. We then provided a range of examples that illustrate the various features here. We have examples with and without Drinfel'd double subalgebras, and examples that are both of coboundary type (specified by a $\rho_3$ and $\rho_6$) and not of coboundary type. All of the coboundary examples presented here (and indeed all the numerous other examples we have found) can be obtained from the procedure of $\rho$-twisting, i.e. starting with a semi-Abelian EDA and applying an $E_{6(6)}$ transformation parametrised by the $\rho_3$ and $\rho_6$. Despite the dimensionality-induced restriction to $f_6=0$, there are examples for which $\rho_6 \neq 0$. We provide examples where $\rho_3$ can be parametrised in terms of a Yang-Baxter $r$-matrix for a lower-dimensional algebra, as well as where this is not the case. There are several exciting open directions here that we share in the hope that others may wish to develop them further: \begin{itemize} \item Extensions of the EDA to $E_{7(7)}$ and higher are likely to shed further light on the structures involved. As the space gets larger there is more scope to find interesting solutions. \item It would be interesting to develop a more general classification of EDA solutions. \item One feature of EDAs is that they may admit multiple decompositions into physical spaces, and a resultant notion of duality. Further development should go into this very interesting aspect. \item Here we make some robust requirements that result in structures compatible with maximally supersymmetric gauged supergravities. It would likely be interesting to see how the requirements of the EDA can be consistently relaxed to lower supersymmetric settings, for example using \cite{Malek:2017njj}. \item On a mathematical note, perhaps the most intriguing area of all is to develop the `quantum' equivalent of the classical EDA proposed here.
\end{itemize} \section{Acknowledgements} EM is supported by ERC Advanced Grant “Exceptional Quantum Gravity” (Grant No.740209). YS is supported by JSPS Grant-in-Aids for Scientific Research (C) 18K13540 and (B) 18H01214. DCT is supported by The Royal Society through a University Research Fellowship {\em Generalised Dualities in String Theory and Holography} URF 150185 and in part by STFC grant ST/P00055X/1; FWO-Vlaanderen through the project G006119N; and by the Vrije Universiteit Brussel through the Strategic Research Program ``High-Energy Physics''. DCT thanks Chris Blair for collaboration on related work and Tim Hollowood for informative communications.
{ "timestamp": "2020-07-17T02:22:40", "yymm": "2007", "arxiv_id": "2007.08510", "language": "en", "url": "https://arxiv.org/abs/2007.08510" }
\section{Introduction} The classical spin-1/2 Ising model with nearest-neighbor interactions for a lattice with $N$ sites, suggested by Lenz in the 1920s, is defined by the following Hamiltonian \cite{RefJ1} \begin{eqnarray}\label{E:HalfHamil} {\mathcal{H}_{\rm{spin}1/2}} =-J\sum_{<ij>} s_{i}s_{j}-H\sum_{i=1}^{N}s_{i}, \end{eqnarray} where $s_{i}=+1$ or $-1$ is the spin variable. The notation $<ij>$ indicates a sum over nearest-neighbor lattice sites, $J$ is the exchange constant which gives the interaction strength between two neighboring spins, and $H$ is an external field applied to each degree of freedom. Ising solved the one-dimensional (1D) model in 1924 and, on the basis of the fact that the 1D system has no phase transition, he wrongly asserted that there was no phase transition in any dimension \cite{RefJ1,RefJ2}. Peierls proved that the model exhibits a phase transition in two- or higher-dimensional lattices \cite{RefJ3}. The exact solution was found by Onsager \cite{RefJ4,RefJ14} and Yang \cite{RefJ5} using an algebraic approach and the transfer matrix method. According to this analytical solution, in the two-dimensional (2D) square lattice a second-order phase transition takes place at some critical temperature $T_c$ when the external field $H$ tends to zero. This is in agreement with the result of the series expansion method \cite{RefB1} and Monte Carlo simulations \cite{RefB2}. Although there is no exact solution for three-dimensional (3D) lattices, it is possible to find the critical temperature and critical exponents of the model using numerical methods like Monte Carlo simulations \cite{RefB2,RefJ6}. The spin-1/2 Ising model is appropriate to describe systems in which each degree of freedom has two states, but for systems with three states the spin-1 Ising model is more suitable \cite{RefB1}. In the last decades, efforts have been made to study theoretically the underlying physics predicted by the spin-1 Ising model using mean-field theories and effective-field theories \cite{RefnewJ1}. In particular, the mean-field solution in the presence of a random crystal field and the effects of the magnitude of the crystal field on the critical properties have been investigated \cite{RefnewJ2,RefnewJ3,RefnewJ4,RefnewJ5}. On the other hand, the behavior of the tricritical point as a function of crystal-field interactions for the honeycomb lattice, and its dependence on the strength of the biquadratic and bilinear exchange interactions in square and cubic lattices, have been studied by using the effective-field theory \cite{RefnewJ6,RefnewJ7}. Recently, an analysis of the spin-1 Ising model including only the bilinear term on tetrahedron recursive lattices with arbitrary values of the coordination number has been performed to find an equation for the exact determination of the critical points and all critical phases \cite{RefnewJ8}. The ${\rm{He}^{3}-\rm{He}^{4}}$ mixture can be described by the spin-1 Ising model, and some features of the mixture, such as the $\lambda$ transition and the phase diagram, are captured by the model \cite{RefJ7}. However, despite their relevant results, the above-mentioned works focus on specific aspects of the critical properties of the spin-1 Ising model. In this paper a systematic investigation of the critical properties exhibited by 1D, 2D (square) and 3D (cubic) systems modeled via the classical spin-1 Ising Hamiltonian for different exchange interactions is performed, overcoming some of the restrictions of the previous studies.
The analytical investigation is carried out by applying the low-temperature series expansion method to the partition function. This is achieved after determining the counts and the Boltzmann weights of the partition function for the full Hamiltonian of the classical spin-1 Ising model on both a square (2D) and a cubic (3D) lattice. The results obtained using the low-temperature series expansion method are compared to the numerical ones determined via Metropolis Monte Carlo simulations. The comparison of the exact Monte Carlo results with the ones derived using the approximate mean-field theory allows us to highlight the limits of the latter approach, which was widely used in the past decades as mentioned above. The most general form of the Hamiltonian of the spin-1 Ising model for a lattice with $N$ spins is \cite{RefB1} \begin{eqnarray}\label{E:OneHamil} {\mathcal{H}_{\rm{spin1}}} =-J\sum_{<ij>}s_{i} s_{j} -K\sum_{<ij>}s_i^{2} s_j^{2}-D\sum_{i=1}^{N}s_{i}^{2} \nonumber \\ -L\sum_{<ij>}(s_{i}^{2} s_{j}+s_{i} s_{j}^{2} ) -H \sum_{i=1}^{N}s_{i}, \end{eqnarray} where $s_{i}=+1$ or $0$ or $-1$, $K$ is the biquadratic coefficient, $D$ is the anisotropy coefficient and $L$ is the bi-cubic coefficient. Note that all coefficients appearing in (\ref {E:OneHamil}) have the dimension of an energy. Sums are extended to the $N$ degrees of freedom, and if the coefficients in (\ref {E:OneHamil}) are chosen to be positive, the ground state of the system corresponds to the configuration in which $s_{i}=+1$ for all $i$. Like the spin-1/2 model, the 1D classical spin-1 Ising model does not exhibit any phase transition at finite temperatures. In order to prove this claim we consider a 1D chain of spins with periodic boundary conditions. The free energy $F$ of a system with entropy $S$ and energy $E$ at temperature $T$ is by definition \cite{RefB3}: \begin{eqnarray}\label{E:FreeEnergy} F=E-TS=E-k_{B}T\ln \Omega, \end{eqnarray} where $k_{B}$ is the Boltzmann constant and $\Omega$ is the number of configurations with energy $E$. Since we assume that all coefficients in (\ref {E:OneHamil}) are non-negative, in the ground state all spin variables are up, i.e. for each site denoted by index $i$ we have $s_{i}=+1$, and the ground state configuration is $C=(+++...+++)$, where $+$ means spin up. Hence, the energy and free energy of this configuration, denoted by $E_{0}$ and $F_{0}$, are \begin{eqnarray}\label{E:EnergyGs} F_{0}=E_{0}=-N(H+D+J+K+2L). \end{eqnarray} Now, by flipping one of the spins to $0$ or $-1$, the system assumes another configuration, which must have higher free energy than the ground state configuration $C$ if long-range order is to survive at $T\neq 0$. We denote these two possible configurations by $C_{1}=(++...++- ++...++)$ and $C_{1}^{'}=(++...++\ 0++...++)$, where $-$ and $0$ mean spin down and spin-less respectively. Counting the two bonds and the one site affected by the flip, the free energies $F_{1}$ and $F_{1}^{'}$ associated with these configurations are \begin{eqnarray}\label{E:EnergyC1} F_{1}=E_{0}+4J+4L+2H-k_{B} T\ln N, \end{eqnarray} \begin{eqnarray}\label{E:EnergyC'1} F_{1}^{'}=E_{0}+2J+2K+4L+D+H-k_{B} T\ln N. \end{eqnarray} In both cases, in the thermodynamic limit and for $T\neq 0$, we get $F_{1},F_{1}^{'}<F_{0}$: the entropic term $-k_{B}T\ln N$ always wins over the finite energy cost of a single flip, so the ordered state is unstable. Thus, the 1D version of the spin-1 Ising model does not exhibit any phase transition at non-zero temperatures.
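The single-flip energy costs above can be checked directly. The following minimal sketch (assuming \texttt{numpy}; the parameter values are purely illustrative) evaluates the Hamiltonian (\ref{E:OneHamil}) on a periodic chain:

\begin{verbatim}
import numpy as np

def energy(s, J, K, D, L, H):
    # Energy (E:OneHamil) of a 1D chain with periodic boundary conditions
    r = np.roll(s, -1)  # right neighbours
    return (-J * np.sum(s * r) - K * np.sum(s**2 * r**2)
            - D * np.sum(s**2) - L * np.sum(s**2 * r + s * r**2)
            - H * np.sum(s))

N = 10
J, K, D, L, H = 1.0, 0.7, 0.5, 0.3, 0.2   # illustrative values
ground = np.ones(N)
E0 = energy(ground, J, K, D, L, H)        # equals -N(J+K+D+2L+H)

c1 = ground.copy();  c1[0] = -1           # configuration C_1
c1p = ground.copy(); c1p[0] = 0           # configuration C_1'
print(energy(c1, J, K, D, L, H) - E0)     # 4J + 4L + 2H = 5.6
print(energy(c1p, J, K, D, L, H) - E0)    # 2J + 2K + 4L + D + H = 5.3
\end{verbatim}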
However, for higher dimensions the system described by the spin-1 Ising model exhibits a phase transition and, due to its enlarged parameter space, a much richer variety of critical behavior with respect to the spin-1/2 counterpart \cite{RefB1}. Since the study of the 2D and 3D spin-1 Ising model in its general form, using either numerical or analytical tools, is very interesting but at the same time very complicated, its critical behavior has only been studied in a few special cases in the literature. In one case just $J$ and $H$ are assumed to be different from zero, and the resulting spin-1 Ising model has been solved by applying the low-temperature series expansion \cite{RefJ8,RefJ9}. In another case the spin-1 Ising model with non-vanishing $J$, $D$, and $K$ has been investigated by using the mean-field approximation \cite{RefJ7}. The latter model has been studied for $K=0$ by other methods like series expansion \cite{RefJ10}, renormalization group theory \cite{RefJ11}, and Monte Carlo simulation \cite{RefJ12,RefJ13}. The paper is organized as follows. In Section 2 we briefly introduce the analytical and numerical methods applied to the spin-1 Ising model. In Section 3 we first study the critical behavior of the classical spin-1 Ising model with nearest-neighbor interactions in 2D square and 3D cubic lattices, comparing some critical features with those of the spin-1/2 Ising model; the last part of Section 3 deals with the long-range spin-1 Ising model and the dependence of the critical temperature on the strength of the long-range interactions. Finally, Section 4 is devoted to the conclusions. \section{Methods} \label{sec:2} In this section we briefly discuss the analytical and numerical approaches we have used to analyze the classical spin-1 Ising model. In the first and second subsections we describe the mean-field theory and the low-temperature series expansion methods as representative analytical methods. The third subsection is a short description of the Metropolis Monte Carlo simulation, a powerful numerical tool that enables us to investigate the critical properties of the system. \subsection{Mean-field theory} \label{sec:2.1} We use a systematic way of deriving the mean-field theory for some Hamiltonian $\mathcal{H}$ in arbitrary dimension and coordination number $z$ (cf. \cite {RefB1} where mean-field theories are studied). We begin from the Bogoliubov inequality \begin{eqnarray}\label{E:Bogo} F\le \Phi=F_{0}+<\mathcal{H}-\mathcal{H}_{0}>_{0}, \end{eqnarray} where $F$ is the true free energy of the system, $\mathcal{H}_{0}$ is a trial Hamiltonian depending on some variational parameters which we will introduce, $F_{0}$ is the corresponding free energy, and $<...>_{0}$ denotes an average taken in the ensemble defined by $\mathcal{H}_{0}$. The mean-field free energy, $F_{\rm{MF}}$, is then defined by minimizing $\Phi$ with respect to the variational parameters. In order to see how this method works let us consider the most general form of the classical spin-1 Ising model given by (\ref {E:OneHamil}). We introduce the trial Hamiltonian as follows \begin{eqnarray}\label{E:TrialH} \mathcal{H}_{0}=-H_{0} \sum_{i=1}^{N}s_{i} -D_{0} \sum_{i=1}^{N}s_{i}^{2}. \end{eqnarray} Hence, there are two variational parameters, $H_0$ and $D_0$, which will be determined by minimizing the functional $\Phi$. Assuming that the lattice is translationally invariant, it is straightforward to find the partition function and the mean-field free energy.
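Concretely, since $\mathcal{H}_{0}$ is a sum of single-site terms, the trial partition function factorizes over the sites, giving \begin{eqnarray} Z_{0}=\left[1+2e^{\beta D_{0}}\cosh (\beta H_{0})\right]^{N},\qquad F_{0}=-Nk_{B}T\ln\left[1+2e^{\beta D_{0}}\cosh (\beta H_{0})\right], \end{eqnarray} \begin{eqnarray} <s_{i}>_{0}=\frac{2e^{\beta D_{0}}\sinh (\beta H_{0})}{1+2e^{\beta D_{0}}\cosh (\beta H_{0})},\qquad <s_{i}^{2}>_{0}=\frac{2e^{\beta D_{0}}\cosh (\beta H_{0})}{1+2e^{\beta D_{0}}\cosh (\beta H_{0})}, \end{eqnarray} and minimizing $\Phi$ with respect to $H_{0}$ and $D_{0}$ fixes $H_{0}=H+JzM+Lz\tau$ and $D_{0}=D+LzM+Kz\tau$, which leads to the self-consistent equations below.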
Consequently one can easily compute the magnetization $M=<s_{i}>_{0}$, defined in dimensionless units as the thermal average of the spin variable, and the thermal average of the square of the spin variable, $\tau=<s_i^2>_0$, as follows \begin{eqnarray}\label{E:gen_M_mf} M=\frac{2e^{\beta(D+LzM+Kz\tau)} \sinh[\beta(JzM+H+Lz\tau)]}{1+2e^{\beta(D+LzM+Kz\tau)} \cosh[\beta(JzM+H+Lz\tau)]}, \nonumber \\ \end{eqnarray} \begin{eqnarray}\label{E:gen_sigma_mf} \tau=\frac{2e^{\beta(D+LzM+Kz\tau)} \cosh[\beta(JzM+H+Lz\tau)]}{1+2e^{\beta(D+LzM+Kz\tau)} \cosh[\beta(JzM+H+Lz\tau)]}. \nonumber \\ \end{eqnarray} Solving the two self-consistent equations (\ref {E:gen_M_mf}) and (\ref {E:gen_sigma_mf}) simultaneously, one can find the mean-field magnetization as a function of the temperature. We remark that, although the use of mean-field theory gives us some valuable information about the behavior of the system, and specifically allows us to reproduce the phase diagram of the model in a relatively simple and qualitative way, from a quantitative point of view the results are generally not exact for 2D and 3D lattices, so that in these cases we need more precise analytical and numerical methods to obtain a quantitatively accurate picture of the critical behavior. \subsection{Low-temperature series expansion method} \label{sec:2.2} Another possible way to investigate the spin-1 Ising model is to use the low-temperature series expansion. The idea is to start from a completely ordered configuration, i.e. the ground state, then flip spins one by one, and take all the configurations into account to compute the partition function $Z$ as follows \cite{RefB3}: \begin{eqnarray}\label{E:PartLTSEformula} Z=e^{-\frac{E_{0}}{k_{B} T}} (1+\sum_{n=1}^{\infty}\Delta Z_{N}^{(n)}), \end{eqnarray} where $E_0$ is the ground state energy, and $\Delta Z_{N}^{(n)}$ is the sum of the Boltzmann factors, with energies measured with respect to the ground state energy, of the configurations in which $n$ spins are flipped starting from the ground state configuration. Two ingredients determine $\Delta Z_{N}^{(n)}$, namely the number of ways of flipping $n$ spins with a specific Boltzmann weight (the counts) and the corresponding Boltzmann weights \cite{RefB3}.
\begin{table} \caption{The counts and the Boltzmann weights contributing to the low-temperature series expansion of the partition function of the square-lattice classical spin-1 Ising model for different numbers of flipped spins ($N_f$).} \label{tab:1} \begin{tabular}{|c|c|c|} \hline $N_f$ & Count & Boltzmann weight \\ \hline $1$ & $N$ & $x^{8}y^{2}z^{8}$ \\ $1$ & $N$ & $x^{4}yz^{8}uw^{4}$ \\ $2$ & $2N$ & $x^{12}y^{4}z^{16}$ \\ $2$ & $4N$ & $x^{10}y^{3}z^{14}uw^{4}$ \\ $2$ & $2N$ & $x^{7}y^{2}z^{14}u^{2}w^{7}$ \\ $2$ & $N(N-5)/2$ & $x^{16}y^{4}z^{16}$ \\ $2$ & $N(N-5)$ & $x^{12}y^{3}z^{16}uw^{4}$ \\ $2$ & $N(N-5)/2$ & $x^{8}y^{2}z^{16}u^{2}w^{8}$ \\ $3$ & $2N$ & $x^{16}y^{6}z^{24}$ \\ $3$ & $4N$ & $x^{14}y^{5}z^{22}uw^{4}$ \\ $3$ & $2N$ & $x^{16}y^{5}z^{20}uw^{4}$ \\ $3$ & $4N$ & $x^{13}y^{4}z^{20}u^{2}w^{7}$ \\ $3$ & $2N$ & $x^{12}y^{4}z^{20}u^{2}w^{8}$ \\ $3$ & $2N$ & $x^{10}y^{3}z^{20}u^{3}w^{10}$ \\ $3$ & $4N$ & $x^{16}y^{6}z^{24}$ \\ $3$ & $8N$ & $x^{14}y^{5}z^{22}uw^{4}$ \\ $3$ & $4N$ & $x^{16}y^{5}z^{20}uw^{4}$ \\ $3$ & $8N$ & $x^{13}y^{4}z^{20}u^{2}w^{7}$ \\ $3$ & $4N$ & $x^{12}y^{4}z^{20}u^{2}w^{8}$ \\ $3$ & $4N$ & $x^{10}y^{3}z^{20}u^{3}w^{10}$ \\ $3$ & $2N(N-8)$ & $x^{20}y^{6}z^{24}$ \\ $3$ & $2N(N-8)$ & $x^{16}y^{5}z^{24}uw^{4}$ \\ $3$ & $4N(N-8)$ & $x^{18}y^{5}z^{22}uw^{4}$ \\ $3$ & $2N(N-8)$ & $x^{15}y^{4}z^{22}u^{2}w^{7}$ \\ $3$ & $4N(N-8)$ & $x^{14}y^{4}z^{22}u^{2}w^{8}$ \\ $3$ & $2N(N-8)$ & $x^{11}y^{3}z^{22}u^{3}w^{11}$ \\ $3$ & $N(N^{2}-15N+62)/6$ & $x^{24}y^{6}z^{24}$ \\ $3$ & $N(N^{2}-15N+62)/2$ & $x^{20}y^{5}z^{24}uw^{4}$ \\ $3$ & $N(N^{2}-15N+62)/2$ & $x^{16}y^{4}z^{24}u^{2}w^{8}$ \\ $3$ & $N(N^{2}-15N+62)/6$ & $x^{12}y^{3}z^{24}u^{3}w^{12}$ \\ $4$ & $N$ & $x^{12}y^{4}z^{24}u^{4}w^{12}$ \\ $4$ & $18N$ & $x^{13}y^{4}z^{26}u^{4}w^{13}$ \\ \hline \end{tabular}\\ \end{table} \normalsize We consider the spin-1 Ising model described by the Hamiltonian (\ref {E:OneHamil}) for a square lattice as an example. For the sake of simplicity, we introduce the following parameters \begin{eqnarray}\label{E:Variables} x=e^{-\beta J},y=e^{-\beta H},z=e^{-\beta L},u=e^{-\beta D},w=e^{-\beta K}, \nonumber \\ \end{eqnarray} where $\beta =\frac{1}{k_{B}T}$. Then we need to calculate the counts for the different configurations. The computation of the counts is more complicated than for the spin-1/2 Ising model, because in the spin-1 Ising model each spin can choose among three possible states. Table \ref {tab:1} shows the Boltzmann weights and their counts for some configurations in which a few spins are flipped.
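The Boltzmann weights in Table \ref{tab:1} can be generated mechanically: writing $E-E_{0}=a_{J}J+a_{H}H+a_{L}L+a_{D}D+a_{K}K$ for a configuration of flipped spins, its weight is $x^{a_{J}}y^{a_{H}}z^{a_{L}}u^{a_{D}}w^{a_{K}}$. A minimal sketch extracting these exponents (assuming \texttt{numpy}; lattice size and flipped sites are illustrative) reads:

\begin{verbatim}
import numpy as np

Lx = 6  # linear size of the periodic square lattice (illustrative)

def exponents(flips):
    # Return (aJ, aH, aL, aD, aK) such that the Boltzmann weight of the
    # configuration obtained from the all-up ground state by the flips
    # {(i, j): new_value} is x^aJ y^aH z^aL u^aD w^aK.
    s = np.ones((Lx, Lx), dtype=int)
    for (i, j), v in flips.items():
        s[i, j] = v

    def terms(c):
        r = np.roll(c, -1, axis=1)  # right neighbours
        d = np.roll(c, -1, axis=0)  # down neighbours
        return np.array([-np.sum(c*r + c*d),                         # J
                         -np.sum(c),                                 # H
                         -np.sum(c**2*r + c*r**2 + c**2*d + c*d**2), # L
                         -np.sum(c**2),                              # D
                         -np.sum(c**2*r**2 + c**2*d**2)])            # K

    return terms(s) - terms(np.ones((Lx, Lx), dtype=int))

print(exponents({(0, 0): -1}))              # [ 8  2  8  0  0]
print(exponents({(0, 0): 0}))               # [ 4  1  8  1  4]
print(exponents({(0, 0): -1, (0, 1): -1}))  # [12  4 16  0  0]
\end{verbatim}

The three printed exponent vectors reproduce the first three rows of Table \ref{tab:1}.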
Starting from these calculations summarized in Table \ref {tab:1} we finally obtain the low-temperature series expansion of the partition function for the spin-1 Ising model: \begin{align} Z_{\rm{spin1}}^{\rm{square}} = & e^{-\beta E_{0}} \bigg(1+Nx^{4} yz^{8} uw^{4}+2Nx^{7} y^{2} z^{14} u^{2} w^{7}+ \nonumber\\ & Nx^{8} y^{2} z^{8} -\frac {5N}{2} x^{8} y^{2} z^{16} u^{2} w^{8}+ 4Nx^{10}y^{3} z^{14} uw^{4}+ \nonumber \\ &6Nx^{10} y^{3} z^{20} u^{3} w^{10} -16Nx^{11} y^{3} z^{22} u^{3} w^{11}+ \nonumber \\&2Nx^{12} y^{4} z^{16}-5Nx^{12} y^{3} z^{16} uw^{4} +\nonumber \\&6Nx^{12} y^{4} z^{20} u^{2} w^{8} +\frac {31N}{3} x^{12} y^{3} z^{24} u^{3} w^{12}+\nonumber \\&Nx^{12} y^{4} z^{24} u^{4} w^{12} +12Nx^{13} y^{4} z^{20}u^{2} w^{7}+\nonumber \\&18Nx^{13} y^{4} z^{26} u^{4} w^{13}+O(x^{14})\bigg) \label{E:PartLTSE} \end{align} We will follow the same procedure by finding the Boltzmann weights and the associated counts for the 3D cubic lattice in one of the special cases outlined in Section 3. \subsection{Metropolis Monte Carlo method} \label{sec:2.3} The Monte Carlo method is a powerful numerical tool widely used to evaluate discrete spin models like the Ising model and to investigate the behavior of the associated thermodynamic functions. It is also very popular for studying continuous spin systems like the XY and Heisenberg models, fluids, polymers, disordered materials, and lattice gauge theories (cf. \cite {RefB4} where Monte Carlo methods are studied). In this paper we use the Metropolis Monte Carlo simulation. The algorithm can be summarized in three steps: 1. Set-up of the lattice sites. To do so, for example in the case of a 2D square lattice, we define a 2D array with $l_{x} \times l_{x}$ spins, which is called spin$[i][j]$, where $i$ and $j$ determine a specific lattice site. 2. Initialization of the system. We use a function named init $(l_x$, $J$, $K$, $D$, $L$, $H)$ to set the initial state of the system, which in principle can be chosen arbitrarily. In the simulations we choose a completely ordered state as the initial state, with maximum magnetization. In this function, $l_{x}$ is the number of spins in each direction, and $J$, $K$, $D$, $L$, and $H$ are the values of the coefficients in the Hamiltonian of the spin-1 Ising model given by equation (\ref {E:OneHamil}). 3. Use of a main loop in the main program to update the system many times. The function mc$(T)$ takes the temperature $T$ and performs a Metropolis update. This function first randomly chooses one of the spins in the lattice and proposes a flip. For instance, if the randomly chosen spin is $+1$, it is flipped to $0$ or $-1$ with the same probability. The probability that the system is allowed to move from the initial state to the final state is $$ P(\mathrm{initial} \to \mathrm{final})= \begin{cases} 1, \ \ \text{if} \ \ E_{\mathrm{final}}<E_{\mathrm{initial}}\\ e ^{ ({-\beta (E_{\mathrm{final}}-E_{\mathrm{initial}}}))}, \ \ \ \text{otherwise} \\ \end{cases} $$ By updating the system a sufficient number of times, it eventually reaches the equilibrium state at any temperature. Finally, it is possible to determine thermodynamic functions such as the magnetization and susceptibility using the following formulas \cite {RefB2}: \begin{eqnarray}\label{E:MagMC} M=\frac {1}{N} \sum_{i=1}^{N}s_{i}, \end{eqnarray} \begin{eqnarray}\label{E:SuscMC} \chi=\frac {1}{k_{B} T} (<M^{2}>-<M>^2 ). \end{eqnarray}
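The update just described can be condensed into a few lines. The sketch below (assuming \texttt{numpy}; the function name, lattice size, number of sweeps and temperature are our own illustrative choices, with $k_B=1$) is not the code used for the figures, but it implements the same Metropolis step:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spin, T, J=1.0, K=0.0, D=0.0, L=0.0, H=0.0):
    # One sweep of single-spin Metropolis updates for the spin-1 model
    # (E:OneHamil) on a periodic square lattice; entries of `spin` are
    # -1, 0 or +1, and units are chosen so that k_B = 1.
    n = spin.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        old = spin[i, j]
        new = rng.choice([s for s in (-1, 0, 1) if s != old])
        nbrs = (spin[(i + 1) % n, j], spin[(i - 1) % n, j],
                spin[i, (j + 1) % n], spin[i, (j - 1) % n])
        dE = -D * (new**2 - old**2) - H * (new - old)
        for nb in nbrs:
            dE += (-J * (new - old) * nb
                   - K * (new**2 - old**2) * nb**2
                   - L * ((new**2 - old**2) * nb + (new - old) * nb**2))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spin[i, j] = new

spin = np.ones((20, 20), dtype=int)   # completely ordered initial state
for _ in range(500):                  # equilibration sweeps
    metropolis_sweep(spin, T=1.0)
print(spin.mean())                    # magnetization, Eq. (E:MagMC)
\end{verbatim}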
\section{Results and discussion: special cases of the spin-1 Ising Hamiltonian for 2D square and 3D cubic lattices} \label{sec:3} Owing to the previous arguments, in principle we know how to calculate the partition function associated with the Hamiltonian (\ref {E:OneHamil}) of the spin-1 Ising model, and consequently we can characterize the model thermodynamically. However, since the general form of the Hamiltonian is very complex and the number of parameters appearing in the parameter space is high, it is not possible to study analytically and/or numerically in an exact way the critical behavior of the full Hamiltonian. It is thus useful to understand better the critical behavior of the spin-1 Ising model by focusing our attention on some special cases with a lower number of parameters that are numerically solvable. These cases will be discussed in the following subsections. In the last subsection we will introduce the long-range spin-1 Ising model, in analogy with the well-known spin-1/2 model, to find out how its critical temperature depends on the magnitude of the long-range interaction. \subsection{Case 1} \label{sec:3.1} In this subsection, we restrict ourselves to the specific case in which all the coefficients in (\ref {E:OneHamil}) are zero except $J$ and $H$, so that the Hamiltonian is given by \begin{eqnarray}\label{E:Hamil1} {\mathcal{H}_{\rm{Ising1}}^{(1)}} =-J\sum_{<ij>}s_{i} s_{j} -H\sum_{i=1}^{N}s_{i}. \end{eqnarray} In this special case, the series expansion of the partition function for a 2D square lattice is obtained by setting $D$, $K$, and $L$ to zero in (\ref {E:PartLTSE}). Let us write down the free energy $F$, the magnetization $M$, and the susceptibility $\chi$ as follows \cite {RefB3} \begin{eqnarray}\label{E:Fener} F=-k_{B}T\ln Z, \end{eqnarray} \begin{eqnarray}\label{E:Mag} M=-\frac {1}{N}\lim_{H \to 0} \left (\frac {\partial F}{\partial H} \right) _{T}, \end{eqnarray} \begin{eqnarray}\label{E:Susc} \chi=\frac {1}{N\beta}\lim_{H \to 0} \left (\frac {\partial^2 \ln Z}{\partial H^2} \right) _{T}. \end{eqnarray} Now we can write down the series expansions of $M$ and $\chi$ for the square lattice: \begin{align}\label{E:MagLSR} M_{\rm{square}}^{(1)}=&1-x^{4}-4x^{7}+3x^{8}-30x^{10}+48x^{11}-52x^{12} \nonumber \\ &-120x^{13}+O(x^{14} ), \end{align} \begin{align}\label{E:SuscLSR} \chi_{\rm{square}}^{(1)}=&\beta(x^{4}+8x^{7}-6x^{8}+90x^{10}-144x^{11}+192x^{12} \nonumber \\ &+480x^{13}+O(x^{14} )). \end{align} Likewise, one can find the corresponding expressions for a 3D lattice. For simplicity we take a simple cubic lattice with each site having $z=6$ nearest neighbors.
Since the procedure is similar to what we have done for the square lattice, we only write down the final expressions of the above quantities as follows: \begin{align}\label{E:PqrtCub1} Z^{(1)}_{\rm{cube}}&= e^{-\frac {E_0}{k_B T}} \bigg[1+Nx^6 y+3Nx^{11} y^2 \nonumber\\ &+(\frac {N(N-7)}{2}+N) x^{12} y^2 +21Nx^{16} y^3\nonumber\\ &+3N(N-12) x^{17} y^3 \nonumber\\ &+\bigg(N(N-7)+ \frac {N(N^2-3N+2)}{6} \nonumber \\ &-3N(N-12)-15N\bigg) x^{18} y^3 \nonumber\\ &+21Nx^{20} y^4+77Nx^{21} y^4 \nonumber \\ & +\bigg(3N(N-17)+12N(N-16) \nonumber \\ &+\frac{3N(N-17)}{2} +3N(N-20)+6N(N-12)\bigg) x^{22} y^4 \nonumber \\ & +O(x^{23} )\bigg], \end{align} \begin{align}\label{E:MagCub1} M_{\rm{cube}}^{(1)}=&1-x^6-6x^{11}+5x^{12}-63x^{16}+108x^{17}-43x^{18}\nonumber\\ &-84x^{20}-308x^{21} +1602x^{22}+O(x^{23} ), \end{align} \begin{align}\label{E:SuscCub1} \chi_{\rm{cube}}^{(1)}=&\beta[x^6+12x^{11}-10x^{12}+189x^{16}-324x^{17}+129x^{18}\nonumber\\&+336x^{20} +1232x^{21}-6408x^{22}+O(x^{23} )]. \end{align} Equations (\ref {E:MagLSR}) and (\ref {E:SuscLSR}), obtained from the combinatorial counts, are in agreement with the results found in \cite {RefJ8} using the finite lattice method for a 2D square lattice. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{M-1.jpg} \caption{\label{fig:MT-1}Magnetization $M$ vs. reduced temperature $t =\frac {k_{B}T}{J}$ obtained by using Metropolis Monte Carlo simulation for a square lattice with $40\times40$ sites for Case 1. The error bars are smaller than the size of the markers.} \end{center} \end{figure} As shown by Enting, Guttmann and Jensen in \cite {RefJ8}, at least the first 60 terms of the low-temperature series expansion of the thermodynamic functions are needed to have a physically consistent result. The critical temperature has been finally approximated as follows \begin{eqnarray}\label{E:CriticalTemp1} \exp({-\frac {J}{k_{B}T_{c, SE}^{(1, \rm{square})}}})=x_{c, SE}^{(1, \rm{square})}=0.554075\pm 0.000015. \nonumber \\ \end{eqnarray} Also, the calculation of the critical exponents $\beta$ and $\gamma$, associated to $M$ and $\chi$ respectively, leads to the conclusion that the model belongs to the same universality class as the spin-1/2 Ising model. Now, we apply the Metropolis Monte Carlo simulation to a 2D square lattice and examine these conclusions. Figure \ref{fig:MT-1} shows the magnetization versus the reduced temperature $t =\frac {k_{B}T}{J}$ for a square lattice with $40\times40$ spins. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{TC-1.jpg} \caption{Binder cumulant vs. $t =\frac {k_{B}T}{J}$ for lattices of different sizes for Case 1. The error bars are smaller than the size of the markers.} \label{fig:TC-1} \end{center} \end{figure} It shows that in the high-temperature regime the system is in a disordered phase and the magnetization is zero; at a reduced temperature $t=\frac{k_BT}{J}\simeq1.7$ a critical phase transition occurs and the system evolves towards the ordered phase. Finally, at very low temperatures the magnetization is close to one, as expected. The Binder cumulant, defined as \begin{eqnarray}\label{E:BC} U=1-\frac{<M^4 >}{3<M^2 >^2 }, \end{eqnarray} is a practical tool to estimate critical points. It turns out that the intersection of the $U-T$ curves for lattices with different numbers of sites gives the critical temperature of the lattice with good accuracy \cite {RefB4,RefJ20}. Figure \ref{fig:TC-1} shows how we can use the Binder cumulant to determine the critical temperature considering three lattices of different sizes.
In this figure the Binder cumulants for three 2D square lattices with $5\times5$, $10\times10$, and $15\times15$ sites are displayed. The intersection of the curves corresponds to the critical point given by \begin{eqnarray}\label{E:TC1} \frac {k_{B}T_{c,\rm{MC}}^{(1,\rm{square})}}{J}=1.70\pm0.01. \end{eqnarray} This result is in accordance with the critical temperature found from the series expansion method, given by (\ref{E:CriticalTemp1}). In order to complete our discussion of this specific case we shall find some of the critical exponents of the model, using the data we have obtained from the simulations, by the so-called finite lattice (finite-size scaling) method \cite {RefB2}. Let us first evaluate the $\beta$ critical exponent. In order to calculate the $\beta$ exponent what is usually done is to plot $y=\ln M$ vs. $x=\ln l_{x}$ at the critical temperature. The slope of this graph gives the $\beta$ exponent (more precisely, its magnitude is $\beta/\nu$, which equals $\beta$ here since $\nu=1$ for the 2D Ising universality class). Likewise the slope of $y=\ln \chi$ vs. $x=\ln l_{x}$ gives the $\gamma$ exponent. We eventually find \begin{eqnarray}\label{E:Beta1} \beta = 0.13\pm0.01, \end{eqnarray} \begin{eqnarray}\label{E:gamma1} \gamma = 1.78\pm0.05. \end{eqnarray} These observations suggest that the spin-1 Ising model governed by the Hamiltonian (\ref {E:Hamil1}) belongs to the same universality class as the spin-1/2 Ising model, in agreement with the series expansion method. In order to have an approximation of the critical temperature for a 3D cubic lattice we use again the Binder cumulant. The critical temperature is: \begin{eqnarray}\label{E:TC1-cube} \frac {k_{B}T_{c,\rm{MC}}^{(1,\rm{cube})}}{J}=3.2\pm0.1 . \end{eqnarray} \subsection{Case 2} \label{sec:3.2} In this subsection, we consider the Hamiltonian of the spin-1 Ising model given by (\ref {E:OneHamil}) and we assume $J=L=0$. Therefore, the Hamiltonian of the system is: \begin{eqnarray}\label{E:Hamil2} {\mathcal{H}_{\rm{Ising1}}^{(2)}} =-K\sum_{<ij>}s_{i}^{2} s_{j}^{2} -D\sum_{i=1}^{N}s_{i}^{2} -H\sum_{i=1}^{N}s_{i}, \end{eqnarray} where $s_{i}=0$, $+1$, or $-1$. This Hamiltonian in the absence of an external magnetic field has been studied by Griffiths \cite {RefJ15}, who showed that the statistical mechanics of the spin-1 Ising model can be reduced to that of the spin-1/2 Ising model. Although it would be possible in principle to solve this model with $H\ne0$ using the series expansion method, in this section we outline a very simple analytical solution not directly obtained with series expansion, and we compare it with the results derived by means of Metropolis Monte Carlo simulations. We will prove that, by applying a simple transformation, the spin-1 Ising model is reduced to the spin-1/2 Ising model with a constant exchange but a temperature-dependent external field \cite{wu1978phase}. Before starting our discussion of the Hamiltonian (\ref {E:Hamil2}) we make a very simple consideration. If we assume that $K=0$ the system does not exhibit any critical behavior, because of the lack of a collective term in the Hamiltonian. Therefore, all the arguments in this subsection are valid only for $K\ne0$, which provides the collective behavior in this case. We consider a lattice of arbitrary dimension and coordination number $z$ described by the Hamiltonian (\ref {E:Hamil2}).
The partition function $Z$ is defined by \cite {RefB1} \begin{eqnarray}\label{E:PartDef} Z_{\rm{spin1}}=\sum_{s=0,+1,-1}e^{-\beta \mathcal{H}}. \end{eqnarray} Substituting (\ref {E:Hamil2}) in (\ref {E:PartDef}), summing out the signs of the sites with $s_{i}^{2}=1$ (each such site yields a factor $2\cosh \beta H$), and using the transformation $t_{i}=2s_{i}^{2}-1$, i.e. $s_{i}^{2}=(1+t_{i})/2$, together with the identity $\sum_{<ij>}(t_{i}+t_{j})=z\sum_{i=1}^{N}t_{i}$, we get \begin{eqnarray}\label{E:Part2} Z_{\rm{spin1}}^{(2)}=e^{N\beta C} \sum_{t_i=+1,-1}e^{R\sum_{i=1}^{N}t_{i} +Q\sum_{<ij>}t_{i} t_{j} }, \end{eqnarray} where \begin{eqnarray}\label{E:CDef} C=\frac {1}{2\beta} \ln(2\cosh \beta H)+\frac{D}{2}+\frac{Kz}{8}, \end{eqnarray} \begin{eqnarray}\label{E:RDef} R=\frac{1}{2} \ln(2\cosh \beta H)+\frac {\beta D}{2}+\frac {\beta Kz}{4}, \end{eqnarray} \begin{eqnarray}\label{E:QDef} Q=\frac {\beta K}{4}. \end{eqnarray} Thus, according to equation (\ref {E:Part2}), the partition function of the spin-1 Ising model given by the Hamiltonian (\ref {E:Hamil2}) is reduced, by an appropriate transformation, to that of the spin-1/2 Ising model with temperature-dependent external field $\frac {R}{\beta}$ and temperature-independent exchange interaction $\frac {Q}{\beta}$, times an exponential factor: \begin{eqnarray}\label{E:PartitionDef} Z_{\rm{spin1}}^{(2)}=e^{N\beta C}\times Z_{\rm{spin1/2}} (\frac {R}{\beta},\frac {Q}{\beta}). \end{eqnarray} The Hamiltonian of the equivalent spin-1/2 Ising model with external field $\frac {R}{\beta}$ and exchange coefficient $\frac {Q}{\beta}$ is \begin{eqnarray}\label{E:eqHamil} \mathcal{H}_{\rm{spin1/2}} =-\frac {R}{\beta} \sum_{i=1}^{N}t_{i} -\frac {Q}{\beta} \sum_{<ij>}t_{i} t_{j}. \end{eqnarray} So (\ref{E:Part2}) and (\ref {E:eqHamil}) show that the spin-1 model with Hamiltonian (\ref {E:Hamil2}), at temperatures less than the critical temperature, exhibits a first-order phase transition when crossing the following surface: \begin{eqnarray}\label{E:PhTr2} \frac {R}{\beta}=0. \end{eqnarray} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Phase-2.pdf} \caption{The surface $\frac {R}{\beta }=0$ in the $(T, D, H)$ space for $K=1$ in two views. We assume $k_{B}=1$ for simplicity.} \label{fig:Phase-2} \end{center} \end{figure} Figure \ref {fig:Phase-2} displays the phase transition surface of the spin-1 Ising model governed by the Hamiltonian (\ref {E:Hamil2}) for $K=1$. So if we fix the temperature to a low enough value, less than the critical temperature, and change the external field $H$ in a wide enough range for appropriate values of $D$, there are two first-order phase transitions. Furthermore, it is clear that there is a maximum value of $D$ up to which the phase transition is possible; one can find this maximum value in the limit $T\to0$, $H\to0$: \begin{eqnarray}\label{E:Dmax} D_{\rm{max}}=-\frac {Kz}{2}. \end{eqnarray} Since we know the exact critical temperature of the spin-1/2 Ising model \cite {RefJ4} for the 2D square lattice, we can easily find the critical temperature of Case 2 for such a lattice. The critical temperature of the 2D square lattice spin-1/2 Ising model with Hamiltonian (\ref {E:eqHamil}), or equivalently of the spin-1 Ising model described by (\ref {E:Hamil2}), is \begin{eqnarray}\label{E:TC2} T_{c}^{(2,\rm{square})}=\frac {K}{2k_{B}\ln(1+\sqrt{2})} \end{eqnarray} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{M-H2.jpg} \caption{$M-H$ curves for Case 2 with $D=-2.8$ and $K=1$ at different temperatures: $k_BT=0.5$, $k_BT=0.6$, $k_BT=0.7$, $k_BT=1.5$, for a $40\times40$ square lattice obtained from Metropolis Monte Carlo simulation.
The error bars are smaller than the size of the markers.} \label{fig:MH-2} \end{center} \end{figure} We now perform Metropolis Monte Carlo simulations for the 2D square lattice to compare the simulation results with the analytical calculation, with special regard to the critical temperature. We set $K=1$, use a suitable value for $D$, and change $H$ from $+1$ to $-1$ for different values of $T$. In Figure \ref {fig:MH-2} the results for a $40\times40$ 2D lattice are shown. The magnetization $M$ is plotted versus the external field $H$ for different values of the temperature. As can be seen, for very low $T$ there are two jumps in the curve. One jump corresponds to a positive value of $H$, at which the magnetization jumps from $1$ to zero. The other jump occurs at a negative value of $H$, where the magnetization has a sudden variation from zero to $-1$. This result is conceptually in accordance with our analytical solution. In fact, we have two first-order phase transitions corresponding to the two values of $H$ where the coefficient $\frac {R}{\beta}$ is zero. Moreover, the critical temperature can be estimated by plotting the susceptibility versus temperature; it turns out that \begin{eqnarray}\label{E:MCTC2} \frac{k_BT_{c,\rm{MC}}^{(2,\rm{square})}}{K}=0.57\pm0.01 \end{eqnarray} which agrees with the analytical critical temperature given by (\ref {E:TC2}). The qualitative behavior of the 3D lattice system is very similar to that of the square lattice, as we expect, and the critical phase transition occurs at \begin{eqnarray}\label{E:MCTC2-cube} \frac{k_BT_{c,\rm{MC}}^{(2,\rm{cube})}}{K}=1.20\pm0.1. \end{eqnarray} \subsection{Case 3} \label{sec:3.3} In this section we consider the Hamiltonian (\ref {E:OneHamil}) with $K=0$: \begin{align}\label{E:Hamil3} {\mathcal{H}_{\rm{Ising1}}^{(3)}} =&-J\sum_{<ij>}s_{i} s_{j} -D\sum_{i=1}^{N}s_{i}^{2} -L\sum_{<ij>}(s_{i}^{2} s_{j}+s_{i} s_{j}^{2} ) \nonumber \\ &-H \sum_{i=1}^{N}s_{i}, \end{align} and we apply the mean-field approximation to obtain some physical quantities characterizing the model. More specifically, we get $F_{\rm{MF}}$ and, as a result, two self-consistent equations for the mean-field magnetization $M$ and the thermal average of the square of the spin variable $\tau$: \begin{align}\label{E:F_mf} F_{\rm{MF}}^{(3)}=&-Nk_B T \times \nonumber \\ &\ln{\bigg \{1+2e^{\beta(D+LzM)} \cosh[\beta(zJM+H+Lz\tau)] \bigg \}} \nonumber \\ & +\frac {NJz}{2} M^2+NLzM\tau, \end{align} \begin{eqnarray}\label{E:M_mf} M=\frac{2e^{\beta(D+LzM)} \sinh[\beta(JzM+H+Lz\tau)]}{1+2e^{\beta(D+LzM)} \cosh[\beta(JzM+H+Lz\tau)]}, \nonumber \\ \end{eqnarray} \begin{eqnarray}\label{E:sigma_mf} \tau=\frac{2e^{\beta(D+LzM)} \cosh[\beta(JzM+H+Lz\tau)]}{1+2e^{\beta(D+LzM)} \cosh[\beta(JzM+H+Lz\tau)]}. \nonumber \\ \end{eqnarray} One usual way to solve a system of nonlinear and transcendental algebraic equations like (\ref {E:M_mf}) and (\ref {E:sigma_mf}) is Newton's method \cite {RefB5}. Equations (\ref {E:M_mf}) and (\ref {E:sigma_mf}) imply that no phase transitions are predicted by the mean-field theory for nonzero $L$. For instance, let us assume that in the Hamiltonian (\ref {E:Hamil3}) $L$ is positive and all the other coefficients vanish. When $T$ decreases, the system tends towards a configuration in which all the spins are up; for negative $L$ the system instead tends to a configuration with all spins down. Hence, $L$ plays a role similar to $H$. Furthermore, as should be expected, at high temperatures the system is in the disordered phase.
In other words, the numbers of spin-up, spin-down and spin-less sites are equal, and consequently the magnetization is very small and $\tau$ is about $2/3\approx0.67$. On the other hand, at very low temperatures we have a completely ordered lattice with all sites spin up or down, i.e. $|M|\approx1$. We emphasize that, for $L=0$, the model reduces to the Blume-Capel model \cite {RefJ13}, and the corresponding mean-field expressions for the free energy, magnetization, and $\tau$ can be simply found from (\ref {E:F_mf}), (\ref {E:M_mf}) and (\ref {E:sigma_mf}). In particular, for $H=0$ we get \begin{eqnarray}\label{E:MBC_mf} M=\frac {2e^{\beta D} \sinh{[\beta zJM]}}{1+2e^{\beta D} \cosh{[\beta zJM]}}. \end{eqnarray} Equation (\ref {E:MBC_mf}) enables us to calculate the mean-field approximation for the transition temperature $T_0$; it is enough to expand the expression on the right-hand side of (\ref {E:MBC_mf}) up to first order in $M$. As $ M \to 0$ we have $ \sinh [\beta zJM] \to \beta zJM$ and $ \cosh [\beta zJM] \to 1 $, thus \begin{eqnarray}\label{E:T0_mf} zJ\beta_0=1+\frac {1}{2} e^{-\beta_0 D}, \end{eqnarray} where $\beta _0= \frac {1}{k_BT_0} $. Equation (\ref {E:T0_mf}) determines the curve of the phase transition in the phase diagram of the Blume-Capel model in the $H=0$ plane; however, up to now we do not know the type of phase transition occurring at $ T_0 $. Using (\ref {E:M_mf}) we write down the first few terms of the series expansion of the zero-external-field free energy around $M=0$: \begin{eqnarray}\label{E:Fexp_mf} F_{\rm{MF}} (H=0)\simeq a_0+a_2 M^2+a_4 M^4, \end{eqnarray} with \begin{eqnarray}\label{E:a0_mf} a_0=-Nk_B T \ln{(1+2e^{\beta D})}, \end{eqnarray} \begin{eqnarray}\label{E:a2_mf} a_2=\frac {zJ}{2} [1-\beta zJ \frac {2e^{\beta D}}{1+2e^{\beta D}}], \end{eqnarray} \begin{eqnarray}\label{E:a4_mf} a_4=\frac {zJ}{24} {(\beta zJ)}^{3} \frac {2e^{\beta D}}{1+2e^{\beta D}} [\frac {6e^{\beta D}}{1+2e^{\beta D} }-1]. \end{eqnarray} The essential condition for having a critical phase transition according to Landau theory is \cite {RefB1} \begin{eqnarray}\label{E:Lnd_2nd} a_2=0 ,\ a_4>0. \end{eqnarray} Hence, to have a critical phase transition we need, according to mean-field theory, \begin{eqnarray}\label{E:D_2nd} D>-zJ \frac {\ln{4}}{3}. \end{eqnarray} In addition, at the tricritical ($\rm{tc}$) point, where the three phases predicted by the classical spin-1 Ising model become critical simultaneously, we must have \begin{eqnarray}\label{Lnd_Tri} a_2=0 ,\ a_4=0. \end{eqnarray} Then, the $\rm{tc}$ point is determined by \begin{eqnarray}\label{Tri_mf} \beta_{\rm{tc},\rm{MF}}^{(3)} zJ=3 ,\ D_{\rm{tc},\rm{MF}}^{(3)}=-zJ \frac {\ln{4}}{3}. \end{eqnarray} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{MCBC3.jpg} \caption{Absolute magnetization $M$ vs. reduced temperature $t=k_BT/J$ for different values of $D$ for a 2D square lattice with $20\times20$ sites described by the Blume-Capel Hamiltonian in zero external field according to Metropolis Monte Carlo simulation for $D/J=1.00$, $D/J=-1.00$, $D/J=-1.96$, $D/J=-1.97$ and $D/J=-2.10$. The critical phase transition becomes first-order at $D_{\rm{tc,MC}}^{(3, \rm{square})}/J=-1.96\pm0.01$, and $t_{\rm{tc,MC}}^{(3, \rm{square})}=0.64\pm0.01$. The error bars are smaller than the size of the markers.\label{fig:MCBC3}} \end{center} \end{figure} The mean-field solution is in general not exact but only approximate, because it neglects the effect of dimensionality.
The results of mean-field calculations become more precise when the dimensionality of the system becomes larger, and the mean-field predictions, like for example the critical exponents, become exact if the dimensionality of the system is equal to or higher than the upper critical dimension $d_{\rm{up}}$, which is given by \cite {RefJ18_new} \begin{eqnarray}\label{E:Up-D} d_{\rm{up}}=\frac{(\gamma+2\beta)}{\nu}. \end{eqnarray} Mean-field theory usually gives good predictions for the phase diagrams of 3D systems \cite {RefB1}. In this respect, an interesting example is a 3D system like a cubic lattice at the $\rm {tc}$ point. At the $\rm {tc}$ point we have the following critical exponents \cite {RefJ18_new} \begin{eqnarray}\label{E:Crit-t} \beta_ {\rm {tc}}=\frac{1}{4},\ \nu_ { \rm{tc}}=\frac{1}{2},\ \gamma_ {\rm{tc}}=1, \end{eqnarray} and (\ref {E:Up-D}) and (\ref {E:Crit-t}) lead to \begin{eqnarray}\label{E:Dup-value} d_{\rm{up}}=3. \end{eqnarray} It means that at the $\rm {tc}$ point, mean-field theory provides a very good description of the model in 3D lattices in terms of critical exponents. So for the 3D spin-1 Ising model, which is described by the Hamiltonian (\ref {E:Hamil3}), around the $\rm{tc}$ point, for zero external field and $L=0$, the critical exponents are the ones given by (\ref {E:Crit-t}). Now we use the Metropolis Monte Carlo technique to investigate the behavior of 2D square and 3D cubic lattices defined by the Hamiltonian (\ref {E:Hamil3}) and compare the results with those of mean-field theory. Figure \ref {fig:MCBC3} shows the absolute magnetization versus the reduced temperature for different values of $D$, obtained by Metropolis Monte Carlo simulation for a 2D square lattice. It agrees qualitatively with mean-field theory but obviously leads to different values for the $\rm{tc}$ point: \begin{align}\label{Tri_MC} \frac {D_{\rm{tc},\rm{MC}}^{(3,\rm{square})}}{J}&=-1.96\pm0.01,\nonumber \\ \frac {k_BT_{\rm{tc},\rm{MC}}^{(3,\rm{square})}}{J}&=0.64\pm0.01. \end{align} We have already seen that the mean-field approximation suggests that the spin-1 Ising model governed by (\ref {E:Hamil3}) does not exhibit any phase transitions when $L$ is nonzero. Interestingly, Metropolis Monte Carlo simulation, which is a more accurate method, confirms this result. For instance, Figure \ref {fig:ML-3} shows the absence of a phase transition for a 2D square lattice with $20\times20$ spins in the case in which only $L$ is non-zero. For a 3D cubic lattice the system behavior is similar, and the $\rm{tc}$ point is given by \begin{align}\label{Tri_MC-cube} \frac {D_{\rm{tc},\rm{MC}}^{(3,\rm{cube})}}{J}&=-2.86\pm0.01,\nonumber \\ \frac {k_BT_{\rm{tc},\rm{MC}}^{(3,\rm{cube})}}{J}&=1.4\pm0.1. \end{align} \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{ML-3.jpg} \caption{Magnetization as a function of $L$ obtained from Metropolis Monte Carlo simulation for the spin-1 Ising model defined by the Hamiltonian (\ref {E:Hamil3}) for a $20\times20$ square lattice with $H=D=J=0$, for $k_BT/L=1.0$, $k_BT/L=2.0$, and $k_BT/L=10.0$, which indicates that no phase transition takes place. Error bars are smaller than the size of the markers.} \label{fig:ML-3} \end{center} \end{figure} \subsection{Case 4} \label{sec:3.4} As another specific case we consider the following spin-1 Ising model \begin{eqnarray}\label{BEG} {\mathcal{H}_{\rm{Ising1}}^{(4)}} =-J\sum_{<ij>}s_i s_j -K\sum_{<ij>}s_i^2 s_j^2 -D\sum_{i=1}^{N}s_i^2.
\nonumber \\ \end{eqnarray} First note that this Hamiltonian is equivalent to the Ising spin-1/2 lattice gas with the following Hamiltonian: \begin{eqnarray}\label{Lgas} \mathcal{H}_{\rm{lg}} =-J\sum_{<ij>}s_i s_j t_i t_j -K\sum_{<ij>}t_i t_j -D\sum_{i=1}^{N}t_i, \nonumber \\ \end{eqnarray} where $s_i=\pm1$ and $t_i=0,1$, and the subscript $lg$ denotes lattice gas. Before we prove this equivalence let us briefly discuss the Hamiltonian (\ref {Lgas}). For simplicity, we assume that we have a 2D square lattice with $N$ sites. According to the spin-1/2 Ising lattice gas model, each site can be occupied by a particle or it can be a vacancy. If site $i$ is occupied, the variable $t_i$ is one, otherwise it is zero. In the Hamiltonian (\ref {Lgas}) the first term, proportional to $J$, expresses an exchange interaction between two neighboring sites if and only if both are occupied, and the amount and sign of this interaction depend on the spin variables of these two occupied sites. The second term of $\mathcal{H}_{\rm{lg}}$, proportional to $K$, is the interaction energy between a pair of filled neighbors regardless of their spins, and the last term, proportional to $D$, is a spin-independent coupling of the occupied sites to some external field, with $D$ playing the role of this field. Now we are ready to prove the equivalence of $\mathcal{H}_{\rm{Ising1}}^{(4)}$ given by (\ref {BEG}) and $\mathcal{H}_{\rm{lg}}$ given by (\ref {Lgas}). To do so we start from (\ref {Lgas}) and impose the following transformation \begin{eqnarray}\label{Trans} r_i=t_i s_i. \end{eqnarray} Equation (\ref {Trans}) illustrates that $r_i$ can be $+1$, $-1$, or $0$. Obviously $r_i^2$ only has two possible values: $+1$ or $0$. So in the last two terms of (\ref {Lgas}) we can substitute $t_i$ with $r_i^2$. Thus $\mathcal{H}_{\rm{lg}}$ in terms of the new spin variable $r_i$ is \begin{eqnarray}\label{AfterTrans} \mathcal{H}_{\rm{lg}} =-J\sum_{<ij>}r_i r_j -K\sum_{<ij>}r_i^2 r_j^2 -D\sum_{i=1}^{N}r_i^2. \end{eqnarray} Thus the Hamiltonians (\ref {BEG}) and (\ref {Lgas}) are equivalent and share the same physics. This equivalence is conceptually trivial, because the spin-1 model can be considered as a spin-1/2 model with vacancies, but the underlying physics is interesting. Regarding this point, historically Blume, Emery, and Griffiths suggested the Hamiltonian (\ref {BEG}) as a spin-1 lattice model to describe a mixture of non-magnetic $(s = 0)$ and magnetic $(s = \pm1)$ components \cite {RefJ7}. The model was originally inspired by the experimental observation that the continuous superfluid transition in $\rm{He}^4$ with $\rm{He}^3$ impurity becomes a first-order transition into normal and superfluid phase separation above some critical $\rm{He}^3$ concentration. \begin{table} \caption{Metropolis Monte Carlo simulation results for $T_{\rm{tc}}$ and $D_{\rm{tc}}$ for different values of $K$ for a $20\times20$ 2D square lattice governed by the Hamiltonian (\ref {BEG}). } \label{tab:2} \begin{center} \begin{tabular}{|c|c|c|} \hline $K/J$ & $k_BT_{\rm{tc}}/J$ & $D_{\rm{tc}}/J$ \\ \hline 0.00 & 0.64 & -1.96 \\ 0.10 & 0.68 & -2.16 \\ 0.20 & 0.75 & -2.36 \\ 0.30 & 0.82 & -2.56 \\ 0.40 & 0.85 & -2.75 \\ 0.50 & 0.92 & -2.96 \\ 0.60 & 0.97 & -3.16 \\ \hline \end{tabular}\\ \end{center} \end{table} \normalsize Blume, Emery and Griffiths have found the mean-field solution and have determined the approximate phase diagram and the $\rm{tc}$ point of the model.
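For orientation, the mean-field transition line (\ref{E:T0_mf}) and the tricritical point (\ref{Tri_mf}) of Section 3.3 are easy to evaluate numerically. The sketch below (assuming \texttt{numpy}; illustrative units with $k_B=1$, $z=4$, $J=1$) uses simple bisection:

\begin{verbatim}
import numpy as np

z, J = 4, 1.0   # 2D square lattice; units with k_B = 1

def transition_T(D, lo=1e-6, hi=10.0):
    # bisection for z*J*beta_0 = 1 + 0.5*exp(-beta_0*D); for D >= 0
    # the difference of the two sides is monotone, so one root exists
    g = lambda b: z*J*b - 1.0 - 0.5*np.exp(-b*D)
    for _ in range(100):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 1.0/lo

D_tc = -z*J*np.log(4.0)/3.0    # tricritical crystal field (Tri_mf)
T_tc = z*J/3.0                 # from beta_tc * z * J = 3
print(D_tc, T_tc)              # approx -1.848 and 1.333
print(transition_T(0.0))       # mean-field T_0 at D = 0
\end{verbatim}

The mean-field values $D_{\rm{tc}}\approx-1.85$ and $k_BT_{\rm{tc}}\approx1.33$ may be compared with the Monte Carlo values $-1.96$ and $0.64$ of Section 3.3, illustrating once more the quantitative limitations of mean-field theory in 2D.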
Since we have found that the Metropolis Monte Carlo results qualitatively agree with the mean-field solution, e.g. the phase diagrams obtained from the two methods are similar, in Table \ref {tab:2} we present the Metropolis Monte Carlo simulation results for the tc temperature $T_{\rm{tc}}$ and the tc crystal-field strength $D_{\rm{tc}}$ for a 2D square lattice. These results show that with increasing $K$ there is an increase of $T_{\rm{tc}}$ and a decrease of $D_{\rm{tc}}$. \subsection{Case 5: long-range spin-1 Ising model} \label{sec:3.5} Long-range interactions, which are typical of statistical mechanical systems, may affect the critical behavior of the corresponding models \cite{RefJ16}. Regarding this, in this section we deal with the 2D spin-1 Ising model with long-range spin interactions. We investigate the effect of this further interaction using Metropolis Monte Carlo simulation, comparing the results with the ones of the corresponding 2D spin-1/2 Ising model in the presence of the same interaction. In analogy with the long-range spin-1/2 Ising model \cite {RefJ17,RefJ18} we define the long-range Hamiltonian for the spin-1 Ising model as follows \begin{eqnarray}\label{LRIH} \mathcal{H}_{\rm{lr}}^{(5)} =-\sum_{ij}\frac {J}{r_{ij}^{d+\sigma}} s_i s_j, \end{eqnarray} where $s_i=0$, $1$, or $-1$, $d$ is the lattice dimensionality, $\sigma$ is a phenomenological parameter which determines how quickly the interaction decays with distance, $r_{ij}$ is the distance between a pair of spins labeled by the indices $i$ and $j$, and the subscript $lr$ denotes long range. Explicitly, in the Metropolis algorithm if the flipped spin is at position $(x,y)$ we have \begin{eqnarray}\label{rij} r_{ij}=\sqrt{{(x-i)}^2+{(y-j)}^2}. \end{eqnarray} In this analysis we limit ourselves to ferromagnetic materials, i.e. $J>0$. Notice that for the spin-1/2 Ising model the Hamiltonian (\ref {LRIH}) has the same expression but $s_i=+1$ or $-1$. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{TC_spin_half.jpg} \includegraphics[width=0.45\textwidth]{TC_spin_one.jpg} \caption{$t_c=\frac{k_B T_c}{J}$ as a function of $\sigma$ for the long-range Ising model (\emph{Top}: spin-1/2; \emph{Bottom}: spin-1) for a square lattice with $40\times40$ sites described by the Hamiltonian (\ref {LRIH}) with $R=7$, obtained from Metropolis Monte Carlo simulation. Error bars are smaller than the size of the markers.} \label{fig:Tc-4} \end{center} \end{figure} On the basis of this numerical simulation we can investigate the dependence of the critical temperature on $\sigma$. We perform the Metropolis Monte Carlo simulation for different values of the parameter $\sigma$, considering the long-range interaction and assuming that each spin interacts with all other spins whose distance is less than or equal to some radius $R$. In other words, the summation in the Hamiltonian (\ref {LRIH}) is carried out over all spins which are in the circle of radius $R$ around the flipped spin in each step of the Metropolis algorithm, i.e. in the algorithm $r_{ij}\le R$. Figure \ref{fig:Tc-4} shows the result for $R=7$. As we expect, the critical temperature of the model decreases with increasing $\sigma$. Based on a least-squares fit to the simulation results, the dependence of the critical temperature on $\sigma$ for the spin-1/2 model is given by \begin{eqnarray}\label{t_c-sigma-half} \frac{k_B T_{c,\rm{spin-1/2}}^{\rm{lr}}}{J}=2.3+6.9e^{-0.7\sigma}.
\end{eqnarray} Similarly, for the spin-1 model we get \begin{eqnarray}\label{t_c-sigma-one} \frac{k_B T_{c,\rm{spin-1}}^{\rm{lr}}}{J}=1.7+4.7e^{-0.7\sigma}. \end{eqnarray} So in both cases we have \begin{eqnarray}\label{t_c-sigma-half_one} T_c=a+be^{-c\sigma}, \end{eqnarray} where $a$ is the critical temperature of the short-range model and $c\approx0.7$. \section{Conclusion} \label{sec:4} In the present work we studied the classical spin-1 Ising model using different analytical and numerical methods, such as mean-field theory, series expansions and Monte Carlo simulation, to investigate some critical properties of the model, like the critical temperature and critical exponents, for the 1D chain, the 2D square lattice, and the 3D cubic lattice. We have found that, despite some similarities with the critical behavior of the classical spin-1/2 Ising model, the critical properties of the spin-1 model are much richer and more variegated, because its Hamiltonian contains more terms, i.e. more different types of interactions between spin pairs. We have used mean-field theory, which represents a powerful mathematical tool to study the physics of the model in some special cases. We have found that, for 3D lattices near the tricritical point, the critical properties of the model can be described by mean-field theory. In particular, we have found that the critical exponents around the tricritical point calculated via the mean-field approximation are confirmed by Monte Carlo simulations. On the other hand, the mean-field results for 2D lattices are only qualitatively but not quantitatively correct, as highlighted by Monte Carlo simulation. The simulation results obtained for 2D square and 3D cubic lattices can be easily extended to other types of lattices. We have shown that, for a special case of the spin-1 Ising model where the bilinear and the bi-cubic terms are set equal to zero, it is possible to write the corresponding partition function in arbitrary dimensions as the one of the spin-1/2 Ising model, in agreement with our Monte Carlo simulation. Finally, we have investigated the long-range spin-1 Ising model by including a long-range interaction term in the Hamiltonian, in analogy with what was carried out for the spin-1/2 Ising model, and we have determined the dependence of the critical temperature of the two models on the strength of this interaction. \section*{Acknowledgements} This work was partially supported by the National Group of Mathematical Physics (GNFM-INdAM) and Istituto Nazionale di Alta Matematica “F. Severi”. \bibliographystyle{apsrev4-1}
{ "timestamp": "2020-07-20T02:02:33", "yymm": "2007", "arxiv_id": "2007.08593", "language": "en", "url": "https://arxiv.org/abs/2007.08593" }
\section{Introduction}\label{S1} \setcounter{equation}{0} \noindent We continue the study of the sharp constants in multivariate inequalities of approximation theory that began in \cite{G2018, G2019, G2019b, G2020}. In this paper we prove an asymptotic equality between the sharp constants in the multivariate Markov-Bernstein-Nikolskii type inequalities for entire functions of exponential type and algebraic polynomials whose Newton polyhedra are subsets of the given convex body. \vspace{.12in}\\ \textbf{Notation.} Let $\R^m$ be the Euclidean $m$-dimensional space with elements $x=(x_1,\ldots,x_m),\, y=(y_1,\ldots,y_m), \,t=(t_1,\ldots,t_m),\,u=(u_1,\ldots,u_m)$, the inner product $t\cdot x:=\sum_{j=1}^mt_jx_j$, and the norm $\vert x\vert:=\sqrt{x\cdot x}$. Next, $\CC^m:=\R^m+i\R^m$ is the $m$-dimensional complex space with elements $z=(z_1,\ldots, z_m)=x+iy$ and the norm $\vert z\vert:=\sqrt{\vert x\vert^2+\vert y\vert^2}$; $\Z^m$ denotes the set of all integral lattice points in $\R^m$; and $\Z^m_+$ is a subset of $\Z^m$ of all points with nonnegative coordinates. We also use multi-indices $s=(s_1,\ldots,s_m)\in \Z^m_+,\, \be=(\be_1,\ldots,\be_m)\in \Z^m_+$, and $\al=(\al_1,\ldots,\al_m)\in \Z^m_+$ with \ba \vert s\vert:=\sum_{j=1}^m s_j,\quad \vert\be\vert:=\sum_{j=1}^m\be_j,\quad \vert\al\vert:=\sum_{j=1}^m\al_j,\quad y^\be:=y_1^{\be_1}\cdot\cdot\cdot y_m^{\be_m}, \quad D^\al:=\frac{\partial^{\al_1}}{\partial y_1^{\al_1}}\cdot\cdot\cdot \frac{\partial^{\al_m}}{\partial y_m^{\al_m}}. \ea \noindent Given $\sa\in\R^m,\,\sa_j\ne 0,\,1\le j\le m$, and $M>0$, let $\Pi^m(\sa):=\{t\in\R^m: \vert t_j\vert\le \vert\sa_j\vert, 1\le j\le m\},\, Q^m(M):=\{t\in\R^m: \vert t_j\vert\le M, 1\le j\le m\},\, \BB^m(M):=\{t\in\R^m: \vert t\vert\le M\}$, and $O^m(M):=\{t\in\R^m: \sum_{j=1}^m\vert t_j\vert\le M\}$ be the $m$-dimensional parallelepiped, cube, ball, and octahedron, respectively. In addition, $\vert \Omega\vert_k$ denotes the $k$-dimensional Lebesgue measure of a measurable set $\Omega\subseteq\R^m,\,1\le k\le m$. We also use the floor function $\lfloor a \rfloor$. Let $L_r(\Omega)$ be the space of all measurable complex-valued functions $F$ on a measurable set $\Omega\subseteq\R^m$ with the finite quasinorm \ba \|F\|_{L_r(\Omega)}:=\left\{\begin{array}{ll} \left(\int_\Omega\vert F(x)\vert^r\, dx\right)^{1/r}, & 0<r<\iy,\\ \mbox{ess} \sup_{x\in \Omega} \vert F(x)\vert, &r=\iy. \end{array}\right. \ea This quasinorm satisfies the following ``triangle'' inequality: \beq\label{E1.1} \left\|\sum_{j=1}^l F_j\right\|^{\tilde{r}}_{L_r(\Omega)} \le \sum_{j=1}^l \left\|F_j\right\|^{\tilde{r}}_{L_r(\Omega)}, \qquad F_j\in L_r(\Omega),\qquad 1\le j\le l, \eeq where $l\in\N:=\{1,\,2,\ldots\}$ and $\tilde{r}:=\min\{1,r\}$ for $r\in(0,\iy]$. In this paper we will need certain definitions and properties of convex bodies in $\R^m$. Throughout the paper $V$ is a centrally symmetric (with respect to the origin) closed convex body in $\R^m$ and $V^*:=\{y\in\R^m: \forall\, t\in V, \vert t\cdot y\vert \le 1\}$ is the \emph{polar} of $V$. It is well known that $V^*$ is a centrally symmetric (with respect to the origin) closed convex body in $\R^m$ and $V^{**} =V$ (see, e.g., \cite[Sect. 14]{R1970}). The set $V$ generates the following dual norm on $\CC^m$: \ba \|z\|_V^*:=\sup_{t\in V}\left\vert\sum_{j=1}^m t_jz_j\right\vert,\quad z\in\CC^m. \ea
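For example, a direct computation from these definitions gives \ba \left(Q^m(M)\right)^*=O^m(1/M),\qquad \left(\BB^m(M)\right)^*=\BB^m(1/M),\qquad \left(O^m(M)\right)^*=Q^m(1/M), \ea and, in the univariate case $V=[-\la,\la]$, $\|z\|_V^*=\la\vert z\vert$.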
\ea Throughout the paper we assume that the body $V\subset \R^m$ satisfies the \emph{parallelepiped condition ($\Pi$-condition)}, that is, for every vector $t\in V$ with nonzero coordinates, the parallelepiped $\Pi^m(t)$ is a subset of $V$. It is easy to verify that $V$ satisfies the $\Pi$-condition if and only if $V$ is symmetric about all coordinate hyperplanes, that is, for every $t\in V$ the vectors $(\pm\vert t_1\vert,\ldots, \pm\vert t_m\vert)$ belong to $V$. In particular, given $\la\in [1,\iy]$ and $\sa\in\R^m,\,\sa_j>0,\,1\le j\le m$, the set $V_{\la,\sa}:=\left\{t\in\R^m: \left(\sum_{j=1}^m\vert t_j/\sa_j\vert^{\la}\right)^{1/\la}\le 1\right\}$ satisfies the $\Pi$-condition. Therefore, the sets $\Pi^m(\sa)$ (for $\la=\iy$), $Q^m(M)$ (for $\la=\iy$ and $\sa=(M,\ldots,M)$), $\BB^m(M)$ (for $\la=2$ and $\sa=(M,\ldots,M)$), and $O^m(M)$ (for $\la=1$ and $\sa=(M,\ldots,M)$) satisfy the $\Pi$-condition as well. Given $a\ge 0$, the set of all trigonometric polynomials $T(x)=\sum_{\theta\in aV\cap \Z^m}c_\theta\exp[i(\theta\cdot x)]$ with complex coefficients is denoted by $\TT_{aV}$. \begin{definition}\label{D1.1} We say that an entire function $f:\CC^m\to \CC^1$ has exponential type $V$ if for any $\vep>0$ there exists a constant $C_0(\vep,f)>0$ such that for all $z\in \CC^m$, $\vert f(z)\vert\le C_0(\vep,f)\exp\left((1+\vep)\|z\|_V^*\right)$. \end{definition} The class of all entire functions of exponential type $V$ is denoted by $B_V$. In the univariate case we use the notation $B_\la:=B_{[-\la,\la]},\,\la>0$. Throughout the paper, if no confusion can occur, the same notation is applied to $f\in B_V$ and its restriction to $\R^m$ (e.g., in the form $f\in B_V\cap L_p(\R^m))$. The class $B_V$ was defined by Stein and Weiss \cite[Sect. 3.4]{SW1971}. For $V=\Pi^m(\sa),\,V=Q^m(M),$ and $V=\BB^m(M)$, similar classes were defined by Bernstein \cite{B1948} and Nikolskii \cite[Sects. 3.1, 3.2.6]{N1969}, see also \cite[Definition 5.1]{DP2010}. Properties of functions from $B_V$ have been investigated in numerous publications (see, e.g., \cite{B1948, N1969, SW1971, NW1978, G1982, G1991, G2001} and references therein). Some of these properties are presented in Lemma \ref{L2.1}. Given $a\ge 0$, let $\PP_{aV}$ be the set of all polynomials $P(x)=\sum_{\be \in aV\cap \Z_+^m}c_\be x^\be$ in $m$ variables with complex coefficients whose Newton polyhedra are subsets of $aV$. In the univariate case we use the notation $\PP_a=\PP_{\lfloor a\rfloor}:=\PP_{a[-1,1]}$. In the case of $V=O^m(1),\,\PP_{nV}=\PP_{O^m(n)}$ coincides with the set of all polynomials in $m$ variables of total degree at most $n,\,n\in\N$. It is easy to verify that if $V_1\subseteq V_2$, then $B_{V_1}\subseteq B_{V_2}$ and $\PP_{aV_1}\subseteq\PP_{aV_2}$. Throughout the paper $C,\,C_1,\,C_2,\ldots,C_{25}$ denote positive constants independent of essential parameters. Occasionally we indicate dependence on certain parameters. The same symbol $C$ does not necessarily denote the same constant in different occurrences, while $C_k,\,1\le k\le 25$, denotes the same constant in different occurrences. \vspace{.12in}\\ \textbf{Markov-Bernstein-Nikolskii Type Inequalities.} Let $D_N:=\sum_{\vert\al\vert=N}b_\al D^\al$ be a linear differential operator with constant coefficients $b_\al\in\CC^1,\,\vert\al\vert=N,\, N\in \Z^1_+$. We assume that $D_0$ is the corresponding imbedding or identity operator.
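As a simple illustration of this notation (a routine example recorded for the reader's convenience; cf. the operator $D_N=\Delta^{N/2}$ discussed below), the Laplace operator on $\R^m$ is the operator $D_2$ with coefficients \ba \Delta=\sum_{j=1}^m\frac{\partial^2}{\partial y_j^2}=\sum_{\vert\al\vert=2}b_\al D^\al,\qquad b_\al:=\left\{\begin{array}{ll} 1, & \al=2e_j,\,1\le j\le m,\\ 0, & \mbox{otherwise}, \end{array}\right. \ea where $e_j\in\Z^m_+$ denotes the $j$th standard basis vector.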
Next, we define sharp constants in multivariate Markov-Bernstein-Nikolskii type inequalities for algebraic and trigonometric polynomials and entire functions of exponential type. Let \bna &&M_{p,D_N,n,m,V}:=n^{-N-m/p} \sup_{P\in\PP_{O^m(n)}\setminus\{0\}}\frac{\vert D_N(P)(0)\vert} {\|P\|_{L_p(V^*)}},\label{E1.2a}\\ &&\Tilde{M}_{p,D_N,a,m,V}:=a^{-N-m/p} \sup_{P\in\PP_{aV}\setminus\{0\}}\frac{\vert D_N(P)(0)\vert} {\|P\|_{L_p(Q^m(1))}},\label{E1.2}\\ &&P_{p,D_N,a,m,V}:=a^{-N-m/p} \sup_{T\in\TT_{aV}\setminus\{0\}}\frac{\|D_N(T)\|_{L_\iy(Q^m(\pi))}} {\|T\|_{L_p(Q^m(\pi))}},\nonumber\\ && E_{p,D_N,m,V}:= \sup_{f\in (B_{V}\cap L_p(\R^m))\setminus\{0\}}\frac{\|D_N(f)\|_{L_\iy(\R^m)}} {\|f\|_{L_p(\R^m)}}.\label{E1.3} \ena Here, $a>0,\,N\in\Z^1_+,\,n\in\N,\,V\subset\R^m$, and $p\in(0,\iy]$. In a sense, $M_{p,D_N,n,m,V}$ and $\Tilde{M}_{p,D_N,n,m,V},\,n\in\N$, are dual sharp constants since the domain of integration $V^*$ in \eqref{E1.2a} is the polar of the polynomial "degree" $V$ in \eqref{E1.2}, and the domain of integration $Q^m(1)=(O^m(1))^*$ in \eqref{E1.2} is the polar of the polynomial "degree" $O^m(1)$ in \eqref{E1.2a}. In particular, $M_{p,D_N,n,m,O^m(1)}=\Tilde{M}_{p,D_N,n,m,O^m(1)},\,n\in\N$. We show in this paper that the equality can be asymptotically extended to any $V$ satisfying the $\Pi$-condition. Newton polyhedra and polynomial classes $\PP_{aV}$ associated with Newton polyhedra play an important role in algebra, geometry, and analysis (see, e.g., a survey \cite[Sect. 3]{AVGK1984}). However, the only sharp estimate for polynomials from $\PP_{aV}$ we know in multivariate approximation theory is a sharp V. A. Markov-type inequality for polynomial coefficients with $a\in\N$ and $V=\Pi^m(\sa),\,\sa_j\in\N,\,1\le j\le m,$ proved by Bernstein \cite[Theorem 1]{B1948a} (see \eqref{E1.5c} below). The purpose of this paper is to prove a limit relation between $E_{p,D_N,m,V}$ and $\Tilde{M}_{p,D_N,a,m,V}$ as $a\to\iy$ for $V$ satisfying the $\Pi$-condition. The following limit relation for multivariate trigonometric polynomials \beq\label{E1.5} \lim_{a\to\iy}P_{p,D_N,a,m,V}=E_{p,D_N,m,V},\qquad p\in(0,\iy], \eeq was proved by the author \cite[Theorem 1.3]{G2018}. In the univariate case of $V=[-1,1],\,D_N=d^N/dx^N$, and $a\in\N$, \eqref{E1.5} was proved by the author and Tikhonov \cite{GT2017}. In earlier publications \cite{LL2015a, LL2015b}, Levin and Lubinsky established versions of \eqref{E1.5} on the unit circle for $N=0$. Quantitative estimates of the remainder in asymptotic equalities of the Levin-Lubinsky type were found by Gorbachev and Martyanov \cite{GM2020}. Certain extensions of the Levin-Lubinsky results to the $m$-dimensional unit sphere in $\R^{m+1}$ were recently proved by Dai, Gorbachev, and Tikhonov \cite{DGT2018}. The first sharp constant in the inequality for polynomial coefficients was found by V. A. Markov \cite{M1892} (see also \cite[Eq. (5.1.4.1)]{MMR1994}) in the form ($n\in\N$) \bna\label{E1.5a} &&M_{\iy,d^N/dx^N,n,1,[-1,1]} =\Tilde{M}_{\iy,d^N/dx^N,n,1,[-1,1]}\nonumber\\ &&=\mu^N_n:=n^{-N} \left\{\begin{array}{ll} \left\vert T_{n-1}^{(N)}(0)\right\vert, &n-N\,\mbox{is odd},\\ \left\vert T_{n}^{(N)}(0)\right\vert, &n-N\,\mbox{is even} \end{array}\right. \nonumber\\ &&=1+o(1)=(1+o(1))E_{\iy,d^N/dx^N,1,[-1,1]}, \ena as $n\to\iy$, where $T_n\in\PP_n$ is the Chebyshev polynomial of the first kind.
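As a quick consistency check of \eqref{E1.5a} (a routine verification recorded here for the reader's convenience), consider $N=0$, so that $D_0(P)(0)=P(0)$. Since $T_k(0)=\cos(k\pi/2)$, we have $\vert T_k(0)\vert=1$ for every even $k$, and the parity rule in \eqref{E1.5a} always selects an even index ($n-1$ for odd $n$ and $n$ for even $n$); hence \ba \mu^0_n=1=E_{\iy,D_0,1,[-1,1]},\qquad n\in\N, \ea where the second equality holds since $\vert f(0)\vert\le \|f\|_{L_\iy(\R^1)}$ for every $f\in B_{[-1,1]}\cap L_\iy(\R^1)$, with equality for $f\equiv 1$. Thus for $N=0$ relation \eqref{E1.5a} holds even without the $o(1)$ terms.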
For $p=2$ Labelle \cite{L1969} proved the equalities ($n\in\N,\,N\le n$) \bna\label{E1.5b} &&M_{2,d^N/dx^N,n,1,[-1,1]} =\Tilde{M}_{2,d^N/dx^N,n,1,[-1,1]}\nonumber\\ &&=\frac{(2N)!}{2^N N!}\sqrt{N+1/2}\,\,n^{-(N+1/2)} \binom{\lfloor(n-N)/2\rfloor+N+1/2} {N+1/2}=\frac{1+o(1)}{\sqrt{\pi(2N+1)}}\nonumber\\ &&=(1+o(1))E_{2,d^N/dx^N,1,[-1,1]}, \ena as $n\to\iy$. The following sharp constant in the multivariate inequality for polynomial coefficients was found in \cite[Theorem 1]{B1948a}: \beq\label{E1.5c} \Tilde{M}_{\iy,D^\al,a,m,\Pi^m(\sa)} =a^{-\vert\al\vert} \prod_{j=1}^m \lfloor a\sa_j\rfloor^{\al_j} \mu^{\al_j}_{\lfloor a\sa_j\rfloor} =(1+o(1))\prod_{j=1}^m\sa_j^{\al_j} =(1+o(1))E_{\iy,D^\al,m,\Pi^m(\sa)}, \eeq as $a\to\iy$, where $\mu^{\al_j}_{\lfloor a\sa_j\rfloor}$ is defined in \eqref{E1.5a} and $\sa_j>0,\,1\le j\le m$. Note that \beq\label{E1.5d} \Tilde{M}_{\iy,D^\al,a,m,\Pi^m(\sa)} \le \prod_{j=1}^m\sa_j^{\al_j}, \eeq which follows from the left equality in \eqref{E1.5c} and the corresponding univariate version of \eqref{E1.5d}, $\mu^N_{n}\le 1$ (see \cite[Eq. 2.6(9)]{T1963} with its proof in \cite[Lemma 2.5]{G2019b}). A crude estimate \beq\label{E1.5e} \vert c_\be\vert \le \left(\prod_{j=1}^m \be_j!\right)^{-1}\,(A(V)a/M)^{\vert\be\vert} \|P\|_{L_\iy(Q^m(M))}, \qquad \be\in aV\cap\Z^m_+, \eeq for coefficients of a polynomial $P(x)=\sum_{\be\in aV\cap\Z^m_+}c_\be x^\be$ from $\PP_{aV}$ follows immediately from \eqref{E1.5d} if we choose a cube $Q^m(A),\,A=A(V)$, such that $V\subseteq Q^m(A)$ and use \eqref{E1.5d} for $\Pi^m(\sa)=Q^m(A)$. The author \cite[Theorem 1.2]{G2019b} extended \eqref{E1.5a} and \eqref{E1.5b} to a general asymptotic relation for a multivariate $L_p$-version of the V. A. Markov constant for polynomial coefficients in the following form ($n\in\N,\,p\in(0,\iy]$): \beq\label{E1.6} \lim_{n\to\iy}M_{p,D_N,n,m,V}=E_{p,D_N,m,V}. \eeq For $m=1,\,D_N=d^N/dx^N$, and $V=[-1,1]$ this equality was proved by the author in \cite[Theorem 1.1]{G2017}. A special case of \eqref{E1.6} for an even $N\in\Z^1_+,\,p\in[1,\iy]$, the unit ball $V=\BB^m(1)$, and the operator $D_N=\Delta^{N/2}$, where $\Delta$ is the Laplace operator, was obtained by the author in \cite[Corollary 4.4]{G2019}. Note that relations \eqref{E1.5} and \eqref{E1.6} are valid for any centrally symmetric $V$ (see \cite{G2018, G2019b}). Note also that certain properties of the sharp constants in univariate weighted spaces are discussed by Arestov and Deikalova \cite{AD2015}. In addition, note that the Bernstein-Nikolskii sharp constants $E_{p,D_N,m,V}$ can be easily found only for $p=2$ (see \cite[Eq. (1.6)]{G2018}). Despite the fact that the constants $M_{p,D_N,n,m,V}$ and $\Tilde{M}_{p,D_N,a,m,V}$ for $m>1$ are defined differently by \eqref{E1.2a} and \eqref{E1.2}, it turns out that they are asymptotically equal. In this paper we extend \eqref{E1.5a}, \eqref{E1.5b}, and \eqref{E1.5c} to a general asymptotic relation for $\Tilde{M}_{p,D_N,a,m,V}$, which is similar to \eqref{E1.6}. \vspace{.12in}\\ \textbf{Main Results and Remarks.} Recall that $V$ is a closed convex body in $\R^m$ satisfying the $\Pi$-condition. In particular, $V$ is centrally symmetric (with respect to the origin). \begin{theorem} \label{T1.2} If $N\in\Z^1_+,\,V\subset\R^m$, and $p\in(0,\iy]$, then $ \lim_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V}$ exists and \beq \label{E1.7} \lim_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V}=E_{p,D_N,m,V}.
\eeq In addition, there exists a nontrivial function $f_0\in B_V\cap L_p(\R^m)$ such that \beq \label{E1.8} \lim_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V}= \|D_N(f_0)\|_{L_\iy(\R^m)}/\|f_0\|_{L_p(\R^m)}. \eeq \end{theorem} \noindent The following corollary is a direct consequence of relations \eqref{E1.5}, \eqref{E1.6}, and \eqref{E1.7}. \begin{corollary} \label{C1.3} If $n\in\N,\,N\in\Z^1_+,\,V\subset\R^m$, and $p\in(0,\iy]$, then \ba \lim_{n\to\iy}M_{p,D_N,n,m,V}=\lim_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V} =\lim_{a\to\iy}P_{p,D_N,a,m,V}=E_{p,D_N,m,V}. \ea \end{corollary} \begin{remark}\label{R1.4} Relations \eqref{E1.7} and \eqref{E1.8} show that the function $f_0\in B_V\cap L_p(\R^m)$ from Theorem \ref{T1.2} is an extremal function for $E_{p,D_N,m,V}$. \end{remark} \begin{remark}\label{R1.5} In definitions \eqref{E1.2} and \eqref{E1.3} of the sharp constants we discuss only complex-valued functions $P$ and $f$. We can define similarly the "real" sharp constants if the suprema in \eqref{E1.2} and \eqref{E1.3} are taken over all real-valued functions on $\R^m$ from $\PP_{aV}\setminus\{0\}$ and $(B_V\cap L_p(\R^m))\setminus\{0\}$, respectively. It turns out that the "complex" and "real" sharp constants coincide. For $m=1$ this fact was proved in \cite[Sect. 1]{G2017} (cf. \cite[Theorem 1.1]{GT2017} and \cite[Remark 1.5]{G2019b}), and the case of $m>1$ can be proved similarly. \end{remark} \begin{remark}\label{R1.5a} Answering a referee's question, we announced in \cite[Remark 1.6]{G2019b} relation \eqref{E1.7} for $V=Q^m(M)$ and $a\in\N$ with a typo ($a^{-N-m/p}$ was missing). \end{remark} \begin{remark}\label{R1.6} Occasionally we call $M_{p,D_N,n,m,V}$ and $\Tilde{M}_{p,D_N,a,m,V}$ the V. A. Markov constants for polynomial coefficients because of relations \eqref{E1.5a}. However, there are different constants $\MM_{p,D_N,n,m,V}$ and $\Tilde{\MM}_{p,D_N,a,m,V}$, defined by \eqref{E1.2a} and \eqref{E1.2}, respectively, with $n^{-N-m/p} \vert D_N(P)(0)\vert$ replaced by the corresponding $L_\iy$-norm. They are associated with the name of V. A. Markov as well because he \cite{M1892} found the sharp constant for $m=1,\,p=\iy,\,D_N=d^N/dx^N$, and $V=[-1,1]$. A brief survey on $\MM_{p,d^N/dx^N,n,1,[-1,1]}=\Tilde{\MM}_{p,d^N/dx^N,n,1,[-1,1]}$ and its asymptotic behaviour was presented in \cite{G2017} (see also \cite[Corollary 4.6]{G2019}). Certain estimates of $\MM_{p,D_0,n,m,V}$ were surveyed in \cite[Remark 1.8]{G2019b}. \end{remark} The proof of Theorem \ref{T1.2} is presented in Section \ref{S3}. It follows general ideas developed in \cite[Corollary 7.1]{G2020}. Section \ref{S2} contains certain properties of functions from $B_V$ and $\PP_{aV}$. \section{Properties of Entire Functions and Polynomials}\label{S2} \setcounter{equation}{0} \noindent In this section we discuss certain properties of entire functions of exponential type and polynomials that are needed for the proof of Theorem \ref{T1.2}. We start with three standard properties of multivariate entire functions of exponential type.
\begin{lemma}\label{L2.1} (a) If $f\in B_V$, then there exists $M=M(V)>0$ such that $f\in B_{Q^m(M)}$.\\ (b) The following crude Bernstein and Nikolskii type inequalities hold true: \bna &&\left\|D^\al(f)\right\|_{L_{\iy}(\R^m)} \le C \left\|f\right\|_{L_{\iy}(\R^m)},\quad f\in B_V\cap L_\iy(\R^m), \quad \al\in Z^m_+,\label{E2.1}\\ &&\left\|f\right\|_{L_{\iy}(\R^m)} \le C \left\|f\right\|_{L_{p}(\R^m)},\quad f\in B_V\cap L_p(\R^m),\quad p\in(0,\iy),\label{E2.2} \ena where $C$ is independent of $f$.\\ (c) For any sequence $\{f_n\}_{n=1}^\iy,\, f_n\in B_V\cap L_\iy(\R^m),\,n\in\N,$ with $\sup_{n\in\N}\| f_n\|_{L_\iy(\R^m)}= C$, there exist a subsequence $\{f_{n_d}\}_{d=1}^\iy$ and a function $f_0\in B_V\cap L_\iy(\R^m)$ such that for every $\al\in\Z^m_+$, \beq\label{E2.3} \lim_{d\to\iy} D^\al f_{n_d}=D^\al f_0 \eeq uniformly on any compact set in $\CC^m$. \end{lemma} \proof Statement (a) follows from the obvious inclusion $V\subseteq Q^m(M)$ for a certain $M=M(V)>0$ (cf. \cite[Lemma 2.1 (a)]{G2019b}). Inequality \eqref{E2.1} for $V=Q^m(M),\,M>0$, is well known (see, e.g., \cite[Eq. 3.2.2(8)]{N1969}), while for any $V$, \eqref{E2.1} follows from statement (a) (cf. \cite[Lemma 2.1(c)]{G2019b}). Inequality \eqref{E2.2} was established in \cite[Theorem 5.7]{NW1978}. Statement (c) was proved in \cite[Lemma 2.3]{G2018}. \hfill $\Box$ Given $a\ge 0,\,\g>0$, and a univariate continuous function $f\in L_\iy(\R^1)$, let \beq\label{E2.3a} E(f,\PP_{a},L_\iy([-\g,\g])):=\inf_{R\in\PP_a}\|f-R\|_{L_\iy([-\g,\g])} =\|f-R_a\|_{L_\iy([-\g,\g])} \eeq be the error of best approximation of $f$ by polynomials from $\PP_{a}$ in the norm of $L_\iy([-\g,\g])$. Here, $R_a(\cdot)=R_a(f,\g,\cdot)\in\PP_a$ is the polynomial of best uniform approximation to $f$. Some elementary properties of $R_a$ are discussed in the next lemma. \begin{lemma}\label{L2.2} (a) The following inequality holds true: \beq\label{E2.4} \|R_a\|_{L_\iy([-\g,\g])}\le 2\|f\|_{L_\iy(\R^1)}. \eeq (b) If $f_\mu(v):=f(\mu v),\,\mu\ne 0$, then $R_a(f_\mu,\g/\vert\mu\vert,v) =R_a(f,\g,\mu v),\,v\in[-\g/\vert\mu\vert,\g/\vert\mu\vert]$.\\ (c) For $a_j\ge 0,\,\g_j>0$, and $t\in\R^d$ with $t_j\ne 0,\,1\le j\le d$, the following inequality holds true: \bna\label{E2.5} &&\max_{\vert x_j\vert\le \g_j/\vert t_j\vert, 1\le j\le d} \left\vert\prod_{j=1}^df(t_jx_j) -\prod_{j=1}^d R_{a_j}(f, \g_j,t_jx_j)\right\vert\nonumber\\ && \le \|f\|_{L_\iy(\R^1)}^{d-1} \sum_{j=1}^d 2^{j-1} E(f(t_j\cdot),\PP_{a_j}, L_\iy([-\g_j/\vert t_j\vert,\g_j/\vert t_j\vert])). \ena \end{lemma} \proof Statement (a) follows from the inequalities \ba \|R_a\|_{L_\iy([-\g,\g])}\le \|f\|_{L_\iy([-\g,\g])} +E(f,\PP_{a},L_\iy([-\g,\g]))\le 2\|f\|_{L_\iy([-\g,\g])}, \ea while statement (b) is an immediate consequence of the Kolmogorov characterization of an element of best approximation to a complex-valued function \cite{K1948} (see also \cite[Theorem 1.9]{S1974} and \cite[Sect. 47]{A1965}). 
To prove statement (c), we note that for $\vert x_j\vert\le \g_j/\vert t_j\vert,\, 1\le j\le d$, the following relations hold true by \eqref{E2.4}: \bna &&\left\vert\prod_{j=1}^df(t_jx_j) -\prod_{j=1}^d R_{a_j}(f,\g_j,t_jx_j)\right\vert\nonumber\\ &=&\left\vert\sum_{j=1}^d\left[f(t_jx_j)-R_{a_j} (f,\g_j,t_jx_j)\right] \prod_{k=j+1}^df(t_kx_k)\prod_{k=1}^{j-1} R_{a_k}(f,\g_k,t_kx_k)\right\vert \label{E2.6a}\\ &\le& \|f\|_{L_\iy(\R^1)}^{d-1} \sum_{j=1}^d 2^{j-1} \left\|f(t_j\cdot)-R_{a_j}(f,\g_j,t_j\cdot)\right\|_ {L_\iy([-\g_j/\vert t_j\vert,\g_j/\vert t_j\vert])},\label{E2.6} \ena where $\prod_{k=l}^q:=1$ for $q<l$. Note that identity \eqref{E2.6a} is the standard telescoping decomposition of a difference of products; its simple proof (e.g., by induction on $d$) is left as an exercise to the reader. Then \eqref{E2.5} follows from \eqref{E2.6} since $R_{a_j}(f,\g_j,t_jx_j) =R_{a_j}(f_{t_j},\g_j/\vert t_j\vert,x_j)$ by statement (b) for $a=a_j,\,\mu=t_j\ne 0$, and $v=x_j, \,1\le j\le d$. \hfill $\Box$ \begin{remark}\label{R2.2a} Concerning Lemma \ref{L2.2} (b), we note that for every fixed $v\in\R^1$ the polynomial $\left\{\begin{array}{ll} R_a(f,\g,\mu v), &\mu\ne 0,\\ f(0), &\mu=0, \end{array}\right.$ is obviously a continuous function of $\mu\in\R^1\setminus \{0\}$, but it can be discontinuous at $\mu=0$ since $R_a(f,\g,0)$ is not necessarily equal to $f(0)$. \end{remark} In the next four lemmas we discuss estimates of the error of polynomial approximation for functions from $B_V$. \begin{lemma}\label{L2.3} Let $g\in B_\la\cap L_\iy(\R^1)$ be a univariate entire function of exponential type at most $\la>0$. Given $a\ge 1$ and $\tau\in(0,1)$, the following inequality holds true: \beq\label{E2.7} E(g,\PP_{a},L_\iy([-a\tau/\la,a\tau/\la])) \le C_1(\tau)\exp[-C_2(\tau)\,a]\,\|g\|_{L_\iy(\R^1)}, \eeq where \beq\label{E2.8} C_1(\tau):=2\left(1+1/\sqrt{1-\tau^2}\right),\qquad C_2(\tau):=\log\left(1+\sqrt{1-\tau^2}\right)-\log \tau-\sqrt{1-\tau^2}>0. \eeq \end{lemma} \noindent \proof It is known (see, e.g., \cite[Sect. 5.4.4]{T1963}) that for any $g\in B_\la\cap L_\iy(\R^1),\,a\ge 1,\, \tau\in(0,1)$, and $\de>0$, \ba E(g,\PP_{a},L_\iy([-a\tau/\la,a\tau/\la])) \le \frac{2\exp[a\tau \de]} {\de\left(\de+\sqrt{1+\de^2}\right)^{\lfloor a\rfloor}} \|g\|_{L_\iy(\R^1)}. \ea Therefore, \bna\label{E2.9} &&E(g,\PP_{a},L_\iy([-a\tau/\la,a\tau/\la]))\nonumber\\ &&\le \frac{2\left(\de+\sqrt{1+\de^2}\right)}{\de} \exp\left[\left(\tau\de -\log\left(\de+\sqrt{1+\de^2}\right)\right)a\right] \|g\|_{L_\iy(\R^1)}. \ena Setting $\de=\sqrt{1-\tau^2}/\tau$ in \eqref{E2.9}, we arrive at \eqref{E2.7} and \eqref{E2.8}. \hfill $\Box$ In the case of $a\in\N$, versions of Lemma \ref{L2.3} were proved by the author \cite[Lemma 4.1]{G1982} and Bernstein \cite[Theorem VI]{B1946} (see also \cite[Sect. 5.4.4]{T1963} and \cite[Appendix, Sect. 83]{A1965}). More general and more precise inequalities were obtained in \cite{G1982} and \cite{G1991}. \begin{lemma}\label{L2.4} For given $a\ge 1$ and $\tau\in(0,1)$ and for every $t\in V$, there exists a polynomial $P_t(x)=P_{t,a,V,\tau}(x) =\sum_{\be\in aV\cap\Z^m_+}c_{\be}(t)x^\be$ from $\PP_{aV}$ such that $c_{\be}=c_{\be,a,V,\tau}\in L_\iy(V), \be\in aV\cap\Z^m_+$, and the following inequality holds true: \beq\label{E2.10} \esssup_{t\in V}\max_{x\in Q^m(a\tau)} \vert\exp[i(t\cdot x)]-P_{t}(x)\vert \le C_3(\tau,m)\exp[-C_4(\tau,V)\, a].
\eeq \end{lemma} \proof We prove the lemma in three steps.\vspace{.12in}\\ \textbf{Step 1.} We first obtain the univariate inequality ($\la\ne 0$) \beq\label{E2.11} E\left(\exp[i\la\cdot],\PP_{a},L_\iy ([-a\tau/\vert \la\vert,a\tau/\vert\la\vert])\right) \le C_1(\tau)\exp[-C_2(\tau)\,a] \eeq by using Lemma \ref{L2.3} for $g(\cdot)=\exp[i\la\cdot]\in B_{\vert\la\vert}\cap L_\iy(\R^1)$.\vspace{.12in}\\ \textbf{Step 2.} Next, we prove \eqref{E2.10} for a parallelepiped $V=\Pi^m(u)$, where $u\in\R^m,\,u_j\ne 0,\,1\le j\le m$, in the following form: \beq\label{E2.12} \esssup_{t\in \Pi^m(u)} \max_{x\in Q^m(a\tau)} \vert\exp[i(t\cdot x)]-P_{t,a,\Pi^m(u),\tau}(x)\vert \le C_3(\tau,m)\exp \left[-C_4\left(\tau,\Pi^m(u)\right)\, a\right]. \eeq Here, \beq\label{E2.13} C_3(\tau,m)=m2^{m-1}C_1(\tau),\qquad C_4\left(\tau,\Pi^m(u)\right) =\min_{1\le j\le m}\vert u_j\vert\,C_2(\tau), \eeq and the constants $C_1$ and $C_2$ in \eqref{E2.11} and \eqref{E2.13} are defined by \eqref{E2.8}. To prove \eqref{E2.12}, for any $t\in \Pi^m(u)$ we define a polynomial \beq\label{E2.13a} P_t(x)= P_{t,a,\Pi^m(u),\tau}(x):=\left\{\begin{array}{ll} \prod_{t_j\ne 0,1\le j\le m} R_{a\vert u_j\vert} \left(\exp[i\cdot],a\tau\vert u_j\vert,t_jx_j\right), &t\ne 0,\\ 1,&t=0, \end{array}\right. \eeq from the class $\PP_{a\Pi^m(u)}=\PP_{\Pi^m(au)}$. We recall that $R_a=R_a(f,\g,\cdot)$ is defined by \eqref{E2.3a}. Since $\vert t_j\vert\le \vert u_j\vert,\,1\le j\le m$, we obtain from \eqref{E2.13a}, \eqref{E2.5}, and \eqref{E2.11} \ba &&\max_{x\in Q^m(a\tau)} \vert\exp[i(t\cdot x)]-P_{t}(x)\vert\\ &&\le \max_{t_j\ne 0,\,\vert x_j\vert \le a\tau\vert u_j\vert/\vert t_j\vert,\,1\le j\le m } \vert\exp[i(t\cdot x)]-P_{t}(x)\vert\\ &&\le 2^{m-1}\sum_{t_j\ne 0,1\le j\le m} E(\exp[it_j\cdot],\PP_{a\vert u_j\vert}, L_\iy\left(\left[-a\tau\vert u_j\vert/ \vert t_j\vert,a\tau\vert u_j\vert/\vert t_j\vert\right]\right)) \\ &&\le m2^{m-1}C_1(\tau)\exp \left[-\min_{1\le j\le m}\vert u_j\vert\, C_2(\tau)\, a\right]. \ea This proves \eqref{E2.12} and \eqref{E2.13}. Note that by formula \eqref{E2.13a}, all coefficients $c_\be(t),\,\be\in \Pi^m(au)\cap\Z^m_+$, of the polynomial $P_t$ are continuous in $t\in \R^m\setminus \bigcup_{j=1}^m H_j$, where $H_j$ is the $j$th $(m-1)$-dimensional coordinate hyperplane in $\R^m,\,1\le j\le m$. We also note that the coefficients can be discontinuous on $H:=\bigcup_{j=1}^m H_j$ (see Remark \ref{R2.2a}). However, $c_{\be}=c_{\be,a,\Pi^m(au),\tau}\in L_\iy(\R^m),\, \be\in \Pi^m(au)\cap\Z^m_+$. Indeed, using relations \eqref{E2.13a} and \eqref{E2.4}, we obtain the inequality \beq\label{E2.13b} \max_{x\in Q^m(a\tau)}\vert P_t(x)\vert \le 2^{m} \eeq for every $t\in \R^m$. Therefore, for coefficients of $P_t$ we have the estimate $\sup_{t\in \R^m}\vert c_{\be}(t)\vert<\iy,\, \be\in \Pi^m(au)\cap\Z^m_+$, by \eqref{E1.5e} and \eqref{E2.13b}. Then $c_{\be}\in L_\iy(\R^m),\, \be\in \Pi^m(au)\cap\Z^m_+$, since $\vert H\vert_m=0$. \vspace{.12in}\\ \textbf{Step 3.} Finally, let $V$ be a convex body satisfying the $\Pi$-condition.\\ \textbf{Step 3a).} First of all, given $\de\in(1,\iy)$, we construct a finite family of parallelepipeds $\left\{\Pi^m\left(u^{(k)}\right)\right\}_{k=1}^K$ such that \beq\label{E2.14} V\subseteq \bigcup_{k=1}^K \Pi^m\left(u^{(k)}\right)\subseteq \de V,\qquad \min_{1\le j\le m,1\le k\le K}\left\vert u^{(k)}_j\right\vert \ge C_5(\de,V), \eeq where $K=K(\de,m,V)$.
To construct the family, we first consider the following parallelepipeds \ba \Pi_l:=\left\{x\in\R^m:\vert x_l\vert\le C_6(\de,V),\, \vert x_j\vert\le C_7(\de,V),\,j\ne l\right\}, \ea where $C_6(\de,V):=\min_{1\le l\le m} \sqrt{(1+\de)/2}\,\left|OX_l\cap V\right|_1$ ($OX_l$ is the $l$th coordinate axis, $1\le l\le m$), and $C_7(\de,V)$ is chosen such that \beq\label{E2.15} \Pi_l\subseteq \sqrt{\de}\, V,\quad 1\le l\le m, \qquad \inf_{x\in V\setminus\bigcup_{l=1}^m\Pi_l} \min_{1\le j\le m}\left\vert x_j\right\vert \ge C_7(\de,V). \eeq Since $\sqrt{(1+\de)/2}<\sqrt{\de}$, there is a small enough $C_7(\de,V)<C_6(\de,V)$ such that \eqref{E2.15} holds true. Next, let $x\in V\setminus\bigcup_{l=1}^m\Pi_l$. Then $\vert x_j\vert>0,\,1\le j\le m$, by the second relation of \eqref{E2.15}, and $x$ is an interior point of $\Pi^m(\de x)$. In addition, since $V$ satisfies the $\Pi$-condition, we see that $\Pi^m(\de x)\subseteq \de V$. Furthermore, setting \beq\label{E2.16} \Pi_x:=\left\{\begin{array}{ll} \sqrt{\de}\,\Pi_l,&x\in \Pi_l\cap V,\,1\le l\le m,\\ \Pi^m(\de x), &x\in V\setminus\bigcup_{l=1}^m\Pi_l, \end{array}\right. \eeq for every $x\in V$, we see by the construction of $\Pi_x=\Pi^m(u),\,u=u(x)\in \de V,$ and by relations \eqref{E2.15} and \eqref{E2.16} that \beq\label{E2.16a} V\subseteq \bigcup_{x\in V}\Pi_x\subseteq \de V,\qquad \min_{x\in V}\min_{1\le j\le m}\left\vert u_j(x)\right\vert \ge \sqrt{\de}C_7(\de,V). \eeq To construct the family $\left\{\Pi^m\left(u^{(k)}\right)\right\}_{k=1}^K$ with $K=K(\de,m,V)$, we need the following special case of Morse's theorem \cite{M1947} (see also \cite[Remark 1.4]{Gu1975}): \begin{lemma}\label{L2.5} Suppose that for every $x\in V$ there exists a parallelepiped $\tilde{\Pi}_x$ (not necessarily centered at the origin) satisfying the following condition: there exist a fixed constant $C\ge 1$ independent of $x$ and two balls $x+\BB^m(r(x))$ and $x+\BB^m(Cr(x))$, centered at $x$ and of radii $r(x)$ and $Cr(x)$, respectively, such that $x+\BB^m(r(x))\subseteq \tilde{\Pi}_x\subseteq x+\BB^m(Cr(x))$. Then the family $\left\{\tilde{\Pi}_x\right\}_{x\in V}$ contains a subfamily $\pi:=\left\{\tilde{\Pi}_{x(d)}\right\}_{d=1}^\iy$ with the following properties: \begin{itemize} \item[(a)] $V\subseteq \bigcup_{d=1}^\iy\tilde{\Pi}_{x(d)}$; \item[(b)] there exist subfamilies $\pi_k,\, 1\le k\le K_1(m,C)$, of mutually disjoint parallelepipeds such that $\pi=\cup_{k=1}^{K_1}\pi_k$. \end{itemize} \end{lemma} \noindent Then the family $ \left\{\tilde{\Pi}_x\right\}_{x\in V}=\left\{\Pi_x\right\}_{x\in V}$ defined by \eqref{E2.16} satisfies the condition of Lemma \ref{L2.5}. Indeed, by the construction of $\Pi_x$, the condition of Lemma \ref{L2.5} is satisfied for \ba r(x)=C_8(\de,V):=\left(\sqrt{\de}-1\right)C_7, \quad x\in V;\qquad C=\de D(V)/C_8, \ea where $D(V)$ is the diameter of $V$. In addition, note that any two parallelepipeds defined by \eqref{E2.16} have nonempty intersection (indeed, each of them is centered at the origin and hence contains it). Hence the subfamilies $\pi_k,\, 1\le k\le K_1(m,C)$, from property (b) of Lemma \ref{L2.5} contain no more than one parallelepiped.
Therefore, by Lemma \ref{L2.5}, there exists a finite subfamily of parallelepipeds $\left\{\tilde{\Pi}_{x(d)}\right\}_{d=1}^{K_1} =\left\{\Pi^m\left(u^{(k)}\right)\right\}_{k=1}^K$ with $K(\de,m,V):=K_1(m,C)$ and $u^{(k)}\in \de V,\,1\le k\le K$, such that \eqref{E2.14} holds true for $C_5=\sqrt{\de} C_7$ by \eqref{E2.16a}.\vspace{.12in}\\ \textbf{Step 3b).} Furthermore, given $\tau\in(0,1)$, let us set $\de=1/\tau$, and let $\left\{\Pi^m\left(u^{(k)}\right)\right\}_{k=1}^K$ be a finite family of parallelepipeds, where $K=K(\de,m,V)$ and $u^{(k)}\in \de V,\,1\le k\le K$, such that \eqref{E2.14} holds true. Let us define $P_t(x)=P_{t,a,V,\tau}(x),\,x\in Q^m(a\tau),\, t\in V$, by the formula \beq\label{E2.17} P_{t,a,V,\tau}(x):=P_{t,a/\de,\Pi^m\left(u^{(k)}\right),\tau}(x), \qquad t\in V\cap\left(\Pi^m\left(u^{(k)}\right) \setminus \bigcup_{l=1}^{k-1}\Pi^m\left(u^{(l)}\right)\right), \quad 1\le k\le K. \eeq Recall that the polynomial $P_{t,a/\de,\Pi^m (u^{(k)}),\tau}(x)$ is defined by \eqref{E2.13a}, and its coefficients belong to $L_\iy(\Pi^m (u^{(k)})),\,1\le k\le K$. Since $V\subseteq \bigcup_{k=1}^K\Pi^m\left(u^{(k)}\right)$ by \eqref{E2.14}, we see from \eqref{E2.17} that the coefficients of $P_{t,a,V,\tau}$ belong to $L_\iy(V)$. Next, since $\bigcup_{k=1}^K\Pi^m\left(u^{(k)}\right)\subseteq \de V$ by \eqref{E2.14}, $P_{t,a,V,\tau}\in \PP_{(a/\de)(\de V)}=\PP_{aV}$ for each fixed $t\in V$. Furthermore, we obtain from \eqref{E2.17}, \eqref{E2.12}, \eqref{E2.13}, and \eqref{E2.14} \ba && \esssup_{t\in V} \max_{x\in Q^m(a\tau)} \vert\exp[i(t\cdot x)]-P_{t,a,V,\tau}(x)\vert\\ &&\le \max_{1\le k\le K} \esssup_{t\in \Pi^m\left(u^{(k)}\right)} \max_{x\in Q^m(a\tau)} \left\vert\exp[i(t\cdot x)] -P_{t,a\tau,\Pi^m\left(u^{(k)}\right),\tau}(x)\right\vert\\ &&\le C_3(\tau,m) \exp\left[-\min_{1\le k\le K}\min_{1\le j\le m} \vert u_j^{(k)}\vert\,C_2(\tau)\tau\, a\right]\\ &&\le C_3(\tau,m)\exp \left[-C_5(1/\tau,V)C_2(\tau)\tau\, a\right]\\ &&=C_3(\tau,m)\exp \left[-C_4(\tau,V)\, a\right]. \ea This completes the proof of Lemma \ref{L2.4}. \hfill $\Box$ \begin{lemma}\label{L2.6} For any $f\in B_V\cap L_\iy(\R^m),\,\tau\in(0,1)$, and $a\ge 1$, there is a polynomial $P_a=P_{a,V,\tau,f}\in\PP_{aV}$ such that for each $\al\in\Z_+^m$ and $r\in(0,\iy]$, \beq\label{E2.18} \lim_{a\to\iy}\left\|D^\al(f)-D^\al(P_a)\right\|_{L_{r}(Q^m(a\tau))}=0. \eeq \end{lemma} \proof We prove the lemma in three steps.\vspace{.12in}\\ \textbf{Step 1.} We first assume that $f\in B_V\cap L_2(\R^m)$. By the Paley-Wiener type theorem \cite[Theorem 4.9]{SW1971}, there exists $\vphi\in L_2(V)$ such that $f(x)=(2\pi)^{-m/2}\int_V \vphi(t) \exp[i(t\cdot x)]\,dt,\,x\in\R^m$. Let $P_t(x)$ be a polynomial from Lemma \ref{L2.4}. Then for $a\ge 1$ the integral ($x\in\R^m$) \ba P_a^*(x)=P_a^*(f,V,\tau,x) :=(2\pi)^{-m/2}\int_V \vphi(t)P_t(x)\,dt =(2\pi)^{-m/2}\sum_{\be\in aV\cap\Z^m_+}\int_V \vphi(t)c_\be(t)\,dt\,x^\be \ea exists since $c_\be=c_{\be,a,V,\tau}\in L_\iy(V),\, \be\in aV\cap\Z^m_+$. Therefore, $P_a^*\in\PP_{aV}$. Next, it follows from \eqref{E2.10} that given $\tau\in(0,1)$, \bna\label{E2.19} \left\|f-P_a^*\right\|_{L_{\iy}(Q^m(a\tau))} &\le& (2\pi)^{-m/2}\int_V\vert \vphi(t)\vert\, dt\, \esssup_{t\in V} \max_{x\in Q^m(a\tau)} \vert\exp[i(t\cdot x)]-P_{t}(x)\vert\nonumber\\ &\le& \vert V\vert_m^{1/2} C_3(\tau,m)\exp[-C_4(\tau,V)\, a]\,\left\|f\right\|_{L_{2}(\R^m)}, \ena where $C_3$ and $C_4$ are the constants from Lemma \ref{L2.4}.\vspace{.12in}\\ \textbf{Step 2.} Next, let $f\in B_V\cap L_\iy(\R^m)$.
Then given $\tau\in (0,1)$ and $\vep\in(0,(1-\tau)/(2\tau C_9)]$, where $C_{9}(m,V):= (m+1)\sup_{z\in\CC^m} \vert z\vert/\|z\|_{V}^*$, the function \ba f_1(z):=f(z)\left(\frac{\sin\left[\vep \left(\sum_{j=1}^mz_j^2\right)^{1/2}\right]} {\vep\left(\sum_{j=1}^mz_j^2\right)^{1/2}}\right)^{m+1} \ea belongs to $B_{(1+\vep C_9)V}\cap L_2(\R^m)$ and $\left\|f_1\right\|_{L_{2}(\R^m)} \le C_{10}(m)\,\vep^{-m/2}\left\|f\right\|_{L_{\iy}(\R^m)}$. Replacing now $a$ with $a/(1+\vep C_9)$ and $\tau$ with $\tau(1+\vep C_9)\le (1+\tau)/2$ in \eqref{E2.19}, we see from \eqref{E2.19} that there exists a polynomial $P_a(\cdot)=P_{a,V,\tau,f,\vep}(\cdot) :=P^*_{a/(1+\vep C_9)}(f_1,(1+\vep C_9)V,\cdot)\in\PP_{aV}$, where $\vep$ will be chosen later, such that \bna\label{E2.20} &&\left\|f_1-P_a\right\|_{L_{\iy}(Q^m(a\tau))} \le \left\|f_1-P_a\right\|_{L_{\iy}(Q^m((a/(1+\vep C_9))(1+\tau)/2))} \nonumber\\ &&\le C_{11}(\tau,m,V)\vep^{-m/2} \exp[-C_4((1+\tau)/2,V)\, 2a\tau/(1+\tau)]\,\left\|f\right\|_{L_{\iy}(\R^m)} \nonumber\\ &&=C_{11}(\tau,m,V)\vep^{-m/2} \exp[-C_{12}(\tau,V)\, a]\,\left\|f\right\|_{L_{\iy}(\R^m)}. \ena Furthermore, using an elementary inequality $v-\sin v\le v^3/6,\,v\ge 0$, we have \bna\label{E2.21} \left\|f-f_1\right\|_{L_{\iy}(Q^m(a\tau))} &\le& (m+1)\max_{x\in Q^m(a\tau)}\left|1-\frac{\sin(\vep\vert x\vert)} {\vep\vert x\vert}\right|\left\|f\right\|_{L_{\iy}(\R^m)}\nonumber\\ &\le& (1/6)(m+1)m\,\vep^2\,a^2\left\|f\right\|_{L_{\iy}(\R^m)}. \ena Combining \eqref{E2.20} and \eqref{E2.21}, we obtain \bna\label{E2.22} \left\|f-P_a\right\|_{L_{\iy}(Q^m(a\tau))} \le C_{13}(\tau,m,V)\left(\vep^2\,a^2+\vep^{-m/2} \exp[-C_{12}(\tau,V)\, a]\right) \left\|f\right\|_{L_{\iy}(\R^m)}. \ena Finally, minimizing the right-hand side of \eqref{E2.22} over all $\vep\in (0,(1-\tau)/(2\tau C_9)]$, we arrive at the following inequality: \beq\label{E2.23} \left\|f-P_a\right\|_{L_{\iy}(Q^m(a\tau))} \le C_{14}(\tau,m,V)a^{\frac{2m}{m+4}} \exp[-C_{15}(\tau,m,V)\, a] \left\|f\right\|_{L_{\iy}(\R^m)}, \eeq where $C_{15}=4C_{12}/(m+4)$. Note that if the minimum occurs at $\vep=\vep_0$, then $P_a=P_{a,V,\tau,f,\vep_0}$ in \eqref{E2.23}. \vspace{.12in}\\ \textbf{Step 3.} First of all, for $P_b\in\PP_{bV},\,b\ge 1,\,M>0$, and $\al\in\Z_+^m$, we need the following crude Markov-type inequality: \beq\label{E2.24} \left\|D^\al(P_b)\right\|_{L_\iy(Q^m(M))} \le C_{16}(m,V,\vert\al\vert) (b^2/M)^{\vert\al\vert}\|P_b\|_{L_\iy(Q^m(M))}. \eeq To prove \eqref{E2.24}, we note that there exists a constant $C_{17}(V)$ such that $P_b$ is a polynomial of total degree at most $n=\lfloor C_{17}b\rfloor\in\N$ (that is, $P_b\in \PP_{ O^m(n)}$). Then inequality \eqref{E2.24} easily follows from a multivariate A. A. Markov-type inequality proved by Wilhelmsen \cite[Theorem 3.1]{W1974}. Next, let $\{P_{a+k}\}_{k=0}^\iy$ be the sequence of polynomials, satisfying inequality \eqref{E2.23} with $a$ replaced by $a+k,\,k=0,\,1,\,\ldots$. Then the series \ba \sum_{k=0}^\iy \left(P_{a+k+1}-P_{a+k}\right) =\lim_{L\to\iy}\left(P_{a+L+1}-P_{a}\right) =\lim_{L\to\iy}\left(P_{a+L+1}-f+f-P_{a}\right) \ea converges to $f-P_a$ in the metric of $L_\iy(Q^m(a\tau))$ by \eqref{E2.23}.
In addition, for any $\al\in \Z^m_+$ we obtain by \eqref{E2.24} for $M=a\tau$ and by \eqref{E2.23} \bna\label{E2.24a} &&\sum_{k=0}^\iy \left\|D^\al(P_{a+k+1}-P_{a+k}) \right\|_{L_\iy(Q^m(a\tau ))}\nonumber\\ &&\le C_{16} (a\tau)^{-\vert\al\vert} \sum_{k=0}^\iy (a+k+1)^{2\vert\al\vert} \left\|P_{a+k+1}-P_{a+k}\right\|_{L_\iy(Q^m(a\tau ))} \nonumber\\ &&\le C_{16} (a\tau)^{-\vert\al\vert} \sum_{k=0}^\iy (a+k+1)^{2\vert\al\vert} \left(\left\|f-P_{a+k+1}\right\|_{L_\iy(Q^m((a+k+1)\tau ))} + \left\|f-P_{a+k}\right\|_{L_\iy(Q^m((a+k)\tau ))}\right) \nonumber\\ &&\le 2C_{14}C_{16} (a\tau)^{-\vert\al\vert}\exp[-C_{15}\,a] \sum_{k=0}^\iy (a+k+1)^{2\vert\al\vert+2m/(m+4)} \exp[-C_{15}\,k]\,\|f\|_{L_\iy(\R^m)}\nonumber\\ &&\le C_{18}(\tau,m,V,\vert\al\vert,r)\, a^{\vert\al\vert+2}\exp[-C_{15}\,a] \,\|f\|_{L_\iy(\R^m)}. \ena Hence the series $\sum_{k=0}^\iy D^\al\left(P_{a+k+1}-P_{a+k}\right)$ is uniformly convergent on $Q^m(a\tau)$ by the Weierstrass M-test, and this series converges to $D^\al\left(f-P_a\right)$ in the metric of $L_\iy(Q^m(a\tau))$ by the Differentiation Theorem from multivariate calculus. It remains to take account of the following inequalities: \bna\label{E2.24b} && \left\|D^\al(f)-D^\al(P_a) \right\|_{L_r(Q^m(a\tau ))} \le (2a\tau)^{m/r} \left\|D^\al\left(f-P_a\right) \right\|_{L_\iy(Q^m(a\tau ))}\nonumber\\ && \le (2a\tau)^{m/r} \sum_{k=0}^\iy \left\|D^\al(P_{a+k+1}-P_{a+k}) \right\|_{L_\iy(Q^m(a\tau ))}. \ena Thus \eqref{E2.18} follows from \eqref{E2.24b} and \eqref{E2.24a}, and the proof of the lemma is completed. \hfill $\Box$ \begin{rmark}\label{R2.8a} Note that limit relation \eqref{E2.18} holds true for the same polynomial $P_a$ and any $\al\in\Z^m_+$ and $r\in(0,\iy]$. The proof of this fact in Lemma \ref{L2.6} is based on the exponential approximation rate in \eqref{E2.23}. \end{rmark} A certain polynomial estimate is discussed in the following lemma. \begin{lemma}\label{L2.7} Given $a\ge 1,\,M>0,\,p\in(0,\iy),\,\tau\in(0,1)$, and $P\in\PP_{aV}$, the following inequality holds true: \beq\label{E2.25} \|P\|_{L_\iy(Q^m(\tau M))} \le C_{19}(\tau,m,V,p) (a/M)^{m/p} \|P\|_{L_p(Q^m(M))}. \eeq \end{lemma} \proof Inequality \eqref{E2.25} for $V=Q^m(1)$ and $a\in\N$ follows from a more general inequality proved in \cite[Lemma 2.7 (b)]{G2019b}. To prove \eqref{E2.25} for any $V$, we note that there exists a constant $C_{20}(V)$ such that $P$ is a polynomial of degree at most $n=\lfloor C_{20}a\rfloor\in\N$ in each variable (that is, $P\in \PP_{Q^m(n)}$). Then \eqref{E2.25} follows from \cite[Lemma 2.7 (b)]{G2019b}. \hfill $\Box$ In the next lemma we discuss special properties of polynomials from $\PP_{aV}$. \begin{lemma}\label{L2.8} Given $a\ge 1,\,b\ge 1$, and $P(x)=\sum_{\be \in aV\cap \Z_+^m}c_\be x^\be \in\PP_{aV}$, let \ba R_{a,b}(t):=P(b\sin(t_1/b),\ldots, b\sin(t_m/b)),\qquad t\in\R^m, \ea be a trigonometric polynomial. Then the following statements are valid.\\ (a) $R_{a,b}\in B_{(a/b)V}$.\\ (b) For $\al\in\Z^m_+$ the following estimate holds true: \beq\label{E2.26} \left\vert D^\al(R_{a,b})(0)-D^\al(P)(0)\right\vert \le C_{21}(m,\al)\max_{0\le s_j\le \al_j,1\le j\le m,s\ne \al} \left\vert D^s(P)(0)\right\vert/b. \eeq \end{lemma} \proof (a) We see that \ba\label{E2.27} R_{a,b}(bt) =\sum_{\be\in aV\cap\Z^m_+} b^{\vert\be\vert}c_\be\prod_{j=1}^m\sin^{\be_j}t_j =\sum_{\be\in aV\cap\Z^m_+} b^{\vert\be\vert}c_\be \sum_{\theta\in\Z^m,0\le\vert \theta_j\vert\le\be_j,1\le j\le m} d_{\theta,\be}\exp[i(\theta\cdot t)].
\ea Then $R_{a,b}(b\cdot)\in\TT_{aV}$, since $V$ satisfies the $\Pi$-condition, and therefore, $R_{a,b}(\cdot)\in B_{(a/b)V}$.\vspace{.12in}\\ (b) To prove this statement, we need the identity \beq\label{E2.28} D^\al(R_{a,b})(0) =\sum_{s_1=1}^{\al_1}\ldots \sum_{s_m=1}^{\al_m} b^{\vert s\vert-\vert \al\vert}D^s(P)(0) \prod_{j=1}^mc(s_j,\al_j), \eeq where \beq\label{E2.29} c(l,k):=\sum\frac{k!1^{p_1}(-1)^{p_3}\ldots} {p_1!(1!)^{p_1}p_3!(3!)^{p_3}\ldots}, \eeq and the sum in \eqref{E2.29} is taken over all nonnegative integers $p_1,\,p_3,\ldots$, such that $1p_1+3p_3+\ldots=k$ and $p_1+p_3+\ldots=l,\,0\le l\le k$. Identity \eqref{E2.28} for $m=1$ follows from Fa\`{a} di Bruno's formula for derivatives of the composite function $P(b\sin(\cdot/b))$ (see, for example, \cite{R1980} or \cite{CR1996}). For $m>1$, \eqref{E2.28} can be proved by induction on $m$. Since $c(k,k)=1,\,k\in\N$, by \eqref{E2.29}, estimate \eqref{E2.26} follows immediately from \eqref{E2.28}. \hfill $\Box$ \section{Proof of Theorem \ref{T1.2}}\label{S3} \noindent \setcounter{equation}{0} Throughout the section we use the notation $\tilde{p}=\min\{1,p\},\, p\in(0,\iy]$, introduced in Section \ref{S1}. \vspace{.1in}\\ \emph{Proof of Theorem \ref{T1.2}.} We first prove the inequality \beq \label{E3.1} E_{p,D_N,m,V}\le\liminf_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V},\qquad p\in(0,\iy]. \eeq Let $f$ be any function from $B_V\cap L_p(\R^m),\,p\in(0,\iy]$. Then $f\in B_{Q^m(M)},\,M=M(V)>0$, by Lemma \ref{L2.1} (a); hence $D_N(f)\in B_{Q^m(M)}$ by \cite[Sect. 3.1]{N1969} (see also \cite[Lemma 2.1 (d)]{G2018}). In addition, $f\in L_\iy(\R^m)$ by Nikolskii's inequality \eqref{E2.2} and $D_N(f)\in L_p(\R^m)$ by Bernstein's and Nikolskii's inequalities \eqref{E2.1} and \eqref{E2.2} and by the "triangle" inequality \eqref{E1.1}. Therefore, \beq\label{E3.2} \lim_{\vert x\vert\to\iy}D_N(f)(x)=0,\qquad p\in(0,\iy). \eeq Indeed, since $D_N(f)\in B_{Q^m(M)}\cap L_p(\R^m)$, \eqref{E3.2} is known for $p\in[1,\iy)$ (see, e.g., \cite[Theorem 3.2.5]{N1969}), and for $p\in(0,1)$ it follows from \eqref{E2.2}, since if $D_N(f)\in L_p(\R^m),\,p\in(0,1)$, then $D_N(f)\in L_1(\R^m)$. Let us first prove \eqref{E3.1} for $p\in(0,\iy)$. Then by \eqref{E3.2}, there exists $x_0\in\R^m$ such that $\|D_N(f)\|_{L_\iy(\R^m)}=\left\vert D_N(f)(x_0)\right\vert$. Without loss of generality we can assume that $x_0=0$. Let $\tau\in(0,1)$ be a fixed number. Then using polynomials $P_a\in\PP_{aV},\,a\ge 1$, from Lemma \ref{L2.6}, we obtain for $r=\iy$ by \eqref{E2.18} and \eqref{E1.2}, \bna\label{E3.3} &&\|D_N(f)\|_{L_\iy(\R^m)}=\left\vert D_N(f)(0)\right\vert\nonumber\\ &&\le \lim_{a\to\iy}\left\vert D_N(f)(0)-D_N(P_a)(0)\right\vert +\liminf_{a\to\iy}\left\vert D_N(P_a)(0)\right\vert\nonumber\\ &&=\liminf_{a\to\iy}\left\vert D_N(P_a)(0)\right\vert \le \tau^{-(N+m/p)}\liminf_{a\to\iy}\left(\Tilde{M}_{p,D_N,a,m,V} \left\| P_a\right\|_{L_p(Q^m(a\tau))}\right). \ena Using again Lemma \ref{L2.6} (for $\al=0$ and $r=p$), we have from \eqref{E1.1} \bna\label{E3.4} \limsup_{a\to\iy} \left\| P_a\right\|_{L_p(Q^m(a\tau))} \le \lim_{a\to\iy}\left(\|f-P_a\|_{L_p(Q^m(a\tau))}^{\tilde{p}} +\|f\|_{L_p(Q^m(a\tau))}^{\tilde{p}}\right)^{1/\tilde{p}} =\|f\|_{L_p(\R^m)}. \ena Combining \eqref{E3.3} with \eqref{E3.4}, and letting $\tau\to 1-$, we arrive at \eqref{E3.1} for $p\in(0,\iy)$. In the case $p=\iy$, for any $\vep>0$ there exists $x_0\in\R^m$ such that $\|D_N(f)\|_{L_\iy(\R^m)}<(1+\vep)\left\vert D_N(f)(x_0)\right\vert$. Without loss of generality we can assume that $x_0=0$.
Then similarly to \eqref{E3.3} and \eqref{E3.4} we can obtain the inequality \beq\label{E3.5} \|D_N(f)\|_{L_\iy(\R^m)} <(1+\vep)\tau^{-N}\liminf_{a\to\iy}\Tilde{M}_{\iy,D_N,a,m,V} \|f\|_{L_\iy(\R^m)}. \eeq Finally, letting $\tau\to 1-$ and $\vep\to 0+$ in \eqref{E3.5}, we arrive at \eqref{E3.1} for $p=\iy$. This completes the proof of \eqref{E3.1}. Furthermore, we will prove the inequality \beq\label{E3.6} \limsup_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V}\le E_{p,D_N,m,V},\qquad p\in(0,\iy], \eeq by constructing a nontrivial function $f_0\in B_V\cap L_p(\R^m)$ such that \beq \label{E3.7} \limsup_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V} \le\|D_N(f_0)\|_{L_\iy(\R^m)}/ \|f_0\|_{L_p(\R^m)} \le E_{p,D_N,m,V}. \eeq Then inequalities \eqref{E3.1} and \eqref{E3.6} imply \eqref{E1.7}. In addition, $f_0$ is an extremal function in \eqref{E1.7}, that is, \eqref{E1.8} is valid. It remains to construct a nontrivial function $f_0$, satisfying \eqref{E3.7}. We first note that \beq \label{E3.8} \inf_{a\ge 1}\Tilde{M}_{p,D_N,a,m,V}\ge C_{22}(p,N,D_N,m,V). \eeq This inequality follows immediately from \eqref{E3.1}. Let $U_a\in\PP_{aV}$ be a polynomial satisfying the equality \beq \label{E3.9} \Tilde{M}_{p,D_N,a,m,V}=a^{-N-m/p}\left\vert D_N(U_a)(0)\right\vert /\|U_a\|_{L_p(Q^m(a))},\qquad a\ge 1. \eeq The existence of an extremal polynomial $U_a$ in \eqref{E3.9} can be proved by the standard compactness argument (see, e.g., \cite[Proof of Theorem 1.5]{GT2017} and \cite[Proof of Theorem 1.3]{G2018}). Next, setting $P_a(x):=U_a(x/a)$, we have from \eqref{E3.9} that \beq \label{E3.10} \Tilde{M}_{p,D_N,a,m,V}=\left\vert D_N(P_a)(0)\right\vert /\|P_a\|_{L_p(Q^m(a))}=1/\|P_a\|_{L_p(Q^m(a))}, \eeq since we can assume that \beq \label{E3.11} \left\vert D_N(P_a)(0)\right\vert=1. \eeq Then it follows from \eqref{E3.10}, \eqref{E3.11}, and \eqref{E3.8} that \ba \|P_a\|_{L_p(Q^m(a))} =1/\Tilde{M}_{p,D_N,a,m,V}\le 1/C_{22}(p,N,D_N,m,V). \ea Hence using Lemma \ref{L2.7} for $M=a$ and $\tau\in(0,1)$, we obtain the estimate \beq \label{E3.12} \sup_{a\ge 1}\|P_a\|_{L_\iy(Q^m(a\tau))} \le C_{19}/C_{22} = C_{23}(\tau,p,N,D_N,m,V). \eeq In addition, combining estimates \eqref{E1.5e} for $M=a\tau$ and \eqref{E3.12}, we have for any $s\in\Z^m_+$, \beq \label{E3.13} \left\vert D^s(P_a)(0)\right\vert \le (A(V)/\tau)^{\vert s\vert} C_{23} = C_{24}(\tau,p,N,D_N,m,V,s). \eeq Furthermore, we define a trigonometric polynomial \ba R_{a,a\tau}(t) :=P_a(a\tau\sin(t_1/(a\tau)),\ldots, a\tau\sin(t_m/(a\tau))),\qquad t\in \R^m. \ea Then $R_{a,a\tau}$ satisfies the following properties: \begin{itemize} \item[(P1)] $R_{a,a\tau}\in B_{(1/\tau)V}$. \item[(P2)] The following relations hold true: \beq \label{E3.14} \sup_{a\ge 1}\|R_{a,a\tau}\|_{L_\iy(Q^m(a\tau\pi/2))} =\sup_{a\ge 1}\|R_{a,a\tau}\|_{L_\iy(\R^m)}\le C_{23}. \eeq \item[(P3)] For $\al\in\Z^m_+$ and $a\tau\ge 1$, \bna \label{E3.15} \left\vert D^\al(R_{a,a\tau})(0)-D^\al(P_{a})(0)\right\vert &\le& C_{21}\max_{0\le s_j\le \al_j,1\le j\le m,s\ne \al} C_{24}(\tau,p,N,D_N,m,V,s)/(a\tau)\nonumber\\ &=&C_{25}(\tau,p,N,D_N,m,V,\al)/a. \ena \item[(P4)] For $a\ge 1,\,p\in(0,\iy]$, and $M\in(0,a\tau/\sqrt{m}]$, \beq \label{E3.16} \|P_a\|_{L_p(Q^m(a))} \ge \left(1-mM^2(a\tau)^{-2}\right)^{1/p} \|R_{a,a\tau}\|_{L_p(Q^m(M))}. \eeq \end{itemize} Indeed, property (P1) follows from Lemma \ref{L2.8} (a), while (P2) is an immediate consequence of \eqref{E3.12}. Next, property (P3) follows from Lemma \ref{L2.8} (b) and relations \eqref{E3.13}.
To prove (P4), we note first that for $p=\iy$ inequality \eqref{E3.16} is trivial. Next, setting $f(\cdot)=\cos(\cdot)$ and replacing $R_{a_j},\,1\le j\le m$, with $1$ in identity \eqref{E2.6a}, we obtain \bna\label{E3.17} 1-\prod_{j=1}^m \cos(t_jx_j) = \left\vert\sum_{j=1}^m\left(1-\cos(t_jx_j)\right) \prod_{k=j+1}^m\cos(t_kx_k)\right\vert \le (1/2)\sum_{j=1}^m(t_jx_j)^2. \ena Furthermore, for $a\ge 1,\,p\in(0,\iy)$, and $M\in(0,a\tau/\sqrt{m}]$, \bna\label{E3.18} &&\|P_a\|^p_{L_p(Q^m(a))} \ge \|P_a\|^p_{L_p(Q^m(a\tau))} =\int_{Q^m(a\tau\pi/2)}\left\vert R_{a,a\tau}(t)\right\vert^p \prod_{j=1}^m \cos(t_j/(a\tau))\,dt\nonumber\\ &&\ge \|R_{a,a\tau}\|^p_{L_p(Q^m(M))} -\int_{Q^m(M)}\left\vert R_{a,a\tau}(t)\right\vert^p \left(1-\prod_{j=1}^m \cos(t_j/(a\tau))\right)\,dt. \ena Finally, using estimate \eqref{E3.17} for $x_j=1/(a\tau),\,1\le j\le m$, we obtain \eqref{E3.16} from \eqref{E3.18}. Let $\{a_n\}_{n=1}^\iy$ be an increasing sequence of numbers such that $\inf_{n\in\N}a_n\ge 1,\,\lim_{n\to\iy}a_n=\iy$, and \beq \label{E3.19} \limsup_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V} =\lim_{n\to\iy}\Tilde{M}_{p,D_N,a_n,m,V}. \eeq Property (P1) and relation \eqref{E3.14} of property (P2) show that the sequence of trigonometric polynomials $\{R_{a_n,a_n\tau}\}_{n=1}^\iy =\{f_n\}_{n=1}^\iy$ satisfies the conditions of Lemma \ref{L2.1} (c) with $B_V$ replaced by $B_{(1/\tau)V}$. Therefore, there exist a subsequence $\{R_{a_{n_d},a_{n_d}\tau}\}_{d=1}^\iy$ and a function $f_{0,\tau}\in B_{(1/\tau)V}$ such that \beq \label{E3.20} \lim_{d\to\iy}R_{a_{n_d},a_{n_d}\tau}=f_{0,\tau},\qquad \lim_{d\to\iy}D_N\left(R_{a_{n_d},a_{n_d}\tau}\right) =D_N(f_{0,\tau}), \eeq uniformly on any cube $Q^m(M),\,M>0$. Moreover, by \eqref{E3.11}, \eqref{E3.15}, and \eqref{E3.20}, \beq \label{E3.21} \left\vert D_N(f_{0,\tau})(0)\right\vert =\lim_{d\to\iy}\left\vert D_N \left(R_{a_{n_d},a_{n_d}\tau}\right)(0) \right\vert =\lim_{d\to\iy}\left\vert D_N \left(P_{a_{n_d}}\right)(0) \right\vert =1. \eeq In addition, using \eqref{E1.1}, \eqref{E3.20}, \eqref{E3.16}, \eqref{E3.10}, and \eqref{E3.19}, we obtain for any cube $Q^m(M),\,M>0$, \bna \label{E3.22} &&\|f_{0,\tau}\|_{L_p(Q^m(M))} \le \lim_{d\to\iy} \left(\left\|f_{0,\tau}-R_{a_{n_d},a_{n_d}\tau} \right\|_{L_p(Q^m(M))}^{\tilde{p}} +\left\|R_{a_{n_d},a_{n_d}\tau}\right\|_{L_p(Q^m(M))}^{\tilde{p}} \right)^{1/\tilde{p}}\nonumber\\ &&=\lim_{d\to\iy}\left\|R_{a_{n_d},a_{n_d}\tau}\right\|_{L_p(Q^m(M))} \le \lim_{d\to\iy}\left\|P_{a_{n_d}}\right\|_ {L_p\left(Q^m\left(a_{n_d}\right)\right)} =1/ \lim_{d\to\iy}\Tilde{M}_{p,D_N,a_{n_d},m,V}. \ena Next using \eqref{E3.22} and \eqref{E3.8}, we see that \beq \label{E3.23} \|f_{0,\tau}\|_{L_p(\R^m)}\le 1/C_{22}(p,N,D_N,m,V). \eeq Therefore, $f_{0,\tau}$ is a nontrivial function from $B_{(1/\tau)V}\cap L_p(\R^m)$, by \eqref{E3.23} and \eqref{E3.21}. Thus for any cube $Q^m(M),\,M>0$, we obtain from \eqref{E3.19}, \eqref{E3.10}, \eqref{E3.16}, \eqref{E3.20}, and \eqref{E3.21} \bna \label{E3.24} \limsup_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V} &=&\lim_{d\to\iy}\left(\left\|P_{a_{n_d}}\right\|_ {L_p\left(Q^m\left(a_{n_d}\right)\right)}\right)^{-1}\nonumber\\ &\le& \lim_{d\to\iy}\left(\left\|R_{a_{n_d},a_{n_d}\tau}\right\| _{L_p(Q^m(M))}\right)^{-1}\nonumber\\ &=&\left\vert D_N(f_{0,\tau})(0)\right\vert/ \|f_{0,\tau}\|_{L_p(Q^m(M))}. \ena It follows from \eqref{E3.24} that \beq \label{E3.25} \limsup_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V} \le E_{p,D_N,m,(1/\tau)V}=\tau^{-N-m/p}E_{p,D_N,m,V}. \eeq Then letting $\tau\to 1-$ in \eqref{E3.25}, we arrive at \eqref{E3.6}.
However, we need to prove the stronger relations \eqref{E3.7}. To construct $f_0$, note first that $f_{0,\tau}(\tau\cdot)\in B_V$ and by \eqref{E3.23} and \eqref{E2.2}, \ba \sup_{\tau\in (1/2,1)}\|f_{0,\tau}\|_{L_\iy(\R^m)} =\sup_{\tau\in (1/2,1)}\|f_{0,\tau}(\tau\cdot)\|_{L_\iy(\R^m)} \le C \sup_{\tau\in (1/2,1)} \tau^{-m/p}\|f_{0,\tau}\|_{L_p(\R^m)} < \iy. \ea Therefore, by Lemma \ref{L2.1} (c) applied to a sequence $\left\{f_{0,\tau_n}\right\}_{n=1}^\iy$, where $\tau_n\in(1/2,1),\,n\in\N$, and $\lim_{n\to\iy}\tau_n=1$, there exist a subsequence $\{f_{0,\tau_{n_d}}\}_{d=1}^\iy$ and a function $f_0\in B_V\cap L_\iy(\R^m) =\bigcap_{d=1}^\iy \left(B_{(1/\tau_{n_d})V} \cap L_\iy(\R^m)\right)$ such that for every $\al\in\Z^m_+$, $\lim_{d\to\iy} D^\al \left(f_{0,\tau_{n_d}}\right) =D^\al \left(f_0\right)$ uniformly on any compact set in $\CC^m$. Note that by \eqref{E3.21} and \eqref{E3.23}, $f_0$ is a nontrivial function from $B_{V}\cap L_p(\R^m)$. Then using \eqref{E3.24}, we obtain \ba \limsup_{a\to\iy}\Tilde{M}_{p,D_N,a,m,V} &\le& \lim_{M\to\iy}\lim_{n\to\iy} \left\vert D_N(f_{0,\tau_n})(0)\right\vert/ \left\|f_{0,\tau_n}\right\|_{L_p(Q^m(M))}\\ &=&\left\vert D_N(f_{0})(0)\right\vert/ \left\|f_{0}\right\|_{L_p(\R^m)}\\ &\le& \left\| D_N(f_{0})\right\|_{L_\iy(\R^m)}/ \|f_{0}\|_{L_p(\R^m)}\\ &\le& E_{p,D_N,m,V}. \ea Thus \eqref{E3.7} holds true, and this completes the proof of the theorem. \hfill$\Box$ \begin{remark}\label{R3.1} The proof of \eqref{E3.1} is all but identical to the proof of the inequality $ E_{p,D_N,m,V} \le\liminf_{n\to\iy}{M}_{p,D_N,n,m,V}$ from \cite{G2019b} though these proofs are based on different lemmas. However, the proof of \eqref{E3.6} is different compared with the proof of the inequality $ \limsup_{n\to\iy}{M}_{p,D_N,n,m,V} \le E_{p,D_N,m,V}$ from \cite{G2019b}. The latter proof is based on V. A. Markov-type inequalities for polynomials from $\PP_{O^m(n)}$. We do not know if there are analogues of these inequalities for polynomials from $\PP_{aV}$, except for the case of $V=\Pi^m(\sa)$; see \eqref{E1.5c}. That is why \eqref{E3.6} is reduced to certain relations for trigonometric polynomials (cf. \cite{G2018}). \end{remark} \noindent \textbf{Acknowledgements.} We are grateful to both anonymous referees for valuable suggestions.
{ "timestamp": "2021-03-18T01:05:02", "yymm": "2007", "arxiv_id": "2007.08439", "language": "en", "url": "https://arxiv.org/abs/2007.08439" }
\section{Author's Response} \label{sec:rebuttal} We thank all the reviewers for their insightful and constructive comments. \subsection{Reviewer 1} \begin{itemize} \item \textit{``My only suggestion is to reconsider looking at the lens distortion experiment. Typically distortion is much more visible at the edges of the image, whereas the binary code data is mainly in the middle of the image.''} In addition to the backgrounds being mostly uniform, our images are all cropped from near the centre of a much larger image anyway. This means that even with a different pre-processing step such as Canny edges, which might show some peripheral features such as bottle edges in the cases where they are visible, there would be very little difference in the amount of lens distortion near the image borders. \end{itemize} \subsection{Reviewer 4} \begin{itemize} \item \textit{``I would like to see at least a small mention on these (Algorithmic Transparency, Fairness, and Accountability) in the introduction and/or conclusion part''} {\color{orange} References to Algorithmic Transparency, Fairness and Accountability have been added in the introduction (paragraph 2 last sentence) and conclusion (paragraph 1 third sentence).} \item \textit{``...some of the images taken in the dataset have, at least in a part of them, something in the background. Do you think that this might affect you analysis? I would like to see an explanation to that in the dataset section of the paper.''} Although distractors are present in the background of many images, they should not affect the conclusions we can draw from our experiments unless these distractors correlate with manufacturer or camera type. We were aware of the camera bias issue at the time our dataset was acquired, and so care was taken to avoid this specific issue: four different people captured the images, and each photographed an equal number of bottles from each manufacturer with each camera, to avoid any correlation between camera / manufacturer and photographer, since photographers may introduce their own systematic distractors, e.g., by holding the bottles in a different way. {\color{orange} This has been clarified in the dataset section (paragraph 2, sentences 3 - 6).} \end{itemize} \subsection{Reviewer 6} \begin{itemize} \item \textit{``The authors claim that the CNNs identify camera types using high frequency features which are uniform throughout the image, but the adversarial attacks show rather localized perturbations, figure 5.''} We have used the term ``high frequencies'' somewhat loosely to refer to any spatially varying pattern that is detectable within a small window -- {\color{magenta} this has been clarified in Section 4.D, sentence 2.} \item \textit{``They could show a heat map of all the perturbations on the entire dataset and show that it is uniform vs the the other model is localized around the text.''} Our claim that camera bias exploits high frequencies that are present throughout the image is supported by experiments classifying $32 \times 32$ image crops, particularly Figures \ref{fig:heatmap} and \ref{fig:singleimage_heatmap}. These figures show that camera classification is indeed possible from small image patches with no bias to any particular image region on average, although within individual images there are regions that cannot be classified accurately. \item \textit{``The claim that CNNs rely on the cameras types to classify seems a bit weak.
To strengthen their argument, they could take the CNNs trained to classify current dataset and then use a different dataset with different objects but similarly biased with the same camera types and show that it is able to classify those images correctly.''} Training a CNN that cheats by exploiting camera bias and applying it to a new dataset with different objects but the same camera / label correlations would lend strength to our conclusions, but unfortunately we do not have access to the original cameras that captured our dataset. This is an issue because many of the potential sources of camera bias discussed in Section~\ref{sec:lit_review} produce fingerprints that are specific to individual cameras, not camera types. So a CNN that exploits camera / label correlations would not necessarily generalize to another camera-biased dataset, even if the same camera types were used. \item \textit{``The training details were missing, e.g, loss function, how many layers of the used networks were retrained, any hyperparameters etc.''} {\color{magenta} Optimizer, weight decay, loss function and network pretraining details have been added to the first paragraph of Section 4.} \end{itemize} \subsection{Reviewer 7} \begin{itemize} \item \textit{``the main problem of the paper is that it looks like a survey paper and not a conference paper''} We thank the reviewer for their criticism; however, we would respectfully argue this is not a survey paper; we have reviewed relevant literature to identify potential sources of camera bias, and then, importantly, tested each hypothesis in turn using detailed experiments until we converged on built-in image processing as the most likely explanation. \item \textit{``The authors just did some experiments to prove the authors' assumptions''} We thank the reviewer for their concern, but the original discovery that camera bias was the cause of poor generalization in our classification project took months of investigation; at no point did we assume this a priori. We believe the experiments in Section 4.2 demonstrate the role of camera bias quite adequately and we hope the reviewer agrees with the results of our experiments. \item \textit{``It is lack of novelty in general.''} We would like to reaffirm that our contribution is novel, as the problem of camera bias in deep learning has not been seriously investigated in any prior work to the best of our knowledge. \end{itemize} \section{Conclusion} \label{sec:conclusion} We have shown that CNNs learn to exploit camera / class label correlations in an image classification dataset in which such correlations are present. By recognizing the camera that acquired an image, CNNs are able to infer the class label without learning any features that are relevant to the task (in our case, manufacturer classification), as evidenced by poor generalization to images where the camera / label correlation is broken. This finding has relevance both to fine-grained classification and to algorithmic transparency and accountability. We also show that CNNs are capable of learning to infer origin cameras when explicitly trained to do so, corroborating the results of Bondi et al. and Tuama et al. \cite{bondi2017preliminary, tuama2016camera}. We test these phenomena across five different CNN architectures and show that the effects are common to all of them, although less pronounced for AlexNet and VGG16, the two architectures whose inputs must be downsampled to a smaller size due to the use of fully connected layers.
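For concreteness, the following minimal PyTorch-style sketch summarizes the common fine-tuning recipe used throughout our experiments (see Section~\ref{sec:experiments}); the choice of ResNet34 and the two-way manufacturer head are merely illustrative, and data loading is elided:
\begin{verbatim}
import torch
import torchvision

# Illustrative sketch (not our exact code): ImageNet-pretrained
# backbone, no layer freezing, Adam with lr = 1e-4 and zero
# weight decay, categorical cross-entropy loss.
model = torchvision.models.resnet34(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # 2 manufacturers

optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4, weight_decay=0.0)
criterion = torch.nn.CrossEntropyLoss()

def train_epoch(loader):
    # loader is assumed to yield (image batch, label batch) pairs
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
\end{verbatim}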
We have also performed a number of experiments to gain insight into how CNNs are recognizing cameras, the results of which require some discussion in this section. Section~\ref{sec:lit_review} outlines a number of potential sources of camera-identifying information, and our experiments provide evidence for and against those hypotheses. A simple explanation for camera bias would be differences in average color statistics among cameras, caused by differences in white balance and color correction settings. This hypothesis is largely ruled out by the fact that CNNs still recognize cameras easily even when hue, saturation, contrast and brightness are randomized (see Figure~\ref{fig:augmented}, Table~\ref{tab:colorjitter}). Another potential explanation was lens distortion; if different cameras have differently shaped lenses then there may be slight differences in geometric distortion (e.g. radial lens distortion \cite{choi2006source}). This hypothesis, too, is ruled out by the fact that CNNs are incapable of inferring cameras from binary segmented images (see Figure~\ref{fig:dotseg}, Table~\ref{tab:dotseg}). Geometric distortions would be visible in the spacing of the dots from the batch codes, which are the only features visible in these images. Chromatic aberration is also unlikely since it should only be visible at the edges of objects, not in flat regions (Figures~\ref{fig:adversarial_manufacturers} and \ref{fig:singleimage_heatmap}), and should also be undetectable in grayscale images (Section~\ref{sec:colorjitter}). These results increase the likelihood that texture, which is absent in segmented images but preserved in color-randomized images, plays an important role. High camera recognition accuracy on $32 \times 32$ random crops (Table~\ref{tab:crop32}), including in empty patches of the image where it is hard to imagine what features besides faint, high-frequency texture are available (Figure~\ref{fig:singleimage_heatmap}), increases this likelihood further. There are two likely sources of camera-correlated texture: pixel non-uniformity (PNU) noise, and the camera's on-board image processing, which typically includes algorithms such as kernel filtering, image sharpening and compression (both discussed by Lukas et al. \cite{lukavs2006digital}). PNU noise is dominated by a fixed multiplicative noise pattern that is introduced during manufacturing; as such, we would expect different noise patterns in different parts of the field of view, as opposed to a repeating pattern. The fact that CNNs trained on the left-hand sides of images generalize well to the right-hand sides of those images (Table~\ref{tab:leftside}) therefore implies that PNU noise is not crucial for camera recognition, since they should not be able to recognize unseen noise patterns on the right side of the images. The fact that AlexNet and VGG16 are also able to recognize cameras from downsampled images is further strong evidence against PNU noise, which should be undetectable after downsampling. By a process of elimination, the most likely explanation therefore seems to be on-camera image processing algorithms. We do not consider these results to be conclusive; a conclusive answer would require full knowledge of the original cameras, which we do not have. Further research is required to ascertain exactly which textural features are exploited by CNNs to recognize cameras. \section{Dataset} \label{sec:dataset} Our dataset consists of $3090$ RGB images of the undersides of shampoo bottles.
These images are of size $1024 \times 1024$ and are cropped tightly around the bottle's batch code, which is a two-line alphanumeric serial number printed by a dot matrix printer (see Figure~\ref{fig:batchcodes}). The crops are from roughly the same area of the original images, so they should cover mostly the same region of the cameras' sensors. This means they should contain roughly the same sensor pattern noise, up to some random translation. The batch codes of the two manufacturers' products are expected to differ in some potentially very subtle ways, hence the relatively high resolution of our images. Our images are captured with five different cameras: iPhone, Huawei, Samsung, Redmi and Vivo. In the base dataset these cameras occur at equal frequencies among the two manufacturers, but by excluding certain combinations of camera and label from the dataset, we can introduce correlations between camera type and class label. Since we were aware of the camera bias issue at the time our dataset was collected, care was taken to remove all sources of domain bias (e.g. different people photographing bottles from each manufacturer, and perhaps holding the bottles differently). To this end, the images were acquired by four different people who each photographed an equal number of images from each manufacturer and with each camera, all in the same room, under controlled lighting conditions. This means that domain bias should only exist if we deliberately induce it by excluding certain manufacturer / camera combinations. It also means that background distractors should be uncorrelated with manufacturer and camera type. The test set is a random $10\%$ of the samples, on which the model is never trained.
\begin{figure} \centering \includegraphics[width=\linewidth]{batchcodes} \caption{A pair of images from our dataset. The left was taken with an iPhone camera, while the right was taken with a Samsung.} \label{fig:batchcodes} \end{figure}
\section{Experiments} \label{sec:experiments} To address the questions raised in Section~\ref{sec:intro}, we run a series of classification experiments on variations of our dataset with camera/label correlation artificially introduced. All experiments are trained to convergence with the Adam optimizer \cite{kingma2014adam}, using a learning rate of $10^{-4}$, a weight decay of $0$, and the categorical cross entropy loss function. All accuracy numbers we report are averaged over four runs with different random number generator seeds. We perform all our experiments with five commonly used CNN architectures, all of which are pretrained on ImageNet and fine-tuned on our tasks with no layer freezing.
\subsection{Camera Classification} As a basic sanity check, we verify here that state-of-the-art vision models can very easily classify which camera took the image in this dataset. This corroborates the work of \cite{bondi2017preliminary,tuama2016camera} for our own datasets and cameras. Table~\ref{tab:camera-classification} shows that very high test accuracies can be achieved on this task across a range of architectures. As Figure~\ref{fig:camera_accuracy} shows, a pretrained ResNet34 not only achieves high accuracy at camera classification but does so very quickly. Camera recognition is learned faster than manufacturer recognition, suggesting that a model which can minimize its loss by recognizing manufacturers or by cheating via camera recognition will tend toward the latter, as it is evidently easier to learn. \begin{table}[h!]
\centering \begin{tabular}{c c} \toprule \textbf{Model} & \textbf{Test Accuracy}\\ \midrule \midrule ResNet34 & 0.999 \\ ResNet101 & 0.913 \\ InceptionV3 & 0.998 \\ AlexNet & 0.945 \\ VGG16 & 0.974 \\ \bottomrule \end{tabular} \vskip 3pt \caption{Accuracy on the test set when classifying which camera took an image.} \label{tab:camera-classification} \end{table}
\begin{figure} \centering \includegraphics[width=0.8\linewidth]{camera_classification_plot.png} \caption{A pretrained ResNet34 model learns to recognize manufacturers very quickly, and learns to recognize cameras even faster.} \label{fig:camera_accuracy} \end{figure}
\subsection{Manufacturer Classification} We investigate our primary task, manufacturer classification, under three settings, which we refer to as Balanced, Partial, and Disjoint. In the Balanced setting, we use the full training set and there are no correlations between camera type and class label. In Partial, we use the same training set but with only iPhone and Samsung cameras included. In Disjoint, we introduce correlations between camera and class label by including only Manufacturer 1 images taken with iPhone or Samsung cameras, and Manufacturer 2 images taken with Huawei or Redmi cameras. Our test set is the same in all cases, balanced across camera types with no camera/label correlations.
\begin{table}[h!] \centering \begin{tabular}{c c c c} \toprule \textbf{Model} & \textbf{Balanced} & \textbf{Partial} & \textbf{Disjoint}\\ \midrule \midrule ResNet34 & 0.974 & 0.957 & 0.505 \\ ResNet101 & 0.969 & 0.921 & 0.505 \\ InceptionV3 & 0.973 & 0.940 & 0.518 \\ AlexNet & 0.929 & 0.893 & 0.573 \\ VGG16 & 0.979 & 0.945 & 0.556 \\ \bottomrule \end{tabular} \vskip 3pt \caption{Manufacturer classification test set accuracy of five models with different training setups.} \label{tab:manufacturer-detection} \end{table}
Table~\ref{tab:manufacturer-detection} shows the results of manufacturer classification experiments on these three datasets. It is immediately apparent that while respectable accuracy is achieved when training on the Balanced dataset, an accuracy drop of roughly $36$--$47$ percentage points occurs when training on Disjoint. In fact, Disjoint accuracy is close to $50\%$, hardly better than random guessing, which would be entirely expected if the model based its classifications on camera types, since each camera type has an equal number of images of each class in the test set.
\begin{figure} \centering \includegraphics[width=0.8\linewidth]{disjoint_manufacturers_resnet34} \caption{Test accuracy plot showing the distribution of predicted labels among correct outputs, for a ResNet34 trained on the Disjoint training set, in which all Manufacturer 1 images are iPhone or Samsung, and all Manufacturer 2 are Huawei or Redmi. For images from iPhone and Samsung cameras the model predicts only Manufacturer 1, while for Huawei and Redmi it predicts only Manufacturer 2; for the unseen Vivo images it appears to guess randomly, achieving $54\%$ accuracy with a mostly even mix of both classes. Best viewed in color.} \label{fig:disjoint_manufacturers_resnet34} \end{figure}
\begin{figure} \centering \includegraphics[width=0.8\linewidth]{partial_manufacturers_resnet34} \caption{Test accuracy plot showing the distribution of predicted labels among correct outputs, for a ResNet34 trained on the Partial training set, in which camera type is uncorrelated with class label but only iPhone and Samsung images are present.
Overall accuracy across all camera types is close to that achieved when trained on the full dataset, with little bias in favor of familiar camera types. This implies that in the absence of camera / label correlations, the model learns robust features for manufacturer classification, which generalize well to images from unseen cameras. Best viewed in color.} \label{fig:partial_manufacturers_resnet34} \end{figure}
We can confirm that this drop in accuracy is due to camera bias by observing the model's behavior across camera types in the test set. As Figure~\ref{fig:disjoint_manufacturers_resnet34} shows, a ResNet34 trained on the Disjoint dataset predicts Manufacturer 1 exclusively on test images acquired by iPhone or Samsung cameras, and Manufacturer 2 overwhelmingly on Huawei and Redmi images. Similar behavior is observed in the other models.
\begin{figure*} \centering \includegraphics[width=\linewidth]{adversarial_manufacturers} \caption{Adversarial perturbations applied to two images, classified by a ResNet34 model trained on the Balanced dataset (left) and the Disjoint dataset (right). The left image in each pair shows the input image with the perturbation amplified for visibility and overlaid on top, while the right image shows just the amplified perturbation itself. Strikingly different perturbations to the same image are observed depending on whether the model was trained without camera / label correlations (Balanced) or with them (Disjoint). Best viewed digitally, zoomed in.} \label{fig:adversarial_manufacturers} \end{figure*}
It is interesting to note that AlexNet and VGG16 both score higher test accuracy after training on Disjoint than their more modern counterparts, ResNet34, ResNet101 and InceptionV3. One possible explanation for this is that AlexNet and VGG16 both use fully connected layers to produce their final output, while the more recent networks are fully convolutional, i.e., using a global average pooling layer to convert the final feature maps into a fixed size vector that is then classified by a single fully connected output layer. Fully connected layers have one weight per input unit (for each output unit) and hence require fixed-size input, whereas convolutional layers can process arbitrarily sized input by using the same convolutional weights at every location in the input. AlexNet and VGG16 therefore require input images to be downsampled to $224 \times 224$, whereas the fully convolutional networks receive the full $1024 \times 1024$ images (this architectural difference is sketched in code at the end of this subsection). This suggests that camera identification exploits high frequency features (as opposed to geometric distortions caused by lens variations), which are partly destroyed during downsampling, thus limiting the extent to which AlexNet and VGG16 can exploit them. When training on the Partial dataset, where only iPhone and Samsung images are present but no camera/label correlation exists, test accuracy is broadly similar to training on the full (Balanced) dataset. Not only is accuracy high, but as Figure~\ref{fig:partial_manufacturers_resnet34} shows, the model performs well on the unseen cameras. This implies that in the absence of camera/label correlation, the model learns a robust classification rule that is unaffected by camera type.
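To make the preceding architectural point concrete, the following minimal PyTorch sketch (an illustration of the general mechanism, not code from our experiments) shows why a network ending in global average pooling accepts inputs of any resolution with the same weights, whereas a fully connected head fixes the input size:
\begin{verbatim}
import torch
import torch.nn as nn

# Toy fully convolutional classifier: adaptive (global) average pooling
# collapses feature maps of any spatial size to a fixed-length vector.
class TinyFCN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # any H x W -> 1 x 1
        self.fc = nn.Linear(32, num_classes)  # single output layer

    def forward(self, x):
        return self.fc(self.pool(self.features(x)).flatten(1))

model = TinyFCN()
for size in (224, 1024):  # same weights, two input resolutions
    print(size, model(torch.randn(1, 3, size, size)).shape)

# A fully connected head such as nn.Linear(32 * 56 * 56, num_classes)
# would instead fail on any input whose final feature maps are not 56 x 56.
\end{verbatim}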
\subsection{Adversarial Attacks on Manufacturer Classifiers} To gain some insight into the effect that camera / label correlations have on a trained model in terms of the patterns it learns to recognize, we perform adversarial attacks on trained models and visualize the perturbations that flip a trained model's judgement of an image from Manufacturer 2 to 1. Adversarial attacks are small perturbations to input images, imperceptible to the human eye, which nonetheless are sufficient to fool a model into classifying that image as whatever the attacker wishes \cite{szegedy2013intriguing}. They are easily generated by gradient ascent in image space, backpropagating the negative log likelihood of the target label into the image pixels and taking small steps in the direction of the resulting image gradient until the model's prediction favors our target (e.g. see Nguyen et al. \cite{nguyen2015deep}). By performing this process using the same image but different models and comparing the resulting image perturbations, we can learn something about how those models differ. Figure~\ref{fig:adversarial_manufacturers} shows that strikingly different adversarial perturbations are induced depending on which dataset the model was trained on. Perturbations that fool the Balanced model are focused around the batch code and other visible features of the bottle, such as the plastic seam, whereas those that fool the Disjoint model show a characteristic pink / green banding pattern in flat, featureless areas of the image. A distinct rainbow-like band of perturbation is also visible along the tops of images classified by the Disjoint model; these banding patterns at the tops and in featureless areas of images appear regardless of which input image the attack is performed on. Adversarial perturbations, when amplified for visibility, usually look like uninterpretable noise bearing little apparent resemblance to the target image class (e.g. Goodfellow et al. \cite{goodfellow2014explaining}), so it is interesting to see so much structure in our case. The appearance of banding patterns in flat regions provides some evidence against chromatic aberration, which should manifest at the edges of objects.
\subsection{Classification of Binary Masks} As discussed in Section~\ref{sec:lit_review}, there is a finite set of image features that may be used to infer the camera from which an image originates. Since most of these features relate to color distribution or high frequency detail (i.e. patterns detectable within a small window), it seems likely that removal of these features would render camera identification impossible, and hence resolve the domain bias issues. To do this while preserving features that are likely relevant for robust manufacturer detection, we apply local mean thresholding to the images. This yields a binary image that effectively segments the dots of the batch codes while removing all elements of color and texture (see Figure~\ref{fig:dotseg}). Note that lens distortion is typically more prevalent near the edges of images. As Table~\ref{tab:dotseg} shows, training on binary segmented images does not yield usable results on the manufacturer detection or camera classification tasks; in both cases, the test accuracy is close to the level expected of random guessing. As expected, we also observed no significant correlation between manufacturer classification test accuracy and camera type when training on the Disjoint dataset with binary thresholding.
This largely rules out lens distortion or other large-scale geometric artifacts as the source of camera bias, since these distortions would cause the dots to move and thus be visible in the binary thresholded images.
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{dotseg} \caption{A bottle image with local mean thresholding applied, segmenting the batchcode dots. Origin camera classification does not work on such images, indicating that models use something other than the shape and position of the dots to classify cameras.} \label{fig:dotseg} \end{figure}
\newcommand\T{\rule{0pt}{2.9ex}} \newcommand\B{\rule[-1.2ex]{0pt}{0pt}}
\begin{table}[t!] \centering \begin{tabular}{l c c} \toprule \multicolumn{1}{l}{\multirow{2}{*}{\T\B\T\textbf{Model}}} & \multicolumn{2}{c}{\textbf{Test Accuracy}\B} \\ \cline{2-3} & Manufacturers & Cameras \T\B \\ \hline \hline ResNet34 & 0.489 & 0.229\T \\ ResNet101 & 0.530 & 0.186 \\ InceptionV3 & 0.525 & 0.270 \\ AlexNet & 0.499 & 0.214 \\ VGG16 & 0.616 & 0.384\B \\ \bottomrule \end{tabular} \vskip 3pt \caption{Manufacturer and camera classification accuracy on the test set when trained (and tested) on binary segmented images (see Figure~\ref{fig:dotseg}).} \label{tab:dotseg} \end{table}
\subsection{Classification of Color Jittered Images} \label{sec:colorjitter}
\begin{figure} \centering \includegraphics[width=\linewidth]{augmented} \caption{Color jitter augmentations applied to a single image (original in top left). Augmenting our training images with basic color distortions removes any correlations that may exist between class label and white balance, saturation, and hue.} \label{fig:augmented} \end{figure}
One hypothesis is that different cameras have subtly different color correction / white balancing settings, which a CNN could very easily detect and exploit, especially since the images were acquired in laboratory conditions with controlled lighting. We test this hypothesis by randomizing the hue, saturation, contrast and brightness of the images at training time, thus removing any correlation between camera type and global image color statistics. Table~\ref{tab:colorjitter} shows that even with the high level of color randomization used (see Figure~\ref{fig:augmented}), camera and manufacturer classification test accuracy remains high. Accuracy is somewhat diminished for the AlexNet and VGG16 architectures, which require downsampled images as input, suggesting that color statistics remain useful to these models when high frequency features are less available. We also train models to recognize cameras from grayscale images, achieving test accuracies roughly identical to those in Table~\ref{tab:colorjitter}.
\begin{table}[t!] \centering \begin{tabular}{l c c} \toprule \multicolumn{1}{l}{\multirow{2}{*}{\T\B\T\textbf{Model}}} & \multicolumn{2}{c}{\textbf{Test Accuracy}\B} \\ \cline{2-3} & Manufacturers & Cameras \T\B \\ \hline \hline ResNet34 & 0.975 & 0.992\T \\ ResNet101 & 0.961 & 0.995 \\ InceptionV3 & 0.974 & 0.998 \\ AlexNet & 0.923 & 0.768 \\ VGG16 & 0.972 & 0.883\B \\ \bottomrule \end{tabular} \vskip 3pt \caption{Manufacturer and camera classification test set accuracy when trained on images with randomized hue, saturation, contrast and brightness.
Robust camera classification accuracy implies that image color statistics are not necessary for camera inference.} \label{tab:colorjitter} \end{table}
\subsection{Classifying Cameras from Small Image Patches} With lens deformation and color statistics ruled out as camera-identifying features, we turn our attention towards high frequency features. As discussed in Section~\ref{sec:lit_review}, such features could be introduced by various forms of fixed sensor pattern noise, dust particles stuck to the lens, and image processing / compression algorithms performed automatically by the camera. We investigate the role of high frequency features by training CNNs to classify cameras given only a random $32 \times 32$ crop of our original input images (upsampled to $224 \times 224$ for AlexNet and VGG16). As Table~\ref{tab:crop32} shows, camera identification accuracy remains surprisingly robust even when input is restricted to a $32 \times 32$ window. This strongly implies that high frequency features are sufficient for camera identification, and confirms that lens distortion is not required. However, it remains unclear whether these features are localized to certain regions of the image or present uniformly. Figure~\ref{fig:heatmap} shows an accuracy heatmap, constructed by repeatedly sampling $32 \times 32$ crops from our training set and drawing a white square at the location of each correctly classified crop. This shows that classification accuracy is independent of the location of the crop, at least when averaged over the whole dataset. This implies that whatever pattern is being exploited occurs uniformly across the images on average. Figure~\ref{fig:singleimage_heatmap} shows how classification accuracy for $32 \times 32$ crops varies across five individual images, one from each camera.
\begin{table}[h!] \centering \begin{tabular}{c c} \toprule \textbf{Model} & \textbf{Test Accuracy}\\ \midrule \midrule ResNet34 & 0.665 \\ ResNet101 & 0.681 \\ InceptionV3 & 0.948 \\ AlexNet & 0.770 \\ VGG16 & 0.872 \\ \bottomrule \end{tabular} \vskip 3pt \caption{Camera classification test accuracy when trained only on random $32 \times 32$ crops of the input data.} \label{tab:crop32} \end{table}
\begin{figure} \centering \includegraphics[width=0.5\linewidth]{crop32_heatmap} \caption{Heatmap representing relative classification accuracy of $32 \times 32$ crops at different locations in the image, averaged across images from the whole dataset. The lack of bias toward any particular part of the image implies that camera predictive patterns are present uniformly across the images.} \label{fig:heatmap} \end{figure}
\begin{figure} \centering \includegraphics[width=\linewidth]{crop32_singleimage} \caption{Heatmaps representing the camera identification accuracy on $32 \times 32$ patches at different locations in single images. An image from each camera is shown, and the predictions are all from the same ResNet34 checkpoint. The model is able to correctly classify patches from most locations on most images, but some significant dark patches occur.} \label{fig:singleimage_heatmap} \end{figure} \vskip -30pt
\subsection{Generalizing from Left Field of View to Right} Pixel non-uniformity (PNU) noise, as described in \cite{lukavs2006digital}, is a high frequency noise fingerprint, manifested as randomly varying sensitivities of individual sensor pixels to light. We would expect such a noise fingerprint to be non-repeating, that is, the noise pattern in one part of an image should be different to that in other parts.
If the models are learning to recognize cameras by recognizing their PNU noise fingerprints, they should therefore be incapable of recognizing cameras from patches of noise fingerprint they have not encountered during training. We therefore test our models' reliance on PNU noise by training them on only the left halves of our Balanced training set images, and testing on the right halves. If they are reliant on PNU noise then generalization to the right halves of images should be poor. As Table~\ref{tab:leftside} shows, this is not the case; PNU noise is therefore unlikely to be the primary source of camera-identifying information.
\begin{table}[h!] \centering \begin{tabular}{c c} \toprule \textbf{Model} & \textbf{Test Accuracy}\\ \midrule \midrule ResNet34 & 0.992 \\ ResNet101 & 0.889 \\ InceptionV3 & 0.999 \\ AlexNet & 0.818 \\ VGG16 & 0.851 \\ \bottomrule \end{tabular} \vskip 3pt \caption{Camera classification accuracy on right halves of images after training on the left halves. Strong generalization to an unseen area of the training images implies that PNU noise fingerprints of the sort discussed by Lukas et al. \cite{lukavs2006digital} are unlikely to be the mechanism by which CNNs are recognizing cameras, because the noise fingerprint on the right side of the images will be different to that on the left side.} \label{tab:leftside} \end{table}
\section{Introduction} \label{sec:intro} Convolutional neural networks (CNNs) sometimes learn to satisfy their objective functions in ways we do not intend, typically by exploiting some subtle idiosyncrasy in the training data. For example, in \cite{fong2017interpretable} a CNN trained on ImageNet was found to be recognizing chocolate sauce pots by the presence of a spoon, because many of the chocolate sauce pots in the ImageNet dataset are indeed accompanied by a silver spoon. While effective at minimizing the loss function at training time, these clever exploits usually result in the model becoming brittle, as it is relying on characteristics that are specific to the training set and are not representative of the wider world. This tends to manifest as domain bias, whereby the model fails to generalize well to instances from other datasets with different idiosyncrasies. We investigate a real world applied computer vision problem in which severe domain bias was caused by strong correlations between camera model and class label. Since the training dataset consists of two classes acquired with different cameras, the model learns to predict the class label by recognizing the camera that captured the image. Since the sets of cameras used to acquire the two classes are non-intersecting, this is sufficient to achieve perfect training accuracy, whilst learning nothing about the task itself. Our task has characteristics typical of industrial deep vision projects, and we believe the lessons learned will be useful to many deep learning practitioners working on similar projects. By illuminating the sometimes counterintuitive means by which CNNs can classify images, our work is also relevant to the ongoing quest for algorithmic transparency and accountability in machine learning. The task itself is to discriminate between shampoo bottles from two different manufacturers, which are distinguished only by very small differences in the printing of a batch code on the underside of the bottle.
These differences are caused by different industrial printers being used, are independent of the actual character string that is printed, and are subtle enough that detecting them by eye is difficult even for trained experts. This therefore constitutes a fine grained binary classification problem, in which the intra-class variance is high relative to the inter-class variance. Fine grained classification is difficult, so one might intuitively expect a model to cheat more often on such tasks, if the correct decision function is more complex than a cheating rule. On the other hand, in this instance the exploit of recognizing cameras is also a fine grained classification task, and in general it is not obvious which tasks are ``harder'' for a CNN to learn. CNNs have been known to cheat by detecting patterns which are barely perceptible to humans, such as chromatic aberration \cite{doersch2015unsupervised}. CycleGAN even cheats its reconstruction loss by inserting steganographic codes into its converted images, which it then uses to reconstruct the originals \cite{chu2017cyclegan}. In this paper, we closely examine an instance of a model cheating on a real world visual classification task, and attempt to answer the following questions: \begin{enumerate} \item Is it possible for a CNN to recognize camera types when explicitly trained to do so? \item Can we prove that the same CNN cheats on the task of manufacturer classification by recognizing camera types? \item Does the propensity toward cheating depend on model architecture? \item How exactly does a CNN recognize camera types? \end{enumerate} Section~\ref{sec:lit_review} reviews relevant literature in fine grained classification and overfitting, while Section~\ref{sec:dataset} describes our dataset and classification task in detail. Section~\ref{sec:experiments} investigates the above questions systematically with a series of experiments, and in Section~\ref{sec:conclusion} we discuss our findings and draw conclusions.
\section{Previous Work} \label{sec:lit_review} Two major branches of literature are relevant to our work: source camera identification from images, and understanding deep neural networks.
\subsection{Camera / Image Sensor Pattern Identification} Because our work concerns accidental camera detection, a brief review of deliberate camera detection methods is warranted, as it may shed some light on how our model learns to cheat. Many techniques have been developed to trace digital photos back to their camera of origin, primarily by the digital forensics community \cite{fridrich2009digital}. Such techniques can be used to detect doctored images or videos, where images or frames from different cameras are spliced together \cite{cozzolino2019extracting,cozzolino2019noiseprint}. Most of these methods revolve around extracting a unique sensor noise fingerprint from the image, and matching it against the reference patterns of known cameras. Since sensor noise is a complex phenomenon with multiple sources (e.g. photonic noise, lens imperfections, dust particles, dark currents, non-uniform pixel sensitivity), there are many ways of doing this. Geradts et al. \cite{geradts2001methods} identify cameras by their unique patterns of dead and hot pixels; however, not all cameras have dead pixels, and some remove them via post-processing. Kharrazi et al. \cite{kharrazi2004blind} train an SVM to recognize five different cameras based on hand-engineered feature vectors extracted from images.
This approach achieves up to $95\%$ classification accuracy, but this is too low for forensic purposes. Choi et al. \cite{choi2006source} take a similar SVM based approach, additionally showing that radial lens distortion is a useful feature for identifying cameras. Unlike noise based approaches, lens distortion can identify models of camera but not individuals. Kurosawa et al. \cite{kurosawa1999ccd} recognize cameras by dark current noise, which is a small, constant signal emitted by a CCD, varying randomly from pixel to pixel. Although every digital camera has such a noise pattern and it will always be unique, it can only be acquired from dark frames where no light strikes the sensor, and is only a small component of sensor noise. Lukas et al. \cite{lukavs2006digital} propose a more robust method that exploits the non-uniform sensitivity to light among sensor pixels, which is a much stronger component and does not require dark frames to measure. Another feature of consumer cameras that has thwarted a previous deep learning experiment \cite{doersch2015unsupervised} is chromatic aberration, in which different wavelengths of light are refracted by different amounts by the lens. This results in colored fringes around the edges of objects. This too has been used in digital forensics \cite{johnson2006exposing}. Recently, CNN-based methods have shown great potential in digital camera identification from images using standard supervised training \cite{tuama2016camera,bondi2017preliminary,yao2018graphics}, proving that CNNs are indeed able to infer which camera acquired a digital image.
\subsection{Understanding Deep Convolutional Neural Networks} CNNs are often seen as something of a black box, with no clear consensus as to what information they are using to reach their decisions, how that information is represented internally, or what the specific roles of their individual components are. Attempts to answer these questions can be divided into two strands: feature visualization and attribution. Feature visualization aims to clarify the function of neurons or channels, by synthesizing images that maximize their activation \cite{olah2017feature}. Simonyan et al. \cite{simonyan2013deep} investigate what patterns CNNs look for in each image class by performing gradient ascent in image space, to maximize the activation of an output class neuron. Yosinski et al. \cite{yosinski2015understanding} do the same but with better regularization, producing more natural looking images. Mahendran et al. \cite{mahendran2015understanding} treat intermediate CNN representations as functions which they can invert via gradient ascent in image space. This yields images that the CNN maps to the same representation as the original image, implying that they ``look the same'' to the CNN. Nguyen et al. \cite{nguyen2016synthesizing} find natural looking images that maximally activate feature maps by searching the manifold learned by a generative adversarial network, rather than the full image space. Fong et al. \cite{fong2018net2vec} show evidence that, far from feature maps learning separate, well-defined concepts, the relationship between feature maps and semantic concepts is many-to-many, with each feature map involved in the detection of several concepts and most concepts activating multiple feature maps. Attribution investigates which parts of an image contribute most to a CNN's decision, often expressed as ``where the model is looking''.
Zeiler and Fergus \cite{zeiler2014visualizing} propose two methods to this end: occlusion mapping, in which the importance of an image patch is measured as the reduction in class probability when it is obscured, and backpropagation of class probability gradients into image pixels. Both of these methods yield saliency maps showing which parts of the image have the greatest effect on the output when changed, corresponding to the notion of how much they contributed to the network's decision. Another popular approach is guided backpropagation \cite{springenberg2014striving}, which refines the gradient saliency maps of \cite{zeiler2014visualizing} by zeroing out negative gradients at every backpropagation step, so as to focus only on image parts that contribute positively to a particular class. A much faster alternative to occlusion mapping (which must run a forward pass for each test patch) is class activation mapping (CAM) \cite{zhou2016learning}, which uses final layer feature maps as saliency maps, weighted and summed according to the weight of their connection to the class neuron in question. This approach requires that the output layer takes its input directly from mean pooled feature maps (as is the case with GoogLeNet and ResNet but not for networks with fully connected layers such as AlexNet). Selvaraju et al. \cite{selvaraju2017grad} address this by using mean pooled gradients as a proxy for direct connection weights, allowing feature maps from any layer in any network to be used as saliency maps. Another technique by Fong et al. \cite{fong2017interpretable} learns a mask that causes a model to misclassify an image while obscuring the smallest area possible.
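To illustrate the first of these attribution techniques, the sketch below gives a hypothetical implementation of occlusion mapping in the spirit of Zeiler and Fergus \cite{zeiler2014visualizing}; the function name and default patch parameters are our own, and it assumes a PyTorch classifier that takes a batched image tensor:
\begin{verbatim}
import torch

def occlusion_map(model, image, target, patch=32, stride=16, fill=0.0):
    # Slide a uniform patch over the image and record the drop in the
    # target-class probability at each location; large drops indicate
    # regions the model relies on for its decision.
    model.eval()
    _, h, w = image.shape
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target]
        rows = (h - patch) // stride + 1
        cols = (w - patch) // stride + 1
        heat = torch.zeros(rows, cols)
        for i in range(rows):
            for j in range(cols):
                occluded = image.clone()
                y, x = i * stride, j * stride
                occluded[:, y:y + patch, x:x + patch] = fill
                prob = torch.softmax(model(occluded.unsqueeze(0)),
                                     dim=1)[0, target]
                heat[i, j] = base - prob
    return heat
\end{verbatim}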
{ "timestamp": "2020-07-20T02:01:58", "yymm": "2007", "arxiv_id": "2007.08574", "language": "en", "url": "https://arxiv.org/abs/2007.08574" }
\section{Convolutional Neural Network-based Principal Component Analysis (CNN-PCA)} \label{sec-methodology} In this section, we first give a brief overview of PCA and the 2D CNN-PCA method. The 3D procedure is then introduced and described in detail.
\subsection{PCA Representation} \label{sec-pca} We let the vector $\mathbf{m} \in \mathbb{R}^{N_{\text{c}}}$, where $N_{\text{c}}$ is the number of cells or grid blocks, denote the set of geological variables (e.g., facies type in every cell) that characterize the geomodel. Parameterization techniques map $\mathbf{m}$ to a new lower-dimensional variable $\boldsymbol{\xi} \in \mathbb{R}^{l}$, where $l < N_{\text{c}}$ is the reduced dimension. As discussed in detail in \cite{Liu2019}, PCA applies a linear mapping of $\mathbf{m}$ onto a set of principal components. To construct a PCA representation, an ensemble of $N_{\text{r}}$ models is generated using a geomodeling tool such as Petrel \citep{manual2007petrel}. These models are assembled into a centered data matrix $Y \in \mathbb{R}^{N_{\text{c}} \times N_{\text{r}}}$, \begin{equation} \label{eq_center_data_matrix} Y = \frac{1}{\sqrt{N_{\text{r}} - 1}}[\mathbf{m}_\text{gm}^1 - \bar{\mathbf{m}}_\text{gm} \quad \mathbf{m}_\text{gm}^2 - \bar{\mathbf{m}}_\text{gm} \quad \cdots \quad \mathbf{m}_\text{gm}^{N_{\text{r}}} - \bar{\mathbf{m}}_\text{gm}], \end{equation} where $\mathbf{m}_\text{gm}^i\in \mathbb{R}^{N_{\text{c}}}$ represents realization $i$, $\bar{\mathbf{m}}_\text{gm}\in \mathbb{R}^{N_{\text{c}}}$ is the mean of the $N_{\text{r}}$ realizations, and the subscript `gm' indicates that these realizations are generated using geomodeling software. A singular value decomposition of $Y$ gives $Y = U\Sigma V^T$, where $U \in \mathbb{R}^{N_{\text{c}} \times N_{\text{r}}}$ and $V \in \mathbb{R}^{N_{\text{r}} \times N_{\text{r}}}$ are the left and right singular matrices and $\Sigma \in \mathbb{R}^{N_{\text{r}} \times N_{\text{r}}}$ is a diagonal matrix containing singular values. A new PCA model $\mathbf{m}_\text{pca} \in \mathbb{R}^{N_{\text{c}}}$ can be generated as follows, \begin{equation} \label{eq_pca} \mathbf{m}_\text{pca} = \bar{\mathbf{m}}_\text{gm} + U_l\Sigma_l \boldsymbol{\xi}_l, \end{equation} where $U_l \in \mathbb{R}^{N_{\text{c}} \times l}$ and $\Sigma_l \in \mathbb{R}^{l \times l}$ contain the leading left singular vectors and singular values, respectively. Ideally, it is the case that $l \ll N_{\text{c}}$. By sampling each component of $\boldsymbol{\xi}_l$ independently from the standard normal distribution and applying Eq.~\ref{eq_pca}, we can generate new PCA models. Besides generating new models, we can also apply PCA to approximately reconstruct realizations of the original models. We will see later that this is required for the supervised-learning-based loss function used to train 3D CNN-PCA. Specifically, we can project each of the $N_{\text{r}}$ realizations of $\mathbf{m}_\text{gm}$ onto the principal components via \begin{equation} \label{eq_pca_proj} \hat{\boldsymbol{\xi}}_l^i = \Sigma^{-1}_lU^T_l(\mathbf{m}_\text{gm}^i - \bar{\mathbf{m}}_\text{gm}), \hspace{8px} i=1,...,N_{\text{r}} . \end{equation} Here $\hat{\boldsymbol{\xi}}_l^i$ denotes low-dimensional variables obtained through projection. The `hat' is added to differentiate $\hat{\boldsymbol{\xi}}_l^i$ from low-dimensional variables obtained through sampling ($\boldsymbol{\xi}_l$).
We can then approximately reconstruct $\mathbf{m}_\text{gm}^i$ as \begin{equation} \label{eq_pca_recon} \hat{\mathbf{m}}_\text{pca}^i = \bar{\mathbf{m}}_\text{gm} + U_l \Sigma_l \hat{\boldsymbol{\xi}}_l^i = \bar{\mathbf{m}}_\text{gm} + U_l U_l^T (\mathbf{m}_\text{gm}^i - \bar{\mathbf{m}}_\text{gm}), \hspace{8px} i=1,...,N_{\text{r}}, \end{equation} where $\hat{\mathbf{m}}_\text{pca}^i$ is referred to as a reconstructed PCA model. The larger the reduced dimension $l$, the closer $\hat{\mathbf{m}}_\text{pca}^i$ will be to $\mathbf{m}_\text{gm}^i$. If all $N_{\text{r}} - 1$ nonzero singular values are retained, $\hat{\mathbf{m}}_\text{pca}^i$ will exactly match $\mathbf{m}_\text{gm}^i$. For systems where $\mathbf{m}_\text{gm}$ follows a multi-Gaussian distribution, the spatial correlation of $\mathbf{m}_\text{gm}$ can be fully characterized by two-point correlations, i.e., covariance. For such systems, $\mathbf{m}_\text{pca}$ (constructed using Eq.~\ref{eq_pca}) will essentially preserve the spatial correlations in $\mathbf{m}_\text{gm}$, assuming $l$ is sufficiently large. For systems with complex geology, where $\mathbf{m}_\text{gm}$ follows a non-Gaussian distribution, the spatial correlation of $\mathbf{m}_\text{gm}$ is characterized by multiple-point statistics. In such cases, the spatial structure of $\mathbf{m}_\text{pca}$ can deviate significantly from that of $\mathbf{m}_\text{gm}$, meaning the direct use of Eq.~\ref{eq_pca} is not appropriate.
\subsection{2D CNN-PCA Procedure} \label{Subsect_cnn_pca} In CNN-PCA, we post-process PCA models using a deep convolutional neural network to achieve better correspondence with the underlying geomodels. This process can be represented as \begin{equation} \mathbf{m}_\text{cnnpca} = f_W(\mathbf{m}_\text{pca}), \end{equation} where $f_W$ denotes the model transform net, the subscript $W$ indicates the trainable parameters within the network, and $\mathbf{m}_\text{cnnpca} \in \mathbb{R}^{N_{\text{c}}}$ is the resulting geomodel. For 2D models, the training loss ($L^i$) for $f_W$, for each training sample $i$, includes a content loss ($L_\text{c}^i$) and a style loss ($L_\text{s}^i$): \begin{equation} \label{eq:2Dloss} L^i = L_\text{c}^i(f_W(\mathbf{m}_\text{pca}^i), \mathbf{m}_\text{pca}^i) + L_\text{s}^i(f_W(\mathbf{m}_\text{pca}^i), M_\text{ref}), \hspace{8px} i=1,...,N_{\text{t}}, \end{equation} where $\mathbf{m}_\text{pca}^i, i=1,...,N_{\text{t}}$ is a training set (of size $N_{\text{t}}$) of random new PCA models, and $M_\text{ref}$ is a reference model (e.g., training image or an original realization $\mathbf{m}_\text{gm}$). Here $L_\text{c}^i$ quantifies the `closeness' of $f_W(\mathbf{m}_\text{pca}^i)$ to $\mathbf{m}_\text{pca}^i$, and acts to ensure that the post-processed model resembles (to some extent) the input PCA model. The style loss $L_\text{s}^i$ quantifies the resemblance of $f_W(\mathbf{m}_\text{pca}^i)$ to $M_\text{ref}$ in terms of spatial correlation structure. As noted in Section~\ref{sec-pca}, the spatial correlation of non-Gaussian models can be characterized by high-order multiple-point statistics. It is not practical, however, to compute such quantities directly. Therefore, $L_\text{s}^i$ in Eq.~\ref{eq:2Dloss} is not based on high-order spatial statistics but rather on low-order statistics of features extracted from another pretrained CNN, referred to as the loss net \citep{Gatys2015, Johnson2016}.
More specifically, we feed the 2D models $\mathbf{m}_\text{gm}$, $\mathbf{m}_\text{pca}$ and $f_W(\mathbf{m}_\text{pca})$ through the loss net and extract intermediate feature matrices $F_k(\mathbf{m})$ from different layers $k \in \kappa$ of the loss net. The uncentered covariance matrices, called Gram matrices, are given by $G_k(\mathbf{m}) = F_k(\mathbf{m})F_k(\mathbf{m})^T/(N_{\text{c},k}N_{\text{z},k})$, where $N_{\text{c},k}$ and $N_{\text{z},k}$ are the dimensions of $F_k(\mathbf{m})$. These matrices have been shown to provide an effective set of metrics for quantifying the multipoint correlation structure of 2D models. The style loss is thus based on the differences between $f_W(\mathbf{m}_\text{pca}^i)$ and reference model $M_\text{ref}$ in terms of their corresponding Gram matrices. This is expressed as \begin{equation} \label{eq-ls} L_\text{s}^i(f_W(\mathbf{m}_\text{pca}^i), M_\text{ref}) = \sum_{k \in \kappa}\dfrac{1}{N_{\text{z},k}^2}||G_k(f_W(\mathbf{m}_\text{pca}^i)) - G_k(M_\text{ref})||, \hspace{8px} i=1,...,N_{\text{t}}. \end{equation} The content loss is based on the difference between the feature matrices for $f_W(\mathbf{m}_\text{pca}^i)$ and $\mathbf{m}_\text{pca}^i$ from a particular layer in the network. For the 2D models considered in \cite{Liu2019} and \cite{Liu2020}, VGG net was used as the loss net \citep{Simonyan2015a}. This is an instance of transfer learning: the VGG net, pretrained for image classification, was shown to be effective at extracting features from 2D geomodels. However, since VGG net only accepts image-like 2D input, it cannot be directly used for extracting features from 3D geological models. Our extension of CNN-PCA to 3D involves two main components: the replacement of VGG net with 3D CNN models, and the use of a new loss term based on supervised learning. We now describe the 3D CNN-PCA formulation.
\subsection{3D CNN-PCA Formulation} We experimented with several pretrained 3D CNNs for extracting features from 3D geomodels, which are required to compute style loss in 3D CNN-PCA. These include VoxNet \citep{Maturana2015} and LightNet \citep{Zhi2017} for 3D object recognition, and C3D net \citep{Tran2015} for classification of video clips in the sports-1M dataset \citep{Karpathy2014}. These CNNs accept input as dense 3D tensors with either three spatial dimensions or two spatial dimensions and a temporal dimension. Thus all are compatible for use with 3D geomodels. After numerical experimentation with these various networks, we found the C3D net to perform the best, based on visual inspection of the geomodels generated by the trained model transform nets. Therefore, we use the C3D net in our 3D CNN-PCA formulation. We observed, however, that the Gram matrices extracted from the C3D net were not as effective as they were with VGG net in 2D. This is likely due to the higher dimensionality and larger degree of variability of the 3D geomodels relative to those in 2D. We therefore considered additional treatments, and found that the use of a new supervised-learning-based loss term provides enhanced 3D CNN-PCA geomodels. We now describe this procedure. As discussed in Section~\ref{sec-pca}, we can approximately reconstruct realizations of the original model $\mathbf{m}_\text{gm}$ with PCA using Eq.~\ref{eq_pca_recon}. There is however reconstruction error between the two sets of models. The supervised learning component entails training the model transform net to minimize an appropriately defined reconstruction error.
Recall that when the trained model transform net is used at test time (e.g., to generate new random models or to calibrate geomodels during history matching), this entails post-processing new PCA models $\mathbf{m}_\text{pca}(\boldsymbol{\xi}_l)$. Importantly, these new models involve $\boldsymbol{\xi}_l$ rather than $\hat{\boldsymbol{\xi}}_l^i$. In other words, at test time we do not have corresponding pairs of $(\mathbf{m}_\text{gm}^i, \hat{\mathbf{m}}_\text{pca}^i)$. Thus, during training, it is beneficial to partially `disrupt' the direct correspondence that exists between each pair of $(\mathbf{m}_\text{gm}^i, \hat{\mathbf{m}}_\text{pca}^i)$. An effective way of accomplishing this is to perturb the reconstructed PCA models used in training. We proceed by adding random noise to the $\hat{\boldsymbol{\xi}}_l^i$ in Eq.~\ref{eq_pca_proj}; i.e., \begin{equation} \Tilde{\boldsymbol{\xi}}_l^i = \hat{\boldsymbol{\xi}}_l^i + \boldsymbol{\epsilon}^i = \Sigma^{-1}_lU^T_l(\mathbf{m}_\text{gm}^i - \bar{\mathbf{m}}_\text{gm}) + \boldsymbol{\epsilon}^i, \hspace{8px} i=1,...,N_{\text{r}}, \label{eq_pca_proj_perturb} \end{equation} where $\Tilde{\boldsymbol{\xi}}_l^i$ denotes the perturbed low-dimensional variable and $\boldsymbol{\epsilon}^i$ is a perturbation term. Then we can approximately reconstruct $\mathbf{m}_\text{gm}^i$ with \begin{equation} \label{eq_pca_recon_perturb} \Tilde{\mathbf{m}}_\text{pca}^i = \bar{\mathbf{m}}_\text{gm} + U_l\Sigma_l \Tilde{\boldsymbol{\xi}}_l^i, \hspace{8px} i=1,...,N_{\text{r}}. \end{equation} We now describe how we determine $l$ and specify $\boldsymbol{\epsilon}$. To find $l$, we apply the `energy' criterion described in \cite{Sarma2006} and \cite{Vo2014}. This entails first determining the total energy $E_\text{t} = \sum_{i=1}^{N_{\text{r}}-1}(\sigma^i)^2$, where $\sigma^i$ are the singular values. The fraction of energy captured by the $l$ leading singular values is given by $\sum_{i=1}^{l}(\sigma^i)^2 / E_\text{t}$. Throughout this study, we determine $l$ such that the $l$ leading singular values explain $\sim$80\% of the total energy. For the components of the perturbation term $\boldsymbol{\epsilon}$, we set $\epsilon_j = 0$ for $j=1,...,p$, where $p$ is determined such that the first $p$ leading singular values explain $\sim$40\% of the total energy, and $\epsilon_j \sim N(0,1)$ for $j=p+1,...,l$. With this treatment, we perturb only the small-scale features in $\Tilde{\mathbf{m}}_\text{pca}$. This approach was found to be effective as it disrupts the precise correspondence between $(\mathbf{m}_\text{gm}^i, \hat{\mathbf{m}}_\text{pca}^i)$ pairs, while maintaining the locations of major geological features. We use the same $N_{\text{r}}$ realizations of $\mathbf{m}_\text{gm}$ that were used to construct the PCA representation to generate $\Tilde{\mathbf{m}}_\text{pca}$. We reiterate that Eqs.~\ref{eq_pca_proj_perturb} and \ref{eq_pca_recon_perturb} are used here (Eqs.~\ref{eq_pca_proj} and \ref{eq_pca_recon} are not applied). The supervised-learning loss function for each pair of $(\mathbf{m}_\text{gm}^i, \Tilde{\mathbf{m}}_\text{pca}^i)$, which we refer to as the reconstruction loss, is given by \begin{equation} \label{eq:3D_rec_loss} L_\text{rec}^i(\mathbf{m}_\text{gm}^i, f_W(\Tilde{\mathbf{m}}_\text{pca}^i)) = ||\mathbf{m}_\text{gm}^i- f_W(\Tilde{\mathbf{m}}_\text{pca}^i)||_1, \hspace{8px} i=1,...,N_{\text{r}}. \end{equation} Note that in 3D CNN-PCA we take $N_{\text{t}}=N_{\text{r}}$ in all cases.
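For concreteness, the construction of a perturbed reconstructed PCA model via Eqs.~\ref{eq_pca_proj_perturb} and \ref{eq_pca_recon_perturb} can be sketched in a few lines of NumPy; this is a schematic illustration with hypothetical variable names, not our actual workflow code:
\begin{verbatim}
import numpy as np

def perturbed_pca_reconstruction(m_gm, m_bar, U_l, s_l, p, rng=np.random):
    # Project a realization onto the l leading principal components
    # (s_l holds the l leading singular values), perturb only the
    # trailing small-scale components p+1..l with N(0,1) noise, and
    # map back to model space.
    xi_hat = (U_l.T @ (m_gm - m_bar)) / s_l         # projection
    eps = np.zeros_like(xi_hat)
    eps[p:] = rng.standard_normal(len(xi_hat) - p)  # perturb tail only
    return m_bar + U_l @ (s_l * (xi_hat + eps))     # reconstruction
\end{verbatim}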
The style loss is evaluated using a separate set of $N_{\text{r}}$ new PCA models $\mathbf{m}_\text{pca}^i(\boldsymbol{\xi}_l^i)$, $i=1,...,N_{\text{r}}$, with $\boldsymbol{\xi}_l^i$ sampled from $N(\mathbf{0},I_l)$. As for the reference model $M_\text{ref}$, in 2D CNN-PCA we used either a reference training image or one realization of $\mathbf{m}_\text{gm}$. Here we generate realizations of the original geomodel using object-based techniques in Petrel, so there is no reference training image. We therefore use realizations of $\mathbf{m}_\text{gm}$ to represent the reference. Instead of using one particular realization, all $N_{\text{r}}$ realizations of $\mathbf{m}_\text{gm}$ (in turn) are considered as reference models. Specifically, we use $\mathbf{m}_\text{gm}^i$ as the reference model for new PCA model $\mathbf{m}_\text{pca}^i$. It is important to emphasize that $\mathbf{m}_\text{gm}^i$ and $\mathbf{m}_\text{pca}^i$ are completely unrelated in terms of the location of geological features -- we are essentially assigning a random reference model ($\mathbf{m}_\text{gm}^i$) for each $\mathbf{m}_\text{pca}^i$. However, because the style loss is based on summary spatial statistics, the exact location of geological features does not affect the evaluation of the loss. The style loss between $\mathbf{m}_\text{gm}^i$ and the (non-corresponding) new PCA model $\mathbf{m}_\text{pca}^i$ is given by \begin{equation} \label{eq:3D_style_loss} L_\text{s}^i(\mathbf{m}_\text{gm}^i, f_W(\mathbf{m}_\text{pca}^i)) = \sum_{k \in \kappa}\dfrac{1}{N_{\text{z},k}^2}||G_k(\mathbf{m}_\text{gm}^i) - G_k(f_W(\mathbf{m}_\text{pca}^i))||_1, \hspace{8px} i=1,...,N_{\text{r}}, \end{equation} where $G_k$ are Gram matrices based on features extracted from different layers in the C3D net. The C3D net consists of four blocks of convolutional and pooling layers. Here we use the last convolutional layer of each block, which corresponds to $k = 1, 2, 4, 6$. Details on the network architecture are provided in SI. A hard data loss term is also included to ensure that hard data (e.g., facies type at well locations) are honored. Hard data loss $L_\text{h}^i$ is given by \begin{equation} L_\text{h}^i = \dfrac{1}{N_{\text{h}}}\left[\Bh^T(\mathbf{m}_\text{gm}^i-f_W(\mathbf{m}_\text{pca}^i))^2 + \Bh^T(\mathbf{m}_\text{gm}^i-f_W(\Tilde{\mathbf{m}}_\text{pca}^i))^2\right], \hspace{8px} i=1,...,N_{\text{r}}, \end{equation} where $\Bh$ is a selection vector, with $h_j=1$ indicating the presence of hard data at cell $j$ and $h_j=0$ the absence of hard data, and $N_{\text{h}}$ is the total number of hard data. The final training loss is a weighted combination of the reconstruction loss, style loss and hard data loss. For each pair of corresponding $(\mathbf{m}_\text{gm}^i, \Tilde{\mathbf{m}}_\text{pca}^i)$, and the unrelated new PCA model $\mathbf{m}_\text{pca}^i$, the total training loss is thus \begin{equation} \label{eq_cnnpca_loss} L^i = \gamma_r L_\text{rec}^i(\mathbf{m}_\text{gm}^i, f_W(\Tilde{\mathbf{m}}_\text{pca}^i)) + \gamma_s L_\text{s}^i(\mathbf{m}_\text{gm}^i, f_W(\mathbf{m}_\text{pca}^i)) + \gamma_h L_\text{h}^i, \hspace{8px} i=1,...,N_{\text{r}}. \end{equation} The three weighting factors $\gamma_r$, $\gamma_s$ and $\gamma_h$ are determined heuristically by training the network with a range of values and selecting the combination that leads to the lowest mismatch in quantities of interest (here we consider flow statistics) relative to the original (Petrel) geomodels.
We also require that at least 99.9\% of the hard data are honored over the entire set of $\mathbf{m}_\text{cnnpca}$ models. The training set is divided into multiple mini-batches, and the total loss for each mini-batch of samples is \begin{equation} \label{eq_cnnpca_loss_total} L_\text{t} = \sum_{i=1}^{N_{\text{b}}} L^i, \end{equation} where $N_{\text{b}}$ is the batch size.
\begin{figure}[!htb] \centering \includegraphics[width=1\textwidth]{cnnpca_3d_train.png} \caption{Training procedure for 3D CNN-PCA.} \label{fig-cnncpa-train} \end{figure}
The model transform net for 3D CNN-PCA is obtained by replacing the 2D convolutional layers, upsampling layers, downsampling layers and padding layers in 2D CNN-PCA \citep{Liu2019} with their 3D counterparts. This can be readily accomplished within the PyTorch deep-learning framework \citep{Paszke2017}. The training procedure for 3D CNN-PCA is illustrated in Fig.~\ref{fig-cnncpa-train}. Each training sample consists of a pair of corresponding models $(\mathbf{m}_\text{gm}^i, \Tilde{\mathbf{m}}_\text{pca}^i)$ and an unrelated new PCA model $\mathbf{m}_\text{pca}^i$. The new PCA model $\mathbf{m}_\text{pca}^i$ and the reconstructed PCA model $\Tilde{\mathbf{m}}_\text{pca}^i$ are fed through the model transform net $f_W$. The reconstruction loss is evaluated using $f_W(\Tilde{\mathbf{m}}_\text{pca}^i)$ and the original model $\mathbf{m}_\text{gm}^i$ and Eq.~\ref{eq:3D_rec_loss}. To evaluate the style loss, $f_W(\mathbf{m}_\text{pca}^i)$ and $\mathbf{m}_\text{gm}^i$ are fed through the C3D net, and the relevant feature matrices are extracted. Then, Gram matrices are computed to form the style loss (Eq.~\ref{eq:3D_style_loss}). The final loss entails a weighted combination of reconstruction loss, style loss and hard data loss. The trainable parameters in $f_W$ are updated based on the gradient of the loss computed with back-propagation. This process iterates over all mini-batches and continues for a specified number of epochs. The detailed architectures for the model transform and C3D nets are provided in SI.
\section{Geomodels and Flow Results Using 3D CNN-PCA} \label{sec-model-gen} We now apply the 3D CNN-PCA procedure to generate geomodels corresponding to three different geological scenarios (binary channelized, three-facies, and bimodal channelized systems). Visualizations of the CNN-PCA models, along with results for key flow quantities, are presented. All flow simulations in this work are performed using Stanford's Automatic Differentiation General Purpose Research Simulator, ADGPRS \citep{zhou2012parallel}.
\subsection{Case 1 -- Binary Channelized System} The first case involves a channelized system characterized by rock facies type, with 1 denoting high-permeability sandstone and 0 indicating low-permeability mud. The geomodels are defined on a $60\times60\times40$ grid (144,000 total cells). The average sand fraction is 6.53\%. Figure~\ref{fig-chan-petrel} displays four random facies realizations generated using object-based modeling within Petrel (sandstone is shown in red, and mud in blue).
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-real5.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-real17.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-real35.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-real110.png} \end{subfigure}% \caption{Four realizations of the binary channel system generated using Petrel (Case~1).} \label{fig-chan-petrel} \end{figure}
\begin{figure}[!htb] \centering \begin{subfigure}[b]{1\textwidth} \includegraphics[width=0.24\textwidth]{case7-2-pca-real1.png} \includegraphics[width=0.24\textwidth]{case7-2-pca-real2.png} \includegraphics[width=0.24\textwidth]{case7-2-pca-real30.png} \includegraphics[width=0.24\textwidth]{case7-2-pca-real84.png} \caption{PCA models} \end{subfigure}% \begin{subfigure}[b]{1\textwidth} \includegraphics[width=0.24\textwidth]{case7-2-tpca-real1.png} \includegraphics[width=0.24\textwidth]{case7-2-tpca-real2.png} \includegraphics[width=0.24\textwidth]{case7-2-tpca-real30.png} \includegraphics[width=0.24\textwidth]{case7-2-tpca-real84.png} \caption{T-PCA models} \end{subfigure}% \begin{subfigure}[b]{1\textwidth} \includegraphics[width=0.24\textwidth]{case7-2-cnnpca-real1.png} \includegraphics[width=0.24\textwidth]{case7-2-cnnpca-real2.png} \includegraphics[width=0.24\textwidth]{case7-2-cnnpca-real30.png} \includegraphics[width=0.24\textwidth]{case7-2-cnnpca-real84.png} \caption{CNN-PCA models} \end{subfigure}% \caption{Four test-set realizations of the binary channel system from (a) PCA models, (b) corresponding truncated-PCA (T-PCA) models, and (c) corresponding CNN-PCA models (Case~1).} \label{fig_case1_models} \end{figure}
In this case, there are two production wells and two injection wells. The well locations are given in Table~\ref{tab_well_case1}. All wells are assumed to be drilled through all 40~layers of the model, and hard data are specified (meaning $h_j=1$) in all blocks penetrated by a well. However, wells are perforated (open to flow) only in blocks characterized by sand. The perforated layers are indicated in Table~\ref{tab_well_case1}. A total of $N_{\text{r}}=3000$ conditional realizations $\mathbf{m}_\text{gm}^i$ are generated to construct the PCA model (through application of Eq.~\ref{eq_center_data_matrix}). A total of $l=400$ singular values are retained, which explain $\sim$80\% of the total energy. Then, $N_{\text{r}}=3000$ reconstructed PCA models $\Tilde{\mathbf{m}}_\text{pca}^i$ are generated using Eqs.~\ref{eq_pca_proj_perturb} and \ref{eq_pca_recon_perturb}. The first $p=40$ principal components explain $\sim$40\% of the energy, so perturbation is only applied to $\xi_j$ for $j=41, ..., 400$. A separate set of $N_{\text{r}}=3000$ random PCA models $\mathbf{m}_\text{pca}^i$ is generated by sampling $\boldsymbol{\xi}_l$ from $N(\mathbf{0},I_l)$.
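The energy criterion used to select $l$ and $p$, and the sampling of new PCA models, can be sketched as follows (a schematic NumPy illustration in our notation; the function name is hypothetical):
\begin{verbatim}
import numpy as np

def reduced_dim(s, frac):
    # Smallest number of leading singular values whose squared sum
    # ('energy') captures at least the given fraction of the total.
    energy = np.asarray(s, dtype=float) ** 2
    cumulative = np.cumsum(energy) / energy.sum()
    return int(np.searchsorted(cumulative, frac) + 1)

# For Case 1, reduced_dim(s, 0.8) yields l = 400 and reduced_dim(s, 0.4)
# yields p = 40. A new PCA model is then generated as
#   m_pca = m_bar + U[:, :l] @ (s[:l] * np.random.standard_normal(l)).
\end{verbatim}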
\begin{table}[!htb] \centering \begin{tabular}{ c | c | c | c |c } & P1 & P2 & I1 & I2 \\ \hline Areal location ($x,~y$)&(15, 57)&(45, 58)&(15, 2)&(45, 3)\\ Perforated layers&18 - 24&1 - 8&15 - 22&1 - 8\\ \end{tabular} \caption{Well locations ($x$ and $y$ refer to areal grid-block indices) and perforations (Case~1)} \label{tab_well_case1} \end{table}
The $N_{\text{r}}=3000$ realizations of $\mathbf{m}_\text{gm}^i$, $\Tilde{\mathbf{m}}_\text{pca}^i$ and $\mathbf{m}_\text{pca}^i$ form the training set for the training of the model transform net $f_W$. The weighting factors for the training loss in Eq.~\ref{eq_cnnpca_loss} are $\gamma_r=500$, $\gamma_s=100$, and $\gamma_h = 10$. These values were found to provide accurate flow statistics and near-perfect hard-data honoring. The Adam optimizer \citep{Kingma2014} is used for updating parameters in $f_W$, with a default learning rate of $l_r=0.001$ and a batch size of $N_{\text{b}}=8$. The model transform net is trained for 10~epochs, which requires around half an hour on one Tesla V100 GPU.
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-chan-real21.png} \caption{One Petrel model} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-opca-chan-real41.png} \caption{One T-PCA model} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-cnnpca-chan-real41.png} \caption{One CNN-PCA model} \end{subfigure}% \caption{3D channel geometry from (a) one new Petrel model, (b) one T-PCA model, and (c) corresponding CNN-PCA model. The T-PCA and CNN-PCA models correspond to the leftmost geomodels in Fig.~\ref{fig_case1_models}b and c (Case~1).} \label{fig_case1_chans} \end{figure}
After training, 200 new Petrel realizations and 200 new PCA models are generated. The new PCA models are fed through the trained model transform net to obtain the CNN-PCA models. Truncation is performed on the new PCA models and on the CNN-PCA models to render them strictly binary. Specifically, cutoff values are determined for each set of models such that the final sand-facies fractions match that of the Petrel models (6.53\%). These models, along with the new Petrel realizations, comprise the test sets. Figure~\ref{fig_case1_models} presents four test-set PCA models (Fig.~\ref{fig_case1_models}a), the corresponding truncated-PCA models (denoted T-PCA, Fig.~\ref{fig_case1_models}b) and the corresponding CNN-PCA models (after truncation, Fig.~\ref{fig_case1_models}c). Figure~\ref{fig_case1_chans} displays the 3D channel geometry for a Petrel model (Fig.~\ref{fig_case1_chans}a), a truncated-PCA model (Fig.~\ref{fig_case1_chans}b), and the corresponding CNN-PCA model (Fig.~\ref{fig_case1_chans}c). From Figs.~\ref{fig_case1_models} and \ref{fig_case1_chans}, it is apparent that the CNN-PCA models preserve geological realism much better than the truncated-PCA models. More specifically, the CNN-PCA models display intersecting channels of continuity, width, sinuosity and depth consistent with the reference Petrel models.
\begin{figure}[!htb] \centering \includegraphics[width=0.32\textwidth]{kr.eps} \caption{Relative permeability curves for all flow simulation models.} \label{fig_rel_perm} \end{figure}
In addition to visual inspection, it is important to assess the CNN-PCA models quantitatively. We computed static channel connectivity metrics as well as flow responses for all of the test sets.
In addition to visual inspection, it is important to assess the CNN-PCA models quantitatively. To this end, we considered both static channel-connectivity metrics and flow responses for the test sets. The connectivity metrics suggested by \cite{Pardo-Iguzquiza2003} were found to be informative in our 2D study \citep{Liu2019}, but for the 3D cases considered here they appear to be too global to capture key interconnected-channel features that impact flow. Thus we focus on flow responses in the current assessment.

The flow setup involves aqueous and nonaqueous liquid phases. These can be viewed as NAPL and water in the context of an aquifer remediation project, or as oil and water in the context of oil production via water injection. Our terminology will correspond to the latter application. Each grid block in the geomodel is of dimension 20~m in the $x$ and $y$ directions, and 5~m in the $z$ direction. Water viscosity is constant at 0.31~cp. Oil viscosity varies with pressure; it is 1.03~cp at a pressure of 325~bar. Relative permeability curves are shown in Fig.~\ref{fig_rel_perm}. The initial reservoir pressure, referenced to the bottom layer, is 325~bar. The production and injection wells operate at constant bottom-hole pressures (BHPs) of 300~bar and 340~bar, respectively. The simulation time frame is 1500~days. The permeability and porosity for grid blocks in channels (sand) are $k=2000$~md and $\phi=0.2$. Grid blocks in mud are characterized by $k=20$~md and $\phi=0.15$.

\begin{figure}[!htb] \begin{subfigure}[b]{1.0\textwidth} \centering \includegraphics[width=0.4\textwidth]{case7-2-opcapetrel-Field_OPR.png} \includegraphics[width=0.4\textwidth]{case7-2-cnnpcapetrel-Field_OPR.png} \caption{Field oil rate} \end{subfigure}% \begin{subfigure}[b]{1.0\textwidth} \centering \includegraphics[width=0.4\textwidth]{case7-2-opcapetrel-Field_WPR.png} \includegraphics[width=0.4\textwidth]{case7-2-cnnpcapetrel-Field_WPR.png} \caption{Field water rate} \end{subfigure}% \begin{subfigure}[b]{1.0\textwidth} \centering \includegraphics[width=0.4\textwidth]{case7-2-opcapetrel-Field_WIR.png} \includegraphics[width=0.4\textwidth]{case7-2-cnnpcapetrel-Field_WIR.png} \caption{Field water injection rate} \end{subfigure}% \caption{Comparison of Petrel and T-PCA (left) and Petrel and CNN-PCA (right) field-wide flow statistics over ensembles of 200 (new) test cases. Red, blue and black curves represent results from Petrel, T-PCA, and CNN-PCA, respectively. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~1).} \label{fig_case1_flow_stats_field} \end{figure}

\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-opcapetrel-P1_OPR.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-opcapetrel-P2_WPR.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-opcapetrel-I2_WIR.png} \end{subfigure}% \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-cnnpcapetrel-P1_OPR.png} \caption{P1 oil production rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-cnnpcapetrel-P2_WPR.png} \caption{P2 water production rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-cnnpcapetrel-I2_WIR.png} \caption{I2 water injection rate} \end{subfigure}% \caption{Comparison of Petrel and T-PCA (top) and Petrel and CNN-PCA (bottom) well-by-well flow statistics over ensembles of 200 (new) test cases.
Red, blue and black curves represent results from Petrel, T-PCA, and CNN-PCA, respectively. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~1).} \label{fig_case1_flow_stats_well} \end{figure}

Flow results are presented in terms of $\text{P}_{10}$, $\text{P}_{50}$ and $\text{P}_{90}$ percentiles for each test set. These results are determined, at each time step, from the results for all 200 models. Results at different times thus correspond, in general, to different geomodels within the test set. Figure~\ref{fig_case1_flow_stats_field} displays field-level results for the 200 test-set Petrel models (red curves), truncated-PCA models (T-PCA, blue curves), and CNN-PCA models (black curves). Figure~\ref{fig_case1_flow_stats_well} presents individual well responses (the wells shown have the highest cumulative phase production or injection). The significant visual discrepancies between the truncated-PCA and Petrel geomodels, observed in Figs.~\ref{fig_case1_models} and \ref{fig_case1_chans}, are reflected in the flow responses, where large deviations are evident. The CNN-PCA models, by contrast, provide flow results in close agreement with the Petrel models, for both field and well-level predictions. The CNN-PCA models display slightly higher field water rates, which may be due to a minor overestimation of channel connectivity or width. The overall match in $\text{P}_{10}$--$\text{P}_{90}$ results, however, demonstrates that the CNN-PCA geomodels exhibit the same level of variability as the Petrel models. This feature is important for the proper quantification of uncertainty.

\subsection{Impact of Style Loss on CNN-PCA Geomodels} We now briefly illustrate the impact of style loss by comparing CNN-PCA geomodels generated with and without this loss term. Comparisons of areal maps for different layers in different models are shown in Fig.~\ref{fig_model_style_loss_impact}. Maps for three PCA models appear in Fig.~\ref{fig_model_style_loss_impact}a, the corresponding T-PCA models in Fig.~\ref{fig_model_style_loss_impact}b, CNN-PCA models without style loss ($\gamma_s=0$) in Fig.~\ref{fig_model_style_loss_impact}c, and CNN-PCA models with style loss ($\gamma_s=100$) in Fig.~\ref{fig_model_style_loss_impact}d. The flow statistics for CNN-PCA models without style loss are presented in Fig.~\ref{fig_case1_flow_stats_no_style_loss}. Hard data loss is included in all cases. The CNN-PCA models generated without style loss ($\gamma_s=0$) are visually more realistic, and yield more accurate flow responses, than the truncated-PCA models. However, the inclusion of style loss clearly acts to improve the CNN-PCA geomodels. This is evident in both the areal maps and the field-level flow responses (compare Fig.~\ref{fig_case1_flow_stats_no_style_loss} with the right column of Fig.~\ref{fig_case1_flow_stats_field}). We note finally that in a smaller 3D example considered in \cite{TangLiu2020}, the use of reconstruction loss alone was sufficient to achieve well-defined channels in the CNN-PCA geomodels (and accurate flow statistics). That case involved six wells (and thus more hard data) spaced closer together than in the current example. This suggests that style loss may be most important when hard data are limited, or do not act to strongly constrain channel geometry.
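Recall that the style loss is computed from Gram matrices of C3D feature maps. To make this concrete, a minimal sketch of the Gram-matrix computation is given below; here \texttt{feats\_ref} and \texttt{feats\_gen} stand for lists of feature maps extracted from the selected C3D layers for the reference and generated geomodels, and the normalization is schematic rather than an exact transcription of the loss used in training:

\begin{verbatim}
import torch

def gram_matrix(F):
    # F: (batch, channels, D, H, W) feature map from one C3D layer
    b, c = F.shape[:2]
    F = F.reshape(b, c, -1)                  # flatten spatial dimensions
    return torch.bmm(F, F.transpose(1, 2)) / (c * F.shape[-1])

def style_loss(feats_ref, feats_gen):
    # L1 difference of Gram matrices, summed over the selected layers
    return sum(torch.mean(torch.abs(gram_matrix(Fr) - gram_matrix(Fg)))
               for Fr, Fg in zip(feats_ref, feats_gen))
\end{verbatim}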
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\textwidth]{pca_layer_1.png} \caption{Areal maps from PCA models} \end{subfigure}% \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\textwidth]{tpca_layer_1.png} \caption{Areal maps from T-PCA models} \end{subfigure}% \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\textwidth]{cnnpca_sw0_layer_1.png} \caption{Areal maps from CNN-PCA models with $\gamma_s=0$} \end{subfigure}% \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\textwidth]{cnnpca_sw100_layer_1.png} \caption{Areal maps from CNN-PCA models with $\gamma_s=100$} \end{subfigure}% \caption{Areal maps for layer 19 (left column) and layer 1 (middle and right columns) from (a) three different PCA models, (b) corresponding T-PCA models, (c) corresponding CNN-PCA models without style loss and (d) corresponding CNN-PCA models with style loss (Case~1).} \label{fig_model_style_loss_impact} \end{figure}

\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-sw0cnnpcapetrel-Field_OPR.png} \caption{Field oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-sw0cnnpcapetrel-Field_WPR.png} \caption{Field water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-sw0cnnpcapetrel-Field_WIR.png} \caption{Field water injection rate} \end{subfigure}% \caption{Comparison of Petrel (red curves) and CNN-PCA without style loss (black curves) field-wide flow statistics over ensembles of 200 (new) test cases. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~1).} \label{fig_case1_flow_stats_no_style_loss} \end{figure}

\subsection{Case 2 -- Three-Facies Channel-Levee-Mud System} This case involves three rock types, with the channel and levee facies displaying complex interconnected and overlapping geometries. The fluvial channel facies has the highest permeability. The upper portion of each channel is surrounded by a levee facies of intermediate permeability. The low-permeability mud facies comprises the remainder of the system. The average volume fractions for the channel and levee facies are 8.70\% and 5.42\%, respectively. Figure~\ref{fig_case2_petrel_models} displays four realizations generated from Petrel (channel in red, levee in green, mud in blue). The width and thickness of the levees are $\sim$55--60\% of the channel width and thickness. These models again contain $60 \times 60 \times 40$ cells. In this case there are three injection and three production wells. Well locations and hard data are summarized in Table~\ref{tab_well_case2}.

We considered two different ways of encoding the mud, levee and channel facies. The `natural' approach is to encode mud as 0, levee as 1, and channel as 2. This treatment, however, has some disadvantages. Before truncation, CNN-PCA models are not strictly discrete, and a transition zone exists between mud and channel. This transition region will be interpreted as levee after truncation, which in turn leads to levee facies surrounding channels on all sides. This arrangement deviates from the underlying geology, where levees appear only near the upper portion of channels. In addition, since we preserve the levee volume fraction, the average levee width becomes significantly smaller than it should be.
For these reasons we adopt an alternative encoding strategy. This entails representing mud as 0, levee as 2, and channel as 1. This leads to better preservation of the location and geometry of levees.

\begin{table}[!htb] \centering \begin{tabular}{c|c|c|c|c|c|c} & P1 & P2 & P3 & I1 & I2 & I3\\ \hline Areal location ($x,~y$)&(48, 48)&(58, 31)&(32, 57)&(12, 12)&(28, 3)&(4, 26)\\ Perforated layers in channel&15--21&25--30&1--5&15--20&25--30&1--6\\ Perforated layers in levee& - & - &21--24& - &6--8&36--40\\ \end{tabular} \caption{Well locations ($x$ and $y$ refer to areal grid-block indices) and perforations (Case~2)} \label{tab_well_case2} \end{table}

We again generate $N_{\text{r}}=3000$ Petrel models, reconstructed PCA models, and a separate set of new PCA models for training. We retain $l=800$ leading singular values, which capture $\sim$80\% of the total energy. The first 70 of these explain $\sim$40\% of the energy, so perturbation is performed on $\xi_j$, $j=71,...,800$, when generating the reconstructed PCA models. The training loss weighting factors in this case are $\gamma_\text{rec}=500$, $\gamma_s=50$ and $\gamma_h=10$. Other training parameters are the same as in Case~1.

Test sets of 200 new realizations are then generated. The test-set PCA models are post-processed with the model transform net to obtain the CNN-PCA test set. The CNN-PCA geomodels are then truncated to be strictly ternary, with cutoff values determined such that the average facies fractions match those from the Petrel models. Four test-set CNN-PCA realizations are shown in Fig.~\ref{fig_models_case2}. These geomodels contain features consistent with those in the Petrel models (Fig.~\ref{fig_case2_petrel_models}). In addition to channel geometry, the CNN-PCA models also capture the geometry of the levees and their location relative to channels.

\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case8-3-petrel-real1.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case8-3-petrel-real2.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case8-3-petrel-real3.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case8-3-petrel-real4.png} \end{subfigure}% \caption{Four Petrel realizations of the three-facies system, with channel shown in red, levee in green, and mud in blue (Case~2).} \label{fig_case2_petrel_models} \end{figure}

\begin{figure}[!htb] \centering \includegraphics[width=0.24\textwidth]{case8-3-cnnpca_real25.png} \includegraphics[width=0.24\textwidth]{case8-3-cnnpca_real3.png} \includegraphics[width=0.24\textwidth]{case8-3-cnnpca_real14.png} \includegraphics[width=0.24\textwidth]{case8-3-cnnpca_real34.png} \caption{Four test-set CNN-PCA realizations of the three-facies system, with channel shown in red, levee in green, and mud in blue (Case~2).} \label{fig_models_case2} \end{figure}

We now present flow results for this case. Permeability values for the channel, levee and mud facies are specified as 2000~md, 200~md and 20~md, respectively. Corresponding porosity values are 0.25, 0.15 and 0.05. Other simulation specifications are as in Case~1. Field-wide and individual well flow statistics (for wells with the largest cumulative phase production/injection) are presented in Fig.~\ref{fig_case2_flow_stats}.
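As in Case~1, these statistics are $\text{P}_{10}$/$\text{P}_{50}$/$\text{P}_{90}$ percentiles computed independently at each report time over the 200 test-set simulations. In sketch form (the array layout is our own convention):

\begin{verbatim}
import numpy as np

def ensemble_percentiles(rates):
    # rates: (n_models, n_timesteps) array for one simulated quantity,
    # e.g., field oil rate for the 200 test-set models
    p10, p50, p90 = np.percentile(rates, [10, 50, 90], axis=0)
    return p10, p50, p90
\end{verbatim}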
Consistent with the results for Case~1, we observe generally close agreement between flow predictions for the CNN-PCA and Petrel models. Again, the close matches in the $\text{P}_{10}$--$\text{P}_{90}$ ranges indicate that the CNN-PCA geomodels capture the inherent variability of the Petrel models. Truncated-PCA geomodels, and comparisons of flow results for these models with Petrel results, are shown in Figs.~S1 and~S2 in SI. The truncated-PCA models appear less geologically realistic, and provide much less accurate flow predictions, than the CNN-PCA geomodels.

\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-Field_OPR.png} \caption{Field oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-Field_WPR.png} \caption{Field water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-Field_WIR.png} \caption{Field water injection rate} \end{subfigure}% \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-P1_OPR.png} \caption{P1 oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-P2_WPR.png} \caption{P2 water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-I2_WIR.png} \caption{I2 water injection rate} \end{subfigure}% \caption{Comparison of Petrel (red curves) and CNN-PCA (black curves) flow statistics over ensembles of 200 (new) test cases. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~2).} \label{fig_case2_flow_stats} \end{figure}

\subsection{Case 3 -- Bimodal Channelized System} \label{sec_case3} In the previous two cases, permeability and porosity within each facies were constant. We now consider a bimodal system, in which log-permeability and porosity within the two facies follow Gaussian distributions. The facies model, well locations and perforations are the same as in Case~1. The log-permeability in the sand facies follows a Gaussian distribution with mean 6.7 and variance 0.2, while in the mud facies the mean is 3.5 and the variance is 0.19. The log-permeability at well locations in all layers is treated as hard data. We use the sequential Gaussian simulation algorithm in Petrel to generate log-permeability values within each facies. The final log-permeability field is obtained using a cookie-cutter approach. We use $\mathbf{m}_\text{gm}^\text{f} \in \mathbb{R}^{N_{\text{c}}}$ to denote the facies model, and $\mathbf{m}_\text{gm}^\text{s} \in \mathbb{R}^{N_{\text{c}}}$ and $\mathbf{m}_\text{gm}^\text{m} \in \mathbb{R}^{N_{\text{c}}}$ to denote log-permeability within the sand and mud facies. The log-permeability of grid block $i$, in the bimodal system $\mathbf{m}_\text{gm} \in \mathbb{R}^{N_{\text{c}}}$, is then \begin{equation} (m_\text{gm})_i = (m_\text{gm}^\text{f})_i (m_\text{gm}^\text{s})_i + [1 - (m_\text{gm}^\text{f})_i] (m_\text{gm}^\text{m})_i, \hspace{8px} i=1,...,N_{\text{c}}. \label{eq_bimodal_cc} \end{equation} Figure~\ref{fig_case3_petrel} shows four log-permeability realizations generated by Petrel. Both channel geometry and within-facies heterogeneity are seen to vary between models. For this bimodal system we apply a two-step approach.
Specifically, CNN-PCA is used for facies parameterization, and two separate PCA models are used to parameterize log-permeability within the two facies. Since the facies model is the same as in Case~1, we use the same CNN-PCA model. Thus the reduced dimension for the facies model is $l^\text{f} = 400$. We then construct two PCA models to represent log-permeability in each facies. For these models we set $l^\text{s} = l^\text{m} = 200$, which explains $\sim$60\% of the total energy. A smaller percentage is used here to limit the overall dimension $l$ of the low-dimensional variable (note that $l=l^\text{f}+l^\text{s}+l^\text{m}$), at the cost of discarding some amount of small-scale variation within each facies. This is expected to have a relatively minor impact on flow response.

To generate new PCA models, we sample $\boldsymbol{\xi}^\text{f} \in \mathbb{R}^{l^\text{f}}$, $\boldsymbol{\xi}^\text{s}\in \mathbb{R}^{l^\text{s}}$ and $\boldsymbol{\xi}^\text{m}\in \mathbb{R}^{l^\text{m}}$ separately from standard normal distributions. We then apply Eq.~\ref{eq_pca} to construct PCA models $\mathbf{m}_\text{pca}^\text{f} \in \mathbb{R}^{N_{\text{c}}}$, $\mathbf{m}_\text{pca}^\text{s} \in \mathbb{R}^{N_{\text{c}}}$ and $\mathbf{m}_\text{pca}^\text{m} \in \mathbb{R}^{N_{\text{c}}}$. The model transform net then maps $\mathbf{m}_\text{pca}^\text{f}$ to the CNN-PCA facies model (i.e., $\mathbf{m}_\text{cnnpca}^\text{f} = f_W(\mathbf{m}_\text{pca}^\text{f})$). After truncation, the cookie-cutter approach is applied to provide the final CNN-PCA bimodal log-permeability model: \begin{equation} (m_\text{cnnpca})_i = (m_\text{cnnpca}^\text{f})_i (m_\text{pca}^\text{s})_i + [1 - (m_\text{cnnpca}^\text{f})_i] (m_\text{pca}^\text{m})_i, \hspace{8px} i=1,...,N_{\text{c}}. \label{eq_bimodal_cc_cnnpca} \end{equation}

Figure~\ref{fig_case3_cnnpca_models} shows four of the resulting test-set CNN-PCA geomodels. Besides preserving channel geometry, the CNN-PCA models also display large-scale property variations within each facies. The CNN-PCA models are smoother than the reference Petrel models because small-scale variations in log-permeability within each facies are not captured in the PCA representations, as noted earlier. The average histograms for the 200 test-set Petrel and CNN-PCA geomodels are shown in Fig.~\ref{fig_case3_histo}. The CNN-PCA histogram is clearly bimodal, and in general correspondence with the Petrel histogram, though variance is underpredicted in the CNN-PCA models. This is again because variations at the smallest scales have been neglected.

To construct flow models, we assign permeability and porosity, for block $i$, as $k_i = \exp(m_i)$ and $\phi_i = m_i/40$. The median values for permeability within the channel and mud facies are $\sim$800~md and $\sim$30~md, while those for porosity are $\sim$0.16 and $\sim$0.08. The simulation setup is otherwise the same as in Case~1. Flow statistics for Case~3 are shown in Fig.~\ref{fig_case3_flow_stats}. Consistent with the results for the previous cases, we observe close agreement between the $\text{P}_{10}$, $\text{P}_{50}$ and $\text{P}_{90}$ predictions from the CNN-PCA and Petrel geomodels. There does appear to be a slight underestimation of variability in the CNN-PCA models, however, which may result from the lack of small-scale property variation.
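The two-step construction just described is compact enough to sketch directly. In the following, \texttt{f\_W} stands in for the trained model transform net and \texttt{truncate} for the cutoff step; both are placeholders rather than the actual implementation:

\begin{verbatim}
import numpy as np

def bimodal_cnnpca_model(m_pca_f, m_pca_s, m_pca_m, f_W, truncate):
    # facies step: transform net plus truncation to a 0/1 indicator field
    facies = truncate(f_W(m_pca_f))
    # cookie-cutter step (Eq. above): sand log-permeability where
    # facies = 1, mud log-permeability where facies = 0
    return facies * m_pca_s + (1.0 - facies) * m_pca_m

# flow properties then follow from k_i = exp(m_i) and phi_i = m_i / 40
\end{verbatim}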
PCA geomodels and corresponding flow results for this case are shown in Figs.~S3--S5 in SI. Again, these models lack the realism and flow accuracy evident in the CNN-PCA geomodels.

\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-petrel-eval-real1.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-petrel-eval-real3.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-petrel-eval-real39.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-petrel-eval-real51.png} \end{subfigure}% \vspace{0.2cm} \begin{subfigure}[b]{0.6\textwidth} \includegraphics[width=1\textwidth]{colorbar.png} \end{subfigure}% \caption{Four Petrel log-permeability realizations of the bimodal channelized system (Case~3).} \label{fig_case3_petrel} \end{figure}

\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpca-eval-real16.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpca-eval-real128.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpca-eval-real129.png} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpca-eval-real159.png} \end{subfigure}% \vspace{0.2cm} \begin{subfigure}[b]{0.6\textwidth} \includegraphics[width=1\textwidth]{colorbar.png} \end{subfigure}% \caption{Four test-set CNN-PCA log-permeability realizations of the bimodal channelized system (Case~3).} \label{fig_case3_cnnpca_models} \end{figure}

\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{histo_petrel.png} \caption{Petrel} \end{subfigure}% ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{histo_cnnpca.png} \caption{CNN-PCA} \end{subfigure}% \caption{Average histograms of the 200 test-set realizations from (a) Petrel and (b) CNN-PCA (Case~3).} \label{fig_case3_histo} \end{figure}

\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-Field_OPR.png} \caption{Field oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-Field_WPR.png} \caption{Field water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-Field_WIR.png} \caption{Field water injection rate} \end{subfigure}% \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-P1_OPR.png} \caption{P1 oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-P2_WPR.png} \caption{P2 water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-I2_WIR.png} \caption{I2 water injection rate} \end{subfigure}% \caption{Comparison of Petrel (red curves) and CNN-PCA (black curves) flow statistics over ensembles of 200 (new) test cases. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~3).} \label{fig_case3_flow_stats} \end{figure}

\section{History Matching using CNN-PCA} \label{sec-hm} CNN-PCA is now applied to a history matching problem involving the bimodal channelized system.
With the two-step CNN-PCA approach, the bimodal log-permeability and porosity fields are represented with three low-dimensional variables: $\boldsymbol{\xi}^{\text{f}} \in \mathbb{R}^{400}$ for the facies model, and $\boldsymbol{\xi}^{\text{s}} \in \mathbb{R}^{200}$ and $\boldsymbol{\xi}^{\text{m}} \in \mathbb{R}^{200}$ for log-permeability and porosity in sand and mud. Concatenating the three low-dimensional variables gives $\boldsymbol{\xi}_l = [\boldsymbol{\xi}^{\text{f}},\boldsymbol{\xi}^{\text{s}},\boldsymbol{\xi}^{\text{m}}] \in \mathbb{R}^{l}$, with $l=800$. This $\boldsymbol{\xi}_l$ contains the uncertain variables considered during history matching.

Observed data include oil and water production rates at the two producers, and water injection rates at the two injectors, collected every 100~days for the first 500~days. This gives a total of $N_\text{d}=30$ observations. The standard deviation of the measurement error is set to 1\% of the corresponding observed value, subject to a minimum of 2~m$^3$/day. The leftmost Petrel realization in Fig.~\ref{fig_case3_petrel} is used as the true geomodel. Observed data are generated by performing flow simulation with this model and then perturbing the simulated production and injection data consistent with these measurement-error standard deviations.

\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-P1_OPR.png} \caption{P1 oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-P2_OPR.png} \caption{P2 oil rate} \end{subfigure}% \centering \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-P1_WPR.png} \caption{P1 water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-P2_WPR.png} \caption{P2 water rate} \end{subfigure}% \centering \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-I1_WIR.png} \caption{I1 water injection rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-I2_WIR.png} \caption{I2 water injection rate} \end{subfigure}% \caption{Prior and posterior flow results for the bimodal channelized system. Gray regions represent the prior $\text{P}_{10}$--$\text{P}_{90}$ range, red points and red curves denote observed and true data, and blue dashed curves denote the posterior $\text{P}_{10}$ (lower) and $\text{P}_{90}$ (upper) predictions. The vertical dashed line divides the simulation time frame into history-match and prediction periods.} \label{fig_hm_data} \end{figure}

We use ESMDA \citep{Emerick2013} for history matching. This algorithm has been used previously by \cite{Canchumuni2017, Canchumuni2018, Canchumuni2019a, Canchumun2020} for data assimilation with deep-learning-based geological parameterizations. ESMDA is an ensemble-based procedure that starts with an ensemble of prior uncertain variables. At each data assimilation step, the uncertain variables are updated such that the simulated production data better match the observed data, with inflated measurement errors applied at each step. We use an ensemble size of $N_\text{e} = 200$. The prior ensemble consists of 200 random realizations of $\boldsymbol{\xi}_l \in \mathbb{R}^{800}$ sampled from the standard normal distribution. Each realization of $\boldsymbol{\xi}_l$ is then divided into $\boldsymbol{\xi}^{\text{f}}$, $\boldsymbol{\xi}^{\text{s}}$ and $\boldsymbol{\xi}^{\text{m}}$.
CNN-PCA realizations of bimodal log-permeability and porosity are then generated and simulated. In ESMDA the ensemble is updated through application of \begin{equation} \boldsymbol{\xi}_l^{u,j} = \boldsymbol{\xi}_l^j + C_{\xi d}(C_{dd} + \alpha C_{d})^{-1}(\mathbf{d}_{\text{obs}}^{*,j} - \Bd^j), \hspace{8px} j=1,...,N_\text{e}, \end{equation} where $\Bd^j \in \mathbb{R}^{N_\text{d}}$ represents the simulated production data for ensemble member $j$, and $\mathbf{d}_{\text{obs}}^{*,j}\in \mathbb{R}^{N_\text{d}}$ denotes randomly perturbed observed data, sampled for each ensemble member from $N(\mathbf{d}_{\text{obs}}, \alpha C_{d})$. Here $C_{d}\in \mathbb{R}^{N_\text{d} \times N_\text{d}}$ is a diagonal prior covariance matrix for the measurement error and $\alpha$ is an error inflation factor. The matrix $C_{\xi d}\in \mathbb{R}^{l \times N_\text{d}}$ is the cross-covariance between $\boldsymbol{\xi}_l$ and $\Bd$, estimated as \begin{equation} C_{\xi d} = \dfrac{1}{N_\text{e} - 1}\sum_{j=1}^{N_\text{e}}(\boldsymbol{\xi}_l^j - \Bar{\boldsymbol{\xi}}_l)(\Bd^j - \Bar{\Bd})^T, \label{eq_cxid} \end{equation} and $C_{d d} \in \mathbb{R}^{N_\text{d} \times N_\text{d}}$ is the auto-covariance of $\Bd$, estimated as \begin{equation} C_{dd} = \dfrac{1}{N_\text{e} - 1}\sum_{j=1}^{N_\text{e}}(\Bd^j - \Bar{\Bd})(\Bd^j - \Bar{\Bd})^T. \label{eq_cdd} \end{equation} In Eqs.~\ref{eq_cxid} and \ref{eq_cdd}, the overbar denotes the mean over the $N_\text{e}$ samples at the current iteration. The updated variables $\boldsymbol{\xi}_l^{u,j}$, $j=1,...,N_\text{e}$, then constitute a new ensemble. The process is applied multiple times, each time with a different inflation factor $\alpha$ and a new random perturbation of the observed data. Here we assimilate data four times using inflation factors of 9.33, 7.0, 4.0 and 2.0, as suggested by \cite{Emerick2013} (the reciprocals of these factors sum to one, as required in ESMDA).

Data assimilation results are shown in Fig.~\ref{fig_hm_data}. The gray region denotes the $\text{P}_{10}$--$\text{P}_{90}$ range for the prior models, while the dashed blue curves indicate the $\text{P}_{10}$--$\text{P}_{90}$ posterior range. Red points show the observed data, and the red curves the true model response. We observe uncertainty reduction in all quantities over at least a portion of the time frame, with the observed and true data consistently falling within the $\text{P}_{10}$--$\text{P}_{90}$ posterior range. Of particular interest is the fact that substantial uncertainty reduction is achieved in water-rate predictions even though none of the producers experiences water breakthrough in the history-matching period.

Prior and posterior geomodels are shown in Fig.~\ref{fig_hm_logk}. In the true Petrel model (leftmost realization in Fig.~\ref{fig_case3_petrel}), wells I1 and P1, and I2 and P2, are connected via channels. In the first two prior models (Fig.~\ref{fig_hm_logk}a,~b), one or the other of these injector--producer connections does not exist, but it is introduced in the corresponding posterior models (Fig.~\ref{fig_hm_logk}d,~e). The third prior model (Fig.~\ref{fig_hm_logk}c) already displays the correct injector--producer connectivity, and this is retained in the posterior model (Fig.~\ref{fig_hm_logk}f).
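A compact sketch of one ESMDA update step, written directly from the equations above, is given below. The array layout and the \texttt{simulate} routine referenced in the final comment are our own placeholder conventions:

\begin{verbatim}
import numpy as np

def esmda_step(Xi, D, d_obs, C_d, alpha, rng):
    # Xi: (l, Ne) ensemble of low-dimensional variables
    # D:  (Nd, Ne) simulated data, one column per ensemble member
    Ne = Xi.shape[1]
    dXi = Xi - Xi.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_xid = dXi @ dD.T / (Ne - 1)           # cross-covariance (Eq. above)
    C_dd = dD @ dD.T / (Ne - 1)             # auto-covariance (Eq. above)
    # perturbed observations, one sample per ensemble member
    D_obs = rng.multivariate_normal(d_obs, alpha * C_d, size=Ne).T
    K = C_xid @ np.linalg.inv(C_dd + alpha * C_d)
    return Xi + K @ (D_obs - D)

# full run: for alpha in [9.33, 7.0, 4.0, 2.0]:
#     D = simulate(Xi); Xi = esmda_step(Xi, D, d_obs, C_d, alpha, rng)
\end{verbatim}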
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{prior_real96.png} \caption{Prior model \#1} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{prior_real121.png} \caption{Prior model \#2} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{prior_real118.png} \caption{Prior model \#3} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{post_real96_case2.png} \caption{Posterior model \#1} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{post_real121_case2.png} \caption{Posterior model \#2} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{post_real118_case2.png} \caption{Posterior model \#3} \end{subfigure}% \vspace{0.2cm} \begin{subfigure}[b]{0.6\textwidth} \includegraphics[width=1\textwidth]{colorbar.png} \end{subfigure}% \caption{Log-permeability for (a--c) three prior CNN-PCA models and (d--f) corresponding posterior CNN-PCA models.} \label{fig_hm_logk} \end{figure}

\FloatBarrier

\section{Concluding Remarks} \label{sec-concl} In this work, the 3D CNN-PCA algorithm, a deep-learning-based geological parameterization procedure, was developed to treat complex 3D geomodels. The method entails the use of a new supervised-learning-based reconstruction loss and a new style loss based on features of 3D geomodels extracted from the C3D net, a 3D CNN pretrained for video classification. A hard data loss is also included. A two-step treatment for parameterizing bimodal (as opposed to binary) models, involving CNN-PCA representation of facies combined with PCA for within-facies variability, was also introduced.

The 3D CNN-PCA algorithm was applied for the parameterization of realizations from three different geological scenarios. These include a binary fluvial channel system, a three-facies channel-levee-mud system, and a bimodal channelized system. Training required the construction of 3000 object-based Petrel models, the corresponding reconstructed PCA models, and a separate set of random PCA models. New PCA models were then fed through the trained model transform net to generate the 3D CNN-PCA representations. The resulting geomodels were shown to exhibit geological features consistent with those in the reference models. Flow results for injection and production quantities, generated by simulating a test set of CNN-PCA models, were found to be in close agreement with simulations using reference Petrel models. Enhanced accuracy relative to truncated-PCA models was also demonstrated. Finally, history matching was performed for the bimodal channel system, using the two-step approach and ESMDA. Significant uncertainty reduction was achieved, and the posterior models were shown to be geologically realistic.

There are a number of directions for future work in this general area. The models considered in this study were Cartesian and contained $60\times60\times40$ cells (144,000 total grid blocks). Practical subsurface flow models commonly contain more cells and may be defined on corner-point or unstructured grids. Extensions to parameterize systems of this type, containing, e.g., $O(10^6)$ cells, should be developed. Systems with larger numbers of wells should also be considered. It will additionally be of interest to extend our treatments to handle uncertainty in the geological scenario, with a single network trained to accomplish multi-scenario style transfer.
Finally, the development of parameterizations for discrete fracture systems should be addressed.

\section*{Computer Code Availability} Computer code and datasets will be available upon publication.

\begin{acknowledgements} We thank the industrial affiliates of the Stanford Smart Fields Consortium for financial support. We are also grateful to the Stanford Center for Computational Earth \& Environmental Science for providing the computing resources used in this work. We thank Obi Isebor, Wenyue Sun and Meng Tang for useful discussions, and Oleg Volkov for help with the ADGPRS software. \end{acknowledgements}

\bibliographystyle{spbasic}
They have been shown to provide more realistic models than traditional methods such as standalone PCA \citep{Sarma2006} and discrete cosine transform \citep{Jafarpour2010}. In a benchmark comparison performed by \cite{Canchumun2020}, seven deep-learning-based parameterization techniques, including VAE-based, GAN-based and PCA-based algorithms, were found to perform similarly for a 2D channel system. The development and detailed testing of deep-learning-based parameterizations for 3D geomodels is much more limited. Most assessments involving 3D systems are limited to unconditional realizations of a single geological scenario \citep{Laloy2018, Canchumuni2018, Mo2020}. To our knowledge, conditional realizations of 3D geomodels have thus far only been considered by \cite{Laloy2017} for the Maules Creek Valley alluvial aquifer dataset \citep{ti_lib}. Thus there appears to be a need for work on deep-learning-based parameterizations for complex 3D geomodels. In this study, we extend the 2D CNN-PCA framework to 3D by incorporating several new treatments. These include the use of a supervised-learning-based training loss and the replacement of the VGG net (appropriate for 2D models) with the C3D net \citep{Tran2015}, pretrained for video classification, as a 3D feature extractor. This formulation represents a significant extension of the preliminary treatment presented by \cite{TangLiu2020}, where style loss was not considered (in that work geomodel parameterization was combined with surrogate models for flow in a binary system). Here we apply the extended 3D CNN-PCA procedure for the parameterization of conditional realizations within three different geological settings. These include a binary fluvial channel system, a bimodal channel system, and a three-facies channel-levee-mud system. In all cases the underlying realizations, which provide the training sets for the 3D CNN-PCA parameterizations, are generated using object-based modeling within the Petrel geomodeling framework \citep{manual2007petrel}. Geomodels and results for flow statistics are presented for all three of the geological scenarios, and history matching results are presented for the bimodal channel case. This paper proceeds as follows. In Section~\ref{sec-methodology}, we begin by briefly reviewing the existing (2D) CNN-PCA algorithm. The 3D CNN-PCA methodology is then discussed in detail. In Section~\ref{sec-model-gen}, we apply 3D CNN-PCA to parameterize facies models for a binary channel system and a three facies channel-levee-mud system. A two-step approach for treating bimodal systems is then presented. For all of these geological scenarios, 3D CNN-PCA realizations, along with flow results for an ensemble of test cases, are presented. In Section~\ref{sec-hm}, we present history matching for the bimodal channel system using the two-step parameterization and ESMDA. A summary and suggestions for future work are provided in Section~\ref{sec-concl}. Details on the network architectures, along with additional geomodeling and flow results, are included in Supplementary Information (SI). \section{Convolutional Neural Network-based Principal Component Analysis (CNN-PCA)} \label{sec-methodology} In this section, we first give a brief overview of PCA and the 2D CNN-PCA method. The 3D procedure is then introduced and described in detail. 
\subsection{PCA Representation} \label{sec-pca} We let the vector $\mathbf{m} \in \mathbb{R}^{N_{\text{c}}}$, where $N_{\text{c}}$ is the number of cells or grid blocks, denote the set of geological variables (e.g., facies type in every cell) that characterize the geomodel. Parameterization techniques map $\mathbf{m}$ to a new lower-dimensional variable $\boldsymbol{\xi} \in \mathbb{R}^{l}$, where $l < N_{\text{c}}$ is the reduced dimension. As discussed in detail in \cite{Liu2019}, PCA applies linear mapping of $\mathbf{m}$ onto a set of principal components. To construct a PCA representation, an ensemble of $N_{\text{r}}$ models is generated using a geomodeling tool such as Petrel \citep{manual2007petrel}. These models are assembled into a centered data matrix $Y \in \mathbb{R}^{N_{\text{c}} \times N_{\text{r}}}$, \begin{equation} \label{eq_center_data_matrix} Y = \frac{1}{\sqrt{N_{\text{r}} - 1}}[\mathbf{m}_\text{gm}^1 - \bar{\mathbf{m}}_\text{gm} \quad \mathbf{m}_\text{gm}^2 - \bar{\mathbf{m}}_\text{gm} \quad \cdots \quad \mathbf{m}_\text{gm}^{N_{\text{r}}} - \bar{\mathbf{m}}_\text{gm}], \end{equation} where $\mathbf{m}_\text{gm}^i\in \mathbb{R}^{N_{\text{c}}}$ represents realization $i$, $\bar{\mathbf{m}}_\text{gm}\in \mathbb{R}^{N_{\text{c}}}$ is the mean of the $N_{\text{r}}$ realizations, and the subscript `gm' indicates that these realizations are generated using geomodeling software. A singular value decomposition of $Y$ gives $Y = U\Sigma V^T$, where $U \in \mathbb{R}^{N_{\text{c}} \times N_{\text{r}}}$ and $V \in \mathbb{R}^{N_{\text{r}} \times N_{\text{r}}}$ are the left and right singular matrices and $\Sigma \in \mathbb{R}^{N_{\text{r}} \times N_{\text{r}}}$ is a diagonal matrix containing singular values. A new PCA model $\mathbf{m}_\text{pca} \in \mathbb{R}^{N_{\text{c}}}$ can be generated as follows, \begin{equation} \label{eq_pca} \mathbf{m}_\text{pca} = \bar{\mathbf{m}}_\text{gm} + U_l\Sigma_l \boldsymbol{\xi}_l, \end{equation} where $U_l \in \mathbb{R}^{N_{\text{c}} \times l}$ and $\Sigma_l \in \mathbb{R}^{l \times l}$ contain the leading left singular vectors and singular values, respectively. Ideally, it is the case that $l << N_{\text{c}}$. By sampling each component of $\boldsymbol{\xi}_l$ independently from the standard normal distribution and applying Eq.~\ref{eq_pca}, we can generate new PCA models. Besides generating new models, we can also apply PCA to approximately reconstruct realizations of the original models. We will see later that this is required for the supervised-learning-based loss function used to train 3D CNN-PCA. Specifically, we can project each of the $N_{\text{r}}$ realizations of $\mathbf{m}_\text{gm}$ onto the principal components via \begin{equation} \label{eq_pca_proj} \hat{\boldsymbol{\xi}}_l^i = \Sigma^{-1}_lU^T_l(\mathbf{m}_\text{gm}^i - \bar{\mathbf{m}}_\text{gm}), \hspace{8px} i=1,...,N_{\text{r}} . \end{equation} Here $\hat{\boldsymbol{\xi}}_l^i$ denotes low-dimensional variables obtained through projection. The `hat' is added to differentiate $\hat{\boldsymbol{\xi}}_l^i$ from low-dimensional variables obtained through sampling ($\boldsymbol{\xi}_l$). 
We can then approximately reconstruct $\mathbf{m}_\text{gm}^i$ as \begin{equation} \label{eq_pca_recon} \hat{\mathbf{m}}_\text{pca}^i = \bar{\mathbf{m}}_\text{gm} + U_l \Sigma_l \hat{\boldsymbol{\xi}}_l^i = \bar{\mathbf{m}}_\text{gm} + U_l U_l^T (\mathbf{m}_\text{gm}^i - \bar{\mathbf{m}}_\text{gm}), \hspace{8px} i=1,...,N_{\text{r}}, \end{equation} where $\hat{\mathbf{m}}_\text{pca}^i$ is referred to as a reconstructed PCA model. The larger the reduced dimension $l$, the closer $\hat{\mathbf{m}}_\text{pca}^i$ will be to $\mathbf{m}_\text{gm}^i$. If all $N_{\text{r}} - 1$ nonzero singular values are retained, $\hat{\mathbf{m}}_\text{pca}^i$ will exactly match $\mathbf{m}_\text{gm}^i$. For systems where $\mathbf{m}_\text{gm}$ follows a multi-Gaussian distribution, the spatial correlation of $\mathbf{m}_\text{gm}$ can be fully characterized by two-point correlations, i.e., covariance. For such systems, $\mathbf{m}_\text{pca}$ (constructed using Eq.~\ref{eq_pca}) will essentially preserve the spatial correlations in $\mathbf{m}_\text{gm}$, assuming $l$ is sufficiently large. For systems with complex geology, where $\mathbf{m}_\text{gm}$ follows a non-Gaussian distribution, the spatial correlation of $\mathbf{m}_\text{gm}$ is characterized by multiple-point statistics. In such cases, the spatial structure of $\mathbf{m}_\text{pca}$ can deviate significantly from that of $\mathbf{m}_\text{gm}$, meaning the direct use of Eq.~\ref{eq_pca} is not appropriate. \subsection{2D CNN-PCA Procedure} \label{Subsect_cnn_pca} In CNN-PCA, we post-process PCA models using a deep convolutional neural network to achieve better correspondence with the underlying geomodels. This process can be represented as \begin{equation} \mathbf{m}_\text{cnnpca} = f_W(\mathbf{m}_\text{pca}), \end{equation} where $f_W$ denotes the model transform net, the subscript $W$ indicates the trainable parameters within the network, and $\mathbf{m}_\text{cnnpca} \in \mathbb{R}^{N_{\text{c}}}$ is the resulting geomodel. For 2D models, the training loss ($L^i$) for $f_W$, for each training sample $i$, includes a content loss ($L_\text{c}^i$) and a style loss ($L_\text{s}^i$): \begin{equation} \label{eq:2Dloss} L^i = L_\text{c}^i(f_W(\mathbf{m}_\text{pca}^i), \mathbf{m}_\text{pca}^i) + L_\text{s}^i(f_W(\mathbf{m}_\text{pca}^i), M_\text{ref}), \hspace{8px} i=1,..,N_{\text{t}}, \end{equation} where $\mathbf{m}_\text{pca}^i, i=1,..,N_{\text{t}}$ is a training set (of size $N_{\text{t}}$) of random new PCA models, and $M_\text{ref}$ is a reference model (e.g., training image or an original realization $\mathbf{m}_\text{gm}$). Here $L_\text{c}^i$ quantifies the `closeness' of $f_W(\mathbf{m}_\text{pca}^i)$ to $\mathbf{m}_\text{pca}^i$, and acts to ensure that the post-processed model resembles (to some extent) the input PCA model. The style loss $L_\text{s}^i$ quantifies the resemblance of $f_W(\mathbf{m}_\text{pca}^i)$ to $M_\text{ref}$ in terms of spatial correlation structure. As noted in Section~\ref{sec-pca}, the spatial correlation of non-Gaussian models can be characterized by high-order multiple-point statistics. It is not practical, however, to compute such quantities directly. Therefore, $L_\text{s}^i$ in Eq.~\ref{eq:2Dloss} is not based on high-order spatial statistics but rather on low-order statistics of features extracted from another pretrained CNN, referred to as the loss net \citep{Gatys2015, Johnson2016}. 
More specifically, we feed the 2D models $\mathbf{m}_\text{gm}$, $\mathbf{m}_\text{pca}$ and $f_W(\mathbf{m}_\text{pca})$ through the loss net and extract intermediate feature matrices $F_k(\mathbf{m})$ from different layers $k \in \kappa$ of the loss net. The uncentered covariance matrices, called Gram matrices, are given by $G_k(\mathbf{m}) = F_k(\mathbf{m})F_k(\mathbf{m})^T/(N_{\text{c},k}\Nzz{k})$, where $N_{\text{c},k}$ and $\Nzz{k}$ are the dimensions of $F_k(\mathbf{m})$. These matrices have been shown to provide an effective set of metrics for quantifying the multipoint correlation structure of 2D models. The style loss is thus based on the differences between $f_W(\mathbf{m}_\text{pca}^i)$ and reference model $M_\text{ref}$ in terms of their corresponding Gram matrices. This is expressed as \begin{equation} \label{eq-ls} L_\text{s}^i(f_W(\mathbf{m}_\text{pca}^i), M_\text{ref}) = \sum_{k \in \kappa}\dfrac{1}{\Nzz{k}^2}||G_k(f_W(\mathbf{m}_\text{pca}^i)) - G_k(M_\text{ref})||, \hspace{8px} i=1,..,N_{\text{t}}. \end{equation} The content loss is based on the difference between the feature matrices for $f_W(\mathbf{m}_\text{pca}^i)$ and $\mathbf{m}_\text{pca}^i$ from a particular layer in the network. For the 2D models considered in \cite{Liu2019} and \cite{Liu2020}, VGG net was used as the loss net \citep{Simonyan2015a}. We have a concept of transfer learning here as the VGG net, pretrained on image classification, was shown to be effective at extracting features from 2D geomodels. However, since VGG net only accepts image-like 2D input, it cannot be directly used for extracting features from 3D geological models. Our extension of CNN-PCA to 3D involves two main components: the replacement of VGG net with 3D CNN models, and the use of a new loss term based on supervised learning. We now describe the 3D CNN-PCA formulation. \subsection{3D CNN-PCA Formulation} We experimented with several pretrained 3D CNNs for extracting features from 3D geomodels, which are required to compute style loss in 3D CNN-PCA. These include VoxNet \citep{Maturana2015} and LightNet \citep{Zhi2017} for 3D object recognition, and C3D net \citep{Tran2015} for classification of video clips in the sports-1M dataset \citep{Karpathy2014}. These CNNs accept input as dense 3D tensors with either three spatial dimensions or two spatial dimensions and a temporal dimension. Thus all are compatible for use with 3D geomodels. After numerical experimentation with these various networks, we found the C3D net to perform the best, based on visual inspection of the geomodels generated by the trained model transform nets. Therefore, we use the C3D net in our 3D CNN-PCA formulation. We observed, however, that the Gram matrices extracted from the C3D net were not as effective as they were with VGG net in 2D. This is likely due to the higher dimensionality and larger degree of variability of the 3D geomodels relative to those in 2D. We therefore considered additional treatments, and found that the use of a new supervised-learning-based loss term provides enhanced 3D CNN-PCA geomodels. We now describe this procedure. As discussed in Section~\ref{sec-pca}, we can approximately reconstruct realizations of the original model $\mathbf{m}_\text{gm}$ with PCA using Eq.~\ref{eq_pca_recon}. There is however reconstruction error between the two sets of models. The supervised learning component entails training the model transform net to minimize an appropriately defined reconstruction error. 
Recall that when the trained model transform net is used at test time (e.g., to generate new random models or to calibrate geomodels during history matching), this entails post-processing new PCA models $\mathbf{m}_\text{pca}(\boldsymbol{\xi}_l)$. Importantly, these new models involve $\boldsymbol{\xi}_l$ rather than $\hat{\boldsymbol{\xi}}_l^i$. In other words, at test time we do not have corresponding pairs of $(\mathbf{m}_\text{gm}^i, \hat{\mathbf{m}}_\text{pca}^i)$. Thus, during training, it is beneficial to partially `disrupt' the direct correspondence that exists between each pair of $(\mathbf{m}_\text{gm}^i, \hat{\mathbf{m}}_\text{pca}^i)$. An effective way of accomplishing this is to perturb the reconstructed PCA models used in training. We proceed by adding random noise to the $\hat{\boldsymbol{\xi}}_l^i$ in Eq.~\ref{eq_pca_proj}; i.e., \begin{equation} \Tilde{\boldsymbol{\xi}}_l^i = \hat{\boldsymbol{\xi}}_l^i + \boldsymbol{\epsilon}^i = \Sigma^{-1}_lU^T_l(\mathbf{m}_\text{gm}^i - \bar{\mathbf{m}}_\text{gm}) + \boldsymbol{\epsilon}^i, \hspace{8px} i=1,...,N_{\text{r}}, \label{eq_pca_proj_perturb} \end{equation} where $\Tilde{\boldsymbol{\xi}}_l^i$ denotes the perturbed low-dimensional variable and $\boldsymbol{\epsilon}^i$ is a perturbation term. Then we can approximately reconstruct $\mathbf{m}_\text{gm}^i$ with \begin{equation} \label{eq_pca_recon_perturb} \Tilde{\mathbf{m}}_\text{pca}^i = \bar{\mathbf{m}}_\text{gm} + U_l\Sigma_l \Tilde{\boldsymbol{\xi}}_l^i, \hspace{8px} i=1,...,N_{\text{r}}. \end{equation} We now describe how we determine $l$ and specify $\boldsymbol{\epsilon}$. To find $l$, we apply the `energy' criterion described in \cite{Sarma2006} and \cite{Vo2014}. This entails first determining the total energy $E_\text{t} = \sum_{i=1}^{N_{\text{r}}-1}(\sigma^i)^2$, where $\sigma^i$ are the singular values. The fraction of energy captured by the $l$ leading singular values is given by $\sum_{i=1}^{l}(\sigma^i)^2 / E_\text{t}$. Throughout this study, we determine $l$ such that the $l$ leading singular values explain \textapprox80\% of the total energy. For the components of the perturbation term $\boldsymbol{\epsilon}$, we set $\epsilon_j = 0$ for $j=1,...,p$, where $p$ is determined such that the first $p$ leading singular values explain \textapprox40\% of the total energy, and $\epsilon_j \mathtt{\sim} N(0,1)$ for $j=p+1,...,l$. With this treatment, we perturb only the small-scale features in $\Tilde{\mathbf{m}}_\text{pca}$. This approach was found to be effective as it disrupts the precise correspondence between $(\mathbf{m}_\text{gm}^i, \hat{\mathbf{m}}_\text{pca}^i)$ pairs, while maintaining the locations of major geological features. We use the same $N_{\text{r}}$ realizations of $\mathbf{m}_\text{gm}$ as were used for constructing the PCA representation for the generation of $\Tilde{\mathbf{m}}_\text{pca}$. We reiterate that Eqs.~\ref{eq_pca_proj_perturb} and \ref{eq_pca_recon_perturb} are used here (Eqs.~\ref{eq_pca_proj} and \ref{eq_pca_recon} are not applied). The supervised-learning loss function for each pair of $(\mathbf{m}_\text{gm}^i, \Tilde{\mathbf{m}}_\text{pca}^i)$, which we refer to as the reconstruction loss, is given by \begin{equation} \label{eq:3D_rec_loss} L_\text{rec}^i(\mathbf{m}_\text{gm}^i, f_W(\Tilde{\mathbf{m}}_\text{pca}^i)) = ||\mathbf{m}_\text{gm}^i- f_W(\Tilde{\mathbf{m}}_\text{pca}^i)||_1, \hspace{8px} i=1,...,N_{\text{r}}. \end{equation} Note that in 3D CNN-PCA we take $N_{\text{t}}=N_{\text{r}}$ in all cases. 
The style loss is evaluated using a separate set of $N_{\text{r}}$ new PCA models $\mathbf{m}_\text{pca}^i(\boldsymbol{\xi}_l^i)$, $i=1,...,N_{\text{r}}$, with $\boldsymbol{\xi}_l^i$ sampled from \textcolor{blue}{$N(\mathbf{0},I_l)$}. As for the reference model $M_\text{ref}$, in 2D CNN-PCA we used either a reference training image or one realization of $\mathbf{m}_\text{gm}$. Here we generate realizations of the original geomodel using object-based techniques in Petrel, so there is no reference training image. We therefore use realizations of $\mathbf{m}_\text{gm}$ to represent the reference. Instead of using one particular realization, all $N_{\text{r}}$ realizations of $\mathbf{m}_\text{gm}$ (in turn) are considered as reference models. Specifically, we use $\mathbf{m}_\text{gm}^i$ as the reference model for new PCA model $\mathbf{m}_\text{pca}^i$. It is important to emphasize that $\mathbf{m}_\text{gm}^i$ and $\mathbf{m}_\text{pca}^i$ are completely unrelated in terms of the location of geological features -- we are essentially assigning a random reference model ($\mathbf{m}_\text{gm}^i$) for each $\mathbf{m}_\text{pca}^i$. However, because the style loss is based on summary spatial statistics, the exact location of geological features does not affect the evaluation of the loss. The style loss between $\mathbf{m}_\text{gm}^i$ and the (non-corresponding) new PCA model $\mathbf{m}_\text{pca}^i$ is given by \begin{equation} \label{eq:3D_style_loss} L_\text{s}^i(\mathbf{m}_\text{gm}^i, f_W(\mathbf{m}_\text{pca}^i)) = \sum_{k \in \kappa}\dfrac{1}{\Nzz{k}^2}||G_k(\mathbf{m}_\text{gm}^i) - G_k(f_W(\mathbf{m}_\text{pca}^i))||_1, \hspace{8px} i=1,...,N_{\text{r}}, \end{equation} where $G_k$ are Gram matrices based on features extracted from different layers in the C3D net. The C3D net consists of four blocks of convolutional and pooling layers. Here we use the last convolutional layer of each block, which corresponds to $k= 1, 2, 4, 6$. Details on the network architecture are provided in SI. A hard data loss term is also include to assure hard data (e.g., facies type at well locations) are honored. Hard data loss $L_\text{h}^i$ is given by \begin{equation} L_\text{h}^i = \dfrac{1}{N_{\text{h}}}\left[\Bh^T(\mathbf{m}_\text{gm}^i-f_W(\mathbf{m}_\text{pca}^i))^2 + \Bh^T(\mathbf{m}_\text{gm}^i-f_W(\Tilde{\mathbf{m}}_\text{pca}^i))^2\right], \hspace{8px} i=1,...,N_{\text{r}}, \end{equation} where $\Bh$ is a selection vector, with $h_j=1$ indicating the presence of hard data at cell $j$ and $h_j=0$ the absence of hard data, and $N_{\text{h}}$ is the total number of hard data. The final training loss is a weighted combination of the reconstruction loss, style loss and hard data loss. For each pair of corresponding $(\mathbf{m}_\text{gm}^i, \Tilde{\mathbf{m}}_\text{pca}^i)$, and the unrelated new PCA model $\mathbf{m}_\text{pca}^i$, the total training loss is thus \begin{equation} \label{eq_cnnpca_loss} L^i = \gamma_r L_\text{rec}^i(\mathbf{m}_\text{gm}^i, f_W(\Tilde{\mathbf{m}}_\text{pca}^i)) + \gamma_s L_\text{s}^i(\mathbf{m}_\text{gm}^i, f_W(\mathbf{m}_\text{pca}^i)) + \gamma_h L_\text{h}^i, \hspace{8px} i=1,...,N_{\text{r}}. \end{equation} The three weighting factors $\gamma_r$, $\gamma_s$ and $\gamma_h$ are determined heuristically by training the network with a range of values and selecting the combination that leads to the lowest mismatch in quantities of interest (here we consider flow statistics) relative to the original (Petrel) geomodels. 
We also require that at least 99.9\% of the hard data are honored over the entire set of $\mathbf{m}_\text{cnnpca}$ models. The training set is divided into multiple mini-batches, and the total loss for each mini-batch of samples is \begin{equation} \label{eq_cnnpca_loss_total} L_\text{t} = \sum_{i=1}^{N_{\text{b}}} L^i, \end{equation} where $N_{\text{b}}$ is the batch size. \begin{figure}[!htb] \centering \includegraphics[width=1\textwidth]{cnnpca_3d_train.jpg} \caption{Training procedure for 3D CNN-PCA.} \label{fig-cnncpa-train} \end{figure} The model transform net for 3D CNN-PCA is obtained by replacing the 2D convolutional layers, upsampling layers, downsampling layers and padding layers in 2D CNN-PCA \citep{Liu2019} with their 3D counterparts. This can be readily accomplished within the PyTorch deep-learning framework \citep{Paszke2017}. The training procedure for 3D CNN-PCA is illustrated in Fig.~\ref{fig-cnncpa-train}. Each training sample consists of a pair of corresponding models $(\mathbf{m}_\text{gm}^i, \Tilde{\mathbf{m}}_\text{pca}^i)$ and an unrelated new PCA model $\mathbf{m}_\text{pca}^i$. The new PCA model $\mathbf{m}_\text{pca}^i$ and the reconstructed PCA model $\Tilde{\mathbf{m}}_\text{pca}^i$ are fed through the model transform net $f_W$. The reconstruction loss is evaluated using $f_W(\Tilde{\mathbf{m}}_\text{pca}^i)$ and the original model $\mathbf{m}_\text{gm}^i$, via Eq.~\ref{eq:3D_rec_loss}. To evaluate the style loss, $f_W(\mathbf{m}_\text{pca}^i)$ and $\mathbf{m}_\text{gm}^i$ are fed through the C3D net, and the relevant feature matrices are extracted. Then, Gram matrices are computed to form the style loss (Eq.~\ref{eq:3D_style_loss}). The final loss entails a weighted combination of the reconstruction loss, style loss and hard data loss. The trainable parameters in $f_W$ are updated based on the gradient of the loss computed with back-propagation. This process iterates over all mini-batches and continues for a specified number of epochs. The detailed architectures for the model transform and C3D nets are provided in SI. \section{Geomodels and Flow Results Using 3D CNN-PCA} \label{sec-model-gen} We now apply the 3D CNN-PCA procedure to generate geomodels corresponding to three different geological scenarios (binary channelized, three-facies, and bimodal channelized systems). Visualizations of the CNN-PCA models, along with results for key flow quantities, are presented. All flow simulations in this work are performed using Stanford's Automatic Differentiation General Purpose Research Simulator, ADGPRS \citep{zhou2012parallel}. \subsection{Case 1 -- Binary Channelized System} The first case involves a channelized system characterized by rock facies type, with 1 denoting high-permeability sandstone and 0 indicating low-permeability mud. The geomodels are defined on a $60\times60\times40$ grid (144,000 total cells). The average sand fraction is 6.53\%. Figure~\ref{fig-chan-petrel} displays four random facies realizations generated using object-based modeling within Petrel (sandstone is shown in red, and mud in blue).
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-real5.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-real17.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-real35.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-real110.jpg} \end{subfigure}% \caption{Four realizations of the binary channel system generated using Petrel (Case~1).} \label{fig-chan-petrel} \end{figure} \begin{figure}[!htb] \centering \begin{subfigure}[b]{1\textwidth} \includegraphics[width=0.24\textwidth]{case7-2-pca-real1.jpg} \includegraphics[width=0.24\textwidth]{case7-2-pca-real2.jpg} \includegraphics[width=0.24\textwidth]{case7-2-pca-real30.jpg} \includegraphics[width=0.24\textwidth]{case7-2-pca-real84.jpg} \caption{PCA models} \end{subfigure}% \begin{subfigure}[b]{1\textwidth} \includegraphics[width=0.24\textwidth]{case7-2-tpca-real1.jpg} \includegraphics[width=0.24\textwidth]{case7-2-tpca-real2.jpg} \includegraphics[width=0.24\textwidth]{case7-2-tpca-real30.jpg} \includegraphics[width=0.24\textwidth]{case7-2-tpca-real84.jpg} \caption{T-PCA models} \end{subfigure}% \begin{subfigure}[b]{1\textwidth} \includegraphics[width=0.24\textwidth]{case7-2-cnnpca-real1.jpg} \includegraphics[width=0.24\textwidth]{case7-2-cnnpca-real2.jpg} \includegraphics[width=0.24\textwidth]{case7-2-cnnpca-real30.jpg} \includegraphics[width=0.24\textwidth]{case7-2-cnnpca-real84.jpg} \caption{CNN-PCA models} \end{subfigure}% \caption{Four test-set realizations of the binary channel system from (a) PCA models, (b) corresponding truncated-PCA (T-PCA) models, and (c) corresponding CNN-PCA models (Case~1).} \label{fig_case1_models} \end{figure} In this case, there are two production wells and two injection wells. The well locations are given in Table~\ref{tab_well_case1}. All wells are assumed to be drilled through all 40~layers of the model, and hard data are specified (meaning $h_j=1$) in all blocks penetrated by a well. Wells are perforated (open to flow), however, only in blocks characterized as sand; the perforated layers are indicated in Table~\ref{tab_well_case1}. A total of $N_{\text{r}}=3000$ conditional realizations $\mathbf{m}_\text{gm}^i$ are generated to construct the PCA model (through application of Eq.~\ref{eq_center_data_matrix}). A total of $l=400$ singular values are retained, which explain $\sim$80\% of the total energy. Then, $N_{\text{r}}=3000$ reconstructed PCA models $\Tilde{\mathbf{m}}_\text{pca}^i$ are generated using Eqs.~\ref{eq_pca_proj_perturb} and \ref{eq_pca_recon_perturb}. The first $p=40$ principal components explain $\sim$40\% of the energy, so perturbation is only applied to $\xi_j$ for $j=41, ..., 400$. A separate set of $N_{\text{r}}=3000$ random PCA models $\mathbf{m}_\text{pca}^i$ is generated by sampling $\boldsymbol{\xi}_l$ from $N(\mathbf{0},I_l)$.
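Sampling these new PCA models is immediate (a sketch continuing the earlier snippet; \texttt{l}, \texttt{m\_bar}, \texttt{U\_l} and \texttt{s\_l} are the PCA quantities defined there):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
xi_new = rng.standard_normal(l)            # xi_l ~ N(0, I_l), here l = 400
m_pca_new = m_bar + U_l @ (s_l * xi_new)   # new PCA model m_pca(xi_new)
\end{verbatim}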
\begin{table}[!htb] \centering \begin{tabular}{ c | c | c | c |c } & P1 & P2 & I1 & I2 \\ \hline Areal location ($x,~y$)&(15, 57)&(45, 58)&(15, 2)&(45, 3)\\ Perforated layers&18 - 24&1 - 8&15 - 22&1 - 8\\ \end{tabular} \caption{Well locations ($x$ and $y$ refer to areal grid-block indices) and perforations (Case~1)} \label{tab_well_case1} \end{table} The $N_{\text{r}}=3000$ realizations of $\mathbf{m}_\text{gm}^i$, $\Tilde{\mathbf{m}}_\text{pca}^i$ and $\mathbf{m}_\text{pca}^i$ form the training set for the model transform net $f_W$. The weighting factors for the training loss in Eq.~\ref{eq_cnnpca_loss} are $\gamma_r=500$, $\gamma_s=100$, and $\gamma_h = 10$. These values were found to provide accurate flow statistics and near-perfect hard-data honoring. The Adam optimizer \citep{Kingma2014} is used for updating parameters in $f_W$, with a default learning rate of $l_r=0.001$ and a batch size of $N_{\text{b}}=8$. The model transform net is trained for 10~epochs, which requires about 0.5~hours on one Tesla V100 GPU. \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-petrel-train-chan-real21.jpg} \caption{One Petrel model} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-opca-chan-real41.jpg} \caption{One T-PCA model} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-cnnpca-chan-real41.jpg} \caption{One CNN-PCA model} \end{subfigure}% \caption{3D channel geometry from (a) one new Petrel model, (b) one T-PCA model, and (c) corresponding CNN-PCA model. The T-PCA and CNN-PCA models correspond to the leftmost geomodels in Fig.~\ref{fig_case1_models}b and c (Case~1).} \label{fig_case1_chans} \end{figure} After training, 200 new Petrel realizations and 200 new PCA models are generated. The new PCA models are fed through the trained model transform net to obtain the CNN-PCA models. Truncation is performed on the new PCA models and on the CNN-PCA models to render them strictly binary. Specifically, cutoff values are determined for each set of models such that the final sand-facies fraction of each set matches that of the Petrel models (6.53\%). These models, along with the new Petrel realizations, comprise the test sets. Figure~\ref{fig_case1_models} presents four test-set PCA models (Fig.~\ref{fig_case1_models}a), the corresponding truncated-PCA models (denoted T-PCA, Fig.~\ref{fig_case1_models}b) and the corresponding CNN-PCA models (after truncation, Fig.~\ref{fig_case1_models}c). Figure~\ref{fig_case1_chans} displays the 3D channel geometry for a Petrel model (Fig.~\ref{fig_case1_chans}a), a truncated-PCA model (Fig.~\ref{fig_case1_chans}b), and the corresponding CNN-PCA model (Fig.~\ref{fig_case1_chans}c). From Figs.~\ref{fig_case1_models} and \ref{fig_case1_chans}, it is apparent that the CNN-PCA models preserve geological realism much better than the truncated-PCA models. More specifically, the CNN-PCA models display intersecting channels of continuity, width, sinuosity and depth consistent with the reference Petrel models. \begin{figure}[!htb] \centering \includegraphics[width=0.32\textwidth]{kr.jpg} \caption{Relative permeability curves for all flow simulation models.} \label{fig_rel_perm} \end{figure} In addition to visual inspection, it is important to assess the CNN-PCA models quantitatively. We computed static channel connectivity metrics as well as flow responses for all of the test sets.
The connectivity metrics suggested by \cite{Pardo-Iguzquiza2003} were found to be informative in our 2D study \citep{Liu2019}, but for the 3D cases considered here they appear to be too global to capture key interconnected-channel features that impact flow. Thus we focus on flow responses in the current assessment. The flow setup involves aqueous and nonaqueous liquid phases. These can be viewed as NAPL and water in the context of an aquifer remediation project, or as oil and water in the context of oil production via water injection. Our terminology will correspond to the latter application. Each grid block in the geomodel is of dimension 20~m in the $x$ and $y$ directions, and 5~m in the $z$ direction. Water viscosity is constant at 0.31~cp. Oil viscosity varies with pressure; it is 1.03~cp at a pressure of 325~bar. Relative permeability curves are shown in Fig.~\ref{fig_rel_perm}. The initial pressure of the reservoir (bottom layer) is 325~bar. The production and injection wells operate at constant bottom-hole pressures (BHPs) of 300~bar and 340~bar, respectively. The simulation time frame is 1500~days. The permeability and porosity for grid blocks in channels (sand) are $k=2000$~md and $\phi=0.2$. Grid blocks in mud are characterized by $k=20$~md and $\phi=0.15$. \begin{figure}[!htb] \begin{subfigure}[b]{1.0\textwidth} \centering \includegraphics[width=0.4\textwidth]{case7-2-opcapetrel-Field_OPR.jpg} \includegraphics[width=0.4\textwidth]{case7-2-cnnpcapetrel-Field_OPR.jpg} \caption{Field oil rate} \end{subfigure}% \begin{subfigure}[b]{1.0\textwidth} \centering \includegraphics[width=0.4\textwidth]{case7-2-opcapetrel-Field_WPR.jpg} \includegraphics[width=0.4\textwidth]{case7-2-cnnpcapetrel-Field_WPR.jpg} \caption{Field water rate} \end{subfigure}% \begin{subfigure}[b]{1.0\textwidth} \centering \includegraphics[width=0.4\textwidth]{case7-2-opcapetrel-Field_WIR.jpg} \includegraphics[width=0.4\textwidth]{case7-2-cnnpcapetrel-Field_WIR.jpg} \caption{Field water injection rate} \end{subfigure}% \caption{Comparison of Petrel and T-PCA (left) and Petrel and CNN-PCA (right) field-wide flow statistics over ensembles of 200 (new) test cases. Red, blue and black curves represent results from Petrel, T-PCA, and CNN-PCA, respectively. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~1).} \label{fig_case1_flow_stats_field} \end{figure} \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-opcapetrel-P1_OPR.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-opcapetrel-P2_WPR.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-opcapetrel-I2_WIR.jpg} \end{subfigure}% \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-cnnpcapetrel-P1_OPR.jpg} \caption{P1 oil production rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-cnnpcapetrel-P2_WPR.jpg} \caption{P2 water production rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-cnnpcapetrel-I2_WIR.jpg} \caption{I2 water injection rate} \end{subfigure}% \caption{Comparison of Petrel and T-PCA (top) and Petrel and CNN-PCA (bottom) well-by-well flow statistics over ensembles of 200 (new) test cases.
Red, blue and black curves represent results from Petrel, T-PCA, and CNN-PCA, respectively. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~1).} \label{fig_case1_flow_stats_well} \end{figure} Flow results are presented in terms of $\text{P}_{10}$, $\text{P}_{50}$ and $\text{P}_{90}$ percentiles for each test set. These results are determined, at each time step, based on the results for all 200 models. Results at different times correspond, in general, to different geomodels within the test set. Figure~\ref{fig_case1_flow_stats_field} displays field-level results for the 200 test-set Petrel models (red curves), truncated-PCA models (T-PCA, blue curves), and CNN-PCA models (black curves). Figure~\ref{fig_case1_flow_stats_well} presents individual well responses (the wells shown have the highest cumulative phase production or injection). The significant visual discrepancies between the truncated-PCA and Petrel geomodels, observed in Figs.~\ref{fig_case1_models} and \ref{fig_case1_chans}, are reflected in the flow responses, where large deviations are evident. The CNN-PCA models, by contrast, provide flow results in close agreement with the Petrel models, for both field and well-level predictions. The CNN-PCA models display slightly higher field water rates, which may be due to a minor overestimation of channel connectivity or width. The overall match in $\text{P}_{10}$--$\text{P}_{90}$ results, however, demonstrates that the CNN-PCA geomodels exhibit the same level of variability as the Petrel models. This feature is important for the proper quantification of uncertainty. \subsection{Impact of Style Loss on CNN-PCA Geomodels} We now briefly illustrate the impact of style loss by comparing CNN-PCA geomodels generated with and without this loss term. Comparisons of areal maps for different layers in different models are shown in Fig.~\ref{fig_model_style_loss_impact}. Maps for three PCA models appear in Fig.~\ref{fig_model_style_loss_impact}a, the corresponding T-PCA models in Fig.~\ref{fig_model_style_loss_impact}b, CNN-PCA models without style loss ($\gamma_s=0$) in Fig.~\ref{fig_model_style_loss_impact}c, and CNN-PCA models with style loss ($\gamma_s=100$) in Fig.~\ref{fig_model_style_loss_impact}d. The flow statistics for CNN-PCA models without style loss are presented in Fig.~\ref{fig_case1_flow_stats_no_style_loss}. Hard data loss is included in all cases. The CNN-PCA models with reconstruction loss alone ($\gamma_s=0$) are more realistic visually, and result in more accurate flow responses, than the truncated-PCA models. However, the inclusion of style loss clearly acts to improve the CNN-PCA geomodels. This is evident in both the areal maps and the field-level flow responses (compare Fig.~\ref{fig_case1_flow_stats_no_style_loss} to Fig.~\ref{fig_case1_flow_stats_field} (right)). We note finally that in a smaller 3D example considered in \cite{TangLiu2020}, the use of reconstruction loss alone was sufficient to achieve well-defined channels in the CNN-PCA geomodels (and accurate flow statistics). That case involved six wells (and thus more hard data) spaced closer together than in the current example. This suggests that style loss may be most important when hard data are limited, or do not act to strongly constrain channel geometry. 
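As an aside, the $\text{P}_{10}$/$\text{P}_{50}$/$\text{P}_{90}$ summaries used in all of these comparisons are simple per-time-step ensemble percentiles, e.g. (a sketch with placeholder data dimensions):
\begin{verbatim}
import numpy as np

rates = np.random.rand(200, 150)  # placeholder: 200 test models x 150 report times
p10, p50, p90 = (np.percentile(rates, q, axis=0) for q in (10, 50, 90))
\end{verbatim}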
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\textwidth]{pca_layer_1.jpg} \caption{Areal maps from PCA models} \end{subfigure}% \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\textwidth]{tpca_layer_1.jpg} \caption{Areal maps from T-PCA models} \end{subfigure}% \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\textwidth]{cnnpca_sw0_layer_1.jpg} \caption{Areal maps from CNN-PCA models with $\gamma_s=0$} \end{subfigure}% \begin{subfigure}[b]{0.65\textwidth} \includegraphics[width=1\textwidth]{cnnpca_sw100_layer_1.jpg} \caption{Areal maps from CNN-PCA models with $\gamma_s=100$} \end{subfigure}% \caption{Areal maps for layer 19 (left column) and layer 1 (middle and right columns) from (a) three different PCA models, (b) corresponding T-PCA models, (c) corresponding CNN-PCA models without style loss and (d) corresponding CNN-PCA models with style loss (Case~1).} \label{fig_model_style_loss_impact} \end{figure} \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-sw0cnnpcapetrel-Field_OPR.jpg} \caption{Field oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-sw0cnnpcapetrel-Field_WPR.jpg} \caption{Field water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-2-sw0cnnpcapetrel-Field_WIR.jpg} \caption{Field water injection rate} \end{subfigure}% \caption{Comparison of Petrel (red curves) and CNN-PCA without style loss (black curves) field-wide flow statistics over ensembles of 200 (new) test cases. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~1).} \label{fig_case1_flow_stats_no_style_loss} \end{figure} \subsection{Case 2 -- Three-Facies Channel-Levee-Mud System} This case involves three rock types, with the channel and levee facies displaying complex interconnected and overlapping geometries. The fluvial channel facies has the highest permeability. The upper portion of each channel is surrounded by a levee facies, which is of intermediate permeability. The low-permeability mud facies comprises the remainder of the system. The average volume fractions for the channel and levee facies are 8.70\% and 5.42\%, respectively. Figure~\ref{fig_case2_petrel_models} displays four realizations generated from Petrel (channel in red, levee in green, mud in blue). The width and thickness of the levees are $\sim$55--60\% of the channel width and thickness. These models again contain $60 \times 60 \times 40$ cells. In this case there are three injection and three production wells. Well locations and hard data are summarized in Table~\ref{tab_well_case2}. We considered two different ways of encoding mud, levee and channel facies. The `natural' approach is to encode mud as 0, levee as 1, and channel as 2. This treatment, however, has some disadvantages. Before truncation, CNN-PCA models are not strictly discrete, and a transition zone exists between mud and channel. This transition region will be interpreted as levee after truncation, which in turn leads to levees surrounding channels on all sides. This facies arrangement deviates from the underlying geology, where levees only appear near the upper portion of channels. In addition, since we preserve levee volume fraction, the average levee width becomes significantly smaller than it should be.
For these reasons we adopt an alternative encoding strategy. This entails representing mud as 0, levee as 2, and channel as 1. This leads to better preservation of the location and geometry of levees. \begin{table}[!htb] \centering \begin{tabular}{ c | c | c | c |c |c|c } & P1 & P2 & P3 & I1 & I2 & I3\\ \hline Areal location ($x,~y$)&(48, 48)&(58, 31)&(32, 57)&(12, 12)&(28, 3) & (4, 26)\\ Perforated layers in channel&15 - 21&25 - 30&1 - 5&15 - 20 & 25 - 30 & 1 - 6\\ Perforated layers in levee & - & - & 21 - 24 & - & 6 - 8 & 36 - 40 \\ \end{tabular} \caption{Well locations ($x$ and $y$ refer to areal grid-block indices) and perforations (Case~2)} \label{tab_well_case2} \end{table} We again generate $N_{\text{r}}=3000$ Petrel models, reconstructed PCA models and a separate set of new PCA models for training. We retain $l=800$ leading singular values, which capture $\sim$80\% of the total energy. The first 70 of these explain $\sim$40\% of the energy, so perturbation is performed on $\xi_j$, $j=71,...,800$ when generating the reconstructed PCA models. The training loss weighting factors in this case are $\gamma_r=500$, $\gamma_s=50$ and $\gamma_h=10$. Other training parameters are the same as in Case~1. Test sets of 200 new realizations are then generated. The test-set PCA models are post-processed with the model transform net to obtain the CNN-PCA test set. The CNN-PCA geomodels are then truncated to be strictly ternary, with cutoff values determined such that the average facies fractions match those from the Petrel models. Four test-set CNN-PCA realizations are shown in Fig.~\ref{fig_models_case2}. These geomodels contain features consistent with those in the Petrel models (Fig.~\ref{fig_case2_petrel_models}). In addition to channel geometry, the CNN-PCA models also capture the geometry of the levees and their location relative to channels. \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case8-3-petrel-real1.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case8-3-petrel-real2.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case8-3-petrel-real3.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case8-3-petrel-real4.jpg} \end{subfigure}% \caption{Four Petrel realizations of the three-facies system, with sand shown in red, levee in green, and mud in blue (Case~2).} \label{fig_case2_petrel_models} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.24\textwidth]{case8-3-cnnpca_real25.jpg} \includegraphics[width=0.24\textwidth]{case8-3-cnnpca_real3.jpg} \includegraphics[width=0.24\textwidth]{case8-3-cnnpca_real14.jpg} \includegraphics[width=0.24\textwidth]{case8-3-cnnpca_real34.jpg} \caption{Four test-set CNN-PCA realizations of the three-facies system, with sand shown in red, levee in green, and mud in blue (Case~2).} \label{fig_models_case2} \end{figure} We now present flow results for this case. Permeability values for the channel, levee and mud facies are 2000~md, 200~md and 20~md, respectively. Corresponding porosity values are 0.25, 0.15 and 0.05. Other simulation specifications are as in Case~1. Field-wide and individual well flow statistics (for wells with the largest cumulative phase production/injection) are presented in Fig.~\ref{fig_case2_flow_stats}.
Consistent with the results for Case~1, we observe generally close agreement between flow predictions for CNN-PCA and Petrel models. Again, the close matches in the $\text{P}_{10}$--$\text{P}_{90}$ ranges indicate that the CNN-PCA geomodels capture the inherent variability of the Petrel models. Truncated-PCA geomodels, and comparisons of flow results for these models against Petrel models, are shown in Figs.~S1 and~S2 in SI. The truncated-PCA models appear less geologically realistic, and provide much less accurate flow predictions, than the CNN-PCA geomodels. \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-Field_OPR.jpg} \caption{Field oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-Field_WPR.jpg} \caption{Field water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-Field_WIR.jpg} \caption{Field water injection rate} \end{subfigure}% \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-P1_OPR.jpg} \caption{P1 oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-P2_WPR.jpg} \caption{P2 water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case8-3-cnnpcapetrel-I2_WIR.jpg} \caption{I2 water injection rate} \end{subfigure}% \caption{Comparison of Petrel (red curves) and CNN-PCA (black curves) flow statistics over ensembles of 200 (new) test cases. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~2).} \label{fig_case2_flow_stats} \end{figure} \subsection{Case 3 -- Bimodal Channelized System} \label{sec_case3} In the previous two cases, permeability and porosity within each facies were constant. We now consider a bimodal system, where log-permeability and porosity within the two facies follow Gaussian distributions. The facies model, well locations and perforations are the same as in Case~1. The log-permeability in the sand facies follows a Gaussian distribution with mean 6.7 and variance 0.2, while in the mud facies the mean is 3.5 and the variance is 0.19. The log-permeability at well locations in all layers is treated as hard data. We use the sequential Gaussian simulation algorithm in Petrel to generate log-permeability values within each facies. The final log-permeability field is obtained using a cookie-cutter approach. We use $\mathbf{m}_\text{gm}^\text{f} \in \mathbb{R}^{N_{\text{c}}}$ to denote the facies model, and $\mathbf{m}_\text{gm}^\text{s} \in \mathbb{R}^{N_{\text{c}}}$ and $\mathbf{m}_\text{gm}^\text{m} \in \mathbb{R}^{N_{\text{c}}}$ to denote log-permeability within the sand and mud facies. The log-permeability of grid block $i$, in the bimodal system $\mathbf{m}_\text{gm} \in \mathbb{R}^{N_{\text{c}}}$, is then \begin{equation} (m_\text{gm})_i = (m_\text{gm}^\text{f})_i (m_\text{gm}^\text{s})_i + [1 - (m_\text{gm}^\text{f})_i] (m_\text{gm}^\text{m})_i, \hspace{8px} i=1,...,N_{\text{c}}. \label{eq_bimodal_cc} \end{equation} Figure~\ref{fig_case3_petrel} shows four log-permeability realizations generated by Petrel. Both channel geometry and within-facies heterogeneity are seen to vary between models. For this bimodal system we apply a two-step approach.
Specifically, CNN-PCA is used for facies parameterization, and two separate PCA models are used to parameterize log-permeability within the two facies. Since the facies model is the same as in Case~1, we use the same CNN-PCA model. Thus the reduced dimension for the facies model is $l^\text{f} = 400$. We then construct two PCA models to represent log-permeability in each facies. For these models we set $l^\text{s} = l^\text{m} = 200$, which explains $\sim$60\% of the total energy. A smaller percentage is used here to limit the overall dimension $l$ of the low-dimensional variable (note that $l=l^\text{f}+l^\text{s}+l^\text{m}$), at the cost of discarding some amount of small-scale variation within each facies. This is expected to have a relatively minor impact on flow response. To generate new PCA models, we sample $\boldsymbol{\xi}^\text{f} \in \mathbb{R}^{l^\text{f}}$, $\boldsymbol{\xi}^\text{s}\in \mathbb{R}^{l^\text{s}}$ and $\boldsymbol{\xi}^\text{m}\in \mathbb{R}^{l^\text{m}}$ separately from standard normal distributions. We then apply Eq.~\ref{eq_pca} to construct PCA models $\mathbf{m}_\text{pca}^\text{f} \in \mathbb{R}^{N_{\text{c}}}$, $\mathbf{m}_\text{pca}^\text{s} \in \mathbb{R}^{N_{\text{c}}}$ and $\mathbf{m}_\text{pca}^\text{m} \in \mathbb{R}^{N_{\text{c}}}$. The model transform net then maps $\mathbf{m}_\text{pca}^\text{f}$ to the CNN-PCA facies model (i.e., $\mathbf{m}_\text{cnnpca}^\text{f} = f_W(\mathbf{m}_\text{pca}^\text{f})$). After truncation, the cookie-cutter approach is applied to provide the final CNN-PCA bimodal log-permeability model: \begin{equation} (m_\text{cnnpca})_i = (m_\text{cnnpca}^\text{f})_i (m_\text{pca}^\text{s})_i + [1 -(m_\text{cnnpca}^\text{f})_i] (m_\text{pca}^\text{m})_i, \hspace{8px} i=1,...,N_{\text{c}}. \label{eq_bimodal_cc_cnnpca} \end{equation} Figure~\ref{fig_case3_cnnpca_models} shows four of the resulting test-set CNN-PCA geomodels. Besides preserving channel geometry, the CNN-PCA models also display large-scale property variations within each facies. The CNN-PCA models are smoother than the reference Petrel models because small-scale variations in log-permeability within each facies are not captured in the PCA representations, as noted earlier. The average histograms for the 200 test-set Petrel and CNN-PCA geomodels are shown in Fig.~\ref{fig_case3_histo}. The CNN-PCA histogram is clearly bimodal, and in general correspondence with the Petrel histogram, though variance is underpredicted in the CNN-PCA models. This is again because variations at the smallest scales have been neglected. To construct flow models, we assign permeability and porosity, for block $i$, as $k_i = \exp(m_i)$ and $\phi_i = m_i/40$. The median values for permeability within channel and mud facies are $\sim$800~md and $\sim$30~md, while those for porosity are $\sim$0.16 and $\sim$0.08. The simulation setup is otherwise the same as in Case~1. Flow statistics for Case~3 are shown in Fig.~\ref{fig_case3_flow_stats}. Consistent with the results for the previous cases, we observe close agreement between the $\text{P}_{10}$, $\text{P}_{50}$ and $\text{P}_{90}$ predictions from CNN-PCA and Petrel geomodels. There does appear to be a slight underestimation of variability in the CNN-PCA models, however, which may result from the lack of small-scale property variation. PCA geomodels and corresponding flow results for this case are shown in Figs.~S3--S5 in SI.
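Returning to the two-step construction, the cookie-cutter assembly of Eq.~\ref{eq_bimodal_cc_cnnpca} is elementwise, and can be sketched as follows (a minimal NumPy sketch; array names are illustrative):
\begin{verbatim}
import numpy as np

def cookie_cutter(m_f, m_s, m_m):
    # m_f: truncated (0/1) facies field; m_s, m_m: log-permeability
    # fields for the sand and mud facies; all arrays of length N_c
    return m_f * m_s + (1.0 - m_f) * m_m

# flow properties assigned from the bimodal log-permeability m:
#   k = np.exp(m)   (md)
#   phi = m / 40.0
\end{verbatim}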
Again, these PCA geomodels lack the realism and flow accuracy evident in the CNN-PCA geomodels. \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-petrel-eval-real1.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-petrel-eval-real3.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-petrel-eval-real39.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-petrel-eval-real51.jpg} \end{subfigure}% \vspace{0.2cm} \begin{subfigure}[b]{0.6\textwidth} \includegraphics[width=1\textwidth]{colorbar.jpg} \end{subfigure}% \caption{Four Petrel log-permeability realizations of the bimodal channelized system (Case~3).} \label{fig_case3_petrel} \end{figure} \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpca-eval-real16.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpca-eval-real128.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpca-eval-real129.jpg} \end{subfigure}% ~ \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpca-eval-real159.jpg} \end{subfigure}% \vspace{0.2cm} \begin{subfigure}[b]{0.6\textwidth} \includegraphics[width=1\textwidth]{colorbar.jpg} \end{subfigure}% \caption{Four test-set CNN-PCA log-permeability realizations of the bimodal channelized system (Case~3).} \label{fig_case3_cnnpca_models} \end{figure} \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{histo_petrel.jpg} \caption{Petrel} \end{subfigure}% ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{histo_cnnpca.jpg} \caption{CNN-PCA} \end{subfigure}% \caption{Average histograms of the 200 test-set realizations from (a) Petrel and (b) CNN-PCA (Case~3).} \label{fig_case3_histo} \end{figure} \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-Field_OPR.jpg} \caption{Field oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-Field_WPR.jpg} \caption{Field water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-Field_WIR.jpg} \caption{Field water injection rate} \end{subfigure}% \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-P1_OPR.jpg} \caption{P1 oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-P2_WPR.jpg} \caption{P2 water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{case7-4-cnnpcapetrel-I2_WIR.jpg} \caption{I2 water injection rate} \end{subfigure}% \caption{Comparison of Petrel (red curves) and CNN-PCA (black curves) flow statistics over ensembles of 200 (new) test cases. Solid curves correspond to $\text{P}_{50}$ results, lower and upper dashed curves to $\text{P}_{10}$ and $\text{P}_{90}$ results (Case~3).} \label{fig_case3_flow_stats} \end{figure} \section{History Matching using CNN-PCA} \label{sec-hm} CNN-PCA is now applied to a history matching problem involving the bimodal channelized system.
With the two-step CNN-PCA approach, the bimodal log-permeability and porosity fields are represented with three low-dimensional variables: $\boldsymbol{\xi}^{\text{f}} \in \mathbb{R}^{400}$ for the facies model, and $\boldsymbol{\xi}^{\text{s}} \in \mathbb{R}^{200}$ and $\boldsymbol{\xi}^{\text{m}} \in \mathbb{R}^{200}$ for log-permeability and porosity in sand and mud. Concatenating the three low-dimensional variables gives $\boldsymbol{\xi}_l = [\boldsymbol{\xi}^{\text{f}},\boldsymbol{\xi}^{\text{s}},\boldsymbol{\xi}^{\text{m}}] \in \mathbb{R}^{l}$, with $l=800$. This $\boldsymbol{\xi}_l$ represents the uncertain variables considered during history matching. Observed data include oil and water production rates at the two producers, and water injection rate at the two injectors, collected every 100~days for the first 500~days. This gives a total of $N_\text{d}=30$ observations. Standard deviations for the error in the observed data are 1\%, with a minimum value of 2~m$^3$/day. The leftmost Petrel realization in Fig.~\ref{fig_case3_petrel} is used as the true geomodel. Observed data are generated by performing flow simulation with this model and then perturbing the simulated production and injection data consistent with the measurement-error standard deviations. \begin{figure}[!htb] \centering \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-P1_OPR.jpg} \caption{P1 oil rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-P2_OPR.jpg} \caption{P2 oil rate} \end{subfigure}% \centering \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-P1_WPR.jpg} \caption{P1 water rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-P2_WPR.jpg} \caption{P2 water rate} \end{subfigure}% \centering \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-I1_WIR.jpg} \caption{I1 water injection rate} \end{subfigure}% ~ \begin{subfigure}[b]{0.44\textwidth} \includegraphics[width=1\textwidth]{hm-priorpost-I2_WIR.jpg} \caption{I2 water injection rate} \end{subfigure}% \caption{Prior and posterior flow results for bimodal channelized system. Gray regions represent the prior $\text{P}_{10}$--$\text{P}_{90}$ range, red points and red curves denote observed and true data, and blue dashed curves denote the posterior $\text{P}_{10}$ (lower) and $\text{P}_{90}$ (upper) predictions. The vertical dashed line divides the simulation time frame into the history-matching and prediction periods.} \label{fig_hm_data} \end{figure} We use ESMDA \citep{Emerick2013} for history matching. This algorithm has been used previously by \cite{Canchumuni2017, Canchumuni2018, Canchumuni2019a, Canchumun2020} for data assimilation with deep-learning-based geological parameterizations. ESMDA is an ensemble-based procedure that starts with an ensemble of prior uncertain variables. At each data assimilation step, uncertain variables are updated by assimilating simulated production data to observed data with inflated measurement errors. We use an ensemble size of $N_\text{e} = 200$. The prior ensemble consists of 200 random realizations of $\boldsymbol{\xi}_l \in \mathbb{R}^{800}$ sampled from the standard normal distribution. Each realization of $\boldsymbol{\xi}_l$ is then divided into $\boldsymbol{\xi}^{\text{f}}$, $\boldsymbol{\xi}^{\text{s}}$ and $\boldsymbol{\xi}^{\text{m}}$.
CNN-PCA realizations of bimodal log-permeability and porosity are then generated and simulated. In ESMDA the ensemble is updated through application of \begin{equation} \boldsymbol{\xi}_l^{u,j} = \boldsymbol{\xi}_l^j + C_{\xi d}(C_{dd} + \alpha C_{d})^{-1}(\mathbf{d}_{\text{obs}}^* - \Bd^j), \hspace{8px} j=1,...,N_\text{e}, \end{equation} where $\Bd^j \in \mathbb{R}^{N_\text{d}}$ represents simulated production data and $\mathbf{d}_{\text{obs}}^*\in \mathbb{R}^{N_\text{d}}$ denotes randomly perturbed observed data sampled from $N(\mathbf{d}_{\text{obs}}, \alpha C_{d})$. Here $C_{d}\in \mathbb{R}^{N_\text{d} \times N_\text{d}}$ is a diagonal prior covariance matrix for the measurement error and $\alpha$ is an error inflation factor. The matrix $C_{\xi d}\in \mathbb{R}^{l \times N_\text{d}}$ is the cross-covariance between $\boldsymbol{\xi}$ and $\Bd$ estimated by \begin{equation} C_{\xi d} = \dfrac{1}{N_\text{e} - 1}\sum_{j=1}^{N_\text{e}}(\boldsymbol{\xi}_l^j - \Bar{\boldsymbol{\xi}}_l)(\Bd^j - \Bar{\Bd})^T, \label{eq_cxid} \end{equation} and $C_{d d} \in \mathbb{R}^{N_\text{d} \times N_\text{d}}$ is the auto-covariance of $\Bd$ estimated by \begin{equation} C_{dd} = \dfrac{1}{N_\text{e} - 1}\sum_{j=1}^{N_\text{e}}(\Bd^j - \Bar{\Bd})(\Bd^j - \Bar{\Bd})^T. \label{eq_cdd} \end{equation} In Eqs.~\ref{eq_cxid} and \ref{eq_cdd}, the overbar denotes the mean over the $N_\text{e}$ samples at the current iteration. The updated variables $\boldsymbol{\xi}_l^{u,j}$, $j=1,...,N_\text{e}$, then represent a new ensemble. The process is applied multiple times, each time with a different inflation factor $\alpha$ and a new random perturbation of the observed data. Here we assimilate data four times using inflation factors of 9.33, 7.0, 4.0 and 2.0, as suggested by \cite{Emerick2013}. Data assimilation results are shown in Fig.~\ref{fig_hm_data}. The gray region denotes the $\text{P}_{10}$--$\text{P}_{90}$ range for the prior models, while the dashed blue curves indicate the $\text{P}_{10}$--$\text{P}_{90}$ posterior range. Red points show the observed data, and the red curves the true model response. We observe uncertainty reduction in all quantities over at least a portion of the time frame, with the observed and true data consistently falling within the $\text{P}_{10}$--$\text{P}_{90}$ posterior range. Of particular interest is the fact that substantial uncertainty reduction is achieved in water-rate predictions even though none of the producers experiences water breakthrough in the history matching period. Prior and posterior geomodels are shown in Fig.~\ref{fig_hm_logk}. In the true Petrel model (leftmost realization in Fig.~\ref{fig_case3_petrel}), wells I1 and P1, and I2 and P2, are connected via channels. In the first two prior models (Fig.~\ref{fig_hm_logk}a,~b), one or the other of these injector--producer connectivities does not exist, but it is introduced in the corresponding posterior models (Fig.~\ref{fig_hm_logk}d,~e). The third prior model (Fig.~\ref{fig_hm_logk}c) already displays the correct injector--producer connectivity, and this is indeed retained in the posterior model (Fig.~\ref{fig_hm_logk}f).
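For reference, one ESMDA assimilation step as described above might be sketched as follows (a minimal NumPy sketch; the array shapes and names are our own illustrative assumptions):
\begin{verbatim}
import numpy as np

def esmda_update(Xi, D, d_obs, C_d_diag, alpha, rng=np.random.default_rng(0)):
    # Xi: (l, N_e) ensemble of low-dimensional variables
    # D:  (N_d, N_e) simulated data for each ensemble member
    # C_d_diag: (N_d,) diagonal of the measurement-error covariance C_d
    N_e = Xi.shape[1]
    Xi_c = Xi - Xi.mean(axis=1, keepdims=True)
    D_c = D - D.mean(axis=1, keepdims=True)
    C_xid = Xi_c @ D_c.T / (N_e - 1)           # cross-covariance (Eq. eq_cxid)
    C_dd = D_c @ D_c.T / (N_e - 1)             # auto-covariance  (Eq. eq_cdd)
    # perturbed observations d_obs* ~ N(d_obs, alpha * C_d)
    d_pert = (d_obs[:, None]
              + np.sqrt(alpha * C_d_diag)[:, None]
              * rng.standard_normal((len(d_obs), N_e)))
    return Xi + C_xid @ np.linalg.solve(C_dd + alpha * np.diag(C_d_diag),
                                        d_pert - D)

# one pass of ESMDA:
# for alpha in (9.33, 7.0, 4.0, 2.0):
#     simulate D from the current ensemble, then Xi = esmda_update(...)
\end{verbatim}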
\begin{figure}[!htb] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{prior_real96.jpg} \caption{Prior model \#1} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{prior_real121.jpg} \caption{Prior model \#2} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{prior_real118.jpg} \caption{Prior model \#3} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{post_real96_case2.jpg} \caption{Posterior model \#1} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{post_real121_case2.jpg} \caption{Posterior model \#2} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=1\textwidth]{post_real118_case2.jpg} \caption{Posterior model \#3} \end{subfigure}% \vspace{0.2cm} \begin{subfigure}[b]{0.6\textwidth} \includegraphics[width=1\textwidth]{colorbar.jpg} \end{subfigure}% \caption{Log-permeability for (a-c) three prior CNN-PCA models and (d-f) corresponding posterior CNN-PCA models. } \label{fig_hm_logk} \end{figure} \FloatBarrier \section{Concluding Remarks} \label{sec-concl} In this work, the 3D CNN-PCA algorithm, a deep-learning-based geological parameterization procedure, was developed to treat complex 3D geomodels. The method entails the use of a new supervised-learning-based reconstruction loss and a new style loss based on features of 3D geomodels extracted from the C3D net, a 3D CNN pretrained for video classification. Hard data loss is also included. A two-step treatment for parameterizing bimodal (as opposed to binary) models, involving CNN-PCA representation of facies combined with PCA for within-facies variability, was also introduced. The 3D CNN-PCA algorithm was applied for the parameterization of realizations from three different geological scenarios. These include a binary fluvial channel system, a bimodal channelized system, and a three-facies channel-levee-mud system. Training required the construction of 3000 object-based Petrel models, the corresponding PCA models, and a set of random PCA models. New PCA models were then fed through the trained model transform net to generate the 3D CNN-PCA representation. The resulting geomodels were shown to exhibit geological features consistent with those in the reference models. Flow results for injection and production quantities, generated by simulating a test set of CNN-PCA models, were found to be in close agreement with simulations using reference Petrel models. Enhanced accuracy relative to truncated-PCA models was also demonstrated. Finally, history matching was performed for the bimodal channel system, using the two-step approach and ESMDA. Significant uncertainty reduction was achieved, and the posterior models were shown to be geologically realistic. There are a number of directions for future work in this general area. The models considered in this study were Cartesian and contained $60\times60\times40$ cells (144,000 total grid blocks). Practical subsurface flow models commonly contain more cells and may be defined on corner-point or unstructured grids. Extensions to parameterize systems of this type containing, e.g., $O(10^6)$ cells, should be developed. Systems with larger numbers of wells should also be considered. It will additionally be of interest to extend our treatments to handle uncertainty in the geological scenario, with a single network trained to accomplish multi-scenario style transfer. 
Finally, the development of parameterizations for discrete fracture systems should be addressed. \section*{Computer Code Availability} Computer code and datasets will be available upon publication. \begin{acknowledgements} We thank the industrial affiliates of the Stanford Smart Fields Consortium for financial support. We are also grateful to the Stanford Center for Computational Earth \& Environmental Science for providing the computing resources used in this work. We also thank Obi Isebor, Wenyue Sun and Meng Tang for useful discussions, and Oleg Volkov for help with the ADGPRS software. \end{acknowledgements} \bibliographystyle{spbasic}
{ "timestamp": "2020-07-17T02:22:01", "yymm": "2007", "arxiv_id": "2007.08478", "language": "en", "url": "https://arxiv.org/abs/2007.08478" }
\section{Introduction} \label{intro} \textit{Functional data analysis} \citep{ramsay2013functional,horvath2012inference,ferraty2006nonparametric} considers statistical problems where the data and parameter spaces consist of \emph{functions} and \emph{operators}. The probabilistic models for such data/parameters usually involve notions of random elements in infinite dimensional Hilbert spaces and related (linear) operators, and their theoretical analysis involves many challenges deviating from those typically encountered in multivariate analysis. Namely, the analysis of infinite dimensional problems requires tools from functional analysis, while many standard inference problems become ill-posed. A (temporal) sequence of functional random elements is then called a \textit{functional time series} and constitutes a probabilistic framework for scenarios where functions are collected sequentially and subject to dependencies. Examples of such data include daily profiles of meteorological variables \citep{hormann2010weakly,rubin2020sparsely}, traffic data \citep{klepsch2017prediction}, DNA string dynamics \citep{tavakoli2016detecting}, or intra-day trading data \citep{cerovecki2019functional}. The development of functional time series historically started with the generalisation of univariate and multivariate time series models to infinite dimensions, and has evolved by gradual generalisation. The functional autoregressive (FAR) process was defined by \citet{bosq1999autoregressive,mas2007weak}, prediction for the functional moving average (FMA) process was studied by \citet{chen2016functional}, and the two concepts were combined into the functional autoregressive moving average (FARMA) process by \citet{klepsch2017prediction}. More recently, long-range dependence was incorporated into these models by \citet{li2019long}, who defined the functional autoregressive fractionally integrated moving average (FARFIMA) process. A detailed treatment of the foundations of linear functional processes can be found in \citet{bosq2012linear}. A different line of development in the functional time series domain abandoned the linear process structure, and investigated more general stationary sequences from the point of view of weak dependence. \citet{hormann2010weakly} considered weakly dependent data and studied the estimation of the long-run covariance operator, and \citet{horvath2013estimation} established a central limit theorem for weakly dependent functional data. Additional univariate and multivariate methods serving estimation, prediction, or testing problems have been adapted to the functional time series setting \citep{aue2017estimating,aue2015prediction,aue2017functional,laurini2014dynamic,hormann2013functional,gorecki2018testing,gao2019high}. Parallel to the time domain approaches, the statistical analysis of functional time series has also been fruitful in the spectral domain. The foundations for frequency domain methods were established in \citet{panaretos2013fourier}, while \citet{panaretos2013cramer} and \citet{hormann2015dynamic} introduced dimension reduction techniques based on the harmonic/dynamic principal component analysis.
The spectral domain tools have been successfully used to solve other problems, such as functional lagged regression \citep{hormann2015estimation,pham2018methodology,rubin2019functional}, stationarity testing \citep{horvath2014testing}, periodicity detection \citep{hormann2018testing}, two-sample testing \citep{tavakoli2016detecting}, and white noise testing \citep{zhang2016white}, to mention but a few. The spectral analysis of functional time series was generalised by the introduction of the notion of the \textit{weak} spectral density operator \citep{tavakoli2014}, which allows for the analysis of long-range dependent functional time series. Some spectral domain results for possibly long-range dependent Gaussian processes are established by \citet{ruiz2019spectral}. Any methodological development in functional time series will be accompanied by a finite sample performance assessment of the novel method, given the complexity of the data involved. Such simulations require the generation of functional time series with prescribed model dynamics. Despite many new methods being generally applicable to time series (whether linear or not), their assessment is carried out predominantly on simulated data coming from FARMA processes, typically functional AR processes, because their simulation is straightforward in the time domain: one applies the autoregressive equation sequentially to white noise (or to a moving average of white noise). In order to assess the applicability of a method beyond linear processes, however, one should aim to cover as broad as possible a range of possible functional time series dynamics (including nonlinear dynamics). This is especially true for methods that are not specific to linear processes but whose assumptions, theory, and implementation are more generally valid. Indeed, many functional time series methods \citep{hormann2015dynamic,hormann2015estimation,zhang2016white,tavakoli2016detecting} rely on the eigendecomposition of spectral density operators (the harmonic/dynamic principal components) and present performance tradeoffs that are best captured by their spectral structure. It is thus beneficial to be able to simulate functional time series specified by means of their spectral density structure. The objective of this article is to develop a general-purpose simulation method that is able to efficiently simulate stationary functional time series not restricted to the linear class. The approach is to use the spectral specification of such a time series, by means of its \emph{spectral density operator}. The general method, presented in Section~\ref{sec6:simulation_in_spectral_domain}, hinges on a discretisation and dimension reduction of the functional Cram\'er representation \citep{panaretos2013cramer}. It simulates an ensemble of independent complex random elements whose covariance operators match the designated spectral density operators, and transposes this ensemble into the time domain by means of the (inverse) fast Fourier transform. We show that this strategy is particularly effective when the series is defined by means of the eigendecomposition of its spectral density operator or by filtering a white noise, but consider various other specification scenarios, too. For FARMA and FARFIMA processes, in particular, we develop analytical expressions for their spectral density operators, and exploit these in conjunction with spectral methods.
To our knowledge, the spectral density operators for these processes, while being infinite-dimensional analogues of the univariate/multivariate versions \citep{priestley1981spectral_1,priestley1981spectral_2}, have not previously been rigorously established in the functional time series literature. Our functional time series simulation method in the spectral domain is inspired in part by the methods for scalar and multivariate time series simulation. The original idea of simulating a signal in the spectral domain and converting it to the time domain by the inverse fast Fourier transform seems to be due to \citet{thompson1973generation}. This approach was further explored by \citet{percival1993simulating}, who reviewed some variants of the algorithm and addressed some practical implementation questions, while \citet{davies1987tests} used the method for the simulation of fractionally integrated noise processes. Furthermore, the simulation of multivariate time series with given spectral density matrices is due to \citet{chambres1995simulation}. However, pushing the general ideas forward to functional time series is not a matter of simple generalisation of the multivariate time series simulation methods. The intrinsic infinite dimensionality of functional data calls for the generation of infinite dimensional objects by finite dimensional approximation, which requires optimally reducing dimension (which we implement either via the Karhunen-Lo\`eve or the Cram\'er-Karhunen-Lo\`eve representation \citep{panaretos2013cramer}) and/or judicious discretisation (pixelisation) of the spatial domain (the argument of each function). An additional side effect of this, in contrast to the multivariate case, is that one must pay particular attention to how the simulation algorithms scale as the discretisation resolution is refined and the dimension parameter grows, and these factors need to be incorporated into the time complexity assessments. Our spectral domain simulation method constitutes a general approach, able to simulate arbitrary functional time series that are specified in the frequency domain, with additional computational speed-ups that can be realised when assuming a special structure of the spectral density operators. In particular, simulation of the important FARFIMA$(p,d,q)$ processes can be much faster in the spectral domain than in the time domain, while the spectral domain simulation of FARMA$(p,q)$ processes is competitive with time-domain methods. The rest of the article is structured as follows: Section~\ref{sec:framework} introduces the functional time series framework with special attention to their (doubly) spectral analysis, and includes the aforementioned novel derivation of the spectral density operators of FARMA and FARFIMA processes as Theorems~\ref{thm:FARMA-part-ii} and \ref{thm:FARFIMA-part-ii}, respectively. Section~\ref{sec6:simulation_in_spectral_domain} presents the high-level spectral domain simulation algorithm, along with a discussion of its various implementations in the subsections. Section~\ref{sec6:examples} provides concrete examples, followed by a short benchmark simulation study. Section~\ref{sec:general_redommendations} concludes the article by summarising key features and qualities of the proposed simulation methods, along with some recommendations for practitioners.
The article is accompanied by an \texttt{R} package \texttt{specsimfts} (Section~\ref{sec:code_availability}) that implements all the proposed methods and includes several demo files that practitioners can easily modify and use. \section{Functional Time Series Framework} \label{sec:framework} \subsection{Spectral Analysis of Functional Time Series} We will throughout work in a real separable Hilbert space denoted as $\mathcal{H}$, with inner product $\langle f,g \rangle,\,f,g\in\mathcal{H}$, and induced norm $\|f\|,\,f\in\mathcal{H}$. The complexification of $\mathcal{H}$ is denoted as $\mathcal{H}^\mathbb{C}$ and we maintain the same notation for the inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$ on $\mathcal{H}^\mathbb{C}$. Though parts of the functional time series theory presented in this section are valid for any such $\mathcal{H}$ and $\mathcal{H}^\mathbb{C}$, the simulation methods are tailored to the space of real square-integrable functions defined on $[0,1]$, denoted as $L^2([0,1],\mathbb{R})$. The inner product on $L^2([0,1],\mathbb{R})$, or its complexification $L^2([0,1],\mathbb{C})$, is defined as $\langle f,g \rangle = \int_0^1 f(x)\overline{g(x)}\D x,\,f,g\in\mathcal{H}$ (or $\in\mathcal{H}^\mathbb{C}$), and the norm as $\|f\| = (\int_0^1 |f(x)|^2\D x)^{1/2},\,f\in\mathcal{H}$ (or $\in\mathcal{H}^\mathbb{C}$). The space of the bounded linear operators acting on $\mathcal{H}$ and $\mathcal{H}^\mathbb{C}$ is denoted $\mathcal{L}(\mathcal{H})$ and $\mathcal{L}(\mathcal{H}^\mathbb{C})$ respectively, and the corresponding operator norms as $\|\cdot\|_{\mathcal{L}(\mathcal{H})}$ and $\|\cdot\|_{\mathcal{L}(\mathcal{H}^\mathbb{C})}$ respectively. The classical approach in functional data analysis is to probabilistically model the functional data as random elements in the Hilbert space $\mathcal{H}$. Considering $Z$ to be a random element in $\mathcal{H}$ with a finite second moment $\mathbb{E} \|Z\|^2<\infty$, we define its \textit{mean function} as $ \mu_Z = \mathbb{E} Z\in \mathcal{H} $ and the \textit{covariance operator} $$ \mathscr{R}^Z = \Ez{ (Z-\mu_Z) \otimes (Z-\mu_Z) } = \Ez{ \langle \cdot, Z-\mu_Z\rangle (Z-\mu_Z) },$$ where $x \otimes y$ denotes the tensor product of $x,y\in\mathcal{H}^\mathbb{C}$ defined as the operator $x \otimes y : \mathcal{H}^\mathbb{C}\to\mathcal{H}^\mathbb{C},\, v \mapsto \langle v, y \rangle x$. The covariance operator $\mathscr{R}^Z$ is a self-adjoint non-negative definite trace-class operator. A \textit{(real) functional time series} is conceptualized as a time ordered sequence of random elements in $\mathcal{H}$ and is denoted as $X \equiv \{X_t\}_{t\in\mathbb{Z}}$. Throughout this article we work with functional time series with finite second moments, i.e. $\mathbb{E} \|X_t\|^2 < \infty,\,t\in\mathbb{Z}$, and which are second-order stationary in the time variable $t$. If we additionally adopt the random curves perspective, i.e. take $\mathcal{H}$ to be the function space $L^2([0,1],\mathbb{R})$, it is common to assume that the individual sample paths (trajectories) of the random curves are continuous. In this case, a functional time series can be interpreted pointwise as a sequence of random curves $X\equiv \{X_t(x):x\in[0,1]\}_{t\in\mathbb{Z}}$. The index variable $t$ is interpreted as a discrete time parameter, while the argument variable $x$ can often be interpreted as a continuous spatial location in the domain $[0,1]$; we choose to refer to $x$ as the spatial location for clarity.
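In practice (and in the simulation methods below), curves in $L^2([0,1],\mathbb{R})$ are handled numerically on a finite grid; a minimal sketch of this discretisation, with an illustrative resolution and quadrature rule of our own choosing:
\begin{verbatim}
import numpy as np

x = np.linspace(0.0, 1.0, 101)               # spatial grid on [0, 1]
f = np.sin(2 * np.pi * x)                    # two example curves sampled on the grid
g = np.cos(4 * np.pi * x)

inner_fg = np.trapz(f * np.conj(g), x)       # <f, g> approximated by quadrature
norm_f = np.sqrt(np.trapz(np.abs(f)**2, x))  # ||f||
\end{verbatim}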
Under the above stated assumptions we may define the first and second order characteristics of the functional time series $X\equiv \{X_t\}_{t\in\mathbb{Z}}$, namely the \textit{mean function} $\mu_X = \mathbb{E} X_0$ and, for $h\in\mathbb{Z}$, the \textit{lag-$h$ autocovariance operator} $$ \mathscr{R}^X_h = \Ez{ \left( X_{h} - \mu_X \right) \otimes \left( X_0 - \mu_X \right) } = \Ez{ \left\langle \cdot, X_0 - \mu_X \right\rangle \left( X_{h} - \mu_X \right) } . $$ To simplify the notation and the presentation we shall only consider centred functional time series, i.e. $\mu_X\equiv 0$, in order to focus on the second order structure, which is the essential part for simulation purposes. We now review key aspects of the analysis of functional time series in the spectral domain. First, we consider functional time series satisfying \textit{weak dependence} conditions, expressed through one of the following summability conditions: \begin{align} \label{eq6:weak_dependence_trace_norm} \sum_{h\in\mathbb{Z}} \left\| \mathscr{R}^X_h \right\|_1 &< \infty, \\ \label{eq6:weak_dependence_HS_norm} \sum_{h\in\mathbb{Z}} \left\| \mathscr{R}^X_h \right\|_2 &< \infty, \\ \label{eq6:weak_dependence_op_norm} \sum_{h\in\mathbb{Z}} \left\| \mathscr{R}^X_h \right\|_{\mathcal{L}(\mathcal{H})} &< \infty \end{align} where $\|\cdot\|_1$, $\|\cdot\|_2$, $\|\cdot\|_{\mathcal{L}(\mathcal{H})}$ denote the trace-class norm, the Hilbert-Schmidt norm, and the operator norm respectively. The \emph{spectral density operator} was first defined under \eqref{eq6:weak_dependence_trace_norm} by \citet{panaretos2013fourier}, under the slightly weaker assumption \eqref{eq6:weak_dependence_HS_norm} by \citet{hormann2015dynamic}, and finally under \eqref{eq6:weak_dependence_op_norm} by \citet{tavakoli2014}. Because \eqref{eq6:weak_dependence_op_norm} is the weakest condition of the three, we shall be working with this assumption, under which the {spectral density operator} is defined by the formula \citep[Proposition~2.3.5]{tavakoli2014} \begin{equation}\label{eq6:definition_spectral_density_operator} \mathscr{F}^X_\omega = \frac{1}{2\pi} \sum_{h\in\mathbb{Z}} \mathscr{R}^X_h e^{-\I h \omega} \end{equation} where the sum converges in $\|\cdot\|_{\mathcal{L}(\mathcal{H}^\mathbb{C})}$ at each $\omega\in[0,2\pi]$. The spectral density operator $\mathscr{F}^X_\omega$ is self-adjoint, non-negative definite and trace-class for each $\omega\in[0,2\pi]$ and the inversion formula holds in $\|\cdot\|_{\mathcal{L}(\mathcal{H})}$: \begin{equation}\label{eq6:spectral_density_operator_inverse_formula} \mathscr{R}^X_h = \int_0^{2\pi} \mathscr{F}^X_\omega e^{\I h\omega} \D \omega, \qquad h\in\mathbb{Z}. \end{equation} Furthermore, whenever \begin{equation}\label{eq6:weak_dependence_traces} \sum_{h\in\mathbb{Z}} \left|\tr(\mathscr{R}^X_h)\right| < \infty, \end{equation} the spectral density operator is uniformly bounded $$ \sup_{\omega\in[0,2\pi]} \left\| \mathscr{F}^X_\omega \right\|_1 \leq \frac{1}{2\pi} \sum_{h\in\mathbb{Z}} \left|\tr(\mathscr{R}^X_h)\right| < \infty $$ and $$ \sup_{h\in\mathbb{Z}} \left\| \mathscr{R}^X_h \right\|_1 \leq \sum_{h\in\mathbb{Z}} \left|\tr(\mathscr{R}^X_h)\right| < \infty .$$ Finally, the definition of the spectral density operator can be relaxed into the notion of the \textit{weak} spectral density operator \citep{tavakoli2014}. Denote by $\mathcal{L}_1(\mathcal{H}^\mathbb{C})$ the space of trace-class operators on $\mathcal{H}^\mathbb{C}$.
If there exists a function $\mathscr{F}^X : [0,2\pi] \to \mathcal{L}_1(\mathcal{H}^\mathbb{C})$ defined almost everywhere on $[0,2\pi]$ such that $\int_0^{2\pi}\| \mathscr{F}^X_\omega \|_1 \D\omega < \infty$ and the inversion formula~\eqref{eq6:spectral_density_operator_inverse_formula} holds, then $\mathscr{F}^X$ is called the \textit{weak spectral density operator} of $X$. If the weak spectral density operator exists, it is defined uniquely only almost everywhere on $[0,2\pi]$. This is a consequence of the fact that $\mathscr{F}^X$ is defined as an element of the Bochner space $L^1( [0,2\pi], \mathcal{L}_1(\mathcal{H}^\mathbb{C}) )$. That being said, under the weak dependence \eqref{eq6:weak_dependence_op_norm}, the spectral density operator \eqref{eq6:definition_spectral_density_operator} is also the weak spectral density operator. Though the definition of the weak spectral density operator appears rather abstract, it is in fact required for the spectral analysis of long-range dependent FARFIMA processes (considered in Section~\ref{subsec:FARFIMA}), which do not satisfy the assumption \eqref{eq6:weak_dependence_op_norm} but will be shown to admit a weak spectral density operator. Lastly, we point out that we opt for presenting the spectral theory with the spectral domain $[0,2\pi]$, as opposed to $[-\pi,\pi]$ often adopted in the literature \citep{panaretos2013cramer,tavakoli2014,hormann2015dynamic}, because its connections to the simulation methods based on the discrete (fast) Fourier transform in Section~\ref{sec6:simulation_in_spectral_domain} are more transparent. The two perspectives are equivalent and can be easily interchanged thanks to the $2\pi$-periodicity: $$ \mathscr{F}^X_{-\omega} = \mathscr{F}^X_{2\pi-\omega}, \qquad \omega\in[0,\pi]. $$ \subsection{The Cram\'{e}r-Karhunen-Lo\`{e}ve Representation} \label{subsec:CKL} The classical Karhunen-Lo\`{e}ve expansion decomposes i.i.d. functional data into uncorrelated components and achieves optimal dimensionality reduction at the same time. It has consequently been used as a main tool for simulating independent functional data. The situation for functional time series data becomes more involved due to the dependence between curves, and using a similar decomposition for the purpose of simulation now requires two steps. The first is the Cram\'er representation (Proposition~\ref{prop:cramer} and \eqref{eq6:cramer}), which separates the functional time series into distinct uncorrelated frequencies. The second applies the ideas of the classical Karhunen-Lo\`eve expansion at each frequency, yielding the Cram\'{e}r-Karhunen-Lo\`eve representation (Proposition~\ref{prop:optimality_CKL} and \eqref{eq6:CKL_truncated}). We now review these two representations because they, together with their discretised approximations \eqref{eq6:cramer_approx} and \eqref{eq6:CKL_approx_truncated}, will provide the basis for our simulation method presented in Section~\ref{sec6:simulation_in_spectral_domain}. Before venturing into the spectral domain, we recall the classical Karhunen-Lo\`{e}ve expansion \citep{karhunen1946spektraltheorie,loeve1946fonctions,ash2014topics,grenander1981abstract}. Let $\{X_t\}$ be i.i.d.
zero-mean square-integrable random elements in $\mathcal{H}$ and denote the eigendecomposition of the corresponding covariance operator as $ \mathscr{R}^X_0 = \sum_{n=1}^\infty \lambda_n \varphi_n \otimes \varphi_n $ where $\{\lambda_n\}_{n=1}^\infty$ are the eigenvalues of $\mathscr{R}^X_0$ and $\{\varphi_n\}_{n=1}^\infty$ their associated eigenfunctions. The classical Karhunen-Lo\`{e}ve expansion then reads $$ X_t = \sum_{n=1}^\infty \sqrt{\lambda_n} \xi^{(t)}_n \varphi_n $$ where $\xi^{(t)}_n = \langle X_t, \varphi_n \rangle / \sqrt{\lambda_n}$, and simulation proceeds by truncating this sum. The mode of convergence depends on the regularity of $\mathscr{R}^X_0$, but convergence in expected squared Hilbert norm is always valid when $\mathscr{R}^X_0$ is trace-class. In order to take into account the temporal dependence one begins by decomposing the time series into distinct frequencies, a step made rigorous by means of the functional Cram\'{e}r representation, due to \citet[Theorem~2.1]{panaretos2013cramer} and \citet[Theorem~2.4.3]{tavakoli2014}. We combine the two statements into a single one, to be used for our purposes, below: \begin{proposition}[Functional Cram\'{e}r representation] \label{prop:cramer} Let the functional time series $X\equiv\{X_t\}_{t\in\mathbb{Z}}$ admit the weak spectral density operator $\mathscr{F}^X \in L^p([0,2\pi], \mathcal{L}_1(\mathcal{H}^\mathbb{C}))$ for some $p\in(1,\infty]$. Then $X$ admits the functional Cram\'{e}r representation \begin{equation}\label{eq6:cramer} X_t = \int_0^{2\pi} e^{\I t\omega} \D Z_\omega, \qquad\text{almost surely}, \end{equation} where the stochastic integral \eqref{eq6:cramer} can be understood in the Riemann--Stieltjes limit sense \begin{equation}\label{eq6:cramer_riemann_stieltjes} \Ez{ \left\| X_t - \sum_{k=1}^K e^{\I t \omega_k} \left( Z_{\omega_{k+1}} - Z_{\omega_k} \right) \right\|^2 } \to 0, \qquad\text{as}\quad K\to\infty, \end{equation} where $0=\omega_1 < \dots < \omega_{K+1} = 2\pi$ and $\max_k |\omega_{k+1}-\omega_k| \to 0$ as $K\to\infty$. For each $\omega\in[0,2\pi]$, $Z_\omega$ is a random element in $\mathcal{H}^\mathbb{C}$ defined by \begin{equation}\label{eq6:cramer_definition_Z} Z_\omega = \lim_{T\to\infty} \sum_{|t|<T} \left( 1 - \frac{|t|}{T} \right) g_\omega(t) X_{-t} \end{equation} where the limit holds with respect to $\mathbb{E}\|\cdot\|^2$ and $$ g_\omega(t) = \frac{1}{2\pi} \int_{0}^{\omega} e^{-\I t\alpha} \D\alpha, \qquad \omega\in[0,2\pi]. $$ Moreover, the process $\{Z_\omega\}_{\omega\in[0,2\pi]}$ satisfies $\mathbb{E}[ \| Z_\omega\|^2 ] = \int_0^\omega \|\mathscr{F}^X_\alpha\|_1 \D\alpha$, $ \mathbb{E}[ Z_\omega \otimes Z_{\omega'} ] = \int_0^{\min(\omega,\omega')} \mathscr{F}^X_\alpha \D\alpha $ for $\omega,\omega'\in [0,2\pi]$ and has orthogonal increments $$ \mathbb{E}\left\langle Z_{\omega_1}-Z_{\omega_2}, Z_{\omega_3} - Z_{\omega_4} \right\rangle = 0$$ with $\omega_1 > \omega_2 \geq \omega_3 > \omega_4.$ \end{proposition} The Cram\'{e}r representation \eqref{eq6:cramer} provides a scheme for decomposing $X$ into distinct frequencies. For $0=\omega_1 < \dots < \omega_{K+1} = 2\pi$ we have, by \eqref{eq6:cramer_riemann_stieltjes}, the approximation \begin{equation}\label{eq6:cramer_approx} X_t \approx \sum_{k=1}^K e^{\I t\omega_k} \left( Z_{\omega_{k+1}} - Z_{\omega_k} \right). \end{equation} The approximation \eqref{eq6:cramer_approx} essentially decomposes the functional time series $\{X_t\}_{t\in\mathbb{Z}}$ into uncorrelated components $Z_{\omega_{k+1}} - Z_{\omega_k},\,k=1,\dots,K$.
Heuristically, the covariance operator of the increment $Z_{\omega_{k+1}} - Z_{\omega_k}$ is expected to be close to $\mathscr{F}^X_{\omega_k} (\omega_{k+1}-\omega_k)$. By virtue of being a non-negative definite operator, the spectral density operator $\mathscr{F}^X_{\omega}$ admits a spectral decomposition of its own at each frequency $\omega$, \begin{equation}\label{eq6:spectral_density_operator_harmonic_decomposition} \mathscr{F}^X_\omega = \sum_{n=1}^\infty \lambda_n(\omega) \varphi_n(\omega)\otimes\varphi_n(\omega) \end{equation} where $ \{\lambda_n(\omega)\}_{n=1}^\infty $ are the eigenvalues of $\mathscr{F}^X_{\omega}$, called the \textit{harmonic eigenvalues}, and $\{ \varphi_n(\omega) \}_{n=1}^\infty$ their associated eigenfunctions, called the \textit{harmonic eigenfunctions}. This suggests a second level of approximation, namely using the Karhunen-Lo\`eve expansion to write $$ X_t \approx \sum_{k=1}^K e^{\I t\omega_k} \sum_{n=1}^\infty \xi_n^{(k)} \varphi_n(\omega_k) $$ with $\xi_n^{(k)} = \langle Z_{\omega_{k+1}} - Z_{\omega_k}, \varphi_n(\omega_k) \rangle / \sqrt{\lambda_n(\omega_k)}$ and then truncating at $N\in\mathbb{N}$ \begin{equation}\label{eq6:CKL_approx_truncated} X_t \approx \sum_{k=1}^K e^{\I t\omega_k} \sum_{n=1}^N \xi_n^{(k)} \varphi_n(\omega_k) . \end{equation} The approximation \eqref{eq6:CKL_approx_truncated} consists of a finite number of uncorrelated random variables $\xi_n^{(k)},\,k=1,\dots,K,\,n=1,\dots,N$ and will serve as the basis for our simulation method described in Subsection~\ref{subsec6:simulation_CKL}. To rigorously define this approach, and show its optimality, we must consider the stochastic integral \begin{equation}\label{eq6:stoch_integral} \int_0^{2\pi} e^{\I t\omega} C(\omega)\D Z_\omega \end{equation} which can be defined by means similar to the It\^{o} stochastic integral, as rigorously established in \citet{panaretos2013cramer} and \citet{tavakoli2014}. If $\mathscr{F}^X \in L^p([0,2\pi], \mathcal{L}_1(\mathcal{H}^\mathbb{C}) ) $ for $p\in(1,\infty]$, then \eqref{eq6:stoch_integral} is well defined for $C\in\mathbb{M}$ where $\mathbb{M}$ is the completion of $L^{2q}( [0,2\pi], \mathcal{L}(\mathcal{H}^\mathbb{C}) )$, with $q$ the conjugate exponent of $p$, with respect to the norm $\|\cdot\|_\mathbb{M} = \sqrt{\langle \cdot,\cdot \rangle_{\mathbb{M}}}$ where $$ \langle A,B \rangle_{\mathbb{M}} = \int_0^{2\pi} \tr\left( A(\omega) \mathscr{F}^X_\omega B(\omega)^* \right)\D\omega, \qquad A,B\in\mathbb{M}. $$ In this notation, one has (\citealp[Theorem~3.7]{panaretos2013cramer}; \citealp[Theorem~2.8.2]{tavakoli2014}): \begin{proposition}[Optimality of Cram\'{e}r-Karhunen-Lo\`{e}ve representation] \label{prop:optimality_CKL} Let the functional time series $X\equiv\{X_t\}_{t\in\mathbb{Z}}$, satisfying the functional Cram\'{e}r representation \eqref{eq6:cramer}, admit the weak spectral density operator $\mathscr{F}^X \in L^1([0,2\pi], \mathcal{L}_1(\mathcal{H}^\mathbb{C}))$ such that the function $\omega\in[0,2\pi]\mapsto \mathscr{F}^X_\omega$ is continuous on $[0,2\pi]$ with respect to the operator norm $\|\cdot\|_{\mathcal{L}(\mathcal{H}^\mathbb{C})}$ and all the non-zero harmonic eigenvalues of $\mathscr{F}^X_\omega$ are distinct, $\omega\in[0,2\pi]$. Let $$ X_t^* = \int_0^{2\pi} e^{\I t\omega} C(\omega) \D Z_\omega $$ with $C \in \mathbb{M}$. Let $N: [0,2\pi]\to\mathbb{N}$ be a c\`{a}dl\`{a}g function.
Then, the solution to \begin{align*} &\min \Ez{ \left\| X_t - X_t^* \right\|^2} \\ \text{subject to}\quad & \rank( C(\omega) ) \leq N(\omega) \end{align*} is given by $$ C(\omega) = \sum_{n=1}^{N(\omega)} \varphi_n(\omega) \otimes \varphi_n(\omega).$$ Moreover, the approximation error is given by $$ \Ez{ \left\| X_t - X_t^* \right\|^2} = \int_0^{2\pi} \left\{ \sum_{n=N(\omega)+1}^\infty \lambda_n(\omega) \right\} \D \omega .$$ \end{proposition} Proposition~\ref{prop:optimality_CKL} justifies that the process \begin{equation}\label{eq6:CKL_truncated} X^*_t = \int_0^{2\pi} \sum_{n=1}^N e^{\I t\omega} \left(\varphi_n(\omega) \otimes \varphi_n(\omega)\right) \D Z_\omega \end{equation} yields optimal dimension reduction when we set the rank requirement $N(\omega)\equiv N\in\mathbb{N}$ uniformly across all frequencies. Although the definition of the finite dimensional reduction \eqref{eq6:CKL_truncated} appears quite abstract, it turns out that one can represent $X^*$ in a one-to-one manner as an $N$-dimensional multivariate time series using a particular choice of the filter of the original time series $X$. Because our simulation method presented in Subsection~\ref{subsec6:simulation_CKL} is based directly on the approximations \eqref{eq6:CKL_approx_truncated} and \eqref{eq6:CKL_truncated}, we do not pursue the multivariate time series representation here and refer the reader to \citet{panaretos2013cramer,tavakoli2014,hormann2015dynamic}. \subsection{Spectral Analysis of FARMA$(p,q)${}{} Processes} \label{subsec:FARMA} Linear models for processes in function spaces have been extensively studied in the literature, and many classical time series models from the scalar or vector time series domain have been gradually generalised to infinite dimensions. Functional autoregressive processes have been treated in depth by \citet{bosq2012linear} and \citet{mas2007weak}, and functional moving average processes by \citet{chen2016functional}. Their combination, the functional autoregressive moving average (FARMA) model, has been presented by \citet{klepsch2017prediction}. In the following text we recall the time domain analysis of FARMA processes and then develop our new results on the frequency domain analysis thereof. The FARMA$(p,q)${}{} process, $p,q\in\mathbb{N}_0$, is a sequence $X=\{ X_t \}_{t\in\mathbb{Z}}$ of random $\mathcal{H}$-elements, satisfying the equation \begin{equation}\label{eq6:FARMA_def} X_t = \sum_{j=1}^p \mathcal{A}_j X_{t-j} + \epsilon_t + \sum_{j=1}^q \mathcal{B}_j \epsilon_{t-j}, \qquad t\in\mathbb{Z}, \end{equation} where $\mathcal{A}_1,\dots,\mathcal{A}_p$ and $\mathcal{B}_1,\dots,\mathcal{B}_q$ are bounded linear operators and $\{\epsilon_t\}_{t\in\mathbb{Z}}$ is a sequence of zero-mean i.i.d. random elements in $\mathcal{H}$ with the covariance operator $\mathcal{S}$. A minimal time-domain simulation sketch based directly on this defining equation is given below.
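Once discretised on a spatial grid, the defining equation \eqref{eq6:FARMA_def} can be iterated directly in the time domain. The following minimal base-\texttt{R} sketch of a FARMA$(1,1)$ recursion is purely illustrative (it is not the interface of the \texttt{specsimfts} package, and the kernels are borrowed from Example~\ref{example6:FARMA_lowrank}); integral operators become kernel matrices carrying a Riemann quadrature weight $1/M$, and the innovations are discretised Brownian motions. This is the type of time-domain recursion referred to as \textsc{temporal} in Section~\ref{sec6:examples}.
\begin{verbatim}
# Minimal, illustrative time-domain FARMA(1,1) recursion on a grid.
M <- 101; T_len <- 500; burnin <- 100
x <- seq(0, 1, length.out = M)
A1 <- 0.3 * outer(x, x, function(s, t) sin(s - t)) / M  # kernel of A_1
B1 <- outer(x, x, function(s, t) s + t) / M             # kernel of B_1

# innovation draw: discretised Brownian motion (covariance min(s,t))
eps <- function() cumsum(rnorm(M, sd = sqrt(1 / M)))

X <- matrix(0, M, T_len + burnin)
e_prev <- eps()
for (t in 2:(T_len + burnin)) {
  e_new <- eps()
  X[, t] <- A1 %*% X[, t - 1] + e_new + B1 %*% e_prev
  e_prev <- e_new
}
X <- X[, -(1:burnin)]   # discard the burn-in, keep a stationary stretch
\end{verbatim}
Note that a burn-in period is needed for the recursion to approximately reach stationarity; as discussed in Section~\ref{sec:general_redommendations}, this is one respect in which the spectral domain methods differ.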
The time-domain analysis of the FARMA$(p,q)${}{} process was considered by \citet{klepsch2017prediction}, who in particular established: \begin{theorem}[{\citet{klepsch2017prediction}}] \label{thm:FARMA-part-i} Assume that there exists $j_0\in\mathbb{N}$ such that the operator $$ \tilde{\mathcal{A}} = \begin{bmatrix} \mathcal{A}_1 & \cdots & \mathcal{A}_{p-1} & \mathcal{A}_p \\ \Id & & & 0 \\ & \ddots & & \vdots \\ & & \Id & 0 \\ \end{bmatrix} $$ satisfies \begin{equation}\label{eq6:FARMA_condition} \| \tilde{\mathcal{A}}^{j_0} \|_{\mathcal{L}(\mathcal{H}^p)} < 1 \end{equation} where $\Id$ is the identity operator on $\mathcal{H}$ and $\|\cdot \|_{\mathcal{L}(\mathcal{H}^p)}$ denotes the operator norm on $\mathcal{L}(\mathcal{H}^p)$, the space of bounded linear operators acting on the product space $\mathcal{H}^p=\mathcal{H}\times\cdots\times\mathcal{H}$. Then the FARMA$(p,q)${}{} process defined by \eqref{eq6:FARMA_def} is uniquely defined, stationary, and causal. \end{theorem} We now show that, under the same assumptions as those of \citet{klepsch2017prediction}, we may characterise the FARMA$(p,q)${}{} process in the spectral domain: \begin{theorem}\label{thm:FARMA-part-ii} Under the assumptions of Theorem~\ref{thm:FARMA-part-i}, the autocovariance operators $\mathscr{R}^X_h$ of the process satisfy the weak dependence condition \eqref{eq6:weak_dependence_op_norm}, and its spectral density operator at frequency $\omega\in[0,2\pi]$ is given by \begin{equation}\label{eq6:FARMA_spectral_density_operator} \mathscr{F}^X_\omega = \frac{1}{2\pi} \mathscr{A}( e^{-\I\omega} )^{-1} \mathscr{B}( e^{-\I\omega} ) \mathcal{S} \mathscr{B}( e^{-\I\omega} )^* \left[\mathscr{A}( e^{-\I\omega} )^* \right]^{-1} \end{equation} where \begin{align} \label{eq6:FARMA_spectral_density_operator_def_A} \mathscr{A}(z) &= \Id - \mathcal{A}_1 z - \dots - \mathcal{A}_p z^p, \\ \label{eq6:FARMA_spectral_density_operator_def_B} \mathscr{B}(z) &= \Id + \mathcal{B}_1 z + \dots + \mathcal{B}_q z^q \end{align} are operator-valued polynomials in the variable $z\in\mathbb{C}$. \end{theorem} Theorem~\ref{thm:FARMA-part-ii} is proved in Appendix~\ref{subsec:proof_of_thm:FARMA}. \subsection{Spectral Analysis of FARFIMA$(p,d,q)${}{} Process} \label{subsec:FARFIMA} Long range dependence (a.k.a. long memory) is a well known phenomenon in time series analysis, consisting in a time series exhibiting slow decay of its temporal dependence \citep{hurst1951long,mandelbrot1968fractional,beran1994statistics,palma2007long}. The need to model and analyse such series has led to the definition of autoregressive fractionally integrated moving average (ARFIMA) processes \citep{granger1980introduction,hosking1981fractional}. Such long-range dependencies have also been detected in functional time series, for example in series of daily volatility \citep{casas2008econometric}, and inspired a theoretical framework for long-range dependent functional time series \citep{li2019long} and associated estimation methods \citep{shang2020comparison}. \citet{li2019long} defined the functional ARFIMA (FARFIMA) process; we recall its definition before deriving its spectral analysis, which will allow efficient simulation of its realisations in Section~\ref{sec6:simulation_in_spectral_domain}.
The FARFIMA$(p,d,q)${}{} model with $p,q\in\mathbb{N}_0$ and $d\in(-1/2,1/2)$ models a sequence $\tilde{X}=\{ \tilde{X}_t \}_{t\in\mathbb{Z}}$ of random $\mathcal{H}$-elements via the equation \begin{equation}\label{eq6:FARIMA_def} (\Id-\Delta)^d \tilde{X}_t = X_t \end{equation} where $\Delta$ is the backshift operator and $X=\{ X_t \}_{t\in\mathbb{Z}}$ is the FARMA$(p,q)${}{} process defined via equation \eqref{eq6:FARMA_def}. When $d=0$, the FARFIMA$(p,d,q)${}{} model reduces to the FARMA$(p,q)${}{} model. \citet{li2019long} established existence and uniqueness results for the FARFIMA$(p,d,q)${}{} process and its time-domain properties: \begin{theorem}[{\citet{li2019long}}] \label{thm:FARFIMA-part-i} The FARFIMA$(p,d,q)${}{} process $\tilde{X} = \{\tilde{X}_t\}_{t\in\mathbb{Z}}$ with $p,q\in\mathbb{N}_0$ and $d\in(-1/2,1/2)$ defined by the equation \eqref{eq6:FARIMA_def} exists and constitutes a uniquely defined stationary causal functional time series provided the autoregressive part satisfies the condition \eqref{eq6:FARMA_condition}. Furthermore, if $d\in(0,1/2)$, the FARFIMA$(p,d,q)${}{} process exhibits long-memory dependence. \end{theorem} Under the same assumptions as \citet{li2019long} we now determine the analytical expression of the spectral density operators of the FARFIMA$(p,d,q)${}{} process: \begin{theorem}\label{thm:FARFIMA-part-ii} Under the assumptions of Theorem~\ref{thm:FARFIMA-part-i}, the FARFIMA$(p,d,q)${}{} process admits the weak spectral density operator $\mathscr{F}^{\tilde{X}} \in L^1( [0,2\pi], \mathcal{L}_1( \mathcal{H}^\mathbb{C} ) )$ satisfying \begin{equation}\label{eq6:FARIMA_spectral_density_operator} \mathscr{F}^{\tilde{X}}_\omega = \frac{1}{2\pi} \left[ 2\sin\left(\frac{\omega}{2}\right) \right]^{-2d} \mathscr{A}( e^{-\I\omega} )^{-1} \mathscr{B}( e^{-\I\omega} ) \mathcal{S} \mathscr{B}( e^{-\I\omega} )^* \left[\mathscr{A}( e^{-\I\omega} )^* \right]^{-1},\quad\omega\in(0,2\pi), \end{equation} where $\mathscr{A}$ and $\mathscr{B}$ are given at \eqref{eq6:FARMA_spectral_density_operator_def_A} and \eqref{eq6:FARMA_spectral_density_operator_def_B}. The lag-$h$ autocovariance operators of $\tilde{X}$ satisfy $$ \mathscr{R}_h^{\tilde{X}} = \int_0^{2\pi} \mathscr{F}_\omega^{\tilde{X}} e^{\I h\omega} \D\omega,\qquad h\in\mathbb{Z}. $$ \end{theorem} Theorem~\ref{thm:FARFIMA-part-ii} is proved in Appendix~\ref{subsec:proof_of_thm:FARFIMA}. Note that for $d>0$, the term $[ 2\sin(\omega/2) ]^{-2d}$ in formula \eqref{eq6:FARIMA_spectral_density_operator} is unbounded in the neighbourhood of $0$ (and of $2\pi$, by symmetry). An unbounded spectral density in the neighbourhood of zero is characteristic also of the univariate ARFIMA processes \citep{hosking1981fractional}. \section{Simulation of Functional Time Series with Given Spectrum} \label{sec6:simulation_in_spectral_domain} In this section we present a functional time series simulation method in the spectral domain. We focus our presentation on functional time series with values in $L^2([0,1],\mathbb{R})$ whose trajectories are continuous and whose spectral density operators are integral operators with continuous kernels, but note that our discussion equally applies to other function spaces constituting separable Hilbert spaces. The objective of the simulation is to generate a Gaussian sample $X_1,\dots,X_T$ for some $T\in\mathbb{N}$ given the spectral density operator $\{ \mathscr{F}_\omega^X \}_{\omega\in[0,2\pi]}$.
Without loss of generality, we assume that $T$ is even and we furthermore define the canonical frequencies $\omega_k = (2\pi k)/T,\,k=1,\dots,T$. At a high level, our spectral domain simulation method mimics the discrete approximation of the Cram\'er representation \eqref{eq6:cramer_approx}, which boils down to performing the following two steps. \begin{enumerate} \item Generate an ensemble of independent complex mean-zero Gaussian random elements $Z_k',\,k=1,\dots,T/2,T$ such that \begin{equation}\label{eq6:simulate_Z_covariance_operator} \Ez{ Z_k' \otimes Z_k' } = \mathscr{F}^X_{\omega_k},\qquad k=1,\dots,T/2,T, \end{equation} and, for $k=1,\dots,T/2-1$, generate independent copies $Z_k''$ thereof. Define \begin{equation}\label{eq6:simulation_Z_definition} Z_k = \begin{cases} \sqrt{2} Z_k' & k=T/2,T, \\ Z_k' + \I Z_k'' & k=1,\dots,T/2-1, \\ Z_{T-k}' - \I Z_{T-k}'' & k=T/2+1,\dots,T-1. \end{cases} \end{equation} \item Using the inverse fast Fourier transform algorithm, calculate \begin{equation} \label{eq6:simulation_iFFT} X_t = \left(\frac{\pi}{T}\right)^{1/2} \sum_{k=1}^T Z_k e^{\I t\omega_k},\qquad t=1,\dots,T. \end{equation} The formula \eqref{eq6:simulation_Z_definition} ensures that the sequence $\{Z_k\}$ is conjugate-symmetric and thus the inverse Fourier transform yields a real-valued functional time series, as will be proved later in Theorem~\ref{theorem6:abstract_method}. \end{enumerate} While the application of the inverse fast Fourier transform in Step 2 of the algorithm is computationally fast, the generation of the complex random elements $\{Z_k'\}$ in Step 1, whose covariance operators may in general have no structure in common, is not a trivial matter, and is discussed in the next three subsections, for three different specifications of the operator $\mathscr{F}^X_{\omega_k}$. In Subsection~\ref{subsec6:simulation_CKL}, these random elements are generated by their Karhunen-Lo\`eve expansions, therefore essentially enacting the Cram\'er-Karhunen-Lo\`eve representation \eqref{eq6:CKL_approx_truncated}. On the other hand, the filtering specification discussed in Subsection~\ref{subsec6:simulation_filter} leverages the special structure of the filtered white noise spectral density operators to generate the random elements $\{Z_k\}$ efficiently. This approach is further tailored to simulation of FARFIMA processes in Subsection~\ref{subsec6:simulation_FARFIMA}. Before moving on to the specifics, though, we establish that the sample generated by formula \eqref{eq6:simulation_iFFT} will indeed follow the correct dependence structure: \begin{theorem}\label{theorem6:abstract_method} Assume either of the two following conditions: \begin{enumerate}[label=(\roman*)] \item\label{item:theorem6:abstract_method:item_i} The condition \eqref{eq6:weak_dependence_op_norm} holds and thus the spectral density operator $\{ \mathscr{F}^X_\omega \}_{\omega\in[0,2\pi]}$ exists in the sense \eqref{eq6:definition_spectral_density_operator}. \item\label{item:theorem6:abstract_method:item_ii} The weak spectral density operator $\mathscr{F}^X \in L^1([0,2\pi],\mathcal{L}_1(\mathcal{H}^\mathbb{C}))$ exists and is continuous with respect to the norm $\|\cdot\|_1 $ on $(0,2\pi)$, and we additionally set $\mathscr{F}^X_0 = \mathscr{F}^X_{2\pi} = 0$.
\end{enumerate} Then, the functional time series sample $X = \{X_t\}_{t=1}^T$ generated by \eqref{eq6:simulation_iFFT} is a real-valued stationary Gaussian time series of zero mean, and asymptotically admits $\{\mathscr{F}^X_\omega\}$ as its spectral density operator as $T\to\infty$. \end{theorem} Theorem~\ref{theorem6:abstract_method} is proved in Appendix~\ref{subsec:proof_of_thm:abstract_method}.\\ Due to the periodicity of the Fourier transform, the values $X_1$ and $X_T$ will tend to be similar, which might be an undesirable trait, depending on the application. To overcome this artefact, \citet{mitchell1981generating,percival1993simulating} propose to simulate a sample of length $\tilde{T} = k T$ for some integer $k\geq 2$ and sub-sample a functional time series of length $T$. \subsection{Simulation under Spectral Eigendecomposition Specification} \label{subsec6:simulation_CKL} Perhaps the most direct means to generate (approximate versions of) the random elements $\{Z_k\}$ considered in Step 1 of the algorithm introduced at the beginning of Section~\ref{sec6:simulation_in_spectral_domain} is by means of a finite rank approximation to the spectral density operator at the corresponding frequencies, appearing in the definition (see equation \eqref{eq6:simulate_Z_covariance_operator}). For a given rank, the optimal such approximation is obtained by truncating the eigenexpansion \eqref{eq6:spectral_density_operator_harmonic_decomposition} at that value, thus using a finite number of the harmonic eigenfunctions and corresponding eigenvalues to approximately generate $\{Z_k\}$. Concretely, denoting $\{\lambda_n(\omega)\}_{n=1}^\infty$ and $\{\varphi_n(\omega)\}_{n=1}^\infty$ the harmonic eigenvalues and the harmonic eigenfunctions of the spectral density operator $\mathscr{F}^X_\omega$ at the frequency $\omega\in[0,2\pi]$, we may generate exact versions of $Z_k'$ by setting \begin{equation}\label{eq6:simulate_Z_CKL} Z_k' = \sum_{n=1}^\infty \sqrt{\lambda_n(\omega_k) } \varphi_n(\omega_k) \xi_n^{(k)} \end{equation} where $\{ \xi_n^{(k)}\}$ is an ensemble of i.i.d. standard Gaussian real-valued random variables. The random elements defined by \eqref{eq6:simulate_Z_CKL} clearly satisfy the requirement \eqref{eq6:simulate_Z_covariance_operator}. In practice one has to truncate the series in \eqref{eq6:simulate_Z_CKL} at a finite level, say $N$. This truncation is optimal in terms of preserving the second order structure of the functional time series (Proposition~\ref{prop:optimality_CKL}) and requires only a modest number of inexpensive operations. If we are to evaluate the functional time series $X$ on a spatial grid of $[0,1]$ at resolution $M\in\mathbb{N}$, the simulation requires $O(N M T + M T\log T)$ operations, provided we have direct access to the decomposition \eqref{eq6:spectral_density_operator_harmonic_decomposition}; a minimal implementation sketch of this case is given below. The $O(M T\log T)$ comes from the inverse fast Fourier transform \eqref{eq6:simulation_iFFT}. When the decomposition \eqref{eq6:spectral_density_operator_harmonic_decomposition} is not directly available, as for example is the case for the FARMA$(p,q)${}{} process with non-trivial autoregressive part, the evaluation of the spectral density operator \eqref{eq6:FARMA_spectral_density_operator} requires inversion of a different bounded linear operator at each frequency $\omega$. Unless a special structure of the autoregressive operator is assumed (e.g. as in Example~\ref{example6:long_range_FARFIMA}), the evaluation of this inversion is expensive.
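Before turning to this more difficult case, we record a minimal base-\texttt{R} sketch of the direct approach, combining the truncated draw \eqref{eq6:simulate_Z_CKL} with Steps 1 and 2 of the algorithm. It is an illustration under our stated assumptions, not the interface of the accompanying \texttt{specsimfts} package; the function names (\texttt{simulate\_fts\_ckl}, \texttt{lambda\_fun}, \texttt{phi\_fun}) are ours, and the harmonic eigenvalues and (possibly complex-valued) eigenfunctions, the latter evaluated on a spatial grid of $M$ points, are assumed available in closed form.
\begin{verbatim}
# Minimal sketch of the spectral domain simulation under the
# eigendecomposition specification. lambda_fun(n, omega) returns the
# n-th harmonic eigenvalue; phi_fun(n, omega) returns the n-th harmonic
# eigenfunction as a (possibly complex) vector of length M.
simulate_fts_ckl <- function(lambda_fun, phi_fun, T_len, M, N) {
  omega <- 2 * pi * (1:T_len) / T_len          # canonical frequencies
  draw_Zprime <- function(k) {                 # truncated KL draw
    Z <- complex(M)
    for (n in 1:N)
      Z <- Z + sqrt(lambda_fun(n, omega[k])) * phi_fun(n, omega[k]) * rnorm(1)
    Z
  }
  half <- T_len / 2                            # T assumed even
  Zmat <- matrix(0i, M, T_len)
  for (k in c(half, T_len)) Zmat[, k] <- sqrt(2) * draw_Zprime(k)
  for (k in 1:(half - 1)) {
    Zp <- draw_Zprime(k); Zpp <- draw_Zprime(k)     # independent copies
    Zmat[, k]         <- Zp + 1i * Zpp
    Zmat[, T_len - k] <- Zp - 1i * Zpp
  }
  # inverse FFT: reindex so that k = T plays the role of the zero
  # frequency, then transform each spatial row of the M x T matrix
  W <- cbind(Zmat[, T_len], Zmat[, 1:(T_len - 1)])
  V <- t(mvfft(t(W), inverse = TRUE))
  sqrt(pi / T_len) * Re(cbind(V[, 2:T_len], V[, 1]))  # columns X_1,...,X_T
}
\end{verbatim}
The reindexing before the call to \texttt{mvfft} accounts for the convention $\omega_k = 2\pi k/T$, under which the term $k=T$ plays the role of the zero frequency; taking the real part merely removes floating point residue, since the conjugate symmetry built into \eqref{eq6:simulation_Z_definition} already makes the output real.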
In that case, one could discretise the operator on a grid of $[0,1]^2$ and invert the resulting matrix, but this becomes slow for dense grids, especially when the inversion must be repeated for each frequency $\omega_k,\,k=1,\dots,T/2,T$. Moreover, to obtain the harmonic eigenvalues and eigenfunctions \eqref{eq6:spectral_density_operator_harmonic_decomposition} one would need to perform the eigendecomposition at each frequency $\omega_k$, which is also slow for large matrices. These operations, if performed on a spatial grid of resolution $M\times M$, require $O(M^3)$ operations, bringing the overall cost to $O(M^3 T + M T\log T)$. This can be reduced by calling a truncated eigendecomposition algorithm instead, e.g. the truncated singular value decomposition (SVD) algorithm, and evaluating only $N < M$ eigenfunctions. This yields computational gains when $N\ll M$, reducing the complexity of these operations from $O(M^3)$ to $O(NM^2)$, and the overall cost to $O(N M^2 T + M T\log T)$. Though the simulation cost is high when the decomposition \eqref{eq6:spectral_density_operator_harmonic_decomposition} is not directly available, the approach still constitutes a general method to simulate a functional time series with arbitrary spectrum. Example~\ref{example6:custom_karhunen_loeve} illustrates this with a functional time series whose dynamics are defined through its Cram\'er-Karhunen-Lo\`eve expansion: there we show that simulation is possible even when we do not leverage our knowledge of this expansion, but rather compute it numerically. Finally, it is worth remarking that even though the functions $\{\varphi_n(\omega)\}_{n=1}^\infty$ appearing in \eqref{eq6:spectral_density_operator_harmonic_decomposition} are orthonormal for each $\omega\in[0,2\pi]$, orthonormality is not \emph{required} for the correct simulation of the $Z_k'$'s by \eqref{eq6:simulate_Z_CKL}. In other words, a practitioner can specify a spectral density operator by a sum similar to \eqref{eq6:spectral_density_operator_harmonic_decomposition} without insisting on using orthonormal functions, and still achieve rapid simulation in the spectral domain. \subsection{Simulation under Filtering Specification} \label{subsec6:simulation_filter} The second implementation of Step 1 of the abstract algorithm introduced at the beginning of Section~\ref{sec6:simulation_in_spectral_domain} leverages a set-up where a white noise with covariance operator $\mathcal{S}$ is plugged into a filter with given frequency response function $\Theta(\omega)$, in which case the spectral density operator is given directly by the formula \begin{equation}\label{eq6:simulation_filter_spec_density} \mathscr{F}^X_\omega = \frac{1}{2\pi} \Theta(\omega) \mathcal{S} \Theta(\omega)^*, \qquad\omega\in[0,2\pi], \end{equation} where $\mathcal{S}$ is a positive-definite self-adjoint trace class operator and $\Theta : [0,2\pi] \to \mathcal{L}(\mathcal{H}^\mathbb{C})$, i.e. $\Theta(\omega)$ is a bounded linear operator on $\mathcal{H}^\mathbb{C}$ for each $\omega\in[0,2\pi]$. We only require that $$ \int_0^{2\pi} \left\| \Theta(\omega) \right\|^2_{\mathcal{L}(\mathcal{H}^\mathbb{C})} \D\omega <\infty$$ and $\Theta(\omega)g = \overline{\Theta(2\pi-\omega)(g)}$ for $\omega\in[0,\pi]$ and $g\in\mathcal{H}^\mathbb{C}$, which implies that $\{X_t\}$ is a stationary mean-zero functional time series with the weak spectral density operator $\mathscr{F}^X \in L^1([0,2\pi],\mathcal{L}_1(\mathcal{H}^\mathbb{C}))$.
The operator $\mathcal{S}$, being a positive-definite self-adjoint trace class operator, admits the decomposition \begin{equation}\label{eq6:simulation_filtered_S_decomposition} \mathcal{S} = \sum_{n=1}^\infty \eta_n e_n \otimes e_n \end{equation} where $\{\eta_n\}$ are the eigenvalues and $\{e_n\}$ are the eigenfunctions of $\mathcal{S}$. We may simulate real random elements $\{Y_k\}$ by setting \begin{equation}\label{eq6:simulation_filtered_Y_k} Y_k = \sum_{n=1}^\infty \sqrt{\eta_n} e_n \tilde{\xi}_n^{(k)} \end{equation} with an ensemble $\{\tilde{\xi}_n^{(k)}\}$ of i.i.d. standard Gaussian random variables. In practice, the sum \eqref{eq6:simulation_filtered_Y_k} is truncated at some $N\in\mathbb{N}$. If the decomposition \eqref{eq6:simulation_filtered_S_decomposition} is unknown, it can be numerically calculated by discretisation of the kernel corresponding to the operator $\mathcal{S}$ on a grid of $[0,1]^2$, say constituting an $M\times M$ matrix, and numerically calculating its eigendecomposition, in which case we may select $N=M$ eigenvalues. The advantage of this approach, over numerically evaluating the spectral density operator at each $\omega$, performing the numerical eigendecomposition of each, and applying the Cram\'er-Karhunen-Lo\`eve-based simulation algorithm presented in Subsection~\ref{subsec6:simulation_CKL}, is that the filtered white noise approach requires this expensive step to be run only once. Having defined the random elements $\{Y_k\}$ by \eqref{eq6:simulation_filtered_Y_k}, we define the elements $\{Z_k'\}$ in the notation of the algorithm presented at the beginning of Section~\ref{sec6:simulation_in_spectral_domain} by putting \begin{equation}\label{eq6:simulation_filtered_Z_k} Z_k' = \frac{1}{\sqrt{2\pi}} \Theta(\omega_k) Y_k, \qquad k=1,\dots,T/2,T. \end{equation} Such $\{Z_k'\}$ obviously satisfy \eqref{eq6:simulate_Z_covariance_operator}. If the decomposition \eqref{eq6:simulation_filtered_S_decomposition} is unknown and we opt to numerically evaluate it on a grid of size $M$, the total computational complexity turns out to be $O(M^3 + M^2 T + M T\log T)$ where $O(M^2 T)$ comes from the matrix application \eqref{eq6:simulation_filtered_Z_k} and $O(M T\log T)$ from the inverse fast Fourier transform \eqref{eq6:simulation_iFFT}. \subsection{Simulation under Linear Time Domain Specification} \label{subsec6:simulation_FARFIMA} One of the typical functional time series dynamics specifications is a linear process in the time domain. In this subsection we consider the flexible class of the FARFIMA$(p,d,q)${}{} processes, one of the most general classes of such linear processes, and show how to generate their trajectories by spectral domain simulation methods. The FARFIMA$(p,d,q)${}{} process, thanks to being defined as a linear filter of white noise, admits spectral density operators of the form \eqref{eq6:simulation_filter_spec_density}. However, the application of the simulation algorithm presented in Subsection~\ref{subsec6:simulation_filter} requires the frequency response function $\Theta(\omega)$ to be readily available, which is not always the case: the FARFIMA$(p,d,q)${}{} (or FARMA$(p,q)${}{}) process with a non-degenerate autoregressive part admits a frequency response function whose formula requires operator inversion: \begin{equation}\label{eq6:FARIMA_frequency_response_function} \Theta(\omega) = \mathscr{A}( e^{-\I\omega} )^{-1} \mathscr{B}( e^{-\I\omega} ),\qquad \omega\in[0,2\pi].
\end{equation} Therefore a naive implementation would require inversion of the bounded linear operator $\mathscr{A}( e^{-\I\omega} )$ for each frequency $\omega$. It may very well happen that $\mathscr{A}( e^{-\I\omega} )$ has special structure, e.g. as is the case for the FARFIMA(1,d,0) process considered in Example~\ref{example6:long_range_FARFIMA}, in which case the inversion evaluation is rapid. In the general case, however, the inversion on a spatial domain discretisation would require $O(M^3)$ operations where $M$ is the discretisation resolution. Fortunately, there are two ways to avoid this computational cost: \begin{itemize} \item A \textit{fully spectral approach}, which consists in the efficient evaluation of \eqref{eq6:simulation_filtered_Z_k}; see the sketch after this list. The discretisation of this formula for the FARFIMA$(p,d,q)${}{} process involves evaluation of \begin{equation}\label{eq6:simulation_FARIMA_get_V_k} Z_k = \frac{[2\sin(\omega_k/2)]^{-d}}{\sqrt{2\pi}} \mathbf{A}( e^{-\I\omega_k} )^{-1} \mathbf{B}( e^{-\I\omega_k} ) Y_k \end{equation} where the matrices $\mathbf{A}( e^{-\I\omega_k} )$ and $\mathbf{B}( e^{-\I\omega_k} )$ are the discretisations of $\mathscr{A}( e^{-\I\omega_k} )$ and $\mathscr{B}( e^{-\I\omega_k} )$ respectively. The numerical evaluation of \eqref{eq6:simulation_FARIMA_get_V_k} requires solving the linear system with the matrix $\mathbf{A}( e^{-\I\omega_k} )$ and the right-hand side vector $ \mathbf{B}( e^{-\I\omega_k} ) Y_k$, thus resulting in $O(M^2)$ complexity, as opposed to the $O(M^3)$ complexity of matrix inversion. \item A \textit{hybrid simulation approach}, where we simulate the FARFIMA$(p,d,q)${}{} processes by simulating the corresponding FARFIMA$(0,d,q)$ process in the spectral domain and then applying the autoregressive recursion in the time-domain. Concretely, we: \begin{enumerate} \item Choose a burn-in length $\tilde{T}$, and simulate a FARFIMA$(0,d,q)$ process with degenerate autoregressive part, denoted as $X'_1,\dots,X'_{T+\tilde{T}}$, by means of the tools in Subsection~\ref{subsec6:simulation_filter}. Such a functional time series admits the spectral density operator $$ \mathscr{F}_\omega^{X'} = \frac{\left[2\sin(\omega/2)\right]^{-2d}}{2\pi} \mathscr{B}( e^{-\I\omega} ) \mathcal{S} \mathscr{B}( e^{-\I\omega} )^* $$ whose corresponding frequency response function $\Theta(\omega) = [2\sin(\omega/2)]^{-d} \mathscr{B}( e^{-\I\omega} )$ can be evaluated quickly. \item Set $X_1,\dots, X_p = 0$ and run the recursion $$ X_t = \mathcal{A}_1 X_{t-1} + \dots + \mathcal{A}_p X_{t-p} + X'_t, \qquad t = p+1,\dots,T+\tilde{T}.$$ \item Discard the first $\tilde{T}$ values of $X_1,\dots,X_{T+\tilde{T}}$ and keep only the last $T$ elements. \end{enumerate} \end{itemize} Both the fully spectral and the hybrid implementations involve the numerical eigendecomposition of the noise covariance operator $\mathcal{S}$, incurring an $O(M^3)$ computation cost, the applications of matrices on vectors or solving of linear equations, yielding $O(M^2 T)$ operations, and the inverse fast Fourier transform at each point of the discretisation with the $O(M T\log T)$ complexity. Thus the total computational complexity is $O(M^3 + M^2T + M T\log T)$.
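For concreteness, a minimal sketch of the fully spectral evaluation \eqref{eq6:simulation_FARIMA_get_V_k} for a discretised FARFIMA$(1,d,0)$ model might read as follows; it is illustrative only (the names are ours, not the \texttt{specsimfts} interface), \texttt{A1} is the kernel discretisation of $\mathcal{A}_1$ with the quadrature weight included, and the guard enforces the convention $\mathscr{F}^X_0=\mathscr{F}^X_{2\pi}=0$ of condition (ii) of Theorem~\ref{theorem6:abstract_method}.
\begin{verbatim}
# Illustrative fully spectral step for a discretised FARFIMA(1, d, 0):
# Z_k = [2 sin(w_k/2)]^(-d) / sqrt(2 pi) * A(e^{-i w_k})^{-1} Y_k,
# with B(z) = Id since q = 0; Y is a draw from the innovation covariance.
Zk_farfima <- function(A1, d, Y, omega_k) {
  if (abs(sin(omega_k / 2)) < 1e-12) return(0 * Y)  # F_0 = F_{2pi} = 0
  M <- length(Y)
  A_of_z <- diag(M) - A1 * exp(-1i * omega_k)       # matrix A(e^{-i w_k})
  (2 * sin(omega_k / 2))^(-d) / sqrt(2 * pi) * solve(A_of_z, Y)
}
\end{verbatim}
A linear solve is used in place of an explicit matrix inversion, in line with the complexity accounting above.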
Nevertheless, even though the application of a matrix on a vector has the same complexity as solving a linear system of equations, the constant hidden in the ``$O$" is different, and the hybrid simulation method is faster than the fully spectral approach, which requires the solution of linear systems at each frequency, as the simulation study in Example~\ref{example6:FARMA_lowrank} demonstrates. \section{Examples and Numerical Experiments} \label{sec6:examples} This section presents three examples of functional time series, specified in ways corresponding to the three preceding subsections. Thus, the spectral density operator may be directly or indirectly defined, depending on the scenario. The examples are accompanied by a small simulation study assessing the simulation speed and the simulation accuracy by comparing the lagged autocovariance operators of the simulated processes with the ground truth. The purpose of the simulation study is to illustrate the performance of the method in terms of speed and accuracy, and to draw some qualitative conclusions about the choice of methods and parameters, rather than to provide an extensive quantitative comparison. A parallel objective is to provide code that is accessible (Section~\ref{sec:code_availability}), simple to run, and easy to tailor for custom-defined spectral density operators used in functional time series research. \subsection{Specification by Spectral Eigendecomposition} \label{example6:custom_karhunen_loeve} Consider the spectral density operator defined by its eigendecomposition \begin{align} \label{eq6:custom_CKL_spec_density_operator_sum} \mathscr{F}^X_\omega &= \sum_{n=1}^\infty \lambda_n(\omega) \varphi_n(\omega)\otimes \varphi_n(\omega), \qquad \omega\in[0,2\pi],\\ \nonumber \lambda_n(\omega) &= \frac{1}{ (1-0.9 \cos(\omega)) \pi^2 n^2}, \qquad \omega\in[0,2\pi],\\ \nonumber \left(\varphi_n(\omega)\right)(x) &= \begin{cases} \sqrt{2} \sin\left( n\pi\, \delta_{\omega/\pi}(x) \right),& \quad x\in[0,1],\quad \omega\in[0,\pi], \\ \sqrt{2} \sin\left( n\pi\, \delta_{-\omega/\pi}(x) \right),& \quad x\in[0,1],\quad \omega\in(\pi,2\pi], \end{cases} \end{align} where $$\delta_a(x) = (x-a) \bmod 1$$ is the periodic shift by $a\in\mathbb{R}$ with ``mod" denoting the modulo operation, the remainder after the division. Under this definition, which guarantees that $\delta_a(x) \in [0,1]$, the harmonic eigenfunctions at distinct frequencies are phase-shifted versions of each other. It turns out that the spectral density operator given by the sum \eqref{eq6:custom_CKL_spec_density_operator_sum} can be expressed in closed analytical form, as an integral operator with kernel \begin{equation} \label{eq6:custom_CKL_spec_density_kernel} f^X_\omega(x,y) = \begin{cases} \frac{1}{ (1-0.9 \cos(\omega))} K_{BB}( \delta_{\omega/\pi}(x), \delta_{\omega/\pi}(y) ),& \omega\in[0,\pi],\\ \frac{1}{ (1-0.9 \cos(\omega))} K_{BB}( \delta_{-\omega/\pi}(x), \delta_{-\omega/\pi}(y) ),& \omega\in(\pi,2\pi], \end{cases} \end{equation} where $K_{BB}(\cdot,\cdot)$ is the covariance kernel of the Brownian bridge \citep{deheuvels2003} defined as $$ K_{BB}(x,y) = \min(x,y) - xy,\qquad x,y\in[0,1]. $$ Figure~\ref{fig6:custom_CKL_trajectory} illustrates the simulated trajectories with a varying number of harmonic principal components $N$ used in the truncation of the sum \eqref{eq6:simulate_Z_CKL} when simulating by the means presented in Subsection~\ref{subsec6:simulation_CKL}.
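This specification can be passed, for instance, to the illustrative \texttt{simulate\_fts\_ckl()} sketch of Subsection~\ref{subsec6:simulation_CKL}; the following minimal encoding (again with hypothetical names, not the \texttt{specsimfts} interface) evaluates the harmonic eigenpairs on a grid.
\begin{verbatim}
# Harmonic eigenpairs of the above specification on a grid in [0, 1].
x_grid <- seq(0, 1, length.out = 101)
delta  <- function(a) (x_grid - a) %% 1                  # periodic shift
lambda_fun <- function(n, omega) 1 / ((1 - 0.9 * cos(omega)) * pi^2 * n^2)
phi_fun <- function(n, omega) {
  a <- if (omega <= pi) omega / pi else -omega / pi
  sqrt(2) * sin(n * pi * delta(a))
}
X <- simulate_fts_ckl(lambda_fun, phi_fun,
                      T_len = 100, M = length(x_grid), N = 20)
\end{verbatim}
Each column of \texttt{X} then holds one simulated curve evaluated on \texttt{x\_grid}.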
\begin{figure} \centering \includegraphics[width=1\textwidth]{figures/simulation_method/custom_ckl_accuracy.pdf} \caption[Simulation accuracy and speed of the Cram\'er-Karhunen-Lo\`eve method for the simulation in Example~\ref{example6:custom_karhunen_loeve}]{ The simulation accuracy \eqref{eq6:RMSE_simulation} and speed of the process defined in Example~\ref{example6:custom_karhunen_loeve}. \textbf{Left:} The simulation accuracy for the lag-$h$ autocovariance operator with a varying number of harmonic principal components used, $N\in\{1,2,3,5,10,20,50,100,200,1000\}$, visualised as a function of the lag $h\in\{0,1,2,3,5,10,20,30,40,60,80,100\}$. The sample size parameters are set to $T=1000$ and $M=1001$. \textbf{Right:} The simulation speed as a function of $N$ with fixed $T=1000$ and $M=1001$. } \label{fig6:custom_CKL_accuracy} \end{figure} \begin{figure} \centering \includegraphics[width=1\textwidth]{figures/simulation_method/custom_ckl_speed.pdf} \caption[Simulation speed for the simulation in Example~\ref{example6:custom_karhunen_loeve}]{ The simulation speed of the process defined in Example~\ref{example6:custom_karhunen_loeve}. \textbf{Left:} The dependence on varying the time horizon $T\in\{400,800,1600,3200,6400\}$ while setting the spatial resolution $M=101$. Both the simulation using the known Cram\'er-Karhunen-Lo\`eve expansion (\textsc{CKL}) and the method calculating this decomposition by the \textsc{SVD} algorithm use $N=101$ eigenfunctions. \textbf{Right:} The dependence on varying $M\in\{101,201,501,701,1001\}$ while setting $T=1000$. The simulation using the known Cram\'er-Karhunen-Lo\`eve expansion (\textsc{CKL}) uses $100$ eigenfunctions while the numerical \textsc{SVD ($N$)} decomposition finds $N\in\{5,10,50,100\}$ leading eigenfunctions (the lines mostly overlap each other) or all of them, $N=M$, for \textsc{SVD (full)}. The \textsc{CKL} method has a running time below 0.1~minutes (6~seconds) even for $M=1001$. } \label{fig6:custom_CKL_speed} \end{figure} \begin{figure} \centering \includegraphics[width=1\textwidth]{figures/simulation_method/custom_ckl_trajectory.pdf} \caption[Simulated trajectories of the process defined in Example~\ref{example6:custom_karhunen_loeve}]{ Sample trajectories $X_1(\cdot)$ of the process defined in Example~\ref{example6:custom_karhunen_loeve} with a varying number of harmonic principal components $N$ chosen in the truncation of \eqref{eq6:simulate_Z_CKL}. Simulated with $T=100$ and the grid resolution $M=1001$. } \label{fig6:custom_CKL_trajectory} \end{figure} In order to assess the simulation accuracy we opt to: simulate $I=1000$ independent realisations of the process $\{X^{(1)}_t\}_{t=1}^T,\dots,\{X^{(I)}_t\}_{t=1}^T$; evaluate their empirical autocovariance operators $\hat{\mathscr{R}}^X_{h,[i]}$ for each $i=1,\dots,I$ and selected lags $h$; and define the average empirical autocovariance operator $\overline{\mathscr{R}^X_h} = \frac{1}{I}\sum_{i=1}^I \hat{\mathscr{R}}^X_{h,[i]}$. We then compare this with the true autocovariance operator $\mathscr{R}^X_h$ by calculating \begin{equation}\label{eq6:RMSE_simulation} \mathrm{rel.error}(h) = \frac{\left\| \overline{\mathscr{R}^X_h} - \mathscr{R}^X_h \right\|_1}{\left\| \mathscr{R}^X_0 \right\|_1}, \qquad\text{for selected lags}\,\, h. \end{equation} The true autocovariance operators $\mathscr{R}^X_h$ were calculated by numerically integrating \eqref{eq6:spectral_density_operator_inverse_formula}.
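On discretised data this accuracy metric is simple to evaluate; the following minimal sketch (helper names are ours) computes the empirical lag-$h$ autocovariance kernel of one realisation and the relative error \eqref{eq6:RMSE_simulation}, approximating operator trace norms by scaled matrix singular values.
\begin{verbatim}
# X: an M x T matrix holding one simulated realisation (columns = time).
emp_autocov <- function(X, h) {
  T_len <- ncol(X)
  tcrossprod(X[, (1 + h):T_len], X[, 1:(T_len - h)]) / T_len
}
# trace norm of the integral operator with kernel matrix K on an M-grid:
# singular values of the operator ~ singular values of K scaled by 1/M
trace_norm <- function(K) sum(svd(K)$d) / nrow(K)
rel_error <- function(R_bar_h, R_h, R_0)
  trace_norm(R_bar_h - R_h) / trace_norm(R_0)
\end{verbatim}
Averaging \texttt{emp\_autocov} over the $I$ replications yields the kernel of $\overline{\mathscr{R}^X_h}$ entering \eqref{eq6:RMSE_simulation}.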
Figure~\ref{fig6:custom_CKL_accuracy} shows the manner of error decay as $N$ grows; using $N=100$ harmonic components seems to be satisfactory. The relative simulation errors for $N>100$ seem to be dominated by the random component of \eqref{eq6:RMSE_simulation} rather than the simulation error itself. We note that the spectral density operator \eqref{eq6:custom_CKL_spec_density_operator_sum} is non-differentiable near the spatial diagonal, and consequently features a relatively slow (quadratic) decay of its eigenvalues. It thus represents one of the more challenging cases one might wish to simulate from in an FDA context: functional data analyses typically feature smooth curves and differentiable corresponding operators, including spectral density operators, admitting faster eigenvalue decay and requiring $N\ll 100$ eigenfunctions to capture a substantial amount of their variation. Figure~\ref{fig6:custom_CKL_speed} presents the simulation speed results with varying sample size parameters: the time horizon $T$ and the spatial resolution $M$. We compared the simulation using the known Cram\'er-Karhunen-Lo\`eve decomposition \eqref{eq6:custom_CKL_spec_density_operator_sum} with the method finding this decomposition numerically starting from the kernel \eqref{eq6:custom_CKL_spec_density_kernel}. This method finds the harmonic eigendecomposition using the (truncated) SVD algorithm applied to the discretisation of \eqref{eq6:custom_CKL_spec_density_kernel}. Figure~\ref{fig6:custom_CKL_speed} shows that such a routine can become very costly for higher spatial resolutions $M$; still, if no other method is available, it constitutes a general approach to simulating a process with arbitrary dynamics defined through a weak spectral density operator. \subsection{Long-range Dependent FARFIMA$(p,d,q)${}{} Process} \label{example6:long_range_FARFIMA} The next example is sourced from the work of \citet{li2019long} and \citet{shang2020comparison} on long-range dependent functional time series. They consider the FARFIMA(1,0.2,0) process defined by \eqref{eq6:FARIMA_def} with the autoregressive operator $\mathcal{A}_1$ and the innovation covariance operator $\mathcal{S}$ defined as integral operators with respective kernels \begin{align} \label{eq6:example_FARIMA_def_A1} A_1(x,y) &= 0.34 \exp\left\{ (x^2+y^2)/2 \right\}, \qquad x,y\in[0,1],\\ \label{eq6:example_FARIMA_def_S} S(x,y) &= \min(x,y), \qquad x,y\in[0,1], \end{align} depicted in Figure~\ref{fig6:FARIMA_kernels}. Recall that $S(x,y)= \min(x,y)$ is the covariance kernel of the standard Brownian motion on $[0,1]$. Because $d=0.2 > 0$, the process exhibits long-range dependence \citep{li2019long}. The constant $0.34$ ensures that condition \eqref{eq6:FARMA_condition} is satisfied, and thus the process is stationary and admits a weak spectral density operator (Theorem~\ref{thm:FARFIMA-part-ii}) given by \begin{equation}\label{eq6:example_FARIMA_spec_density} \mathscr{F}^X_\omega = \frac{\left[2\sin(\omega/2)\right]^{-2d}}{2\pi} \left( \Id - \mathcal{A}_1 e^{-\I\omega} \right)^{-1} \mathcal{S} \left( \Id - \mathcal{A}_1^* e^{\I\omega} \right)^{-1}, \qquad\omega\in[0,2\pi]. \end{equation} In fact, the operator $\mathcal{A}_1$ is of rank 1 and can be written as $\mathcal{A}_1 = 0.34\, g\otimes g$ with $g(x)=\exp( x^2/2 ),\,x\in[0,1]$.
This fact hugely simplifies the evaluation of \eqref{eq6:example_FARIMA_spec_density} because the inverse of the autoregressive part can be written by the Sherman--Morrison formula as \begin{equation}\label{eq6:sherman-morrison} \left( \Id - \mathcal{A}_1 e^{-\I\omega} \right)^{-1} = \Id + \frac{0.34 e^{-\I\omega}}{1-0.34 e^{-\I\omega} \|g\|^2_{L^2([0,1],\mathbb{R})}} g\otimes g, \qquad\omega\in[0,2\pi], \end{equation} thus allowing for fast evaluation (see the illustrative sketch below). Further computational gains, though less considerable, are made by using the Mercer decomposition of the Brownian motion covariance kernel \citep{deheuvels2003} \begin{equation}\label{eq6:brownian_motion_KL} S(x,y) = \sum_{n=1}^\infty \frac{1}{\left[(n-0.5)\pi\right]^2} \sqrt{2} \sin\left\{ (n-0.5) \pi x \right\} \sqrt{2} \sin\left\{ (n-0.5) \pi y \right\},\qquad x,y\in[0,1], \end{equation} instead of numerical evaluation on a grid followed by an SVD decomposition. In what follows, we consider the following implementations of the spectral, hybrid, and time-domain simulation methods: \begin{itemize} \item \textsc{spectral (bm)}: This method uses the known Mercer decomposition of the Brownian motion (\textsc{bm}) kernel \eqref{eq6:brownian_motion_KL} and simulates the process in the spectral domain using the method of Subsection~\ref{subsec6:simulation_filter} with the help of the Sherman-Morrison formula \eqref{eq6:sherman-morrison}. \item \textsc{hybrid (bm)}: This method again uses the known Mercer decomposition of the Brownian motion (\textsc{bm}) kernel \eqref{eq6:brownian_motion_KL}, simulates the FARFIMA$(0,d,0)$ process, and then applies the autoregressive recursion in the time-domain as explained in Subsection~\ref{subsec6:simulation_FARFIMA}, thus constituting a \textsc{hybrid} simulation method combining the spectral and time domains. \item \textsc{spectral (svd)}, \textsc{hybrid (svd)}: These methods correspond to \textsc{spectral (bm)} and \textsc{hybrid (bm)} but the Mercer decomposition of the Brownian motion kernel is calculated numerically using the \textsc{svd} algorithm. \item \textsc{temporal}: We use the original code by \citet{li2019long} available in the on-line supplement of their article and treat it as the benchmark for comparison with our spectral simulation methods. They simulate the realisations of the process by discretising the space domain $[0,1]$ and evaluating the integral operator $\mathcal{A}_1$ as a sum on this grid. Moreover, they perform the fractional integration \eqref{eq6:FARIMA_def} by analytically calculating the filter coefficients in the time-domain and thus expressing the process as FMA($\infty$), the functional moving average process of infinite order. Details on the FMA($\infty$) representation can be found in \citet{li2019long,hosking1981fractional}. The computational complexity of this method is $O(M^2 T^2)$. \end{itemize} In order to assess the simulation accuracy we opt to simulate $I=100$ independent realisations, and compare the mean empirical autocovariance operators \eqref{eq6:RMSE_simulation} with the true autocovariance operator for varying $T\in\{400,800,1600,3200,6400\}$ and $M\in\{101,201,501,1001\}$. We simulate the process with a varying parameter $T$, the time horizon of the simulation, as well as a varying spatial resolution $M$, based on a regular grid $\{x_m = (m-1)/(M-1)\}_{m=1}^M \subset [0,1]$.
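To make the benefit of the rank-one structure concrete, applying the inverse \eqref{eq6:sherman-morrison} to a discretised function requires only $O(M)$ operations, as in the following illustrative sketch (the grid, quadrature weight $1/M$, and names are our assumptions, not the \texttt{specsimfts} interface).
\begin{verbatim}
# Applying (Id - A_1 e^{-i omega})^{-1} via Sherman-Morrison, O(M) work.
x_grid <- seq(0, 1, length.out = 101); M <- length(x_grid)
g <- exp(x_grid^2 / 2)
g_norm2 <- sum(g^2) / M                     # ||g||^2 in L^2([0,1])
apply_AR_inverse <- function(v, omega) {
  c1 <- 0.34 * exp(-1i * omega) / (1 - 0.34 * exp(-1i * omega) * g_norm2)
  v + c1 * (sum(v * g) / M) * g             # v + c1 <v, g> g
}
\end{verbatim}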
The simulation accuracy error, reported in Figure~\ref{fig6:FARFIMA} (in Appendix~\ref{sec:supplementary_figures}), is negligible for all the simulation methods, and \eqref{eq6:RMSE_simulation} is dominated rather by the random component, which is higher for smaller $T$. Figure~\ref{fig6:FARIMA_speed} summarises the speed of the different simulation methods. It is obvious that the simulation by the \textsc{temporal} method used by \citet{li2019long} scales poorly in $T$, while the other methods are linear in $T$, performing significantly better. On the other hand, in terms of the spatial resolution, the \textsc{spectral (bm)} and \textsc{hybrid (bm)} methods, which take advantage of the known innovation covariance eigendecomposition, as well as the \textsc{temporal} method, have complexity quadratic in $M$ and scale similarly. The \textsc{spectral (svd)} and \textsc{hybrid (svd)} methods require a further $O(M^3)$ operations for the SVD algorithm and this contribution becomes visible for $M\in\{501,1001\}$. \begin{figure} \centering \includegraphics[width=1\textwidth]{figures/simulation_method/farima_speed.pdf} \caption[The simulation speed for the FARFIMA(1,0.2,0) in Example~\ref{example6:long_range_FARFIMA}]{ The dependence of the \textbf{simulation speed} for the long-range dependent FARFIMA(1,0.2,0) process defined in Example~\ref{example6:long_range_FARFIMA} on the simulation parameters. \textbf{Left:} The simulation speed for a varying time horizon $T\in\{400,800,1600,3200,6400\}$ with the spatial resolution set to $M=101$. \textbf{Right:} The dependence of the simulation speed on the grid size $M\in\{101,201,501,1001\}$ with $T=800$. } \label{fig6:FARIMA_speed} \end{figure} \subsection{FARMA$(p,q)${}{} Process with Smooth Parameters} \label{example6:FARMA_lowrank} \begin{figure}[ht] \centering \includegraphics[width=1\textwidth]{figures/simulation_method/arma_speed.pdf} \caption[The simulation speed for the FARMA(4,3) in Example~\ref{example6:FARMA_lowrank}]{ The dependence of the \textbf{simulation speed} for the FARMA(4,3) process defined in Example~\ref{example6:FARMA_lowrank} on the simulation parameters. \textbf{Left:} The simulation speed for a varying time horizon $T\in\{400,800,1600,3200,6400\}$ with the spatial resolution set to $M=101$. \textbf{Right:} The dependence of the simulation speed on the grid size $M\in\{101,201,501,1001\}$ with $T=800$. } \label{fig6:FARMA_speed} \end{figure} In this example we consider the FARMA(4,3) process \eqref{eq6:FARMA_def} with the autoregressive operators $\mathcal{A}_1,\dots,\mathcal{A}_4$, the moving average operators $\mathcal{B}_1,\dots,\mathcal{B}_3$, and the innovation covariance operator $\mathcal{S}$ defined as integral operators with kernels \begin{align*} A_1(x,y) &= 0.3 \sin(x-y), & B_1(x,y) &= x+y, \\ A_2(x,y) &= 0.3 \cos(x-y), & B_2(x,y) &= x, \\ A_3(x,y) &= 0.3 \sin(2x), & B_3(x,y) &= y, \\ A_4(x,y) &= 0.3 \cos(y), & \end{align*} and \begin{align}\label{eq6:FARMA_lowrank_def_S} S(x,y) = &\sin( 2\pi x )\sin( 2\pi y ) +\\ \nonumber + 0.6 &\cos( 2\pi x )\cos( 2\pi y ) +\\ \nonumber + 0.3 &\sin( 4\pi x )\sin( 4\pi y ) +\\ \nonumber + 0.1 &\cos( 4\pi x )\cos( 4\pi y ) +\\ \nonumber + 0.1 &\sin( 6\pi x )\sin( 6\pi y ) +\\ \nonumber + 0.1 &\cos( 6\pi x )\cos( 6\pi y ) +\\ \nonumber + 0.05 &\sin( 8\pi x )\sin( 8\pi y ) +\\ \nonumber + 0.05 &\cos( 8\pi x )\cos( 8\pi y ) +\\ \nonumber + 0.05 &\sin( 10\pi x )\sin( 10\pi y ) +\\ \nonumber + 0.05 &\cos( 10\pi x )\cos( 10\pi y ), \qquad x,y\in[0,1].
\end{align} These kernels are depicted in Appendix~\ref{sec:supplementary_figures}, Figure~\ref{fig6:FARMA_kernels}. The constant $0.3$ in the autoregressive kernels guarantees stationarity of the process; hence the process admits the spectral density \eqref{eq6:FARMA_spectral_density_operator}. Figure~\ref{fig6:FARMA}, included in Appendix~\ref{sec:supplementary_figures}, confirms that all the simulation methods approximate the target process well, as the relative simulation error metric is dominated by the stochastic component. Figure~\ref{fig6:FARMA_speed} presents the simulation speed comparison between the spectral domain methods and the time-domain autoregressive recursion approach (\textsc{temporal}). The four considered spectral domain methods are the following: \begin{itemize} \item \textsc{spectral (lr)}: This method uses the eigendecomposition \eqref{eq6:FARMA_lowrank_def_S} of the innovation noise covariance kernel. The simulation is conducted fully in the spectral domain as explained in Subsection~\ref{subsec6:simulation_FARFIMA}. \item \textsc{hybrid (lr)}: This method uses the eigendecomposition \eqref{eq6:FARMA_lowrank_def_S} of the innovation noise covariance kernel, simulates the corresponding moving average process in the spectral domain, and applies the autoregressive part in the time domain as explained in Subsection~\ref{subsec6:simulation_FARFIMA}. \item \textsc{spectral (svd)}, \textsc{hybrid (svd)}: As above, but the eigendecomposition of $S(x,y)$ is calculated numerically by the SVD algorithm. \end{itemize} Even though the time complexity of the spectral domain simulation methods, dominated by the term $O(M^2T)$, matches that of the \textsc{temporal} approach in this setting, the results presented in Figure~\ref{fig6:FARMA_speed} show that simulating the FARMA$(p,q)$ process fully in the spectral domain, which requires solving a matrix equation at each frequency, as well as the hybrid simulation, are slower than the \textsc{temporal} approach. The low-rank definition of \eqref{eq6:FARMA_lowrank_def_S} does not yield any computational speed-up compared to infinite-rank covariance kernels (such as the Brownian motion kernel in Example~\ref{example6:long_range_FARFIMA}); its purpose is rather to allow for easy modification of the code if one wishes to specify the process via its harmonic eigenfunctions. \section{General Recommendations for Simulations} \label{sec:general_redommendations} Our methodology provides a general purpose toolbox for simulating stationary (Gaussian) functional time series, leveraging their spectral representation. The high-level skeleton outlined at the beginning of Section~\ref{sec6:simulation_in_spectral_domain} essentially reduces the problem to simulating a finite ensemble of independent random elements and then applying the inverse fast Fourier transform (a minimal code sketch of this skeleton is provided at the end of Section~\ref{sec:code_availability}). The generation of this i.i.d.\ ensemble depends on how one chooses to carry out discretisation and/or dimension reduction, and we have demonstrated how knowledge of additional structure can significantly speed up the computations. \medskip \noindent Some take-away messages and recommendations are as follows. \begin{itemize} \item \textbf{Simulation of functional time series specified through their spectral density operator.} To date, this problem has not been addressed, presumably because the assessment of functional time series methods has traditionally been based on simulation of functional \emph{linear} processes.
Key methods pertaining to regression and prediction, however, present performance tradeoffs that depend on the frequency domain properties, rather than the time domain properties, of the time series \citep{hormann2015dynamic,hormann2015estimation,hormann2018testing,zhang2016white,tavakoli2016detecting,pham2018methodology,rubin2019functional,rubin2020sparsely}. One then wishes to simulate from a spectrally specified functional time series. More generally, our method can in principle be applied to any stationary model, linear or nonlinear, going well beyond the classical families of functional FARMA$(p,q)$ or FARFIMA$(p,d,q)$ processes, provided the process admits a weak spectral density operator. The method is fast and produces accurate results when the process is spectrally specified, courtesy of the Cram\'er-Karhunen-Lo\`{e}ve expansion (Subsection~\ref{subsec6:simulation_CKL}), which is provably the optimal way to carry out dimension reduction. Excellent performance can also be expected when the dynamics of a functional time series are specified by means of white noise filtering (Subsection~\ref{subsec6:simulation_filter}). For a general specification, the spectral domain simulation method of Subsection~\ref{subsec6:simulation_CKL} still provides a means to simulate an arbitrary functional time series; if the Cram\'er-Karhunen-Lo\`{e}ve expansion is unknown, or a filtering representation is not available, the spectral density evaluation and the numerical eigendecomposition might require more time-consuming operations. Still, the approach constitutes the only general purpose recipe, where no previous method was available. \item \textbf{Simulation of FARFIMA$(p,d,q)$ processes.} The advantages of the spectral approach compared to time domain methods become quite considerable when dealing with processes that have an infinite order moving average representation while having a simple formulation in the spectral domain. An important example is the class of FARFIMA$(p,d,q)$ processes with $d>0$ (long memory) or $d<0$ (anti-persistent), as the fractional integration is straightforward in the spectral domain while it produces an infinite order dependence in the time domain. Example~\ref{example6:long_range_FARFIMA} showed how to efficiently and effortlessly simulate a long-range dependent FARFIMA process. We therefore submit that the simulation of FARFIMA$(p,d,q)$ processes with $d\neq 0$ is more accessible and easier to implement in the spectral domain. \item \textbf{Simulation of FARMA$(p,q)$ processes.} If one specifically wants to simulate a FARMA$(p,q)$ process, simulation in the time domain is straightforward and fast. Still, our spectral domain simulation methods match the time complexity of the time domain methods in these cases. The constant hidden in the ``$O$'', however, seems to be higher for the spectral domain methods, as Example~\ref{example6:FARMA_lowrank} confirms. One advantage that simulation in the spectral domain has over the time domain, though, is that one does not need to worry about a burn-in period to reach the stationary distribution. We tentatively conclude that if a practitioner wishes to simulate a FARMA$(p,q)$ process, then both the time-domain and the spectral domain methods are equally applicable, though the time-domain simulation seems to be more straightforward to implement.
\end{itemize} Overall, the presented methods constitute a toolbox of fast and accurate spectral domain simulation techniques, allowing for the simulation of standard as well as unusual or ``custom defined'' stationary time series specified through their weak spectral density operators. We hope that the accompanying code can be helpful for carrying out numerical experiments in future functional time series methodological research. \section{Code Availability and \texttt{R} Package \texttt{specsimfts}} \label{sec:code_availability} To facilitate the implementation of the spectral domain simulation methods introduced in this article, we have created an \texttt{R} package, \texttt{specsimfts}, available on GitHub at \url{https://github.com/tomasrubin/specsimfts}. The package includes implementations of all the methods presented in this article, as well as the examples considered in Section~\ref{sec6:examples} as demos that are easy to use and modify.
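To complement the package, the following self-contained sketch (in Python/NumPy for brevity; it is independent of \texttt{specsimfts} and of its API) illustrates the high-level spectral-domain skeleton referred to in Section~\ref{sec:general_redommendations}: draw an independent complex Gaussian element at each frequency with covariance given by the spectral density, then apply the inverse FFT. The discretisation, the normalisation constants, and the real-part variance correction are our own conventions for this illustration and may differ from those used in the paper.

\begin{verbatim}
import numpy as np

def simulate_fts(spec_density, T, M, seed=0):
    """Simulate T steps of a zero-mean stationary Gaussian functional time
    series on an M-point grid, given spec_density(omega) -> (M, M) positive
    self-adjoint matrix. Normalisation conventions are illustrative."""
    rng = np.random.default_rng(seed)
    omegas = 2.0 * np.pi * np.arange(T) / T
    Z = np.empty((T, M), dtype=complex)
    for j, om in enumerate(omegas):
        F = spec_density(om)
        vals, vecs = np.linalg.eigh(F)          # operator square root
        root = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T
        xi = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        Z[j] = root @ xi                        # random element at frequency j
    X = np.fft.ifft(Z, axis=0) * np.sqrt(2.0 * np.pi * T)
    return np.sqrt(2.0) * X.real                # real part carries half the variance

# usage: temporal white noise with Brownian-motion covariance in space
x = np.linspace(0.0, 1.0, 101)
S = np.minimum.outer(x, x) / (2.0 * np.pi)      # constant spectral density
X = simulate_fts(lambda om: S, T=800, M=101)
\end{verbatim}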
\begin{document} \title{Density reconstruction from biased tracers and its application to primordial non-Gaussianity} \author{Omar Darwish} \affiliation{Department of Applied Mathematics and Theoretical Physics, \\ University of Cambridge, Wilberforce Road, \\ Cambridge CB3 0WA, United Kingdom} \author{Simon Foreman} \affiliation{Perimeter Institute for Theoretical Physics, \\ 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada} \affiliation{Dominion Radio Astrophysical Observatory, Herzberg Astronomy \& Astrophysics Research Centre, \\ National Research Council Canada, P.O.\ Box 248, Penticton, BC V2A 6J9, Canada} \author{Muntazir M.~Abidi} \affiliation{Department of Applied Mathematics and Theoretical Physics, \\ University of Cambridge, Wilberforce Road, \\ Cambridge CB3 0WA, United Kingdom} \author{Tobias Baldauf} \affiliation{Department of Applied Mathematics and Theoretical Physics, \\ University of Cambridge, Wilberforce Road, \\ Cambridge CB3 0WA, United Kingdom} \author{Blake D.~Sherwin} \affiliation{Department of Applied Mathematics and Theoretical Physics, \\ University of Cambridge, Wilberforce Road, \\ Cambridge CB3 0WA, United Kingdom} \affiliation{Kavli Institute for Cosmology, \\ University of Cambridge, \\ Cambridge CB3 0HA, United Kingdom} \author{P.~Daniel Meerburg} \affiliation{Van Swinderen Institute for Particle Physics and Gravity,\\ University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands} \begin{abstract} Large-scale Fourier modes of the cosmic density field are of great value for learning about cosmology because of their well-understood relationship to fluctuations in the early universe. However, cosmic variance generally limits the statistical precision that can be achieved when constraining model parameters using these modes as measured in galaxy surveys, and moreover, these modes are sometimes inaccessible due to observational systematics or foregrounds. For some applications, both limitations can be circumvented by reconstructing large-scale modes using the correlations they induce between smaller-scale modes of an observed tracer (such as galaxy positions). In this paper, we further develop a formalism for this reconstruction, using a quadratic estimator similar to the one used for lensing of the cosmic microwave background. We incorporate nonlinearities from gravity, nonlinear biasing, and local-type primordial non-Gaussianity, and verify that the estimator gives the expected results when applied to $N$-body simulations.
We then carry out forecasts for several upcoming surveys, demonstrating that, when reconstructed modes are included alongside directly-observed tracer density modes, constraints on local primordial non-Gaussianity are generically tightened by tens of percent compared to standard single-tracer analyses. In certain cases, these improvements arise from cosmic variance cancellation, with reconstructed modes taking the place of modes of a separate tracer, thus enabling an effective ``multitracer'' approach with single-tracer observations. \end{abstract} \keywords{cosmology, primordial non-Gaussianity --- quadratic estimators --- forecasting --- galaxy surveys} \maketitle \tableofcontents \section{Introduction} Our understanding of the Universe has benefited tremendously from measurements of the cosmic microwave background (CMB), primarily because of the linear relationship between fluctuations in the CMB and fluctuations generated in the very early universe. This relationship allows us to connect CMB measurements to the statistics of the initial fluctuations and their time evolution, and has led to the establishment of the current cosmological model. Extraction of similar information from the large-scale structure (LSS) of the universe is limited by nonlinear clustering at smaller distances and lower redshifts, requiring more elaborate modelling to interpret observations. This modelling burden is greatly reduced at the largest distances we can resolve with galaxy surveys, but this regime is in turn obscured by both statistical and systematic errors. In this paper, we explore a method to access these large scales while bypassing both types of errors: quadratic density reconstruction. This idea of density reconstruction relies on the fact that a fixed long-wavelength density fluctuation correlates two different small-scale modes due to non-linear evolution and higher-order biasing, with the amount of correlation proportional to the long-wavelength mode. This can be understood as arising from a violation of statistical homogeneity if the long-wavelength mode is considered as fixed and the shorter-wavelength modes are averaged over in an ensemble. Writing down a quadratic estimator that probes this induced correlation between two different modes, we can estimate the long-wavelength modes from the statistical properties of the smaller-scale modes.\footnote{In fact, these statements are independent of the relative wavelengths of the modes, and the formalism we present in this paper is not restricted to the so-called ``squeezed limit'' of the three modes involved. However, for our applications, the modes we are seeking to reconstruct have longer wavelengths than the two modes whose correlations are used for the reconstruction, so we focus on that situation in this paper.} There is a close analogy between this procedure and the common method of CMB lensing reconstruction, in which a quadratic estimator, making use of the lensing-induced correlation between two different CMB temperature modes, is used to reconstruct the lensing field (e.g.\ \citealt{Hu:2001kj}). It is using this analogy that many of the methods for density reconstruction were derived. The idea of using a standard quadratic estimator in the CMB lensing form to perform this reconstruction was first proposed by \cite{Foreman:2018gnv}, building on earlier work that used somewhat modified estimators (\citealt{Pen:2012ft,Zhu:2015zlh,Zhu:2016esh}).
Significant further work in this area has been presented by \cite{Li:2018izh,Modi:2019hnu,Karacayli:2019iyd,Li:2020uug,Li:2020luq}; see further discussion in Section~\ref{sec:discussion}. The work in this paper broadly divides into two parts. In the first part, we present a considerable expansion of current technology for density reconstruction. We discuss the application of density reconstruction to biased tracers, including, for the first time, a full non-linear bias model in such a formalism. We further validate our method on a suite of realistic $N$-body simulations, demonstrating that our methods perform just as expected from theoretical calculations for both the reconstruction and its noise level. We note that this reconstruction has a wealth of applications. One simple application is the following: LSS surveys are often plagued by observational systematics that manifest at large scales, impeding the direct observation of low-$k$ modes. Galaxy and quasar surveys are affected, for example, by variations in the density of foreground stars, seeing, and galactic dust extinction (e.g.\ \citealt{Ross:2012sx,Ho:2013lda,Kalus:2018qsy}), while 21~cm surveys cannot access modes with low line-of-sight wavenumbers that are dominated by galactic foregrounds, and imperfect knowledge of the instrument can spread this contamination throughout a wider region of Fourier space (e.g.\ \citealt{Parsons:2012qh,Liu:2014bba,Liu:2014yxa}). A method of reconstructing these inaccessible modes using correlations between smaller-scale modes will improve the constraining power of a given survey for large-scale signals such as local non-Gaussianity, and allow cross-correlations involving 21~cm surveys that would otherwise be impossible (e.g.\ \citealt{Li:2018izh}). In this paper, we parameterize large-scale systematics with a wavenumber $K_{\rm min}$ below which the tracer modes are assumed to be inaccessible, and explore the precision with which modes with $K<K_{\rm min}$ can be recovered by our estimator. We note that this assumes that the relevant systematics can be parameterized as a large-scale additive component, rather than a possible modulation that might also significantly affect small scales; while there is evidence that this is a reasonable assumption for some of the currently known systematics (e.g.~\cite{Kalus:2018qsy}), it may not hold in all cases. In contrast to this general application, the second goal of our paper is to explore, in detail, a much more subtle application of density reconstruction: improving constraints on local-type primordial non-Gaussianity. We will briefly motivate the measurement of primordial non-Gaussianity and the utility of density reconstruction for improving these constraints in the following paragraphs. The CMB has taught us that the statistics of the primordial fluctuations can be accurately described by a red-tilted power law. If the initial conditions are completely described by this power law, they have to be Gaussian distributed, with statistics determined by only two degrees of freedom: the amplitude ($A_{\rm s}$) and tilt ($n_{\rm s}$) of the power law. If this is the case, however, it will be difficult to reach beyond our current understanding of the early Universe. The most widely accepted theory is known as cosmic inflation, which postulates a short early period of accelerated cosmic expansion.
Effectively, the constraints we derive from the CMB tell us that inflation can be very well described by a scalar field slowly rolling down a potential (``single-field slow roll'', or SFSR), with only (weak) gravitational interactions. While such a model is certainly possible (it was the first to be considered \citep{Gut81,Lin82,Lin82b}), it will not provide us with simple opportunities to understand the physics of inflation. If a proposed model of the early Universe has to comply with Gaussian initial conditions, effectively the model will observationally resemble SFSR. Any further distinction could be extracted from the details of the scale dependence of the primordial power spectrum \citep{Slosar:2019gvt}, but so far, observations do not reveal any obvious deviations from a single parameter power law \citep{Akrami:2018odb,Planck2019IX}. A much more powerful model discriminator would be available if the initial conditions showed a (small) deviation from Gaussianity. In the presence of non-Gaussianity, all moments beyond the power spectrum will generically be excited (starting with the 3-point function or bispectrum). Technically, these higher-point spectra probe the dynamics of the field(s) driving inflation. As a result, a measurement of non-Gaussianity would reveal details of inflation that can be directly related to the underlying fundamental physics. For example, non-Gaussianity could reveal the presence of more fields relevant during inflation, or could provide clues to how strongly coupled the inflaton field is (see e.g.~\cite{Meerburg2019} and references therein). These powerful constraints cannot be exposed through any other measurement, making non-Gaussianities a unique probe of the early Universe. To lowest order, primordial non-Gaussianities modulate the gravitational potential $\Phi$ via \begin{eqnarray} \Phi(\boldsymbol{k}) = \varphi_{\rm G} (\boldsymbol{k})+ f_{\rm NL}^X \int \frac{d^3 q}{(2\pi)^3} G_{\rm NL}^X(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q})\varphi_{\rm G} (\boldsymbol{q})\varphi_{\rm G}(\boldsymbol{k}-\boldsymbol{q})\ , \end{eqnarray} where $\varphi_{\rm G}$ is the Gaussian potential and $G_{\rm NL}^X$ is a kernel that describes how the potential is modulated. In this paper, we are interested in local non-Gaussianities for which $G_{\rm NL}^{\rm local} = 1$, i.e. \begin{equation} \Phi(\boldsymbol{x})=\varphi_{\rm G}(\boldsymbol{x})+f_\text{NL} (\varphi_{\rm G}^2(\boldsymbol{x})-\langle \varphi_{\rm G}^2\rangle), \label{eq:localNGs} \end{equation} where we have subtracted the mean to yield zero expectation value for the fluctuations and have renamed $f_{\rm NL}^{\rm local}$ to $f_{\rm NL}$. Current constraints set $\sigma(f_{\rm NL}) \sim \mathcal{O}(5)$ \citep{Planck2019IX}, while $f_{\rm NL} \sim 1$ has been identified as a compelling theoretical threshold~\citep{Alvarez:2014vva} which provides a strong motivation to go beyond current limits: if a measurement is made showing $f_{\rm NL}$ above this limit, it would effectively rule out SFSR inflation as a viable scenario. Future ground-based CMB experiments \citep{SO2019,Abazajian:2016yjj} may be able to reach $\sigma(f_{\rm NL}) \sim \mathcal{O}(1)$, but poor scaling and galactic and cosmological foregrounds will likely prevent the CMB from reaching (far) beyond this limit. Fortunately, the large scale structure (LSS) in the universe provides access to many more modes, for which $\sigma(f_{\rm NL}) \propto ( k_{\rm max}^3 \log k_{\rm max}/k_{\rm min} )^{-1/2}$ \citep{Scoccimarro_2004}.
While increased dimensionality will help to improve constraints, the use of LSS will introduce many complications. For one, the scaling argument breaks down when $k_{\rm max}$ exceeds the nonlinear scale $k_{\rm NL}$, which is of order $0.2h\, {\rm Mpc}^{-1}\,$ for current galaxy surveys \citep{DAmico:2019fhj,Ivanov:2019pdj}. Furthermore, line-of-sight information, which will be crucial in obtaining a sufficient number of modes, will require a careful treatment mainly due to redshift space effects \citep{Gil_Mar_n_2014}. Obtaining cosmological constraints from a measurement of the full LSS bispectrum will therefore be challenging, not least because of non-Gaussian covariance \citep{Scoccimarro_2004,Sefusatti_2006,kayo2013cosmological} which will likely require (a large number of) simulations to estimate \citep{Chan_2017}. Some of these difficulties can be overcome by simplifying the full bispectrum into more compressed statistics \citep{Schmittfull:2014tca,Fergusson:2010ia,Byun:2017fkz,Dai:2020adm,MoradinezhadDizgah:2019xun,Chiang:2014oga,dePutter:2018jqk,Gualdi:2018pyw}. The advantage of these statistics is that they should capture nearly all the information \citep{MoradinezhadDizgah:2019xun} while being computationally and observationally less challenging. Unlike in the CMB, in LSS local primordial non-Gaussianity can also significantly affect the power spectrum of biased tracers, such as galaxies. Specifically, it has been shown \citep{Dalal:2007cu,Matarrese_2008,Slosar_2008,Desjacques:2010jw,Schmidt_2010} that tracer bias will be affected by the primordial non-Gaussianity, with the bias acquiring a unique $1/k^2$ contribution, which is hard to produce otherwise. This signature has been used to place constraints on $f_\text{NL}$ with current surveys \citep{Giannantonio:2013uqa,Leistedt:2014zqa,Castorina:2019wmr}. Unfortunately, although the signal should be distinguishable from other effects, the precision with which we can measure the power spectrum on large scales is ultimately limited by cosmic variance, i.e.\ by the number of available modes. However, it was shown that this cosmic variance can be mitigated \citep{Seljak:2008xr,McDonald:2008sh,Hamaus_2011,Schmittfull:2017ffw,Liu:2020izx} by using multiple tracers of the same underlying density field (with different biases), which essentially allows a measurement of scale-dependent bias via a mode-by-mode comparison of the different tracers. A combination of two (or more, e.g.\ \citealt{Schmittfull:2017ffw,Ballardini:2019wxj}) tracers allows for cosmic variance cancellation, so that a measurement of the scale-dependent bias from local primordial non-Gaussianity is limited only by the number density of these tracers. Forecasts show that these techniques enable constraints to reach $\sigma(f_{\rm NL})\sim 1$ this decade \citep{Schmittfull:2017ffw,Ballardini:2019wxj,Munchmeyer:2018eey,SO2019}. In this paper, we show that this cosmic variance cancellation can also be achieved, to some extent, using only a single tracer. In order to do this, we compare our reconstructed density field (which provides information from higher-point functions) with a directly-measured tracer field.
In the end, the constraints on $f_{\rm NL}$ will depend on the auto-correlation of the tracer field $P_{\rm gg}$\footnote{Since we will focus on the use of galaxies as tracers in this paper, we will use the subscript g to refer to these tracers, although the method we describe is equally applicable to quasars, line intensity maps, or other tracers.}, the cross correlation of the tracer and the reconstructed field $P_{\rm gr}$, and the auto-correlation of the reconstructed field $P_{\rm rr}$. This approach is related to that of~\cite{dePutter:2018jqk}, where similar ideas are used to simplify a forecast of the combined information in the power spectrum, bispectrum and trispectrum. However, unlike in \cite{dePutter:2018jqk}, we examine the reconstruction approach as a possible analysis tool rather than a method for more easily computing complex forecasts. In addition, whereas \cite{dePutter:2018jqk} relies on an extension of position-dependent power spectra \citep{Chiang_2014,Chiang_2015,Chiang_2017,Adhikari_2016}, which draw information only from the squeezed limit, here we use a quadratic estimator formalism for the reconstructed field without imposing a squeezed-limit constraint. Let us briefly summarize our most important results: \begin{itemize} \item The modes of the tracer overdensity will be coupled due to nonlinearities from gravity, nonlinear bias, and primordial non-Gaussianity. The amplitudes (parameterized with bias coefficients) of several of these mode-couplings are unknown a priori. We incorporate this in our characterization of the quadratic estimator for long-wavelength modes, and marginalize over the unknown coefficients in our forecasts. We also highlight the important contribution of tracer shot noise to the noise on the reconstructed modes. \item We demonstrate density reconstruction using dark matter halos in $N$-body simulations, verifying that the performance agrees well with that predicted from analytical formulas. Though additional work using simulations will be required for a practical analysis, our results indicate that our forecasts are realistic. \item We show that the quadratic estimator is able to reconstruct long-wavelength modes at high signal-to-noise for a wide range of upcoming surveys (see Fig.~\ref{fig:prr-ebars}). \item The addition of the reconstructed field to forecasts using the large-scale biased tracer field can improve constraints on $f_\text{NL}$ by tens of percent, depending on the survey configuration. The improvement arises from a combination of two sources: sample variance cancellation of signal in the large-scale tracer field, and additional scale-dependent signal in the reconstructed field on scales where the tracer field may be obscured by observational systematics. The additional information in the reconstructed modes can be viewed as a signature of non-Gaussian signal in the three- and four-point functions, and our approach can be viewed as a simple method to obtain combined information from the three- and four-point functions and the power spectrum. \item The performance of this approach to constraining $f_\text{NL}$ is limited by a combination of tracer number density and maximum wavenumber of modes that can be used for reconstruction, with the details again depending on the survey configuration. Potential improvements using response function approaches \citep{Barreira:2017sqa, Barreira:2017kxd} could be explored to extend the reconstruction wavenumber and gain signal-to-noise. \end{itemize} The outline of our paper is as follows.
In Section~\ref{sec:formalism}, we describe our methodology for density reconstruction, including the quadratic estimator formalism and bias expansion we use. In Section~\ref{sec:sims}, we apply this method to halos in $N$-body simulations. In Section~\ref{sec:forecasts}, we present our forecasts for the expected precision on reconstructed modes, as well as constraints on local non-Gaussianity. We compare this reconstruction formalism to other work involving higher-point statistics in Section~\ref{sec:discussion}. Finally, we conclude in Section~\ref{sec:conclusions}. Several derivations and technical details are included in the appendices, and a summary of our notation can be found in Table~\ref{tab:notation}. Except in Sec.~\ref{sec:sims}, we use cosmological parameters from the Planck 2015 results, given in the ``TT,TE,EE+lowP+lensing+ext'' column of Table~4 of \citealt{Ade:2015xua}. \begin{table} \begin{centering} \begin{tabular}{ |l|l|l| } \hline Quantity & Symbol & Defined in \\ \hline\hline Dirac delta function in 3d & $\delta_{\rm D}(\vec{k})$ & --- \\ Wavenumbers of modes used in reconstruction & $\boldsymbol{k}$, $\boldsymbol{q}$, etc. & --- \\ Wavenumbers of modes used for $f_\text{NL}$ constraints & $\boldsymbol{K}$, $\boldsymbol{K}'$, etc. & --- \\ \hline Amplitude of local primordial non-Gaussianity & $f_\text{NL}$ & Eq.~\eqref{eq:localNGs} \\ Factor relating primordial potential and $\delta_1$ & $M(k,z)$ & Eqs.~\eqref{eq:phidef}-\eqref{eq:M} \\ \hline Linear matter overdensity & $\delta_1(\boldsymbol{k},z)$ & --- \\ Linear matter power spectrum & $P_\text{lin}(k,z)$ & --- \\ Tracer overdensity & $\delta_{\rm g}(\boldsymbol{k},z)$ & Eq.~\eqref{eq:deltag-generic} [generic]; \\ & & Eq.~\eqref{eq:deltag-condensed} [second-order bias model] \\ Second-order mode-coupling & $F_{\alpha}(\vec{k}_1, \vec{k}_2)$ & Eq.~\eqref{eq:deltag-generic} [generic]; \\ & & Eq.~\eqref{eq:deltag-condensed} [second-order bias model] \\ Second-order response of small-scale power spectrum to long mode & $f_{\alpha}(\vec{k}_1, \vec{k}_2,z)$ & Eq.~\eqref{eq:falpha} \\ Coefficient of $F_\alpha$ in second-order bias model for $\delta_{\rm g}$ & $c_\alpha$ & Eq.~\eqref{eq:deltag-condensed} \\ Linear tracer bias & $b_1\equiv b_{10}^\text{E}$ & Eq.~\eqref{eq:deltag-condensed} \\ Quadratic tracer bias & $b_2\equiv b_{20}^\text{E}$ & Eq.~\eqref{eq:deltag-condensed} \\ Other second-order bias parameters & $b_{s^2}^{\rm E}$, $b_{01}^{\rm E}$, $\cdots$ & Sec.~\ref{sec:bias} \\ \hline Quadratic estimator for mode with wavenumber $\vec{K}$ & $\hat{\Delta}_{\alpha}(\vec{K})$ & Eqs.~\eqref{eq:quad-def}, \eqref{eq:quadest} \\ Weight function in $\hat{\Delta}_{\alpha}(\vec{K})$ & $g_{\alpha}(\boldsymbol{k}_1,\boldsymbol{k}_2)$ & Eq.~\eqref{eq:galpha} \\ Normalization and Gaussian noise of $\hat{\Delta}_{\alpha}(\vec{K})$ & $N_{\alpha\beta}(\vec{K})$ & Eq.~\eqref{eq:nab} \\ Mode reconstructed with growth-coupling estimator $\hat{\Delta}_{\rm G}(\vec{K},z)$ & $\delta_{\rm r}(\boldsymbol{k},z)$ & --- \\ Power spectrum of $\delta_{\rm g}$, ignoring shot noise contribution & $P_{\rm gg}$ & --- \\ Sum of $P_{\rm gg}$ and shot noise contribution & $P_{\rm tot}$ & Sec.~\ref{sec:qe} \\ Cross power spectrum between $\delta_{\rm g}$ and $\delta_{\rm r}$, ignoring shot noise contribution & $P_{\rm gr}$ & --- \\ Power spectrum of $\delta_{\rm r}$, ignoring shot noise contribution & $P_{\rm rr}$ & --- \\ Shot noise contribution to $\delta_{\rm g}$ power spectrum & $P_{\rm gg,shot}$ & Eq.~\eqref{eq:pggshot} \\ Shot noise
contribution to $\delta_{\rm r}$ power spectrum & $P_{\rm rr,shot}$ & Eqs.~\eqref{eq:nrrshot}-\eqref{eq:prrshot-appendix} \\ Shot noise contribution to $\delta_{\rm g}$-$\delta_{\rm r}$ cross power spectrum & $P_{\rm gr,shot}$ & Eqs.~\eqref{eq:nrtshot}-\eqref{eq:pgrshot-appendix} \\ \hline Lowest wavenumber within survey volume & $K_{\rm f}$ & Sec.~\ref{sec:scales} \\ Wavenumber below which we assume $\delta_{\rm g}$ cannot be measured & $K_{\rm min}$ & Sec.~\ref{sec:scales} \\ Maximum wavenumber used for $f_\text{NL}$ constraints & $K_{\rm max}$ & Sec.~\ref{sec:scales} \\ Maximum wavenumber used in quadratic estimator for reconstructed modes & $k_{\rm max}$ & Sec.~\ref{sec:scales} \\ \hline \end{tabular} \caption{ \label{tab:notation} Notation used for important quantities in this paper. } \end{centering} \end{table} \section{Density reconstruction} \label{sec:formalism} \subsection{Quadratic estimator: general case} \label{sec:qe} In this section, we will develop the general formalism for reconstructing large-scale\footnote{We again remind the reader that our formalism is generally applicable, without any strong assumptions on the wavelengths of the modes.} density modes using observations of a biased tracer. This is largely based on the treatment in \cite{Foreman:2018gnv}, but we have adapted their expressions to 3D wavenumbers rather than a separate treatment of line-of-sight and transverse components of~$\boldsymbol{k}$. Suppose that the overdensity field of the tracer, $\delta_{\rm g}$, is well-described by a linear bias with respect to the linear matter overdensity $\delta_1$, plus a set of quadratic terms that couple modes of $\delta_1$ with kernels $F_\alpha$ and amplitudes $c_\alpha$: \begin{equation} \delta_{\rm g}(\vec{k}, z) \approx b_{1}(z)\delta_1(\vec{k}, z) +\sum_{\alpha}c_{\alpha}(z) \int_{\vec{q}}F_\alpha(\vec{q}, \vec{k}-\vec{q}; z) \delta_1(\vec{q}, z)\delta_1(\vec{k}-\vec{q}, z)\ , \label{eq:deltag-generic} \end{equation} where $\int_{\boldsymbol{q}} \equiv (2\pi)^{-3} \int d^3\boldsymbol{q}$. For example, if we took $\delta_{\rm g}$ to be the matter overdensity rather than a biased tracer, we would have $b_{1}=1$ and the sum would run over the second-order mode-couplings induced by gravitational evolution, which take the form (e.g.\ \citealt{Sherwin:2012nh}) \begin{equation} F_{\rm G}(\boldsymbol{k}_1,\boldsymbol{k}_2;z) \equiv \frac{17}{21}\ , \quad F_{\rm S}(\boldsymbol{k}_1,\boldsymbol{k}_2;z) \equiv \frac{1}{2} \left( \frac{1}{k_1^2}+\frac{1}{k_2^2} \right) \boldsymbol{k}_1\cdot\boldsymbol{k}_2\ , \quad F_{\rm T}(\boldsymbol{k}_1,\boldsymbol{k}_2;z) \equiv \frac{2}{7} \left[ \frac{\left( \boldsymbol{k}_1\cdot\boldsymbol{k}_2 \right)^2}{k_1^2 k_2^2} -\frac{1}{3} \right]\ , \label{eq:f2kernels} \end{equation} with $c_{\rm G} = c_{\rm S} = c_{\rm T} = 1$ and the subscripts indicating that these functions arise from isotropic {\bf G}rowth, a large-scale coordinate {\bf S}hift, and a {\bf T}idal coupling. For a biased tracer, nonlinear biasing will lead to $c_\alpha\neq 1$ for the above couplings, and primordial non-Gaussianity will introduce additional mode-couplings. In Sec.~\ref{sec:bias}, we will introduce the full set of mode-couplings that must be considered, but we note here that many of the corresponding $c_\alpha$ coefficients will not be known a priori, and this must be accounted for in the density reconstruction procedure. Henceforth, we will drop the $z$-dependence from the quantities defined above.
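As a concrete illustration, a minimal Python/NumPy sketch of the three gravitational kernels in Eq.~\eqref{eq:f2kernels}, and of the response $f_\alpha$ of Eq.~\eqref{eq:falpha} below, might look as follows; the power-law \texttt{Plin} is a toy stand-in for a proper linear matter power spectrum, and all numerical values are illustrative assumptions.

\begin{verbatim}
import numpy as np

def F_G(k1, k2):
    """Growth kernel of Eq. (f2kernels)."""
    return 17.0 / 21.0

def F_S(k1, k2):
    """Shift kernel of Eq. (f2kernels)."""
    return 0.5 * (1.0 / np.dot(k1, k1) + 1.0 / np.dot(k2, k2)) * np.dot(k1, k2)

def F_T(k1, k2):
    """Tidal kernel of Eq. (f2kernels)."""
    mu2 = np.dot(k1, k2) ** 2 / (np.dot(k1, k1) * np.dot(k2, k2))
    return (2.0 / 7.0) * (mu2 - 1.0 / 3.0)

def Plin(k, A=2.0e4, kp=0.05, ns=0.96):
    """Toy power-law stand-in for the linear matter power spectrum
    (crude high-k slope only; replace with CAMB/CLASS output in practice)."""
    return A * (k / kp) ** (ns - 4.0)

def f_alpha(F, k1, k2):
    """Response f_alpha(k1, k2) of Eq. (falpha) for a given kernel F."""
    K = k1 + k2
    return 2.0 * (F(K, -k1) * Plin(np.linalg.norm(k1))
                  + F(K, -k2) * Plin(np.linalg.norm(k2)))

# example: growth response for two small-scale modes adding to a long mode
k1 = np.array([0.10, 0.0, 0.0]); k2 = np.array([-0.09, 0.01, 0.0])
print(f_alpha(F_G, k1, k2))
\end{verbatim}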
Now, we would like to use the mode-couplings in Eq.~\eqref{eq:deltag-generic} to construct a quadratic estimator for a given mode of~$\delta_1$. We will present the logic in some detail, for readers who may not be familiar with the relevant arguments, but a reader who is comfortable with peak-background-split arguments or the CMB lensing formalism may wish to skip to the final result in Eqs.~\eqref{eq:covoffdiag}-\eqref{eq:falpha}. The analogous procedure for CMB lensing is to first consider an ensemble average over CMB fluctuations while keeping fluctuations in the lower-redshift matter density fixed. In this case, the fixed modes of the lensing potential~$\phi$ (which is a line-of-sight projection of the lower-redshift density field -- see e.g.\ \citealt{Hu:2001kj}) break the statistical isotropy of the CMB fluctuations, inducing correlations between CMB fluctuation modes with different wavenumbers: for temperature modes on the flat sky, the specific effect is given by \begin{equation} \left\langle T(\boldsymbol{\ell}) T(\boldsymbol{L}-\boldsymbol{\ell}) \right\rangle_{\phi\text{ fixed}} = (2\pi)^2 \delta_{\rm D}(\boldsymbol{L}) C_L + f_\phi(\boldsymbol{\ell},\boldsymbol{L}-\boldsymbol{\ell}) \phi(\boldsymbol{L})\ . \label{eq:cmblensing} \end{equation} When analyzing CMB simulations or data, the temperature two-point function is estimated by a (weighted) sum over~$\boldsymbol{\ell}$ within a given CMB realization, and this in fact approximates the ensemble average above, with $\phi$ modes effectively fixed because they do not explicitly enter the sum. Eq.~\eqref{eq:cmblensing} is an efficient starting point for deriving quadratic estimators for a specific mode of $\phi$, and we would like to find the analogous starting point for density reconstruction. To proceed, we consider an ensemble average over all modes of $\delta_1$ except those with wavenumbers in a small neighborhood around $\boldsymbol{K}$, with $\delta_1(\boldsymbol{K})$ being the mode we will eventually want to reconstruct. (We must consider a neighborhood around $\boldsymbol{K}$ because we are working in the continuum limit, where we have integrals instead of discrete sums over wavenumbers; we will return to this point below.) In this ensemble average, which we denote by ``$\sim$$\boldsymbol{K}$ fixed'', and using Eq.~\eqref{eq:deltag-generic}, the two-point function of $\delta_{\rm g}$ at next-to-leading order in $\delta_1$ is \begin{align*} \left\langle \delta_{\rm g}(\vec{k}) \delta_{\rm g}(\vec{K}-\vec{k})\right\rangle_{\sim\boldsymbol{K}\text{ fixed}} &= b_{1}^2 \langle \delta_1(\vec{k})\delta_1(\vec{K}-\vec{k}) \rangle \\ &\quad + b_{1} \int_{\boldsymbol{q}} \sum_\alpha c_\alpha F_{\alpha}(\vec{q},\vec{k}-\vec{q}) \, \langle \delta_1(\vec{q}) \delta_1(\vec{k}-\vec{q})\delta_1(\vec{K}-\vec{k})\rangle_{\sim\boldsymbol{K}\text{ fixed}} + [\vec{k}\leftrightarrow\vec{K}-\vec{k}]\ . \numberthis \end{align*} In the first line, we have assumed that $\boldsymbol{k}$ is not within the chosen neighborhood of $\boldsymbol{K}$ or the equivalent neighborhood of $0$, so there is no difference between our special ensemble average and the standard one. In the second line, the integrand evaluates to zero if $\boldsymbol{q}$ and $\boldsymbol{K}-\boldsymbol{q}$ are not within the neighborhood of $\boldsymbol{K}$, since in that case, all three $\delta_1$ modes are averaged over, and the three-point function is zero for $\boldsymbol{K}\neq 0$.
When $\boldsymbol{q} \sim \boldsymbol{K}$ or $\boldsymbol{K}-\boldsymbol{q} \sim \boldsymbol{K}$, where we use ``$\sim\boldsymbol{K}$'' to indicate a vector falling within the neighborhood of $\boldsymbol{K}$, then $\delta_1(\boldsymbol{q})$ or $\delta_1(\boldsymbol{K}-\boldsymbol{q})$ factor out of the ensemble average because they are held fixed, and the remaining two modes are averaged over: \begin{align*} \left\langle \delta_{\rm g}(\vec{k}) \delta_{\rm g}(\vec{K}-\vec{k})\right\rangle_{\sim\boldsymbol{K}\text{ fixed}} &= b_{1}^2 \langle \delta_1(\vec{k})\delta_1(\vec{K}-\vec{k}) \rangle \\ &\quad + b_{1} \int_{\vec{q}\,\sim\, \vec{K}} \sum_\alpha c_\alpha F_{\alpha}(\vec{q},\vec{k}-\vec{q})\delta_1(\vec{q}) \, \langle \delta_1(\vec{k}-\vec{q})\delta_1(\vec{K}-\vec{k})\rangle + [\vec{k}\leftrightarrow\vec{K}-\vec{k}] \\ &\quad + b_{1} \int_{\vec{K}-\vec{q}\,\sim\, \vec{K}} \sum_{\alpha} c_{\alpha} F_{\alpha}(\vec{q},\vec{k}-\vec{q})\delta_1(\vec{K}-\vec{q}) \, \langle \delta_1(\vec{q})\delta_1(\vec{K}-\vec{k})\rangle + [\vec{k}\leftrightarrow\vec{K}-\vec{k}] \ . \numberthis \end{align*} From here, we simply evaluate the two-point correlators and use the resulting Dirac delta functions to collapse the $\boldsymbol{q}$ integrals.\footnote{One must integrate in a neighborhood around the argument of a Dirac delta function for this collapse to take place, and this is why we considered a neighborhood around $\boldsymbol{K}$ in the first place. In the discrete case, where we have sums instead of integrals over wavenumbers, we could define our ensemble average to keep a single mode $\delta_1(\boldsymbol{K})$ fixed, since we would then have Kronecker deltas instead of Dirac delta functions.} The final result is \begin{equation} \left\langle \delta_{\rm g}(\vec{k}) \delta_{\rm g}(\vec{K}-\vec{k})\right\rangle_{\sim\boldsymbol{K}\text{ fixed}} = (2\pi)^3 \delta_{\rm D}(\boldsymbol{K}) \, b_{1}^2 P_{\rm lin}(k) + b_{1} \sum_{\alpha} c_{\alpha} f_\alpha(\vec{k}, \vec{K}-\vec{k}) \delta_1(\vec{K})\ , \label{eq:covoffdiag} \end{equation} where \begin{equation} f_{\alpha}(\vec{k}_1, \vec{k}_2) \equiv 2 \left[ F_{\alpha}(\vec{k}_1+\vec{k}_2, -\vec{k}_1) P_\text{lin}(k_1) + (1\leftrightarrow 2) \right]\ . \label{eq:falpha} \end{equation} In Eq.~\eqref{eq:covoffdiag}, we find the same structure as in the CMB lensing case in Eq.~\eqref{eq:cmblensing}: the standard power spectrum term, plus a term from off-diagonal correlations induced by the fixed background mode. Eq.~\eqref{eq:covoffdiag} suggests we can multiply two different modes of the measured tracer field and then simply ``divide'' by the coupling strength $b_{1} \sum_{\alpha}c_{\alpha}f_\alpha(\vec{k}, \vec{K}-\vec{k})$ to obtain an estimate of the linear field $\delta_1$ at large scales. Unfortunately, in general we do not know the bias coefficients $b_1$ or $c_{\alpha}$ a priori, so the best we can do is to use the galaxy mode couplings to estimate the product $b_1 c_{\alpha}\delta_1$ for a chosen $\alpha$. To reduce variance on the estimate, we will sum over all the mode couplings that involve the same large-scale mode.
This can be achieved by writing the following general quadratic estimator \begin{equation} \hat{\Delta}_{\alpha}(\vec{K}) \equiv \widehat{b_1 c_{\alpha}\delta_1}(\vec{K})=\int_{\vec{q}}g_{\alpha}(\vec{q}, \vec{K}-\vec{q}) \delta_{\text{g}}(\vec{q})\delta_{\text{g}}(\vec{K}-\vec{q})\ , \label{eq:quad-def} \end{equation} with weights $g_{\alpha}$, similar to what is done for CMB lensing \citep{Hu:2001kj} or ``clustering fossils'' from primordial gravitational waves \citep{Masui:2010cz,Jeong:2012df,Masui:2017fzw}. For an alternative derivation of this estimator, based on optimizing the cross-correlation of a quadratic combination of measured modes with the true linear mode to be reconstructed, see Appendix~\ref{app:rec-from-b}. The covariance between two such estimators $\alpha$ and $\beta$ of the biased matter density field on large scales can be split into a Gaussian part, coming from all disconnected contributions, and a non-Gaussian part that includes all connected contributions: \begin{equation} \langle \hat{\Delta}_{\alpha}(\vec{K}) \hat{\Delta}^{*}_{\beta}(\vec{K'}) \rangle - \langle\hat{\Delta}_{\alpha}(\vec{K})\rangle\langle \hat{\Delta}^{*}_{\beta}(\vec{K'}) \rangle = (2\pi)^3\delta_{\rm D}(\vec{K}-\vec{K'}) \left[ {\rm Cov_G}(\hat{\Delta}_{\alpha}(\vec{K}), \hat{\Delta}^{*}_{\beta}(\vec{K'}) )+{\rm Cov_{NG}}(\hat{\Delta}_{\alpha}(\vec{K}), \hat{\Delta}^{*}_{\beta}(\vec{K'})) \right]\ . \end{equation} We constrain the weights to provide an estimator that is optimal in the sense of minimizing the Gaussian contributions to its variance, \begin{equation} \mathrm{Var}_{\rm{G}}[\hat{\Delta}_{\alpha}](\vec{K})\equiv \mathrm{Cov}_{\rm{G}}(\hat{\Delta}_{\alpha}(\vec{K}), \hat{\Delta}^{*}_{\alpha}(\vec{K}) )\ , \end{equation} while requiring that it be unbiased if there were only a single mode-coupling, i.e. \begin{equation} \int_{\vec{q}}g_{\alpha}(\vec{q}, \vec{K}-\vec{q}) f_{\alpha}(\vec{q}, \vec{K}-\vec{q}) = 1\ . \label{eq:gab} \end{equation} These criteria lead to the familiar quadratic estimator weights: \begin{equation} g_\alpha(\boldsymbol{k}_1,\boldsymbol{k}_2) = N_{\alpha\alpha}(\boldsymbol{k}_1+\boldsymbol{k}_2) \frac{f_\alpha(\boldsymbol{k}_1,\boldsymbol{k}_2)}{2P_{\rm tot}(k_1) P_{\rm tot}(k_2)}\ , \label{eq:galpha} \end{equation} where $P_{\rm tot}$ is the sum of the clustering and shot noise contributions to the tracer power spectrum. The normalization is given by \begin{equation} \label{eq:nab} N_{\alpha\beta}(\vec{K})=\bigg( \int_{\vec{q}}\frac{f_{\alpha}(\vec{q}, \vec{K}-\vec{q})f_{\beta}(\vec{q}, \vec{K}-\vec{q})}{2P_{\rm tot}(q)P_{\rm tot}(|\vec{K}-\vec{q}|)} \bigg)^{-1} \end{equation} which guarantees that $N_{\alpha\alpha}$ is equal to the Gaussian part of the variance of $\hat{\Delta}_{\alpha}$. We will refer to $N_{\alpha\alpha}$ as the reconstruction noise, which incorporates cosmic variance in the reconstruction and the disconnected contribution from shot noise of the tracer field. (It should be noted that $N_{\alpha \beta}$ is not equal to the noise when $\alpha \neq \beta$.) With the weights in Eq.~\eqref{eq:galpha}, the estimator in Eq.~\eqref{eq:quad-def} becomes \begin{equation} \label{eq:quadest} \hat{\Delta}_{\alpha}(\vec{K}) = N_{\alpha \alpha}(\vec{K})\int_{\vec{q}}\frac{f_{\alpha}(\vec{q}, \vec{K}-\vec{q})}{2P_{\text{tot}}(q)P_{\text{tot}}(|\vec{K}-\vec{q}|)} \delta_{\text{g}}(\vec{q})\delta_{\text{g}}(\vec{K}-\vec{q})\ .
\end{equation} The non-Gaussian part of the variance includes a trispectrum contribution from clustering of the tracers, and further contributions from tracer shot noise. We neglect the former, because it is subdominant to the latter; our comparisons with simulations in Sec.~\ref{sec:sims} show that this is a valid approximation. Importantly, the shot noise contributions can dominate over the Gaussian reconstruction noise in many cases, because these contributions couple to large-scale modes with large variance, while the Gaussian contribution only involves small-scale modes, which have smaller variance due to the shape of the matter power spectrum. We derive the full expressions for these contributions and discuss their hierarchy further in Appendix~\ref{app:shot}. The expectation value of the estimator in Eq.~\eqref{eq:quadest}, for a given realization of the linear field at wavevector~$\vec{K}$, is \begin{equation} \left\langle \hat{\Delta}_{\alpha}(\vec{K}) \right\rangle_{\delta_1(\vec{K})\text{ fixed}} = b_1\left[c_\alpha +\sum_{\beta \neq \alpha} c_{\beta}\frac{N_{\alpha\alpha}(K)}{N_{\alpha \beta}(K)} \right] \delta_1(\vec{K})\ .\label{eq:meanfield} \end{equation} We clearly see that there is a contamination of the estimator with respect to the case of only a single mode-coupling, given by the product of the (Gaussian) noise for the estimator $\alpha$ and a sum of bias terms divided by the cross normalization between estimators $\alpha$ and $\beta$.\footnote{It can be seen from Eq.~\eqref{eq:nab} that as the overlap integral of the two mode-couplings goes to zero, $N_{\alpha \beta}$ becomes very large and the contamination vanishes.} If the goal is to just reconstruct the linear mode of interest, then it is important to account for this contribution. One can attempt to construct a so-called ``bias-hardened'' estimator by forming a linear combination of the original estimators that is free of this contamination at leading order (e.g.\ \citealt{Namikawa:2012pe,Osborne:2013nna,Foreman:2018gnv}). However, for the specific mode-couplings relevant in this situation, the high degree of correlation between the original estimators implies that the noise on the new estimator will be so high that it is no longer useful; see Appendix~\ref{app:biashardening} for details. We claim that, for extracting non-Gaussianity, this contamination can actually be useful. As we will see later, some of these contaminating terms induce scale-dependence that reproduces the $1/K^2$ scaling created by primordial non-Gaussianity. Depending on the signs of these terms, they can either raise or lower the signal to noise on $f_\text{NL}$ from the reconstructed field. We will discuss this further in Sec.~\ref{sec:noisecont}. \subsection{Non-Gaussianity and bias expansion} \label{sec:bias} As we discussed in the introduction, primordial non-Gaussianity of the local type introduces a quadratic contribution to the metric perturbation. The metric perturbation (gravitational potential) $\varphi$ is related to the linear matter overdensity through the usual Poisson equation (dropping the subscript $_{\rm G}$) \begin{equation} \varphi(\boldsymbol{k},z) = \frac{\delta_1(\boldsymbol{k},z)}{M(k,z)}\, , \label{eq:phidef} \end{equation} where the Poisson factor $M(k,z)$ is given by \begin{equation} M(k,z) = \frac{2c^2}{3H_0^2\Omega_{\rm m}} D(z) k^2 T(k)\, . \label{eq:M} \end{equation} Here, the growth factor $D(z)$ is normalized to agree with the scale factor $1/(1+z)$ during matter domination.
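For orientation, a rough numerical sketch of $M(k,z)$ in Eq.~\eqref{eq:M} might use the BBKS fitting formula as a stand-in for the transfer function $T(k)$ and the matter-domination growth factor $D(z)=1/(1+z)$; the parameter values and unit conventions below are illustrative assumptions (factors of $h$ are glossed over), and a forecast-grade implementation would take $D(z)$ and $T(k)$ from a Boltzmann code.

\begin{verbatim}
import numpy as np

c_km_s, H0, Om, h = 2.99792458e5, 67.7, 0.31, 0.677  # assumed values

def T_bbks(k):
    """BBKS fitting formula as a toy transfer function; k in h/Mpc."""
    q = k / (Om * h)                                  # crude shape parameter
    return (np.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q)**2
               + (5.46 * q)**3 + (6.71 * q)**4) ** -0.25)

def M(k, z):
    """Poisson factor of Eq. (M); unit bookkeeping is illustrative only."""
    D = 1.0 / (1.0 + z)                               # matter-domination growth
    return 2.0 * c_km_s**2 / (3.0 * H0**2 * Om) * D * k**2 * T_bbks(k)
\end{verbatim}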
Galaxies and 21~cm fluctuations of the density field are biased tracers of the underlying, dynamically dominant matter distribution. In the presence of local primordial non-Gaussianity, the coupling of the long and short modes leads to an additional modulation of the abundance of collapsed objects by the long wavelength potential fluctuations~$\varphi$. To describe biased tracers we thus follow \cite{Giannantonio:2009ak,Baldauf:2010vn} in performing a double expansion of the Eulerian galaxy (or tracer) density field in the non-linear density and linear potential\footnote{In the peak-background split formalism, the abundance of collapsed objects is given by $e^{-\nu^2/2}$ where $\nu=\delta_\text{c}/\sigma$ with $\delta_\text{c}$ the collapse threshold and $\sigma$ the variance. The long wavelength density modulates the collapse threshold as $\delta_c\to \delta_c-\delta$, whereas the metric perturbation $\varphi$ modulates the variance $\sigma\to\sigma(1+2f_\text{NL} \varphi)$.} \begin{equation} \begin{split} \delta_{\rm g}^{\rm E}(\boldsymbol{x}) =& b_{10}^{\rm E} \delta(\boldsymbol{x}) + b_{01}^{\rm E} \varphi(\boldsymbol{x}_{\rm L}[\boldsymbol{x}]) + b_{20}^{\rm E} \delta^2(\boldsymbol{x}) + b_{11}^{\rm E} \delta(\boldsymbol{x}) \varphi(\boldsymbol{x}_{\rm L}[\boldsymbol{x}]) + b_{02}^{\rm E} \varphi^2(\boldsymbol{x})\\ &+ b_{s^2}^{\rm E} s_{ij}(\boldsymbol{x}) s^{ij}(\boldsymbol{x}) + \varepsilon(\boldsymbol{x}) + \varepsilon_\delta(\boldsymbol{x}) \delta(\boldsymbol{x}) + \varepsilon_\varphi(\boldsymbol{x}) \varphi(\boldsymbol{x}_{\rm L}[\boldsymbol{x}]) + \cdots \label{eq:nongausbiasexpansion} \end{split} \end{equation} Here the $b_{ij}^{\rm E}$ are the Eulerian bias parameters\footnote{In the introductory discussion in Sec.~\ref{sec:qe}, we employed the notation $b_1\equiv b_{10}^\text{E}$ for the sake of simplicity.}, $s_{ij}$ is the tidal tensor \begin{equation} s_{ij}(\boldsymbol{x}) = \left[ \frac{\nabla_i \nabla_j}{\nabla^2} - \frac{1}{3} \delta_{ij}^{\rm (K)} \right] \delta(\boldsymbol{x})\, , \end{equation} and $\varepsilon$ is the stochasticity, which correlates with itself but not with the linear density field. In the simplest case where galaxies are a Poisson sample of the underlying matter field, the stochasticity leads to the fiducial $1/\bar n$ power spectrum. The higher order stochasticity contributions $\varepsilon_\delta \delta$ and $\varepsilon_\varphi \varphi$ lead to stochasticity contributions in the bispectrum \citep{Desjacques:2016bnm}, as we review in App.~\ref{app:shot}. In simple local-Lagrangian bias models, the tidal tensor bias $b_{s^2}^\text{E}$ can be related to the linear density bias as $b_{s^2}^\text{E}=-2/7\ \left(b_{10}^\text{E}-1\right)$ \citep{Baldauf:2012ev}. Employing realistic simplifying assumptions, we will see that all of the bias parameters $b_{ij}^\text{E}$ can be expressed in terms of $b_{10}^\text{E}$ and $b_{20}^\text{E}$. We are truncating the above expansion at second order, since we will only consider tree level power spectra and bispectra as well as the Gaussian disconnected trispectrum in our derivations. We can thus also neglect higher derivative contributions, such as $k^2 \delta_1(\vec k)$, as they are equivalent to cubic contributions to the matter and galaxy density fields. Note that all of the $\delta$ terms in Eq.~\eqref{eq:nongausbiasexpansion} refer to the underlying non-linear matter density field including its quadratic couplings.
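As an aside, the tidal term $s_{ij}s^{ij}$ entering Eq.~\eqref{eq:nongausbiasexpansion} is straightforward to evaluate on a periodic grid with FFTs. The following Python/NumPy sketch (our own illustration, not code from this paper's analysis pipeline) applies the Fourier-space operator $k_ik_j/k^2-\delta^{(\rm K)}_{ij}/3$ componentwise and sums the squares.

\begin{verbatim}
import numpy as np

def tidal_s2(delta, box=1.0):
    """s_ij s^ij for a real field on a periodic N^3 grid, via FFTs (sketch)."""
    N = delta.shape[0]
    dk = np.fft.rfftn(delta)
    kf = 2.0 * np.pi * np.fft.fftfreq(N, d=box / N)
    kr = 2.0 * np.pi * np.fft.rfftfreq(N, d=box / N)
    kx, ky, kz = np.meshgrid(kf, kf, kr, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid 0/0; zero mode removed below
    kvec = (kx, ky, kz)
    s2 = np.zeros_like(delta)
    for i in range(3):
        for j in range(3):
            op = kvec[i] * kvec[j] / k2 - (1.0 / 3.0 if i == j else 0.0)
            op[0, 0, 0] = 0.0              # discard the mean/zero mode
            sij = np.fft.irfftn(op * dk, s=delta.shape)
            s2 += sij**2
    return s2

# usage on a toy Gaussian field
rng = np.random.default_rng(0)
s2 = tidal_s2(rng.standard_normal((64, 64, 64)))
\end{verbatim}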
The potential $\varphi$, in turn, is linear, as the dependence of the halo abundance on long wavelength potential fluctuations is set up in the early Universe. There is, however, a non-linearity in the potential terms that arises from the fact that the abundance of galaxies in the peak-background split is set up in Lagrangian space with coordinates $\boldsymbol{x}_{\rm L}$. These Lagrangian positions are related to the Eulerian coordinates by $\boldsymbol{x}_{\rm L}[\boldsymbol{x}] = \boldsymbol{x} - \boldsymbol{\Psi}(\boldsymbol{x})$ at leading order. The potential is thus advected by long wavelength displacements as \citep{Tellarini:2015faa} \begin{equation} \varphi(\boldsymbol{x}_{\rm L}[\boldsymbol{x}]) = \varphi(\boldsymbol{x}) - \boldsymbol{\Psi}(\boldsymbol{x}) \cdot \boldsymbol{\nabla}\varphi(\boldsymbol{x}) + \cdots\ . \end{equation} The Fourier transform of the linear displacement field $\boldsymbol{\Psi}(\boldsymbol{x})$ is related to the linear matter overdensity by $\boldsymbol{\Psi}(\boldsymbol{k})=i(\boldsymbol{k}/k^2)\delta(\boldsymbol{k})$. At second order, the matter density field picks up a new quadratic contribution from primordial non-Gaussianity according to Eq.~\eqref{eq:localNGs}: \begin{equation} \delta(\boldsymbol{k}) = \delta_1(\boldsymbol{k}) + \int_{\boldsymbol{q}} \left[ \sum_{\alpha=\text{G,S,T}} F_\alpha(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q}) \right] \delta_1(\boldsymbol{q})\delta_1(\boldsymbol{k}-\boldsymbol{q}) + f_\text{NL} M(k) \int_{\boldsymbol{q}}\varphi(\boldsymbol{q}) \varphi(\boldsymbol{k}-\boldsymbol{q}) + \cdots, \end{equation} where the growth, shift and tidal components of the gravitational coupling kernel are given by Eq.~\eqref{eq:f2kernels}. For biased tracers, this expression gets multiplied by $b_{10}^\text{E}$. We can rewrite the last term in terms of the density field using the Poisson equation, resulting in a new quadratic coupling \begin{equation} F_{\varphi\varphi}(\vec k_1,\vec k_2)=\frac{M(|\boldsymbol{k}_1+\boldsymbol{k}_2|)}{M(k_1)M(k_2)}, \end{equation} such that \begin{equation} \delta(\boldsymbol{k}) = \delta_1(\boldsymbol{k}) + \int_{\boldsymbol{q}} \left[ \sum_{\alpha=\text{G,S,T},\varphi\varphi} c_\alpha f_\text{NL}^{p_\alpha} F_\alpha(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q}) \right] \delta_1(\boldsymbol{q})\delta_1(\boldsymbol{k}-\boldsymbol{q}) + \cdots, \end{equation} where now $c_\alpha=\left\{1,1,1,1\right\}$ and $p_\alpha=\left\{0,0,0,1\right\}$.
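This primordial coupling slots directly into the kernel sketch given in Sec.~\ref{sec:qe}, reusing the toy $M(k,z)$ stand-in defined above (again purely illustrative):

\begin{verbatim}
def F_phiphi(k1, k2, z=0.0):
    """Primordial coupling kernel F_{phi phi}, using the toy M(k, z) above."""
    K = np.linalg.norm(np.asarray(k1) + np.asarray(k2))
    return M(K, z) / (M(np.linalg.norm(k1), z) * M(np.linalg.norm(k2), z))
\end{verbatim}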
Combining this result with the Fourier transform of the other second order bias terms in Eq.~\eqref{eq:nongausbiasexpansion} yields \begin{align*} \delta_\text{g}^{\rm E}(\boldsymbol{k}) &= \left[ b_{10}^{\rm E} + \frac{b_{01}^{\rm E}}{M(k)} \right] \delta_1(\boldsymbol{k}) + b_{01}^{\rm E} \int_{\boldsymbol{q}} \frac{1}{2} \left[ \frac{\boldsymbol{q}\cdot(\boldsymbol{k}-\boldsymbol{q})}{q^2 M(|\boldsymbol{k}-\boldsymbol{q}|)} + \frac{\boldsymbol{q}\cdot(\boldsymbol{k}-\boldsymbol{q})}{|\boldsymbol{k}-\boldsymbol{q}|^2 M(q)} \right] \delta_1(\boldsymbol{q})\delta_1(\boldsymbol{k}-\boldsymbol{q}) \\ &\quad + b_{10}^{\rm E}\int_{\boldsymbol{q}} \left[ \sum_{\alpha=\text{G,S,T,}\varphi\varphi} F_\alpha(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q}) \right] \delta_1(\boldsymbol{q})\delta_1(\boldsymbol{k}-\boldsymbol{q}) \\ &\quad + f_\text{NL} b_{10}^{\rm E} \int_{\boldsymbol{q}} \frac{M(k)}{M(q)M(|\boldsymbol{k}-\boldsymbol{q}|)} \delta_1(\boldsymbol{q})\delta_1(\boldsymbol{k}-\boldsymbol{q}) + b_{20}^{\rm E} \int_{\boldsymbol{q}} \delta_1(\boldsymbol{q})\delta_1(\boldsymbol{k}-\boldsymbol{q}) \\ &\quad + b_{11}^{\rm E} \int_{\boldsymbol{q}} \frac{1}{2} \left( \frac{1}{M(q)} + \frac{1}{M(|\boldsymbol{k}-\boldsymbol{q}|)} \right) \delta_1(\boldsymbol{q})\delta_1(\boldsymbol{k}-\boldsymbol{q}) + b_{02}^{\rm E} \int_{\boldsymbol{q}} \frac{1}{M(q)M(|\boldsymbol{k}-\boldsymbol{q}|)} \delta_1(\boldsymbol{q})\delta_1(\boldsymbol{k}-\boldsymbol{q}) \\ &\quad + b_{s^2}^{\rm E} \int_{\boldsymbol{q}} \left[ \frac{[\boldsymbol{q}\cdot(\boldsymbol{k}-\boldsymbol{q})]^2}{q^2|\boldsymbol{k}-\boldsymbol{q}|^2} - \frac{1}{3} \right] \delta_1(\boldsymbol{q})\delta_1(\boldsymbol{k}-\boldsymbol{q}). \numberthis \end{align*} The additional terms arising from the non-Gaussian bias can be encoded by the new quadratic coupling kernels \begin{equation} \begin{split} F_{01}=\frac{1}{2} \boldsymbol{k}_1\cdot \boldsymbol{k}_2 \left( \frac{1}{k_2^2 M(k_1)}+\frac{1}{k_1^2 M(k_2)} \right)\, , \ \ F_{11}=\frac{1}{2} \left(\frac{1}{M(k_1)}+\frac{1}{M(k_2)} \right)\, , \ \ F_{02}=\frac{1}{M(k_1)M(k_2)}\, . \end{split} \end{equation} The Eulerian bias parameters can be related to their Lagrangian counterparts through a spherical collapse calculation \citep{Giannantonio:2009ak,Baldauf:2010vn}: \begin{align} b_{10}^{\rm E} &= b_{10}^{\rm L}+1\ , \\ b_{20}^{\rm E} &= 2(a_1+a_2)b_{10}^{\rm L} + a_1^2 b_{20}^{\rm L}\ , \\ b_{01}^{\rm E} &= b_{01}^{\rm L}\ , \\ \label{eq:b11E} b_{11}^{\rm E} &= a_1 b_{11}^{\rm L} + b_{01}^{\rm L}\ , \\ \label{eq:b02E} b_{02}^{\rm E} &= b_{02}^{\rm L}\, , \end{align} where $a_1=1$ and $a_2=-17/21$ are spherical collapse expansion factors. The non-Gaussian Lagrangian bias parameters can be obtained using the peak background split. They are given as the derivatives of the mass function with respect to the long wavelength potential fluctuations.
Assuming a universal mass function, the derivatives with respect to the potential can be related to the derivatives with respect to the long-wavelength density, and consequently the bias parameters of the potential terms can be related to the bias parameters of the density terms:
\begin{align} \label{eq:b01Lfinal} b_{01}^{\rm L} &= 2f_\text{NL} \delta_{\rm c} \left( b_{10}^{\rm E} - 1 \right)\ , \\ \label{eq:b11Lfinal} b_{11}^{\rm L} &= 2f_\text{NL} \left( \delta_{\rm c} \left[ \frac{b_{20}^{\rm E} - 2(a_1+a_2) \left( b_{10}^{\rm E}-1 \right)}{a_1^2} \right] - \left[ b_{10}^{\rm E} - 1 \right] \right)\ , \\ \label{eq:b02Lfinal} b_{02}^{\rm L} &= 4 f_\text{NL}^2 \delta_{\rm c} \left( \delta_{\rm c} \left[ \frac{b_{20}^{\rm E} - 2(a_1+a_2) \left( b_{10}^{\rm E}-1 \right)}{a_1^2} \right] - 2 \left[ b_{10}^{\rm E} - 1 \right] \right)\, , \end{align}
where $\delta_\text{c}$ is the spherical collapse threshold. Note that small deviations from this simple scaling of non-Gaussian bias $b_{01}^{\rm L}$ with Gaussian bias $b_{10}^{\rm E}$ have been found in simulations \citep{Biagetti:2016ywx} and seem to depend on the way halos are identified.
\begin{table} \begin{centering} \renewcommand{\arraystretch}{1.7} \begin{tabular}{ c|c|c|c } Mode Coupling ($\alpha$) & $p_\alpha$ & $c_{\alpha}$ & $F_{\alpha}(\boldsymbol{k}_1, \boldsymbol{k}_2)$ \\ \hline \hline G & 0 & $b_1+\frac{21}{17} b_2$ & $\frac{17}{21}$ \\ S & 0 & $b_1$ & $\frac{1}{2}\left[\frac{1}{k_1^2}+\frac{1}{k_2^2}\right](\boldsymbol{k}_1\cdot\boldsymbol{k}_2)$ \\ T & 0 & $b_1 + \frac{7}{2}b_{s^2}$ & $\frac{2}{7} \left[ \frac{(\boldsymbol{k}_1\cdot \boldsymbol{k}_2)^2}{k_1^2 k_2^2} -\frac{1}{3} \right]$ \\ $\varphi\varphi$ & 1 & $b_1$ & $\frac{M(|\boldsymbol{k}_1+\boldsymbol{k}_2|)}{M(k_1)M(k_2)}$ \\ $01$ & 1 & $2\delta_c(b_{1}-1)$ & $\frac{1}{2} \boldsymbol{k}_1\cdot \boldsymbol{k}_2 \left( \frac{1}{k_2^2 M(k_1)} +\frac{1}{k_1^2 M(k_2)} \right)$ \\ $11$ & 1 & $2 \left( \delta_{\rm c} \left[ \frac{b_{2} - 2(a_1+a_2) \left( b_{1} -1 \right)}{a_1} \right] - a_1 \left[ b_{1} - 1 \right] \right) + 2\delta_{\rm c} \left( b_{1} - 1 \right)$ & $\frac{1}{2} \left(\frac{1}{M(k_1)}+\frac{1}{M(k_2)} \right)$ \\ $02$ & 2 & $4 \delta_{\rm c} \left( \delta_{\rm c} \left[ \frac{b_2 - 2(a_1+a_2) \left( b_1-1 \right)}{a_1^2} \right] - 2 \left[ b_1 - 1 \right] \right)$ & $\frac{1}{M(k_1)M(k_2)}$ \\ \end{tabular} \caption{ \label{tab:modecouplings} Mode couplings, $f_\text{NL}$ exponents, bias parameters and coupling kernels of the quadratic interactions for Eq.~\eqref{eq:deltag-condensed}. } \end{centering} \end{table}
In summary, we can write for the galaxy density field up to second order in the presence of local-type primordial non-Gaussianity:
\begin{equation} \delta_{\rm g}(\boldsymbol{k}) = \left[ b_{10}^\text{E} + f_\text{NL} \frac{c_{01}}{M(k)} \right] \delta_1(\boldsymbol{k}) + \int_{\boldsymbol{q}} \left[ \sum_{\alpha} c_\alpha f_\text{NL}^{p_\alpha} F_\alpha(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q}) \right] \delta_1(\boldsymbol{q}) \delta_1(\boldsymbol{k}-\boldsymbol{q})\ , \label{eq:deltag-condensed} \end{equation}
where $\alpha$ now runs over $\{ \text{G}, \text{S}, \text{T}, \varphi\varphi, 01, 11, 02 \}$ with the couplings given in Table~\ref{tab:modecouplings}. In this table, Eq.~\eqref{eq:deltag-condensed}, and throughout the rest of the paper, we have simplified the notation to $b_1 \equiv b_{10}^{\text{E}}$, $b_2 \equiv b_{20}^{\text{E}}$, and $b_{s^2} \equiv b_{s^2}^{\text{E}}$.
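To make the content of Table~\ref{tab:modecouplings} concrete, the sketch below (continuing the conventions of the previous snippet, with $M(k)$ again supplied by the user) collects the coefficients $c_\alpha$ and kernels $F_\alpha(\boldsymbol{k}_1,\boldsymbol{k}_2)$ as functions of $(b_1, b_2, b_{s^2})$. The numerical value $\delta_{\rm c}=1.686$ is the standard spherical collapse threshold, inserted here as an assumption.
\begin{verbatim}
import numpy as np

a1, a2 = 1.0, -17.0/21.0   # spherical collapse expansion factors
delta_c = 1.686            # spherical collapse threshold (assumed value)

def c_alpha(b1, b2, bs2):
    # Coefficients c_alpha of Table 1 in terms of the bias parameters.
    b11L = delta_c*(b2 - 2*(a1 + a2)*(b1 - 1))/a1 - a1*(b1 - 1)
    return {"G":      b1 + (21.0/17.0)*b2,
            "S":      b1,
            "T":      b1 + 3.5*bs2,
            "phiphi": b1,
            "01":     2*delta_c*(b1 - 1),
            "11":     2*b11L + 2*delta_c*(b1 - 1),
            "02":     4*delta_c*(delta_c*(b2 - 2*(a1 + a2)*(b1 - 1))/a1**2
                                 - 2*(b1 - 1))}

def F_alpha(k1v, k2v, M):
    # Coupling kernels F_alpha(k1, k2) of Table 1; M(k) as before.
    k1v, k2v = np.asarray(k1v), np.asarray(k2v)
    k1, k2 = np.linalg.norm(k1v), np.linalg.norm(k2v)
    k12 = np.linalg.norm(k1v + k2v)
    mu = np.dot(k1v, k2v)/(k1*k2)
    return {"G":      17.0/21.0,
            "S":      0.5*mu*(k1/k2 + k2/k1),
            "T":      (2.0/7.0)*(mu**2 - 1.0/3.0),
            "phiphi": M(k12)/(M(k1)*M(k2)),
            "01":     0.5*np.dot(k1v, k2v)*(1/(k2**2*M(k1))
                                            + 1/(k1**2*M(k2))),
            "11":     0.5*(1/M(k1) + 1/M(k2)),
            "02":     1.0/(M(k1)*M(k2))}
\end{verbatim}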
Note that we have not included mode-couplings due to lensing, which are expected to be a subdominant contribution that is somewhat degenerate with the S term~\cite{Foreman:2018gnv}, nor have we incorporated redshift space distortions or anisotropic selection effects (see Sec.~\ref{sec:rsd} for discussion).
\subsection{Reconstruction noise and contamination} \label{sec:noisecont}
With this formalism in place, we can now examine the noise of the reconstructed modes, and the contamination arising from the presence of multiple mode-couplings in the tracer field used for reconstruction.\footnote{For producing matter power spectra for forecasts, we relied on the nbodykit code (\url{https://github.com/bccp/nbodykit}).} We will show these quantities for a DESI-like survey (with specifications given in Sec.~\ref{sec:config}), but we have checked that the conclusions we draw from this case also apply to the other surveys we consider.
\begin{figure}[t] \centering \includegraphics[width=0.5\textwidth, trim = 10 10 10 10 ]{figures_new/noises.pdf} \caption{ \label{fig:noises} Reconstruction noise power spectra for estimators that use each of the quadratic mode-couplings discussed in Sec.~\ref{sec:bias}. We omit curves for the $c_{01}$ and $c_{02}$ estimators, which lie above the upper limit of the plot. The G (``growth") estimator has the lowest noise by far. These curves are computed for a DESI-like survey, but the hierarchy between them is unchanged for the other surveys we consider. The signal to noise on reconstructed modes (not shown) is likewise much higher for the G estimator than for S or T, justifying our use of the G estimator for our main results. } \end{figure}
Fig.~\ref{fig:noises} shows the reconstruction noise power spectrum corresponding to estimators that use each of the quadratic mode-couplings discussed in Sec.~\ref{sec:bias}. We see that the ``growth" estimator has the lowest noise by far. We compare the predicted noise for the G, S, and T estimators with results from $N$-body simulations in Sec.~\ref{sec:sims} (among other tests), finding good agreement. Thus, we use the growth estimator in our forecasts for reconstruction\footnote{ Out of the G, S, and T estimators, the G estimator yields both the lowest noise and the highest signal to noise on reconstructed modes. However, some of the other estimators (e.g.\ $\alpha=\varphi\varphi$) also have signal to noise approaching that of the G estimator, since the contaminating terms in Eq.~\eqref{eq:meanfield} act as ``signal" in a signal to noise computation. This indicates that a more optimal choice of estimator weights may be possible, although we leave this to future work. }, henceforth referring to reconstructed modes as $\delta_{\rm r}(\boldsymbol{K})$ instead of $\hat{\Delta}_{\rm G}(\boldsymbol{K})$. However, as we discussed in Sec.~\ref{sec:qe}, the output of the G estimator (or any other single estimator) will be contaminated by the other mode-couplings, with the specific contamination given by Eq.~\eqref{eq:meanfield}, and we must incorporate this contamination into our forecasts.
\begin{figure}[t] \centering \includegraphics[width=\textwidth, trim = 10 10 10 10 ]{figures_new/cont.pdf} \caption{ \label{fig:contamination} {\it Left:} Contamination in the expectation value of the G estimator, corresponding to separate multiplicative biases on the amplitude of a reconstructed mode, computed for a DESI-like survey. The solid blue line shows the estimator's growth bias for comparison. Dashed lines indicate negative values.
Several of these curves inherit the $k^{-2}$ scaling of the scale-dependent bias in $\delta_{\rm g}$ arising from nonzero $f_\text{NL}$, implying that reconstructed modes can be used to constrain $f_\text{NL}$ in the same way. {\it Right:} Ratio of scale-dependent bias from $f_\text{NL}$ (for a fiducial value of $f_\text{NL}=1$) to total bias for $\delta_{\rm g}$ (solid) and $\delta_{\rm r}$ (dot-dashed). Local primordial non-Gaussianity has roughly the same relative contribution to the bias of $\delta_{\rm g}$ or reconstructed modes. } \end{figure}
We show this contamination in the left panel of Fig.~\ref{fig:contamination}, in the form of each term $c_\beta N_{\rm GG}/N_{\rm G\beta}$ in the square brackets of Eq.~\eqref{eq:meanfield}. These curves each represent separate multiplicative biases on the amplitude of a reconstructed mode. Those arising from late-time gravitational evolution (S, T) or from advection of the primordial potential ($c_{01}$) are white in~$K$. In contrast, those arising from couplings between $\delta$ and $\varphi$ ($c_{11}$) or $\varphi$ and itself ($\varphi\varphi$, $c_{02}$) scale like $M(K)^{-1} \propto K^{-2}$. We derive these scalings analytically in the large-scale limit in Appendix~\ref{app:meanfield}. Importantly, all terms that scale like~$K^{-2}$ involve $f_\text{NL}$, such that, as for $\delta_{\rm g}$, low-$K$ scale-dependent bias in the reconstructed modes can be used as a probe of local primordial non-Gaussianity. The right panel of Fig.~\ref{fig:contamination} shows that the relative size of this scale-dependent bias is comparable for $\delta_{\rm g}$ and $\delta_{\rm r}$, reaching $\mathcal{O}(10\%)$ at $K\sim 0.001h\, {\rm Mpc}^{-1}\,$, assuming $f_\text{NL}=1$.
Fig.~\ref{fig:contamination} also shows that the contamination from other mode-couplings is subdominant to the intrinsic bias on the reconstructed field (i.e.\ the $c_{\rm G}$ term in Eq.~\eqref{eq:meanfield}). Thus, using $c_{\rm G}=b_1+(21/17)b_2$ from Table~\ref{tab:modecouplings}, we can derive the rough dependence of $P_{\rm rr}$ and $P_{\rm gr}$ on $b_1$ and $b_2$:
\begin{equation} P_{\rm rr} \propto b_1^2 (b_1+b_2)^2\ , \quad P_{\rm gr} \propto b_1^2 (b_1+b_2)\ . \end{equation}
If galaxy shot noise is negligible compared to $P_{\rm gg}$, then the reconstruction noise $N_{\rm GG}$ satisfies $N_{\rm GG} \propto b_1^4$, implying that $P_{\rm rr} / N_{\rm GG} \propto (1+b_2/b_1)^2$ in this regime. This scaling will be useful to help understand the behavior of our forecasts when we change the fiducial value of $b_2$.
\input{Sections/simulations_appendix.tex}
\section{Forecasts} \label{sec:forecasts}
\subsection{Fisher matrix setup }
To perform a Fisher forecast, we make the usual assumption that the measured tracer overdensity $\delta_{\rm g}$ and the reconstructed field $\delta_{\rm r}$ obey a Gaussian likelihood. For the matter and galaxy fields, this approximation is partially justified by the fact that we are analyzing very large scales; for the reconstruction noise, it is partially justified by the fact that the reconstruction sums over a large number of mode pairs, so that to some extent the central limit theorem applies (although the pairs may not all be independent). Figure~\ref{fig:pdfsdh} supplies additional evidence that a simple Fisher forecast is sufficient, in that the PDFs of the density field (smoothed to correspond with our analysis range) do not greatly deviate from a Gaussian.
This indicates that the influence of higher moments of the density field and noise is comparatively small for the purposes of a forecast. Making this approximation and including the fact that $\delta_{\rm g}$ has zero statistical mean, the Fisher matrix per mode $\vec{K}$ and redshift $z$ is given by (e.g.\ \citealt{Tegmark:1996bz})
\begin{equation} \tilde{F}_{\rm{a}\rm{b}}(\vec{K}, z) = \frac{1}{2}\text{Tr} \Big[ \partial_{\rm{a}} C(\vec{K}, z) C^{-1}(\vec{K}, z) \partial_{\rm{b}} C(\vec{K}, z) C^{-1}(\vec{K}, z)\Big], \label{eq:fisherpermodegeneral} \end{equation}
where $C$ is the total (signal plus noise) covariance matrix for our data vector $\vec{d}(\vec{K})=\left(\delta_{\text{g}}(\vec{K}), \delta_{\text{r}}(\vec{K})\right)^{\text{T}}$, $\text{Tr}$ denotes the matrix trace, $\partial_{\rm{a}}C(\vec{K}, z)\equiv \frac{\partial}{\partial{\rm{a}}}C(\vec{K}, z)$, and $\rm{a}, \rm{b}$ are the parameters on which our quantities depend (in this case, $f_{\text{NL}}$ and bias parameters). If the data vector is drawn from a Gaussian distribution and nothing is known about the parameters a priori, then the inverse of the Fisher matrix gives the covariance matrix of the parameters, and the square roots of the diagonal elements of $F^{-1}$ give the errorbars on the parameters, representing the minimum errors achievable. Our goal is to calculate this minimum error, as it will determine our best ability to constrain parameters. In reality, we do not measure just a single mode but several modes, whose information can be combined into an integrated Fisher matrix for a specific redshift bin, i.e.
\begin{equation} \label{eq:integratedfishermatrix} F_{\rm{a}\rm{b}}(z) = \frac{V}{(2\pi)^2} \int_{K_{\text{min}}}^{K_{\text{max}}} \text{d}K \int_{-1}^{1} \text{d}\mu \, K^2 \tilde{F}_{\rm{a}\rm{b}}(K, \mu, z). \end{equation}
Here $V$ is the survey volume, $K_{\text{min}}$ and $K_{\text{max}}$ are the minimum and maximum moduli of the modes probed, and we have already integrated over the azimuthal angle, assuming that the integrand does not depend on it. For our specific case, the original field $\delta_{\rm g}$ and the reconstructed field $\delta_{\rm r}$ give the total covariance matrix (which only depends on the magnitude of $\boldsymbol{K}$)
\begin{equation} C(K, z)=\left[ \begin{array}{cc} C^{\text{gg}}(K, z) & C^{\text{gr}}(K, z)\\ C^{\text{gr}}(K, z) & C^{\text{rr}}(K, z) \end{array} \right], \label{eq:covmatrix} \end{equation}
with elements
\begin{align} \label{eq:cgg} C^{\text{gg}}(K, z) &= \left( b_1(z) + \frac{c_{01} f_\text{NL}}{M(K,z)} D(z) \right)^2 P_\text{lin}(K, z) + P_{\text{gg,shot}}(K,z)\ , \\ \label{eq:cgr} C^{\text{gr}}(K, z) &= \left( b_1(z) + \frac{c_{01} f_\text{NL}}{M(K,z)} D(z) \right) b_{\rm r}(K,z) P_\text{lin}(K, z) + P_{\text{gr,shot}}(K,z)\ , \\ \label{eq:crr} C^{\text{rr}}(K, z) &= b_{\rm r}(K,z)^2 P_\text{lin}(K, z) + N_{\rm GG}(K,z) + P_{\text{rr,shot}}(K,z)\ , \end{align}
where
\begin{equation} b_{\rm r} \equiv b_1 \left( c_{\rm G} + \sum_{\beta \neq {\rm G}} c_{\beta} \frac{N_{\rm GG}}{N_{\text{G} \beta}} \right), \end{equation}
and the sum runs over the mode-couplings found in Table~\ref{tab:modecouplings}. We do not include redshift space distortions in these expressions; see Sec.~\ref{sec:rsd} for discussion.
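As an illustration of Eq.~\eqref{eq:fisherpermodegeneral}, the following sketch computes the per-mode Fisher matrix for a generic parameter vector. The function \texttt{cov}, which should return the $2\times2$ covariance of Eqs.~\eqref{eq:cgg}-\eqref{eq:crr} at a given $(K,z)$, is assumed to be supplied by the user, and finite-difference derivatives are used purely for simplicity; for a single parameter, the result reduces to the one-parameter expression given below.
\begin{verbatim}
import numpy as np

def fisher_per_mode(cov, theta, step=None):
    # Per-mode Fisher matrix: F_ab = (1/2) Tr[dC_a C^-1 dC_b C^-1],
    # where cov(theta) returns the 2x2 covariance of (delta_g, delta_r)
    # at fixed (K, z), and derivatives are central finite differences.
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    if step is None:
        step = 1e-4*(1.0 + np.abs(theta))
    Cinv = np.linalg.inv(cov(theta))
    dC = []
    for a in range(n):
        tp, tm = theta.copy(), theta.copy()
        tp[a] += step[a]
        tm[a] -= step[a]
        dC.append((cov(tp) - cov(tm))/(2.0*step[a]))
    F = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            F[a, b] = 0.5*np.trace(dC[a] @ Cinv @ dC[b] @ Cinv)
    return F
\end{verbatim}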
The tracer shot noise is simply
\begin{equation} P_\text{gg,shot}(K,z) = \frac{1}{\bar{n}(z)}\, , \label{eq:pggshot} \end{equation}
where $\bar{n}$ is the comoving number density of observed tracers, while $P_{\text{rr,shot}}$ and $P_{\text{gr,shot}}$ are given in Eqs.~\eqref{eq:nrrshot}-\eqref{eq:prrshot-appendix} and \eqref{eq:nrtshot}-\eqref{eq:pgrshot-appendix} respectively. We will neglect the dependence of the reconstruction shot noise on $f_\text{NL}$. This is because in general these shot noise terms involve the small-scale tracer power spectrum, whose response to a change in $f_\text{NL}$ is negligible compared to the response of the large-scale power spectrum. Moreover, even when the large-scale tracer power spectrum enters the reconstruction shot noise, as in $P_{\rm rr,shot}$, where there is a coupling between large and small scales as we explain in Appendix \ref{app:shot}, a small change from our fiducial value $f_{\rm{NL}}=0$ is barely detectable. In principle, it may be possible to extract additional information from the $f_\text{NL}$-dependence of the shot noise contributions, but this will likely be difficult in practice, and therefore we conservatively choose not to consider these contributions as observables.
Substituting Eq.~\eqref{eq:covmatrix} into Eq.~\eqref{eq:fisherpermodegeneral}, we can derive an explicit formula for the Fisher matrix per mode for our case, which can then be inserted into Eq.~\eqref{eq:integratedfishermatrix}:
\begin{align*} \tilde{F}_{\rm{a}\rm{b}} &= \frac{1}{2} \left( \frac{1}{C^{\text{rr}}C^{\text{gg}}(1-r_{cc}^2)} \right)^2 \left[ C^{\text{gg}} \left\{ \partial_{\rm{b}}C^{\text{gr}}\Big(-C^{\text{gr}}\partial_{\rm{a}}C^{\text{rr}}+C^{\text{rr}}\partial_{\rm{a}}C^{\text{gr}}\Big)+\partial_{\rm{b}}C^{\text{rr}}\Big(C^{\text{gg}}\partial_{\rm{a}}C^{\text{rr}}-C^{\text{gr}}\partial_{\rm{a}}C^{\text{gr}}\Big) \right\} \right. \\ &\qquad\qquad\qquad\qquad\qquad\qquad -C^{\text{gr}} \left\{ \partial_{\rm{b}}C^{\text{gg}}\Big(-C^{\text{gr}}\partial_{\rm{a}}C^{\text{rr}}+C^{\text{rr}}\partial_{\rm{a}}C^{\text{gr}}\Big)+\partial_{\rm{b}}C^{\text{gr}}\Big(C^{\text{gg}}\partial_{\rm{a}}C^{\text{rr}}-C^{\text{gr}}\partial_{\rm{a}}C^{\text{gr}}\Big) \right\} \\ &\qquad\qquad\qquad\qquad\qquad\qquad -C^{\text{gr}}\left\{ \partial_{\rm{b}}C^{\text{gr}}\Big(-C^{\text{gr}}\partial_{\rm{a}}C^{\text{gr}}+C^{\text{rr}}\partial_{\rm{a}}C^{\text{gg}}\Big)+\partial_{\rm{b}}C^{\text{rr}}\Big(C^{\text{gg}}\partial_{\rm{a}}C^{\text{gr}}-C^{\text{gr}}\partial_{\rm{a}}C^{\text{gg}}\Big) \right\} \\ &\qquad\qquad\qquad\qquad\qquad\qquad + \left. C^{\text{rr}} \left\{ \partial_{\rm{b}}C^{\text{gg}}\Big(-C^{\text{gr}}\partial_{\rm{a}}C^{\text{gr}}+C^{\text{rr}}\partial_{\rm{a}}C^{\text{gg}}\Big)+\partial_{\rm{b}}C^{\text{gr}}\Big(C^{\text{gg}}\partial_{\rm{a}}C^{\text{gr}}-C^{\text{gr}}\partial_{\rm{a}}C^{\text{gg}}\Big) \right\} \right] \numberthis \label{eq:FisherMat}, \end{align*}
where $r_{cc}$ is the g-r cross correlation coefficient:
\begin{equation} r_{cc} \equiv \frac{C^{\text{gr}}}{\sqrt{C^{\text{gg}}C^{\text{rr}}}} \ . \end{equation}
For ${\rm a}={\rm b}$, we obtain
\begin{align*} \tilde{F}_{\rm{a}\rm{a}} &= \frac{1}{2(1-r_{cc}^2)^2} \left[\left(\frac{\partial_{\rm{a}}C^{\text{gg}}}{C^{\text{gg}}}-2r_{cc}^2\frac{\partial_{\rm{a}}C^{\text{gr}}}{C^{\text{gr}}}\right)^2 +2r_{cc}^2\left(1-r_{cc}^2\right)\left(\frac{\partial_{\rm{a}}C^{\text{gr}}}{C^{\text{gr}}}\right)^2 \right. \\ &\qquad\qquad\qquad\quad\left.
+\, 2r_{cc}^2\frac{\partial_{\rm{a}} C^{\text{rr}}}{C^{\text{rr}}}\left(\frac{\partial_{\rm{a}}C^{\text{gg}}}{C^{\text{gg}}}-2\frac{\partial_{\rm{a}}C^{\text{gr}}}{C^{\text{gr}}}\right)+\left(\frac{\partial_{\rm{a}}C^{\text{rr}}}{C^{\text{rr}}}\right)^2\right] \ . \numberthis \label{eq:fisheronepar} \end{align*}
On the other hand, if we only use $\delta_{\rm g}$, we get
\begin{equation} \tilde{F}_{\rm{a}\rm{a}}^\text{(g only)} = \frac{1}{2} \left( \frac{\partial_{\rm{a}} C^{\rm gg}}{C^{\rm gg}} \right)^2\ . \end{equation}
\subsection{Analytical derivation of cosmic variance cancellation} \label{sec:lowshotlimit}
Cosmic variance cancellation will occur in the limit of low noise on the measured fields -- that is, low reconstruction noise on the quadratic estimator, and low galaxy shot noise. To investigate this case analytically, let us work in the limit of very low shot noise, so that
\begin{align*} C^{\rm gg}(K) &= b_{\rm g}(K)^2 P_{\rm lin}(K)\ , \\ C^{\rm gr}(K) &= b_{\rm g}(K) b_{\rm r}(K) P_{\rm lin}(K)\ , \\ C^{\rm rr}(K) &= b_{\rm r}(K)^2 P_{\rm lin}(K) + N_{\rm GG}(K)\ . \numberthis \end{align*}
Further, let us assume that $f_\text{NL}$ is the only unknown parameter. If we define
\begin{equation} x(K) \equiv \frac{N_{\rm GG}(K)}{b_{\rm r}(K)^2 P_{\rm lin}(K)}\ , \qquad R_p(K) \equiv \left( \frac{\partial_{f_\text{NL}} b_{\rm r}(K)}{b_{\rm r}(K)} \right) \left( \frac{\partial_{f_\text{NL}} b_{\rm g}(K)}{b_{\rm g}(K)} \right)^{-1}\ , \end{equation}
where $x(K)$ is the inverse signal to noise power ratio per mode of the reconstructed field and $R_p(K)$ is a measure of similarity between the response to $f_\text{NL}$ of the bias of the reconstructed field and that of the original tracer field, then a short calculation gives the unmarginalized errorbar on $f_\text{NL}$ per $K$-mode:
\begin{equation} \label{eq:lownoiseerror} \sigma^2_{f_\text{NL}}(K)=\sigma_{f_\text{NL},\text{ g only}}^2(K) \frac{2x(K)}{\left( R_p(K)-1 \right)^2} \frac{1}{1+ 2\left( R_p(K)-1 \right)^{-2} x(K)}\ , \end{equation}
where $\sigma_{f_\text{NL},\text{ g only}} = [\tilde{F}_{\rm{a}\rm{a}}^\text{(g only)}]^{-1/2}$. Let us investigate the general behavior of this equation in some limiting cases. If $\left( R_p-1 \right)^{-2}x$ is small, the $R_p<0$ case (e.g.\ when $\partial_{f_\text{NL}}b_{\rm r}$ and $b_{\rm r}$ have opposite signs) will result in smaller errorbars than the $R_p>0$ case, because the signatures of $f_\text{NL}$ in $b_{\rm r}$ and $b_{\rm g}$ will be more distinguishable in that case. Expanding Eq.~\eqref{eq:lownoiseerror} in the limit of small $\left( R_p-1 \right)^{-2}x$ gives
\begin{equation} \sigma^2_{f_\text{NL}}(K)=\sigma_{f_\text{NL},\text{ g only}}^2(K) \frac{2x(K)}{\left( R_p(K)-1 \right)^2} \sum_{n=0}^\infty \left[ - 2\left( R_p(K)-1 \right)^{-2} x(K) \right]^n\ . \end{equation}
As we will see in Appendix~\ref{app:meanfield}, $N_{\rm GG} \propto k_{\rm max}^{-3}$ in the low-$K$ limit, so that we arrive at
\begin{equation} \lim_{x\rightarrow 0} \sigma^2_{f_\text{NL}}(K) \propto 2 \sigma_{f_\text{NL},\text{ g only}}^2(K) \left[ k_{\rm max}^{-3} + \mathcal{O}(k_{\rm max}^{-6}) \right]\ , \label{eq:lownoiseapproximated} \end{equation}
where we assume that $R_p-1$ varies slowly with $K$. This demonstrates that constraints on $f_\text{NL}$ that use both reconstructed modes and modes of the original tracer will improve on a tracer-only analysis in a way that is only limited by the noise on the reconstructed modes (if shot noise is negligible).
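Since Eq.~\eqref{eq:lownoiseerror} is compact, it can be transcribed directly; the following sketch (under the same assumptions as the snippets above) returns the per-mode variance on $f_\text{NL}$ given $x(K)$ and $R_p(K)$, and makes both limits explicit: the error vanishes as $x\to 0$, while it reverts to the tracer-only value as $x\to\infty$.
\begin{verbatim}
def sigma2_fnl_per_mode(sigma2_g_only, x, Rp):
    # Eq. (lownoiseerror): with y = 2x/(Rp - 1)^2, the g+r variance is
    # sigma2_g_only * y/(1 + y), which -> 0 as x -> 0 (full cosmic
    # variance cancellation) and -> sigma2_g_only as x -> infinity.
    y = 2.0*x/(Rp - 1.0)**2
    return sigma2_g_only * y/(1.0 + y)
\end{verbatim}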
Cosmic variance cancellation clearly requires that $\delta_{\rm r}$ and $\delta_{\rm g}$ be measured at the same wavenumber and in the same volume. To see this explicitly, we can repeat the derivation above with $C^{\rm gr}=0$, corresponding to $\delta_{\rm r}$ and $\delta_{\rm g}$ being measured in different volumes. In this case, Eq.~\eqref{eq:lownoiseerror} becomes
\begin{equation} \sigma^2_{f_\text{NL}}(K)=\sigma_{f_\text{NL},\text{ g-only}}^2(K) \frac{\left[ 1+x(K) \right]^2}{ R_p(K)^2 + \left[ 1+x(K) \right]^2}\ , \end{equation}
which approaches a finite limit as $x\to 0$; thus, the improvement realized in Eq.~\eqref{eq:lownoiseapproximated} is only possible if $\delta_{\rm r}$ and $\delta_{\rm g}$ can be compared mode-by-mode in the same volume.
\subsection{Assumptions and experimental configurations} \label{sec:config}
\subsubsection{Scales} \label{sec:scales}
In each forecast, for measuring $f_\text{NL}$, we use $\delta_{\rm g}$ modes and reconstructed modes with wavenumber $K$ satisfying $K_{\rm min} < K < K_{\rm max}$, and we also use reconstructed modes with $K_{\rm f} < K < K_{\rm min}$, where $K_{\rm f} \approx 0.002h\, {\rm Mpc}^{-1}\,$ is the lowest measurable wavenumber within each survey volume. In this way, $K_{\rm min}$ accounts for possible systematic effects that can prevent direct measurements of $\delta_{\rm g}$ on large scales, but that do not impede reconstruction of these large-scale modes using smaller-scale correlations; an example is foreground contamination for intensity mapping experiments, which has been a primary motivator for other work on reconstruction methods \cite{Zhu:2015zlh,Zhu:2016esh,Foreman:2018gnv,Li:2018izh,Karacayli:2019iyd,Modi:2019hnu}. As input to the density-field reconstruction, we use modes with wavenumber $k$ satisfying $K_{\rm max} <k<k_{\rm max}$. We consider a range of possible $K_{\rm min}$ values in our forecasts, while $k_{\rm max}$ and $K_{\rm max}$ are fixed for each survey, as described below.
\subsubsection{Surveys}
\begin{table} \begin{centering} \begin{tabular}{ l | c | c c | c c } & DESI-like & \multicolumn{2}{c|}{MegaMapper-like} & \multicolumn{2}{c}{PUMA-like} \\ & $0.6<z<1.6$ & $2<z<2.5$ & $4.5<z<5$ & $2<z<3$ & $5<z<6$ \\ \hline {\bf Survey parameters} & & & & & \\ Survey volume (Gpc$^3$) & $100$ & $80$ & $66$ & $266$ & $203$ \\ Mean galaxy density $\bar{n}$ (Mpc$^{-3}$) & $10^{-4}$ & $6\times10^{-4}$ & $2\times10^{-5}$ & $2\times10^{-3}$ ($6\times10^{-3}$) & $1\times10^{-3}$ ($2\times10^{-2}$) \\ $K_\text{max}$ for $f_\text{NL}$ constraint ($h \,\text{Mpc}^{-1}$) & $0.05$ & $0.08$ & $0.14$ & $0.09$ & $0.15$ \\ $k_\text{max}$ for reconstruction ($h \,\text{Mpc}^{-1}$) & $0.15$ & $0.24$ & $0.4$ & $0.26$ & $0.47$ \\ \hline {\bf Fiducial bias parameters} & & & & & \\ $b_1$ & $1.6$ & $2.9$ & $7.0$ & $2.1$ & $3.7$ \\ $b_2$ & $-0.30$ & $1.1$ & $17$ & $0.041$ & $2.8$\\ $b_{s^2}$ & $-0.17$ & $-0.54$ & $-1.7$ & $-0.31$ & $-0.77$\\ $b_{11}^{\rm E}$ & $-3.0$ & $-2.5$ & $37$ & $-3.5$ & $0.58$\\ $b_{02}^{\rm E}$ & $-14$ & $-21$ & $85$ & $-19$ & $-16$\\ \hline \end{tabular} \caption{\label{tab:surveys} Survey characteristics used for our main forecasts. The DESI-like survey is based on the expected DESI emission-line galaxy sample, the MegaMapper-like survey is a next-generation survey targeting high-redshift ``dropout" galaxies, and the PUMA-like survey represents a future \tcm intensity mapping effort over half the sky.
We marginalize over $b_1$, $b_2$, and $b_{s^2}$ in our forecasts, and determine $b_{11}^{\rm E}$ and $b_{02}^{\rm E}$ using the relationships in Sec.~\ref{sec:bias}. For the PUMA-like forecast, the main $\bar{n}$ values represent effective number densities that reproduce the same noise level as the sum of shot and instrumental noise power at $k=k_{\rm max}$, while the expected physical number densities are shown in parentheses. For this forecast, we also consider the effects of the so-called ``foreground wedge" that will prevent direct measurement of certain modes. See main text for details. } \end{centering} \end{table} In our main forecasts, we consider three galaxy surveys, with properties summarized in Table~\ref{tab:surveys}. The first is similar to the emission-line galaxy sample expected from DESI \citep{Aghamousa:2016zmz}. For this survey, following \cite{Munchmeyer:2018eey}, we consider $14000\,{\rm deg}^2$ of sky area over $0.6<z<1.6$, which translates into a total comoving volume of roughly $100\,{\rm Gpc}^3$ and a mean redshift of $\bar{z} \approx 1$. We use a mean galaxy number density of $\bar{n}=10^{-4}\,{\rm Mpc}^{-3}$, obtained by dividing the expected total number of redshifts in the DESI ELG sample ($1.7\times10^7$, from \citealt{Aghamousa:2016zmz}) by the survey volume, and assume a mean linear galaxy bias of $b_1=1.6$. We take $K_{\rm max}=0.05h\, {\rm Mpc}^{-1}\,$, since linear bias is expected to be an acceptable approximation for $K<K_{\rm max}$ at $z=1$, and $k_{\rm max}=0.15h\, {\rm Mpc}^{-1}\,$, since our quadratic bias expansion is valid for $k<k_{\rm max}$ at $z=1$ (see Sec.~\ref{sec:sims} for justification based on simulations). The second survey, which we call ``MegaMapper-like", is modelled on proposals for a next-generation spectroscopic survey targeting high-redshift ``dropout" galaxies in the southern hemisphere \citep{Wilson:2019brt,Ferraro:2019uce,Schlegel:2019eqc}. For this, we assume a $14000\,{\rm deg}^2$ survey, and separately consider two redshift bins, at $2<z<2.5$ and $4.5<z<5$, which have volumes of $80\,{\rm Gpc}^3$ and $66\,{\rm Gpc}^3$ respectively. The mean number density and linear bias in each bin are obtained from averages of the values at the bin edges, taken from Table 1 of \cite{Ferraro:2019uce}; this yields $\bar{n}=6\times10^{-4}\,{\rm Mpc}^{-3}$ and $b_1=2.9$ for the lower-redshift bin, and $\bar{n}=2\times10^{-5}\,{\rm Mpc}^{-3}$ and $b_1=7.0$ for the higher-redshift bin. For $K_{\rm max}$ and $k_{\rm max}$, we scale the DESI values using the ratio of linear growth factors between the mean redshifts of each redshift bin, to account for the increased range of validity of our perturbative expressions at higher redshift.\footnote{In reality, the scaling of $k_{\rm max}$ with redshift is more complicated, involving the power spectrum tilt at the relevant wavenumbers (e.g.~\citealt{Carrasco:2013mua}), but the simple growth factor scaling we use here should at least be roughly indicative of the useful scales for our forecasts.} The third survey is based on specifications for PUMA, an envisioned radio interferometer designed for \tcm intensity mapping \citep{Ansari:2018ury,Bandura:2019uvb}. We assume a survey over half the sky, and again consider two redshift bins, this time at $2<z<3$ and $5<z<6$, with volumes $266\,{\rm Gpc}^3$ and $203\,{\rm Gpc}^3$ respectively. For simplicity, we treat this survey as observing galaxy positions directly, rather than brightness temperature (which is just a rescaled biased tracer of the matter density). 
To do so, we set the noise contribution to the tracer power spectrum $P_{\rm gg}$ to equal the sum of the shot noise and instrumental noise power spectra computed using the PUMA noise calculator\footnote{\url{https://github.com/slosar/PUMANoise}}, evaluated at $k=k_{\rm max}$ in each redshift bin. In Table~\ref{tab:surveys} we quote an effective number density that would result in the same noise level. When computing the shot noise contributions to $P_{\rm gr}$ and $P_{\rm rr}$, we use the expected number densities of \tcm emitters, also taken from the PUMA noise calculator and quoted in parentheses in Table~\ref{tab:surveys}. For the linear bias in each bin, we use values from Fig.~33 of \cite{Ansari:2018ury}, evaluated at the mean redshifts. As for the MegaMapper-like survey, we scale~$K_{\rm max}$ and~$k_{\rm max}$ from DESI by the appropriate ratios of linear growth factors.
In our derivation of stochastic contributions to the noise of the estimator and the cross-correlation between estimator and galaxy fields in App.~\ref{app:shot}, we assume that the noise is Poissonian, i.e., that $\left \langle \varepsilon(\vec K)\, \varepsilon(\vec K')\right \rangle =(2\pi)^3\delta_{\rm D}(\vec K+\vec K')/\bar n$. There is evidence for halo stochasticity being sub-Poissonian for high-mass haloes and super-Poissonian for low-mass haloes \citep{Hamaus:2010min,Baldauf:2013hka}. Since the stochasticity corrections arise from small-scale exclusion and higher-order biases, the actual shot noise levels cannot be predicted theoretically, implying that it may be advisable to marginalize over the stochasticity parameter(s). This approach is indeed adopted by some of the $f_\text{NL}$ forecasting literature (e.g.~\citealt{Castorina:2020blr}), but certainly not all of it (e.g.~\citealt{Schmittfull:2017ffw,Munchmeyer:2018eey}). Here we decide to fix the stochasticity parameters to their fiducial Poissonian values and defer a more detailed investigation of the impact of noise corrections on the reconstructed fields to future work. We note, however, that we expect the impact of shot noise marginalization to be rather small, since we do not include the additional non-Gaussian signal arising in combination with stochastic terms in Eqs.~(\ref{eq:cgg}-\ref{eq:crr}).
\subsubsection{\tcm foregrounds}
An additional consideration for \tcm intensity mapping is the presence of foreground radiation, predominantly synchrotron emission from our own galaxy, which is brighter than the cosmological signal by several orders of magnitude. These foregrounds are extremely smooth in frequency, which implies that they mainly populate Fourier modes with low line-of-sight wavenumber $k_\parallel$; these modes will therefore likely not be usable for cosmology. Furthermore, the chromatic properties of interferometers generically spread foreground power from the low-$k_\parallel$ modes into a wedge-shaped region in the $k_\parallel-k_\perp$ plane (e.g.~\citealt{Parsons:2012qh,Liu:2014bba,Liu:2014yxa}), although this contamination can be removed with sufficiently precise instrumental calibration (e.g.~\citealt{Shaw:2014khi,Ghosh:2017woo}). For constraining $f_\text{NL}$, the wedge will have two effects: it will reduce the number of short-wavelength modes available for the quadratic estimator, thereby increasing the noise $N_{\rm GG}$ on the reconstructed modes, and it will also reduce the number of long-wavelength $\delta_{\rm g}$ modes available for measuring the scale-dependent bias induced by primordial non-Gaussianity.
We account for both effects in our forecasts for the PUMA-like survey, assuming a foreground wedge defined by 3~times the primary beam width, following \cite{Ansari:2018ury}; see Appendix~\ref{app:fg-implementation} for details of how this is implemented in our computations. In addition, we perform forecasts that ignore the wedge, to represent the case when it can be completely eliminated via calibration. We account for lost low-$K_\parallel$ modes in two ways: either by restricting $\delta_{\rm g}$ to have $K_\parallel>K_{\parallel,{\rm min}}$, or by approximating $K_{\parallel,{\rm min}}$ as an isotropic $K_{\rm min}$, matching our procedure for DESI and MegaMapper. The former approach is more realistic, while the latter is easier to compare with the other surveys, so we present the latter in the main text, and the former in Appendix~\ref{app:fg-pumakparmin}.
\subsubsection{Bias parameters}
For every survey, to perform forecasts, we assume a fiducial value of the quadratic bias parameter $b_2$ derived from the fitting formula of \cite{Lazeyras:2015lgp}, which was fit to halo bias in separate-universe simulations over the range $1\lesssim b_1\lesssim 10$:
\begin{equation} b_2(b_1) = 2\left( 0.412 - 2.143b_1 + 0.929b_1^2 + 0.008b_1^3 \right)\ , \end{equation}
where the extra factor of 2 arises from our different definition of $b_2$ compared to \cite{Lazeyras:2015lgp}. The fiducial value of the tidal bias $b_{s^2}$ is found from
\begin{equation} b_{s^2} = -\frac{2}{7} \left( b_1 -1 \right)\ , \end{equation}
which assumes that the tidal bias in Lagrangian space is zero. In our forecasts, $b_1$, $b_2$, and $b_{s^2}$ are allowed to vary independently (i.e.\ are marginalized over when we estimate uncertainties on $f_\text{NL}$), while $b_{11}^{\rm E}$ and $b_{02}^{\rm E}$ are assumed to obey the relationships in Eqs.~\eqref{eq:b11E}-\eqref{eq:b02E} and~\eqref{eq:b01Lfinal}-\eqref{eq:b02Lfinal}. We take wide, flat priors on $b_1$, $b_2$, and $b_{s^2}$; we have also implemented 10\% Gaussian priors on $b_2$ and $b_{s^2}$, but these have a negligible effect on our baseline results.
\subsubsection{Redshift space distortions} \label{sec:rsd}
The line-of-sight component of a galaxy's position is observationally inferred from the galaxy's redshift, and the associated ``redshift-space distortions" of $\delta_{\rm g}$ should be included in a full treatment of the observed galaxy clustering. The leading-order effect is to add an $f\mu^2$ term to the linear bias of $\delta_{\rm g}$, such that Eq.~\eqref{eq:deltag-condensed} is modified to
\begin{equation} \delta_{\rm g}(\boldsymbol{k}) = \left[ b_1 + f_\text{NL} \frac{c_{01}}{M(k)} + f\mu^2 \right] \delta_1(\boldsymbol{k}) + \cdots \ , \end{equation}
where $f\equiv d\log D / d\log a$, $\mu \equiv k_\parallel/k$, and $D$ is the linear growth factor \citep{Kaiser:1987qv}. Higher-order effects will create additional mode-couplings that can be described in perturbation theory (e.g.~\citealt{Perko:2016puo,delaBella:2017qjy}). In a real tracer catalogue, there will also be line-of-sight-dependent selection effects that can be treated perturbatively \citep{Desjacques:2018pfv}. We do not include any of these effects in our baseline forecasts, leaving them for future work. However, as a first step in this direction, we have checked the impact of including the Kaiser term.
This raises the reconstruction noise $N_{\rm GG}$ by increasing $P_{\rm gg,tot}$ in the denominator of Eq.~\eqref{eq:quadest}, while also increasing the amplitude of $P_{\rm gg}$ and $P_{\rm gr}$, thereby increasing the signal to noise on those quantities. For all surveys we consider, the former effect outweighs the latter, with the result that $\sigma(f_\text{NL})$ increases by roughly 10\%, and the improvement in $\sigma(f_\text{NL})$ from including reconstructed modes decreases by no more than the same amount. Additional mode-couplings from nonlinear redshift-space effects will likely dominate over this change, and a detailed analysis will be worth pursuing, especially since some of these mode-couplings could potentially carry additional information about $f_\text{NL}$ \citep{Castorina:2020blr}.
\subsection{Expected precision on reconstructed modes} \label{sec:prec_on_r}
Aside from primordial local non-Gaussianity, there are many other applications of reconstructing large-scale modes, including more general constraints on cosmology, tests of predictions for the power spectrum on the largest scales, calibration of photometric redshifts \citep{Modi:2019hnu}, cross-correlations with other tracers (such as kSZ fluctuations in the CMB, e.g.\ \citealt{Li:2018izh}), and removing contamination from measurements of lensing of \tcm fluctuations \citep{Foreman:2018gnv}. To represent the general utility of reconstructed modes from different surveys, in Fig.~\ref{fig:prr-ebars} we show the expected precision on the auto power spectrum of the reconstructed modes (plotted using the fiducial bias parameters from Table~\ref{tab:surveys}), computed in wavenumber bins with $\Delta K = 0.002h\, {\rm Mpc}^{-1}\,$. While these errorbars are substantial for $K\lesssim 0.01h\, {\rm Mpc}^{-1}\,$ in DESI and the high-$z$ bin of MegaMapper, the precision is expected to be much better for MegaMapper at low $z$ and across the entire redshift range of PUMA, with most errorbars approaching the cosmic variance limit. This will enhance many scientific applications of these surveys, particularly for PUMA, where large-scale modes can be reconstructed at high precision even in the presence of the foreground wedge.
\begin{figure}[t] \includegraphics[width=\textwidth, trim = 10 10 10 10 ]{figures_new/prr_ebars.pdf} \caption{ \label{fig:prr-ebars} Expected errorbars on the reconstructed power spectrum $P_{\rm rr}$ for the surveys and redshift bins we consider {\it (blue)}, along with cosmic-variance-limited errorbars {\it (orange)}, computed for bandpowers with $\Delta K = 0.002h\, {\rm Mpc}^{-1}\,$. Downward arrows indicate errorbars whose lower limits fall outside of the $y$ axis range. High-precision measurements of the power spectrum of reconstructed modes will be possible in several cases, even in the presence of a \tcm foreground wedge for PUMA. } \end{figure}
\cite{Ansari:2018ury} also estimates the total signal to noise in reconstructed modes from PUMA over $1<z<6$, following the methodology of \cite{Foreman:2018gnv}, finding $\mathcal{O}(1300)$ in the no-wedge case and $\mathcal{O}(500)$ for the same wedge model we use here. For comparison, we find a total S/N of 135 (108) for $2<z<3$ and 161 (134) for $5<z<6$ in the no-wedge (wedge) case.
A direct comparison between the two sets of forecasts is difficult, because they use several distinct approximations: \cite{Ansari:2018ury} treats the \tcm brightness temperature as a linearly biased tracer of the matter density, while we have incorporated second-order biasing; \cite{Ansari:2018ury} neglects the shot noise contribution to the reconstructed mode power spectrum, while we include it; \cite{Ansari:2018ury} bias-hardens its results against mode-couplings from gravitational lensing, while we do not; and, most importantly, \cite{Ansari:2018ury} only considers reconstruction of modes that are purely transverse to the line of sight ($k_\parallel=0$), while we use a 3d reconstruction formalism. Nevertheless, both forecasts reach the same broad conclusion that PUMA will be able to reconstruct long-wavelength density modes with total signal to noise of several hundred, which is strong motivation for continued studies of the density reconstruction method we have presented in this paper.
\subsection{Results: constraints on non-Gaussianity} \label{sec:results}
\subsubsection{DESI}
\begin{figure}[t] \includegraphics[width=\textwidth, trim = 10 10 0 10 ]{figures_new/forecast_desi.pdf} \caption{ \label{fig:desi} Forecasts for a DESI-like survey. {\it Left:} Signal and noise power spectra involved in the forecast. The galaxy auto spectrum is well above the shot noise, while the auto spectrum of reconstructed modes ($P_{\rm rr}$) is roughly an order of magnitude below both the reconstruction noise ($N_{\rm GG}$) in the quadratic estimator and the shot noise contribution to the estimator variance. {\it Center:} Expected constraints on $f_\text{NL}$ when only $\delta_{\rm g}$ is used {\it (solid)}, or when $\delta_{\rm r}$ is also used {\it (dotted)}. We assume that $\delta_{\rm g}(\boldsymbol{K})$ cannot be directly measured for $K<K_{\rm min}$, and marginalize over the $b_1$, $b_2$, and $b_{s^2}$ bias parameters. {\it Right:} Ratio of $\delta_{\rm g}+\delta_{\rm r}$ and $\delta_{\rm g}$-only cases from the center panel. We only notice an improvement for higher values of $K_{\rm min}$, corresponding to using $\delta_{\rm r}$ but not $\delta_{\rm g}$ at $K<K_{\rm min}$. } \end{figure}
Fig.~\ref{fig:desi} shows the results of our forecasts for the DESI-like survey. The left panel shows the various power spectra of interest, namely those of the linear matter density, the galaxy number density, and the reconstructed matter density modes, along with the cross spectrum between galaxies and reconstructed modes. This panel also shows the shot noise on $P_{\rm gg}$, $P_{\rm gr}$, and $P_{\rm rr}$, as well as the statistical noise ($N_{\rm GG}$) on reconstructed modes. For DESI, the galaxy power spectrum is well above the shot noise, while the reconstructed power spectrum is about an order of magnitude lower than the reconstruction noise. Despite the fact that galaxy shot noise is below $P_{\rm gg}$, the shot noise contributions for both $P_{\rm gr}$ and $P_{\rm rr}$ are above the signal power spectra. As explained in Appendix~\ref{app:shot}, this is due to coupling between galaxy shot noise and clustering at large scales, where the variance is larger than at small scales; these shot noise spectra are therefore significantly boosted compared to the $\bar{n}^{-1}$ contribution. The middle panel of Fig.~\ref{fig:desi} shows the expected constraints on $f_\text{NL}$ when only $\delta_{\rm g}$ is used, or when reconstructed modes are also incorporated. The right panel shows the ratio of $\sigma(f_\text{NL})$ in these two cases.
The improvement in $\sigma(f_\text{NL})$ is negligible at the lowest $K_{\rm min}$ we consider, which corresponds to $\delta_{\rm g}$ being measured on all scales resolvable within the survey volume ($K_{\rm min}=K_{\rm f}$). However, a larger improvement is seen when $K_{\rm min}$ is assumed to be higher: for $K_{\rm min}=0.02h\, {\rm Mpc}^{-1}\,$, for example, $\sigma(f_\text{NL})$ improves by around 15\% when reconstructed modes are used. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth, trim = 10 10 0 10 ]{figures_new/variations_desi.pdf} \caption{ \label{fig:desi-options} The analog of the right panel of Fig.~\ref{fig:desi}, with a variety of (mostly artificial) modifications to the forecasts. There is no improvement in $\sigma(f_\text{NL})$ when $\delta_{\rm r}$ is neglected at $K<K_{\rm min}$, indicating that the inclusion of $\delta_{\rm r}$ at $K<K_{\rm min}$ drives the improvement. Greater improvements are achieved for higher galaxy number density or if $k_{\rm max}$ can be increased by a factor of~2, with milder changes if the fiducial $b_2$ value is set to zero or shot noise on $P_{\rm rr}$ and $P_{\rm gr}$ is neglected. } \end{figure} To determine the origin of this behavior, we show several modifications of this forecast in Fig.~\ref{fig:desi-options}. In particular, when reconstructed modes with $K<K_{\rm min}$ are not included, there is no improvement of $\sigma(f_\text{NL})$, indicating that these modes are entirely responsible for the improvement. Therefore, DESI is not powerful enough to allow for cosmic variance cancellation between $\delta_{\rm g}$ and $\delta_{\rm r}$ at the same scales; rather, the primary use of reconstruction is to access scales ($K<K_{\rm min}$) where $\delta_{\rm g}$ cannot be directly measured. This naturally explains why the improvement of $\sigma(f_\text{NL})$ grows for higher~$K_{\rm min}$. While the absolute values of $\sigma(f_\text{NL})$ are not impressive at such high $K_{\rm min}$ -- at $K_{\rm min}=0.02h\, {\rm Mpc}^{-1}\,$, for example, $\sigma(f_\text{NL}) \approx 50$ without reconstruction and $40$ with reconstruction -- the improvement comes ``for free," without requiring any other datasets. The other curves in Fig.~\ref{fig:desi-options} illuminate other aspects of this forecast. Increasing $\bar{n}$ to an unrealistically high value of $10^2\,{\rm Mpc}^{-3}$ improves the $\delta_{\rm g}$-only forecast by roughly 10\% (not shown), and also increases the improvement on $\sigma(f_\text{NL})$ from including $\delta_{\rm r}$, indicating that shot noise is a limiting factor in this improvement. Simply neglecting $P_{\rm rr,shot}$ and $P_{\rm gr,shot}$ has a similar effect, clarifying that shot noise in the galaxy power spectrum itself is comparatively less important than in these other spectra. Also, we see the same type of change if we alter the fiducial value of $b_2$. As mentioned at the end of Sec.~\ref{sec:noisecont}, if $P_{\rm gg} \gg P_{\rm gg,shot}$ (as it is here), then $P_{\rm rr}/N_{\rm GG} \propto (1+b_2/b_1)^2$, so increasing $b_2$ from $-0.3$ to $0$ boosts the signal to noise on the reconstructed modes. This would lead to a larger improvement if not for the large contribution of $P_{\rm rr,shot}$. Boosting $k_{\rm max}$ by a factor of 2 leads to a better $\sigma(f_\text{NL})$ improvement at low $K_{\rm min}$. 
This change lowers the Gaussian reconstruction noise $N_{\rm GG}$, but also raises $P_{\rm rr,shot}$ and $P_{\rm gr,shot}$ by different amounts, and the combination of these changes ends up slightly boosting the constraining power of $\delta_{\rm r}$.
It may seem counterintuitive that $\delta_{\rm r}$ adds anything at all to our forecasts, since the reconstruction noise and shot noise on $P_{\rm rr}$ are much larger than $P_{\rm rr}$ itself: one would expect such large noise to lead to a low cross-correlation coefficient between $\delta_{\rm r}$ and $\delta_{\rm g}$, and also make it difficult to extract information from the auto spectrum of $\delta_{\rm r}$. However, the presence of a cross shot noise contribution to $P_{\rm gr}$ changes this picture, contributing to the $\delta_{\rm r}$-$\delta_{\rm g}$ cross-correlation coefficient and altering the structure of the covariance matrix. While this is not trivial to see in the Fisher matrix expression in Eq.~\eqref{eq:fisheronepar}, the net effect is to enhance the information content of $\delta_{\rm r}$ with respect to $f_\text{NL}$. \cite{Liu:2020izx} reached a similar conclusion when examining cosmic variance cancellation between different line intensity maps, noticing that lowering the cross shot noise contribution led to worsened constraints on $f_\text{NL}$.
\subsubsection{MegaMapper}
\begin{figure}[t] \includegraphics[width=\textwidth, trim = 10 10 0 10 ]{figures_new/forecast_mm_lowzbin.pdf} \includegraphics[width=\textwidth, trim = 10 10 0 10 ]{figures_new/forecast_mm_highzbin.pdf} \caption{ \label{fig:mm} As Fig.~\ref{fig:desi}, for low-redshift {\it (top panels)} and high-redshift {\it (bottom panels)} bins of a MegaMapper-like survey. The former has greater signal to noise on reconstructed modes than DESI, leading to a greater improvement in $\sigma(f_\text{NL})$ when these modes are included in the forecast. For the latter, the shot noise contributions to $P_{\rm rr}$ and $P_{\rm gr}$ are comparatively much larger, leading to a different scale-dependence for the improvement in $\sigma(f_\text{NL})$. } \end{figure}
\begin{figure}[h] \includegraphics[width=\textwidth, trim = 10 10 0 10 ]{figures_new/variations_mm.pdf} \caption{ \label{fig:mm-options} Modifications to the base MegaMapper forecasts. For the low-redshift bin {\it (left panel)}, as for DESI, the improvement in $\sigma(f_\text{NL})$ is driven mostly by modes of $\delta_{\rm r}$ with $K<K_{\rm min}$, with further improvement possible for higher galaxy number density. For the high-redshift bin {\it (right panel)}, neglecting $\delta_{\rm r}$ at $K<K_{\rm min}$ makes no difference, indicating that for lower $K_{\rm min}$ values, cosmic variance cancellation between $\delta_{\rm g}$ and $\delta_{\rm r}$ at the same $K$ is driving the improvement in $\sigma(f_\text{NL})$. There are several ways to obtain greater improvements, as discussed in the main text. } \end{figure}
Our results for the MegaMapper-like survey are shown in Fig.~\ref{fig:mm}. For the low-$z$ bin, the signal to reconstruction noise on the reconstructed modes is higher than for DESI, thanks to a combination of higher $\bar{n}$, higher $k_{\rm max}$, and higher bias, and the ratio of shot noise to signal is also correspondingly smaller. This leads to a greater improvement in $\sigma(f_\text{NL})$ when reconstructed modes are included.
The left panel of Fig.~\ref{fig:mm-options} shows that, as for DESI, this improvement comes not from cosmic variance cancellation, but from reconstructed modes with $K<K_{\rm min}$, where we assume that $\delta_{\rm g}$ cannot be directly measured. We see large changes if $\bar{n}$ is boosted or $P_\text{gr,shot}$ and $P_\text{rr,shot}$ are neglected, indicating that shot noise is a limiting factor in this bin. Changing the fiducial $b_2$ from $1.1$ to $0$ reduces the usefulness of the reconstructed modes, for the same reason that changing~$b_2$ increased their usefulness for DESI.
We see rather different behavior in the high-$z$ bin. There, we find that the reconstruction noise is of the same order as $P_{\rm rr}$, while the shot noise contribution to $P_{\rm rr}$ is much greater than the signal, and the shot noise contribution to $P_{\rm gr}$ is also greater than the signal. Despite this, the improvement in $\sigma(f_\text{NL})$ is larger than for the low-$z$ bin, reaching 50\% at $K_{\rm min}=K_{\rm f}$. The right panel of Fig.~\ref{fig:mm-options} shows that the improvement is the same whether or not we include modes of $\delta_{\rm r}$ with $K<K_{\rm min}$, and therefore cosmic variance cancellation between $\delta_{\rm g}$ and $\delta_{\rm r}$ is solely responsible for the change in $\sigma(f_\text{NL})$. We also see from Fig.~\ref{fig:mm-options} that the low number density ($\bar{n}=2\times10^{-5}\,{\rm Mpc}^{-3}$) in the high-$z$ bin is not a huge limiting factor, with only a modest change if we use a much larger number density; this is because the reconstruction noise remains comparable to $P_{\rm rr}$ even for a much denser survey, while further improvements are possible for a higher $k_{\rm max}$ but the same number density. If $P_{\rm rr,shot}$ and $P_{\rm gr,shot}$ are ignored, the results revert to the same situation as the low-$z$ bin, with only slight gains in $\sigma(f_\text{NL})$ possible for low $K_{\rm min}$. Finally, if $b_2$ is changed from $17$ to $0$, there is significantly more improvement in $\sigma(f_\text{NL})$: the amplitudes of $P_{\rm rr}$ and $P_{\rm gr}$ are reduced, but the relative uncertainty on $f_\text{NL}$ from marginalizing over $b_2$ is also reduced, and the latter effect wins.
\subsubsection{PUMA} \label{sec:puma}
\begin{figure}[t] \includegraphics[width=\textwidth, trim = 10 10 0 10 ]{figures_new/forecast_puma_lowzbin_wedge_and_nowedge.pdf} \includegraphics[width=\textwidth, trim = 10 10 0 10 ]{figures_new/forecast_puma_highzbin_wedge_and_nowedge.pdf} \caption{ \label{fig:puma} As Fig.~\ref{fig:desi}, for low-redshift (\textit{top panels}) and high-redshift (\textit{bottom panels}) bins of a PUMA-like survey, treating the \tcm brightness temperature in the same way as $\delta_{\rm g}$ in our other forecasts, and translating thermal noise on the brightness temperature into an effective tracer number density for computing shot noise. We show $\sigma(f_\text{NL})$ either neglecting or incorporating the effects of the \tcm foreground wedge; at high $z$, the benefit to $\sigma(f_\text{NL})$ from including reconstructed modes is greater in the presence of the wedge, since there are fewer $\delta_{\rm g}$ modes that can be directly measured in that case. The results for the low-redshift bin are similar to those for MegaMapper, while larger improvements in $\sigma(f_\text{NL})$ are possible at higher redshift.} \end{figure}
We show results for the PUMA-like survey in Fig.~\ref{fig:puma}, either neglecting or including the effects of the foreground wedge.
Note that the left panels in Fig.~\ref{fig:puma} only show noise curves corresponding to the no-wedge case. As for the other surveys, we assume an isotropic $K_{\rm min}$ for $\delta_{\rm g}$ in Fig.~\ref{fig:puma}; we show results for a cutoff on $K_{\parallel}$, which are qualitatively similar to those in Fig.~\ref{fig:puma}, in Appendix~\ref{app:fg-pumakparmin}. For both redshift bins, the shot noise in $C^{\rm gg}$, $C^{\rm rr}$, and $C^{\rm gr}$ is below the signal. However, the reconstruction noise is high enough in the low-redshift bin that the effect of reconstructed modes on $\sigma(f_\text{NL})$ is similar to DESI and the low-$z$ MegaMapper bin, with the vast majority of the extra constraining power coming from reconstructed modes with $K<K_{\rm min}$ (see the left panel of Fig.~\ref{fig:puma-options}). The impacts of taking a higher $k_{\rm max}$ or a higher tracer number density (the latter being equivalent to lower thermal noise in the interferometer) would only be mild.
\begin{figure}[t] \includegraphics[width=\textwidth, trim = 10 10 0 10 ]{figures_new/variations_puma.pdf} \caption{ \label{fig:puma-options} Modifications to the base PUMA forecasts, neglecting the foreground wedge. In the low-$z$ bin, modes of $\delta_{\rm r}$ with $K<K_{\rm min}$ are entirely responsible for the improvement in $\sigma(f_\text{NL})$, with most other modifications having little effect. In the high-$z$ bin, the blue dotted curve demonstrates that the $\sigma(f_\text{NL})$ improvement comes from a combination of low-$K$ modes of $\delta_{\rm r}$ and cosmic variance cancellation at higher $K$. The improvement would get better if the thermal noise could be reduced (which maps onto a higher $\bar{n}$ in these forecasts). } \end{figure}
Meanwhile, in the high-$z$ bin, the improvement in $\sigma(f_\text{NL})$ arises from a combination of low-$K$ reconstructed modes and cosmic variance cancellation between $\delta_{\rm g}$ and $\delta_{\rm r}$. There is greater improvement in the presence of the wedge, as reconstruction helps to recover modes that would otherwise be lost. This improvement is around 20\% at the lowest~$K_{\rm min}$, and increases as more $\delta_{\rm g}$ modes are lost, implying that reconstruction will be extremely useful for single-tracer constraints on $f_\text{NL}$ from PUMA or other high-$z$ intensity mapping surveys. The right panel of Fig.~\ref{fig:puma-options} shows that lower thermal noise would lead to further improvements, while a lower value of $b_2$ would worsen the results due to a lowering of the signal to noise on $P_{\rm rr}$.
\subsubsection{...and beyond}
To demonstrate how the constraints on $f_\text{NL}$ scale for surveys with extremely low shot noise and reconstruction noise, we also examine forecasts for the PUMA high-redshift bin where $k_{\rm max}$ is artificially increased, assuming that our quadratic bias model is valid to arbitrarily high $k$.\footnote{In practice the quadratic bias model will break down at sufficiently high $k$, but a theoretical framework such as the response function formalism (e.g.\ \citealt{Barreira:2017sqa, Barreira:2017kxd}) may allow the use of higher $k_{\rm max}$, with suitable modifications of the reconstruction procedure. We leave this topic to future work.} We take the galaxy number density to infinity in these forecasts, to prevent shot noise from becoming the limiting factor. In this case, we expect the uncertainty on $f_\text{NL}$ to scale like the inverse of the signal to noise on the reconstructed modes (see Sec.~\ref{sec:lowshotlimit}).
In turn, in this limit, the signal to noise scales like $k_{\rm max}^{3/2}$, because the reconstruction noise spectrum $N_{\rm GG}$ becomes inversely proportional to the number of modes with $k_{\rm min}<k<k_{\rm max}$ (see Eq.~\eqref{eq:NgglowK}). In Fig.~\ref{fig:sigfnl-scaling}, we show the ratio of $\sigma(f_\text{NL})$ between the g$+$r and g-only forecasts as a function of $k_{\rm max}$ for two representative values of $K_{\rm min}$. We indeed find that as the signal to noise on reconstructed modes is increased, the improvement on $\sigma(f_\text{NL})$ also increases, with the unmarginalized forecasts quickly satisfying the expected scaling. (Marginalization over bias parameters causes small deviations from this scaling.) This demonstrates the huge increases in constraining power that are possible in principle for a survey with high galaxy number density and many small-scale modes whose correlations can be used in reconstruction. We have also numerically verified that the $\sigma(f_\text{NL})$ ratio stays flat with increasing $k_{\rm max}$ if the noise in $P_{\rm gg}$ is taken to be very high, or if zero cross-correlation between $\delta_{\rm g}$ and $\delta_{\rm r}$ is assumed, further demonstrating that the scaling seen in Fig.~\ref{fig:sigfnl-scaling} arises from the joint constraining power of $\delta_{\rm g}$ and $\delta_{\rm r}$ measured from the same volume.
\begin{figure}[t] \centering \includegraphics[width=\textwidth, trim = 10 10 0 10 ]{figures_new/sigfnl_scaling_kmax.pdf} \caption{ \label{fig:sigfnl-scaling} The ratio of $\sigma(f_\text{NL})$ for the $\delta_{\rm g}+\delta_{\rm r}$ and $\delta_{\rm g}$-only forecasts for the high-$z$ PUMA bin, where $\bar{n}$ is taken to infinity and $k_{\rm max}$ is artificially increased, assuming that our quadratic bias model is valid to arbitrarily high $k$. The left and right panels correspond to two representative values of $K_{\rm min}$. We show forecasts after marginalizing over bias parameters ({\it black points}) and without any marginalization ({\it red points}), along with the expected $k_{\rm max}^{-3/2}$ scaling ({\it dashed lines}, each normalized to the highest-$k_{\rm max}$ point plotted). We find that the unmarginalized curves quickly approach the ideal scaling, while the marginalized forecasts show small deviations from it. This shows that large increases in constraining power are possible in principle for surveys with very high number density and a large allowed value of $k_{\rm max}$. } \end{figure}
\section{Discussion} \label{sec:discussion}
The results presented here can be compared to other methods that utilize reconstruction and/or combine a tower of $n$-point correlation functions. In contrast to most methods proposed in the literature, this work presents an optimal quadratic estimator to reconstruct the large-scale modes. As explained in Sec.~\ref{sec:prec_on_r}, in principle these reconstructed modes can be used for several (cosmological) applications, and here we only explored $f_\text{NL}$ as an application of interest. When comparing this work with previous works, the main question is whether the amount of information captured in the statistics of the tracer field is fully exploited. While it will be hard to compare methods directly, here we propose some heuristic arguments for where we think the methods overlap and where they differ.
As mentioned in the introduction, some publications have aimed to simplify the search for primordial non-Gaussianities by proposing more compressed versions of the full bispectrum \citep{Schmittfull:2014tca,Fergusson:2010ia,Byun:2017fkz,Dai:2020adm,MoradinezhadDizgah:2019xun,Chiang:2014oga,dePutter:2018jqk,Gualdi:2018pyw}. Common to these works is the fact that the information accessed is captured by the 2- and 3-point functions. In this work, besides the 3-point function, the 4-point function is also used and is important in obtaining cosmic variance cancellation. In other words, as shown in Fig.~\ref{fig:sigfnl-scaling}, significant improvements are possible when certain conditions are met that would not be possible when considering the compressed statistics proposed in these earlier works. Even if cosmic variance cancellation is not achieved, we generally observe improvements of $20$--$50\%$. These numbers are similar to those projected in, for example, \cite{Dai:2020adm} for compressed statistics, but direct comparisons between our method and others are generally difficult. In the method presented in this paper, the improvement can roughly be attributed either to access to larger scales through the reconstruction or, when the linear and reconstructed modes are combined, to cosmic variance cancellation. The projected improvement on the amplitude $f_\text{NL}$ from compressed statistics is the result of adding the bispectrum information on top of the power spectrum. For a detailed comparison, we would need to carefully associate every mode with improved signal-to-noise side by side for the two different methods. Although this would be interesting by itself, as it would help us understand to what extent these methods overlap or how they complement one another, we leave this to future work. The paper with which our work has most in common is \cite{dePutter:2018jqk}, which discusses the information content of a joint analysis of the two-point function and squeezed three- and four-point functions. That work has several commonalities with our analysis. To perform forecasts, \cite{dePutter:2018jqk} uses the squeezed-limit position-dependent power spectrum as a field, in an approach that is quite similar to our long-wavelength mode reconstruction. The author also makes similar arguments for how sample variance cancellation can significantly influence and improve constraints. However, there are also many important differences from our approach. Most importantly, the specific squeezed-limit power spectrum picture in \cite{dePutter:2018jqk} is discussed as a tool to enable better forecasting of joint 2-, 3- and 4-point analyses of local non-Gaussianity, rather than as a practical data analysis method. In contrast, our method has been proposed as an analysis method and estimator to rapidly jointly analyze 2-, 3- and 4-point functions, which is not only computationally tractable, but has been tested (to some extent) on simulations. There are also significant differences in the details of the methodology. Our reconstruction quadratic estimator can infer the long-wavelength mode from mode-pairs that are not much smaller than the mode to be reconstructed; in contrast, \cite{dePutter:2018jqk} always operates in the squeezed limit when analyzing the position-dependent power spectrum.
While it is expected that the majority of information about local non-Gaussianity in the 3- and 4-point functions is contained in very squeezed shapes, it is not clear that non-squeezed shapes do not contribute to long-wavelength mode reconstruction and hence sample variance cancellation. On the other hand, we note that in our analysis method we combine all quadratic estimator mode pairs into one long-wavelength mode estimate; in contrast, \cite{dePutter:2018jqk} shows that additional sample variance cancellation can be obtained when treating each mode pair (or position-dependent power spectrum bin) as a separate tracer. Although this suggests that further improvements to our method might be possible, the results of \cite{dePutter:2018jqk} indicate that this would only give significant improvements for very high $k$ and very low noise, beyond the capabilities of next-generation surveys. Finally, shortly before the completion of this work, in a follow-up to \cite{Li:2020uug}, \cite{Li:2020luq} presented results relating to reconstruction of large-scale density modes using biased tracers, although without discussing the application to constraining non-Gaussianity. While the core of that work is similar to ours (using a quadratic estimator as proposed by \citealt{Foreman:2018gnv}), here we explicitly account for the mode-coupling from higher-order biasing (which is non-negligible) in our estimator and compare theoretical estimates of the reconstruction noise, including bi- and trispectrum shot noise with additional contributions from primordial non-Gaussianity, to simulations. \cite{Li:2020luq} include observations on the light-cone in their formalism, and also include the effect of redshift space distortions up to second order in the linear density, which we neglect in this work (although see Sec.~\ref{sec:rsd} for a discussion of the impact of the Kaiser term). \section{Conclusions} \label{sec:conclusions} In this paper, we have further developed a method for reconstructing modes of the cosmic density field using a quadratic estimator. This estimator extracts information about (typically) large-scale modes from correlations between smaller-scale modes, similar to standard methods for CMB lensing reconstruction. We have improved upon the estimator introduced in \cite{Foreman:2018gnv} by incorporating nonlinear biasing and local-type primordial non-Gaussianity, up to second order in the linear density field. At this order, there are several distinct sources of couplings between small-scale modes of the tracer density field, with amplitudes (i.e.\ bias coefficients) that are unknown a priori. We have found that an estimator based on the mode-coupling due to isotropic growth of the perturbations results in the lowest noise on reconstructed modes, and have enumerated the various multiplicative biases that will accompany the output of this estimator. We have also applied this estimator (along with those based on large-scale bulk flows and tidal interactions) to halos in $N$-body simulations, verifying that the results agree with analytical predictions. In the course of this study, we have found that it is crucial to include the shot noise contribution to the covariance between directly-observed tracer modes and reconstructed modes when performing an analysis.
The shot noise not only adds a white noise contribution to the tracer power spectrum itself (in the case that the tracers Poisson-sample the density field, which we assume here), but also adds noise to the reconstructed-mode power spectrum and the cross-spectrum with the tracer modes. For sufficiently low tracer number density, this contribution can actually overwhelm the reconstruction noise from the quadratic estimator, and the cross spectrum alters the correlation coefficient between the tracer and reconstructed modes. We self-consistently include these features in our forecasts. We have carried out forecasts that apply this formalism to several upcoming large-scale structure surveys: the emission-line galaxy survey from DESI \citep{Aghamousa:2016zmz}, the high-$z$ dropout survey envisioned in the MegaMapper proposal \citep{Ferraro:2019uce,Schlegel:2019eqc}, and the \tcm line-intensity survey from the PUMA proposal \citep{Ansari:2018ury,Bandura:2019uvb}, treated like a galaxy survey with effective number density derived from PUMA's thermal noise model. Examining the expected errorbars on the power spectrum of the reconstructed modes for $K<0.02h\, {\rm Mpc}^{-1}\,$, we find that these errorbars are several times larger than the signal for DESI and a high-redshift bin of MegaMapper. The latter is limited by the low number density of tracers, leading to a high shot noise contribution to the reconstruction noise, while the former's high reconstruction noise is sourced both by shot noise and a low number of modes used in the reconstruction (i.e.\ low $k_{\rm max}$). In the other forecasts, we find that high-S/N reconstructions of the large-scale density power spectrum can be obtained, with the caveat that this spectrum comes with multiplicative biases with known shapes but unknown amplitudes. We have also computed the expected improvement in constraints on the amplitude of local-type primordial non-Gaussianity, $f_\text{NL}$, arising from analyzing reconstructed modes along with directly-observed tracer modes. For DESI and the low-$z$ bins of MegaMapper and PUMA, the improvement arises solely from being able to access reconstructed modes with $K<K_{\rm min}$, where $K_{\rm min}$ denotes the minimum wavenumber at which we assume tracer modes can be measured (assuming that systematics obscure tracer modes with $K<K_{\rm min}$ but do not affect the smaller-scale modes used for reconstruction). On the other hand, for a high-$z$ bin of MegaMapper, the improvement in $f_\text{NL}$ constraints arises solely from cosmic variance cancellation between tracer and reconstructed modes at the same wavenumbers, similar to what can happen with different tracer populations or tracer-lensing cross-correlations \citep{Seljak:2008xr,McDonald:2008sh,Schmittfull:2017ffw,Liu:2020izx}. For a high-$z$ bin of PUMA, the $\sigma(f_\text{NL})$ improvement comes from a combination of cosmic variance cancellation and access to reconstructed modes with $K<K_{\rm min}$. Generally, cosmic variance cancellation depends on having a sufficiently high cross-correlation coefficient between the tracer and reconstructed modes, but this depends on shot noise in a somewhat complicated way, due to the aforementioned cross shot noise contribution. The improvement in $\sigma(f_\text{NL})$ also depends on the assumed value of $K_{\rm min}$, so we have plotted the expected constraints as a function of $K_{\rm min}$.
In general, reconstructed modes improve $\sigma(f_\text{NL})$ by tens of percent: for example, at $K_{\rm min}=0.01h\, {\rm Mpc}^{-1}\,$, $\sigma(f_\text{NL})$ improves by a few percent for DESI, by 15\% and 40\% for the low-$z$ and high-$z$ MegaMapper bins we consider, and by at least 20\% for both $z$-bins of PUMA, depending on what is assumed for the \tcm foreground wedge. We have also shown that in the limit of zero shot noise, and if our quadratic bias model were valid to arbitrarily high~$k$, $\sigma(f_\text{NL})$ scales like $k_{\rm max}^{-3/2}$, reflecting the number of small-scale modes used for reconstruction. There are several possible ways that this work could be extended. For example, we have neglected redshift-space distortions, but they should clearly be incorporated in advance of applying this technique to data. One could also consider applying reconstruction to photometric surveys, after an assessment of the impact of photometric redshift errors on the results. It would be interesting to see how things change if one were to consider the bias model from \cite{Schmittfull:2018yuk}, based on shifted versions of bias operators designed to more fully incorporate large-scale displacements. Finally, one could consider investigating nonlinear response functions \citep{Barreira:2017sqa, Barreira:2017kxd} as a way to increase the number of small-scale modes that could be used in the quadratic estimator. Overall, we expect there to be many applications for reconstructed modes beyond constraints on local-type non-Gaussianity, and we therefore advocate for this reconstruction procedure as a useful tool to increase the scientific returns of upcoming large-scale structure surveys. \acknowledgments We thank Emanuele Castorina, Azadeh Moradinezhad Dizgah, Simone Ferraro, Mat Madhavacheril, Moritz M{\"u}nchmeyer, Will Percival, Matias Zaldarriaga, and Hong-Ming Zhu for useful conversations. We also thank Emanuele Castorina, Azadeh Moradinezhad Dizgah, Emmanuel Schaan and Marcel Schmittfull for thoughtful comments on a draft of this paper. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. S.F.\ thanks the Kavli Institute for Cosmology, Cambridge for hospitality while part of this work was carried out. The numerical part of this work was performed using the DiRAC COSMOS supercomputer and greatly benefited from the support of K. Kornet. M.A. acknowledges support from the Cambridge Commonwealth Trust, the Higher Education Commission, Pakistan, and the Cambridge Centre for Theoretical Cosmology. T.B. acknowledges support from the Cambridge Center for Theoretical Cosmology through the Stephen Hawking Advanced Fellowship. P.D.M.\ acknowledges support of the Netherlands organization for scientific research (NWO) VIDI grant (dossier 639.042.730). B.D.S. acknowledges support from an Isaac Newton Trust Early Career Grant, from a European Research Council (ERC) Starting Grant under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 851274), and from an STFC Ernest Rutherford Fellowship. OD is funded by the STFC CDT in Data Intensive Science. \section{Simulations} \label{sec:sims} To validate the quadratic estimator framework presented in Sec.~\ref{sec:formalism}, we use a suite of 15 realisations of a cosmological $N$-body simulation.
The initial conditions are generated with the second order Lagrangian Perturbation Theory (\textbf{2-LPT}) code \citep{Scoccimarro:2011pz} at the initial redshift $z_\text{i}=99$ and are subsequently evolved using \textbf{Gadget-2} \citep{Springel:2005mi}. The simulations are performed with $N_\text{p} = 1024^{3}$ dark matter particles in a cubic box of length $L=1500 h^{-1}$ Mpc with periodic boundary conditions. We assume a flat $\Lambda$CDM cosmology with the cosmological parameters $\Omega_\text{m}=0.272$, $\Omega_\Lambda=0.728$, $h=0.704$, $n_\text{s}=0.967$, $\sigma_8 =0.81$. Dark matter halos in the final $z=0$ density field are identified using a Friends-of-Friends (FoF) algorithm with linking length $l=0.2$ times the mean interparticle distance. The halos are binned in mass, with each bin spanning a factor of three in mass. We have checked the viability of our reconstruction method for a range of masses, finding qualitatively similar results in all cases; however, for simplicity, we present only the results for the lowest mass bin, the properties of which are given in Table~\ref{halotable}. Particles and halos are assigned to a regular grid using the Cloud-in-Cell (CIC) scheme. We Fourier transform the matter and halo density fields using the publicly available \textbf{FFTW} library\footnote{\url{http://www.fftw.org}}. \begin{table}[t] \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Mass Bin & Mean Halo Mass $[10^{13} h^{-1}M_{\odot}]$ & $\bar{n}$ $[10^{-6}h^{3}$ Mpc$^{-3}$] & $b_1$ &$c_g$&$c_t$&$c_s$\\ [0.2ex] \hline I & 0.77 & 627 & 1.07&0.62&1.14&1.07 \\ \hline \end{tabular} \end{center} \caption{Properties of the halo mass bin employed in this study: the mean mass of the sample, the number density of halos~$\bar{n}$, the linear bias $b_1$, and the three relevant $c_\alpha$ parameters defined in Table~\ref{tab:modecouplings}. The measured bias parameters are taken from~\cite{Abidi2018}, which is based on the same simulations and mass bin we use here.} \label{halotable} \end{table} \subsection{Generation of quadratic estimators} We generate quadratic estimators from the halo density field $\delta_\text{g}$ in $N$-body simulations using the convolution theorem. This means that we use a sequence of multiplications with powers of wavenumbers in Fourier space, Fourier transforms, and subsequent multiplication of the weighted fields in configuration space. We generate three quadratic estimators corresponding to the growth term $\delta^2$, the shift term $\Psi\cdot\nabla\delta$, and the tidal term $s^2$, with associated Fourier-space kernels given in Eq.~\eqref{eq:f2kernels}. The first step in our procedure is to remove very small scale modes by applying a cut-off $k_{\text{max}}$ in Fourier space through multiplication of the Fourier space density field with a filtering function. While the exact form of the cutoff is not important, we adopt a Gaussian filter $W(R \boldsymbol{k}) = \exp\left(-{k^2R^2}/{2}\right)$ for numerical stability. We denote the smoothed density field by $\delta_{\text{g}}^{R}(\boldsymbol{k})$. We choose three smoothing scales: $R=20 h^{-1}$ Mpc, $R=10 h^{-1}$ Mpc, and $R=4 h^{-1}$ Mpc, corresponding to maximum wavenumbers $k_{\text{max}}\approx 0.05 h\, {\rm Mpc}^{-1}\,$, $k_{\text{max}}\approx0.1 h\, {\rm Mpc}^{-1}\,$, and $k_{\text{max}}\approx 0.25h\, {\rm Mpc}^{-1}\,$ respectively.
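To make the pipeline concrete, the following is a minimal Python/NumPy sketch of the filtering and growth-estimator steps described in this subsection. The Wiener-filtered fields $\delta_A$ and $\delta_B$ are those defined in Eq.~\eqref{eq:dB} below, and the prefactor $17/21$ is the growth coupling from Table~\ref{tab:modecouplings}; the grid size, the input field, and the power spectrum here are placeholders rather than the actual simulation products.
\begin{verbatim}
import numpy as np

# Minimal sketch: growth quadratic estimator on a periodic grid.
# Placeholder inputs; in practice delta_g is the CIC-gridded halo field
# and P_lin is the linear power spectrum evaluated on the grid.
N, L, R = 128, 1500.0, 4.0        # grid, box [Mpc/h], smoothing [Mpc/h]
b1, nbar = 1.07, 6.27e-4          # bias, number density (Table 1)

kf = 2.0 * np.pi / L
k1d = np.fft.fftfreq(N, d=1.0 / N) * kf
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2

delta_g = np.random.standard_normal((N, N, N))   # placeholder field
P_lin = 1.0 / (1.0e-4 + k2)                      # placeholder spectrum

dk = np.fft.fftn(delta_g) * np.exp(-0.5 * k2 * R**2)  # Gaussian W(Rk)
Ptot = b1**2 * P_lin + 1.0 / nbar                     # Wiener denominator
delta_A = np.fft.ifftn(dk / Ptot).real                # Eq. (dB), field A
delta_B = np.fft.ifftn(dk * P_lin / Ptot).real        # Eq. (dB), field B

# Growth estimator: configuration-space product, weighted by 17/21,
# transformed back to Fourier space (N_GG normalization omitted).
growth_hat = np.fft.fftn((17.0 / 21.0) * delta_A * delta_B)
\end{verbatim}
The shift and tidal estimators follow the same pattern, with additional wavenumber weightings applied in Fourier space before the configuration-space multiplication.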
The smoothing scale removes all wavenumbers $k>k_{\text{max}}$, such that we reconstruct long wavelength modes using modes $k<k_{\text{max}}$ for three different cases. The mode coupling functions $g_{\alpha}(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q})$ defined by Eq.~\eqref{eq:galpha} contain a Wiener filter, which we implement by first generating the linear power spectrum on the simulation grid, and then defining two fields: \begin{equation} \delta_{A}(\boldsymbol{k}) = \frac{\delta_{\text{g}}^{R}(\boldsymbol{k})}{b_1^2 P_{\text{lin}}(\boldsymbol{k}) + \bar{n}^{-1}}\qquad \text{and} \qquad \delta_{B}(\boldsymbol{k}) = \frac{\delta_{\text{g}}^{R}(\boldsymbol{k})P_{\text{lin}}(\boldsymbol{k})}{b_1^2 P_{\text{lin}}(\boldsymbol{k}) + \bar{n}^{-1}}, \label{eq:dB} \end{equation} where $b_1$ and $\bar{n}$ are the linear bias and halo number density corresponding to the halo mass bin defined in Table~\ref{halotable}. Using $\delta_{A}$ and $\delta_B$, we generate the growth, shift and tidal estimators using multiplications by powers of wavenumbers in Fourier space, Fourier transforms, and multiplications of fields in configuration space. For example, we generate the growth estimator as follows. First, we inverse Fourier transform both fields defined in Eq.~\eqref{eq:dB} to obtain $\delta_{A}(\boldsymbol{x})$ and $\delta_{B}(\boldsymbol{x})$. Next, in configuration space, we multiply the product of both fields by $17/21$ (Table~\ref{tab:modecouplings}) and finally Fourier transform back to obtain the growth estimator in Fourier space. We generate the shift and tidal estimators with a similar procedure. Note that the main computational cost in generating the quadratic estimators comes from performing the Fourier transforms. The auto- and cross-spectrum analysis of quadratic estimators only requires the computational cost of a power spectrum analysis, which is quite efficient. In all our figures in this section, we estimate the errorbars of our measurements using the standard deviation over the 15 simulation realisations. \begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{Figures_Sim/Aidelta_thsim_bin1} \caption{Cross correlations of estimators $\hat{\Delta}_{\alpha}$ corresponding to the growth, shift, and tidal mode-couplings with the linear density field $\delta_1$. We compare theory predictions (lines) with simulations (points) for three different smoothing scales, $R=20 h^{-1}$ Mpc, $R=10 h^{-1}$ Mpc and $R=4 h^{-1}$ Mpc, corresponding to maximum wavenumbers $k_{\text{max}} = 0.05 h\, {\rm Mpc}^{-1}\,$, $0.1h\, {\rm Mpc}^{-1}\,$, and $0.25 h\, {\rm Mpc}^{-1}\,$ respectively. In this figure, we plot $\langle\hat{\Delta}_{\alpha} \delta_1 \rangle/N_{\alpha \alpha}$ (in contrast to what is defined in Eq.~\eqref{eq:quadest}, in simulations we define the estimators $\hat{\Delta}_{\alpha}$ without the prefactor $N_{\alpha\alpha}$). We find very good agreement for the growth estimator for all smoothing scales, and also reasonably good agreement for the other estimators.} \label{fig:simAidelta} \end{figure} \subsection{Cross-correlation of quadratic estimators with the initial linear field} In this section, we describe our results for the cross-correlations of the three quadratic estimators $\hat{\Delta}_{\alpha}(\boldsymbol{k})$ with the initial linear field $\delta_{1}(\boldsymbol{k})$, and compare the theory predictions with simulations.
The prediction is given by \begin{align*} \langle\hat{\Delta}_{\alpha}(\boldsymbol{k}) \delta_1(\boldsymbol{k}')\rangle' &= b_1 N_{\alpha\alpha}(\boldsymbol{k})\sum_{\beta\in\{G,S,T\}}c_{\beta}P_{\text{lin}}(\boldsymbol{k}) \int_{\boldsymbol{q}} \frac{f_{\alpha}(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q})f_{\beta}(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q})}{2P_{\text{tot}}(\boldsymbol{q})P_{\text{tot}}(\boldsymbol{k}-\boldsymbol{q})} W(R\boldsymbol{q})W(R(\boldsymbol{k}-\boldsymbol{q}))+ P_{\alpha,\text{shot}}(\boldsymbol{k})\\ &= b_1P_{\text{lin}}(\boldsymbol{k})\sum_{\beta\in\{G,S,T\}}c_{\beta}\frac{N_{\alpha\alpha}(\boldsymbol{k})}{N_{\alpha\beta}(\boldsymbol{k})} + P_{\alpha,\text{shot}}(\boldsymbol{k})\ , \numberthis \label{eq:Aidelta} \end{align*} where the prime on the left-hand side denotes that the factor of $(2\pi)^3 \delta_{\rm D}(\boldsymbol{k}+\boldsymbol{k}')$ has been omitted, and $c_{\beta}$ are bias parameters corresponding to the growth, shift and tidal terms, which can be measured from either simulations or data. In our analysis we use the bias parameters from Table~\ref{halotable}, measured in simulations in \cite{Abidi2018}. In Eq.~\eqref{eq:Aidelta}, $P_{\alpha,\text{shot}}$ is the bispectrum shot noise term. Since one field is the linear field, all contributions to this shot noise come from the stochastic bias terms in the two galaxy fields $\delta_{\rm g}$ in the quadratic estimator, such as $\varepsilon$ and $\varepsilon_{\delta} \delta$ (see App.~\ref{app:shot} or \citealt{Desjacques:2016bnm} for more discussion about stochastic bias terms). The expression for this shot noise contribution in this case can also be derived from Eq.~\eqref{eq:nrtshot}, and takes the form \begin{equation} P_{\alpha,\text{shot}}(\boldsymbol{k}) = \frac{b_1}{\bar{n}}P_{\text{lin}}(\boldsymbol{k})N_{\alpha\alpha}(\boldsymbol{k})\int_{\boldsymbol{q}} \frac{f_{\alpha}(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q})}{2P_{\text{tot}}(\boldsymbol{q})P_{\text{tot}}(\boldsymbol{k}-\boldsymbol{q})} W(R\boldsymbol{q})W(R(\boldsymbol{k}-\boldsymbol{q})) . \label{eq:shotalpha} \end{equation} \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{Figures_Sim/AiAjFullWiener_bin1} \caption{Auto-correlations of the quadratic estimators $\hat{\Delta}_{\alpha}$, for the same smoothing scales shown in Fig.~\ref{fig:simAidelta}. The predictions for the growth estimator agree with simulations for all smoothing scales. For the other estimators, the predictions agree with simulations for large smoothing scales, while for small smoothing scales they slightly disagree with the simulation results as higher-order terms become more important.} \label{fig:AiAjsim} \end{figure*} In Fig.~\ref{fig:simAidelta}, we compare theory with simulations for three different values of $k_{\text{max}}$. Although we only use the growth estimator for the Fisher analysis in this work, here we also compare simulation results for the shift and tidal estimators. For the growth estimator, we find that the theory predictions agree very well with the simulation results up to $k_{\text{max}}=0.25h\, {\rm Mpc}^{-1}\,$ at redshift $z=0$. For the other estimators, we also find reasonably good agreement; however, upon close inspection we can see small disagreements which might arise from higher-order terms ignored in our theory predictions.
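For reference, the simulation points in figures like Fig.~\ref{fig:simAidelta} come from binning auto- and cross-spectra of gridded fields in shells of $k$. A minimal sketch of such a band-power estimate is given below; the function name, binning choices, and FFT normalization convention are our own illustrative assumptions, not taken from the actual analysis code.
\begin{verbatim}
import numpy as np

def cross_spectrum(f1, f2, L, nbins=30):
    """Spherically averaged cross spectrum of two real gridded fields
    in a periodic box of side L [Mpc/h]; empty bins return NaN."""
    N = f1.shape[0]
    kf = 2.0 * np.pi / L
    k1d = np.fft.fftfreq(N, d=1.0 / N) * kf
    KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2).ravel()

    F1, F2 = np.fft.fftn(f1), np.fft.fftn(f2)
    # Continuum normalization: delta(k) ~ (L/N)^3 FFT, P = |delta|^2 / L^3.
    raw = ((F1 * np.conj(F2)).real * (L / N**2) ** 3).ravel()

    edges = np.linspace(kf, kmag.max(), nbins + 1)
    idx = np.digitize(kmag, edges)
    pk = np.array([raw[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), pk
\end{verbatim}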
Interestingly, for $k_{\text{max}}=0.25h\, {\rm Mpc}^{-1}\,$, we can see in Fig.~\ref{fig:simAidelta} that the shape of the cross-correlation of the growth estimator with the density field is very similar to the linear power spectrum on large scales. The scale-dependent bias factor in Eq.~\eqref{eq:Aidelta} is flat on large scales, indicating that the reconstruction works very well for large $k_{\text{max}}$. \subsection{Auto- and cross-correlations of quadratic estimators} In this section we discuss our results for the auto- and cross-correlations of the three quadratic estimators from simulations and compare the results with our linear-order theoretical prediction, given by \begin{align*} \langle\hat{\Delta}_{\alpha}(\boldsymbol{k}) \hat{\Delta}_{\beta}(\boldsymbol{k}')\rangle' & = b_1^4N_{\alpha \alpha}(\boldsymbol{k})N_{\beta \beta}(\boldsymbol{k}) \int_{\boldsymbol{q}} \frac{f_{\alpha}(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q})f_{\beta}(\boldsymbol{q},\boldsymbol{k}-\boldsymbol{q})}{\left[ 2P_{\text{tot}}(\boldsymbol{q})P_{\text{tot}}(\boldsymbol{k}-\boldsymbol{q})\right]^2} W(R\boldsymbol{q})^2W(R(\boldsymbol{k}-\boldsymbol{q}))^2 P_{\text{lin}}(\boldsymbol{q}) P_{\text{lin}}(\boldsymbol{k}-\boldsymbol{q}) \\ &\quad+ b_1^2 P_{\text{lin}}(\boldsymbol{k})\sum_{i,j}c_ic_j\frac{N_{\alpha \alpha}(\boldsymbol{k})}{N_{\alpha i}(\boldsymbol{k})}\frac{N_{\beta \beta}(\boldsymbol{k})}{N_{\beta j}(\boldsymbol{k})} + P_{\alpha\beta,\text{shot}}(\boldsymbol{k})\ . \numberthis \label{eq:simAiAj} \end{align*} The first term is of order $\mathcal{O}(\delta_1^4)$, while the second and third are of order $\mathcal{O}(\delta_1^6)$. The third term, $P_{\alpha\beta,\text{shot}}$, is the contribution arising from halo shot noise, and is given in App.~\ref{app:shot}. In Fig.~\ref{fig:AiAjsim} we compare cross-correlation results from simulations with theory, for the growth, shift, and tidal estimators, using the same three smoothing scales as above. The simulations and theory agree very well up to $k_{\text{max}}=0.1h\, {\rm Mpc}^{-1}\,$ at $z=0$. For larger $k_{\text{max}}$ we see good agreement for the growth estimator and reasonable agreement for the tidal and shift estimators. The small disagreement of the linear predictions for the tidal and shift estimators with simulations at the higher $k_{\text{max}}$ shows that higher-order terms become important for these estimators. The detailed impact of these higher order corrections from biasing or scale dependent stochasticity will be the subject of future inquiry. Although we appear to have excellent agreement for the growth term at higher $k_{\text{max}}$, to be conservative, we still set $k_{\text{max}}=0.1 h\, {\rm Mpc}^{-1}\,$ at redshift $z=0$ in our forecasts in Sec.~\ref{sec:forecasts}. We scale this to other redshifts by making use of the fact that perturbation theory and the bias expansion at a given order remain valid to higher $k$ at higher redshifts. \begin{figure*}[t] \centering \includegraphics[height=4.8in,width=0.9\textwidth]{Figures_Sim/g_autocorrelations_new_rgd.pdf} \caption{Comparison of the auto power spectrum of the growth estimator $\hat{\Delta}_{\text{G}}$, normalised by $N_{\text{GG}}$ computed from theory (in red), with $(\langle\hat{\Delta}_{\text{G}}\delta_1\rangle)^2/P_{\text{lin}}$ (in blue). We compare simulation results (points) with theory predictions (lines) for the same smoothing scales as Figs.~\ref{fig:simAidelta} and~\ref{fig:AiAjsim}. We again find excellent agreement between simulations and theory.
In the bottom right panel, we plot the cross-correlation coefficient $r_{\text{G}\delta_1}$ between the growth estimator and the linear density field for the three smoothing scales. We see that $r_{\text{G}\delta_1} >0.9$ for $R=4 h^{-1}$ Mpc, which is why, in the bottom left panel, $\langle\hat{\Delta}_{\text{G}} \hat{\Delta}_{\text{G}}\rangle$ is signal dominated.} \label{fig:growthsim} \end{figure*} In Fig.~\ref{fig:growthsim}, we plot the auto spectra of the growth estimator, normalized with $N_{\rm GG}$ (unlike in the previous plots), in order to compare them to an approximation of the signal power spectrum, given by the second term in Eq.~\eqref{eq:simAiAj} (the first and third terms represent noise). Since the contribution of the cross-shot noise is small, the signal part can be approximated by squaring the cross-spectrum of the growth estimator with the linear density field and dividing by the linear power spectrum to ensure the correct normalization, i.e.\ $(\langle\hat{\Delta}_{\text{G}}\delta_1\rangle)^2/P_{\text{lin}}$; we show this in blue in Fig.~\ref{fig:growthsim}. For the two larger smoothing scales, the spectra of the estimator are dominated by the noise contribution (which is white at low $k$). The excellent agreement between theory (red solid lines) and simulations (red points) for all smoothing scales serves as an additional verification that the reconstruction procedure is working as expected for reasonable values of $k_{\rm max}$. In addition to the auto spectra, to check how well the reconstruction is working, we plot the cross-correlation coefficients between the growth estimator and the linear density field in the bottom right panel of Fig.~\ref{fig:growthsim} for three different $k_{\text{max}}$. The cross-correlation coefficient for low $k_{\text{max}}$ is very low, $r_{\text{G}\delta_1} < 0.4$, explaining why the auto spectra in the top left panel are noise dominated. However, for the highest $k_{\rm max}$ we consider, $0.25h\, {\rm Mpc}^{-1}\,$, the cross-correlation coefficient is $r_{\text{G}\delta_1} > 0.9$, which explains why the reconstruction works very well and the auto spectra for high $k_{\text{max}}$ are signal dominated. \subsection{Visualization of reconstructed field} To visualize how well we are reconstructing the linear density field on large scales in simulations, we compare 2D slices of thickness $6 h^{-1}\text{Mpc}$ of the linear density field and the reconstructed field in Fig.~\ref{fig:densplot}. We perform the reconstruction using $k_{\text{max}} = 0.25 h$ Mpc$^{-1}$, i.e., smoothing at a scale of $R= 4 h^{-1}$ Mpc. In the visualization, we apply an external smoothing of $R=20 h^{-1}$ Mpc to both the linear field and the reconstructed field, which removes all modes with $k > 0.05 h\, {\rm Mpc}^{-1}\,$. Our comparison of the linear and reconstructed fields in Fig.~\ref{fig:densplot} shows that the reconstruction indeed recovers most of the large scale features in the linear density field. In Fig.~\ref{fig:pdfsdh} we show histograms, probing the one-point probability distribution functions, of the linear density field and the reconstructed field. We see that the reconstructed field is nearly Gaussian, partially justifying our approximation of a Gaussian likelihood in the next section.
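Schematically, the cross-correlation coefficient plotted in Fig.~\ref{fig:growthsim} is a simple ratio of binned spectra. Reusing the hypothetical \texttt{cross\_spectrum} helper and the \texttt{growth\_hat} field from the earlier sketches, and assuming \texttt{delta\_1} holds the gridded linear field, it can be estimated as follows:
\begin{verbatim}
import numpy as np

# Assumes growth_hat (Fourier space), delta_1 (configuration space),
# and L from the earlier sketches; all fields share the same N^3 grid.
growth_x = np.fft.ifftn(growth_hat).real

k, P_GG = cross_spectrum(growth_x, growth_x, L)
_, P_11 = cross_spectrum(delta_1, delta_1, L)
_, P_G1 = cross_spectrum(growth_x, delta_1, L)

# r -> 1 indicates a high-fidelity reconstruction at that k.
r_G1 = P_G1 / np.sqrt(P_GG * P_11)
\end{verbatim}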
\begin{figure}[t] \centering \includegraphics[width=0.42\textwidth]{Figures_Sim/linear.jpg} \includegraphics[width=0.485\textwidth]{Figures_Sim/recon.jpg} \caption{2D slices of the 3D linear density field (left panel) and the growth estimator $\hat{\Delta}_{\text{G}}$ (right panel). For the growth estimator we used $R=4 h^{-1}$ Mpc smoothing which corresponds to $k_{\text{max}}=0.25 h$ Mpc$^{-1}$. We apply an external smoothing of $R=20 h^{-1}$ Mpc to both the linear and reconstructed fields. As expected, we find that the reconstruction reproduces many of the large-scale features in the linear density field.} \label{fig:densplot} \end{figure} \begin{figure} \centering \includegraphics[width=0.55\textwidth]{Figures_Sim/pdfs_kmin.pdf} \caption{Probability distribution functions (histograms) of the linear density field and the reconstructed field from the halo density field of mass bin I. As in Fig.~\ref{fig:densplot}, we use $k_{\text{max}} = 0.25 h\, {\rm Mpc}^{-1}\,$ for the reconstruction and apply an external smoothing scale of $R=20h^{-1}$ Mpc to both the linear field and the reconstructed field. The PDFs of the reconstructed field are scaled to have the same variance as the linear field, and shifted to have mean 0. We find that the PDF of the reconstructed field is very close to Gaussian. Note that here we have applied a low-$k$ cutoff to the modes used for reconstruction of $k_\text{min}=0.05 h\ \text{Mpc}^{-1}$ in order to match the approach in our forecast section below.} \label{fig:pdfsdh} \end{figure}
{ "timestamp": "2020-07-17T02:21:50", "yymm": "2007", "arxiv_id": "2007.08472", "language": "en", "url": "https://arxiv.org/abs/2007.08472" }
\section{INTRODUCTION} Stability and convergence analysis of adaptive control schemes has traditionally been based on Lyapunov stability notions and techniques \cite{sun,fidan,narendrabook,goodwinbook,krsticbookadaptive}. Lyapunov-like functions are selected in the design of adaptive control to penalize the magnitude of the tracking or regulation error, but are at the same time useful in designing an adaptive law to generate the parameter estimates that feed the control law. Control designs that aim to drive the Lyapunov-like functions to zero lead to gradient based adaptive laws with constant adaptive gains, due to their convenient structure. On the other hand, it is well observed that least-squares (LS) algorithms have the advantage of faster convergence; hence, LS based adaptive control has the potential to enhance convergence performance in direct adaptive control approaches as well \cite{fidan,guler1,guler2,krstic,ls3}. Despite the wide use of gradient based on-line parameter identifiers, LS adaptive algorithms with forgetting factors were developed in \cite{guler1,guler2} to achieve faster settling and/or lower sensitivity to measurement noise, properties justified there by simulation and experimental results. LS parameter estimation has been used for convergence and robustness analysis in either indirect adaptive controllers or combinations of indirect and direct adaptive controllers \cite{ls3,ls4,ls6,ls7,ls9,ls10,ls11,cho,krstic,karafyllis2018,jiang2015}. In addition to the existing LS based adaptive control theory studies, there are some publications in the recent literature on real-time applications to robotic manipulators \cite{ls_app3,ls_app4,ls_app7}, unmanned aerial vehicles \cite{ls_app1,ls_app8}, and passenger vehicles \cite{vahidi,bae,pavkovic2009,acosta2016,chen2018,seyed2019,patra2015}. Most of the existing studies on LS based adaptive control follow the indirect approach as opposed to direct adaptive control. One reason for this is that producing an LS based scheme within the Lyapunov-based design of direct adaptive control is analytically involved. Unlike indirect ones, in direct adaptive control schemes the estimated parameters are those directly used in the adaptive control laws, without any intermediate step. This paper proposes a constructive analysis framework for recursive LS (RLS) on-line parameter identifier based direct adaptive control. In the literature, \cite{sun,fidan} considered the possible use of LS on-line parameter identifiers in direct model reference adaptive control (MRAC); however, full details of the design were not provided, and Lyapunov analysis with LS parameter estimation was not addressed. Constructive Lyapunov analysis of RLS parameter estimation based direct adaptive control, aiming to construct the adaptive and control laws of the scheme via this analysis, is approached in this paper following the steps of the analysis for gradient parameter identification based schemes in the literature. The main difference is the replacement of the constant adaptation gain in gradient based schemes with a time varying adaptive gain (covariance) matrix. Aiming to make this replacement systematic and to formally quantify the use of the time-varying adaptive gain, a new Lyapunov-like function is constructed through the provided analysis.
Later in the paper, to demonstrate the transient performance of RLS parameter identification based direct adaptive control, a simulation based case study on adaptive cruise control (ACC) is provided, where the performance is compared in detail with that of gradient parameter estimation based schemes. The comparative simulation testing and analysis are performed in Matlab/Simulink and CarSim environments. The following sections of the paper are organized as follows. Section II is dedicated to the basics of direct MRAC design. Section III provides the Lyapunov-like function composition and analysis. Comparative simulation testing and analysis of an ACC case study for RLS based vs.\ gradient based MRAC is presented in Section IV. Final remarks are given in Section V. \section{Background: Direct Model Reference Adaptive Control (MRAC) Design} In MRAC, the desired plant behaviour is described by a reference model, often formulated as a transfer function driven by a reference signal. Then, a control law is developed via model matching so that the closed loop system has a transfer function equal to that of the reference model \cite{sun,fidan,narendrabook,goodwinbook}. In MRAC, the plant has to be minimum phase, i.e., all zeros have to be stable. Consider the SISO LTI plant \begin{equation} \label{plant_AC} \begin{aligned} \dot{x}_p &= A_p x_p + B_p u_p, \quad x_p(0) = x_{p0}, &\\ y_p& = C_p^T x_p, & \end{aligned} \end{equation} \noindent where $x_p \in \mathbb{R}^n$, $y_p,u_p \in \mathbb{R}$, and $A_p, B_p, C_p$ have the appropriate dimensions. The transfer function of the plant is given by \begin{equation} G_p (s)=k_p \frac{Z_p (s)}{R_p (s)}, \end{equation} \noindent where $k_p$ is the high frequency gain. The reference model is described by \begin{equation} \label{refmodel_mrac} \begin{aligned} \dot{x}_m &= A_m x_m + B_m r, \quad x_m(0) = x_{m0}, &\\ y_m& = C_m^T x_m. & \end{aligned} \end{equation} \noindent The transfer function of the reference model \eqref{refmodel_mrac} is given by \begin{equation} \label{wm} W_m (s)=k_m \frac{Z_m (s)}{R_m (s)}, \end{equation} \noindent with constant design parameter $k_m$. The control task \cite{sun,fidan} is to find the plant input $u_p$ so that all signals are bounded and the plant output $y_p$ tracks the reference model output $y_m$ for a given reference input $r(t)$, under the following assumptions: \begin{assumption} \textbf{Plant Assumptions} \begin{itemize} \item[i] $Z_p (s)$ is a monic Hurwitz polynomial. \item[ii] An upper bound $n$ on the degree $n_p$ of $R_p (s)$ is known. \item[iii] The relative degree $n^*=n_p-m_p$ of $G_p (s)$ is known, where $m_p$ is the degree of $Z_p (s)$. \item[iv] The sign of $k_p$ is known. \end{itemize} \end{assumption} \begin{assumption} \textbf{Reference Model Assumptions} \begin{itemize} \item[i] $Z_m (s), R_m (s)$ are monic Hurwitz polynomials of degrees $q_m$ and $p_m$, respectively. \item[ii] The relative degree $n^*_m=p_m-q_m$ of $W_m (s)$ is the same as that of $G_p (s)$, i.e., $n^*_m=n^*$.
\end{itemize} \end{assumption} \noindent Consider the fictitious feedback control law \cite{sun,fidan} \begin{equation} \label{direct_u} u_p={\theta^*_1}^T \frac{\alpha(s)}{\Lambda (s)} u_p+ {\theta^*_2}^T \frac{\alpha(s)}{\Lambda(s)} y_p+\theta^*_3 y_p+c^*_0 r, \end{equation} \noindent where \begin{equation} \begin{aligned} & c^*_0=\frac{k_m}{k_p}, &\\ &\alpha (s) \triangleq \alpha_{n-2} (s) = [s^{n-2}, s^{n-3}, \cdots, s, 1]^T \quad \quad \text{for} \quad n\geq2, &\\ &\alpha (s) \triangleq 0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \text{for} \quad n = 1. & \end{aligned} \end{equation} \noindent $\Lambda (s)$ is an arbitrary monic Hurwitz polynomial of degree $n-1$ containing $Z_m (s)$ as a factor, i.e., \begin{equation*} \Lambda (s) = \Lambda_0 (s) Z_m(s), \end{equation*} \noindent implying that $\Lambda_0 (s)$ is monic and Hurwitz. The fictitious ideal model reference control (MRC) parameter vector $ \theta^*=\begin{bmatrix} \theta^{*T}_1 & \theta^{*T}_2 & \theta^*_3 & c^*_0 \end{bmatrix}^T$ is chosen so that the transfer function from $r$ to $y_p$ is equal to $W_m (s)$. \noindent The closed-loop reference-to-output relation for the MRAC scheme above is derived in \cite{sun,fidan} as \begin{equation} \label{gc} y_p = G_c (s) r, \end{equation} \noindent where \begin{equation*} G_c(s) = \frac{c^*_0 k_p Z_p \Lambda^2}{\Lambda \left[ \left( \Lambda- \theta^{*T}_1 \alpha \right) R_p - k_p Z_p \left( \theta^{*T}_2 \alpha + \theta^*_3 \Lambda \right) \right]}. \end{equation*} \noindent The ideal MRC parameter vector $\theta^*$ is selected to match the coefficients of $G_c (s)$ in \eqref{gc} and $W_m (s)$ in \eqref{wm}. Nonzero initial conditions will affect the transient response of $y_p(t)$. \noindent A state-space realization of the control law (\ref{direct_u}) is given by \cite{sun,fidan} \begin{equation} \label{ne} \begin{aligned} &\dot{\omega}_1=F\omega_1+g u_p, \quad \omega_1(0)=0, &\\& \dot{\omega}_2=F\omega_2+g y_p, \quad \omega_2(0)=0, &\\& u_p=\theta^{*T} \omega,& \end{aligned} \end{equation} \noindent where $\omega_1, \omega_2 \in \mathbb{R}^{n-1},$ \begin{equation} \begin{aligned} &\theta^*=\begin{bmatrix} \theta^{*T}_1 & \theta^{*T}_2 & \theta^*_3 & c^*_0 \end{bmatrix}^T, \quad \omega=\begin{bmatrix} \omega_1^T & \omega_2^T & y_p & r \end{bmatrix}^T, & \\ &F= \begin{bmatrix} -\lambda_{n-2} & -\lambda_{n-3} & -\lambda_{n-4} & \cdots & -\lambda_{0} \\ 1 & 0 &0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}, &\\ & \Lambda (s)=s^{n-1}+\lambda_{n-2} s^{n-2} +\cdots+\lambda_1 s+\lambda_0=det(sI-F), &\\& g=\begin{bmatrix} 1 & 0 & \cdots&0 \end{bmatrix}^T.& \end{aligned} \end{equation} \noindent Following the certainty equivalence approach, the state-space realization of the actual adaptive control law is obtained from (\ref{ne}) by replacing $\theta^*$ with its estimate: \begin{equation} \label{mracsummary} \begin{aligned} &\dot{\omega}_1=F\omega_1+g u_p, \quad \omega_1(0)=0, &\\& \dot{\omega}_2=F\omega_2+g y_p, \quad \omega_2(0)=0, &\\& u_p=\theta^{T} \omega,& \end{aligned} \end{equation} \noindent where $\theta(t)$ is the online estimate of the unknown ideal MRC parameter vector $\theta^*$.
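As an illustration, a minimal forward-Euler sketch of one step of the realization \eqref{mracsummary} is given below. The function name and the use of the previous control value to propagate $\omega_1$ are our own discretization choices for simulation purposes, not part of the continuous-time design.
\begin{verbatim}
import numpy as np

def mrac_step(omega1, omega2, theta, y_p, r, u_prev, F, g, dt):
    """One forward-Euler step of the MRAC realization:
    omega1' = F omega1 + g u_p,  omega2' = F omega2 + g y_p,
    u_p = theta^T omega, with omega = [omega1; omega2; y_p; r]."""
    omega1 = omega1 + dt * (F @ omega1 + g * u_prev)
    omega2 = omega2 + dt * (F @ omega2 + g * y_p)
    omega = np.concatenate([omega1, omega2, [y_p, r]])
    u_p = float(theta @ omega)
    return omega1, omega2, omega, u_p
\end{verbatim}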
In order to find the adaptive law generating $\theta(t)$, first, a composite state space representation of the plant and controller \cite{fidan} is considered as follows: \begin{equation} \label{sys_eq} \begin{aligned} \dot{Y}_c &= A_0 Y_c + B_c u_p, &\\ y_p &= C_c^T Y_c, &\\ u_p &= \theta^T \omega, \end{aligned} \end{equation} \noindent where $Y_c=[x_p^T,\omega_1^T, \omega_2^T]^T$, \begin{equation} A_0 = \begin{bmatrix} A_p & 0 & 0 \\ 0 & F & 0 \\ g C_p^T & 0 & F \end{bmatrix} , \quad B_c=\begin{bmatrix} B_p \\ g \\ 0 \end{bmatrix}, \quad C_c^T= \begin{bmatrix} C_p^T, & 0,& 0 \end{bmatrix}. \end{equation} \noindent The system equation \eqref{sys_eq} can be further written \cite{fidan} in the compact form \begin{equation} \begin{aligned} \dot{Y}_c &= A_c Y_c + B_c c_0^* r + B_c ( u_p - \theta^{*T} \omega ), \quad Y_c(0)=Y_0, &\\ y_p &= C_c^T Y_c, & \end{aligned} \end{equation} \noindent where \begin{equation} A_c=\begin{bmatrix} A_p+B_p \theta^*C_p^T & B_p \theta_1^{*T} & B_p \theta_2^{*T} \\ g\theta^*_3C^T_p & F+g\theta^{*T}_1 & g\theta^{*T}_2 \\ g C^T_p & 0 & F \end{bmatrix}. \end{equation} \noindent Let the state error be \begin{equation} \label{state_error} e=Y_c-Y_m, \end{equation} \noindent and the output tracking error be \begin{equation} \label{track_error} e_1=y_p-y_m. \end{equation} \noindent The error equation is written using \eqref{state_error} and \eqref{track_error} as follows: \begin{equation} \label{error_eq} \begin{aligned} &\dot{e}=A_c e+B_c (u_p-\theta^{*T} \omega), \quad e(0)=e_0, &\\& e_1=C^T_c e, & \end{aligned} \end{equation} \noindent where $A_c, B_c, C_c$ are the closed-loop parameter matrices defined above. We have \begin{equation} W_m(s) = C_c^T (sI-A_c)^{-1} B_c c_0^*; \end{equation} \noindent then $e_1$ becomes \begin{equation} e_1 = W_m(s) \rho^* ( u_p - \theta^{*T} \omega ), \end{equation} \noindent where $\rho^* = 1/c_0^*$. The estimate $\hat{e}_1$ of $e_1$ is defined as \begin{equation} \hat{e}_1 = W_m(s) \rho ( u_p - \theta^{T} \omega ), \end{equation} \noindent where $\rho$ is the estimate of $\rho^*$. Since the control input is \begin{equation} \label{controlinputdirect} u_p = \theta^T(t) \omega, \end{equation} \noindent the estimate $\hat{e}_1$ and the estimation error $\epsilon_1$ become \begin{equation} \label{errors} \hat{e}_1=0, \quad \epsilon_1 = e_1-\hat{e}_1 = e_1. \end{equation} \noindent Substituting the control law \eqref{controlinputdirect} into \eqref{error_eq}, we obtain \begin{equation} \label{error_eq2} \begin{aligned} &\dot{e}=A_c e+B_c \tilde{\theta}^{T} \omega, &\\& e_1=C^T_c e, & \end{aligned} \end{equation} \noindent where \begin{equation} \tilde{\theta} = \theta(t)-\theta^*. \end{equation} \begin{comment} The parametric model based on the tracking error and reference model is obtained as follows: \begin{equation} \begin{aligned} &z=\rho^* ({\theta^*}^T \phi + z_0), \quad \theta^*=\begin{bmatrix} \theta^*_1 & \theta^*_2 & \theta^*_3 & c^*_0 \end{bmatrix}^T, &\\& c^*_0=1, \quad \rho^*=\frac{1}{c^*_0}=1, &\\& z=e, \quad \phi= -W_m (s) \omega, \quad z_0=W_m (s) u_p, &\\& \omega=\begin{bmatrix} \omega^T_1 & \omega^T_2 & y_p & r \end{bmatrix}^T=\begin{bmatrix} \frac{\alpha(s)}{\Lambda (s)} u & \frac{\alpha(s)}{\Lambda (s)} y_p & y_p & r \end{bmatrix}^T.
& \end{aligned} \end{equation} \noindent Estimation model and estimation error are written as \begin{equation} \begin{aligned} & z=\rho ({\theta}^T \phi + z_0), \quad \theta=\begin{bmatrix} \theta^T_1 & \theta^T_2 & \theta^T_3 & c_0 \end{bmatrix}^T, &\\& \varepsilon=\frac{z(t)-\hat{z}(t)}{m^2_s (t)}=\frac{z(t)-\rho (t) \zeta (t)}{m^2_s (t)}, \quad \zeta (t)=\theta^T (t) \phi (t)+z_o (t). & \end{aligned} \end{equation} \end{comment} \section{Lyapunov-Like Function Composition and Analysis for Least-Squares Based Direct MRAC} \noindent In the typical direct adaptive control designs of the literature, which are gradient adaptive law based, the Lyapunov-like function is chosen as \begin{equation} \label{lyapunov_direct} V(\tilde{\theta}, e)=\frac{e^T P_c e}{2}+\frac{\tilde{\theta}^T \Gamma^{-1} \tilde{\theta}}{2}|\rho^*|, \end{equation} \noindent where $\tilde{\theta}=\theta-\theta^*$, $\theta^*$ and $\theta$, respectively, are the ideal MRC and actual MRAC parameter vectors defined in Section II, $P_c=P^T_c$ is a positive definite matrix satisfying certain conditions to be detailed in the sequel, and $\Gamma=\Gamma^T$ is a constant positive definite adaptive gain matrix. $P_c$ is selected to satisfy the Meyer-Kalman-Yakubovich Lemma \cite{fidan} algebraic equations \begin{equation} \begin{aligned} P_c A_c+ A_c^T P_c&=-qq^T-\nu_c L_c, &\\P_c B_c c^*_0&=C_c, & \end{aligned} \end{equation} \noindent where $q$ is a vector, $L_c=L^T_c >0$, and $\nu_c>0$ is a small constant. The time derivative $\dot{V}$ of $V$ along the solutions of \eqref{error_eq2} is \begin{equation} \dot{V}=-\frac{e^T qq^T e}{2}-\frac{\nu_c}{2}e^T L_c e+e^T P_c B_c c^*_0\rho^* \tilde{\theta}^T \omega+\tilde{\theta}^T \Gamma^{-1} \dot{\tilde{\theta}}|\rho^*|. \end{equation} \noindent Since $e^T P_c B_c c^*_0=e^T C_c=e_1$ and $\rho^*=|\rho^*| sgn(\rho^*)$, $\dot{V} \leq 0$ is established by defining the gradient based adaptive law \begin{equation} \label{gradient_lyapunov} \dot{\theta}=-\Gamma e_1 \omega sgn(\rho^*), \end{equation} \noindent which, noting that $\dot{\tilde{\theta}}=\dot{\theta}$, leads to \begin{equation} \label{gradient_lyapunov2} \dot{V}=-\frac{e^T qq^T e}{2}-\frac{\nu_c}{2}e^T L_c e. \end{equation} \noindent \eqref{lyapunov_direct} and \eqref{gradient_lyapunov2} imply that $V, e, \tilde{\theta} \in \mathcal{L}_\infty$. Since $e = Y_c - Y_m$ and $Y_m \in \mathcal{L}_\infty$, we have $Y_c \in \mathcal{L}_\infty$, which gives $x_p, y_p, \omega_1, \omega_2 \in \mathcal{L}_\infty$. We also know that $u_p = \theta^T\omega$ and $\theta, \omega \in \mathcal{L}_\infty$; therefore, $u_p \in \mathcal{L}_\infty$, and all the signals in the closed-loop plant are bounded. Moreover, \eqref{gradient_lyapunov2} implies $e \in \mathcal{L}_2$; since $\dot{e} \in \mathcal{L}_\infty$, it follows from Barbalat's lemma that the tracking error $e_1 = y_p - y_m$ goes to zero as time goes to infinity. With the gradient based adaptive law \eqref{gradient_lyapunov} and its constant adaptive gain $\Gamma$, fast adaptation can be achieved only by using a large adaptive gain to reduce the tracking error rapidly. However, introduction of a large adaptive gain $\Gamma$ in many cases leads to high-frequency oscillations which adversely affect the robustness of the adaptive control law. As the adaptive gain increases, the time-delay margin of a standard MRAC decreases, causing loss of robustness.
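In discrete-time simulation, the gradient adaptive law \eqref{gradient_lyapunov} amounts to a one-line update; a minimal Euler sketch (with our own function name and arguments, numpy arrays assumed) is:
\begin{verbatim}
def gradient_update(theta, e1, omega, Gamma, sgn_rho, dt):
    """Euler step of theta_dot = -Gamma e1 omega sgn(rho*);
    Gamma is a constant positive definite gain matrix."""
    return theta - dt * sgn_rho * e1 * (Gamma @ omega)
\end{verbatim}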
Unlike the gradient based adaptive law \eqref{gradient_lyapunov} with constant adaptive gain $\Gamma$, the generation of a time varying adaptive law gain matrix $P(t)$, adjusted based on the identification error during the estimation process, allows a large initial adaptive gain to be set arbitrarily and then driven to lower values to adaptively achieve the desired tracking performance. For the generation of the time varying gain $P(t)$, an efficient systematic approach is the use of LS based adaptive laws, which are observed to have the advantage of faster convergence and robustness to measurement noise \cite{fidan,guler1,guler2,krstic,ls3}. Next, we propose a formal constructive analysis framework for integrating recursive LS (RLS) based estimation into direct adaptive control, following the typical steps above, but constructing a new Lyapunov-like function for the analysis to replace \eqref{lyapunov_direct}, ending up with a control law that is either the same as or similar to \eqref{controlinputdirect}, together with an adaptive law that is the RLS based alternative of \eqref{gradient_lyapunov}. The design below follows a reverse process of this analysis, starting from the RLS based alternative of the adaptive law. We define $\dot{\theta}$ in terms of the RLS algorithm by replacing the constant adaptive gain $\Gamma$ with the time-varying gain matrix $P(t)$. In this regard, we rewrite the Lyapunov-like function \eqref{lyapunov_direct} as follows: \begin{equation} \label{lyapunov_direct_ls} V(\tilde{\theta}, e)=\frac{e^T P_c e}{2}+\frac{\tilde{\theta}^T P^{-1} \tilde{\theta}}{2}|\rho^*|. \end{equation} The time derivative $\dot{V}$ of $V$ along the solutions of \eqref{error_eq2} is \begin{equation} \label{lyapunov_direct_ls_deriv} \begin{aligned} \dot{V}=&-\frac{e^T qq^T e}{2}-\frac{\nu_c}{2}e^T L_c e+e^T P_c B_c c^*_0\rho^* \tilde{\theta}^T \omega &\\ &+\frac{1}{2} \tilde{\theta}^T \frac{d(P^{-1})}{dt} \tilde{\theta}|\rho^*| + \tilde{\theta}^T P^{-1} \dot{\tilde{\theta}} |\rho^*|,& \end{aligned} \end{equation} where \begin{equation} \label{fraction} \frac{d(P^{-1})}{dt} = -P^{-1} \dot{P} P^{-1}. \end{equation} If $P(t)$ is updated according to the RLS adaptive law with forgetting factor, \begin{equation} \label{pdotnew} \dot{P}=\beta P - P \omega \omega^T P, \end{equation} then \eqref{fraction} becomes \begin{equation} \label{fraction2} \frac{d(P^{-1})}{dt} = -\beta P^{-1} + \omega \omega^T. \end{equation} Substituting \eqref{fraction2} into \eqref{lyapunov_direct_ls_deriv}, we have \begin{equation} \label{lyapunov_direct_ls_deriv1} \begin{aligned} \dot{V}=&-\frac{e^T qq^T e}{2}-\frac{\nu_c}{2}e^T L_c e +e_1 \rho^* \tilde{\theta}^T \omega - \frac{\beta}{2} \tilde{\theta}^T P^{-1} \tilde{\theta}|\rho^*| &\\&+ \tilde{\theta}^T P^{-1} \dot{\tilde{\theta}} |\rho^*| + \frac{\epsilon^2}{2} |\rho^*|,& \end{aligned} \end{equation} where \begin{equation}\label{epsilon33} \epsilon = \tilde{\theta}^T \omega. \end{equation} $\dot{V} \leq 0$ can be established by choosing \begin{equation} \label{ls_lyapunov1} \dot{\theta}=-P e_1 \omega sgn(\rho^*) +\frac{1}{2} P \epsilon \omega, \end{equation} and noting that $\dot{\tilde{\theta}}=\dot{\theta}$. The equations \eqref{ls_lyapunov1} and \eqref{pdotnew} constitute a new adaptive law, based on the RLS algorithm, that generates the time-varying gain $P(t)$.
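A minimal Euler sketch of the pair \eqref{pdotnew}, \eqref{ls_lyapunov1} is given below. The estimation error $\epsilon$ of \eqref{epsilon33} is passed in as an input, since its computation depends on the error model used; the function name and discretization are our own.
\begin{verbatim}
import numpy as np

def rls_update(theta, P, e1, omega, eps, sgn_rho, beta, dt):
    """Euler step of the RLS based adaptive law with forgetting:
    P_dot     = beta P - P omega omega^T P,
    theta_dot = -P e1 omega sgn(rho*) + (1/2) P eps omega."""
    P_omega = P @ omega          # P is symmetric, so P w w^T P = (Pw)(Pw)^T
    P = P + dt * (beta * P - np.outer(P_omega, P_omega))
    theta = theta + dt * (-sgn_rho * e1 + 0.5 * eps) * P_omega
    return theta, P
\end{verbatim}
Compared with the gradient step sketched earlier, the structural change is the state-dependent gain $P(t)$ in place of the constant $\Gamma$, with the scalar factor multiplying the common vector $P\omega$.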
Substituting \eqref{ls_lyapunov1} and \eqref{pdotnew} into \eqref{lyapunov_direct_ls_deriv1}, \eqref{lyapunov_direct_ls_deriv} becomes \begin{equation} \label{lyapunov_direct_ls_deriv2} \dot{V}=-\frac{e^T qq^T e}{2}-\frac{\nu_c}{2}e^T L_c e \leq 0, \end{equation} leading to the following theorem, which summarizes the stability properties of the RLS based direct MRAC scheme \eqref{mracsummary},\eqref{pdotnew},\eqref{ls_lyapunov1}. \begin{theorem} The RLS parameter estimation based MRAC scheme \eqref{mracsummary},\eqref{pdotnew},\eqref{ls_lyapunov1} has the following properties: \begin{itemize} \item[i.] All signals in the closed-loop are bounded and the tracking error converges to zero asymptotically for any reference input $r \in \mathcal{L}_\infty$. \item[ii.] If the reference input $r$ is sufficiently rich of order $2n$, $\dot{r}\in \mathcal{L}_\infty$, and $Z_p(s), R_p(s)$ are relatively coprime, then $\omega$ is persistently exciting (PE), viz., \begin{equation} \label{PE} \int^{t+T_0}_{t} \omega(\tau) \omega^T(\tau) d\tau \geq \alpha_0T_0 I, \quad \alpha_0, T_0 >0, \quad \forall t\geq 0, \end{equation} which implies that $P,P^{-1} \in \mathcal{L}_{\infty}$ and $\theta(t)\to\theta^{*}$ as $t\to\infty$. When $\beta>0$, the parameter error $\parallel \tilde{\theta} \parallel = \parallel \theta-\theta^* \parallel$ and the tracking error $e_1$ converge to zero exponentially fast. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[i.] From \eqref{lyapunov_direct_ls} and \eqref{lyapunov_direct_ls_deriv2}, we have $V, e, \tilde{\theta} \in \mathcal{L}_\infty$ and $e, e_1 \in \mathcal{L}_2$; as in the gradient based case, $\theta, \omega \in \mathcal{L}_\infty$, and therefore all signals in the closed loop plant are bounded. To complete the proof, we need to show that the tracking error $e_1$ converges to zero asymptotically. Using $\theta,\omega,e \in \mathcal{L}_\infty$ in \eqref{error_eq2}, we have $\dot{e},\dot{e}_1 \in \mathcal{L}_\infty$. Since $\dot{e}_1 \in \mathcal{L}_\infty$ and $e_1 \in \mathcal{L}_2$, the tracking error $e_1$ goes to zero as $t$ goes to infinity. \item[ii.] By Theorem 3.4.3 of \cite{fidan}, if $r$ is sufficiently rich of order $2n$ then the $2n$-dimensional regressor vector $\omega$ is PE. Let $Q=P^{-1}$; then \eqref{fraction2} can be rewritten as \begin{equation} \label{fraction2_new} \dot{Q} = -\beta Q + \omega \omega^T, \end{equation} whose solution is \begin{equation} \label{Q_integral} Q(t) = e^{-\beta t} Q_0 + \int^t_0 e^{-\beta (t-\tau)} \omega(\tau) \omega^T(\tau) d\tau. \end{equation} Since $\omega(t)$ is PE, for $t \geq T_0$, \begin{equation} \label{Q_integral2} \begin{aligned} Q(t) &\geq \int^t_{t-T_0} e^{-\beta (t-\tau)} \omega(\tau) \omega^T(\tau) d\tau &\\&\geq e^{-\beta T_0} \int^t_{t-T_0} \omega(\tau) \omega^T(\tau) d\tau &\\&\geq \alpha_0 T_0 e^{-\beta T_0} I,& \end{aligned} \end{equation} where $\alpha_0, T_0>0$ are the constants in \eqref{PE}. For $t\leq T_0$, \begin{equation} \label{c12} Q(t) \geq e^{-\beta t}Q_0 \geq \lambda_{min}(Q_0)e^{-\beta T_0}I, \end{equation} so that $Q(t) \geq \gamma_1 I$ for all $t \geq 0$, where $\gamma_1 = \min\{\alpha_0 T_0,\lambda_{min}(Q_0)\}e^{-\beta T_0}$. Since $\omega \in \mathcal{L}_\infty$, there exists $\beta_2>0$ such that $\omega\omega^T \leq \beta_2 I$, and hence \begin{equation} \label{c13} Q(t) \leq Q_0 +\beta_2\int^t_{0} e^{-\beta (t-\tau)}d\tau\, I \leq \gamma_2 I, \end{equation} where $\gamma_2 = \lambda_{max}(Q_0)+\frac{\beta_2}{\beta}>0$.
Using \eqref{c12} and \eqref{c13}, we obtain \begin{equation} \gamma^{-1}_2 I \leq P(t)=Q^{-1}(t)\leq \gamma^{-1}_1 I. \end{equation} Therefore, $P(t),Q(t)\in\mathcal{L}_\infty$. Exponential convergence is established following steps similar to those in \cite{fidan}. \end{itemize} \end{proof} \noindent Comparing the two adaptive laws \eqref{gradient_lyapunov} and \eqref{ls_lyapunov1}, we can clearly see the effect of the time varying covariance matrix, reflected as an additional term on top of the counterpart of \eqref{gradient_lyapunov}. \section{Case Study} For the application of RLS based adaptive control, an ACC case study is considered. A basic ACC scheme is given in Fig. \ref{acc_scheme}. ACC regulates the following vehicle's speed $v$ towards the leading vehicle's speed $v_l$ and keeps the distance between the vehicles $x_r$ close to the desired spacing $s_d$. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth , height=0.1 \textwidth]{car.pdf} \caption{Leading and following vehicles.} \label{acc_scheme} \end{figure} \noindent The control objective in ACC is to make the speed and spacing errors approach zero as time increases. This objective can be expressed as \begin{equation} v_r \rightarrow 0, \quad \delta \rightarrow 0, \quad t\rightarrow \infty, \end{equation} \noindent where $v_r =v_l-v$ is the speed error (also called the relative speed) and $\delta=x_r-s_d$ is the spacing error. The desired spacing between the vehicles increases with speed and is given as \begin{equation} s_d=s_0+h v, \end{equation} \noindent where $s_0$ is the fixed spacing for safety, so that the vehicles are not touching each other at zero speed, and $h$ is the constant time headway. The control objective should also satisfy $a_{min} \leq \dot{v} \leq a_{max}$ and keep $|\ddot{v}|$ small. The first constraint restricts the ACC vehicle from generating high accelerations, and the second is imposed for driver comfort. For the ACC system, a simple model approximating the actual vehicle longitudinal dynamics, without considering nonlinearities, is given by \begin{equation} \dot{v}=-a v+b u+d, \end{equation} where $v$ is the longitudinal speed, $u$ is the throttle/brake command, $d$ is the modeling uncertainty, and $a$ and $b$ are positive constant parameters. We assume that $d,\dot{d},v_l,\dot{v}_l$ are all bounded. MRAC is considered so that the throttle command $u$ forces the vehicle speed to follow the output of the reference model \begin{equation} v_m=\frac{a_m}{s+a_m} (v_l+k\delta), \end{equation} where $a_m$ and $k$ are positive design parameters. We first assume that $a,b,$ and $d$ are known and consider the control law \begin{equation} u=k^*_1 v_r+k^*_2 \delta +k^*_3, \end{equation} \begin{equation} k^*_1=\frac{a_m-a}{b}, \quad k^*_2=\frac{a_m k}{b}, \quad k^*_3=\frac{a v_l-d}{b}. \end{equation} Since $a,b,$ and $d$ are unknown, we replace the control law with \begin{equation} \label{u_acc} u=k_1 v_r+k_2 \delta +k_3, \end{equation} where $k_i$ is the estimate of $k^*_i$, to be generated by the adaptive law so that closed-loop stability is guaranteed. The tracking error is given as \begin{equation} \label{acc_bdpm} e=v-v_m=\frac{b}{s+a_m}(u-k^*_1 v_r-k^*_2 \delta -k^*_3). \end{equation} \eqref{acc_bdpm} is in the form of a bilinear dynamic parametric model (B-DPM). Substituting the control law \eqref{u_acc} into \eqref{acc_bdpm}, we obtain \begin{equation} \label{acc_error} e=\frac{b}{s+a_m}(\tilde{k}_1 v_r+\tilde{k}_2 \delta +\tilde{k}_3), \end{equation} where $\tilde{k}_i=k_i-k^*_i$ for $i=1,2,3$.
In order to find the adaptive law, consider the Lyapunov function and its time derivative \cite{fidan}: \begin{equation} \label{acc_lyapunov} V=\frac{e^2}{2}+\sum ^{3}_{i=1} \frac{b}{2 \gamma _i} \tilde{k}^2_i, \quad \gamma_i>0,\ b>0, \end{equation} \begin{equation} \dot{V}=-a_m e^2+b e (\tilde{k}_1 v_r+\tilde{k}_2 \delta +\tilde{k}_3) + \sum ^{3}_{i=1} \frac{b}{\gamma _i} \tilde{k}_i \dot{\tilde{k}}_i . \end{equation} Therefore, the following gradient based adaptive laws are applied to ACC: \begin{equation} \begin{aligned} \dot{k}_1&=Pr\{-\gamma_1 e v_r\}, &\\ \dot{k}_2&=Pr\{-\gamma_2 e \delta\}, &\\ \dot{k}_3&=Pr\{-\gamma_3 e\},& \end{aligned} \end{equation} where the projection operator keeps each $k_i$ within prescribed lower and upper bounds and the $\gamma_i$ are positive constant adaptive gains. These adaptive laws lead to \begin{equation} \dot{V}=-a_m e^2 - \frac{b}{ \gamma _3} \tilde{k}_3 \dot{k}^*_{3}, \end{equation} where $\dot{k}^*_{3 } = \frac{a\dot{v}_l-\dot{d}}{b}$. The projection operator guarantees that the estimated parameters remain inside the prescribed bounded sets; together with the boundedness of $\dot{k}^*_3$, $\dot{V}$ implies that $e\in\mathcal{L}_\infty$, and in turn all other signals in the closed loop are bounded. We then apply the RLS based adaptive law to the ACC scheme and obtain the following equations, used in the simulations: \begin{figure} [h!] \includegraphics[width=0.49 \textwidth , height=0.42\textwidth]{ACC_direct_mrac_result2.png} \caption{ACC comparison results, speed tracking and separation error in Matlab/Simulink.} \label{acc3} \end{figure} \begin{equation} \label{rls_dmracacc} \begin{aligned} \dot{\hat{\theta}}&=Pr\{-P_{ii} e \phi \}, &\\ \dot{P}&= \beta P- P\phi \phi^T P, \quad P(0)=P_0,& \end{aligned} \end{equation} with \begin{equation} \begin{aligned} &e=v-v_m, & \\ &\theta=\begin{bmatrix} k_1, & k_2, & k_3 \end{bmatrix} ^T, &\\ &\phi= \begin{bmatrix} \frac{ v_r}{s+a_m}, & \frac{ \delta}{s+a_m}, & \frac{1}{s+a_m} \end{bmatrix}^T, & \end{aligned} \end{equation} where $P_{ii}$, $i=1,2,3$, are the diagonal elements of the covariance matrix $P$. For the gradient based adaptive law, the constant gains are $\gamma_1=50I$, $\gamma_2=30I$, and $\gamma_3=40I$. For the RLS based algorithm, $\beta=0.95$ and $P(0)=100I_3$ are used. For both the RLS and gradient schemes, Gaussian noise with $\sigma = 0.05$ is applied. Simulation results from Matlab/Simulink for the throttle system are given in Fig. \ref{acc3}. Fig. \ref{acc3} shows the vehicle following behavior for both the gradient based and the RLS based adaptive laws. The speed tracking error shows better performance for the RLS based adaptive law. \begin{figure} [h!] \includegraphics[width=0.49 \textwidth , height=0.38\textwidth]{Carsim_ACC_result.png} \caption{RLS based ACC results in CarSim.} \label{acc4} \end{figure} We also implemented the RLS based adaptive control algorithm \eqref{rls_dmracacc} in CarSim to obtain more realistic results. The vehicle parameters used in CarSim are as follows: $m=567.75~ kg$, $R=0.3~ m$, $I=1.7 ~kgm^2$, $B=0.01 ~kg/s$. The adaptive gains for both the gradient and RLS schemes are the same as in Matlab/Simulink. CarSim results for the RLS based ACC can be found in Fig. \ref{acc4}. The results demonstrate the ability of the following vehicle, equipped with the RLS based adaptive law, to adjust its speed and its distance to the leading vehicle on a dry road.
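To complement the figures, a compact discrete-time implementation of the RLS based adaptive ACC law \eqref{rls_dmracacc} is sketched below. All numerical values are illustrative assumptions rather than the exact simulation settings, the projection operator $Pr\{\cdot\}$ is approximated by elementwise clipping, the update is written in the descent direction consistent with the gradient laws above, and a crude covariance ceiling guards against covariance wind-up when the regressor is not persistently exciting:
\begin{verbatim}
# Sketch only: forward-Euler integration, assumed parameters and bounds.
import numpy as np

a, b, d = 1.2, 2.5, 0.3                    # unknown "true" plant (assumed)
a_m, k_gain, h, s0 = 2.0, 0.5, 1.4, 5.0    # design parameters (assumed)
beta, dt = 0.95, 1e-3

theta = np.zeros(3)                        # estimates [k1, k2, k3]
P = 100.0 * np.eye(3)                      # covariance, P(0) = 100 I
v, x_r, v_l, v_m = 0.0, 30.0, 15.0, 0.0    # initial conditions (assumed)
phi = np.zeros(3)                          # phi = 1/(s+a_m) [v_r, delta, 1]^T

for _ in range(int(30.0 / dt)):
    v_r, delta = v_l - v, x_r - (s0 + h * v)
    omega = np.array([v_r, delta, 1.0])
    u = theta @ omega
    v   += dt * (-a * v + b * u + d)                          # plant
    x_r += dt * v_r                                           # spacing
    v_m += dt * (-a_m * v_m + a_m * (v_l + k_gain * delta))   # ref. model
    phi += dt * (-a_m * phi + omega)                          # regressor filter
    e = v - v_m
    theta = np.clip(theta - dt * P.diagonal() * e * phi, -10, 10)  # Pr{.}
    P += dt * (beta * P - P @ np.outer(phi, phi) @ P)
    if np.trace(P) > 1e4:                  # guard against covariance wind-up
        P *= 1e4 / np.trace(P)
\end{verbatim}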
A systematic replacement of the constant adaptation gain in gradient based schemes with a time-varying adaptive gain (covariance) matrix is explained, following the steps of a Lyapunov analysis. A simulation based case study on ACC is provided to demonstrate the transient performance of RLS parameter identification based direct adaptive control, and the performance is compared in detail with that of gradient parameter estimation based schemes in Matlab/Simulink. The higher-fidelity vehicle simulation software CarSim is used to further evaluate the performance of the RLS parameter identification based direct adaptive control. \bibliographystyle{IEEEtran}
{ "timestamp": "2020-07-20T02:02:04", "yymm": "2007", "arxiv_id": "2007.08578", "language": "en", "url": "https://arxiv.org/abs/2007.08578" }
\section{Deep Learning -- Challenges presented by Hyperspectral Imagery} \label{advanDL-why} Since AlexNet~\cite{krizhevsky2012imagenet} won the ImageNet challenge in 2012, deep learning approaches have gradually replaced traditional methods, becoming the predominant tool in a variety of computer vision applications. Researchers have reported remarkable results with deep neural networks in visual analysis tasks such as image classification, object detection, and semantic segmentation. A major differentiating factor that separates deep learning from conventional neural network based learning is the number of parameters in a model. With hundreds of thousands, or even millions or billions, of parameters, deep neural networks use techniques such as error backpropagation~\cite{rumelhart1988learning}, weight decay~\cite{krogh1992simple}, pretraining~\cite{hinton2006fast}, dropout~\cite{srivastava2014dropout}, and batch normalization~\cite{ioffe2015batch} to prevent the model from overfitting or simply memorizing the data. Combined with increased computing power and specially designed hardware such as Graphics Processing Units (GPUs), deep neural networks are able to learn from and process unprecedentedly large-scale data to generate abstract yet discriminative features and classify them. Although there is significant potential to leverage deep learning advances for hyperspectral image analysis, such data come with unique challenges which must be addressed in the context of deep neural networks for effective analysis. It is well understood that deep neural networks are notoriously data hungry insofar as training the models is concerned. This is attributed to the manner in which neural networks are trained. A typical training of a network comprises two steps: 1) pass data through the network and compute a task dependent loss; and 2) minimize the loss by adjusting the network weights by back-propagating the error~\cite{rumelhart1988learning}. During such a process, a model could easily end up overfitting~\cite{caruana2001overfitting}, particularly if we do not provide sufficient training data. Data annotation has always been a major obstacle in machine learning research -- and this requirement is amplified with deep neural networks. Acquiring extensive libraries such as ImageNet~\cite{deng2009imagenet} for various applications may be very costly and time consuming. This problem becomes even more acute when working with hyperspectral imagery for applications to remote sensing and biomedicine. Not only does one need specific domain expertise to label the imagery, but annotation itself is challenging due to the resolution, scale, and interpretability of the imagery, even by domain experts. For example, it can be argued that it is much more difficult to tell different types of soil tillage apart by looking at a hyperspectral image than it is to discern everyday objects in color imagery. Further, the ``gold standard'' in annotating remotely sensed imagery would be field campaigns in which domain experts verify the objects at the exact geolocations corresponding to the pixels in the image. This can be very time consuming and, for many applications, infeasible. It is hence common in hyperspectral image analysis tasks to have a very small set of labeled ground truth data to train models from. In addition to the label scarcity, the large intra-class variance of hyperspectral data also increases the complexity of the underlying classification task.
Given the same material or object, the spectral reflectance (or absorbance) profiles from two hyperspectral sensors could be dramatically different because of differences in wavelength range and spectral resolution. Even when the same sensor is used to collect images, one can get significant spectral variability due to variations in view angle, atmospheric conditions, sensor altitude, geometric distortions, etc.~\cite{camps2014advances}. {Another reason for high spectral variability is mixed pixels arising from imaging platforms with low spatial resolution -- as a result, the spectrum of one pixel corresponds to more than one object on the ground \cite{bioucas2012hyperspectral}. } For robust machine learning and image analysis, there are two essential components -- deploying an appropriate machine learning model, and leveraging a library of training data that is representative of the underlying inter-class and intra-class variability. For image analysis, specifically for classification tasks, the dominant deep learning models are variations of convolutional neural networks (CNNs)~\cite{lecun2010convolutional}, which conduct a series of 2D convolutions between input images and (spatial) filters in a hierarchical fashion. It has been shown that such hierarchical representations are very efficient in recognizing objects in natural images~\cite{boureau2010learning}. When working with hyperspectral images, however, CNN based features~\cite{yosinski2015understanding} such as color blobs, edges, shapes, etc. may not be the only features of interest for the underlying analysis. There is important information encoded in the spectral profile which can be very helpful for analysis. Unfortunately, in traditional applications of CNNs to hyperspectral imagery, modeling of spectral content in conjunction with spatial content is ignored. Although one can argue that spectral information could still be picked up when 2D convolutions are applied channel by channel or features from different channels are stacked together, such approaches do not constitute optimal modeling of spectral reflectance/absorbance characteristics. It is well understood that when the spectral correlations are explicitly exploited, spectral-spatial features are more discriminative -- from traditional wavelet based feature extraction~\cite{shen2011three,zhou2016wavelet} to modern CNNs~\cite{chen2016deep,li2017spectral,paoletti2018new,zhong2018spectral}. In chapters 3 and 4, we have reviewed variations of convolutional and recurrent neural networks that model the spatial and spectral properties of hyperspectral data. In this chapter, we review recent works that specifically address issues arising from deploying deep neural networks in challenging scenarios. In particular, our emphasis is on challenges presented by (1) limited labeled data, wherein one must leverage the vast amount of available unlabeled data in conjunction with limited labeled data for robust learning, and (2) multi-source optical data, wherein it is important to transfer a model learned from one source (e.g. a specific sensor/platform/viewpoint/timepoint) to a different source (a different sensor/platform/viewpoint/timepoint), with the assumption that one source is rich in the quality and/or quantity of labeled training data while the other source is not.
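To make this distinction concrete, the following PyTorch sketch (an illustration, not code from any cited work) contrasts a 2D convolution that fuses all bands in its first layer with a 3D convolution whose kernel also slides along the spectral axis and hence explicitly models band-to-band correlations; the 103-band patch size is borrowed from the University of Pavia data described later:
\begin{verbatim}
# Sketch: shapes and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

patch = torch.randn(8, 103, 11, 11)      # (batch, bands, height, width)

conv2d = nn.Conv2d(103, 32, kernel_size=3, padding=1)
feat2d = conv2d(patch)                   # bands fused at once: (8, 32, 11, 11)

conv3d = nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1))
feat3d = conv3d(patch.unsqueeze(1))      # spectral axis preserved:
print(feat2d.shape, feat3d.shape)        # (8, 8, 103, 11, 11)
\end{verbatim}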
\section{Robust Learning with Limited Labeled Data} \label{advanDL-train} To address the labeled data scarcity, one strategy is to recruit resources (time and money, for example) with the goal of expanding the training library by annotating more data. However, for many applications, human annotation is neither scalable nor sustainable. An alternate (and more practical) strategy is to design algorithms that do not require a large library of training data, but can instead learn from extensive unlabeled data in conjunction with the limited amount of labeled data. Within this broad theme, we will review unsupervised feature learning, semi-supervised learning, and active learning strategies. {We will present results of several methods discussed in this chapter with three hyperspectral datasets - two of these are benchmark hyperspectral datasets, University of Pavia~\cite{paviau} and University of Houston~\cite{uhdata}, and represent urban land-cover classification tasks. The University of Pavia dataset is a hyperspectral scene representing 9 urban land cover classes, with 103 spectral channels spanning the visible through near-infrared region. The 2013 University of Houston dataset is a hyperspectral scene acquired over the University of Houston campus, representing 15 urban land cover classes. It has 144 spectral channels in the visible through near-infrared region. The third dataset is a challenging multi-source (multi-sensor/multi-viewpoint) hyperspectral dataset~\cite{zhou2017domain} that is particularly relevant in a transfer learning context -- details of this dataset are presented later in Section \ref{advanDL-trans}.} \subsection{Unsupervised Feature Learning} \label{advanDL-us} In contrast to labeled data, unlabeled data are often easy and cheap to acquire for many applications, including remotely sensed hyperspectral imagery. Unsupervised learning techniques do not rely on labels, and that makes this class of methods very appealing. Compared to supervised learning, where labeled data are used as a ``teacher'' for guidance, models trained with unsupervised learning tend to learn relationships between data samples and estimate data properties without class-specific labelings of samples. Most deep networks can be viewed as comprising two components - a feature extraction front-end and an analysis backend (undertaking tasks such as classification, regression, etc.). An approach can be completely unsupervised relative to the training labels (e.g. a neural network tasked with fusing sensors for super-resolution), or completely supervised (e.g. a neural network wherein both the features and the backend classifiers are learned with the end goal of maximizing inter-class discrimination). There are also scenarios wherein the feature extraction part of the network is unsupervised (the labeled data are not used to train its parameters), but the backend (e.g. classification) component of the network is supervised. In this chapter, whenever the feature extraction component of a network is unsupervised (whether the backend model is supervised or unsupervised), we refer to this class of methods as carrying out ``unsupervised feature learning''. The benefit of unsupervised feature learning is that we can learn useful features (in an unsupervised fashion) from a large amount of unlabeled data (e.g.
spatial features representing the natural characteristics of a scene) despite not having sufficient labeled data to learn object-specific features, with the assumption that the features learned in an unsupervised manner can still positively impact a downstream supervised learning task. In traditional feature learning (e.g. dimensionality reduction, subspace learning, or spatial feature extraction), the processing operators are often based on assumptions or prior knowledge about data characteristics. Optimizing such feature learning for the task at hand is hence non-trivial. Deep learning-based methods address this problem in a data-adaptive manner, where the feature learning is undertaken in the context of the overall analysis task in the same network. Deep learning-based strategies, such as autoencoders~\cite{rumelhart1985learning} and their variants, restricted Boltzmann machines (RBM)~\cite{smolensky1986information,hinton2002training}, and deep belief networks (DBN)~\cite{hinton2006reducing}, have exhibited the potential to effectively characterize hyperspectral data. For classification tasks, the most common way to use unsupervised feature learning is to extract (\textit{learn}) features from the raw data that can then be used to train classifiers downstream. Section 3.1 in Chapter 3 describes such a use of autoencoders for extracting features for tasks such as classification. In Chen \textit{et al.}\ ~\cite{chen2014deep}, the effectiveness of autoencoder derived features was demonstrated for hyperspectral image analysis. Although they attempted to incorporate spatial information by feeding the autoencoder with image patches, a significant amount of information is potentially lost due to the flattening process. To capture the multi-scale nature of objects in remotely sensed images, image patches with different sizes were used as inputs for a stacked sparse autoencoder in~\cite{tao2015unsupervised}. To extract similar multi-scale spatial-spectral information, Zhao \textit{et al.}\ ~\cite{zhao2017spectral} applied a scale transformation by upsampling the input images before sending them to the stacked sparse autoencoder. Instead of manipulating the spatial size of inputs, Ma \textit{et al.}\ ~\cite{ma2016spectral} proposed to enforce a local constraint as a regularization term in the energy function of the autoencoder. By using a stacked denoising autoencoder, Xing \textit{et al.}\ ~\cite{xing2016stacked} sought to improve feature stability and robustness with partially corrupted inputs. Although these approaches have been effective, they still require input signals (frames/patches) to be reshaped as one dimensional vectors, which inevitably results in a loss of spatial information. To better leverage the spatial correlations between adjacent pixels, several works have proposed to use the convolutional autoencoder to extract features from hyperspectral data~\cite{kemker2017self,ji2017learning,han2017spatial}. Stacking layers has been shown to be an effective way to increase the representation power of an autoencoder model. The same principle applies to deep belief networks~\cite{le2008representational}, where each layer is represented by a restricted Boltzmann machine. With the ability to extract a hierarchical representation from the training data, promising results have been shown for DBN/RBM for hyperspectral image analysis~\cite{li2014classification,midhun2014deep,chen2015spectral,tao2017unsupervised,zhou2017deep,li2019deep,tan2019parallel}.
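A minimal sketch of this idea -- an autoencoder trained on unlabeled spectra whose bottleneck features can feed a downstream classifier -- is given below in PyTorch; the layer sizes, 103-band input, and placeholder data are assumptions for illustration:
\begin{verbatim}
# Sketch: unsupervised spectral feature learning with an autoencoder.
import torch
import torch.nn as nn

class SpectralAE(nn.Module):
    def __init__(self, n_bands=103, n_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(),
            nn.Linear(64, n_latent), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_bands))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

ae = SpectralAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
pixels = torch.rand(256, 103)                 # unlabeled spectra (placeholder)
recon, z = ae(pixels)
loss = nn.functional.mse_loss(recon, pixels)  # reconstruction loss, no labels
opt.zero_grad(); loss.backward(); opt.step()
# z can now be used to train a downstream (supervised) classifier.
\end{verbatim}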
In recent works, some alternate strategies for unsupervised feature learning for hyperspectral image analysis have also emerged. In~\cite{romero2016unsupervised}, a convolutional neural network was trained in a greedy layer-wise unsupervised fashion. A special learning criterion called enforcing population and lifetime sparsity (EPLS)~\cite{romero2015meta} was utilized to ensure that the generated features are unique, sparse, and robust at the same time. In~\cite{haut2018new}, the hourglass network~\cite{newell2016stacked}, which shares a similar architecture with an autoencoder, was trained for super-resolution using unlabeled samples in conjunction with noise. The reconstructed image was downsampled and compared with the real low-resolution image. The offset between the two was used as the loss function that was minimized to train the entire network. A minimized loss (offset) indicates that the reconstruction from the network is a good super-resolved estimate of the original image. \subsection{Semi-supervised learning} \label{advanDL-ss} Although the feature learning strategy allows us to extract informative features from unlabeled data, the classification part of the network still requires labeled training samples. Methods that rely entirely on unsupervised learning may not provide features from unlabeled data that are discriminative enough for challenging classification tasks. Semi-supervised deep learning is an alternate approach where unlabeled data are used in conjunction with a small amount of labeled data to train deep networks (both the feature extraction and classification components). It falls between supervised learning and unsupervised learning and leverages the benefits of both approaches. In the context of classification, semi-supervised learning often provides better performance compared to unsupervised feature learning, but without the annotation/labeling cost needed for fully supervised learning~\cite{chapelle2009semi}. Semi-supervised learning has been shown to be beneficial for hyperspectral image classification in various scenarios~\cite{liu2017semi,he2017generative,kemker2018low,niu2018weakly,wu2018semi,kang2019semi}. Recent research~\cite{kemker2018low} has shown that the classification performance of a multilayer perceptron (MLP) can be improved by adding an unsupervised loss. In addition to the categorical cross entropy loss, a symmetric decoder branch was added to the MLP and multiple reconstruction losses, {measured by the mean squared error between corresponding encoder and decoder layer activations}, were enforced to help the network generate effective features. The reconstruction loss in fact served as a regularizer to prevent the model from overfitting. A similar strategy has been used with convolutional neural networks in~\cite{liu2017semi}. A variant of semi-supervised deep learning, proposed by Wu and Prasad in~\cite{wu2018semi}, entails learning a deep network that extracts features that are discriminative from the perspective of the intrinsic clustering structure of the data (i.e., these deep features can discriminate between cluster labels -- also referred to as pseudo-labels in this work) -- in short, the cluster labels generated from clustering of unlabeled data can be used to boost the classification performance. To this end, a constrained Dirichlet Process Mixture Model {(DPMM)} was used, and a variational inference scheme was proposed to learn the underlying clustering from data.
The clustering labels of the data were used as \emph{pseudo labels} for training a convolutional recurrent neural network, where a CNN was followed by a few recurrent layers (akin to pretraining with pseudo-labels). Figure~\ref{fig:advanDL-crnn} depicts the architecture of the network. {The network configuration is specified in Table~\ref{tab:advanDL-crnn}, where convolutional layers are denoted as ``conv<filter size>-<number of filters>'' and recurrent layers are denoted as ``recur-<feature dimension>''.} \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{figures/crnn.png} \caption{Architecture of the convolutional recurrent neural network. Cluster labels are used for pretraining the network. (Source adapted from~\cite{wu2018semi})} \label{fig:advanDL-crnn} \end{figure} \begin{table}[h] \centering \caption{Network configuration summary for the Aerial view wetland hyperspectral dataset. Every convolutional layer is followed by a max pooling layer, which is omitted for the sake of simplicity. (Source adapted from \cite{wu2018semi})} \begin{tabular}{c} \hline input-103 $\rightarrow$ conv3-32 $\rightarrow$ conv3-32 $\rightarrow$ conv3-64 $\rightarrow$ conv3-64 \\ $\rightarrow$ recur-256 $\rightarrow$ recur-512 $\rightarrow$ fc-64 $\rightarrow$ fc-64 $\rightarrow$ softmax-9 \\ \hline \end{tabular} \label{tab:advanDL-crnn} \end{table} After pretraining with unlabeled data and the associated pseudo-labels, the network was fine-tuned with labeled data. This entails adding a few more layers to the previously trained network and learning only these layers from the labeled data. {Compared to traditional semi-supervised methods, the pseudo-label-based network, PL-SSDL, achieved higher accuracy on the wetland data {(a detailed description of this dataset is provided in Section~\ref{advanDL-trans})}, as shown in Table~\ref{tab:advanDL-crnn-acc}. The effect of varying the depth of the pretrained network on the classification performance is shown in Fig.~\ref{fig:advanDL-crnn-depth}. Accuracy increases as the model goes deeper, i.e., as more layers are added.} {In addition to the environmental monitoring application represented by the wetland dataset, the efficacy of PL-SSDL was also verified for urban land-cover classification tasks using the University of Pavia~\cite{paviau} and the University of Houston~{\cite{uhdata} datasets, having 9 and 15 land cover classes, with 103 and 144 spectral channels spanning the visible through near-infrared regions, respectively.} As we can see from Figure~\ref{fig:advanDL-plssdl}, features extracted with pseudo labels (middle column) are separated better than the raw hyperspectral data (left column), which implies that pretraining with unlabeled data makes the features more discriminative. Compared to a network that is trained solely using labeled data, the semi-supervised method requires far fewer labeled samples due to the pretrained model. With only a few labeled samples per class, features are further improved by fine-tuning the network (right column). Following a similar idea, Kang~\cite{kang2019semi} later trained a CNN with pseudo labels to extract spatial deep features through pretraining.
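A hedged sketch of this pseudo-label pretraining recipe is given below; $k$-means stands in for the constrained DPMM used in~\cite{wu2018semi}, a small MLP stands in for the CRNN, and all sizes, cluster counts, and iteration counts are illustrative assumptions:
\begin{verbatim}
# Sketch: cluster unlabeled spectra, pretrain on cluster (pseudo) labels,
# then fine-tune added layers on the few true labels.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

net = nn.Sequential(nn.Linear(103, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU())
head_pseudo, head_true = nn.Linear(64, 20), nn.Linear(64, 9)

x_unlab = torch.rand(2000, 103)               # plentiful unlabeled spectra
pseudo = torch.as_tensor(
    KMeans(n_clusters=20, n_init=10).fit_predict(x_unlab.numpy()))

opt = torch.optim.Adam(list(net.parameters()) +
                       list(head_pseudo.parameters()))
for _ in range(5):                            # pretraining on pseudo-labels
    loss = F.cross_entropy(head_pseudo(net(x_unlab)), pseudo)
    opt.zero_grad(); loss.backward(); opt.step()

x_lab, y_lab = torch.rand(45, 103), torch.randint(0, 9, (45,))
opt_ft = torch.optim.Adam(head_true.parameters())   # new layers only
loss = F.cross_entropy(head_true(net(x_lab).detach()), y_lab)
opt_ft.zero_grad(); loss.backward(); opt_ft.step()
\end{verbatim}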
\begin{table}[h] \centering \begin{tabular}{cccccc} \toprule Methods & Label propagation & TSVM & SS-LapSVM & Ladder Networks & PL-SSDL \\ \midrule Accuracy & $89.28 \pm 1.04$ & $92.24 \pm 0.81$ & $95.17 \pm 0.85$ & $93.17 \pm 1.49$ & $97.33 \pm 0.48$ \\ \bottomrule \end{tabular} \caption{Overall classification accuracies of different methods on the aerial view wetland dataset. (Source adapted from \cite{wu2018semi})} \label{tab:advanDL-crnn-acc} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{figures/crnn_depth.png} \caption{Classification accuracy as a function of the depth of the pretrained model. (Source adapted from \cite{wu2018semi})} \label{fig:advanDL-crnn-depth} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figures/tsne_plssdl.png} \caption{t-SNE visualization of features at different training stages on the University of Pavia~\cite{paviau} (top row) and University of Houston~\cite{uhdata} (bottom row) datasets. The left column represents raw image features, the middle column represents features after pretraining with unlabeled data, and the right column represents features after fine-tuning with labeled data. (Source adapted from \cite{wu2018semi})} \label{fig:advanDL-plssdl} \end{figure} \subsection{Active learning} \label{advanDL-al} Leveraging unlabeled data is the underlying principle of unsupervised and semi-supervised learning. Active learning, on the other hand, aims to make the acquisition of labeled data as efficient as possible. Figure~\ref{fig:advanDL-al} shows a typical active learning flow, which contains four components: a labeled training set, a machine learning model, an unlabeled pool of data, and an oracle (a human annotator / domain expert). The labeled set is initially used for training the model. Based on the model's predictions, queries are then selected from the unlabeled pool and sent to the oracle for labeling. The loop is iterated until a pre-determined convergence criterion is met. The criterion used for selecting samples to query determines the efficiency of model training -- efficiency here refers to the machine learning model reaching its full discriminative potential using as few queried labeled samples as possible. If every queried sample provides significant information to the model when labeled and incorporated into training, the annotation requirement will be small. A large part of active learning research is focused on designing suitable metrics to quantify the information contained in an unlabeled sample, which can be used for querying samples from the data pool. A common thread in these works is the notion that choosing the samples that confuse the machine the most results in better (more efficient) active learning performance. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{figures/al.png} \caption{Illustration of an active learning system.} \label{fig:advanDL-al} \end{figure} Active learning with deep neural networks has received increasing attention within the remote sensing community in recent years~\cite{sun2016active,liu2017active,deng2018active,lin2018active,haut2018active}. Liu \textit{et al.}\ ~\cite{liu2017active} used features produced by a DBN to estimate the representativeness and uncertainty of samples. Both~\cite{deng2018active} and~\cite{lin2018active} explored using an active learning strategy to facilitate transferring knowledge from one dataset to another.
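Before turning to these specific designs, the generic query loop of Figure~\ref{fig:advanDL-al} can be sketched as follows; \texttt{oracle} stands for the human annotator, predictive entropy is one of several possible uncertainty metrics, and the trainer shown is a deliberately simple placeholder:
\begin{verbatim}
# Sketch of an uncertainty-based active learning loop (assumed components).
import torch
import torch.nn.functional as F

def train(model, x, y):                 # placeholder: one supervised step
    opt = torch.optim.Adam(model.parameters())
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

def entropy(p):                         # higher entropy = more "confusing"
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)

def active_loop(model, x_lab, y_lab, x_pool, oracle, rounds=5, k=10):
    for _ in range(rounds):
        train(model, x_lab, y_lab)
        with torch.no_grad():
            probs = F.softmax(model(x_pool), dim=1)
        idx = entropy(probs).topk(k).indices             # most uncertain
        x_new, y_new = x_pool[idx], oracle(x_pool[idx])  # query the oracle
        x_lab = torch.cat([x_lab, x_new])
        y_lab = torch.cat([y_lab, y_new])
        keep = torch.ones(len(x_pool), dtype=torch.bool)
        keep[idx] = False
        x_pool = x_pool[keep]                            # shrink the pool
    return model, x_lab, y_lab
\end{verbatim}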
In~\cite{deng2018active}, a stacked sparse autoencoder was initially trained in the source domain and then fine-tuned in the target domain. To overcome the labeled data bottleneck, an uncertainty-based metric was used to select the most informative samples from the source domain for active learning. Similarly, Lin \textit{et al.}\ ~\cite{lin2018active} trained two separate autoencoders from the source and target domains. Representative samples were selected based on the density of the neighborhood of the samples in the feature space. This allowed the autoencoders to be effectively trained using limited data. In order to transfer supervision from the source to the target domain, features in both domains were aligned by maximizing their correlation in a latent space. Unlike autoencoders and DBNs, convolutional neural networks (CNNs) provide an effective framework to exploit the spatial correlation between pixels in a hyperspectral image. However, when it comes to training with small data, CNNs tend to overfit due to the large number of trainable network parameters. To solve this problem, Haut \textit{et al.}\ ~\cite{haut2018active} presented an active learning algorithm that uses a special network called a Bayesian CNN~\cite{gal2015bayesian}. {Gal and Ghahramani~\cite{gal2015bayesian} have shown that dropout in a neural network can be considered an approximation to a Gaussian process, which offers nice properties such as uncertainty estimation and robustness to overfitting. By performing dropout after each convolution layer, the training of a Bayesian CNN can be cast as approximate Bernoulli variational inference. During evaluation, outputs of a Bayesian CNN are averaged over several stochastic forward passes, which allows the model prediction uncertainty to be estimated and makes the model less prone to overfitting. Multiple uncertainty-based query criteria were then deployed to select samples for active learning.} \section{Knowledge transfer between sources} \label{advanDL-knowtrans} Another common image analysis scenario entails learning with multiple sources, in particular where one source is label ``rich'' (in the quantity and/or quality of labeled data) and the other source is label ``starved''. Sources in this scenario could be different sensors, different sensing platforms (e.g. ground-based imagers, drones, or satellites), different time-points, or different imaging viewpoints. In this situation, when it is desired to undertake analysis in the label starved domain (often referred to as the target domain), a common strategy is to transfer knowledge from the label rich domain (often referred to as the source domain). \subsection{Transfer learning and Domain Adaptation} \label{advanDL-trans} Effective training has always been a challenge with deep learning models. Besides requiring large amounts of data, the training itself is time-consuming and often comes with convergence and generalization problems. One major breakthrough in the effective training of deep networks is the pretraining technique introduced by Hinton \textit{et al.}\ in~\cite{hinton2006fast}, where a DBN was pretrained with unlabeled data in a greedy layer-wise fashion, followed by supervised fine-tuning. {In particular, the DBN was trained one layer at a time by reconstructing outputs from the previous layer for the unsupervised pretraining.
At the last training stage, all parameters were fine-tuned together by optimizing a supervised training criterion.} In~\cite{erhan2010does}, Erhan \textit{et al.}\ suggested that unsupervised pretraining works as a form of regularization. It not only provides a good initialization but also helps the generalization performance of the network. Similar to unsupervised pretraining, networks pretrained with supervision have also achieved huge success. In fact, using pretrained models as a starting point for new training has become common practice for many analysis tasks~\cite{sermanet2013overfeat,donahue2014decaf}. The main idea behind transfer learning is that knowledge gained from related tasks or a related data source can be transferred to a new task by fine-tuning on the new data. This is particularly useful when there is a data shortage in the new domain. In the computer vision community, a common approach to transfer learning is to initialize the network with weights that are pretrained for image classification on the ImageNet dataset~\cite{deng2009imagenet}. The rationale for this is that ImageNet contains millions of manually annotated natural images, and models trained with it tend to provide a ``baseline performance'' with generic and basic features commonly seen in natural images. Researchers have shown that features from the lower layers of deep networks are color blobs, edges, and shapes~\cite{yosinski2015understanding}. These basic features are usually readily transferable across datasets (e.g. data from different sources)~\cite{yosinski2014transferable}. In~\cite{penatti2015deep}, Penatti \textit{et al.}\ discussed feature generalization in the remote sensing domain. Empirical results suggested that transferred features are not always better than hand-crafted features, especially when dealing with unique scenes in remote sensing images. Windrim \textit{et al.}\ ~\cite{windrim2018pretraining} provided valuable insights into transfer learning in the context of hyperspectral image classification. In order to test the effect of filter size and wavelength interval, multiple hyperspectral datasets were acquired with different sensors. The performance of transfer learning was examined through a comparison with training the network from scratch, i.e., randomly initializing the network weights. Extensive experiments were carried out to investigate the impact of data size, network architecture, and so on. The authors also discussed the training convergence time and feature transferability under various conditions. Despite open questions that require further investigation, extensive studies have empirically shown the effectiveness of transfer learning for hyperspectral image analysis~\cite{marmanis2016deep,zhang2016weakly,yang2017learning,mei2017learning,othman2017domain,yuan2017hyperspectral,shi2017can,tao2017unsupervised,ma2018super,liu2018classifying,niu2018weakly,sumbul2018fine,zhou2018deep}. Marmanis \textit{et al.}\ ~\cite{marmanis2016deep} introduced the pretrained model idea~\cite{krizhevsky2012imagenet} for hyperspectral image classification. A pretrained AlexNet~\cite{krizhevsky2012imagenet} was used as a fixed feature extractor and a two-layer CNN was attached for the final classification. Yang \textit{et al.}\ ~\cite{yang2017learning} proposed a two-branch CNN for extracting spectral-spatial features. To solve the data scarcity problem, the weights of the lower layers were pretrained on another dataset and the entire network was then fine-tuned on the target dataset.
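A minimal sketch of this pretrain-and-fine-tune recipe is shown below; the small backbone, checkpoint path, and 15-class target task are illustrative assumptions rather than the setup of any cited work:
\begin{verbatim}
# Sketch: freeze a (nominally pretrained) backbone, train a new head.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(103, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(64, 15)              # new head for the 15-class target task

# backbone.load_state_dict(torch.load("source_pretrained.pth"))
# (assumed checkpoint produced by training on the label-rich source data)

for p in backbone.parameters():       # freeze the generic lower layers
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-4)
x = torch.rand(8, 103, 11, 11)        # placeholder target-domain patches
y = torch.randint(0, 15, (8,))
loss = nn.functional.cross_entropy(head(backbone(x)), y)
opt.zero_grad(); loss.backward(); opt.step()   # only the head is updated
\end{verbatim}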
Similar strategies have also been followed in~\cite{xie2016transfer,mei2017learning,liu2018classifying}. Along with pretraining and fine-tuning, domain adaptation is another mechanism to transfer knowledge from one domain to another. Domain adaptation algorithms aim at learning a model from source data that can perform well on the target data. It can be considered a sub-category of transfer learning, where the input distribution $p(X)$ changes while the conditional label distribution $p(Y|X)$ remains the same across the two domains. Unlike the pretraining and fine-tuning method, which can be used when both distributions change, domain adaptation usually assumes the class-specific properties of the features within the two domains are correlated. This allows us to enforce stronger connections while transferring knowledge. Othman \textit{et al.}\ ~\cite{othman2017domain} proposed a domain adaptation network that can handle cross-scene classification when there is no labeled data in the target domain. Specifically, the network used three loss components for training: a classification loss (cross entropy) in the source domain, a domain matching loss based on maximum mean discrepancy (MMD)~\cite{fortet1953mmd}, and a graph regularization loss that aims to retain the geometrical structure of the unlabeled data in the target domain. The cross entropy loss ensures that features produced by the network are discriminative. Having discriminative features in the original domain has also been found to be beneficial for the domain matching process~\cite{zhou2017domain}. In order to undertake domain adaptation, features from the two domains were aligned by minimizing the distribution difference. Zhou and Prasad~\cite{zhou2018deep} proposed to align domains (more specifically, features in these domains) based on domain adaptation transformation learning (DATL)~\cite{zhou2017domain} -- DATL aligns class-specific features in the two domains by projecting them onto a common latent subspace such that the ratio of within-class distance to between-class distance is minimized in that latent space. Next, we briefly review how a projection such as DATL can be used to align deep networks for domain adaptation and present some results with multi-source hyperspectral data. Consider the distance between a source sample $x^s_i$ and a target sample $x^t_j$ in the latent space, \begin{equation} \label{eq:distance} d(x^s_i, x^t_j) = \| f_s(x^s_i) - f_t(x^t_j) \|^2, \end{equation} where $f_s$ and $f_t$ are feature extractors, e.g., CNNs, that transform samples from both domains to a common feature space. To make the feature space robust to small perturbations in the original source and target domains, stochastic neighbor embedding is used to measure classification performance~\cite{hinton2003stochastic}. In particular, the probability $p_{ij}$ of the target sample $x^t_j$ being the neighbor of the source sample $x^s_i$ is given as \begin{equation} p_{ij} = \frac{\exp(-\| f_s(x^s_i) - f_t(x^t_j)\|^2)}{\sum_{x^s_k \in \mathcal{D}^s} \exp(-\| f_s(x^s_k) - f_t(x^t_j)\|^2) }, \end{equation} where $\mathcal{D}^s$ is the source domain. Given a target sample with its label $(x^t_j, y^t_j = c)$, the source domain $\mathcal{D}^s$ can be split into a \emph{same-class} set $\mathcal{D}^s_c = \{x^s_k| y_k = c\}$ and a \emph{different-class} set $\mathcal{D}^s_{\not c} = \{x^s_k| y_k \neq c\}$. In the classification setting, one wants to maximize the probability of making the correct prediction for $x^t_j$:
\begin{equation} \label{datl} p_{j} = \frac{\sum_{x^s_i \in \mathcal{D}^s_c} \exp(-\| f_s(x^s_i) - f_t(x^t_j)\|^2)}{\sum_{x^s_k \in \mathcal{D}^s_{\not c}} \exp(-\| f_s(x^s_k) - f_t(x^t_j)\|^2)}. \end{equation} Maximizing the probability $p_j$ is equivalent to minimizing the ratio of intra-class distances to inter-class distances in the latent space. {This ensures that classes from the target domain and the source domain are aligned in the latent space. Note that the labeled data from the target domain (albeit limited) can further be used to make the features more discriminative. The final objective function of DATL can then be written as} \begin{equation} \mathcal{L} = \beta \frac{\sum_{x^s_i \in \mathcal{D}^s_c} \exp(-\| f_s(x^s_i) - f_t(x^t_j)\|^2)}{\sum_{x^s_k \in \mathcal{D}^s_{\not c}} \exp(-\| f_s(x^s_k) - f_t(x^t_j)\|^2)} + (1-\beta) \frac{\sum_{x^t_i \in \mathcal{D}^t_c} \exp(-\| f_t(x^t_i) - f_t(x^t_j)\|^2)}{\sum_{x^t_k \in \mathcal{D}^t_{\not c}} \exp(-\| f_t(x^t_k) - f_t(x^t_j)\|^2)}. \label{eq:datl-obj} \end{equation} The first term can be seen as a domain alignment term and the second as a class separation term. $\beta$ is a data-dependent trade-off parameter: the greater the difference between the source and target data, the larger the value of $\beta$ should be, to put more emphasis on domain alignment. Depending on the feature extractors, the objective in Eq.~\ref{eq:datl-obj} can either be optimized using conjugate gradient-based optimization~\cite{zhou2017domain} or treated as a loss and optimized using stochastic gradient descent~\cite{xu2019d}. DATL has been shown to be effective for addressing large domain shifts, such as between street-view and satellite hyperspectral images~\cite{zhou2017domain} acquired with different sensors and imaged from different viewpoints. Figure~\ref{fig:advanDL-fann} shows the architecture of the feature alignment neural network (FANN) that leverages DATL. Two convolutional recurrent neural networks (CRNNs) were trained separately for the source and target domains. Features from corresponding layers were connected through an adaptation module, which is composed of a DATL term and a trade-off parameter that balances the domain alignment and the class separation. Specifically, the trade-off parameter $\beta$ is automatically estimated by a proxy A-distance (PAD)~\cite{ben2007analysis}: \begin{equation} \beta = \text{PAD}/2 = 1 - 2 \epsilon, \label{eq:fann-beta} \end{equation} where PAD is defined as $\text{PAD} = 2(1-2\epsilon)$ and $\epsilon \in [0, 1]$ is the generalization error of a linear SVM trained to discriminate between the two domains. Aligned features were then concatenated and fed to a final softmax layer for classification. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figures/fann.png} \caption{The architecture of the feature alignment neural network. (Source adapted from \cite{zhou2018deep})} \label{fig:advanDL-fann} \end{figure} \begin{table}[h] \centering \caption{Network configuration summary for the Aerial and Street view wetland hyperspectral dataset (A-S view wetland).
(Source adapted from \cite{zhou2018deep})} \begin{tabular}{c} \hline FANN (A-S view wetland)\\ \hline CRNN (Street) $\rightarrow$ DATL $\leftarrow$ CRNN (Aerial) \\ \hline (conv4-128 + maxpooling) $\rightarrow$ DATL $\leftarrow$ (conv5-512 + maxpooling) \\ (conv4-128 + maxpooling) $\rightarrow$ DATL $\leftarrow$ (conv5-512 + maxpooling) \\ (conv4-128 + maxpooling) $\rightarrow$ DATL $\leftarrow$ (conv5-512 + maxpooling) \\ (conv4-128 + maxpooling) $\rightarrow$ DATL $\leftarrow$ (conv5-512 + maxpooling) \\ (conv4-128 + maxpooling) $\rightarrow$ DATL $\leftarrow$ (conv5-512 + maxpooling) \\ recur-64 $\rightarrow$ DATL $\leftarrow$ recur-128 \\ \hline fully connected-12\\ \hline \end{tabular} \label{tab:advanDL-config} \end{table} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figures/wetland.png} \caption{Aerial and Street view wetland hyperspectral dataset. Left: aerial view of the wetland data (target domain). Right: street view of wetland data (source domain). (Source adapted from \cite{zhou2018deep})} \label{fig:advanDL-wetland} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{figures/meansig_Dt.pdf} \includegraphics[width=0.45\textwidth]{figures/meansig_Ds.pdf} \\ \includegraphics[width=0.75\textwidth]{figures/wetland_cls_names.png} \caption{Mean spectral signature of the aerial view (target domain) wetland data (a) and the street view (source domain) wetland data (b). Different wetland vegetation species (classes) are indicated by colors. (Source adapted from \cite{zhou2018deep})} \label{fig:advanDL-sigSH} \end{figure} The performance of FANN was evaluated on a challenging domain adaptation dataset introduced in~\cite{zhou2017domain}. See Fig.~\ref{fig:advanDL-wetland} for true color images of the source and target domains. The dataset consists of hyperspectral images of ecologically sensitive wetland vegetation in Galveston, TX, collected from two viewpoints -- ``aerial'' and ``street view'' -- using sensors with different spectral characteristics. {Specifically, the aerial data were acquired using the ProSpecTIR VS sensor aboard an aircraft and have 360 spectral bands ranging from 400 nm to 2450 nm with a 5 nm spectral resolution. The aerial view data were radiometrically calibrated and corrected. The resulting reflectance data have a spatial coverage of $3462 \times 5037$ pixels at a 1 m spatial resolution. The street view data, on the other hand, were acquired with the Headwall Nano-Hyperspec sensor on a different date and represent images acquired by operating the sensor on a tripod and imaging the vegetation in the field during ground-reference campaigns. Unlike the aerial view data, the street view data represent at-sensor radiance data with 274 bands spanning 400 nm to 1000 nm at a 3 nm spectral resolution.} As can be seen from Fig.~\ref{fig:advanDL-sigSH}, spectral signatures for the same class are very different between the source and target domains. With very limited labeled data in the aerial view, FANN achieved a significant classification improvement compared to traditional domain adaptation methods (see Table~\ref{tab:advanDL-fann}). \begin{table}[h] \centering \begin{tabular}{lcccc} \toprule Methods & SSTCA & KEMA & D-CORAL & FANN \\ \midrule Accuracy & $85.3 \pm 5.6$ & $87.3 \pm 1.7$ & $92.5 \pm 1.9$ & $95.8 \pm 1.1$ \\ \bottomrule \end{tabular} \caption{Overall classification accuracies of different domain adaptation methods on the aerial and street view wetland dataset.
(Source adapted from \cite{zhou2018deep})} \label{tab:advanDL-fann} \end{table} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figures/fann_ASview.png} \caption{t-SNE feature visualization of the aerial and street view wetland hyperspectral data at different stages of FANN. (a) Raw spectral features of street view data in the source domain. (b) CRNN features of street view data in the source domain. (c) Raw spectral features of aerial view data in the target domain. (d) CRNN features of aerial view data in the target domain. (e) FANN features for both domains in the latent space. (Source adapted from \cite{zhou2018deep})} \label{fig:advanDL-fann-tsne} \end{figure} As can be seen from Fig.~\ref{fig:advanDL-fann-tsne}, raw hyperspectral features from the source and target domains are not aligned with each other. Due to the limited labeled data in the aerial view, some mixing of classes occurs. The cluster structure is improved slightly by the CRNN; compare Fig.~\ref{fig:advanDL-fann-tsne} (c) and (d). In contrast, the source data, i.e., the street view data, have a well-separated cluster structure. However, the classes are not aligned between the two domains; therefore, labels from the source domain cannot be used directly to train a classifier for the target domain. After passing all samples through the FANN, the two domains are aligned class-by-class in the latent space, as shown in Fig.~\ref{fig:advanDL-fann-tsne} (e). \begin{table}[h] \centering \caption{Overall accuracy of the features of alignment layers and concatenated features for the Aerial and Street view wetland dataset. (Source adapted from \cite{zhou2018deep})} \begin{tabular}{cccccccc} \toprule \textit{\textbf{Layer}} & \textit{\textbf{FA-1}} & \textit{\textbf{FA-2}} & \textit{\textbf{FA-3}} & \textit{\textbf{FA-4}} & \textit{\textbf{FA-5}} & \textit{\textbf{FA-6}} & \textit{\textbf{FANN}} \\ \midrule \textit{\textbf{OA}} &88.1 &86.2 &83.9 &75.7 &72.0 &86.4 &95.8 \\ \bottomrule \end{tabular} \label{tab:advanDL-layer_acc} \end{table} {To better understand the feature adaptation process, features from all layers were investigated individually and compared to the concatenated features. The performance of each alignment layer is shown in Table~\ref{tab:advanDL-layer_acc}. Consistent with observations in~\cite{yosinski2014transferable}, accuracies drop from the first layer to the fifth layer as features become more and more specialized toward the training data; the resulting larger domain gap makes domain adaptation more challenging. The last layer (FA-6) was able to mitigate this problem because the recurrent layer captures contextual information along the spectral dimension of the hyperspectral data. Features from the last layer are the most discriminative, which allows the aligning module (DATL) to put more weight on the domain alignment (cf. $\beta$ in Eq.~\ref{eq:datl-obj} and Eq.~\ref{eq:fann-beta}). The concatenated features achieved the highest accuracy compared to any individual layer.
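For concreteness, a hedged PyTorch sketch of the DATL objective in Eq.~\ref{eq:datl-obj} is given below. It forms the same-class to different-class ratios of soft-neighbor weights for the cross-domain alignment term and the within-target separation term, operating on features already mapped to the latent space; the objective is to be maximized (its negative can serve as a training loss), and the fixed $\beta$ here is an assumption, whereas FANN estimates it via Eq.~\ref{eq:fann-beta}:
\begin{verbatim}
# Sketch of the DATL objective on latent features zs (source), zt (target).
import torch

def datl_objective(zs, ys, zt, yt, beta=0.7):
    # cross-domain soft-neighbor weights exp(-||f_s(x^s) - f_t(x^t)||^2)
    w = torch.exp(-torch.cdist(zs, zt).pow(2))          # (Ns, Nt)
    same = (ys[:, None] == yt[None, :]).float()
    align = (w * same).sum() / (w * (1 - same)).sum().clamp_min(1e-12)
    # within-target term (second term of the objective)
    wt = torch.exp(-torch.cdist(zt, zt).pow(2))
    wt = wt * (1 - torch.eye(len(yt)))                  # exclude self-pairs
    same_t = (yt[:, None] == yt[None, :]).float()
    sep = (wt * same_t).sum() / (wt * (1 - same_t)).sum().clamp_min(1e-12)
    return beta * align + (1 - beta) * sep              # maximize this

zs, ys = torch.randn(32, 16), torch.randint(0, 12, (32,))
zt, yt = torch.randn(24, 16), torch.randint(0, 12, (24,))
loss = -datl_objective(zs, ys, zt, yt)   # minimize the negative with SGD
\end{verbatim}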
As mentioned in~\cite{zhou2018deep}, an improvement of this idea would be to learn combination weights for the different layers instead of using a simple concatenation.} \subsection{Transferring Knowledge -- Beyond Classification} \label{advanDL-od} In addition to image classification / semantic segmentation tasks, the notion of transferring knowledge between sources and datasets has also been used for many other tasks, such as object detection~\cite{zhang2016weakly}, image super-resolution~\cite{yuan2017hyperspectral}, and image captioning~\cite{shi2017can}. Compared to image-level labels, training an object detection model requires object-level labels and corresponding annotations (e.g. through bounding boxes). This increases the labeling requirements/costs for efficient model training. Effective feature representation is hence crucial to the success of these methods. As an example, in order to detect aircraft in remote sensing images, Zhang \textit{et al.}\ ~\cite{zhang2016weakly} proposed to use the UC Merced land use dataset~\cite{yang2010bag} as a background class to pretrain Faster RCNN~\cite{ren2015faster}. By doing this, the model gained an understanding of remote sensing scenes, which facilitated robust object detection. The underlying assumption in such an approach is that even though the foreground objects may not be the same, the background information remains largely unchanged across the sources (e.g. datasets) and can hence be transferred to a new domain. Another important application of remotely sensed images is pansharpening, where a panchromatic image (which has a coarse/broad spectral resolution but very high spatial resolution) is used to improve the spatial resolution of a multi-/hyperspectral image. However, a high-resolution panchromatic image is not always available for the same area that is covered by the hyperspectral images. To solve this problem, Yuan \textit{et al.}\ ~\cite{yuan2017hyperspectral} pretrained a super-resolution network with natural images and applied the model to the target hyperspectral image band by band. The underlying assumption in this work is that the spatial features in the high- and low-resolution images are shared across the two domains irrespective of the spectral content. Traditional visual tasks like classification, object detection, and segmentation interpret an image at either the pixel or object level. Image captioning takes this notion a step further and aims to summarize a scene in a language that can be interpreted easily. Although many image captioning methods have been proposed for natural images, this topic has not been rigorously developed in the remote sensing domain. Shi \textit{et al.}\ ~\cite{shi2017can} proposed satellite image captioning using a pretrained fully convolutional network (FCN)~\cite{long2015fully}. The base network was pretrained for image classification on ImageNet. To understand the images, three losses were defined at the object, environment, and landscape scales, respectively. Predicted labels at the different levels were then sent to a language generation model for captioning. In this work, the task in the target domain is very different from that in the source domain. Despite this, the pretrained model still provides features that are generic enough to help understand the target domain images. \section{Data augmentation} \label{advanDL-aug} Flipping and rotating images usually does not affect the class labels of objects within the image.
A machine learning model can benefit if the training library is augmented with samples obtained through these simple manipulations. Changing the input training images in ways that do not affect the class allows algorithms to train on more examples of each object, so the models generalize better to test data. Data generation and augmentation share the same philosophy -- to generate synthetic or transformed data that are representative of real-world data and can be used to boost the training. Data augmentation operations such as flipping, rotation, cropping, and color jittering have been shown to be very helpful for training deep neural networks~\cite{krizhevsky2012imagenet,liu2016ssd,chen2017deeplab}. These operations have in fact become common practice when training models for natural image analysis tasks. Despite the differences between hyperspectral and natural images, standard augmentation methods like rotation, translation, and flipping have proven useful in boosting the classification accuracy of hyperspectral image analysis tasks~\cite{lee2017going,yu2017deep}. {To simulate the variance in the at-sensor radiance and mixed pixels arising during the imaging process, Chen \textit{et al.}\ ~\cite{chen2016deep} created \emph{virtual samples} by multiplying existing samples by random factors and by linearly combining samples with random weights, respectively.} Li \textit{et al.}\ ~\cite{li2019data} showed that the performance can be further improved by integrating spatial similarity through pixel block pairs, {in which a $3 \times 3$ window around the labeled pixel was used as a block and different blocks were paired together based on their labels to augment the training set.} A similar spatial constraint was also used by Feng \textit{et al.}\ ~\cite{feng2019cnn}, where {unlabeled pixels were assigned labels for training if their $k$-nearest neighbors (in both the spatial and spectral domains) belonged to the same class}. Haut \textit{et al.}\ ~\cite{haut2019hyperspectral} used a random occlusion idea to augment data in the spatial domain, which randomly erases regions from the hyperspectral images during training. As a consequence, the variance in the spatial domain is increased, leading to a model that generalizes better. Some flavors of data fusion algorithms can be thought of as playing the role of data augmentation, wherein supplemental data sources help the training of the models. For instance, a building roof and a paved road can both be made from similar materials -- in such a case, it may be difficult for a model to differentiate these classes from the reflectance spectra alone. However, this distinction can easily be made by comparing their topographic information (e.g. using LiDAR data). {A straightforward approach to fusing hyperspectral and LiDAR data would be to train separate networks -- one for each source/sensor -- and combine their features either through concatenation~\cite{xu2017multisource,li2018hyperspectral} or some other scheme such as a composite kernel~\cite{feng2019multisource}. Zhao \textit{et al.}\ ~\cite{zhao2017superpixel} presented data fusion of multispectral and panchromatic images. Instead of applying a CNN to the entire image, features were extracted for superpixels that were generated from the multispectral image. In particular, a fixed size window around each superpixel was split into multiple regions and the image patch in each region was fed into a CNN to extract local features.
These local features were sent to an auto-encoder for fusion, and a softmax layer was added at the end for prediction. Due to its relatively high spatial resolution, the panchromatic image can produce spatial segments at a finer scale than the multispectral image. This was leveraged to refine the predictions by further segmenting each superpixel based on the panchromatic image.} Aside from augmenting the input data, generating synthetic data that resemble real-life data is another approach to increasing the number of training samples. The generative adversarial network (GAN)~\cite{goodfellow2014generative} introduced a trainable approach to generating new synthetic samples. A GAN consists of two sub-networks, a generator and a discriminator. During training, the two components play a game against each other. The generator tries to fool the discriminator by producing samples that are as realistic as possible, and the discriminator tries to discern whether a sample is synthetically generated or belongs to the training data. After the training process converges, the generator is able to produce samples that look similar to the training data. Since it does not require any labeled data, there has been increasing interest in using GANs for data augmentation in many applications, and they have been applied to hyperspectral image analysis in recent years~\cite{he2017generative,ma2018super,zhan2018semisupervised,zhu2018generative}. Both~\cite{he2017generative} and~\cite{zhu2018generative} used a GAN for hyperspectral image classification, where a softmax layer was attached to the discriminator. Fake data were treated as an additional class in the training set. Since a large amount of unlabeled data was used for training the GAN, the discriminator became good at classifying all samples. A transfer learning idea was proposed for super-resolution in~\cite{ma2018super}, where a GAN is pretrained on a relatively large dataset and fine-tuned on the UC Merced land use dataset~\cite{yang2010bag}. \section{Future Directions} In this chapter, we reviewed recent advances in deep learning for hyperspectral image analysis. Although a lot of progress has been made in recent years, there are still many open problems and related research opportunities. In addition to making advances in algorithms and network architectures (e.g. networks for multi-scale, multi-sensor data analysis, data fusion, image super-resolution, etc.), there is a need to address fundamental issues that arise from insufficient labeled data and from the nature of the data being acquired. Towards this end, the following directions are suggested. \begin{itemize} \item Hyperspectral ImageNet. We have witnessed the immense success brought about in part by the ImageNet dataset for traditional image analysis. The benefit of building a similar dataset for hyperspectral imagery is compelling. If such libraries can be created for various image analysis tasks (e.g. urban land-cover classification, ecosystem monitoring, material characterization, etc.), they will enable learning truly deep networks that learn highly discriminative spatial-spectral features. \item Interdisciplinary collaboration. Developing an effective model for analyzing hyperspectral data requires a deep understanding of both the properties of the data itself and machine learning techniques. With this in mind, networks that reflect the optical characteristics of the sensing modalities (e.g. inter-channel correlations) and the variability caused by acquisition (e.g.
\section{Future Directions} In this chapter, we reviewed recent advances in deep learning for hyperspectral image analysis. Although a lot of progress has been made in recent years, there are still many open problems and related research opportunities. In addition to making advances in algorithms and network architectures (e.g. networks for multi-scale, multi-sensor data analysis, data fusion, image super-resolution etc.), there is a need for addressing fundamental issues that arise from insufficient data and the nature of the data being acquired. Towards this end, the following directions are suggested: \begin{itemize} \item Hyperspectral ImageNet. We have witnessed the immense success brought about in part by the ImageNet dataset for traditional image analysis. The benefit of building a similar dataset for hyperspectral images is compelling. If such libraries can be created for various image analysis tasks (e.g. urban land-cover classification, ecosystem monitoring, material characterization etc.), they will enable training truly deep networks that learn highly discriminative spatial-spectral features. \item Interdisciplinary collaboration. Developing an effective model for analyzing hyperspectral data requires a deep understanding of both the properties of the data itself and machine learning techniques. With this in mind, networks that reflect the optical characteristics of the sensing modalities (e.g. inter-channel correlations) and the variability caused during acquisition (e.g. varying atmospheric conditions) should provide more information for the underlying analysis tasks than ``black-box'' networks. \end{itemize} \bibliographystyle{elsarticle-num}
{ "timestamp": "2020-07-20T02:02:32", "yymm": "2007", "arxiv_id": "2007.08592", "language": "en", "url": "https://arxiv.org/abs/2007.08592" }
\section{Introduction} Accurate 6D object pose estimation plays an important role in a variety of tasks, such as augmented reality, robotic manipulation, scene understanding, etc. In recent years, substantial progress has been made for instance-level 6D object pose estimation, where the exact 3D object models for pose estimation are given. Unfortunately, these methods \cite{peng2019pvnet, zakharov2019dpod, wang2019densefusion} cannot be directly generalized to category-level 6D object pose estimation on new object instances with unknown 3D models. Consequently, the category, 6D pose and size of the objects have to be estimated concurrently. Although some other object instances from each category are provided as priors, the high variation of object shapes within the same category makes generalization to new object instances extremely challenging.

To the best of our knowledge, \cite{sahin2018category} is the first work to address the 6D object pose estimation problem at the category level. This approach defines the 6D pose on semantically selected centers and trains a part-based random forest to recover the pose. However, building part representations upon 3D skeleton structures limits the generalization capability across unseen object instances. Another work \cite{wang2019normalized} proposes the first data-driven solution and creates a benchmark dataset for this task. They introduce the Normalized Object Coordinate Space (NOCS) to represent different object instances within a category in a unified manner. A region-based network is trained to infer correspondences from object pixels to the points in NOCS. The class label and instance mask of each object are also obtained at the same time. These predictions are used together with the depth map to estimate the 6D pose and size of the object via point matching. However, the lack of an explicit representation of shape variations limits their performance.

In this work, we propose to reconstruct the complete object models in the NOCS to capture the intra-class shape variation. More specifically, we first learn the categorical shape priors from the given object instances, and then train a network to estimate the deformation field of the shape prior (which is used to obtain the reconstructed object model) and the correspondences between the object observation and the reconstructed model. The shape prior serves as prior knowledge of the category and encodes geometrical characteristics that are shared by objects of a given category. The deformation predicted by our network captures the instance-specific shape details, i.e. the shape variation of that particular instance. We present a method for learning the shape priors that is applicable across different object categories and data representations. In particular, an autoencoder is trained on a collection of object models from various categories. For each category, we compute the mean latent embedding over all instances in the respective category. The categorical shape prior is constructed by passing the mean embedding through a decoder. Note that there is no restriction on the data representation (point cloud, mesh, or voxel) of the shape priors or collected models as long as we choose a proper architecture for the encoder and decoder. We use the Umeyama algorithm \cite{umeyama1991least} to recover the 6D pose and metric size of the object from the correspondences, estimated by our network, that map the point cloud obtained from the observed depth map to the points in NOCS. We evaluate our method on two standard benchmarks.
Extensive experiments demonstrate the advantage of our network and prove the effectiveness of explicitly modeling the deformation. In summary, the main contributions of this work are: \begin{itemize} \item We propose a novel deep network for category-level 6D object pose and size estimation; our network explicitly models the deformation from the categorical shape prior to the object model. \item We present a learning-based method which utilizes the latent embeddings to construct the shape prior; our method is applicable across different categories and data representations. \item Our network achieves significantly higher mean average precisions on both synthetic and real-world benchmark datasets. \end{itemize}

\section{Related Work} \noindent\textbf{Instance-Level Pose Estimation.} Existing instance-level pose estimation approaches broadly fall into three categories. The first category casts votes in the pose space and further refines the coarse pose with algorithms such as iterative closest point. LINEMOD \cite{hinterstoisser2012model} uses holistic template matching to find the nearest viewpoint. \cite{sundermeyer2018implicit} generates a latent code for the input image and searches for its nearest neighbor in the codebook. \cite{tejani2014latent, kehl2016deep} aggregate the 6D votes cast by locally-sampled RGB-D patches. The second category directly maps the input image to the object pose. \cite{kehl2017ssd, xiang2017posecnn} extend 2D object detection networks such that they can predict orientation as an add-on to the identity and 2D bounding box of the object. \cite{li2018unified, wang2019densefusion} regress the 6D pose from RGB-D images in an end-to-end framework. The third category relies on establishing point correspondences. \cite{brachmann2014learning, krull2015learning, michel2017global} regress the corresponding 3D object coordinate for each foreground pixel. \cite{rad2017bb8, tekin2018real, peng2019pvnet} detect the keypoints of the object on the image and then solve a Perspective-n-Point problem. \cite{zakharov2019dpod} estimates a dense 2D-3D correspondence map between the input image and the object model. Although our approach belongs to the third category, it addresses a more general setting where the object models are not available during inference.

\noindent\textbf{Category-Level Object Detection.} The task of 3D object detection aims to estimate 3D bounding boxes of objects in the scene. \cite{song2016deep} runs sliding windows in 3D space and generates amodal proposals for objects. \cite{gupta2015aligning, lahoud20172d, qi2018frustum} first generate 2D object proposals and then lift the proposals to 3D space. \cite{yang2018pixor, zhou2018voxelnet} are single-stage detectors which directly detect objects from 3D data. Although the above-mentioned methods address the problem at the category level, the considered objects are usually constrained to the ground surface, e.g. instances of typical furniture classes in indoor scenes, and cars, pedestrians, and cyclists in outdoor scenes. Consequently, the assumption that rotation is constrained to be only along the gravity direction has to be made. On the contrary, our approach can recover the full 6D pose of objects.

\noindent\textbf{Category-Level Pose Estimation.} There are only a few pioneering works focusing on estimating the 6D pose of unseen objects. \cite{burchfiel2019probabilistic} leverages a generative representation of 3D objects and produces a multimodal distribution over poses with a mixture density network.
However, only rotation is considered in their work. \cite{sahin2018category} introduces a part-based random forest which employs simple depth comparison features, whereas our approach deals with RGB-D images. \cite{wang2019normalized} proposes a canonical representation for all instances within an object category. Our approach also makes use of this representation. Instead of directly regressing the coordinates in NOCS, we account for intra-class shape variation by explicitly modeling the deformation from the shape prior to the object model. \cite{chen2020learning} trains a variational autoencoder to generate the complete object model. However, the reconstructed shape is not utilized for pose estimation. In our network, shape reconstruction and pose estimation are integrated together. \cite{wang20196} proposes the first category-level pose tracker, while our approach performs pose estimation without using temporal information.

\noindent\textbf{Shape Deformation.} 3D deformation is commonly applied for object reconstruction from a single image. \cite{yumer2016learning, pontes2018image2mesh, kurenkov2018deformnet} use free-form deformation in conjunction with voxel, mesh and point cloud representations, respectively. \cite{wang2018pixel2mesh, wen2019pixel2mesh++} start from a coarse shape and predict a series of deformations to progressively improve the geometry. Similar to us, \cite{wang20193dn} also supervises the deformation with the global feature of the target. However, we circumvent the fixed topology assumption of the mesh representation by using point clouds instead.

\section{Our Method} \noindent\textbf{Background.} Given an RGB-D image as the input, our goal is to detect and localize all visible object instances in the 3D space. The object instances have not been seen previously, but must come from known categories. Each object is represented by a class label and an amodal 3D bounding box parameterized by its 6D pose and size. The 6D pose is defined to be the rigid transformation (i.e. rotation and translation) that transforms the object from the reference to the camera coordinate frame. It is common to choose the coordinate frame of the given 3D object models as the reference in instance-level 6D object pose estimation. Unfortunately, this is not viable for our category-level task since the 3D models of the instances are not available. To mitigate this problem, we leverage the Normalized Object Coordinate Space (NOCS) -- a shared canonical representation for all possible object instances within a category proposed in \cite{wang2019normalized}. The categorical 6D object pose and size estimation problem is then reduced to finding the similarity transformation between the observed depth map of each object instance and its corresponding points in the NOCS (i.e. \textit{NOCS coordinates}).

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{overview.pdf} \caption{ \textbf{Overview of our approach.} We first obtain a foreground mask for each object instance. Next our network reconstructs the instance (\textit{bowl} as an example) and establishes the correspondences between the observed points and the reconstructed model. Finally, the 6D pose is recovered by estimating a similarity transformation. Refer to Fig. \ref{fig:network} for the details of our network.
} \label{fig:overview} \end{figure}

\noindent\textbf{Overview.} In contrast to \cite{wang2019normalized} that directly outputs the NOCS coordinates from a Convolutional Neural Network (CNN), we propose an intermediate step to estimate the deformation of a pre-learned shape prior to improve the learning of intra-class shape variation. Our shape priors are learned from a collection of models spanning all categories (Section \ref{sect:prior}). As shown in Fig.~\ref{fig:overview}, our approach consists of three stages. The first stage performs instance segmentation on the color image using an off-the-shelf network (e.g. Mask R-CNN \cite{he2017mask}). Next we convert the masked depth map into a point cloud with the camera intrinsic parameters for each instance and crop an image patch according to the bounding box of the mask. Taking the point cloud, image patch, and the corresponding shape prior as inputs, our network outputs a deformation field that deforms the shape prior into the shape of the desired object instance (a.k.a. the reconstructed model). Furthermore, our network outputs a set of correspondences that associates each point in the point cloud obtained from the observed depth map of the object instance with the points of the reconstructed model. This set of correspondences is used to map the reconstructed model to the NOCS coordinates (Section \ref{sect:network}). Finally, the 6D pose and size of the object can be estimated by registering the NOCS coordinates and the point cloud obtained from the observed depth map (Section \ref{sect:registration}).

\subsection{Categorical Shape Prior} \label{sect:prior} Although object shape varies among different instances, an investigation of the 3D models reveals that objects of the same category (especially artificially generated objects) tend to have semantically and geometrically similar components. For example, cameras are usually made up of a nearly cuboid body and a cylindrical lens; and mugs are typically cylindrical with a handle. These categorical characteristics provide strong priors on the shape reconstruction of novel instances. We propose the learning of a mean shape to capture the high-level characteristics from all the available models for each respective category. To this end, we first train an autoencoder with all available object models and then compute the mean latent embedding of each object category with the encoder. These latent embeddings are passed into the decoder to get the mean shape priors for each object category. Unlike methods such as simple averaging \cite{wallace2019few} and principal component analysis (PCA) \cite{burchfiel2019probabilistic} that operate on voxel representations, our autoencoder framework can be easily altered to take any 3D representation.

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{shape_priors.pdf} \caption{ (a) Architecture of the autoencoder. (b) The latent embeddings of all instances are mapped to $\mathcal{R}^2$ with t-SNE for visualization. These instances are from 6 categories - \textit{bottle}, \textit{bowl}, \textit{camera}, \textit{can}, \textit{laptop} and \textit{mug}.
(c) Shape priors are reconstructed by passing the mean latent embedding of each category through the decoder.} \label{fig:shape_priors} \end{figure}

Given a shape collection $\mathcal{M} = \{M_c^i \, \mid \, i=1, 2, \cdots, N; \, c = 1, 2, \cdots, C\}$, where $M_c^i$ is the 3D point cloud model of instance $i$ from category $c$, we independently apply a similarity transformation to each model such that it is properly aligned in the NOCS. This step ensures that the learned shape prior has the same scale and orientation as the target shape to be reconstructed. The encoder $\Phi$ takes the point cloud and outputs a low-dimensional feature vector, i.e. the latent embedding $z_c^i \in \mathcal{R}^n$. The decoder $\Psi$ takes this feature vector and outputs a point cloud that reconstructs the input: \begin{equation} \hat{M}_c^i = (\Psi \circ \Phi)(M_c^i) = \Psi (z_c^i). \end{equation} Specifically, we adopt the PointNet-like encoder proposed in \cite{yuan2018pcn}, and a three-layer fully-connected decoder as shown in Fig. \ref{fig:shape_priors}a. The reconstruction error is measured by the Chamfer distance: \begin{equation} \label{eq:cd_loss} d_{\text{CD}}(M_c^i, \hat{M}_c^i) = \sum_{x \in M_c^i} \min_{y \in \hat{M}_c^i} \| x - y \|_2^2 + \sum_{y \in \hat{M}_c^i} \min_{x \in M_c^i} \| x - y \|_2^2 . \end{equation} The autoencoder is trained on the shape collection by minimizing the reconstruction error. Once the training has converged, we obtain the latent embeddings $\{ z_c^i \}$ of all instances in $\mathcal{M}$. Although not explicitly enforced during training, these latent embeddings form clusters in the latent space according to their categories. Fig. \ref{fig:shape_priors}b visualizes the clustering effect of the embeddings. We use t-SNE \cite{maaten2008visualizing} to further embed these features in $\mathcal{R}^2$ for visualization. Similar clustering results are also observed on a different set of models \cite{yang2018foldingnet}. Based on this observation, we compute the mean latent embedding for each category and then pass it through the decoder to construct the shape prior: \begin{equation} M_c = \Psi (z_c) = \Psi \left( \frac{1}{N_c} \sum_{i} z_c^i \right). \end{equation} The resulting categorical shape priors $\{ M_c \}$ are shown in Fig. \ref{fig:shape_priors}c.
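For concreteness, the symmetric Chamfer distance of Eq. \ref{eq:cd_loss} can be computed in a few lines. The following minimal NumPy sketch is for two point clouds; the batched GPU implementation used during training is not shown:

\begin{verbatim}
import numpy as np

def chamfer_distance(P, Q):
    # Symmetric Chamfer distance between point clouds P (n x 3) and
    # Q (m x 3): squared distance to the nearest neighbor, summed in
    # both directions, as in the Chamfer distance defined above.
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # (n, m)
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()
\end{verbatim}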
\subsection{Our Network Architecture} \label{sect:network} We denote the observation of an object instance as $(V, I)$, where $V \in \mathcal{R}^{N_v \times 3}$ is the point cloud and $I \in \mathcal{R}^{H \times W \times 3}$ is the image patch. $N_v$ denotes the number of foreground pixels with a valid depth value. The corresponding shape prior is $M_c \in \mathcal{R}^{N_c \times 3}$, where $N_c$ is the number of points in $M_c$. Our network takes $V$, $I$ and $M_c$ as inputs, and outputs the per-point deformation field $D \in \mathcal{R}^{N_c \times 3}$ and a correspondence matrix $A \in \mathcal{R}^{N_v \times N_c}$. The final reconstructed model is $M = M_c + D$. Each row of $A$ sums to 1 since it represents the soft correspondences between a point in $V$ and all points in $M$. As shown in Fig. \ref{fig:network}, our network is composed of four parts: (1) extracting features from the object instance; (2) extracting features from the shape prior; (3) regressing the deformation field $D$; and (4) estimating the correspondence matrix $A$.

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{network.pdf} \caption{ \textbf{Our Network Architecture.} The upper-left and lower-left branches extract point and global features from the instance and the shape prior respectively. The upper-right branch estimates the correspondence matrix, and the lower-right branch predicts the deformation field. The exchange of global features is the key part of our network. } \label{fig:network} \end{figure}

Considering that depth and color are two different modalities, we follow the pixel-wise dense fusion approach proposed in \cite{wang2019densefusion} to effectively extract RGB-D features from the observation. For the point cloud $V$, we use an embedding network similar to PointNet \cite{qi2017pointnet} to generate per-point geometric features by mapping each point in $V$ to the $d_g$-dimensional feature space. The image patch $I$ is processed with a fully convolutional network which follows an encoder-decoder architecture and maps $I$ to $\mathcal{R}^{H \times W \times d_c}$. Next we associate the geometric feature of each point with its corresponding color feature and concatenate the feature pairs. Since each point in $V$ has a corresponding pixel in $I$, but not vice versa, redundant color features are discarded. The concatenated features are termed ``instance point features'' and fed to another shared multi-layer perceptron. An average pooling layer is used to generate the ``instance global feature''.

The categorical shape prior $M_c$ is a point cloud with purely geometric information. We apply a simpler embedding network to extract the ``category point features'' and the ``category global feature''. The shape prior $M_c$ provides the prior knowledge of the category, i.e. the coarse shape geometry and canonical pose. Although the observation $(V, I)$ is partial, it provides instance-specific details of the target shape. A natural way to reconstruct the object in NOCS is to deform $M_c$ under the guidance of $(V, I)$. Consequently, we concatenate the category and instance global features, and enrich the category point features with the concatenated features. The obtained feature vectors are successively convolved with $1 \times 1$ kernels to generate the deformation field $D$. A similar intuition and feature concatenation strategy also apply to the estimation of $A$. We combine the instance point features and global feature to aggregate both local and global information for each point. Each point in $V$ is then related to the points of the reconstructed model through concatenation with the category global feature. We obtain the NOCS coordinates, denoted as $P$, of the points in $V$ by multiplying $A$ and $M$, i.e. \begin{equation} \label{eq:noc} P = A \times M = A (M_c + D) \in \mathcal{R}^{N_v \times 3}. \end{equation}

\subsection{6D Pose Estimation} \label{sect:registration} Our goal is to estimate the 6D pose and size of the object instance. Given the depth observation $V$ and its NOCS coordinates $P$, the optimal similarity transformation parameters (rotation, translation, and scaling) can be computed by solving the absolute orientation problem using the Umeyama algorithm \cite{umeyama1991least}. We also implement the RANSAC algorithm \cite{fischler1981random} for robust estimation.
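For reference, a minimal sketch of the closed-form Umeyama solve is given below. It finds the scale $s$, rotation $R$, and translation $t$ minimizing $\sum_i \| v_i - (s R p_i + t) \|_2^2$; this follows \cite{umeyama1991least} and is an illustration, not the exact code used in our experiments:

\begin{verbatim}
import numpy as np

def umeyama(P, V):
    # Similarity transform (s, R, t) aligning NOCS coordinates P (n x 3)
    # to observed points V (n x 3), following Umeyama (1991).
    mu_p, mu_v = P.mean(0), V.mean(0)
    Pc, Vc = P - mu_p, V - mu_v
    cov = Vc.T @ Pc / len(P)                     # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0  # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_p = (Pc ** 2).sum() / len(P)             # variance of NOCS points
    s = (D * np.diag(S)).sum() / var_p           # optimal scale
    t = mu_v - s * R @ mu_p
    return s, R, t
\end{verbatim}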
\subsection{Loss Functions} \label{sect:loss} In this section, we define the loss functions used to train our network, and explain how we handle object symmetry during training.

\noindent\textbf{Reconstruction Loss.} We assume that the ground-truth model $M_{gt}$ is available during training. The deformation field $D$ is supervised indirectly by minimizing the Chamfer distance (c.f. Eq. \ref{eq:cd_loss}) between $M$ and $M_{gt}$, i.e. $ L_{\text{cd}} = d_{\text{CD}}(M, M_{gt}) = d_{\text{CD}}(M_c + D, M_{gt}) $.

\noindent\textbf{Correspondence Loss.} It is impractical and unnecessary to pre-compute the ground-truth value for $A$. Instead, we supervise $A$ indirectly via the NOCS coordinates $P$ (which result from applying the correspondence matrix to the reconstructed model), since the ground-truth NOCS coordinates $P_{gt}$ can be obtained easily from the object model and its 6D pose through image rendering. We use the smooth $L_1$ loss function: \begin{equation} L_{\text{corr}} (P, P_{gt}) = \frac{1}{N_v} \sum_{\mathbf{x} \in P} \sum_{i=1,2,3} \begin{cases} 5(x_i - y_i)^2 , & \text{if} \; |x_i - y_i| \leq 0.1 \\ |x_i - y_i| -0.05 , & \text{otherwise} \end{cases}, \end{equation} where $\mathbf{x} = (x_1, x_2, x_3) \in P$, and $\mathbf{y} = (y_1, y_2, y_3) \in P_{gt}$.

Object symmetry is an inevitable problem for pose estimation algorithms, especially for those that require supervised training. For a symmetric object, there exists at least one rotation under which the appearance of the object is preserved. In other words, two observations of a symmetric object can be very similar but carry different rotation labels. We follow the solution proposed by \cite{pitteri2019object} to map ambiguous rotations to a canonical one. More specifically, the Map operator for an arbitrary rotation $R \in SO(3)$ is defined as: \begin{equation} \text{Map}(R) = R \hat{S}, \: \text{with} \; \hat{S} = \underset{S \in \mathcal{S}(M_c^i)}{\arg\min} {\| R S - I_3 \|_F}, \end{equation} where the proper symmetry group $ \mathcal{S}(M_c^i) $ is the set of rotations which preserve the appearance of a given object $M_c^i$. The experimental datasets assume continuous symmetry, with the axis of symmetry being the y-axis of the NOCS. Hence, $\hat{S}$ takes the following form: \begin{equation} \hat{S} = \setlength{\arraycolsep}{5pt} \begin{bmatrix} \cos \hat{\theta} & 0 & - \sin \hat{\theta} \\ 0 & 1 & 0 \\ \sin \hat{\theta} & 0 & \cos \hat{\theta} \end{bmatrix}, \; \text{with} \; \hat{\theta} = \operatorname{atan2} (R_{13} - R_{31}, R_{11} + R_{33}), \end{equation} where $R_{11}$, $R_{13}$, $R_{31}$, and $R_{33}$ are the elements of $R$. During training, we apply the Map operator to the rotation label, $R_{gt} \leftarrow R_{gt} \hat{S}$, to eliminate the rotation ambiguity of any symmetric object with ground-truth pose $(R_{gt}, T_{gt})$. In practice, our network is supervised by the ground-truth NOCS coordinates $P_{gt}$. Equivalently, we transform $P_{gt}$ by $\hat{S}^T$: $P_{gt} \leftarrow \hat{S}^T P_{gt}$.
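The canonical mapping above is straightforward to implement. A small illustrative sketch for objects with a continuous symmetry about the y-axis is:

\begin{verbatim}
import numpy as np

def map_symmetric(R):
    # Canonicalize a rotation R for objects with continuous symmetry
    # about the y-axis: find the in-symmetry rotation S_hat closest to
    # undoing R (the Map operator above), and return R @ S_hat.
    theta = np.arctan2(R[0, 2] - R[2, 0], R[0, 0] + R[2, 2])
    c, s = np.cos(theta), np.sin(theta)
    S_hat = np.array([[  c, 0.0,  -s],
                      [0.0, 1.0, 0.0],
                      [  s, 0.0,   c]])
    return R @ S_hat
\end{verbatim}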
\noindent\textbf{Regularization Losses.} Row $A_i$ of the matrix $A$ represents the distribution over the correspondences between the $i$-th point of $V$ and the points in $M$. $A_i$ can be understood as a relaxed one-hot vector, since each point of $V$ can usually be well approximated by at most three points of $M$. We encourage $A_i$ to be a peaked distribution by minimizing the average cross entropy: $L_{\text{entropy}} = \frac{1}{N_v} \sum_{i}\sum_{j} - A_{i,j} \log A_{i,j}$. We also regularize $D$ to discourage large deformations: $L_{\text{def}} = \frac{1}{N_c} \sum_{\mathbf{d}_i \in D} \| \mathbf{d}_i \| _2$. A minimal deformation preserves the semantic consistency between the shape prior and the reconstructed model. For example, we want a point that belongs to the handle of the ``mug'' prior to remain on the handle after deformation. This consistency is beneficial for the prediction of the correspondence matrix $A$. In summary, the overall objective is a weighted sum of all four losses: \begin{equation} L = \lambda_{1} L_{\text{cd}} + \lambda_{2} L_{\text{corr}} + \lambda_{3} L_{\text{entropy}} + \lambda_{4} L_{\text{def}} . \end{equation}

\section{Experiments} \subsection{Experimental Setup} \noindent\textbf{Datasets.} The CAMERA \cite{wang2019normalized} dataset is generated by rendering and compositing synthetic objects into real scenes in a context-aware manner. In total, there are 300K composite images, of which 25K are set aside for evaluation. The training set contains 1085 object instances selected from 6 different categories - \textit{bottle}, \textit{bowl}, \textit{camera}, \textit{can}, \textit{laptop} and \textit{mug}. The evaluation set contains 184 different instances. The REAL \cite{wang2019normalized} dataset is complementary to CAMERA. It captures 4300 real-world images of 7 scenes for training, and 2750 real-world images of 6 scenes for evaluation. Each set contains 18 real objects spanning the 6 categories. The two evaluation sets are referred to as CAMERA25 and REAL275.

\noindent\textbf{Evaluation Metric.} Following \cite{wang2019normalized}, we independently evaluate the performance of 3D object detection and 6D pose estimation. We report the average precision at different Intersection-Over-Union (IoU) thresholds for object detection. For 6D pose evaluation, the average precision is computed at $n^\circ \, m\,\text{cm}$, i.e. a prediction is counted as correct if the rotation error is below $n^\circ$ and the translation error is below $m$\,cm. We ignore the rotational error around the axis of symmetry for symmetric object categories (e.g. \textit{bottle}, \textit{bowl}, and \textit{can}). In particular, we treat \textit{mug} as a symmetric object in the absence of the handle, and as an asymmetric object otherwise.

\noindent\textbf{Baseline.} Wang et al. \cite{wang2019normalized} currently provides the only publicly available code and datasets for the category-level 6D object pose and size estimation task. Furthermore, it also represents the state-of-the-art performance on this task. Hence, we choose \cite{wang2019normalized} as our baseline for comparison.

\subsection{Implementation Details} We collect all the instances in the CAMERA training dataset to train the autoencoder. Shape priors are learned from this collection and used in all experiments. Each prior consists of 1024 points. We use the Mask R-CNN implementation by Matterport \cite{matterport_maskrcnn_2017} for instance segmentation. For each detected instance, we resize the image patch to $192 \times 192$, and randomly sample 1024 points by repetition (if the point count is insufficient) or downsampling (otherwise). To extract instance color features, we choose PSPNet \cite{zhao2017pyramid} with ResNet-18 \cite{he2016deep} as the backbone. We randomly select 5 point-pairs to generate a hypothesis for the RANSAC-based pose fitting. The maximum number of iterations is 128 and the inlier threshold is set to 10\% of the object diameter. For the hyperparameters of the total loss, we empirically find that $\lambda_{1} = 5.0$, $\lambda_{2} = 1.0$, $\lambda_{3} = 10^{-4}$, and $\lambda_{4} = 0.01$ are good choices.
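Putting the above together, the RANSAC-based pose fitting can be sketched as follows, reusing the umeyama() function sketched in Section \ref{sect:registration}. The sample size, iteration count, and inlier threshold are the values quoted above; the sketch is illustrative rather than our exact implementation:

\begin{verbatim}
import numpy as np

def ransac_pose(P, V, diameter, iters=128,
                rng=np.random.default_rng(0)):
    # RANSAC wrapper around umeyama(): each hypothesis is fit on 5
    # correspondences; inliers lie within 10% of the object diameter.
    thresh, best_inl = 0.1 * diameter, None
    for _ in range(iters):
        idx = rng.choice(len(P), size=5, replace=False)
        s, R, t = umeyama(P[idx], V[idx])
        resid = np.linalg.norm(V - (s * P @ R.T + t), axis=1)
        inl = resid < thresh
        if best_inl is None or inl.sum() > best_inl.sum():
            best_inl = inl
    return umeyama(P[best_inl], V[best_inl])  # refit on all inliers
\end{verbatim}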
\subsection{Comparison to Baseline} We compare our approach to the Baseline \cite{wang2019normalized} on CAMERA25 and REAL275. Quantitative results are summarized in Table \ref{table:map_comparison}.

\noindent\textbf{CAMERA25.} In the setting of estimating the 6D object pose and size from an RGB-D image, we achieve a mAP of 83.1\% for 3D IoU at 0.75, and a mAP of 54.3\% for 6D pose at $ 5^\circ \, 2 \text{cm} $. Our results are 14\% and 22\% higher than the Baseline \cite{wang2019normalized}, respectively. To make a fair comparison with the Baseline \cite{wang2019normalized}, which takes an RGB image as its input, we naively remove the depth input and the related sub-networks from our network (i.e. the RGB image is the only input). As shown in Table~\ref{table:map_comparison}, our results without depth input are still significantly better than the Baseline \cite{wang2019normalized} (i.e. +15.5\% and +17.9\%). On one hand, this experiment shows the advantage of explicitly handling the intra-class shape variation, and the effectiveness of our method which reconstructs the object via deformation. On the other hand, it also shows that adding depth to the network does help to improve the overall performance, although our improved performance does not rely solely on it. Given that the depth image is required to uniquely determine the scale of the object, we recommend using it in practical applications. The top row of Fig. \ref{fig:map} shows the average precision at different error thresholds for all 6 object categories. It provides an independent analysis of the 3D IoU, rotation, and translation errors.

\begin{table}[t] \centering \caption{ Comparisons on CAMERA25 and REAL275. We report the mAP w.r.t. different thresholds on 3D IoU, and rotation and translation errors. } \label{table:map_comparison} \setlength{\tabcolsep}{5pt} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{ c | c | c c c c c c } \toprule \multirow{2}{*}{Data} & \multirow{2}{*}{Method} & \multicolumn{6}{c}{mAP} \\ \cline{3-8} & & $3\text{D}_{50}$ & $3\text{D}_{75}$ & $5^\circ \, 2 \text{cm}$ & $5^\circ \, 5 \text{cm}$ & $10^\circ \, 2 \text{cm}$ & $10^\circ \, 5 \text{cm}$ \\ \midrule \multirow{3}{*}{CAMERA25} & {Baseline \cite{wang2019normalized}} & 83.9 & 69.5 & 32.3 & 40.9 & 48.2 & 64.6 \\ & {Ours (RGB)} & 93.1 & \textbf{84.6} & 50.2 & 54.5 & 70.4 & 78.6 \\ & {Ours (RGB-D)} & \textbf{93.2} & 83.1 & \textbf{54.3} & \textbf{59.0} & \textbf{73.3} & \textbf{81.5} \\ \midrule \multirow{3}{*}{REAL275} & {Baseline \cite{wang2019normalized}} & \textbf{78.0} & 30.1 & 7.2 & 10.0 & 13.8 & 25.2 \\ & {Ours (RGB)} & 75.2 & 46.5 & 15.7 & 18.8 & 33.7 & 47.4 \\ & {Ours (RGB-D)} & 77.3 & \textbf{53.2} & \textbf{19.3} & \textbf{21.4} & \textbf{43.2} & \textbf{54.1} \\ \bottomrule \end{tabular} \end{adjustbox} \end{table}

\noindent\textbf{REAL275.} Since the REAL training set only contains 3 object instances per category, we enlarge this training set such that the network can generalize well to unseen objects. Following the Baseline \cite{wang2019normalized}, we randomly select data from the CAMERA and REAL training sets according to a ratio of $3:1$. Comparing our full RGB-D model to the Baseline \cite{wang2019normalized}, our approach improves the mAP by 23.1\% for 3D IoU at 0.75 and 12.1\% for 6D pose at $ 5^\circ \, 2 \text{cm} $. In the stricter RGB-only comparison, we still outperform the Baseline by 16.4\% and 8.5\%, respectively. These results provide further evidence to support our approach. Fig. \ref{fig:map} (bottom) shows a more detailed analysis of the errors.

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{map.png} \caption{Average precision vs.
error thresholds on CAMERA25 (top row) and REAL275 (bottom row).} \label{fig:map} \end{figure}

\subsection{Evaluation of Shape Reconstruction} \begin{table}[t] \centering \caption{Evaluation of shape reconstruction with the CD metric ($\times 10^{-3}$).} \label{table:shape_evaluation} \setlength{\tabcolsep}{3pt} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{ c | c | c c c c c c c} \toprule {Data} & {Model} & {Bottle} & {Bowl} & {Camera} & {Can} & {Laptop} & {Mug} & {Average} \\ \midrule \multirow{2}{*}{CAMERA25} & {Reconstruction} & 1.81 & 1.63 & 4.02 & 0.97 & 1.98 & 1.42 & 1.97 \\ & {Shape Prior} & 3.41 & 2.20 & 9.01 & 2.21 & 3.27 & 2.10 & 3.70 \\ \midrule \multirow{2}{*}{REAL275} & {Reconstruction} & 3.44 & 1.21 & 8.89 & 1.56 & 2.91 & 1.02 & 3.17 \\ & {Shape Prior} & 4.99 & 1.16 & 9.85 & 2.38 & 7.14 & 0.97 & 4.41 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table}

To evaluate the quality of the reconstruction, we compute the CD metric (c.f. Eq. \ref{eq:cd_loss}) between the reconstructed model from our method and the ground-truth model in the NOCS. We obtain a CD metric of 1.97 on CAMERA25 and 3.17 on REAL275. In comparison, the CD metrics of the shape priors from our autoencoder are 3.70 and 4.41 on the respective datasets. The better CD metrics of the reconstructed models compared to the shape priors show that the deformation estimation in our framework improves the quality of the 3D model reconstruction. Table \ref{table:shape_evaluation} shows the CD metric of our reconstructed models and the shape priors for each category.

\subsection{Ablation Studies} \noindent\textbf{Different shape priors.} We first evaluate how different shape priors influence the performance. All settings are kept the same in this experiment except for the choice of the priors. Results are summarized in Tables \ref{table:ablation_camera} and \ref{table:ablation_real}. ``Embedding'' refers to the priors obtained from decoding the mean latent embeddings. We also try the instance whose latent embedding has the minimum $L_2$ distance to the mean latent embedding (denoted as ``NN''). In addition, we explore random selection of one instance per category from the shape collection to compose our priors (denoted as ``Random''). In general, our approach remains stable under different priors. Our network can adapt to different shape priors because the deformation is explicitly estimated. We achieve the best result for accurate pose (i.e. $5^\circ \, 2 \text{cm}$) estimation when the learned categorical shape prior is used. Since our main target is to recover the 6D pose, we choose ``Embedding'' as our best model. To validate whether the priors are necessary at all, we use a point cloud uniformly sampled from a sphere of diameter one as our prior (denoted as ``None''). The mAP at $5^\circ \, 2 \text{cm}$ decreases by 3.7\% on the real dataset when there are no priors, but the best result is achieved for object size estimation. Although shape priors are beneficial for estimating the 6D pose, they sometimes bias the shape reconstruction.

\begin{table}[t] \centering \caption{Ablation studies on CAMERA25.
Refer to the text for more details.} \label{table:ablation_camera} \setlength{\tabcolsep}{5pt} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{ c | c | c c c c c c } \toprule \multirow{2}{*}{} & \multirow{2}{*}{Network} & \multicolumn{6}{c}{mAP} \\ \cline{3-8} & & $3\text{D}_{50}$ & $3\text{D}_{75}$ & $5^\circ \, 2 \text{cm}$ & $5^\circ \, 5 \text{cm}$ & $10^\circ \, 2 \text{cm}$ & $10^\circ \, 5 \text{cm}$ \\ \midrule \multirow{4}{*}{Shape Priors} & {Embedding} & 93.2 & 83.1 & \textbf{54.3} & \textbf{59.0} & 73.3 & 81.5 \\ & {NN} & 93.3 & 85.7 & 52.7 & 57.3 & 72.9 & 81.3 \\ & {Random} & 93.3 & 85.7 & 53.4 & 58.0 & 72.8 & 81.0 \\ & {None} & \textbf{93.3} & \textbf{85.8} & 54.0 & 58.8 & 73.1 & 81.6 \\ \midrule \multirow{1}{*}{NOCS Coords} & {Regression} & 93.3 & 85.3 & 51.2 & 55.6 & \textbf{73.8} & \textbf{82.1} \\ \midrule \multirow{2}{*}{Regularization} & {w/o Def.} & 93.2 & 85.1 & 53.9 & 58.7 & 73.1 & 81.4 \\ & {w/o Entropy} & 93.2 & 85.1 & 53.2 & 57.9 & 73.2 & 81.8 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table}

\begin{table}[t] \centering \caption{Ablation studies on REAL275. Refer to the text for more details.} \label{table:ablation_real} \setlength{\tabcolsep}{5pt} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{ c | c | c c c c c c } \toprule \multirow{2}{*}{} & \multirow{2}{*}{Network} & \multicolumn{6}{c}{mAP} \\ \cline{3-8} & & $3\text{D}_{50}$ & $3\text{D}_{75}$ & $5^\circ \, 2 \text{cm}$ & $5^\circ \, 5 \text{cm}$ & $10^\circ \, 2 \text{cm}$ & $10^\circ \, 5 \text{cm}$ \\ \midrule \multirow{4}{*}{Shape Priors} & {Embedding} & 77.3 & 53.2 & \textbf{19.3} & \textbf{21.4} & \textbf{43.2} & \textbf{54.1} \\ & {NN} & 75.9 & 52.6 & 17.0 & 19.0 & 42.0 & 51.6 \\ & {Random} & 75.8 & 52.2 & 17.9 & 20.1 & 42.3 & 51.3 \\ & {None} & 77.2 & \textbf{55.5} & 15.6 & 19.8 & 38.4 & 53.6 \\ \midrule \multirow{1}{*}{NOCS Coords} & {Regression} & \textbf{78.7} & 54.9 & 13.7 & 14.9 & 42.5 & 51.4 \\ \midrule \multirow{2}{*}{Regularization} & {w/o Def.} & 77.1 & 50.2 & 13.4 & 15.4 & 37.3 & 49.8 \\ & {w/o Entropy} & 77.3 & 53.3 & 15.7 & 18.8 & 38.5 & 51.3 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table}

\noindent\textbf{Directly regress the NOCS coordinates?} As indicated by Eq. \ref{eq:noc}, our approach decouples the NOCS coordinates $P$ into the shape reconstruction $M$ and the dense correspondences $A$. However, both the network architecture and the training would be much simpler if we followed \cite{wang2019normalized} and regressed $P$ directly (denoted as ``Regression'' in Tables \ref{table:ablation_camera} and \ref{table:ablation_real}). For 6D pose estimation, the mAP of ``Regression'' at $5^\circ \, 2 \text{cm}$ is notably lower than that of ``Embedding'' on CAMERA25 ($-3.1\%$) and REAL275 ($-5.6\%$). This result further supports the benefit of handling shape variation via reconstruction over naive regression of the NOCS coordinates. ``Regression'' achieves a slightly better mAP for object size estimation since it only finds the NOCS coordinates for the observed part, while ``Embedding'' needs to complete the unknown part of the object.

\noindent\textbf{Regularization losses.} To validate the necessity of the two regularization losses, we train the network without regularizing the deformation or the correspondence, while keeping everything else the same as ``Embedding''. The mean average precisions of both variants are still comparable to ``Embedding'' on the synthetic dataset. However, the mAP of the 6D pose at $5^\circ \, 2 \text{cm}$ drops noticeably ($-5.9\%$ and $-3.6\%$) on the more difficult real dataset.
\subsection{Qualitative Results} In Fig. \ref{fig:qualitative}, we provide several qualitative results for both synthetic and real instances. The 6D pose and object size can be reliably recovered from noisy point correspondences using the RANSAC-based pose fitting. The shape reconstruction can capture the variations between instances. The quality of our predictions is generally better on synthetic data than on real data, which is an indication that observation noise needs more attention in our future work. Out of the six categories, \textit{camera} shows the least accurate reconstruction due to its more complicated and varying geometry.

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{qualitative_results.pdf} \caption{Examples of qualitative results from CAMERA25 (top rows) and REAL275 (bottom rows). For each example, we visualize the results of pose estimation and the reconstructed model $M$. Our estimations are shown in red, while the ground truths are shown in green. } \label{fig:qualitative} \end{figure}

\section{Conclusions} We present a novel approach for category-level 6D object pose estimation. Our network explicitly models intra-class shape variation by estimating the deformation from a shape prior to the object model. Shape priors are learned from a collection of object models and constructed in the latent space. Experiments on synthetic and real datasets demonstrate the advantage of our proposed approach.

\paragraph{\bf{Acknowledgements.}} This research is supported in part by the Singapore MOE Tier 1 grant R-252-000-A65-114, and the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project \#A18A2b0046). \clearpage \bibliographystyle{splncs04}
{ "timestamp": "2020-07-17T02:21:24", "yymm": "2007", "arxiv_id": "2007.08454", "language": "en", "url": "https://arxiv.org/abs/2007.08454" }
\section{Introduction} \label{sec:intro} High-sensitivity, unbiased surveys of the gamma-ray sky are important for finding new astrophysical objects -- both to understand their bulk properties, and to constrain new physics beyond the standard model. Discovering the sources of cosmic rays and determining the underlying acceleration mechanisms requires precise measurements of the gamma-ray spectra of objects above several tens of TeV. In addition, the indirect search for dark matter particles in the GeV--TeV regime also hinges on detecting a steady flux of photons from several galactic and extra-galactic targets of interest.

The current generation of ground-based gamma-ray telescopes, in particular Imaging Atmospheric Cherenkov Telescopes \citep{Holder:2008ux,2014APh....54...67B,2017APh....94...29A,2013JInst...8P6008A}, is capable of resolving gamma-ray sources to $\leq 0.1^\circ$ precision. The highly complementary survey instruments, such as Tibet-AS$\gamma$ \citep{PhysRevLett.123.051101}, LHAASO \citep{Zhen:2014zpa}, ARGO-YBJ \citep{Bartoli:2013qxm}, and HAWC, have further extended the reach of TeV astronomy with their high up-time and unprecedented sensitivity to spatially extended sources. The High Altitude Water Cherenkov (HAWC) observatory has been continuously monitoring the Northern sky in TeV cosmic rays and gamma rays since commencing full operations in 2015, and has achieved a sensitivity down to a few percent of the Crab flux in five years. This work presents the results of an all-sky time-integrated search for point-like and extended sources using 1523 days of HAWC data. As a follow-up to the 2HWC catalog of TeV gamma-ray sources \citep{2HWC}, we introduce an updated catalog using more data and improved analysis methods.

This paper is structured as follows. Section \ref{sec:hawc} provides a brief description of the HAWC detector, as well as the data and the analysis method we use in the construction of the catalog. Section \ref{sec:results} presents the results of the catalog search and provides preliminary spatial and spectral information on all sources. A broad discussion of the results is also presented. Section \ref{sec:discuss} discusses the systematic uncertainties and methodological limitations of this work. Section \ref{sec:conclude} concludes the paper.

\section{Instrument and Data Analysis}\label{sec:hawc} \subsection{The HAWC Gamma-ray Observatory} HAWC consists of 300 water tanks, each filled with $\sim$200,000 liters of purified water and instrumented with four photo-multiplier tubes (PMTs). Very-high-energy primary particles interact with Earth's atmosphere and create an extensive air shower of secondary particles. Charged particles produced in an air shower emit Cherenkov radiation as they pass through the water in HAWC's tanks, which in turn produces photo-electrons in the PMTs. The HAWC observatory is sensitive to gamma rays in an energy range from hundreds of GeV to hundreds of TeV, with the exact energy threshold depending on the declination and energy spectrum of each source. Due to its location at a latitude of 19$^\circ$ N and its wide field-of-view, HAWC can observe about two thirds of the gamma-ray sky (from about \ang{-26} to +\ang{64} in declination) every day, with an instantaneous field-of-view of $>$2.0 sr. HAWC's angular resolution (68\% containment radius for photons) ranges between \ang{0.1} and \ang{1.0} depending on the energy and zenith angle of the signal. More details about the HAWC detector can be found in \cite{2HWC,oldCrab}.
\subsection{Data Selection and Reduction} The temporal and spatial distribution of the charge deposited in HAWC's PMTs is used to reconstruct the properties of the primary particle producing the air shower. The difference in timing between the signals recorded in different tanks allows us to reconstruct the direction of the primary particle. The spatial distribution and magnitude of the charges can be used to reconstruct the primary energy and to separate gamma-ray induced showers from cosmic-ray induced ones. We distribute the reconstructed events into nine analysis bins according to the fraction of the operating PMTs that recorded a signal for a given event. The fraction of PMTs hit is correlated with the primary energy, allowing us to extract the energy spectrum of gamma-ray sources. We apply gamma-hadron separation cuts to reduce the background of cosmic-ray induced showers. A detailed description of the air shower reconstruction and the quality cuts applied to the data can be found in \cite{2HWC,oldCrab}.

We further bin the gamma-ray candidate events according to the direction of the primary particle in celestial coordinates (Right Ascension and Declination, J2000 epoch). We use the HEALPix binning scheme \citep{healpix}, with an NSIDE parameter of 1024. The dominant background is given by hadronic showers that pass the gamma-hadron cuts. For each pixel, we estimate the expected number of remaining background events after cuts using the method of Direct Integration as described in \cite{DI_milagro}.

\subsection{Construction of the Catalog} \label{sec:analysis} The 3HWC catalog is based on data collected by the HAWC observatory between November 2014 and June 2019, corresponding to a livetime of 1523 days -- about three times the livetime of the 2HWC catalog data set. For the most part, the construction of the catalog follows the same method as the previous 2HWC catalog, which is described in \cite{2HWC}. We summarize the algorithm below, with particular focus on the differences from the previous catalog search.

\subsubsection{Source Search} We perform a blind search for sources across HAWC's field-of-view using the likelihood framework discussed in \cite{liff}. The likelihood calculation assumes that the number of counts in each bin/pixel is distributed according to a Poisson distribution, with the mean given by the estimated background counts plus (if applicable) the predicted number of gamma-ray counts from the convolution of the source model with the detector response. For each HEALPix pixel, we calculate a likelihood ratio $\Lambda = \hat{\mathcal{L}}_{s+b}/\mathcal{L}_b$, comparing the likelihood, $\hat{\mathcal{L}}_{s+b}$, of the best-fit model with a gamma-ray source centered on that pixel to that of a background-only model, $\mathcal{L}_{b}$. We define a test statistic, $TS=2\,\log\left( \Lambda \right)$. Assuming that the null hypothesis is true, the $TS$ is distributed according to a $\chi^2$ distribution with one degree of freedom \citep{wilks1938}, so that $\pm \sqrt{TS}$ corresponds to the (``pre-trials'') significance in units of Gaussian standard deviations. The negative sign is used for pixels in which the best-fit flux normalization is negative. The signal hypothesis considers a fixed source morphology and an $E^{-2.5}$ power-law energy spectrum. The only free parameter of the likelihood fit is the flux normalization.
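For intuition, in the idealized case of a single bin with expected background $b$ and an unconstrained signal expectation, the test statistic reduces to the closed form sketched below. The actual analysis maximizes the likelihood over the flux normalization across all nine analysis bins with the detector response folded in, which is not shown here:

\begin{verbatim}
import numpy as np

def signed_significance(n_obs, b):
    # Single-bin Poisson likelihood-ratio test statistic
    # TS = 2 log(L_{s+b} / L_b) with best-fit signal s_hat = n_obs - b
    # (assumes n_obs > 0). Returns the signed significance +-sqrt(TS).
    s_hat = n_obs - b
    ts = 2.0 * (n_obs * np.log(n_obs / b) - s_hat)
    return np.sign(s_hat) * np.sqrt(max(ts, 0.0))
\end{verbatim}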
We repeat the source search for four different hypothesized morphologies: point sources, and extended disk-like sources with radii of \ang{0.5}, \ang{1.0}, and \ang{2.0}. This procedure is very similar to what was used in the previous 2HWC catalog, with the only change being the spectral index hypothesis (the prior catalog used a spectral index of $-2.7$ for point sources and $-2.0$ for extended sources). The resulting all-sky significance map for a point-source assumption can be seen in Figure \ref{fig:allsky}.

\begin{figure}[b] \includegraphics[width=\textwidth, trim=0cm 0cm 0cm 3cm, clip=true]{{allskyC_0.0_magma}.pdf} \caption{All-sky significance map in celestial coordinates, assuming a point-source hypothesis. The bright band on the left is part of the Galactic plane (c.f. Figures \ref{fig:plane1}-\ref{fig:plane4}), and the bright region on the right is the Galactic anti-center region containing the Crab Nebula and the Geminga halo (c.f. Figure \ref{fig:plane5}). The two off-plane hotspots are the two TeV-bright blazars Mrk 421 (right) and Mrk 501 (left).} \label{fig:allsky} \end{figure}

For each significance map, we compile a list of candidate sources comprising the local maxima with $\sqrt{TS} > 5$. Due to Poisson fluctuations in the number of detected gamma rays, a single source may produce multiple local maxima. To avoid double-counting such sources, candidate sources are promoted to sources only if they pass the following ``TS valley'' criterion: the significance profile connecting the source candidate with any other source within \ang{5} has to ``dip'' by $\Delta TS>2$ for the candidate to attain ``primary'' source status. A ``secondary'' source is defined with a relaxed criterion, such that $1 < \Delta TS < 2$. We mark secondary sources with a dagger (\dag) in Table \ref{tab:sources}. The $\Delta TS$ criterion used for 3HWC is slightly stricter than the one used in the 2HWC catalog, in which a source only had to pass the $\Delta TS$ test with its closest neighboring source.

We then combine the four source lists (for the four different assumptions of the source morphology) to yield the 3HWC catalog. We include all sources found in the point source search in 3HWC. We only include sources found in the extended source searches if they are more than \ang{1} away from any point source or smaller extended source already in the catalog. Table \ref{tab:sources} shows the resulting list of sources comprising the 3HWC catalog. For each source, we also show the closest known TeV source listed in TeVCat\footnote{\url{http://tevcat.uchicago.edu/}} \citep{2008ICRC....3.1341W} if it is within \ang{1} of the HAWC source.

\subsubsection{Spectral Fits} After identifying the primary and secondary sources, we perform likelihood fits to obtain each source's energy spectrum. We assume a simple power-law spectrum for each source, fitting for the spectral index and flux normalization as done for the 2HWC catalog \citep{2HWC}. There are two changes to the spectral fitting procedure used in this work compared to 2HWC. First, for 3HWC, we only report the spectral fit for the same morphology for which the source was first found in the source-search stage. Second, we treat the instrument response dynamically during the fit. The 2HWC fits relied on a method where the angular resolution was pre-calculated (and fit with a double-Gaussian function in each analysis bin) before the spectral fit, for a fixed spectral assumption. The 3HWC spectral fits use a new method in which we recalculate the angular resolution for each tested spectral assumption during the fit. Additionally, we do not assume that the angular resolution follows a specific analytical shape. This allows for a more complete characterization of HAWC's PSF and of the systematic uncertainties on the fit parameters. See \cite{israelThesis} for more details on the new spectral fit method. Table \ref{tab:fluxes} shows the results of the spectral fits.
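Schematically, such a fit amounts to minimizing a binned Poisson negative log-likelihood over the flux normalization $F_7$ and the spectral index. In the sketch below, the function expected_excess is a hypothetical placeholder for the forward-folding of the power law through the detector response, which is beyond the scope of this illustration:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, n_obs, bkg, expected_excess):
    # Binned Poisson negative log-likelihood (constant terms dropped)
    # for a power law dN/dE = F7 * (E / 7 TeV)**(-index).
    # expected_excess(f7, index) is a HYPOTHETICAL placeholder that
    # returns the predicted gamma-ray counts per analysis bin.
    f7, index = params
    mu = bkg + expected_excess(f7, index)
    return np.sum(mu - n_obs * np.log(mu))

def fit_spectrum(n_obs, bkg, expected_excess):
    res = minimize(neg_log_like, x0=np.array([1e-13, 2.5]),
                   args=(n_obs, bkg, expected_excess),
                   method="Nelder-Mead")
    return res.x  # best-fit (F7, index)
\end{verbatim}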
\section{Results}\label{sec:results} \subsection{3HWC Sources} The 3HWC catalog contains 65 sources, 17 of which are considered secondary sources (not well separated from neighboring sources according to the $\Delta TS$ criterion). The source positions can be found in Table \ref{tab:sources}, and the results of the spectral fits, as well as the energy range from which we expect 75\% of the observed significance, can be found in Table \ref{tab:fluxes}. Twenty-eight of these sources do not lie within \ang{1} of any 2HWC source. We discuss some of these sources in more detail in Section \ref{sec:new_sources}.

We compare the flux measurements with the sensitivity for the underlying dataset. The flux sensitivity is defined as the flux normalization required to have a 50\% probability of detecting a source at the $5 \sigma$ level. Figure \ref{fig:sensi} shows the HAWC 1523-day sensitivity and the flux measurements from Table \ref{tab:fluxes} as a function of declination. HAWC is more sensitive to sources transiting directly overhead, corresponding to a declination of \ang{19.0}, than to sources transiting at larger zenith angles. HAWC is also more sensitive to hard-spectrum sources. For the optimal case (an $E^{-2}$ source transiting directly overhead), HAWC's sensitivity approaches $\sim 2\%$ of the flux of the Crab Nebula. The sensitivity is nearly constant with respect to the Right Ascension of a source (it varies by less than 3\% across the sky).

Most of the sources were found in the point source search. With about three times the livetime compared to the 2HWC catalog, many extended sources are now also significantly detected in the point source map. For example, Figure \ref{fig:plane5} shows five 3HWC sources (\textbf{3HWC J0630+186}, \textbf{3HWC J0631+169}, \textbf{3HWC J0633+191}, \textbf{3HWC J0634+165}, and \textbf{3HWC J0634+180}, all found in the point source search) clustering near the Geminga pulsar. We believe that these five sources are all part of the extended halo around Geminga, described in \cite{Abeysekara:2017old}. Similarly, both \textbf{3HWC J0659+147} and \textbf{3HWC J0702+147} are part of the extended source \textbf{2HWC J0700+143} announced in the aforementioned publication. It is not clear if these sources correspond to real features in the morphology of the two pulsar halos, or if they are just due to statistical fluctuations in the number of photons recorded by HAWC.

As seen in the all-sky significance map (Figure \ref{fig:allsky}), the majority of the sources in the 3HWC catalog are located along the Galactic plane. Figures \ref{fig:plane1}, \ref{fig:plane2}, \ref{fig:plane3}, and \ref{fig:plane4} show the significance maps of the Galactic plane from the Cygnus region ($l=\ang{85}$) to the inner Galaxy ($l=\ang{2}$). The Galactic center itself falls outside of the part of the sky visible to HAWC. Figure \ref{fig:plane5} shows a region near the Galactic anti-center containing the Crab Nebula, Geminga, and other sources.
For this region, both the point-source significance map and the significance map from the \ang{1} extended source search are shown. For convenience, the locations of 3HWC sources and TeVCat sources have been marked in these images. Figures \ref{fig:b_dist} and \ref{fig:l_dist} show the distribution of 3HWC sources as a function of Galactic latitude and longitude, respectively.

\clearpage \startlongtable \begin{deluxetable*}{l c c c c c c c c c h h h} \tabletypesize{\small} \tablecaption{Source list and nearest TeVCat sources (within \ang{1} of each 3HWC source). Secondary sources (i.e., sources that are not separated from their neighbor(s) by a large TS gap) are marked with a dagger (\dag). The position uncertainty reported here is statistical only. The systematic uncertainty on the position is discussed in Section \ref{pointingbias}. TeVCat source names within \ang{0.5} of a 3HWC source are printed in bold. For sources without a TeVCat counterpart within \ang{1}, the angular distance to the nearest TeVCat source is printed for reference. \label{tab:sources}} \tablehead{ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \multicolumn{2}{c}{Nearest TeVCat source}\\ \cline{9-10} \colhead{Name} & \colhead{Radius} & \colhead{TS} & \colhead{RA} & \colhead{Dec} & \colhead{\textit{l}} & \colhead{\textit{b}} & \colhead{1$\sigma$ stat. unc.} & \colhead{Dist.} & \colhead{Name} & \nocolhead{TeVCat flux} & \nocolhead{TeVCat index} & \nocolhead{TeVCat extent}\\ \colhead{} & \colhead{[$^\circ$]} & \colhead{} & \colhead{[$^\circ$]} & \colhead{[$^\circ$]} & \colhead{[$^\circ$]} & \colhead{[$^\circ$]} & \colhead{[$^\circ$]} & \colhead{[$^\circ$]} & \colhead{} & \nocolhead{[CU]} & \nocolhead{} & \nocolhead{[$^\circ$]} } \startdata \input{table1data} \enddata \end{deluxetable*}

\clearpage \startlongtable \begin{deluxetable*}{l c c c c} \tabletypesize{\small} \tablecaption{Source radius, best-fit spectrum, and energy range. The flux $F_7$ is the differential flux at 7\,TeV. The two sets of reported uncertainties correspond to statistical and systematic uncertainties, respectively. The spectral fit for \textbf{3HWC J0659+147} did not converge. The energy range quoted here is the true energy interval from which we expect to get 75\% of a given source's significance. \label{tab:fluxes}} \tablehead{ \colhead{Name} & \colhead{Radius} & \colhead{Index} & \multicolumn{1}{c}{$F_{7}$} & \colhead{Energy Range}\\ \colhead{} & \colhead{[$^\circ$]} & \colhead{} & \multicolumn{1}{c}{[$10^{-15}$ TeV$^{-1}$cm$^{-2}$s$^{-1}$]} & \colhead{[TeV]} } \startdata \decimals \input{table3data} \enddata \end{deluxetable*}

\clearpage \begin{figure}[tbp] \plotone{Differential_Sensitivity_7TeV_3hwc_new.pdf} \caption{3HWC sensitivity for the point source search as a function of declination. The flux sensitivity is shown at a pivot energy of 7 TeV for three spectral hypotheses: $E^{-2.0}$, $E^{-2.5}$, and $E^{-3.0}$. The sensitivity does not depend on the Right Ascension. Also shown is the best-fit flux normalization at 7 TeV for all sources in the 3HWC catalog.} \label{fig:sensi} \end{figure}

\begin{figure}[tbp] \includegraphics[width=0.5\textwidth, trim=0cm 0cm 0cm 1.5cm, clip=true]{{galactic_plane_181-221_0.0_magma}.pdf}\includegraphics[width=0.5\textwidth, trim=0cm 0cm 0cm 1.5cm, clip=true]{{galactic_plane_181-221_1.0_magma}.pdf} \caption{Significance maps of the Galactic anti-center region for $-176^\circ \leq l \leq -142^\circ$, showing the Crab Nebula and the Geminga halo among other sources.
Left: Point-source hypothesis. Right: \ang{1} extended-source hypothesis. The grey lines show significance contours starting at $\sqrt{TS}=40$, increasing in steps of $\Delta \sqrt{TS}=20$. Top labels indicate positions of known TeV sources (from TeVCat), bottom labels indicate positions of 3HWC sources.} \label{fig:plane5} \end{figure} \clearpage \begin{figure}[t] \includegraphics[width=0.9\textwidth, trim=0cm 0cm 0cm 1.5cm, clip=true]{{galactic_plane_62-85_0.0_magma}.pdf} \caption{Significance map of part of the Galactic plane for $62^\circ \leq l \leq 85^\circ$; point-source hypothesis. The green lines show significance contours starting at $\sqrt{TS}=26$, increasing in steps of $\Delta \sqrt{TS}=2$. Top labels indicate positions of known TeV sources (from TeVCat), bottom labels indicate positions of 3HWC sources.} \label{fig:plane1} \end{figure} \begin{figure}[b] \includegraphics[width=0.9\textwidth, trim=0cm 0cm 0cm 1.5cm, clip=true]{{galactic_plane_42-65_0.0_magma}.pdf} \caption{Significance map of part of the Galactic plane for $42^\circ \leq l \leq 65^\circ$; point-source hypothesis. Top labels indicate positions of known TeV sources (from TeVCat), bottom labels indicate positions of 3HWC sources. } \label{fig:plane2} \end{figure} \clearpage \begin{figure}[t] \includegraphics[width=0.9\textwidth, trim=0cm 0cm 0cm 1.5cm, clip=true]{{galactic_plane_22-45_0.0_magma}.pdf} \caption{Significance map of part of the Galactic plane for $22^\circ \leq l \leq 45^\circ$; point-source hypothesis. The green lines show significance contours starting at $\sqrt{TS}=26$, increasing in steps of $\Delta \sqrt{TS}=2$. Top labels indicate positions of known TeV sources (from TeVCat), bottom labels indicate positions of 3HWC sources.} \label{fig:plane3} \end{figure} \begin{figure}[b] \includegraphics[width=0.9\textwidth, trim=0cm 0cm 0cm 1.5cm, clip=true]{{galactic_plane_2-25_0.0_magma}.pdf} \caption{Significance map of the inner Galactic plane for $2^\circ \leq l \leq 25^\circ$; point-source hypothesis. The green lines show significance contours starting at $\sqrt{TS}=26$, increasing in steps of $\Delta \sqrt{TS}=2$. Top labels indicate positions of known TeV sources (from TeVCat), bottom labels indicate positions of 3HWC sources.} \label{fig:plane4} \end{figure} \subsection{Comparison with the 2HWC catalog} Thirty-three of the 40 sources detected in the 2HWC catalog have a 3HWC counterpart within \ang{1}. Most of HAWC's sources are supernova remnants, pulsar wind nebulae, or pulsar halos, and are expected to have constant emission, with the exception of the two Markarians, which are known to be variable at TeV energies on time scales of hours to weeks \citep[e.g.][]{Abeysekara_2017_Mrk}. For the sources with a 3HWC counterpart, the TS increased by a factor of 2.3, on average. This is slightly less than the expected improvement due to the increase in livetime (a factor of 3). The apparent deficit is explained by the new instrument response functions and the change in spectral index that was used in the source searches, as verified by re-running the catalog search with the new settings on the 2HWC dataset. There are seven 2HWC sources without a 3HWC counterpart within \ang{1}. Two of these sources (\textbf{2HWC J1902+048} and \textbf{2HWC J2024+417}) still show significant emission ($TS>25$) in the new data set, but do not pass the TS dip test. \textbf{2HWC J1902+048} is now considered part of the \textbf{3HWC~J1908+063} complex, and \textbf{2HWC J2024+417} is part of the \textbf{3HWC~J2031+415} complex.
The remaining five sources do not pass the $TS>25$ criterion with the new data set. Out of these, \textbf{2HWC J1921+131} lies in the Galactic plane near the W51 supernova remnant. In the 3HWC data set (point-source search), \textbf{2HWC J1921+131} has a TS of 21.7, which is below the threshold for significant detection. It is possible that the 2HWC detection was due to an upward fluctuation of a combination of diffuse emission and emission from the nearby W51 complex. The other four sources (\textbf{2HWC J0819+157}, \textbf{2HWC J1040+308}, \textbf{2HWC J1309-054}, and \textbf{2HWC J1829+070}) were located off the Galactic plane and had no known counterparts. Due to their low detection significance, searches for variability of these sources on time-scales of several months have not yielded conclusive results. (We note that these searches mainly focused on changes in detection significance over time. A more detailed analysis of the light-curves of these sources will follow in future HAWC publications.) It is unclear whether the previous detections were false positives due to random background fluctuations or upward fluctuations of real, but weak, sources, or indicative of flaring activity/temporal variability. We estimate the expected number of false positives, i.e., random background fluctuations that pass the detection threshold, to be 0.75 for the 3HWC catalog (see Section \ref{sec:discuss}), compared to 0.4 for 2HWC. Four of the seven 2HWC sources without a 3HWC equivalent (\textbf{2HWC~J0819+157}, \textbf{2HWC~J1040+308}, \textbf{2HWC~J1829+070}, and \textbf{2HWC~J1902+048}) are also no longer detected when applying the new search methods/settings to the 2HWC dataset. \subsection{New 3HWC Sources Potentially Associated with Known TeV Sources} Eight of the 3HWC sources have no counterpart (within \ang{1}) in 2HWC, but are potentially associated with known TeV emitters. These sources are listed below. We define positional coincidence within \ang{1} as the criterion for two sources to be considered candidates for association. \textbf{3HWC~J0540+228} ($TS=28.8$) and \textbf{3HWC~J0543+231} ($TS=34.2$) are part of the extended source that had been previously announced as \textbf{HAWC~J0543+233} \citep{2017ATel10941....1R} -- a potential TeV halo around the pulsar \textbf{PSR~B0540+23}. \textbf{3HWC~J0617+224} ($TS=32.3$) is potentially associated with the shell-type supernova remnant (SNR) \textbf{IC~443} (\textbf{SNR~G189.1+03.0}), which has been detected at TeV energies by MAGIC \citep{2007ApJ...664L..87A} and VERITAS \citep{2009ApJ...698L.133A}. HAWC previously announced the detection of gamma-ray emission from \textbf{IC~443} without naming the source \citep{Fleischhack:2019njo}. \textbf{3HWC~J0634+067} ($TS=36.2$) had been previously announced as \textbf{HAWC~J0635+070} \citep{2018ATel12013....1B} -- a potential TeV halo around the pulsar \textbf{PSR~J0633+0632}. \textbf{3HWC~J2227+610} ($TS=52.5$) is within \ang{1} of the known TeV gamma-ray sources \textbf{VER~J2227+608} \citep{2009ApJ...703L...6A} and \textbf{MGRO~J2228+61} \citep{Abdo_2009_MGRO,Abdo_2009_erratum}. The most likely source of the emission is the shell-type supernova remnant \textbf{SNR~G106.3+2.7}. \textbf{3HWC~J2227+610} was recently announced as \textbf{HAWC~J2227+610} in a dedicated publication \citep{boomerangPaper}. \textbf{3HWC~J1913+048} ($TS=44.7$) is within \ang{1} of the eastern lobe of the micro-quasar \textbf{SS~433}.
Detection of TeV gamma-ray emission from this source (as well as the western lobe of \textbf{SS~433}) had previously been announced by HAWC \citep{Abeysekara:2018qtj}. \textbf{3HWC J1757-240} ($TS=28.6$) was found in the \ang{1} extended source search. It overlaps with the W28 region, which contains at least four known TeV sources (\textbf{HESS J1801-233} and \textbf{HESS J1800-240A/B/C}) \citep{2008A&A...481..401A}. The emission seen by H.E.S.S. has been attributed to an SNR interacting with nearby molecular clouds. Due to HAWC's limited sensitivity and relatively poor angular resolution at these declinations, we are currently unable to resolve the individual H.E.S.S. sources. \textbf{3HWC J1803-211} ($TS=38.4$) is located near the known, but unidentified, TeV source \textbf{HESS J1804-216} \citep{2005Sci...307.1938A,2006ApJ...636..777A}. \subsection{New TeV Sources} \label{sec:new_sources} For each source in 3HWC, we scan several catalogs of known or potential gamma-ray sources for potential associations within \ang{1}, including the TeVCat \citep{2008ICRC....3.1341W}, the fourth \textit{Fermi}-LAT source catalog (4FGL) \citep{Fermi-LAT:2019yla}, the ATNF Pulsar Catalog\footnote{\url{https://www.atnf.csiro.au/research/pulsar/psrcat/}} (v 1.62) \citep{Manchester:2004bp}, and the Galactic supernova remnant catalog SNRCat\footnote{\url{http://www.physics.umanitoba.ca/snr/SNRcat}} \citep{2012AdSpR..49.1313F}. We report 20 new sources that do not have a potential counterpart in the TeVCat (see Table \ref{tab:gev}). Fourteen of these new sources are within \ang{1} of a previously observed GeV source. Table \ref{tab:gev} lists the GeV associations and their source classifications obtained from the fourth \textit{Fermi}-LAT source catalog, 4FGL. Two new sources, \textbf{3HWC J0630+186} ($TS=38.9$) and \textbf{3HWC J1918+159} ($TS=31.6$), have no GeV counterpart in 4FGL. These two sources are, however, potentially associated with pulsars in the ATNF catalog. \textbf{3HWC J0630+186} is within \ang{0.95} of the pulsar \textbf{PSR J0630+19}. \textbf{3HWC J1918+159} is potentially associated with \textbf{PSR J1918+1541} with a separation distance of \ang{0.26}. The age and spin-down luminosity of these objects are not available. \begin{deluxetable}{c c c c c c c c} \tabletypesize{\small} \tablecaption{New HAWC sources with no TeV counterpart. For each source we list the following information in the various columns: Galactic longitude; Galactic latitude; the nearest GeV source in 4FGL \citep{Fermi-LAT:2019yla} and its separation from the 3HWC source; the source class as listed in 4FGL where available (bcu: active galaxy of uncertain type, PSR: pulsar, identified by pulsations, unk: unknown); the nearest pulsar and corresponding separation from the ATNF pulsar catalog \citep{Manchester:2004bp}; and the nearest SNR, separation distance, and type from the SNRCat \citep{2012AdSpR..49.1313F}.} \label{tab:gev} \tablehead{ \colhead{HAWC} & \colhead{\textit{l} [$^\circ$]} & \colhead{\textit{b} [$^\circ$]} & \colhead{4FGL ($^\circ$)} & \colhead{Class} & \colhead{ATNF ($^\circ$)} & \colhead{SNRCat ($^\circ$)} & \colhead{SNR Type} } \include{GeV_table3} \end{deluxetable} \subsubsection{Unassociated New TeV Sources} We observe four sources that do not have an apparent counterpart in any of the catalogs that we scanned for potential associations: \textbf{3HWC J0633+191} ($TS=37.5$), \textbf{3HWC J2010+345} ($TS=27.6$), \textbf{3HWC J2022+431} ($TS=29.0$), and \textbf{3HWC J1743+149} ($TS=25.9$).
We note that three of these new sources are not well isolated from known extended sources in the catalog. \textbf{3HWC J0633+191} is in a dense region of known pulsars including Geminga. \textbf{3HWC J2010+345} is near \textbf{3HWC J2004+343}, which itself is a 1$^\circ$ extended source. \textbf{3HWC J2022+431} is in the Cygnus-X region with a number of star-forming clusters nearby, most notably the \textit{Fermi}-LAT cocoon \citep{Hona:2019ysf}. Without a detailed morphological study of their respective regions, we cannot definitively exclude the above new sources as appendages of existing sources. Such a study, however, is beyond the scope of this paper. \textbf{3HWC J1743+149} ($TS=25.9$) is the only new unassociated source that is not in spatial proximity to a region of known TeV sources. It is also notably distant from the Galactic plane with a Galactic latitude of $b = $ \ang{21.7}. The nearest potential GeV gamma-ray counterpart is 4FGL~J1741.4+1354, associated with the pulsar PSR~J1741+1351, at a distance of \ang{1.3} from 3HWC~J1743+149 (outside of our nominal search radius for counterparts). \begin{figure}[tbp] \plotone{b_distribution.pdf} \caption{Distribution of HAWC sources (excluding the known blazars) with respect to Galactic latitude for \ang{-5} $< b <$ \ang{25}. The darker shaded histogram shows the new TeV sources in 3HWC that were not present in 2HWC. \label{fig:b_dist}} \end{figure} \begin{figure}[tbp] \plotone{l_distribution_sensi.pdf} \caption{Distribution of HAWC sources as a function of Galactic longitude. The darker shaded histogram shows the new TeV sources in 3HWC that were not present in 2HWC. The blue solid line shows the sensitivity at $b =$ \ang{0}. Due to its location, HAWC is most sensitive towards the Galactic anti-center region, $l\approx$ \ang{-180}, and to the inner Galaxy at $l\approx +$\ang{50}. Most known TeV gamma-ray sources are located in the inner Galaxy.} \label{fig:l_dist} \end{figure} \subsubsection{Pulsars and TeV Halo Candidates in the 3HWC} A significant fraction of 3HWC sources are candidates for association with pulsars in the ATNF catalog (cf.~Table \ref{tab:gev}). Figure \ref{fig:mw_dist} shows all the 3HWC sources potentially associated with pulsars in the Galaxy for which distance information is available. After the discovery of extended gamma-ray emission around the Geminga and Monogem pulsars by HAWC \citep{Abeysekara:2017old}, and the discovery of several extended TeV PWNe by H.E.S.S. \citep{Abdalla:2017vci}, it has been suggested that extended pulsar ``halos'' are a common feature of such objects and that several unassociated/not firmly associated HAWC sources may be dominated by such halo emission \citep[see e.g.][]{PhysRevD.96.103016,Linden:2017blp,Giacinti:2019nbu,PhysRevD.100.043016,Fleischhack:2019rjb,DiMauro:2019yvh,Manconi:2020ipm,PhysRevD.101.103035}. The observed gamma-ray emission is thought to be due to inverse Compton up-scattering of cosmic microwave background photons by relativistic electrons and positrons diffusing freely in the vicinity of the pulsar. TeV halos are thought to form around older pulsars (at least several tens of thousands of years old) that have either left their SNR shell or whose SNR shell has already dissipated. They are thus distinct from (classical) PWNe, where the electron-positron plasma is confined by the ambient medium. We produce an updated list of pulsars that are likely candidates to have a TeV halo detectable with HAWC, following criteria similar to those of \cite{PhysRevD.96.103016}.
We select pulsars from the ATNF with ages between 100\,kyr and 400\,kyr, declinations between \ang{-25} and +\ang{64}, and an estimated spin-down flux of at least 1\% of that of the Geminga pulsar. We find sixteen such pulsars, out of which eight are coincident with at least one 3HWC source (within \ang{1}). Table \ref{tab:psrhalo} lists the 3HWC sources that are coincident with these TeV halo candidate pulsars. Some pulsars have more than one 3HWC source nearby. This is not unexpected, as our source search sometimes finds multiple point sources associated with the same extended emission region. One of these pulsars, \textbf{PSR J1740+1000}, has not previously been detected at TeV energies. \begin{deluxetable}{c c c c c c c c c} \tabletypesize{\small} \tablecaption{HAWC sources with the corresponding TeV halo candidate pulsars within $1^\circ$. The age of the pulsar in kyr and the spin-down luminosity, $\dot{E}$, in erg s$^{-1}$ are also given. The Separation column indicates the angular distance between the HAWC source and the ATNF pulsar \citep{Manchester:2004bp}. The TeVCat column lists the previously detected TeV counterpart of each source.} \label{tab:psrhalo} \tablehead{ \colhead{HAWC }& \colhead{\textit{l} [$^\circ$]} & \colhead{\textit{b} [$^\circ$]} & \colhead{Pulsar} & \colhead{Age [kyr]} &\colhead{$\dot{E}$ [erg s$^{-1}$]} & \colhead{Distance [kpc]} & \colhead{Separation [$^\circ$]} & \colhead{TeVCat} } \include{halo_table3} \end{deluxetable} \begin{figure}[tb] \plotone{mw_dist.pdf} \caption{Face-on view of the Galaxy showing positions of HAWC sources associated with (i.e., spatially coincident within 1$^\circ$ of) pulsars for which distances are estimated. Spatial coincidence does not necessarily imply that the observed gamma-ray emission is (fully) powered by the pulsar in question. The color scale corresponds to the measured flux normalization from Table \ref{tab:fluxes}. The annotated Milky Way background is taken from \cite{mw_pic}. \label{fig:mw_dist}} \end{figure} \section{Limitations and Systematic Uncertainties}\label{sec:discuss} \subsection{Background Fluctuations and Spurious Detections} It is possible for mere fluctuations in the background and/or the Galactic diffuse emission to pass the selection criteria and produce a spurious source. In order to estimate the frequency of false positive sources, we create twenty simulated significance maps using the background counts from the original source search. For each map, we obtain the simulated number of signal events in each pixel by Poisson-fluctuating the number of background events in the corresponding pixel. We then run each of these randomized background maps through the full analysis pipeline, including the point-source and extended-source searches. Across the 20 randomized background maps, we find 15 local maxima with a $TS > 25$. Therefore, the estimated number of false positive sources is $15/20 = 0.75$ per catalog search. The fluctuations are distributed evenly across the sky and typically occur just above the threshold value of $TS = 25$. \subsection{Limitations of the Source Search} As in the 2HWC catalog, we conduct blind source searches for four different fixed morphological assumptions (point sources and \ang{0.5}, \ang{1.0}, and \ang{2.0} extended sources). We then combine these results, with preference given to sources found in the point source search and the smaller-radius searches, to avoid double counting of sources. This approach can lead to sources being misidentified or missed.
First, some extended sources may be significant enough to be detected in the point source analysis. Poisson fluctuations of the signal could potentially lead to several hotspots being detected around the center of such an extended source. As HAWC collects more data, this issue is increasing in prevalence, as evidenced by the five point sources detected inside the Geminga halo. Second, it is also possible that multiple smaller sources located near each other are detected as one source in the extended source search if the individual sources are not strong enough to cross the detection threshold. This might be happening near the Galactic center, where \textbf{3HWC J1757-240}, found in the \ang{1} extended-source search, overlaps with several known TeV sources. Third, weaker sources may be missed if they are located near a stronger source, as they may not produce a well-defined peak in the significance map. In-depth studies (such as \cite{Abeysekara:2017old, Abeysekara:2018qtj}) are needed to properly resolve source-dense regions. Such studies include multi-source fits, fitting the extensions/shapes, locations, and spectra of several sources at the same time. Additionally, measurements by other gamma-ray observatories as well as measurements at other wavelengths might help disentangle the morphology of complex regions. Further studies of selected regions of the Galactic plane are in preparation. \subsection{Systematic Uncertainty on the Source Locations}\label{pointingbias} Earlier publications \citep{2HWC} quoted a \ang{0.1} systematic pointing uncertainty, which was estimated using simulations and verified through the observation of the Crab Nebula, Mrk421, and Mrk501. New studies of HAWC's pointing calibration as a function of source position suggest that the uncertainty could be larger than previously thought for sources that transit near the edge of HAWC's field of view. HAWC's absolute pointing uncertainty increases to \ang{0.15} for sources at \ang{-10} or +\ang{50} declination and could be as high as \ang{0.3} at declinations of \ang{-20} or +\ang{60}. (There are no well-isolated point sources detected by HAWC that could be used to unambiguously verify the instrument's pointing at these declinations.) In Figure \ref{fig:pointing}, we compare the measured declinations of 3HWC sources to the locations of their likely TeV counterparts as measured by other experiments. For this comparison, we consider relatively well-localised 3HWC sources that have a TeV association within \ang{1} detected by a different experiment. We do not include sources in regions of extended emission or with multiple components, such as \textbf{3HWC J2020+403}. It can be seen that for most of the declination range spanned by HAWC's sensitivity, the 3HWC positions agree with the literature values within statistical and systematic uncertainties. Below source declinations of about \ang{-10}, HAWC measures systematically higher values than the IACTs, with offsets between \ang{0.1} and \ang{0.4}. The trend observed in Figure \ref{fig:pointing} could indicate a bias in HAWC's pointing at low declinations. However, all of HAWC's southern sources lie on the Galactic plane, in a region rich in sources and diffuse emission. HAWC's angular resolution is poorer for low-declination sources than for sources transiting overhead. Accordingly, Galactic diffuse emission or emission from nearby unresolved sources might affect the peak position detected by HAWC, especially for low-declination sources.
IACTs tend to have better angular resolution and are thus affected less by large-scale emission or neighboring sources. The shift could also be an indication of an energy-dependent morphology of some sources \citep{Hona:2019ysf}. Future in-depth studies of some of these sources as well as an improved understanding of HAWC's pointing are needed to resolve this apparent discrepancy. \begin{figure}[tbp] \plotone{deltadecvdec.pdf} \caption{Measured declination of HAWC sources relative to their TeV counterpart measurements from the IACT experiments MAGIC, H.E.S.S., and VERITAS. HAWC measurements agree with the source locations measured by IACTs within uncertainties for most of its declination range. See text for discussion of source declinations below \ang{-10}.} \label{fig:pointing} \end{figure} \subsection{Systematic Uncertainty of the Spectral Fits} In Table \ref{tab:fluxes}, we report the best-fit fluxes and spectral indices of the 3HWC sources. These fits assume a power-law spectrum; we do not test other spectral models, such as those including a curvature or cutoff term. The reported spectral index should be interpreted as an average or effective spectral index across HAWC's energy range. For the two known extra-galactic sources, we also do not account for absorption by the extra-galactic background light. Additional studies are ongoing for sources that are detected with sufficient statistics to allow more sophisticated spectral models to be fit. In Table \ref{tab:fluxes}, we report systematic uncertainties related to the modeling of the HAWC detector response individually for each source. More details about the sources of uncertainty considered here can be found in \cite{HighCrab}. In order to compute the uncertainties, we repeat the spectral fits with certain properties of the detector model shifted up or down. We assign an additional uncertainty of 10\% to the flux normalization to account for effects, such as variations in the atmosphere, that are not otherwise considered. We add the resulting positive shifts in quadrature to obtain the total upward systematic uncertainty, and add the negative shifts in quadrature for the total downward systematic uncertainty. There are other systematic issues affecting the spectral fits. Some fluxes may be overestimated due to ``leakage'' from nearby (detected or unresolved) sources or from the Galactic diffuse emission. These effects may also bias the spectral indices. In cases where the apparent extent of a source is larger than HAWC's angular resolution (\ang{0.1} at high energies), but the source is strong enough to be significantly detected already in the point-source search, we only report the spectrum assuming a point-source hypothesis. This leads to the flux normalization being underestimated and the spectral index being biased towards softer spectra (as the angular resolution improves at high energies and thus more of the high-energy emission is ``lost''). Upcoming publications will provide better spectral fits for extended sources. \section{Conclusion}\label{sec:conclude} The HAWC observatory has been conducting the most sensitive, unbiased survey of the Northern sky at TeV energies for over five years. We have presented the third catalog of steady gamma-ray emitters detected by HAWC using 1523 days of data. The catalog consists of 65 sources, including two blazars. The most abundant source class among the potential counterparts of HAWC sources in the Galactic plane is pulsars (56).
The 3HWC catalog provides many targets for multi-wavelength and multi-messenger follow-up studies that are crucial to several open problems in high-energy astrophysics. Detailed morphological and spectral studies of several sources are being conducted and will be the subject of future publications. A dedicated survey to constrain the emission from various extra-galactic objects of interest is under preparation. Future gamma-ray observatories such as CTA \citep{CTAConsortium:2018tzg} and SWGO \citep{Albert:2019afb} will be able to extend both the sensitivity and energy range of this survey. \acknowledgments We acknowledge the support from: the US National Science Foundation (NSF); the US Department of Energy Office of High-Energy Physics; the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnolog\'ia (CONACyT), M\'exico, grants 271051, 232656, 260378, 179588, 254964, 258865, 243290, 132197, A1-S-46288, A1-S-22784, c\'atedras 873, 1563, 341, 323, Red HAWC, M\'exico; DGAPA-UNAM grants IG101320, IN111315, IN111716-3, IN111419, IA102019, IN112218; VIEP-BUAP; PIFI 2012, 2013, PROFOCIE 2014, 2015; the University of Wisconsin Alumni Research Foundation; the Institute of Geophysics, Planetary Physics, and Signatures at Los Alamos National Laboratory; Polish Science Centre grant, DEC-2017/27/B/ST9/02272; Coordinaci\'on de la Investigaci\'on Cient\'ifica de la Universidad Michoacana; Royal Society - Newton Advanced Fellowship 180385; Generalitat Valenciana, grant CIDEGENT/2018/034; Chulalongkorn University's CUniverse (CUAASC) grant. Thanks to Scott Delay, Luciano D\'iaz and Eduardo Murrieta for technical support.
{ "timestamp": "2021-01-27T02:23:30", "yymm": "2007", "arxiv_id": "2007.08582", "language": "en", "url": "https://arxiv.org/abs/2007.08582" }
\section{Introduction} \label{m_exampls} There are many applications where it is necessary to predict the number of future events from a population of units associated with an on-going time-to-event process. Such applications also require a prediction interval to quantify statistical prediction uncertainty arising from the combination of process variability and parameter uncertainty. Some motivating applications are given below. \noindent\textbf{Product-A Data}: This example is from \citet{elawqm1999}, where, during a particular month, $n$=10,000 units of Product-A were put into service. Over the next 48 months, 80 failures occurred and the failure times were recorded. A prediction interval for the number of failures among the remaining 9,920 units during the next 12 months was requested by management. \noindent\textbf{Heat Exchanger Tube Data}: This example is based on data described in \citet{nelson2000}. Nuclear power plants have steam generators that contain many stainless steel heat-exchanger tubes. Cracks initiate and grow in the tubes due to a stress-corrosion mechanism over time. Periodic inspections of the tubes are used to detect cracks. Consider a fleet of steam generators having a total of $n$=20,000 tubes. One crack was detected after the first year of operation, which was followed by another crack during the second year and six more cracks during the third year. The data are interval-censored as the exact initiation times are unknown. A prediction interval was needed for the number of tubes that would crack from the end of the third year to the end of the tenth year. \noindent\textbf{Bearing-Cage Data}: The bearing-cage failure-time data are from \citet{weibullhandbook} and are provided in the online supplementary material. Groups of aircraft engines employing this bearing cage were put into service over time (staggered entry). At the data freeze date, 6 bearing-cage failures had occurred while the remaining 1697 units with various service times were still in service (multiple right-censored data). To ensure that a sufficient number of spare parts would be available to repair the aircraft engines in a timely manner, management requested a prediction interval for the number of bearing cages that would fail in the next year, assuming 300 hours of service for each aircraft. \medskip The purpose of this paper is to show how to construct prediction intervals for the number of future events from an on-going time-to-event process, to investigate the properties of different prediction methods, and to give recommendations on which methods to use. This paper is organized as follows. Section~\ref{background} provides concepts and background for prediction inference. Section~\ref{single_cohort_within_sample_pred} describes the single-cohort within-sample prediction problem. Section~\ref{plugin_not_regular} explains in what sense within-sample prediction is irregular and demonstrates that the plug-in method fails to provide an asymptotically correct prediction interval. Section~\ref{calibration} describes the calibration method for prediction intervals and establishes its asymptotic correctness. Section~\ref{pred:method} presents two other prediction interval methods based on predictive distributions. The first one is a general method using parametric bootstrap samples, while the second method is inspired by generalized pivotal quantities and applies to a log-location-scale family of distributions.
Section~\ref{sec:multiple-cohort} extends the single-cohort within-sample prediction to the multiple-cohort problem. Section~\ref{simu:study} compares different prediction methods through simulation, while Section~\ref{sec:applications} applies the prediction methods to the motivating examples. Section~\ref{choice-of-dist} discusses the choice of distribution for the time-to-event process and addresses the issue of distribution misspecification. Section~\ref{sec:conclusion} gives recommendations and describes potential areas for future research. \section{Background} \label{background} In a general prediction problem, denote the observable data by $\boldsymbol{D}_n$ and the future random variable by $Y_n\equiv Y$; while generic for now, this paper will later focus on within-sample prediction, where $Y$ is a count. The conditional cdf for $Y$ given $\boldsymbol{D}_n$ is denoted by $G_n(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})\equiv G(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})$, where $\boldsymbol{\theta}$ is a vector of parameters. The goal is to make inference for $Y$ through a prediction interval, which is a useful tool for quantifying uncertainty in prediction. \subsection{Prediction Intervals} \label{predinterval} When the parameters in $\boldsymbol{\theta}$ are known, the one-sided upper $100(1-\alpha/2)\%$ prediction bound $\tilde{Y}_{1-\alpha/2}$ is defined as the $100(1-\alpha/2)\%$ quantile of the conditional cdf for $Y$, which is \begin{equation} \tilde{Y}_{1-\alpha/2}=\inf\{y\in\mathbb{R}:G(y|\boldsymbol{D}_n;\boldsymbol{\theta})=\Pr(Y\leq y\vert \boldsymbol{D}_n, \boldsymbol{\theta})\geq1-\alpha/2\}, \label{upperbound::true} \end{equation} and the one-sided lower $100(1-\alpha/2)\%$ prediction bound may be defined as \begin{equation} \underaccent{\tilde}{Y}_{1-\alpha/2}=\sup\{y\in\mathbb{R}:\Pr(Y\geq y\vert \boldsymbol{D}_n, \boldsymbol{\theta})\geq1-\alpha/2\}, \label{lowerbound::true} \end{equation} where this modification of the usual $\alpha/2$ quantile of $Y$ ensures that $\Pr(Y\geq\underaccent{\tilde}{Y}_{1-\alpha/2}|\boldsymbol{D}_n, \boldsymbol{\theta})$ is at least $1-\alpha/2$ when $Y$ is a discrete random variable. We may obtain an equal-tail $100(1-\alpha)\%$ prediction interval (approximate when $Y$ is a discrete random variable) by combining these two prediction bounds. In most applications, equal-tail prediction intervals are preferred over unequal ones, even though it is sometimes possible to find a narrower prediction interval with unequal tail probabilities. This is because the equal-tail prediction interval can be naturally decomposed into a practical one-sided upper prediction bound and a one-sided lower prediction bound; the separate consideration of one-sided bounds is needed when the cost of being outside the prediction bound is much higher on one side than the other. When the parameters in $\boldsymbol{\theta}$ are unknown, an estimate of $\boldsymbol{\theta}$ from the observed data $\boldsymbol{D}_n$ is required. The plug-in method, also known as the naive or estimative method (cf.~Section~\ref{literature_review}), is to replace $\boldsymbol{\theta}$ with a consistent estimator $\widehat{\boldsymbol{\theta}}_n$ in the prediction bounds (\ref{upperbound::true}) and (\ref{lowerbound::true}).
The $100(1-\alpha)\%$ plug-in upper prediction bound is then $\tilde{Y}_{1-\alpha}^{PL}=\inf\{y\in\mathbb{R}:G(y|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)\geq1-\alpha\}$, while the $100(1-\alpha)\%$ plug-in lower prediction bound is $\underaccent{\tilde}{Y}_{1-\alpha}^{PL}=\sup\{y\in\mathbb{R}:\Pr(Y\geq y\vert \boldsymbol{D}_n, \widehat{\boldsymbol{\theta}}_n)\geq1-\alpha\}$. \subsection{Coverage Probability} \label{coverageprob} Besides the plug-in method, other methods for computing prediction bounds or intervals are available. Let $\mathrm{PI}(1-\alpha)$ generically denote a prediction interval (or bound) of a nominal coverage level $100(1-\alpha)\%$, where researchers would like the probability of $Y$ falling within the interval to be $1-\alpha$, or close to it (i.e., $\Pr[Y\in\mathrm{PI}(1-\alpha)]=1-\alpha$). To be clear, there are two types of coverage probability: conditional coverage probability and unconditional (overall) coverage probability. The conditional coverage probability of a particular $\mathrm{PI}(1-\alpha)$ method is defined as \begin{align*} \mathrm{CP}[\mathrm{PI}(1-\alpha)| \boldsymbol{D}_n; \boldsymbol{\theta}]=\Pr[Y\in\mathrm{PI}(1-\alpha)| \boldsymbol{D}_n; \boldsymbol{\theta}], \end{align*} where $\Pr(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})$ denotes the conditional probability of $Y$ given the observable data $\boldsymbol{D}_n$. The conditional coverage probability $\mathrm{CP}[\mathrm{PI}(1-\alpha)|\boldsymbol{D}_n; \boldsymbol{\theta}]$ is a random variable because it is a function of the data $\boldsymbol{D}_n$. The unconditional coverage probability of a prediction interval method is obtained by taking an expectation with respect to the data $\boldsymbol{D}_n$, and it is defined as \begin{align*} \mathrm{CP}[\mathrm{PI}(1-\alpha); \boldsymbol{\theta}]=\boldsymbol{\mathrm{E}}\left\{\Pr[Y\in\mathrm{PI}(1-\alpha)| \boldsymbol{D}_n; \boldsymbol{\theta}]\right\}. \end{align*} The unconditional coverage probability is a fixed property of a prediction method and, as such, can be most readily studied and used to compare alternative prediction interval methods. We focus on unconditional coverage probability in this paper and use the term coverage probability to refer to the unconditional probability, unless stated otherwise. We say a prediction method is exact if $\mathrm{CP}[\mathrm{PI}(1-\alpha); \boldsymbol{\theta}]=1-\alpha$ holds. If $\mathrm{CP}[\mathrm{PI}(1-\alpha); \boldsymbol{\theta}]$ converges to $1-\alpha$ as the sample size $n$ increases, we say the corresponding prediction method is asymptotically correct. When $Y$ is a discrete random variable, however, asymptotic correctness and exactness may not generally hold or be possible for a prediction interval method, due to the discreteness in the distribution of $Y$. \subsection{Related Literature} \label{literature_review} Extensive research exists on methods for computing prediction intervals. While the plug-in method has been criticized for ignoring the uncertainty in $\widehat{\boldsymbol{\theta}}_n$, this method is widely viewed as being asymptotically correct (related to the ``regular predictions'' described in Section~\ref{regular_prediction}). For example, \citet{cox1975}, \citet{beran1990}, and \citet{hall1999} showed that the coverage probability of the plug-in method has an accuracy of $O(n^{-1})$ for a continuous predictand under certain conditions.
In Section~\ref{plugin_not_regular} we show, however, that the plug-in method is not asymptotically correct in the context of within-sample prediction. Section~\ref{calibration} presents a calibration method for within-sample prediction intervals. \citet{cox1975} originally proposed the calibration idea to improve on the plug-in method and also provided analytical forms for prediction intervals based on general asymptotic expansions. \citet{atwood1984} used a similar method. \citet{beran1990} employed the bootstrap in the calibration method, avoiding the complicated analytical expressions. \citet{elawqm1999} described similar methods for constructing prediction intervals for failure times and the number of future failures, based on censored life data. This paper does not specifically address Bayesian prediction methods, but the classic idea of a Bayesian predictive distribution can be extended to non-Bayesian methods, and two such methods are considered in Section~\ref{pred:method}. Several authors have considered similar notions of a non-Bayesian predictive distribution (e.g., \citet{aitchison1975}, \citet{davison1986}, \citet{barncox1996}). \citet{lawless2005} demonstrated a relationship between predictive distributions and (approximate) pivotal-based prediction intervals, including the calibration method described in \citet{beran1990}. \citet{fonseca2012} further elaborated on the relationship between predictive distributions and the calibration method. \citet{shen_liu_xie_2018} proposed a general framework to construct a predictive distribution by replacing the posterior distribution in the definition of a Bayesian predictive distribution with a confidence distribution. \section{Single Cohort Within-Sample Prediction} \label{single_cohort_within_sample_pred} \subsection{Within-Sample Prediction and New Sample Prediction} The term ``within-sample'' prediction has been used to distinguish it from the more widely known ``new sample'' prediction. In new-sample prediction, past data are used, for example, to compute a prediction interval for the lifetime of a single unit from a new and completely independent sample. For within-sample prediction, however, the sample has not changed; the future random variable that researchers wish to predict (i.e., a count) relates to the same sample that provided the original (censored) data. \subsection{Single-Cohort Within-Sample Prediction and Plug-in Method} \label{withinsample} Let $({T}_1,...,{T}_n)$ be an unordered random sample from a parametric distribution $F(t;\boldsymbol{\theta})$ having support on the positive real line and $\boldsymbol{\theta}\in\mathbb{R}^q$. Under Type~\Romannum{1} censoring at $t_c>0$, the available data may be expressed as $D_i=(\delta_i, T_i^{obs})$, $i=1,...,n$, where $\delta_i=\mathrm{I}(T_i\leq t_c)$ is a variable indicating whether $T_i$ is observed before the censoring time $t_c$, so that the actual observed variables are given as $T_i^{obs}=T_i\delta_i+t_c(1-\delta_i)$. The observed number of events (uncensored units) in the sample will be denoted by $r_n=\sum_{i=1}^{n}\mathrm{I}(T_i\leq t_c)$. For a future time $t_w>t_c$, let $Y_n=\sum_{i=1}^{n}\mathrm{I}(T_i\in(t_c, t_w])$ denote the (future) number of values from $T_1,...,T_n$ that occur in the interval $(t_c, t_w]$. The conditional distribution of $Y_n$ given the observed data $\boldsymbol{D}_n=(D_1,...,D_n)$ is then $\mathrm{binomial}(n-r_n, p)$, where $p$ is the conditional probability that $T_i\in(t_c, t_w]$ given that $T_i>t_c$.
As a function of $\boldsymbol{\theta}$, we may define $p$ by \begin{equation} p\equiv\pi(\boldsymbol{\theta})=\frac{F(t_w;\boldsymbol{\theta})-F(t_c;\boldsymbol{\theta})}{1-F(t_c;\boldsymbol{\theta})}. \label{piequa} \end{equation} The goal is to construct a prediction interval for $Y_n$ based on the observed data $\boldsymbol{D}_n=(D_1,...,D_n)$ when $\boldsymbol{\theta}$ is unknown. This is referred to as single-cohort within-sample prediction because all the units enter the system at the same time and are homogeneous, and both the data $\boldsymbol{D}_n$ and the predictand $Y_n$ are functions of the uncensored random sample $(T_1,...,T_n)$. Let $\widehat{\boldsymbol{\theta}}_n$ denote an estimator of $\boldsymbol{\theta}$ based on $\boldsymbol{D}_n$; then a plug-in estimator $\widehat{p}_n=\pi(\widehat{\boldsymbol{\theta}}_n)$ of the conditional probability $p$ follows from (\ref{piequa}). Analogous to the bounds in Section~\ref{predinterval}, a $100(1-\alpha)\%$ plug-in lower prediction bound is defined as \begin{align*} \underaccent{\tilde}{Y}^{PL}_{n, 1-\alpha}&=\sup\{y\in\{0\}\cup\mathbb{Z}^{+}; \mathrm{pbinom}(y-1, n-r_n, \widehat{p}_{n})\leq\alpha\} \\&= \begin{cases} \mathrm{qbinom}(\alpha, n-r_n, \widehat p_n), &\text{if } \mathrm{pbinom}(\mathrm{qbinom}(\alpha, n-r_n, \widehat p_n), n-r_n, \widehat p_n)>\alpha,\\ \mathrm{qbinom}(\alpha, n-r_n, \widehat p_n)+1, &\text{if } \mathrm{pbinom}(\mathrm{qbinom}(\alpha, n-r_n, \widehat p_n), n-r_n, \widehat p_n)=\alpha,\\ \end{cases} \end{align*} where $\mathrm{pbinom}$ and $\mathrm{qbinom}$ are, respectively, the binomial cdf and quantile function. Similarly, the $100(1-\alpha)\%$ plug-in upper prediction bound for $Y_n$ is defined as \begin{align*} \tilde{Y}^{PL}_{n, 1-\alpha}&=\inf\{y\in\{0\}\cup\mathbb{Z}^{+}; \mathrm{pbinom}(y, n-r_n, \widehat{p}_{n})\geq1-\alpha\}=\mathrm{qbinom}(1-\alpha, n-r_n, \widehat p_n). \end{align*} Section~\ref{coverageprob} mentioned that asymptotically correct coverage may not generally be possible for prediction intervals involving a discrete predictand. However, for the within-sample prediction considered here, prediction interval methods can be sensibly examined for properties of asymptotic correctness, which we consider in the following section. This is because discreteness in the (conditionally) binomial predictand $Y_n$ essentially disappears at large sample sizes $n$, due to normal approximations. \section{The Irregularity of the Within-Sample Prediction} \label{plugin_not_regular} \subsection{A Regular Prediction Problem} \label{regular_prediction} Under the general prediction framework described in Section~\ref{background}, the conditional cdf $G_n(\cdot| \boldsymbol{D}_n;\boldsymbol{\theta})$ of a predictand $Y_n$ given the observed data $\boldsymbol{D}_n$ is often estimated by the plug-in method as $G_n(\cdot| \boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)$ (also known as a predictive distribution), where $\widehat{\boldsymbol{\theta}}_n$ is a consistent estimator of $\boldsymbol{\theta}$ based on $\boldsymbol{D}_n$. To frame much of the literature related to the plug-in method (Section~\ref{literature_review}), we may define the prediction problem most commonly associated with the plug-in method as ``regular'' according to the following definition.
\begin{definition} In the notation of Section~\ref{background}, a prediction problem is called regular if \begin{equation*} \sup_{y\in\mathbb{R}}|G_n(y|\boldsymbol{D}_n; \boldsymbol{\theta})-G_n(y|\boldsymbol{D}_n; \widehat{\boldsymbol{\theta}}_n)|\xrightarrow{p}0 \end{equation*} holds as $n\to\infty$ for any consistent estimator $\widehat{\boldsymbol{\theta}}_n$ of $\boldsymbol{\theta}$ (i.e., $\widehat{\boldsymbol{\theta}}_n\xrightarrow{p} \boldsymbol{\theta}$). \label{def1} \end{definition} Unlike coverage probability (where exactness may again not be possible for discrete predictands), the above definition reflects the underlying sense in which the plug-in method for prediction intervals is often asymptotically valid for both discrete and continuous predictands. By the nature of many prediction problems (e.g., new sample prediction), the conditional form of the cdf $G_n$ may also not necessarily vary with $n$ (e.g., $G_n( \cdot |\boldsymbol{D}_n;\boldsymbol{\theta} )= G( \cdot; \boldsymbol{\theta})$). Hence, in a regular prediction problem, the plug-in predictive distribution (estimated cdf) asymptotically captures the true conditional cdf of the predictand, so that differences are expected to vanish between quantiles of the true predictand $Y_n$ and the associated plug-in prediction bounds. Further, when the predictand has a continuous and asymptotically tight conditional distribution (with probability 1), such as when the conditional cdf $G_n(\cdot| \boldsymbol{D}_n; \boldsymbol{\theta}) = G(\cdot;\boldsymbol{\theta})$ of the predictand does not vary with $n$, the plug-in method will be asymptotically correct. \subsection{Failure of the Plug-in Method} This section shows that the within-sample prediction problem described in Section~\ref{single_cohort_within_sample_pred} is not regular and that the plug-in method is not asymptotically valid for within-sample prediction. To avoid redundancy, the presentation of results will focus on the plug-in upper prediction bound; the lower bound is analogous by Remark~1 below. In the context of within-sample prediction (cf.~Section~\ref{withinsample}), recall that the $100(1-\alpha)\%$ plug-in upper prediction bound for the future count $Y_n \equiv \sum_{i=1}^n \mathrm{I}(T_i \in (t_c,t_w])$ is defined as \begin{align*} \tilde{Y}^{PL}_{n, 1-\alpha}=\inf\{y\in\mathbb{Z}; \mathrm{pbinom}(y, n-r_n, \widehat{p}_{n})\geq1-\alpha\}. \end{align*} The following theorem shows that the coverage probability of $\tilde{Y}^{PL}_{n, 1-\alpha}$ does not converge to $1-\alpha$ as $n$ increases. \begin{theorem} Let $T_{1}, ..., T_{n}$ denote a random sample from a parametric distribution with cdf $F(\cdot; \boldsymbol{\theta}_{0})$ (at the true value of $\boldsymbol{\theta}=\boldsymbol{\theta}_{0}\in\mathbb{R}^q$), which is observed under Type~I censoring at $t_c>0$. Suppose also that $F(t_c;\boldsymbol{\theta}_0) <1$, $p_0 = \pi(\boldsymbol{\theta}_0)\in(0, 1)$ in (\ref{piequa}), $F(t_c;\boldsymbol{\theta})$ is continuous at $\boldsymbol{\theta}_0$, and that the conditional probability (parametric function) $p\equiv\pi(\boldsymbol{\theta})$ is continuously differentiable in a neighborhood of $\boldsymbol{\theta}_0$ with non-zero gradient $\nabla_0\equiv\partial \pi(\boldsymbol{\theta})/\partial \boldsymbol{\theta}|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{0}}$.
Based on the censored sample, suppose $\widehat{\boldsymbol{\theta}}_n$ is an estimator of $\boldsymbol{\theta}$ satisfying $\sqrt{n}(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta}_{0})\xrightarrow{d} \mathrm{MVN}(\boldsymbol{0}, \boldsymbol{V}_0)$, as $n\rightarrow\infty$, for a multivariate normal distribution with mean vector $\boldsymbol{0}$ and positive definite variance matrix $\boldsymbol{V}_{0}$. Then, \begin{enumerate} \item The within-sample prediction of $Y_n = \sum_{i=1}^n \mathrm{I}(t_c < T_i \leq t_w)$ fails to be a regular prediction problem: denoting $G_n(y|\boldsymbol{D}_n,\boldsymbol{\theta}_0)=\text{pbinom}(y,n-r_n,p_0)$ as the conditional cdf of $Y_n$ and $G_n(y|\boldsymbol{D}_n,\widehat{\boldsymbol{\theta}}_n)=\text{pbinom}(y,n-r_n,\widehat{p}_n)$ as its plug-in estimator, then \[ \sup_{y \in \mathbb{R}} \left| G_n(y|\boldsymbol{D}_n,\boldsymbol{\theta}_0) - G_n(y|\boldsymbol{D}_n,\widehat{\boldsymbol{\theta}}_n)\right| \xrightarrow{d} 2\Phi_{\mathrm{nor}}(\sqrt{v_1}|Z_1|/2)-1, \] where $Z_1$ is a standard normal variable with cdf $\Phi_{\mathrm{nor}}(z)=\int_{-\infty}^{z} 1/\sqrt{2 \pi} e^{-u^{2} / 2}d u$, $z\in\mathbb{R}$, and $$ v_{1}\equiv\frac{[1-F(t_{c}; \boldsymbol{\theta}_{0})]}{p_{0}(1-p_{0})}\nabla_{0}^{t}\boldsymbol{V}_{0}\nabla_0\in(0, \infty). $$ \item The plug-in upper prediction bound $\tilde{Y}^{PL}_{n, 1-\alpha}$ generally fails to have asymptotically correct coverage: \begin{align*} \lim_{n\rightarrow\infty}\Pr(Y_{n}\leq \tilde{Y}^{PL}_{n, 1-\alpha})=\Lambda_{1-\alpha}(v_1)\in(0,1) \quad \text{such that} \\ \sgn\left[\Lambda_{1-\alpha}(v_1)-(1-\alpha)\right]= \begin{cases} 1&\quad\mbox{if $\alpha \in(1/2,1)$}\\ 0&\quad\mbox{if $\alpha=1/2$}\\ -1&\quad\mbox{if $\alpha\in(0,1/2)$}, \end{cases} \end{align*} where $\sgn(\cdot)$ is the sign function and $\Lambda_{1-\alpha}(v_1) \equiv \int_{-\infty}^{\infty}\Phi_{\mathrm{nor}}\left[\Phi_{\mathrm{nor}}^{-1}(1-\alpha)+z \sqrt{v_{1}}\right]d \Phi_{\mathrm{nor}}(z)$. Furthermore, $\Lambda_{1-\alpha}(v_1) \in [1/2,1-\alpha)$ is a decreasing function of $v_1>0$ for a given $\alpha \in (0,1/2)$, while $\Lambda_{1-\alpha}(v_1) \in (1-\alpha,1/2]$ is increasing in $v_1>0$ for $\alpha \in (1/2,1)$, and $\lim_{v_1 \to \infty}\Lambda_{1-\alpha}(v_1)=1/2$ holds for any $\alpha\in(0,1)$. \end{enumerate} \label{first_theorem} \end{theorem} \noindent\textbf{Remark 1}. The lower plug-in bound $\underaccent{\tilde}{Y}_{n,1-\alpha}^{PL}$ behaves similarly with $\lim_{n\to \infty}\Pr(Y_n \geq\underaccent{\tilde}{Y}_{n, 1-\alpha}^{PL}) = \lim_{n\to\infty}\Pr(Y_n \leq \tilde{Y}_{n,1-\alpha}^{PL})$ in Theorem 1.\\ \indent The proof of Theorem~\ref{first_theorem} is in the online supplementary material. This counter-intuitive result reveals that the plug-in method should not be used to construct prediction intervals in the within-sample prediction problem, even if the sample size is large. The first part of Theorem~\ref{first_theorem} entails that plug-in estimation fails to capture the distribution of the predictand $Y_n$ here, to the extent that the supremum difference between estimated and true distributions has a {\em random} limit, rather than converging to zero as in a regular prediction (cf.~Definition~\ref{def1}). As a consequence, the limiting coverage probability of the plug-in bound turns out to be ``off'' by an amount determined by the magnitude of $v_1>0$ in Theorem~\ref{first_theorem} (part 2).
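Although not stated in the theorem, a closed form for this limit makes the size of the distortion easy to see: applying the standard normal identity $\int_{-\infty}^{\infty}\Phi_{\mathrm{nor}}(a+bz)\, d\Phi_{\mathrm{nor}}(z)=\Phi_{\mathrm{nor}}\big(a/\sqrt{1+b^{2}}\big)$ to the definition of $\Lambda_{1-\alpha}(v_1)$ gives \begin{equation*} \Lambda_{1-\alpha}(v_1)=\Phi_{\mathrm{nor}}\left[\frac{\Phi_{\mathrm{nor}}^{-1}(1-\alpha)}{\sqrt{1+v_{1}}}\right]. \end{equation*} For example, a nominal 95\% plug-in upper prediction bound ($\alpha=0.05$) has limiting coverage $\Phi_{\mathrm{nor}}(1.645/\sqrt{2})\approx0.88$ when $v_1=1$, and $\Phi_{\mathrm{nor}}(1.645/\sqrt{5})\approx0.77$ when $v_1=4$.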
For increasing values of $v_1$, the coverage probability approaches 0.5, regardless of the nominal coverage level intended. An intuitive explanation for the failure of the plug-in method is that, although $\widehat{p}_{n}$ converges consistently to $p$, the growing number of Bernoulli trials $n-r_n$ in $Y_{n}$ offsets the improvements that larger samples may offer in estimation by $\widehat p_n$. In other words, when standardizing the true $1-\alpha$ quantile, say $Y_{n,1-\alpha}$, of the (conditionally binomial) predictand $Y_n$, one obtains a standard normal quantile $[Y_{n,1-\alpha} -(n-r_n)p]/\sqrt{(n-r_n)p(1-p)}\approx \Phi_{\mathrm{nor}}^{-1}(1-\alpha)$ by normal approximation; however, the same standardization applied to the plug-in bound $\tilde{Y}_{n,1-\alpha}^{PL}$ gives $[\tilde{Y}_{n,1-\alpha}^{PL} -(n-r_n)p]/\sqrt{(n-r_n)p(1-p)}\approx \Phi_{\mathrm{nor}}^{-1}(1-\alpha) + \sqrt{n-r_n}\,(\widehat{p}_n-p)/\sqrt{p(1-p)}$, which differs by a substantial and random amount $\sqrt{n-r_n}\,(\widehat{p}_n-p)/\sqrt{p(1-p)}$ (having a normal limit itself). Hence, validity of the plug-in method for within-sample prediction would require an estimator $\widehat p_n$ such that $\widehat p_n=p+o_p(n^{-1/2})$, which demands more than what is available from standard $\sqrt{n}$-consistency. \section{Prediction Intervals Based on Calibration} \label{calibration} \subsection{Calibrating Plug-in Prediction Bounds} \citet{cox1975} suggested a calibration approach for improving the plug-in method, which is described next. Considering the general prediction problem (cf.~Section~\ref{predinterval}), suppose a future random variable $Y \equiv Y_n$ has a conditional cdf $G_n(\cdot|\boldsymbol{D}_n;\boldsymbol{\theta}) \equiv G(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})$ given the random sample $\boldsymbol{D}_n$, and $\widehat{\boldsymbol{\theta}}_n$ is a consistent estimator of $\boldsymbol{\theta}$ from $\boldsymbol{D}_n$. The coverage probability of the $100(1-\alpha)$\% plug-in upper prediction bound is denoted by $\Pr\left[G(Y|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)\leq 1-\alpha\right]=1-\alpha^\prime$, where $\alpha^\prime$ is generally different from $\alpha$ due to the estimation uncertainty in $\widehat{\boldsymbol{\theta}}_n$. The basic idea of the calibration method is to find a level $\alpha^\dagger$ so that the coverage probability $\Pr\left[G(Y|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)\leq1-\alpha^\dagger\right]$ is equal to (or closer to) $1-\alpha$. The resulting $100(1-\alpha^\dagger)\%$ upper plug-in prediction bound $\tilde{Y}_{n,1-\alpha^\dag}^{PL}$ is called the $100(1-\alpha)\%$ upper calibrated prediction bound. However, determination of $\alpha^\dagger$ relies on both the distribution of $Y$ and the sampling distribution of $\widehat{\boldsymbol{\theta}}_n$, each of which depends on the unknown parameter $\boldsymbol{\theta}$. So instead, $\alpha^\dagger$ is obtained by solving the equation $ \Pr{}_{\!\!\ast}\left[G(Y^\ast|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}^{\ast}_n)\leq1-\alpha^\dagger\right]=1-\alpha $, where $\Pr{}_{\!\!\ast}$ denotes the bootstrap probability induced by $Y^\ast\sim G(\cdot|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)$ and by $\widehat{\boldsymbol{\theta}}^{\ast}_n$ as a bootstrap version of $\widehat{\boldsymbol{\theta}}_n$; for example, $\widehat{\boldsymbol{\theta}}^{\ast}_n$ may be based on a bootstrap sample $\boldsymbol{D}_n^*$ found by a parametric bootstrap applied using $\widehat{\boldsymbol{\theta}}_n$ in the role of the unknown parameter vector $\boldsymbol{\theta}$.
\citet{beran1990} showed that, under certain conditions, the calibrated upper prediction bound improves on the plug-in method: instead of a coverage error of $O(n^{-1})$, one has $\Pr\left[Y\leq G^{-1}(1-\alpha^\dagger|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)\right]=1-\alpha+O(n^{-2})$. However, such results for the validity of the calibration method cannot be applied directly to within-sample prediction because the conditions in \citet{beran1990} require that the prediction problem be regular (cf.~Section~\ref{regular_prediction}), which is not true for the within-sample prediction problem (Theorem~\ref{first_theorem}). Consequently, the asymptotic correctness of the calibration method needs to be established separately for within-sample prediction, as considered next. \subsection{The Calibration-Bootstrap Method for the Within-Sample Prediction} The general method in \citet{beran1990} is modified here to construct a calibrated prediction interval for within-sample prediction; it is called the calibration-bootstrap method in the rest of this paper. For a bootstrap sample $\boldsymbol{D}^\ast_n$ with $r^\ast_n$ observed events (e.g., from a parametric bootstrap using $\widehat{\boldsymbol{\theta}}_n$), we define a set of random variables $\left(Y_n^\dagger, n-r_{n}^{\ast}, \widehat{p}_{n}^{\ast}\right)$, where $\widehat{p}_{n}^{\ast}=\pi( \widehat{\boldsymbol{\theta}}^{\ast}_n)$ is the bootstrap version of $\widehat{p}_n=\pi(\widehat{\boldsymbol{\theta}}_n)$ and $Y_n^\dagger\sim \mathrm{binomial}(n-r_{n}^{\ast}, \widehat{p}_n)$, conditional on $r_n^\ast$. For the $100(1-\alpha)\%$ lower prediction bound, the calibrated confidence level is $$ \alpha^{\dagger}_{L}=\sup\{u\in[0, 1]:\Pr{}_{\!\!\ast}\left[\mathrm{pbinom}(Y^\dagger_n, n-r_n^\ast,\widehat p_n^\ast)\leq u\right]\leq\alpha\}, $$ where $\Pr{}_{\!\!\ast}$ is the bootstrap probability induced by $\boldsymbol{D}^\ast_n$, and then the calibrated $100(1-\alpha)\%$ lower prediction bound is given by $\underaccent{\tilde}{Y}_{n,1-\alpha}^C= \underaccent{\tilde}{Y}_{n,1-\alpha^\dagger_L}^{PL}$. For the $100(1-\alpha)\%$ upper prediction bound, the calibrated confidence level is $$ 1-\alpha^\dagger_U = \inf\{u\in[0, 1] :\Pr{}_{\!\!\ast}\!\left[\mathrm{pbinom}(Y_n^\dagger, n-r_{n}^{\ast}, \widehat{p}_{n}^{\ast})\leq u\right]\geq 1-\alpha\}, $$ so that the calibrated $100(1-\alpha)\%$ upper prediction bound is $\tilde{Y}_{n,1-\alpha}^C=\tilde{Y}_{n,1-\alpha^\dagger_U}^{PL}$. Here $\underaccent{\tilde}{Y}_{n,1-\alpha}^{PL}$ and $\tilde{Y}_{n,1-\alpha}^{PL}$ represent lower and upper plug-in prediction bounds, respectively, as defined in Section~\ref{withinsample}. The calibration-bootstrap method involves approximating the distribution of $U=\mathrm{pbinom}(Y_{n}, n-r_n, \widehat{p}_{n})$ with the bootstrap distribution of $U^\ast=\mathrm{pbinom}(Y_{n}^\dagger, n-r^\ast_n, \widehat{p}_{n}^\ast)$. The bootstrap distribution of $U^\ast$ is used to calibrate the plug-in method. The procedure for using the calibration-bootstrap method to construct a prediction interval is described below: \begin{enumerate} \itemsep-0.5em \item Compute the maximum likelihood (ML) estimate $\widehat{\boldsymbol{\theta}}_n$ using data $\boldsymbol{D}_n$ and the ML estimate $\widehat{p}_n=\pi(\widehat{\boldsymbol{\theta}}_n)$. \item Generate a bootstrap sample $\boldsymbol{D}_n^\ast$ and denote its number of events by $r_n^\ast$.
\item Compute $\widehat{\boldsymbol{\theta}}_n^\ast$ and $\widehat{p}_n^\ast=\pi(\widehat{\boldsymbol{\theta}}_n^\ast)$ using the bootstrap sample $\boldsymbol{D}_n^\ast$. \item Generate $y^\ast$ from the distribution $\text{binomial}(n-r^\ast_n, \widehat{p}_n)$ and compute $u^\ast=\mathrm{pbinom}(y^\ast, n-r_n^\ast, \widehat{p}_n^\ast)$. \item Repeat steps 2--4 $B$ times to obtain $B$ realizations of $u^\ast$ as $\{u_1^\ast,\dots,u_B^\ast\}$. \item Find the $\alpha$ and $1-\alpha$ quantiles of $\{u_1^\ast,\dots,u_B^\ast\}$, and denote these by $u_\alpha$ and $u_{1-\alpha}$, respectively. The $1-\alpha$ calibrated lower and upper prediction bounds are $\underaccent{\tilde}{Y}_{n,1-\alpha}^C=\underaccent{\tilde}{Y}_{n,1-u_{\alpha}}^{PL}$ and $\tilde{Y}^C_{n,1-\alpha}=\tilde{Y}_{n,u_{1-\alpha}}^{PL}$. \end{enumerate} The pseudo-code for this algorithm is in the online supplementary material; a small R illustration of steps 1--6 is also given at the end of this section. Next, the calibration-bootstrap method is shown to be asymptotically correct. This requires a mild assumption on the bootstrap involved, namely that the parameter estimators $\widehat{\boldsymbol{\theta}}_n^\ast$ in the bootstrap world provide valid approximations for the sampling distribution of $\sqrt{n}(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta})$ from the original data, in large samples. More formally, let $\mathcal{L}_n^* \equiv \mathcal{L}_n^*(\boldsymbol{D}_n)$ denote the probability law of the bootstrap quantity $\sqrt{n}(\widehat{\boldsymbol{\theta}}_n^\ast-\widehat{\boldsymbol{\theta}}_n)$ (conditional on the data $\boldsymbol{D}_n$) and let $\mathcal{L}_n$ denote the probability law of $\sqrt{n}(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta})$. Let $\rho(\mathcal{L}_n, \mathcal{L}_n^\ast)$ denote the distance between these distributions under any metric $\rho(\cdot,\cdot)$ that metrizes the topology of weak convergence (e.g., the Prokhorov metric). Also, in the bootstrap re-creation, the probability $\Pr{}_{\!\!\ast}(T_1^* \leq t_c)$ that a bootstrap observation $T_1^\ast$ is observed before the censoring time $t_c$ should be a consistent estimator of $F(t_c;\boldsymbol{\theta})$ (e.g., $\Pr{}_{\!\!\ast}(T_1^* \leq t_c) = F(t_c;\widehat{\boldsymbol{\theta}}_n)$ would hold as a natural estimator under a parametric bootstrap). \begin{theorem} Under the conditions of Theorem~\ref{first_theorem}, suppose that $\rho(\mathcal{L}_n^*, \mathcal{L}_n) \stackrel{p}{\rightarrow} 0$ and $\Pr{}_{\!\!\ast}(T_1^* \leq t_c) \stackrel{p}{\rightarrow} F(t_c ; \boldsymbol{\theta}_0)$ as $n\to \infty$. Then, the $100(1-\alpha)\%$ calibrated upper and lower prediction bounds, respectively $\tilde{Y}^{C}_{n, 1-\alpha}$ and $\underaccent{\tilde}{Y}^{C}_{n,1-\alpha}$, have asymptotically correct coverage, that is, \begin{equation*} \lim_{n\rightarrow\infty}\Pr(Y_{n}\leq\tilde{Y}^{C}_{n, 1-\alpha}) = 1-\alpha=\lim_{n\rightarrow\infty}\Pr(Y_{n}\geq\underaccent{\tilde}{Y}^{C}_{n,1-\alpha}). \end{equation*} \label{theocali} \end{theorem} The proof is in the online supplementary material. Theorem~\ref{theocali} and its extension in Section~\ref{sec:multiple-cohort} guarantee, for example, that the calibration prediction method employed in \citet{elawqm1999}, \citet{hong2009}, \citet{hong2010}, and \citet{hong2013} to construct prediction intervals for the cumulative number of events is asymptotically correct.
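To make the procedure concrete, the following R sketch implements steps 1--6 for Type~I censored Weibull data. It is a minimal illustration, not production code: all object and helper names (\texttt{fit.ml}, \texttt{p.cond}, the simulated demo data, etc.) are ours, the ML fit uses \texttt{survreg} from the \texttt{survival} package, and the final bounds ignore the discreteness adjustments in the formal $\sup$/$\inf$ definitions.
\begin{verbatim}
# Calibration-bootstrap prediction bounds (steps 1-6) for
# Type I censored Weibull data; helper names are ours
library(survival)

psev   <- function(x) 1 - exp(-exp(x))     # standard SEV cdf
p.cond <- function(mu, sigma, tc, tw) {    # conditional probability p
  Fc <- psev((log(tc) - mu) / sigma)
  Fw <- psev((log(tw) - mu) / sigma)
  (Fw - Fc) / (1 - Fc)
}
fit.ml <- function(time, status) {         # ML estimates (mu, sigma)
  f <- survreg(Surv(time, status) ~ 1, dist = "weibull")
  c(mu = unname(coef(f)), sigma = f$scale)
}

# demo data: n units, Type I censoring at tc, window (tc, tw]
set.seed(1); n <- 1000; tc <- 1; tw <- 2; alpha <- 0.05; B <- 2000
t.all <- rweibull(n, shape = 2, scale = 3)
delta <- as.numeric(t.all <= tc); tobs <- pmin(t.all, tc)

# Step 1: ML estimates from the observed sample D_n
th    <- fit.ml(tobs, delta); r <- sum(delta)
p.hat <- p.cond(th["mu"], th["sigma"], tc, tw)

u.star <- replicate(B, {
  # Steps 2-3: parametric bootstrap sample censored at tc; refit
  t.new  <- rweibull(n, shape = 1 / th["sigma"], scale = exp(th["mu"]))
  d.star <- as.numeric(t.new <= tc)
  th.s   <- fit.ml(pmin(t.new, tc), d.star)
  p.s    <- p.cond(th.s["mu"], th.s["sigma"], tc, tw)
  # Step 4: y* ~ binomial(n - r*, p.hat); u* = pbinom(y*, n - r*, p*)
  y.s <- rbinom(1, n - sum(d.star), p.hat)
  pbinom(y.s, n - sum(d.star), p.s)
})
# Steps 5-6: calibrated levels from the u* quantiles, then plug-in
# bounds at those levels
lo <- qbinom(quantile(u.star, alpha),     n - r, p.hat)
up <- qbinom(quantile(u.star, 1 - alpha), n - r, p.hat)
\end{verbatim}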
\section{Prediction Intervals Based on Predictive Distributions} \label{pred:method} \subsection{Predictive Distributions} Under the general prediction setting in Section~\ref{background}, recall that the predictive distribution under the plug-in method, given by $G(\cdot|\boldsymbol{D}_n; \widehat{\boldsymbol{\theta}}_n)$, provides an estimator of the conditional cdf $G(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})$ of the predictand $Y$. Quantiles of this predictive distribution can be associated with prediction bounds for $Y$. Generally speaking, any method that leads to a prediction bound for $Y$ can be translated into a predictive distribution by defining the $100(1-\alpha)\%$ upper prediction bound as the $1-\alpha$ quantile of the predictive distribution (and vice versa). In this section, the strategy is to construct predictive distributions that lead to prediction bound (or interval) methods having asymptotically correct coverage for within-sample prediction. For this purpose, it is helpful to consider a Bayesian predictive distribution, defined by \begin{equation} G_{B}(y |\boldsymbol{D}_n)=\int G(y |\boldsymbol{D}_n; \boldsymbol{\theta}) \gamma(\boldsymbol{\theta} |\boldsymbol{D}_n) d \boldsymbol{\theta}, \label{bayespred} \end{equation} where $\gamma(\boldsymbol{\theta} |\boldsymbol{D}_n)$ is a joint posterior distribution for $\boldsymbol{\theta}$. The $1-\alpha$ quantile of the Bayesian predictive distribution provides the $100(1-\alpha)\%$ upper Bayesian prediction bound. While this paper does not pursue the Bayesian method, the idea of the Bayesian predictive distribution can nevertheless be used by replacing the posterior $\gamma(\boldsymbol{\theta} |\boldsymbol{D}_n)$ in (\ref{bayespred}) with an alternative distribution over parameters to similarly define non-Bayesian predictive distributions. \citet{harris1989} replaced the posterior distribution in (\ref{bayespred}) with the bootstrap distribution of the parameters to construct a predictive distribution, while \citet{wang2012} replaced the posterior distribution with a fiducial distribution. \citet{shen_liu_xie_2018} proposed a framework for predictive inference by replacing the posterior distribution in (\ref{bayespred}) with a confidence distribution (CD) and provided theoretical results for this CD-based predictive distribution for the case of a scalar parameter. A CD is a probability distribution that quantifies the uncertainty of an unknown parameter; both the bootstrap distribution in \citet{harris1989} and the fiducial distribution in \citet{wang2012} can be viewed as CDs; see \citet{xie_singh_2013} for a review of these ideas. To summarize, a predictive distribution can be constructed by using a data-based distribution on the parameter space to replace the posterior distribution in (\ref{bayespred}). Following this idea, we aim to use draws from a joint probability distribution for the parameters such that the resulting predictive distribution can be used to construct asymptotically correct prediction bounds and intervals for within-sample prediction. In particular, we propose two ways of constructing predictive distributions, extending the framework proposed by \citet{shen_liu_xie_2018} to the within-sample prediction case. In Section~\ref{bootpred}, we describe a prediction method that is based on the bootstrap distribution of the parameters, which is called the direct-bootstrap method in this paper.
In Section~\ref{gpqpred}, we describe another method that works specifically with the (log-)location-scale family of distributions. This method is inspired by generalized pivotal quantities (GPQs), also involves generating bootstrap samples, and is called the GPQ-bootstrap method. \subsection{The Direct-Bootstrap Method} \label{bootpred} For within-sample prediction, recall that the number $Y_n$ of events between the censoring time $t_c$ and a future time $t_w>t_c$, given the Type~I censored data $\boldsymbol{D}_n$, is conditionally $\mathrm{binomial}(n-r_n, p)$, where $r_n$ is the number of events observed in $\boldsymbol{D}_n$ and $p$ is the conditional probability in (\ref{piequa}). The direct-bootstrap method uses the distribution of a bootstrap version $\widehat{p}_{n}^{*}=\pi(\widehat{\boldsymbol{\theta}}_n^\ast)$ of $\widehat{p}_{n}=\pi(\widehat{\boldsymbol{\theta}}_n)$, which is induced by the distribution of estimates $\widehat{\boldsymbol{\theta}}_n^\ast$ from a bootstrap sample $\boldsymbol{D}_n^\ast$, to construct a predictive distribution. Letting $\Pr{}_{\!\!\ast}$ denote bootstrap probability (probability induced by a bootstrap sample $\boldsymbol{D}^{*}_n$), the predictive distribution constructed using the direct-bootstrap method is \begin{equation} G_{Y_{n}}^{DB}(y|\boldsymbol{D}_n)= \int\mathrm{pbinom}(y, n-r_n,\widehat{p}_n^\ast)\Pr{}_{\!\!*}\left(d \widehat{p}_{n}^{*}\right) \approx\frac{1}{B} \sum_{b=1}^{B}\mathrm{pbinom}(y, n-r_n, \widehat{p}_b^\ast), \label{bootpredformula} \end{equation} where $\widehat{p}_{1}^{*}, \dots,\widehat{p}_{B}^{*}$ are realized bootstrap versions of $\widehat{p}_{n}$ from $B$ independently generated bootstrap samples $\boldsymbol{D}_n^{*(1)},\ldots,\boldsymbol{D}_n^{*(B)}$, and $B$ is the number of bootstrap samples. The $100(1-\alpha)\%$ lower and upper prediction bounds using the direct-bootstrap method are then \begin{equation} \begin{split} \underaccent{\tilde}{Y}_{n, 1-\alpha}^{{DB}}&=\sup \left\{y \in \{0\}\cup\mathbb{Z}^{+}:G_{Y_{n}}^{DB}(y-1 | \boldsymbol{D}_n)\leq \alpha\right\},\\ \tilde{Y}_{n, 1-\alpha}^{{DB}}&=\inf \left\{y \in \{0\}\cup\mathbb{Z}^{+} :G_{Y_{n}}^{DB}(y | \boldsymbol{D}_n)\geq 1-\alpha\right\}. \end{split} \label{direct-bound} \end{equation} \subsection{The GPQ-Bootstrap Method} \label{gpqpred} This section focuses on the log-location-scale distribution family and develops another method to construct a predictive distribution through approximate GPQs. Suppose $(T_1,\dots, T_n)$ is an i.i.d. random sample from a log-location-scale distribution \begin{equation} F(t;\mu, \sigma)=\Phi\left[\frac{\log(t)-\mu}{\sigma}\right], \label{log-location-scale} \end{equation} where $\Phi(\cdot)$ is a known cdf that is free of parameters. For example, if $\Phi(\cdot)$ is the standard normal cdf $\Phi_{\mathrm{nor}}(\cdot)$, then $T_{1}$ has a lognormal distribution. \citet{hannig2006} described methods for constructing GPQs and outlined the relationship between GPQs and fiducial inference. Applying these ideas, GPQs can be defined for the parameters $(\mu,\sigma)$ in the log-location-scale model as follows.
If $\mathbb{S}$ is a complete or Type~II censored independent sample from a log-location-scale distribution, a set of GPQs for $(\mu, \sigma)$ under $\mathbb{S}$ is given by \begin{equation} \begin{aligned} \mu_n^{\ast\ast}=\widehat{\mu}_{n}+\left(\frac{\mu-\widehat{\mu}^{\mathbb{S}^{*}}_{n}}{\widehat{\sigma}^{\mathbb{S}^{*}}_{n}}\right) \widehat{\sigma}_{n} \quad\text{and}\quad \sigma^{\ast\ast}_n=\left(\frac{\sigma}{\widehat{\sigma}_{n}^{\mathbb{S}^{*}}}\right) \widehat{\sigma}_{n},\end{aligned} \label{gpq} \end{equation} where $\mathbb{S}^{*}$ denotes an independent copy of the sample $\mathbb{S}$, and $(\widehat{\mu}_{n}, \widehat{\sigma}_{n})$ and $(\widehat{\mu}^{\mathbb{S}^{*}}_{n}, \widehat{\sigma}^{\mathbb{S}^{*}}_{n})$ denote the ML estimators of $(\mu, \sigma)$ computed from $\mathbb{S}$ and $\mathbb{S}^{*}$, respectively. These GPQs induce a distribution over the parameter space of $(\mu,\sigma)$ based on the data estimates $(\widehat{\mu}_n,\widehat{\sigma}_n)$ and, because $[(\mu-\widehat{\mu}_n)/\widehat{\sigma}_n,\sigma/\widehat{\sigma}_n]$ are pivotal quantities based on a complete or Type~II censored sample $T_1,\dots,T_n$ from the log-location-scale family, the distribution of $[(\mu-\widehat{\mu}_n^{\mathbb{S}^{*}})/\widehat{\sigma}_n^{\mathbb{S}^{*}}, \sigma/\widehat{\sigma}_n^{\mathbb{S}^{*}}]$ in (\ref{gpq}) can be directly approximated by simulation. GPQs can also, in some applications, be used to construct confidence intervals when an exact pivot is unavailable. Notice that, while the quantities in (\ref{gpq}) are GPQs for the log-location-scale family based on complete or Type~II censored data, these are no longer GPQs with Type~I censored data, where exact GPQs technically fail to exist. This is because the distribution of $\left[(\mu-\widehat{\mu}_{n})/\widehat{\sigma}_{n}, \sigma/\widehat{\sigma}_{n}\right]$ depends on the unknown event probability $F(t_c;\mu,\sigma)$ before the censoring time $t_c$ under Type~I censoring, which applies also to $\left[(\mu-\widehat{\mu}_{n}^{\mathbb{S}^{*}})/\widehat{\sigma}_{n}^{\mathbb{S}^{*}}, \sigma/\widehat{\sigma}_{n}^{\mathbb{S}^{*}}\right]$. However, the formula in (\ref{gpq}) can be used to provide a joint approximate GPQ distribution under Type~I censoring. Letting $\widehat{\boldsymbol{\theta}}_n^\ast = \left(\widehat{\mu}_{n}^{*}, \widehat{\sigma}_{n}^{*}\right)$ denote a bootstrap version of $\boldsymbol{\widehat{\theta}}_{n} = \left(\widehat{\mu}_{n}, \widehat{\sigma}_{n}\right)$, (\ref{gpq}) is extended to define a joint approximate GPQ distribution as the bootstrap distribution of $\widehat{\boldsymbol{\theta}}_n^{\ast\ast} = \left(\widehat{\mu}_{n}^{**}, \widehat{\sigma}_{n}^{**}\right)$, where \begin{equation} \begin{aligned} \widehat{\mu}_{n}^{**} =\widehat{\mu}_{n}+\left(\frac{\widehat{\mu}_{n}-\widehat{\mu}^{*}_{n}}{\widehat{\sigma}^{*}_{n}}\right) \widehat{\sigma}_{n}\quad\text{and}\quad\widehat{\sigma}^{**}_{n} =\left(\frac{\widehat{\sigma}_{n}}{\widehat{\sigma}_{n}^{*}}\right) \widehat{\sigma}_{n}.\end{aligned} \label{gpq2} \end{equation} The above definition of $\widehat{\boldsymbol{\theta}}_n^{**}$ also follows by using the bootstrap distribution of $\left[(\widehat{\mu}_{n}-\widehat{\mu}_{n}^{*})/\widehat{\sigma}_{n}^{*}, \widehat{\sigma}_{n}/\widehat{\sigma}_{n}^{*}\right]$ to approximate the sampling distribution of $\left[(\mu-\widehat{\mu}_{n})/\widehat{\sigma}_{n}, \sigma/\widehat{\sigma}_{n}\right]$ and linearly solving for $(\mu,\sigma)$.
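In code, the transformation (\ref{gpq2}) is two lines per bootstrap draw. Reusing the objects from the R sketch at the end of Section~\ref{calibration} (so \texttt{mu.b} and \texttt{sigma.b} below are assumed to be vectors of bootstrap ML estimates collected there, and all names remain purely illustrative), and anticipating the GPQ predictive distribution defined next:
\begin{verbatim}
# Approximate GPQ draws (mu**, sigma**) computed from bootstrap ML
# estimates (mu.b, sigma.b) and the data estimates th = (mu, sigma)
mu.ss    <- th["mu"] + ((th["mu"] - mu.b) / sigma.b) * th["sigma"]
sigma.ss <- (th["sigma"] / sigma.b) * th["sigma"]

p.ss  <- p.cond(mu.ss, sigma.ss, tc, tw)           # p** = pi(theta**)
G.gpq <- function(y) mean(pbinom(y, n - r, p.ss))  # GPQ predictive cdf
# the direct-bootstrap cdf instead averages pbinom(y, n - r, p*)
# over the bootstrap draws p* = p.cond(mu.b, sigma.b, tc, tw)
\end{verbatim}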
Then, using $\widehat{\boldsymbol{\theta}}_n^{\ast\ast}=(\widehat{\mu}_n^{\ast\ast},\widehat{\sigma}_n^{\ast\ast})$ instead of $\widehat{\boldsymbol{\theta}}_n^\ast=(\widehat{\mu}_n^{*},\widehat{\sigma}_n^{*})$, a predictive distribution can be defined by the same procedure that defined the predictive distribution in (\ref{bootpredformula}). Namely, by defining a random variable $\widehat{p}^{**}_{n}\equiv \pi(\widehat{\boldsymbol{\theta}}_n^{\ast\ast})$ from (\ref{piequa}) with a bootstrap distribution induced by $\widehat{\boldsymbol{\theta}}_n^{\ast\ast}=(\widehat{\mu}_{n}^{**}, \widehat{\sigma}_{n}^{**})$, the predictive distribution for $Y_n$ using the GPQ-bootstrap method is given by \begin{equation*} G_{Y_{n}}^{GPQ}(y | \boldsymbol{D}_n)=\int\mathrm{pbinom}(y, n-r_n, \widehat{p}_n^{**}) \Pr{}_{\!\!*}\left(d \widehat{p}_{n}^{**}\right)\approx\frac{1}{B}\sum_{b=1}^{B} \mathrm{pbinom}(y, n-r_n, \widehat p^{\ast\ast}_b), \end{equation*} where $\widehat p_1^{\ast\ast},\dots, \widehat p_B^{\ast\ast}$ are computed from realized bootstrap samples. The $100(1-\alpha)\%$ lower and upper prediction bounds using the GPQ-bootstrap method can be obtained by replacing the predictive distribution $G_{Y_n}^{DB}(\cdot|\cdot)$ with $G_{Y_n}^{GPQ}(\cdot|\cdot)$ in (\ref{direct-bound}). \subsection{Coverage Probability of the Proposed Methods} This section shows that both the direct-bootstrap method (Section~\ref{bootpred}) and the GPQ-bootstrap method (Section~\ref{gpqpred}) produce asymptotically correct prediction bounds/intervals for the future count $Y_n$. Hence, these two methods yield asymptotically valid inference for within-sample prediction of $Y_n$, as does the calibration-bootstrap method (Theorem~\ref{theocali}, Section~\ref{calibration}), unlike the standard plug-in method (Theorem~\ref{first_theorem}, Section~\ref{plugin_not_regular}). \begin{theorem} Under the same conditions as Theorem~\ref{theocali}, \begin{enumerate} \item The $100(1-\alpha)\%$ upper and lower prediction bounds using the direct-bootstrap method, respectively $\tilde{Y}^{DB}_{n, 1-\alpha}$ and $\underaccent{\tilde}{Y}^{DB}_{n, 1-\alpha}$, have asymptotically correct coverage. That is, $$\lim_{n\rightarrow\infty}\Pr(Y_{n}\leq\tilde{Y}^{DB}_{n, 1-\alpha}) = 1-\alpha=\lim_{n\rightarrow\infty}\Pr(Y_n\geq\underaccent{\tilde}{Y}_{n, 1-\alpha}^{DB}).$$ \item If the parametric distribution $F(\cdot; \mu, \sigma)$ belongs to the log-location-scale distribution family (\ref{log-location-scale}), with standard cdf $\Phi(\cdot)$ differentiable on $\mathbb{R}$, the $100(1-\alpha)\%$ upper and lower prediction bounds using the GPQ-bootstrap method, respectively $\tilde{Y}^{GPQ}_{n, 1-\alpha}$ and $\underaccent{\tilde}{Y}^{GPQ}_{n,1-\alpha}$, have asymptotically correct coverage. That is, $$\lim_{n\rightarrow\infty}\Pr(Y_{n}\leq\tilde{Y}^{GPQ}_{n, 1-\alpha}) = 1-\alpha=\lim_{n\rightarrow\infty}\Pr(Y_n\geq\underaccent{\tilde}{Y}_{n,1-\alpha}^{GPQ}).$$ \end{enumerate} \label{predbound} \end{theorem} The proof of Theorem~\ref{predbound} is in the online supplementary material. \section{Multiple Cohort Within-Sample Prediction} \label{sec:multiple-cohort} \subsection{Multiple Cohort Data} So far, the focus has been on within-sample prediction for single-cohort data. Multiple-cohort data, however, are more common in applications. In this section, the results for single-cohort data are extended to multiple-cohort data. In multiple-cohort data (e.g.
the bearing cage data of Section~\ref{m_exampls}), units from different cohorts are placed into service at different times. The multiple-cohort data $\mathbb{D}$ can be seen as a collection of several single-cohort datasets, $\mathbb{D}=\{\boldsymbol{D}_{n_{s}}, s=1,...,S\}$, where $S$ is the number of cohorts and $n_s$ is the number of units in cohort $s$ (sometimes, with no grouping, many cohorts have size 1). Within each cohort $\boldsymbol{D}_{n_{s}}=(D_{s,1},...,D_{s,n_s})$, each observation can be expressed as $D_{s,i}=(\delta_i^s, T^{obs,s}_{i})$, for $T^{obs,s}_{i}=T_i^s\delta_i^s+(1-\delta_i^s)t_c^s$, where $T_i^s$ is a random variable from a parametric distribution $F(\cdot;\boldsymbol{\theta})$, $t_c^s$ is the censoring time for cohort $s$, and $\delta_i^s=\mathrm{I}(T_i^s\leq t_c^s)$ is a random variable indicating whether a unit's value (e.g., failure time) does not exceed the censoring time $t_c^s$. Given the multiple-cohort data $\mathbb{D}$, the number of observed events (e.g., failures) within cohort $s$ is defined as $r_{n_s}=\sum_{i=1}^{n_s}\mathrm{I}(T_i^s\leq t_c^s)$, $s=1,...,S$, and the total number of units is $n=\sum_{s=1}^{S}n_s$. The predictand in the multiple-cohort data is the total number of events that will occur in a future time window of length $\Delta$, denoted by $Y_n=\sum_{s=1}^{S}\sum_{i=1}^{n_s}\mathrm{I}(t_c^s<T^s_i\leq t_w^s)$, where $t_w^s=t_c^s+\Delta$ for $s=1,\dots, S$. Within each cohort $s=1,...,S$, the number $Y_s =\sum_{i=1}^{n_s}\mathrm{I}(t_c^s < T_i^s \leq t_w^s)$ of future events has a binomial distribution. As in Section~\ref{single_cohort_within_sample_pred}, the conditional distribution of $Y_s$ is $\mathrm{binomial}(n_s-r_{n_s}, p_s)$, where $p_s$ is defined as \begin{align*} p_s\equiv\pi_{s}(\boldsymbol{\theta})=\frac{F(t_w^s;\boldsymbol{\theta})-F(t_c^s;\boldsymbol{\theta})}{1-F(t_c^s;\boldsymbol{\theta})},\quad s=1,\dots,S. \end{align*} Consequently, the predictand $Y_n=\sum_{s=1}^{S}Y_s$ has a Poisson-binomial distribution with probability vector $\boldsymbol{p}=(p_1,...,p_S)$ and weight vector $\boldsymbol{w}=(n_1-r_{n_1},...,n_S-r_{n_S})$. We denote this Poisson-binomial distribution by $\mathrm{Poibin}(\boldsymbol{p}, \boldsymbol{w})$, its cdf by $\mathrm{ppoibin}(\cdot, \boldsymbol{p}, \boldsymbol{w})$, and its quantile function by $\mathrm{qpoibin}(\cdot, \boldsymbol{p}, \boldsymbol{w})$; these functions are available in the \textbf{poibin} R package (described in \citet{hongpoisson2013}). If $\widehat{\boldsymbol{\theta}}_n$ is a consistent estimator of $\boldsymbol{\theta}$ based on the multiple-cohort data $\mathbb{D}$, an estimator $\widehat{\boldsymbol{p}}=(\widehat p^1_{n},... ,\widehat p^S_{n})$ of the conditional probabilities $\boldsymbol{p}$ follows by the substitution $\widehat{p}_n^s = \pi_s(\widehat{\boldsymbol{\theta}}_n)$, $s=1,\ldots,S$, similar to the single-cohort case.
Then, the $100(1-\alpha)\%$ plug-in lower and upper prediction bounds for $Y_n$ are \begin{align*} \underaccent{\tilde}{Y}^{PL}_{n, 1-\alpha}&= \sup \{ y\in \{0\}\cup\mathbb{Z}^{+}: \mathrm{ppoibin}\left(y-1, \widehat{\boldsymbol{p}}, \boldsymbol{w}\right) \leq\alpha\}\\ &=\begin{cases} \mathrm{qpoibin}(\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w}), &\text{if } \mathrm{ppoibin}(\mathrm{qpoibin}(\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w}), \widehat{\boldsymbol{p}}, \boldsymbol{w})>\alpha,\\ \mathrm{qpoibin}(\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w})+1, &\text{if } \mathrm{ppoibin}(\mathrm{qpoibin}(\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w}), \widehat{\boldsymbol{p}}, \boldsymbol{w})=\alpha, \end{cases}\\ \tilde{Y}_{n,1-\alpha}^{PL}&=\inf\{y\in\{0\}\cup\mathbb{Z}^{+}:\mathrm{ppoibin}(y, \widehat{\boldsymbol{p}}, \boldsymbol{w})\geq1-\alpha\}=\mathrm{qpoibin}(1-\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w}). \end{align*} Similar to the single-cohort case (Theorem~\ref{first_theorem}), the plug-in method also fails to provide asymptotically correct coverage probability under multiple-cohort data; see the online supplementary material. \subsection{The Calibration-Bootstrap Method for Multiple Cohort Data} \label{calibration_multiple_cohort_data} Formulating prediction bounds using the calibration-bootstrap method first requires simulation of bootstrap samples, where each bootstrap sample $\mathbb{D}^\ast$ matches the original data in terms of the number $S$ of cohorts as well as their respective sizes $n_s$ and censoring times $t_c^s$, $s=1,\ldots,S$. The bootstrap version of the estimator $\widehat{\boldsymbol{p}}=(\widehat p^1_{n},... ,\widehat p^S_{n})$ is $\widehat{\boldsymbol{p}}^{\ast}=(\widehat p^{1,\ast}_{n},... ,\widehat p^{S,\ast}_{n})$ from each bootstrap sample $\mathbb{D}^*$. Additionally, the number of events (e.g., failures) in the bootstrap sample, grouped by cohort, is $(r_{n_1}^{\ast},...,r_{n_S}^{\ast})$, from which we define a bootstrap future count $Y_n^{\dagger}\sim\mathrm{Poibin}(\widehat{\boldsymbol{p}}, \boldsymbol{w}^\ast)$ based on the weight vector from the bootstrap sample, $\boldsymbol{w}^{\ast}=(n_1-r_{n_1}^{\ast},...,n_S-r_{n_S}^{\ast})$. The bootstrap variable set $(Y_n^\dagger, \widehat{\boldsymbol{p}}^{\ast}, \boldsymbol{w}^{\ast})$ is then plugged into the Poisson-binomial cdf, leading to a transformed random variable $U^\ast=\mathrm{ppoibin}(Y_{n}^\dagger, \widehat{\boldsymbol{p}}^{\ast}, \boldsymbol{w}^\ast)\in[0,1]$ for deriving calibrated confidence levels $\alpha^\dagger_L$ and $\alpha^\dagger_U$ in the same way as in the single-cohort situation. Then, the $100(1-\alpha)\%$ calibrated lower prediction bound is $\underaccent{\tilde}{Y}^{C}_{n,1-\alpha}=\underaccent{\tilde}{Y}^{PL}_{n,1-\alpha^\dagger_L}$ and the corresponding upper prediction bound is $\tilde{Y}^{C}_{n,1-\alpha}=\tilde{Y}^{PL}_{n,1-\alpha^\dagger_U}$. The calibration-bootstrap method remains asymptotically correct for multiple-cohort within-sample prediction. The multiple-cohort extensions of Theorem~\ref{theocali} and the algorithm are in the online supplementary material.
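In R, these Poisson-binomial computations reduce to calls to the \textbf{poibin} package. The sketch below is illustrative only: the object names (\texttt{p.hat.s} for the estimated cohort probabilities, \texttt{w} for the weights $n_s-r_{n_s}$) are ours, and we assume the package's \texttt{ppoibin}/\texttt{qpoibin} interface with a weight argument \texttt{wts}.
\begin{verbatim}
# Multiple-cohort plug-in bounds via the poibin package
library(poibin)
up.pl <- qpoibin(1 - alpha, pp = p.hat.s, wts = w)  # upper bound
q.lo  <- qpoibin(alpha,     pp = p.hat.s, wts = w)
lo.pl <- if (ppoibin(q.lo, pp = p.hat.s, wts = w) > alpha)
           q.lo else q.lo + 1                       # lower bound
# calibration: u* = ppoibin(y.dagger, pp = p.star.s, wts = w.star)
# replaces pbinom(...) in the single-cohort algorithm above
\end{verbatim}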
\subsection{The Direct- and GPQ-Bootstrap Methods for Multiple Cohort Data} For multiple-cohort data, constructing prediction bounds for $Y_n$ based on the predictive-distribution-based methods also requires bootstrap data and, in particular, the distribution of a bootstrap version $\widehat{\boldsymbol{p}}^\ast$ of $\widehat{\boldsymbol{p}}$ as in Section~\ref{calibration_multiple_cohort_data}. The predictive distribution from the direct-bootstrap method is \begin{equation}\label{bootppoi} G^{DB}_{Y_n}(y|\mathbb{D})=\int \mathrm{ppoibin}(y, \widehat{\boldsymbol{p}}^{\ast}, \boldsymbol{w})\Pr{}_{\!\!\ast}(d \widehat{\boldsymbol{p}}^{\ast})\approx\frac{1}{B}\sum_{b=1}^{B}\mathrm{ppoibin}(y, \widehat{\boldsymbol{p}}^\ast_b, \boldsymbol{w}), \end{equation} where $\widehat{\boldsymbol{p}}^\ast_1,\dots, \widehat{\boldsymbol{p}}^\ast_B$ are realized bootstrap versions of $\widehat{\boldsymbol{p}}$ across independently generated bootstrap versions of the multiple-cohort data (e.g., $\mathbb{D}^*$). The $100(1-\alpha)\%$ direct-bootstrap lower and upper prediction bounds for $Y_n$ are defined as the modified $\alpha$ quantile and the $1-\alpha$ quantile of this predictive distribution, respectively, given by \begin{align*} \underaccent{\tilde}{Y}_{n, 1-\alpha}^{DB}&=\sup \left\{y \in\{0\} \cup \mathbb{Z}^{+}: G_{Y_{n}}^{DB}\left(y-1 | \mathbb{D}\right) \leq\alpha\right\},\\ \tilde{Y}_{n, 1-\alpha}^{DB}&=\inf \left\{y \in\{0\} \cup \mathbb{Z}^{+}: G_{Y_{n}}^{DB}\left(y | \mathbb{D}\right) \geq 1-\alpha\right\}. \end{align*} If $F(\cdot;\boldsymbol{\theta})=F(\cdot;\mu,\sigma)$ belongs to the log-location-scale family as in (\ref{log-location-scale}), we use $\widehat{\boldsymbol{\theta}}_n^{\ast}=(\widehat\mu_n^\ast,\widehat\sigma_n^\ast)$ to compute approximate GPQs $\widehat{\boldsymbol{\theta}}_n^{\ast\ast}=(\widehat\mu_n^{\ast\ast},\widehat\sigma_n^{\ast\ast})$ using (\ref{gpq2}), and compute $\widehat{\boldsymbol{p}}^{\ast\ast}=(\widehat p_{n}^{1,\ast\ast},\dots,\widehat p_{n}^{S,\ast\ast})$, where $\widehat p_{n}^{s, \ast\ast}=\pi_s(\widehat{\boldsymbol{\theta}}^{\ast\ast}_n)$. Then the GPQ-bootstrap method can be implemented to obtain prediction bounds for $Y_n$ by replacing $\widehat{\boldsymbol{p}}^\ast$ with $\widehat{\boldsymbol{p}}^{\ast\ast}$ in the direct-bootstrap predictive distribution (\ref{bootppoi}) and analogously determining prediction bounds from the quantiles of this predictive distribution. The direct- and GPQ-bootstrap methods produce asymptotically correct prediction bounds from multiple-cohort data; the extension of Theorem~\ref{predbound} is provided in the online supplementary material. \section{A Simulation Study} \label{simu:study} The purpose of this simulation study is to illustrate, for finite sample sizes, agreement with the theorems established in the previous sections and to provide insight into the finite-sample performance of the different methods. The details and results in this section are for Type~I censored single-cohort data. Let the event of interest be the failure of a unit. We simulated Type~I censored data using the two-parameter Weibull distribution and compared the coverage probabilities of the prediction bounds based on the plug-in, calibration-bootstrap, direct-bootstrap, and GPQ-bootstrap methods.
The Weibull cdf is $$ F(t;\beta, \eta) = 1-\exp\left[-\left(\frac{t}{\eta}\right)^{\beta}\right],\quad t>0, $$ with positive shape $\beta$ and scale $\eta$ parameters, and can also be parameterized as $$ F(t;\mu, \sigma) = \Phi_{\textrm{sev}}\left[\frac{\log (t)-\mu}{\sigma}\right],\quad t>0, $$ where $\Phi_{\textrm{sev}}(x)=1-\exp\left[-\exp(x)\right]$ is the cdf of the standard smallest extreme value distribution, $\mu = \log (\eta)$, and $\sigma = 1/\beta$. The conditions in Theorems~\ref{first_theorem}--\ref{predbound} can be verified for Type~I censored Weibull data, so that the Weibull distribution can be used to illustrate all of the aforementioned methods for within-sample prediction (e.g., the ML estimators of the Weibull parameters $\widehat{\boldsymbol{\theta}}_n = (\widehat{\mu}_n,\widehat{\sigma}_n)$ have sampling distributions with normal limits and can be validly approximated by a parametric bootstrap, as described in \citet{scholz1996maximum}). \subsection{Simulation Setup} The factors for the simulation experiment are (i) $p_{f1} = F(t_c;\beta,\eta)$, the probability that a unit fails before the censoring time $t_{c}$; (ii) $\mathrm{E}(r)= np_{f1}$, the expected number of failures at the censoring time $t_c$, where $n$ is the total sample size (i.e., including both the censored and the uncensored observations); (iii) $d \equiv p_{f2}-p_{f1}$, the probability that a unit fails in the future time interval $(t_c,t_w]$, where $p_{f2} = F(t_w;\beta, \eta)$; and (iv) $\beta = 1/\sigma$, the Weibull shape parameter. Because $\eta=\exp(\mu)$ is a scale parameter, without loss of generality, $\eta=1$ was used in the simulation. A simulation with all combinations of the following factor levels was conducted: (i) $p_{f1} = 0.05, 0.1, 0.2$; (ii) $\mathrm{E}(r) = 5, 15, 25, 35, 45$; (iii) $d = 0.1, 0.2$; (iv) $\beta = 0.5, 0.8, 2, 4$. For each combination of these four factors, 90\% and 95\% upper prediction bounds and 90\% and 95\% lower prediction bounds were constructed. The procedure for the simulation is as follows: \begin{enumerate} \itemsep-0.5em \item Simulate $N$=5000 Type~I censored samples for each of the factor-level combinations of the four factors. \item Use ML to estimate the parameters $\beta,\eta$ in each censored sample. \item Compute prediction bounds using the different methods for each sample. \item Compute the conditional (i.e., binomial) coverage probability for each of the prediction bounds. \item Determine the unconditional coverage probability for each method by averaging the $N$=5000 conditional coverage probabilities. \end{enumerate} Within each of the $N$=5000 simulated Type~I censored samples, $B$=5000 bootstrap samples were generated by parametric bootstrap (i.e., as a random sample from the fitted Weibull distribution with Type~I censoring at $t_c$) and these samples were used for the calibration-bootstrap method and the two predictive-distribution-based methods. In the simulation, we excluded those samples having fewer than 2 failures to avoid estimability problems, so that all $N$=5000 original samples and all the $N\times B$=25{,}000{,}000 bootstrap samples in the simulation have at least 2 failures. The probability of a data sample with fewer than 2 failures for each factor-level combination is given in Table~\ref{table:droprate}. \begin{table}[ht!]
\centering \begin{tabular}{cccccc} \hline & {E}($r$)=5 & {E}($r$)=15 & {E}($r$)=25 & {E}($r$)=35 & {E}($r$)=45\\\hline $p_{f1}=0.05$ & 0.037 &0.000&0.000&0.000&0.000\\ $p_{f1}=0.1$ & 0.034 &0.000&0.000&0.000&0.000\\ $p_{f1}=0.2$& 0.027 &0.000&0.000&0.000&0.000\\\hline \end{tabular} \caption{Probability of an excluded sample (i.e., $r=0$ or 1 failures) for different factor-level combinations.} \label{table:droprate} \end{table} \subsection{Simulation Results} A small subset of the plots displaying the complete simulation results is given here, as the results are generally consistent across the different factor-level combinations. Figure~\ref{threeMethods} shows the coverage probabilities from the plug-in, calibration-bootstrap, direct-bootstrap, and GPQ-bootstrap methods when $\beta = 2$ and $d = 0.2$. The horizontal dashed line in each subplot represents the nominal confidence level. Plots for the other factor-level combinations are given in the online supplementary material. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{threeMethods.pdf} \caption{Coverage probabilities versus expected number of events for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), and plug-in (PL) methods when $d=p_{f2}-p_{f1}=0.2$ and $\beta = 2$.} \label{threeMethods} \end{figure} Some observations from the simulation results are: \begin{enumerate} \itemsep-0.5em \item The plug-in method fails to have asymptotically correct coverage probability. As $p_{f1}$ decreases, which entails less information or fewer events observed before the censoring time $t_c$, the coverage probability deviates more from the nominal level. \item The direct- and GPQ-bootstrap methods are close to each other in terms of coverage probabilities except when $\mathrm{E}(r)=5$. The calibration-bootstrap method differs considerably from the direct- and GPQ-bootstrap methods: it tends to be more conservative than the other bootstrap-based methods for lower prediction bounds, and less conservative for upper prediction bounds. \item For the lower bounds, the direct- and GPQ-bootstrap methods dominate the calibration-bootstrap method. For the upper bounds, the coverage probabilities of the former two bootstrap-based methods are slightly conservative but still close to the nominal level. The calibration-bootstrap method is better than the direct- and GPQ-bootstrap methods in just a few of these upper-bound cases. \item Compared with the calibration-bootstrap method, whose performance is highly related to the level of $p_{f1}$, the coverage probabilities of the direct- and GPQ-bootstrap methods are insensitive to the level of $p_{f1}$. As $p_{f1}$ decreases, the lower prediction bound using the calibration-bootstrap method exhibits over-coverage while the upper prediction bound exhibits under-coverage. This implies that under heavy censoring (small $p_{f1}$), extremely large sample sizes $n$ (or correspondingly large expected numbers of failures $\text{E}(r)=n p_{f1}$) are required for the calibration-bootstrap method to attain coverage probabilities close to the nominal confidence level. \end{enumerate} From these observations, we can see that the direct- and GPQ-bootstrap methods (i.e., the predictive-distribution-based methods) tend to dominate the calibration-bootstrap method in terms of the performance of the prediction bounds, even though all three methods are asymptotically valid.
This is because the predictive-distribution-based methods target the one source $p$ of parameter uncertainty in the conditional $\text{binomial}(n-r_n,p)$ distribution of the predictand $Y_n$ (i.e., as addressed by applying bootstrap versions $\widehat{p}^\ast$ or $\widehat{p}^{\ast\ast}$ to ``smooth'' the estimation uncertainty for $p$), while the number $n-r_n$ of Bernoulli trials used in these predictive distributions matches that of the predictand. Due to its definition, however, the calibration-bootstrap method involves bootstrap approximation steps (i.e., $r^*_n, \widehat{p}^*$) for both the number $r_n$ of failures and the binomial probability $p$. The calibration-bootstrap method essentially imposes an approximation $n-r^*_n$ for the known number $n-r_n$ of trials prescribing the predictand $Y_n$. As a consequence, coverages from the calibration-bootstrap method are generally less accurate than those from the predictive-distribution-based methods for within-sample prediction. \section{Application of the Methods} \label{sec:applications} \subsection{Examples} \noindent \textbf{Product-A Data}: The ML estimates of the Weibull shape and scale parameters are $\widehat\beta=1.518$ and $\widehat\eta=1152$, respectively, based on 80 failure times among 10,000 units observed before 48 months. Then, for the 9920 surviving units, the ML estimate of the probability that a unit will fail between 48 and 60 months of age is $ \widehat p_n = [F(60;\widehat\beta, \widehat\eta)-F(48;\widehat\beta, \widehat\eta)]/[1-F(48;\widehat\beta, \widehat\eta)]= 0.00323. $ Using the ML estimates of the Weibull parameters $(\widehat\beta, \widehat\eta)$, we simulate 10,000 bootstrap samples that are censored at 48 months and obtain ML estimates of $(\beta, \eta)$ from each bootstrap sample. Applying each interval method with these bootstrap estimates, Table~\ref{productAData} gives prediction bounds for the number of failures in the next 12 months. As these results indicate, even with a large number of failures, the plug-in intervals can be expected to be off; they are too narrow compared to the other bounds. \begin{table}[!ht] \centering \begin{tabular}{c c c c c c} \hline Confidence Level &Bound Type& Plug-in & Direct & GPQ & Calibration \\ [0.5ex] \hline 95\% & Lower &\multicolumn{1}{r}{23} & \multicolumn{1}{r}{20} & \multicolumn{1}{r}{20} & \multicolumn{1}{r}{20} \\ 90\% & Lower &\multicolumn{1}{r}{25} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{23} \\ 90\% & Upper &\multicolumn{1}{r}{39} & \multicolumn{1}{r}{43} & \multicolumn{1}{r}{43} & \multicolumn{1}{r}{43} \\ 95\% & Upper &\multicolumn{1}{r}{42} & \multicolumn{1}{r}{47} & \multicolumn{1}{r}{47} & \multicolumn{1}{r}{46} \\ \hline \end{tabular} \caption{Product-A Data: Prediction bounds for the number of failures in the next 12 months using different methods.} \label{productAData} \end{table} \noindent \textbf{Heat Exchanger Data}: In this example, there are no exact failure times in the data. The data here contain limited information: there were only 8 failures among the 20,000 exchanger tubes that were inspected (in censored data analysis, the informational content of the data is closely related to the number of failures), and these failure times are interval-censored (not exact).
The likelihood function under a Weibull model for the heat exchanger data is $$ L(\beta, \eta)=F(1; \beta, \eta)[F(2; \beta, \eta)-F(1; \beta, \eta)][F(3; \beta, \eta)-F(2; \beta, \eta)]^{6}[1-F(3; \beta, \eta)]^{19992}, $$ resulting in ML estimates $\widehat\beta=2.531$ and $\widehat\eta=66.058$. The conditional probability of a tube failing between the third and tenth year, given that the tube has not failed by the end of the third year, is then estimated as $\widehat p_n = [F(10;\widehat\beta, \widehat\eta)-F(3;\widehat\beta, \widehat\eta)]/[1-F(3;\widehat\beta, \widehat\eta)]= 0.00797$. \begin{figure}[ht!] \centering \includegraphics[width=0.9\textwidth]{calibrationQuantile.pdf} \caption{The quantile function of $\mathrm{pbinom}(Y^\dagger_n, n-r^\ast_n, \widehat p^\ast_n)$ used for the calibration-bootstrap method with the heat exchanger data.} \label{calibrationquantile} \end{figure} The ML estimates from 10,000 bootstrap samples (parametric bootstrap with censoring at 3 years) are used in the calibration-bootstrap and the two predictive-distribution-based methods. However, the calibration-bootstrap method exhibits numerical instabilities with these data due to the small number of failures. To illustrate, Figure~\ref{calibrationquantile} shows the approximate quantile function of $U^\ast=\mathrm{pbinom}(Y_n^\dagger, n-r_n^\ast, \widehat p_n^\ast)$ used in the calibration-bootstrap method, involving a $\text{binomial}(n-r^\ast_n,\widehat{p}_n)$ random variable $Y_n^\dagger$ evaluated in the cdf $\mathrm{pbinom}(\cdot, n-r_n^\ast, \widehat{p}_n^\ast)$, given the number $r^*_n$ of failures and the estimate $\widehat{p}_n^*$ from a bootstrap sample. This quantile function is also the calibration curve, where the x-axis gives the desired confidence level $1-\alpha$, while the y-axis gives the corresponding calibrated confidence level ($\alpha^\dagger_L$ or $1-\alpha^\dagger_U$) to be used for determining plug-in prediction bounds (i.e., quantiles from a $\text{binomial}(n-r_n=19992,\widehat{p}=0.00797)$ distribution). From Figure~\ref{calibrationquantile}, we can see that the $0.05$ and $0.1$ quantiles nearly equal $0$ while the $0.9$ and $0.95$ quantiles nearly equal 1. This creates complications in computing the prediction bounds; for example, there is numerical instability near the 100\% quantile of the $\text{binomial}(n-r_n=19992,\,\widehat{p}=0.00797)$ distribution. Consequently, 90\% and 95\% bounds from the calibration-bootstrap method are computationally not available (NA). \begin{table}[ht] \centering \begin{tabular}{c c c c c c} \hline Confidence Level& Bound Type & Plug-in & Direct & GPQ & Calibration \\ [0.5ex] \hline 95\% & Lower &\multicolumn{1}{r}{138} & \multicolumn{1}{r}{28} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{NA}\\ 90\% & Lower &\multicolumn{1}{r}{142} & \multicolumn{1}{r}{43} & \multicolumn{1}{r}{34} & \multicolumn{1}{r}{NA}\\ 90\% & Upper &\multicolumn{1}{r}{176} & \multicolumn{1}{r}{1627} & \multicolumn{1}{r}{888} & \multicolumn{1}{r}{NA}\\ 95\% & Upper &\multicolumn{1}{r}{180} & \multicolumn{1}{r}{4343} & \multicolumn{1}{r}{1890} & \multicolumn{1}{r}{NA}\\ \hline \end{tabular} \caption{Heat Exchanger Data: Prediction bounds for the number of failures in the next 7 years using different methods.} \label{heatExchangerData} \end{table} Table~\ref{heatExchangerData} instead provides prediction bounds from the plug-in and the direct- and GPQ-bootstrap methods. The plug-in prediction bounds differ substantially from those of the two bootstrap-based methods.
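As a side note, the ML estimates $\widehat\beta=2.531$ and $\widehat\eta=66.058$ reported above can be reproduced by directly maximizing the four-term likelihood displayed earlier; the following minimal R sketch (the function name and starting values are ours) does this numerically.
\begin{verbatim}
# ML fit for the interval-censored heat exchanger data by direct
# numerical maximization of the Weibull likelihood L(beta, eta)
negll <- function(par) {
  beta <- exp(par[1]); eta <- exp(par[2])   # keep parameters positive
  F <- function(t) pweibull(t, shape = beta, scale = eta)
  -(log(F(1)) + log(F(2) - F(1)) + 6 * log(F(3) - F(2)) +
      19992 * log(1 - F(3)))
}
fit <- optim(c(log(2), log(50)), negll)
exp(fit$par)    # approximately (beta, eta) = (2.531, 66.058)
\end{verbatim}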
Unlike in the previous example (Product-A data), the direct- and GPQ-bootstrap methods here also differ appreciably from each other, owing to the limited failure information in the heat exchanger data; we return to explore such differences in Section~\ref{compare:gpq:boot}. The upper bounds involve a large amount of extrapolation and may not be practically meaningful other than to warn that there is a huge amount of uncertainty in the 10-year predictions. \noindent \textbf{Bearing Cage Data}: In this example, staggered-entry data containing multiple cohorts are considered. While similar in spirit to the Product-A example, the predictand here has a Poisson-binomial distribution, which can be computed with the R package \textbf{poibin}; this is applied to construct prediction bounds using the methods described in Section~\ref{calibration_multiple_cohort_data}. Table~\ref{bearingcage} gives the resulting prediction bounds for the bearing cage dataset, based on 10,000 bootstrap samples. \begin{table}[ht] \centering \begin{tabular}{c c c c c c} \hline Confidence Level & Bound Type & Plug-in & Direct & GPQ & Calibration \\ [0.5ex] \hline 95\% & Lower &\multicolumn{1}{r}{2} & \multicolumn{1}{r}{1} & \multicolumn{1}{r}{1} & \multicolumn{1}{r}{1} \\ 90\% & Lower &\multicolumn{1}{r}{2} & \multicolumn{1}{r}{2} & \multicolumn{1}{r}{2} & \multicolumn{1}{r}{2} \\ 90\% & Upper &\multicolumn{1}{r}{8} & \multicolumn{1}{r}{10} & \multicolumn{1}{r}{13} & \multicolumn{1}{r}{10} \\ 95\% & Upper &\multicolumn{1}{r}{9} & \multicolumn{1}{r}{12} & \multicolumn{1}{r}{20} & \multicolumn{1}{r}{12} \\ \hline \end{tabular} \caption{Bearing Cage Data: Prediction bounds for the number of failures in the next 300 service hours using different methods.} \label{bearingcage} \end{table} \subsection{Comparing the Direct- and GPQ-Bootstrap Methods} \label{compare:gpq:boot} In the heat exchanger example, the prediction bounds obtained from the direct- and GPQ-bootstrap methods appear very different. This motivates us to investigate the cause of such differences in similar prediction applications involving limited information. A general simulation setting is first described for mimicking the heat exchanger data. The heat exchanger data have two important features: the number of events is small (i.e., 8) and so is the proportion of observed events (i.e., 0.0004). Hence, in the simulation, the expected number of events $\text{E}(r)$ is set to 5 while the proportion failing $p_{f1}$ is 0.001, with a Weibull shape parameter $\beta=2$ and scale parameter $\eta = 1$. Different levels of $d = p_{f2}-p_{f1}$ are used for the probability of events in the forecast window. The simulation results (available in the online supplementary material) reveal that, overall, the GPQ-bootstrap method has better coverage probability than the direct-bootstrap method in this simulation setting. For the upper prediction bound, the direct-bootstrap method is generally more conservative than the GPQ-bootstrap method in terms of coverage probability, indicating that upper prediction bounds from the direct-bootstrap method are larger than their GPQ counterparts. On the other hand, the lower bound based on the direct-bootstrap method generally tends to have under-coverage compared to the GPQ-bootstrap method, suggesting also larger lower bounds from the direct-bootstrap method relative to the GPQ-bootstrap method.
These patterns in the prediction bounds (i.e., larger direct-bootstrap bounds compared to those from the GPQ-bootstrap in settings with a limited number of events) are consistent with the prediction bounds found in the heat exchanger example. \begin{figure}[ht!] \centering \includegraphics[width=0.85\textwidth]{compare_gpq_boot.pdf} \caption{Representative distributions of $\widehat p^{\ast}$ and $\widehat p^{\ast\ast}$.} \label{fig:2} \end{figure} To further illustrate, Figure~\ref{fig:2} shows the bootstrap distributions of $\widehat{p}^*$ and $\widehat{p}^{**}$ from a single Monte Carlo sample that represents the typical behavior found in this simulation setting: values of $\widehat{p}^{**}$ used in the predictive distribution of the GPQ-bootstrap method tend to be smaller and more concentrated than the $\widehat{p}^*$ values used in the direct-bootstrap predictive distribution. Note that the direct- and GPQ-bootstrap predictive distributions are approximated by $G^{DB}_{Y_n}(y|\boldsymbol{D}_n)\approx(1/B)\sum_{b=1}^{B}\text{pbinom}(y, n-r_n, \widehat p^\ast_b)$ and $G^{GPQ}_{Y_n}(y|\boldsymbol{D}_n)\approx(1/B)\sum_{b=1}^{B}\text{pbinom}(y, n-r_n, \widehat p^{\ast\ast}_b)$, respectively, and that the direct- and GPQ-bootstrap prediction bounds correspond to quantiles from these predictive distributions. Consequently, because $\widehat{p}_{b}^{*}$ and $\widehat{p}_b^{**}$ are small (e.g., less than 0.25) while $\widehat p^\ast_b$ is generally larger than $\widehat p^{\ast\ast}_b$ in Figure~\ref{fig:2}, $G^{DB}_{Y_n}(y|\boldsymbol{D}_n)$ is generally smaller than $G^{GPQ}_{Y_n}(y|\boldsymbol{D}_n)$, implying that quantiles from $G^{DB}_{Y_n}(y|\boldsymbol{D}_n)$ can be expected to exceed those from $G^{GPQ}_{Y_n}(y|\boldsymbol{D}_n)$ in data cases with a limited number of events. However, asymptotically, both $\widehat p_n^\ast$ and $\widehat p_n^{\ast\ast}$ are similarly normally distributed and symmetric around $\widehat p_n$ (as shown in the online supplementary material), so that the direct- and GPQ-bootstrap prediction bounds may be expected to behave alike in data situations with a larger number of events and larger sample sizes, as seen in Figure~\ref{threeMethods} (and in the Product-A application). \section{Choice of a Distribution} \label{choice-of-dist} Extrapolation is usually required when predicting the number of future events based on an on-going time-to-event process. For example, it may be necessary to predict the number of returns in a three-year warranty period based on field data from the first year of operation of a product. An exception arises when life can be modeled in terms of use (as opposed to time in service) and there is much variability in use rates among units in the population. The high-use units will fail early and provide good information about the upper tail of the amount-of-use return-time distribution (e.g., \citet{hong2010}). When extrapolation is required, predictions can depend strongly on the choice of distribution. In most applications, especially with heavy censoring, there is little or no useful information in the data to help choose a distribution. In that case, it is best to choose a failure-time distribution based on knowledge of the failure mechanism and the related physics/chemistry of failure. In important applications, this would typically be done by consulting with experts who have such knowledge.
For example, the lognormal distribution could be justified for failure times that arise from the product of a large number of small, approximately independent positive random quantities. Examples include failure from crack initiation and growth due to cyclic stressing of metal components (e.g., in aircraft engines) and chemical degradation such as corrosion (e.g., in microelectronics); these are two common applications where the lognormal distribution is often used. \citet[][pages 36-37]{GnedenkoBelyayevSolovyev1969} provide mathematical justification for this physical/chemical motivation. \begin{figure}[t!] \begin{tabular}{cc} \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.05_d0.1}.pdf} & \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.05_d0.2}.pdf} \\ \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.1_d0.1}.pdf} & \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.1_d0.2}.pdf} \\ \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.2_d0.1}.pdf} & \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.2_d0.2}.pdf} \end{tabular} \caption{Distributional comparisons for $\beta=2$. The two vertical dotted lines on the left indicate the points in time where all three distributions have the same $0.01$ and $p_{f1}$ quantiles. The three vertical lines on the right indicate the times at $p_{f2}=p_{f1}+d$ for the three distributions.} \label{figure:beta.two} \end{figure} Based on extreme value theory, the Weibull distribution can be used to model the distribution of the minimum of a large number of approximately iid positive random variables from certain classes of distributions. For example, the Weibull distribution may provide a suitable model for the time to first failure of a large number of similar components in a system. Consider a chain with many nominally identical links and suppose that the chain is subjected to cyclic stresses over time. As suggested in the previous paragraph, the number of cycles to failure for each link could be described adequately with a lognormal distribution. The chain, however, fails when the first link fails. The limiting distribution of (properly standardized) minima of iid lognormal random variables is a type 1 smallest extreme value (or Gumbel) distribution. For all practical purposes, however, the Weibull distribution provides a better approximation. For further information on this result from the penultimate theory of extreme values, see \citet{Green1976}, \citet[Section 3.11]{Castillo1988}, and \citet{GomesHaan1999}. Similarly, if failures are driven by the maximum of a large number of approximately iid positive random variables, a Fr\'{e}chet distribution would be suggested. The reciprocal of a Weibull random variable has a Fr\'{e}chet distribution. Of course, choosing a distribution based on failure-mechanism knowledge is not always possible. The alternative is to do sensitivity analyses using different distributions. Figure~\ref{figure:beta.two} provides a comparison of the Weibull, lognormal, and Fr\'{e}chet cdfs, where the Weibull distribution was chosen with shape parameter $\beta=2$ and the factor-level combinations of $p_{f1}$ and $d$ used in the Section~\ref{simu:study} simulation. The scale parameter $\eta$ is determined by letting the 0.01 Weibull quantile be 1. The cdfs are plotted on lognormal probability scales, where the lognormal cdf is a straight line.
The particular parameters for the lognormal and Fr\'{e}chet distributions were chosen such that the distributions cross at the 0.01 and $p_{f1}$ quantiles, mimicking the range of the data where the agreement among the distributions will be good. Similar plots for $\beta=1$ and $\beta=4$ are provided in the online supplementary material. The Weibull distribution is always more pessimistic (conservative) than the lognormal, and the Fr\'{e}chet is always more optimistic than the lognormal. For example, if the true distribution is Weibull but a lognormal distribution is used to fit the data, the prediction intervals, regardless of the method, will underpredict the number of events. When in doubt, the Weibull distribution is often used because it is the conservative choice. \section{Concluding Remarks} \label{sec:conclusion} This paper studies the problem of predicting the future number of events based on censored time-to-event data (e.g., failure times). This type of prediction is known as within-sample prediction. A regular prediction problem is defined for which standard plug-in estimation commonly applies, and it is shown that within-sample prediction is not regular and that the plug-in method fails to produce asymptotically valid prediction bounds. The irregularity of within-sample prediction and the failure of the plug-in method motivated the study of the calibration method as an alternative approach for prediction bounds, though the previously established theory for calibration bounds does not apply to within-sample prediction. The calibration method is implemented via the bootstrap, called the calibration-bootstrap method here, and is proved to be asymptotically correct (i.e., to produce prediction bounds with asymptotically correct coverage). Then, turning to formulations of a predictive distribution, we study and validate two other methods to obtain prediction bounds, namely the direct-bootstrap and GPQ-bootstrap methods. All prediction methods considered can be applied to both single-cohort and multiple-cohort data. While theoretical results show that the calibration-bootstrap method and the two predictive-distribution-based methods are all asymptotically correct, the simulation study shows that the direct-bootstrap and GPQ-bootstrap methods outperform the calibration-bootstrap method in terms of coverage probability accuracy relative to a nominal coverage level. The two predictive-distribution-based methods are also easier to implement than the calibration-bootstrap method, and can be computationally more stable (e.g., the heat exchanger data example). Thus, we recommend the predictive-distribution-based methods, especially the direct-bootstrap method, for general applications involving within-sample prediction. In this paper, all of the units in the population were assumed to have the same time-to-event distribution. In many applications, however, units are exposed to different operating or environmental conditions, resulting in different time-to-event distributions. For example, during 1996--2000, the Firestone tires installed on Ford Explorer SUVs experienced unusually high rates of failure, where problems first arose in Saudi Arabia, Qatar, and Kuwait because of the high temperatures in those countries (see \citet{national2001engineering}). Having prediction intervals that use covariate information (like temperature and moisture) could be useful for manufacturers and regulators in making decisions about a possible product recall, for example.
Similarly, there can be seasonal effects in time-to-event processes and in within-sample predictions. The methods described in this paper can be extended to handle either constant covariates or time-varying covariates. Using calibration-bootstrap methods, \citet{hong2009} used constant covariates to predict power-transformer failures. Despite the complicated nature of their data (random right censoring, truncation, and combinations of categorical covariates with small counts in some cells), \citet{hong2009} were able to use the fractional random-weight method \citep[e.g.,][]{XuGotwaltHongKingMeeker2020} to generate bootstrap estimates. \citet{ShanHongMeeker2020} used time-varying covariates to account for seasonality in two different warranty prediction applications. As mentioned by one of the referees, there is a difficulty if there is seasonality and data from only part of one year are available; in such cases, it would be necessary to use past data on a similar process to provide information about the seasonality. Covariate information in reliability field data has not been common, but that is changing due to reductions in cost and advances in sensor, communications, and storage technology. In the future, much more covariate information on various system operating/environmental variables will be available to make better predictions, as described in \citet{MeekerHong2014}.

\section*{Acknowledgments}

We would like to thank Luis A. Escobar for helpful comments on this paper. We are also grateful to the editorial staff, including two reviewers, for helpful comments that improved the manuscript. Research was partially supported by NSF DMS-2015390.

\begingroup
\setlength{\bibsep}{12pt}
\linespread{1}\selectfont
\bibliographystyle{apalike}

\section{Introduction}
\label{m_exampls}

There are many applications where it is necessary to predict the number of future events from a population of units associated with an on-going time-to-event process. Such applications also require a prediction interval to quantify the statistical prediction uncertainty arising from the combination of process variability and parameter uncertainty. Some motivating applications are given below.

\noindent\textbf{Product-A Data}: This example is from \citet{elawqm1999}, where, during a particular month, $n$=10,000 units of Product-A were put into service. Over the next 48 months, 80 failures occurred and the failure times were recorded. A prediction interval on the number of failures among the remaining 9920 units during the next 12 months was requested by management.

\noindent\textbf{Heat Exchanger Tube Data}: This example is based on data described in \citet{nelson2000}. Nuclear power plants have steam generators that contain many stainless steel heat-exchanger tubes. Cracks initiate and grow in the tubes over time due to a stress-corrosion mechanism. Periodic inspections of the tubes are used to detect cracks. Consider a fleet of steam generators having a total of $n$=20,000 tubes. One crack was detected after the first year of operation, which was followed by another crack during the second year and six more cracks during the third year. The data are interval-censored because the exact crack-initiation times are unknown. A prediction interval was needed for the number of tubes that would crack from the end of the third year to the end of the tenth year.

\noindent\textbf{Bearing-Cage Data}: The bearing-cage failure-time data are from \citet{weibullhandbook} and are provided in the online supplementary material.
Groups of aircraft engines employing this bearing cage were put into service over time (staggered entry). At the data-freeze date, 6 bearing-cage failures had occurred while the remaining 1697 units, with various amounts of service time, were still in service (multiple right-censored data). To ensure that a sufficient number of spare parts would be available to repair the aircraft engines in a timely manner, management requested a prediction interval for the number of bearing cages that would fail in the next year, assuming 300 hours of service for each aircraft.

\medskip

The purpose of this paper is to show how to construct prediction intervals for the number of future events from an on-going time-to-event process, to investigate the properties of different prediction methods, and to give recommendations on which methods to use. This paper is organized as follows. Section~\ref{background} provides concepts and background for prediction inference. Section~\ref{single_cohort_within_sample_pred} describes the single-cohort within-sample prediction problem. Section~\ref{plugin_not_regular} establishes that within-sample prediction is irregular and demonstrates that the plug-in method fails to provide an asymptotically correct prediction interval. Section~\ref{calibration} describes the calibration method for prediction intervals and establishes its asymptotic correctness. Section~\ref{pred:method} presents two other prediction interval methods based on predictive distributions. The first is a general method using parametric bootstrap samples, while the second is inspired by generalized pivotal quantities and applies to the log-location-scale family of distributions. Section~\ref{sec:multiple-cohort} extends the single-cohort within-sample prediction to the multiple-cohort problem. Section~\ref{simu:study} compares the different prediction methods through simulation, while Section~\ref{sec:applications} applies the prediction methods to the motivating examples. Section~\ref{choice-of-dist} discusses the choice of distribution for the time-to-event process and addresses the issue of distribution misspecification. Section~\ref{sec:conclusion} gives recommendations and describes potential areas for future research.

\section{Background}
\label{background}

In a general prediction problem, denote the observable data by $\boldsymbol{D}_n$ and the future random variable by $Y_n\equiv Y$; while generic for now, this paper will later focus on within-sample prediction, where $Y$ is a count. The conditional cdf for $Y$ given $\boldsymbol{D}_n$ is denoted by $G_n(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})\equiv G(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})$, where $\boldsymbol{\theta}$ is a vector of parameters. The goal is to make inference for $Y$ through a prediction interval, a useful tool for quantifying uncertainty in prediction.
\subsection{Prediction Intervals}
\label{predinterval}

When the parameters in $\boldsymbol{\theta}$ are known, the one-sided upper $100(1-\alpha/2)\%$ prediction bound $\tilde{Y}_{1-\alpha/2}$ is defined as the $1-\alpha/2$ quantile of the conditional cdf for $Y$, which is
\begin{equation}
\tilde{Y}_{1-\alpha/2}=\inf\{y\in\mathbb{R}:G(y|\boldsymbol{D}_n;\boldsymbol{\theta})=\Pr(Y\leq y\vert \boldsymbol{D}_n, \boldsymbol{\theta})\geq1-\alpha/2\},
\label{upperbound::true}
\end{equation}
and the one-sided lower $100(1-\alpha/2)\%$ prediction bound may be defined as
\begin{equation}
\underaccent{\tilde}{Y}_{1-\alpha/2}=\sup\{y\in\mathbb{R}:\Pr(Y\geq y\vert \boldsymbol{D}_n, \boldsymbol{\theta})\geq1-\alpha/2\},
\label{lowerbound::true}
\end{equation}
where this modification of the usual $\alpha/2$ quantile of $Y$ ensures that $\Pr(Y\geq\underaccent{\tilde}{Y}_{1-\alpha/2}|\boldsymbol{D}_n, \boldsymbol{\theta})$ is at least $1-\alpha/2$ when $Y$ is a discrete random variable. We may obtain an equal-tail $100(1-\alpha)\%$ prediction interval (approximate when $Y$ is a discrete random variable) by combining these two prediction bounds. In most applications, equal-tail prediction intervals are preferred over unequal ones, even though it is sometimes possible to find a narrower prediction interval with unequal tail probabilities. This is because an equal-tail prediction interval decomposes naturally into practical one-sided upper and lower prediction bounds; such separate consideration of one-sided bounds is needed when the cost of being outside the prediction bound is much higher on one side than on the other.

When the parameters in $\boldsymbol{\theta}$ are unknown, an estimate of $\boldsymbol{\theta}$ from the observed data $\boldsymbol{D}_n$ is required. The plug-in method, also known as the naive or estimative method (cf.~Section~\ref{literature_review}), is to replace $\boldsymbol{\theta}$ with a consistent estimator $\widehat{\boldsymbol{\theta}}_n$ in the prediction bounds (\ref{upperbound::true}) and (\ref{lowerbound::true}). The $100(1-\alpha)\%$ plug-in upper prediction bound is then $\tilde{Y}_{1-\alpha}^{PL}=\inf\{y\in\mathbb{R}:G(y|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)\geq1-\alpha\}$, while the $100(1-\alpha)\%$ plug-in lower prediction bound is $\underaccent{\tilde}{Y}_{1-\alpha}^{PL}=\sup\{y\in\mathbb{R}:\Pr(Y\geq y\vert \boldsymbol{D}_n, \widehat{\boldsymbol{\theta}}_n)\geq1-\alpha\}$.

\subsection{Coverage Probability}
\label{coverageprob}

Besides the plug-in method, other methods for computing prediction bounds or intervals are available. Let $\mathrm{PI}(1-\alpha)$ generically denote a prediction interval (or bound) with a nominal coverage level of $100(1-\alpha)\%$, for which one would like the probability of $Y$ falling within the interval to be $1-\alpha$ (i.e., $\Pr[Y\in\mathrm{PI}(1-\alpha)]=1-\alpha$), or close to it. To be clear, there are two possible types of coverage probability: conditional coverage probability and unconditional (overall) coverage probability. The conditional coverage probability of a particular $\mathrm{PI}(1-\alpha)$ method is defined as
\begin{align*}
\mathrm{CP}[\mathrm{PI}(1-\alpha)| \boldsymbol{D}_n; \boldsymbol{\theta}]=\Pr[Y\in\mathrm{PI}(1-\alpha)| \boldsymbol{D}_n; \boldsymbol{\theta}],
\end{align*}
where $\Pr(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})$ denotes the conditional probability of $Y$ given the observable data $\boldsymbol{D}_n$.
The conditional coverage probability $\mathrm{CP}[\mathrm{PI}(1-\alpha)|\boldsymbol{D}_n; \boldsymbol{\theta}]$ is a random variable because it is a function of the data $\boldsymbol{D}_n$. The unconditional coverage probability of a prediction interval method is obtained by taking an expectation with respect to the data $\boldsymbol{D}_n$ and is defined as
\begin{align*}
\mathrm{CP}[\mathrm{PI}(1-\alpha); \boldsymbol{\theta}]=\boldsymbol{\mathrm{E}}\left\{\Pr[Y\in\mathrm{PI}(1-\alpha)| \boldsymbol{D}_n; \boldsymbol{\theta}]\right\}.
\end{align*}
The unconditional coverage probability is a fixed property of a prediction method and, as such, can be most readily studied and used to compare alternative prediction interval methods. We focus on unconditional coverage probability in this paper and use the term coverage probability to refer to the unconditional probability, unless stated otherwise. We say a prediction method is exact if $\mathrm{CP}[\mathrm{PI}(1-\alpha); \boldsymbol{\theta}]=1-\alpha$ holds. If $\mathrm{CP}[\mathrm{PI}(1-\alpha); \boldsymbol{\theta}]$ converges to $1-\alpha$ as the sample size $n$ increases, we say the corresponding prediction method is asymptotically correct. When $Y$ is a discrete random variable, however, exactness, and even asymptotic correctness, may not be attainable for a prediction interval method, due to the discreteness in the distribution of $Y$.

\subsection{Related Literature}
\label{literature_review}

Extensive research exists on methods for computing prediction intervals. While the plug-in method has been criticized for ignoring the uncertainty in $\widehat{\boldsymbol{\theta}}_n$, this method is widely viewed as being asymptotically correct (related to the ``regular'' predictions described in Section~\ref{regular_prediction}). For example, \citet{cox1975}, \citet{beran1990}, and \citet{hall1999} showed that the coverage probability of the plug-in method has an accuracy of $O(n^{-1})$ for a continuous predictand under certain conditions. In Section~\ref{plugin_not_regular} we show, however, that the plug-in method is not asymptotically correct in the context of within-sample prediction.

Section~\ref{calibration} presents a calibration method for within-sample prediction intervals. \citet{cox1975} originally proposed the calibration idea to improve on the plug-in method and also provided analytical forms for prediction intervals based on general asymptotic expansions. \citet{atwood1984} used a similar method. \citet{beran1990} employed the bootstrap in the calibration method, avoiding the complicated analytical expressions. \citet{elawqm1999} described similar methods for constructing prediction intervals for failure times and the number of future failures, based on censored life data.

This paper does not specifically address Bayesian prediction methods, but the classic idea of a Bayesian predictive distribution can be extended to non-Bayesian methods, and two such methods are considered in Section~\ref{pred:method}. Several authors have considered similar notions of a non-Bayesian predictive distribution (e.g., \citet{aitchison1975}, \citet{davison1986}, \citet{barncox1996}). \citet{lawless2005} demonstrated a relationship between predictive distributions and (approximate) pivotal-based prediction intervals, including the calibration method described in \citet{beran1990}. \citet{fonseca2012} further elaborated on the relationship between predictive distributions and the calibration method.
\citet{shen_liu_xie_2018} proposed a general framework to construct a predictive distribution by replacing the posterior distribution in the definition of a Bayesian predictive distribution with a confidence distribution.

\section{Single Cohort Within-Sample Prediction}
\label{single_cohort_within_sample_pred}

\subsection{Within-Sample Prediction and New Sample Prediction}

The term ``within-sample'' prediction has been used to distinguish it from the more widely known ``new sample'' prediction. In new-sample prediction, past data are used, for example, to compute a prediction interval for the lifetime of a single unit from a new and completely independent sample. For within-sample prediction, however, the sample has not changed; the future random variable that researchers wish to predict (i.e., a count) relates to the same sample that provided the original (censored) data.

\subsection{Single-Cohort Within-Sample Prediction and Plug-in Method}
\label{withinsample}

Let $({T}_1,...,{T}_n)$ be an unordered random sample from a parametric distribution $F(t;\boldsymbol{\theta})$ having support on the positive real line, with $\boldsymbol{\theta}\in\mathbb{R}^q$. Under Type~\Romannum{1} censoring at $t_c>0$, the available data may then be expressed by $D_i=(\delta_i, T_i^{obs}),i=1,...,n$, where $\delta_i=\mathrm{I}(T_i\leq t_c)$ is a variable indicating whether $T_i$ is observed before the censoring time $t_c$, so that the actual observed variables are given as $T_i^{obs}=T_i\delta_i+t_c(1-\delta_i)$. The observed number of events (uncensored units) in the sample will be denoted by $r_n=\sum_{i=1}^{n}\mathrm{I}(T_i\leq t_c)$. For a future time $t_w>t_c$, let $Y_n=\sum_{i=1}^{n}\mathrm{I}(T_i\in(t_c, t_w])$ denote the (future) number of values from $T_1,...,T_n$ that occur in the interval $(t_c, t_w]$. Given the observed data $\boldsymbol{D}_n=(D_1,...,D_n)$, the conditional distribution of $Y_n$ is then $\mathrm{binomial}(n-r_n, p)$, where $p$ is the conditional probability that $T_i\in(t_c, t_w]$ given that $T_i>t_c$. As a function of $\boldsymbol{\theta}$, we may define $p$ by
\begin{equation}
p\equiv\pi(\boldsymbol{\theta})=\frac{F(t_w;\boldsymbol{\theta})-F(t_c;\boldsymbol{\theta})}{1-F(t_c;\boldsymbol{\theta})}.
\label{piequa}
\end{equation}
The goal is to construct a prediction interval for $Y_n$ based on the observed data $\boldsymbol{D}_n=(D_1,...,D_n)$ when $\boldsymbol{\theta}$ is unknown. This is referred to as single-cohort within-sample prediction because all the units enter the system at the same time and are homogeneous, and both the data $\boldsymbol{D}_n$ and the predictand $Y_n$ are functions of the uncensored random sample $(T_1,...,T_n)$. Let $\widehat{\boldsymbol{\theta}}_n$ denote an estimator of $\boldsymbol{\theta}$ based on $\boldsymbol{D}_n$; then a plug-in estimator $\widehat{p}_n=\pi(\widehat{\boldsymbol{\theta}}_n)$ of the conditional probability $p$ follows from (\ref{piequa}).
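To make the plug-in computation concrete, the following minimal R sketch computes $\widehat{p}_n$ from (\ref{piequa}) and the associated naive binomial prediction bounds formalized below. It assumes a Weibull model fit to Type~\Romannum{1} censored data with the \texttt{survival} package; the function and variable names are ours, for illustration only, and the boundary adjustment in the formal definition of the lower bound below is omitted.
\begin{verbatim}
## Minimal sketch (illustrative names, Weibull model assumed):
## plug-in estimate of p and naive binomial prediction bounds.
library(survival)

plugin_prediction <- function(time, status, t_c, t_w, alpha = 0.05) {
  ## status = 1 if the event was observed before t_c, 0 if censored
  fit   <- survreg(Surv(time, status) ~ 1, dist = "weibull")
  mu    <- unname(coef(fit))    # location parameter on the log scale
  sigma <- fit$scale            # scale parameter on the log scale
  Fhat  <- function(t) pweibull(t, shape = 1 / sigma, scale = exp(mu))
  p_hat <- (Fhat(t_w) - Fhat(t_c)) / (1 - Fhat(t_c))  # plug-in p
  m     <- sum(status == 0)                           # n - r_n at risk
  c(lower = qbinom(alpha, m, p_hat),
    upper = qbinom(1 - alpha, m, p_hat))
}
\end{verbatim}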
Analogous to the bounds in Section~\ref{predinterval}, a $100(1-\alpha)\%$ plug-in lower prediction bound is defined as
\begin{align*}
\underaccent{\tilde}{Y}^{PL}_{n, 1-\alpha}&=\sup\{y\in\{0\}\cup\mathbb{Z}^{+}: \mathrm{pbinom}(y-1, n-r_n, \widehat{p}_{n})\leq\alpha\}
\\&=
\begin{cases}
\mathrm{qbinom}(\alpha, n-r_n, \widehat p_n), &\text{if } \mathrm{pbinom}(\mathrm{qbinom}(\alpha, n-r_n, \widehat p_n), n-r_n, \widehat p_n)>\alpha,\\
\mathrm{qbinom}(\alpha, n-r_n, \widehat p_n)+1, &\text{if } \mathrm{pbinom}(\mathrm{qbinom}(\alpha, n-r_n, \widehat p_n), n-r_n, \widehat p_n)=\alpha,
\end{cases}
\end{align*}
where $\mathrm{pbinom}$ and $\mathrm{qbinom}$ are, respectively, the binomial cdf and quantile function. Similarly, the $100(1-\alpha)\%$ plug-in upper prediction bound for $Y_n$ is defined as
\begin{align*}
\tilde{Y}^{PL}_{n, 1-\alpha}&=\inf\{y\in\{0\}\cup\mathbb{Z}^{+}: \mathrm{pbinom}(y, n-r_n, \widehat{p}_{n})\geq1-\alpha\}=\mathrm{qbinom}(1-\alpha, n-r_n, \widehat p_n).
\end{align*}
Section~\ref{coverageprob} mentioned that asymptotically correct coverage may not generally be possible for prediction intervals involving a discrete predictand. For the within-sample prediction problem here, however, prediction interval methods can be sensibly examined for asymptotic correctness, which we consider in the following section. This is because the discreteness in the (conditionally) binomial predictand $Y_n$ essentially disappears for large sample sizes $n$, due to normal approximations.

\section{The Irregularity of the Within-Sample Prediction}
\label{plugin_not_regular}

\subsection{A Regular Prediction Problem}
\label{regular_prediction}

Under the general prediction framework described in Section~\ref{background}, the conditional cdf $G_n(\cdot| \boldsymbol{D}_n;\boldsymbol{\theta})$ of a predictand $Y_n$ given the observed data $\boldsymbol{D}_n$ is often estimated by the plug-in method as $G_n(\cdot| \boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)$ (also known as a predictive distribution), where $\widehat{\boldsymbol{\theta}}_n$ is a consistent estimator of $\boldsymbol{\theta}$ based on $\boldsymbol{D}_n$. To frame much of the literature related to the plug-in method (Section~\ref{literature_review}), we may define the prediction problem most commonly associated with the plug-in method as ``regular'' according to the following definition.
\begin{definition}
In the notation of Section~\ref{background}, a prediction problem is called regular if
\begin{equation*}
\sup_{y\in\mathbb{R}}|G_n(y|\boldsymbol{D}_n; \boldsymbol{\theta})-G_n(y|\boldsymbol{D}_n; \widehat{\boldsymbol{\theta}}_n)|\xrightarrow{p}0
\end{equation*}
holds as $n\to\infty$ for any consistent estimator $\widehat{\boldsymbol{\theta}}_n$ of $\boldsymbol{\theta}$ (i.e., $\widehat{\boldsymbol{\theta}}_n\xrightarrow{p} \boldsymbol{\theta}$).
\label{def1}
\end{definition}
Unlike coverage probability (where exactness may again not be possible for discrete predictands), the above definition reflects the underlying sense in which the plug-in method for prediction intervals is often asymptotically valid for both discrete and continuous predictands. By the nature of many prediction problems (e.g., new sample prediction), the conditional form of the cdf $G_n$ may also not necessarily vary with $n$ (e.g., $G_n( \cdot |\boldsymbol{D}_n;\boldsymbol{\theta} )= G( \cdot; \boldsymbol{\theta})$).
Hence, in a regular prediction problem, the plug-in predictive distribution (estimated cdf) asymptotically captures the true conditional cdf of the predictand, so that differences are expected to vanish between quantiles of the true predictand $Y_n$ and the associated plug-in prediction bounds. Further, when the predictand has a continuous and asymptotically tight conditional distribution (with probability 1), such as when the conditional cdf $G_n(\cdot| \boldsymbol{D}_n; \boldsymbol{\theta}) = G(\cdot;\boldsymbol{\theta})$ of the predictand does not vary with $n$, the plug-in method will be asymptotically correct.

\subsection{Failure of the Plug-in Method}

This section shows that the within-sample prediction problem described in Section~\ref{single_cohort_within_sample_pred} is not regular and that the plug-in method is not asymptotically valid for within-sample prediction. To avoid redundancy, the presentation of results will focus on the plug-in upper prediction bound; the lower bound is analogous by Remark~1 below. In the context of within-sample prediction (cf.~Section~\ref{withinsample}), recall that the $100(1-\alpha)\%$ plug-in upper prediction bound for the future count $Y_n \equiv \sum_{i=1}^n \mathrm{I}(T_i \in (t_c,t_w])$ is defined as
\begin{align*}
\tilde{Y}^{PL}_{n, 1-\alpha}=\inf\{y\in\{0\}\cup\mathbb{Z}^{+}: \mathrm{pbinom}(y, n-r_n, \widehat{p}_{n})\geq1-\alpha\}.
\end{align*}
The following theorem shows that the coverage probability of $\tilde{Y}^{PL}_{n, 1-\alpha}$ does not converge to $1-\alpha$ as $n$ increases.
\begin{theorem}
Let $T_{1}, ..., T_{n}$ denote a random sample from a parametric distribution with cdf $F(\cdot; \boldsymbol{\theta}_{0})$ (at the true value of $\boldsymbol{\theta}=\boldsymbol{\theta}_{0}\in\mathbb{R}^q$), which is observed under Type~\Romannum{1} censoring at $t_c>0$. Suppose also that $F(t_c;\boldsymbol{\theta}_0) <1$, $p_0 = \pi(\boldsymbol{\theta}_0)\in(0, 1)$ in (\ref{piequa}), $F(t_c;\boldsymbol{\theta})$ is continuous at $\boldsymbol{\theta}_0$, and that the conditional probability (parametric function) $p\equiv\pi(\boldsymbol{\theta})$ is continuously differentiable in a neighborhood of $\boldsymbol{\theta}_0$ with non-zero gradient $\nabla_0\equiv\partial \pi(\boldsymbol{\theta})/\partial \boldsymbol{\theta}|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{0}}$. Based on the censored sample, suppose $\widehat{\boldsymbol{\theta}}_n$ is an estimator of $\boldsymbol{\theta}$ satisfying $\sqrt{n}(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta}_{0})\xrightarrow{d} \mathrm{MVN}(\boldsymbol{0}, \boldsymbol{V}_0)$, as $n\rightarrow\infty$, for a multivariate normal distribution with mean vector $\boldsymbol{0}$ and positive definite variance matrix $\boldsymbol{V}_{0}$.
Then,
\begin{enumerate}
\item The within-sample prediction of $Y_n = \sum_{i=1}^n \mathrm{I}(t_c < T_i \leq t_w)$ fails to be a regular prediction problem: denoting $G_n(y|\boldsymbol{D}_n,\boldsymbol{\theta}_0)=\mathrm{pbinom}(y,n-r_n,p_0)$ as the conditional cdf of $Y_n$ and $G_n(y|\boldsymbol{D}_n,\widehat{\boldsymbol{\theta}}_n)=\mathrm{pbinom}(y,n-r_n,\widehat{p}_n)$ as its plug-in estimator, then
\[
\sup_{y \in \mathbb{R}} \left| G_n(y|\boldsymbol{D}_n,\boldsymbol{\theta}_0) - G_n(y|\boldsymbol{D}_n,\widehat{\boldsymbol{\theta}}_n)\right| \xrightarrow{d} 2\Phi_{\mathrm{nor}}(\sqrt{v_1}\,|Z_1|/2)-1,
\]
where $Z_1$ is a standard normal variable with cdf $\Phi_{\mathrm{nor}}(z)=\int_{-\infty}^{z} 1/\sqrt{2 \pi} e^{-u^{2} / 2}d u$, $z\in\mathbb{R}$, and
$$
v_{1}\equiv\frac{[1-F(t_{c}; \boldsymbol{\theta}_{0})]}{p_{0}(1-p_{0})}\nabla_{0}^{t}\boldsymbol{V}_{0}\nabla_0\in(0, \infty).
$$
\item The plug-in upper prediction bound $\tilde{Y}^{PL}_{n, 1-\alpha}$ generally fails to have asymptotically correct coverage:
\begin{align*}
\lim_{n\rightarrow\infty}\Pr(Y_{n}\leq \tilde{Y}^{PL}_{n, 1-\alpha})=\Lambda_{1-\alpha}(v_1)\in(0,1) \quad \text{such that} \\
\sgn\left[\Lambda_{1-\alpha}(v_1)-(1-\alpha)\right]=
\begin{cases}
1&\quad\mbox{if $\alpha \in(1/2,1)$}\\
0&\quad\mbox{if $\alpha=1/2$}\\
-1&\quad\mbox{if $\alpha\in(0,1/2)$},
\end{cases}
\end{align*}
where $\sgn(\cdot)$ is the sign function and $\Lambda_{1-\alpha}(v_1) \equiv \int_{-\infty}^{\infty}\Phi_{\mathrm{nor}}\left[\Phi_{\mathrm{nor}}^{-1}(1-\alpha)+z \sqrt{v_{1}}\right]d \Phi_{\mathrm{nor}}(z)$. Furthermore, $\Lambda_{1-\alpha}(v_1) \in [1/2,1-\alpha)$ is a decreasing function of $v_1>0$ for a given $\alpha \in (0,1/2)$, while $\Lambda_{1-\alpha}(v_1) \in (1-\alpha,1/2]$ is increasing in $v_1>0$ for $\alpha \in (1/2,1)$, and $\lim_{v_1 \to \infty}\Lambda_{1-\alpha}(v_1)=1/2$ holds for any $\alpha\in(0,1)$.
\end{enumerate}
\label{first_theorem}
\end{theorem}
\noindent\textbf{Remark 1}. The lower plug-in bound $\underaccent{\tilde}{Y}_{n,1-\alpha}^{PL}$ behaves similarly with $\lim_{n\to \infty}\Pr(Y_n \geq\underaccent{\tilde}{Y}_{n, 1-\alpha}^{PL}) = \lim_{n\to\infty}\Pr(Y_n \leq \tilde{Y}_{n,1-\alpha}^{PL})$ in Theorem~\ref{first_theorem}.\\
\indent The proof of Theorem~\ref{first_theorem} is in the online supplementary material. This counter-intuitive result reveals that the plug-in method should not be used to construct prediction intervals in the within-sample prediction problem, even if the sample size is large. The first part of Theorem~\ref{first_theorem} shows that plug-in estimation fails to capture the distribution of the predictand $Y_n$ here, to the extent that the supremum difference between estimated and true distributions has a {\em random} limit, rather than converging to zero as in a regular prediction (cf.~Definition~\ref{def1}). As a consequence, the limiting coverage probability of the plug-in bound turns out to be ``off'' by an amount determined by the magnitude of $v_1>0$ in Theorem~\ref{first_theorem} (part 2). For increasing values of $v_1$, the coverage probability approaches 0.5, regardless of the nominal coverage level intended. An intuitive explanation for the failure of the plug-in method is that, although $\widehat{p}_{n}$ is a consistent estimator of $p$, the growing number of Bernoulli trials $n-r_n$ in $Y_{n}$ offsets the improvements that larger samples provide in the estimation of $p$ by $\widehat p_n$.
In other words, when standardizing the true $1-\alpha$ quantile, say $Y_{n,1-\alpha}$, of the (conditionally binomial) predictand $Y_n$, one obtains a standard normal quantile $[Y_{n,1-\alpha} -(n-r_n)p]/\sqrt{(n-r_n)p(1-p)}\approx \Phi_{\mathrm{nor}}^{-1}(1-\alpha)$ by normal approximation; however, the same standardization applied to the plug-in bound $\tilde{Y}_{n,1-\alpha}^{PL}$ gives $[\tilde{Y}_{n,1-\alpha}^{PL} -(n-r_n)p]/\sqrt{(n-r_n)p(1-p)}\approx \Phi_{\mathrm{nor}}^{-1}(1-\alpha) + \sqrt{n-r_n}\,(\widehat{p}_n-p)/\sqrt{p(1-p)}$, which differs by a substantial and random term $\sqrt{n-r_n}\,(\widehat{p}_n-p)/\sqrt{p(1-p)}$ (itself having a normal limit). Hence, validity of the plug-in method for within-sample prediction would require an estimator $\widehat p_n$ such that $\widehat p_n=p+o_p(n^{-1/2})$, which demands more than what is available from standard $\sqrt{n}$-consistency.

\section{Prediction Intervals Based on Calibration}
\label{calibration}

\subsection{Calibrating Plug-in Prediction Bounds}

\citet{cox1975} suggested an approximation for improving the plug-in method, which will be described next. Considering the general prediction problem (cf.~Section~\ref{predinterval}), suppose a future random variable $Y \equiv Y_n$ has a conditional cdf $G_n(\cdot|\boldsymbol{D}_n;\boldsymbol{\theta}) \equiv G(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})$ given the random sample $\boldsymbol{D}_n$, and $\widehat{\boldsymbol{\theta}}_n$ is a consistent estimator of $\boldsymbol{\theta}$ from $\boldsymbol{D}_n$. The coverage probability of the $100(1-\alpha)$\% plug-in upper prediction bound is denoted by $\Pr\left[G(Y|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)\leq 1-\alpha\right]=1-\alpha^\prime$, where $\alpha^\prime$ is generally different from $\alpha$ due to the estimation uncertainty in $\widehat{\boldsymbol{\theta}}_n$. The basic idea of the calibration method is to find a level $\alpha^\dagger$ so that the coverage probability $\Pr\left[G(Y|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)\leq1-\alpha^\dagger\right]$ is equal to (or closer to) $1-\alpha$. The resulting $100(1-\alpha^\dagger)\%$ upper plug-in prediction bound $\tilde{Y}_{n,1-\alpha^\dag}^{PL}$ is called the $100(1-\alpha)\%$ upper calibrated prediction bound. However, determination of $\alpha^\dagger$ relies on both the distribution of $Y$ and the sampling distribution of $\widehat{\boldsymbol{\theta}}_n$, each of which depends on the unknown parameter $\boldsymbol{\theta}$. So instead, $\alpha^\dagger$ is obtained by solving the equation $\Pr{}_{\!\!\ast}\left[G(Y^\ast|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}^{\ast}_n)\leq1-\alpha^\dagger\right]=1-\alpha$, where $\Pr{}_{\!\!\ast}$ denotes bootstrap probability induced by $Y^\ast\sim G(\cdot|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)$ and by $\widehat{\boldsymbol{\theta}}^{\ast}_n$ as a bootstrap version of $\widehat{\boldsymbol{\theta}}_n$; for example, $\widehat{\boldsymbol{\theta}}^{\ast}_n$ may be based on a bootstrap sample $\boldsymbol{D}_n^*$ found by a parametric bootstrap applied using $\widehat{\boldsymbol{\theta}}_n$ in the role of the unknown parameter vector $\boldsymbol{\theta}$. \citet{beran1990} showed that, under certain conditions, the calibrated upper prediction bound improves upon the plug-in method: instead of a coverage error of $O(n^{-1})$, it satisfies $\Pr\left[Y\leq G^{-1}(1-\alpha^\dagger|\boldsymbol{D}_n;\widehat{\boldsymbol{\theta}}_n)\right]=1-\alpha+O(n^{-2})$.
However, such results for the validity of the calibration method cannot be applied directly to within-sample prediction because the conditions in \citet{beran1990} entail that the prediction problem be regular (cf.~Section~\ref{regular_prediction}), which is not true for the within-sample prediction problem (Theorem~\ref{first_theorem}). Consequently, the asymptotic correctness of the calibration method needs to be determined for within-sample prediction, as considered next.

\subsection{The Calibration-Bootstrap Method for the Within-Sample Prediction}

The general method in \citet{beran1990} is modified to construct a calibrated prediction interval for within-sample prediction; the result is called the calibration-bootstrap method in the rest of this paper. For a bootstrap sample $\boldsymbol{D}^\ast_n$ with $r^\ast_n$ observed events (e.g., from a parametric bootstrap using $\widehat{\boldsymbol{\theta}}_n$), we define a set of random variables $\left(Y_n^\dagger, n-r_{n}^{\ast}, \widehat{p}_{n}^{\ast}\right)$, where $\widehat{p}_{n}^{\ast}=\pi( \widehat{\boldsymbol{\theta}}^{\ast}_n)$ is the bootstrap version of $\widehat{p}_n=\pi(\widehat{\boldsymbol{\theta}}_n)$ and $Y_n^\dagger\sim \mathrm{binomial}(n-r_{n}^{\ast}, \widehat{p}_n)$, conditional on $r_n^\ast$. For the $100(1-\alpha)\%$ lower prediction bound, the calibrated confidence level is
$$
\alpha^{\dagger}_{L}=\sup\{u\in[0, 1]:\Pr{}_{\!\!\ast}\left[\mathrm{pbinom}(Y^\dagger_n, n-r_n^\ast,\widehat p_n^\ast)\leq u\right]\leq\alpha\},
$$
where $\Pr{}_{\!\!\ast}$ is the bootstrap probability induced by $\boldsymbol{D}^\ast_n$, and then the calibrated $100(1-\alpha)\%$ lower prediction bound is given by $\underaccent{\tilde}{Y}_{n,1-\alpha}^C= \underaccent{\tilde}{Y}_{n,1-\alpha^\dagger_L}^{PL}$. For the $100(1-\alpha)\%$ upper prediction bound, the calibrated confidence level is
$$
1-\alpha^\dagger_U = \inf\{u\in[0, 1] :\Pr{}_{\!\!\ast}\!\left[\mathrm{pbinom}(Y_n^\dagger, n-r_{n}^{\ast}, \widehat{p}_{n}^{\ast})\leq u\right]\geq 1-\alpha\},
$$
so that the calibrated $100(1-\alpha)\%$ upper prediction bound is $\tilde{Y}_{n,1-\alpha}^C=\tilde{Y}_{n,1-\alpha^\dagger_U}^{PL}$. Here $\underaccent{\tilde}{Y}_{n,1-\alpha}^{PL}$ and $\tilde{Y}_{n,1-\alpha}^{PL}$ represent the lower and upper plug-in prediction bounds, respectively, as defined in Section~\ref{withinsample}. The calibration-bootstrap method involves approximating the distribution of $U=\mathrm{pbinom}(Y_{n}, n-r_n, \widehat{p}_{n})$ with the bootstrap distribution of $U^\ast=\mathrm{pbinom}(Y_{n}^\dagger, n-r^\ast_n, \widehat{p}_{n}^\ast)$; the bootstrap distribution of $U^\ast$ is then used to calibrate the plug-in method. The procedure for using the calibration-bootstrap method to construct a prediction interval is described below:
\begin{enumerate}
\itemsep-0.5em
\item Compute the maximum likelihood (ML) estimate $\widehat{\boldsymbol{\theta}}_n$ using the data $\boldsymbol{D}_n$ and the ML estimate $\widehat{p}_n=\pi(\widehat{\boldsymbol{\theta}}_n)$.
\item Generate a bootstrap sample $\boldsymbol{D}_n^\ast$, whose number of events is denoted by $r_n^\ast$.
\item Compute $\widehat{\boldsymbol{\theta}}_n^\ast$ and $\widehat{p}_n^\ast=\pi(\widehat{\boldsymbol{\theta}}_n^\ast)$ using the bootstrap sample $\boldsymbol{D}_n^\ast$.
\item Generate $y^\ast$ from the distribution $\mathrm{binomial}(n-r^\ast_n, \widehat{p}_n)$ and compute $u^\ast=\mathrm{pbinom}(y^\ast, n-r_n^\ast, \widehat{p}_n^\ast)$.
\item Repeat steps 2--4 $B$ times to get $B$ realizations of $u^\ast$, denoted $\{u_1^\ast,\dots,u_B^\ast\}$.
\item Find the $\alpha$ and $1-\alpha$ quantiles of $\{u_1^\ast,\dots,u_B^\ast\}$, and denote these by $u_\alpha$ and $u_{1-\alpha}$, respectively. The $100(1-\alpha)\%$ calibrated lower and upper prediction bounds are $\underaccent{\tilde}{Y}_{n,1-\alpha}^C=\underaccent{\tilde}{Y}_{n,1-u_{\alpha}}^{PL}$ and $\tilde{Y}^C_{n,1-\alpha}=\tilde{Y}_{n,u_{1-\alpha}}^{PL}$.
\end{enumerate}
The pseudo-code for this algorithm is in the online supplementary material.

Next, the calibration-bootstrap method is shown to be asymptotically correct. This requires a mild assumption on the bootstrap involved, namely that the bootstrap estimators $\widehat{\boldsymbol{\theta}}_n^\ast$ provide valid approximations for the sampling distribution of $\sqrt{n}(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta})$ in large samples. More formally, let $\mathcal{L}_n^* \equiv \mathcal{L}_n^*(\boldsymbol{D}_n)$ denote the probability law of the bootstrap quantity $\sqrt{n}(\widehat{\boldsymbol{\theta}}_n^\ast-\widehat{\boldsymbol{\theta}}_n)$ (conditional on the data $\boldsymbol{D}_n$) and let $\mathcal{L}_n$ denote the probability law of $\sqrt{n}(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta})$. Let $\rho(\mathcal{L}_n, \mathcal{L}_n^\ast)$ denote the distance between these distributions under any metric $\rho(\cdot,\cdot)$ that metrizes the topology of weak convergence (e.g., the Prokhorov metric). Also, in the bootstrap re-creation, the probability $\Pr{}_{\!\!\ast}(T_1^* \leq t_c)$ that a bootstrap observation $T_1^\ast$ is observed before the censoring time $t_c$ should be a consistent estimator of $F(t_c;\boldsymbol{\theta})$ (e.g., $\Pr{}_{\!\!\ast}(T_1^* \leq t_c) = F(t_c;\widehat{\boldsymbol{\theta}}_n)$ would hold as a natural estimator under a parametric bootstrap).
\begin{theorem}
Under the conditions of Theorem~\ref{first_theorem}, suppose that $\rho(\mathcal{L}_n^*, \mathcal{L}_n) \stackrel{p}{\rightarrow} 0$ and $\Pr{}_{\!\!\ast}(T_1^* \leq t_c) \stackrel{p}{\rightarrow} F(t_c ; \boldsymbol{\theta}_0)$ as $n\to \infty$. Then, the $100(1-\alpha)\%$ calibrated upper and lower prediction bounds, respectively $\tilde{Y}^{C}_{n, 1-\alpha}$ and $\underaccent{\tilde}{Y}^{C}_{n,1-\alpha}$, have asymptotically correct coverage, that is,
\begin{equation*}
\lim_{n\rightarrow\infty}\Pr(Y_{n}\leq\tilde{Y}^{C}_{n, 1-\alpha}) = 1-\alpha=\lim_{n\rightarrow\infty}\Pr(Y_{n}\geq\underaccent{\tilde}{Y}^{C}_{n,1-\alpha}).
\end{equation*}
\label{theocali}
\end{theorem}
The proof is in the online supplementary material. Theorem~\ref{theocali} and its extension in Section~\ref{sec:multiple-cohort} guarantee, for example, that the calibration prediction method employed in \citet{elawqm1999}, \citet{hong2009}, \citet{hong2010}, and \citet{hong2013} to construct prediction intervals for the cumulative number of events is asymptotically correct.

\section{Prediction Intervals Based on Predictive Distributions}
\label{pred:method}

\subsection{Predictive Distributions}

Under the general prediction setting in Section~\ref{background}, recall that the predictive distribution under the plug-in method, given by $G(\cdot|\boldsymbol{D}_n, \widehat{\boldsymbol{\theta}}_n)$, provides an estimator of the conditional cdf $G(\cdot|\boldsymbol{D}_n; \boldsymbol{\theta})$ of the predictand $Y$. Quantiles of this predictive distribution can be associated with prediction bounds for $Y$.
Generally speaking, any method that leads to a prediction bound for $Y$ can be translated to a predictive distribution by defining the $100(1-\alpha)\%$ upper prediction bound as the $1-\alpha$ quantile of the predictive distribution (and vice versa). In this section, the strategy is to construct predictive distributions that lead to prediction bound (or interval) methods having asymptotically correct coverage for within-sample prediction. For this purpose, it is helpful to consider a Bayesian predictive distribution, defined by
\begin{equation}
G_{B}(y |\boldsymbol{D}_n)=\int G(y |\boldsymbol{D}_n; \boldsymbol{\theta}) \gamma(\boldsymbol{\theta} |\boldsymbol{D}_n) d \boldsymbol{\theta},
\label{bayespred}
\end{equation}
where $\gamma(\boldsymbol{\theta} |\boldsymbol{D}_n)$ is a joint posterior distribution for $\boldsymbol{\theta}$. The $1-\alpha$ quantile of the Bayesian predictive distribution provides the $100(1-\alpha)\%$ upper Bayesian prediction bound. While this paper does not pursue the Bayesian method, the idea of the Bayesian predictive distribution can nevertheless be used by replacing the posterior $\gamma(\boldsymbol{\theta} |\boldsymbol{D}_n)$ in (\ref{bayespred}) with an alternative distribution over parameters to similarly define non-Bayesian predictive distributions. \citet{harris1989} replaced the posterior distribution in (\ref{bayespred}) with the bootstrap distribution of the parameters to construct a predictive distribution, while \citet{wang2012} replaced the posterior distribution with a fiducial distribution. \citet{shen_liu_xie_2018} proposed a framework for predictive inference by replacing the posterior distribution in (\ref{bayespred}) with a confidence distribution (CD) and provided theoretical results for this CD-based predictive distribution for the case of a scalar parameter. A CD is a probability distribution that can quantify the uncertainty of an unknown parameter; both the bootstrap distribution in \citet{harris1989} and the fiducial distribution in \citet{wang2012} can be viewed as CDs (see \citet{xie_singh_2013} for a review of these ideas). To summarize, a predictive distribution can be constructed by using a data-based distribution on the parameter space to replace the posterior distribution in (\ref{bayespred}). Following this idea, we aim to use draws from a joint probability distribution for the parameters such that the resulting predictive distribution can be used to construct asymptotically correct prediction bounds and intervals for within-sample prediction. In particular, we propose two ways of constructing predictive distributions, extending the framework proposed by \citet{shen_liu_xie_2018} to the within-sample prediction case. In Section~\ref{bootpred}, we describe a prediction method that is based on the bootstrap distribution of the parameters; it is called the direct-bootstrap method in this paper. In Section~\ref{gpqpred}, we describe another method that works specifically with the (log-)location-scale family of distributions. This method is inspired by generalized pivotal quantities (GPQs), involves generating bootstrap samples, and is called the GPQ-bootstrap method.
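Both constructions share the same computational template: replace the posterior in (\ref{bayespred}) with a collection of parameter draws and average the plug-in conditional cdf over those draws. A minimal generic R sketch of this template (the function and argument names are ours, for illustration only, and do not come from the cited references) is:
\begin{verbatim}
## Generic template: average the plug-in conditional cdf over a
## list of parameter draws (a data-based substitute for the
## posterior in the Bayesian predictive distribution above).
predictive_cdf <- function(y, G, theta_draws) {
  ## G(y, theta) evaluates the plug-in conditional cdf at y
  mean(sapply(theta_draws, function(theta) G(y, theta)))
}
\end{verbatim}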
\subsection{The Direct-Bootstrap Method}
\label{bootpred}

For within-sample prediction, recall that the number $Y_n$ of events between the censoring time $t_c$ and a future time $t_w>t_c$, given the Type~\Romannum{1} censored data $\boldsymbol{D}_n$, is $\mathrm{binomial}(n-r_n, p)$, where $r_n$ is the number of events observed in $\boldsymbol{D}_n$ and $p$ is the conditional probability in (\ref{piequa}). The direct-bootstrap method uses the distribution of a bootstrap version $\widehat{p}_{n}^{*}=\pi(\widehat{\boldsymbol{\theta}}_n^\ast)$ of $\widehat{p}_{n}=\pi(\widehat{\boldsymbol{\theta}}_n)$, which is induced by the distribution of estimates $\widehat{\boldsymbol{\theta}}_n^\ast$ from a bootstrap sample $\boldsymbol{D}_n^\ast$, to construct a predictive distribution. Letting $\Pr_{\ast}$ denote bootstrap probability (probability induced by a bootstrap sample $\boldsymbol{D}^{*}_n$), the predictive distribution constructed using the direct-bootstrap method is
\begin{equation}
G_{Y_{n}}^{DB}(y|\boldsymbol{D}_n)= \int\mathrm{pbinom}(y, n-r_n,\widehat{p}_n^\ast)\Pr{}_{\!\!*}\left(d \widehat{p}_{n}^{*}\right) \approx\frac{1}{B} \sum_{b=1}^{B}\mathrm{pbinom}(y, n-r_n, \widehat{p}_b^\ast),
\label{bootpredformula}
\end{equation}
where $\widehat{p}_{1}^{*}, ...,\widehat{p}_{B}^{*}$ are realized bootstrap versions of $\widehat{p}_{n}$ from $B$ independently generated bootstrap samples $\boldsymbol{D}_n^{*(1)},\ldots,\boldsymbol{D}_n^{*(B)}$. The $100(1-\alpha)\%$ lower and upper prediction bounds using the direct-bootstrap method are then
\begin{equation}
\begin{split}
\underaccent{\tilde}{Y}_{n, 1-\alpha}^{{DB}}&=\sup \left\{y \in \{0\}\cup\mathbb{Z}^{+}:G_{Y_{n}}^{DB}(y-1 | \boldsymbol{D}_n)\leq \alpha\right\},\\
\tilde{Y}_{n, 1-\alpha}^{{DB}}&=\inf \left\{y \in \{0\}\cup\mathbb{Z}^{+} :G_{Y_{n}}^{DB}(y | \boldsymbol{D}_n)\geq 1-\alpha\right\}.
\end{split}
\label{direct-bound}
\end{equation}

\subsection{The GPQ-Bootstrap Method}
\label{gpqpred}

This section focuses on the log-location-scale distribution family and develops another method to construct a predictive distribution through approximate GPQs. Suppose $(T_1,..., T_n)$ is an iid random sample from a log-location-scale distribution
\begin{equation}
F(t;\mu, \sigma)=\Phi\left[\frac{\log(t)-\mu}{\sigma}\right],
\label{log-location-scale}
\end{equation}
where $\Phi(\cdot)$ is a known cdf that is free of parameters. For example, if $\Phi(\cdot)$ is the standard normal cdf $\Phi_{\mathrm{nor}}(\cdot)$, then $T_{1}$ has the lognormal distribution. \citet{hannig2006} described methods for constructing GPQs and outlined the relationship between GPQs and fiducial inference. Applying these ideas, GPQs can be defined for the parameters $(\mu,\sigma)$ in the log-location-scale model as follows.
If $\mathbb{S}$ is a complete or Type~\Romannum{2} censored independent sample from a log-location-scale distribution, a set of GPQs for $(\mu, \sigma)$ under $\mathbb{S}$ is given by
\begin{equation}
\mu_n^{\ast\ast}=\widehat{\mu}_{n}+\left(\frac{\mu-\widehat{\mu}^{\mathbb{S}^{*}}_{n}}{\widehat{\sigma}^{\mathbb{S}^{*}}_{n}}\right) \widehat{\sigma}_{n} \quad\text{and}\quad \sigma^{\ast\ast}_n=\left(\frac{\sigma}{\widehat{\sigma}_{n}^{\mathbb{S}^{*}}}\right) \widehat{\sigma}_{n},
\label{gpq}
\end{equation}
where $\mathbb{S}^{*}$ denotes an independent copy of the sample $\mathbb{S}$, and $(\widehat{\mu}_{n}, \widehat{\sigma}_{n})$ and $(\widehat{\mu}^{\mathbb{S}^{*}}_{n}, \widehat{\sigma}^{\mathbb{S}^{*}}_{n})$ denote the ML estimators of $(\mu, \sigma)$ computed from $\mathbb{S}$ and $\mathbb{S}^{*}$, respectively. These GPQs induce a distribution over the parameter space $(\mu,\sigma)$ based on the data estimates $(\widehat{\mu}_n,\widehat{\sigma}_n)$ and, because $[(\mu-\widehat{\mu}_n)/\sigma,\widehat{\sigma}_n/\sigma]$ are pivotal quantities based on a complete or Type~\Romannum{2} censored sample $T_1,\dots,T_n$ from the log-location-scale family, the distribution of $[(\mu-\widehat{\mu}_n^{\mathbb{S}^{*}})/\widehat{\sigma}_n^{\mathbb{S}^{*}}, \sigma/\widehat{\sigma}_n^{\mathbb{S}^{*}}]$ in (\ref{gpq}) can be directly approximated by simulation. GPQs can also, in some applications, be used to construct confidence intervals when an exact pivot is unavailable.

Notice that, while the quantities in (\ref{gpq}) are GPQs for the log-location-scale family based on complete or Type~\Romannum{2} censored data, these are no longer GPQs with Type~\Romannum{1} censored data, where exact GPQs technically fail to exist. This is because the distribution of $\left[(\mu-\widehat{\mu}_{n})/\widehat{\sigma}_{n}, \sigma/\widehat{\sigma}_{n}\right]$ depends on the unknown event probability $F(t_c;\mu,\sigma)$ before the censoring time $t_c$ under Type~\Romannum{1} censoring, which applies also to $\left[(\mu-\widehat{\mu}_{n}^{\mathbb{S}^{*}})/\widehat{\sigma}_{n}^{\mathbb{S}^{*}}, \sigma/\widehat{\sigma}_{n}^{\mathbb{S}^{*}}\right]$. However, the formula in (\ref{gpq}) can be used to provide a joint approximate GPQ distribution under Type~\Romannum{1} censoring. Letting $\widehat{\boldsymbol{\theta}}_n^\ast = \left(\widehat{\mu}_{n}^{*}, \widehat{\sigma}_{n}^{*}\right)$ denote a bootstrap version of $\boldsymbol{\widehat{\theta}}_{n} = \left(\widehat{\mu}_{n}, \widehat{\sigma}_{n}\right)$, (\ref{gpq}) is extended to define a joint approximate GPQ distribution as the bootstrap distribution of $\widehat{\boldsymbol{\theta}}_n^{\ast\ast} = \left(\widehat{\mu}_{n}^{**}, \widehat{\sigma}_{n}^{**}\right)$, where
\begin{equation}
\widehat{\mu}_{n}^{**} =\widehat{\mu}_{n}+\left(\frac{\widehat{\mu}_{n}-\widehat{\mu}^{*}_{n}}{\widehat{\sigma}^{*}_{n}}\right) \widehat{\sigma}_{n}\quad\text{and}\quad\widehat{\sigma}^{**}_{n} =\left(\frac{\widehat{\sigma}_{n}}{\widehat{\sigma}_{n}^{*}}\right) \widehat{\sigma}_{n}.
\label{gpq2}
\end{equation}
The above definition of $\widehat{\boldsymbol{\theta}}_n^{**}$ also follows by using the bootstrap distribution of $\left[(\widehat{\mu}_{n}-\widehat{\mu}_{n}^{*})/\widehat{\sigma}_{n}^{*}, \widehat{\sigma}_{n}/\widehat{\sigma}_{n}^{*}\right]$ to approximate the sampling distribution of $\left[(\mu-\widehat{\mu}_{n})/\widehat{\sigma}_{n}, \sigma/\widehat{\sigma}_{n}\right]$ and linearly solving for $(\mu,\sigma)$.
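The transformation in (\ref{gpq2}) amounts to a few lines of vectorized R, shown in the sketch below; the function and argument names are ours (assumptions for illustration), with the ML estimates from the data and the $B$ bootstrap ML estimates supplied as inputs.
\begin{verbatim}
## Sketch of the approximate GPQ transformation: mu_hat and
## sigma_hat are the ML estimates from the data; mu_star and
## sigma_star are vectors of B bootstrap ML estimates.
gpq_draws <- function(mu_hat, sigma_hat, mu_star, sigma_star) {
  mu_gpq    <- mu_hat + ((mu_hat - mu_star) / sigma_star) * sigma_hat
  sigma_gpq <- (sigma_hat / sigma_star) * sigma_hat
  data.frame(mu = mu_gpq, sigma = sigma_gpq)  # B approximate GPQ draws
}
\end{verbatim}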
Then, using $\widehat{\boldsymbol{\theta}}_n^{\ast\ast}=(\widehat{\mu}_n^{\ast\ast},\widehat{\sigma}_n^{\ast\ast})$ instead of $\widehat{\boldsymbol{\theta}}_n^\ast=(\widehat{\mu}_n^{*},\widehat{\sigma}_n^{*})$, a predictive distribution can be defined by the same construction as in (\ref{bootpredformula}). Namely, by defining a random variable $\widehat{p}^{**}_{n}\equiv \pi(\widehat{\boldsymbol{\theta}}_n^{\ast\ast})$ from (\ref{piequa}) with a bootstrap distribution induced by $\widehat{\boldsymbol{\theta}}_n^{\ast\ast}=(\widehat{\mu}_{n}^{**}, \widehat{\sigma}_{n}^{**})$, the predictive distribution for $Y_n$ using the GPQ-bootstrap method is given by
\begin{equation*}
G_{Y_{n}}^{GPQ}(y | \boldsymbol{D}_n)=\int\mathrm{pbinom}(y, n-r_n, \widehat{p}_n^{**}) \Pr{}_{\!\!*}\left(d \widehat{p}_{n}^{**}\right)\approx\frac{1}{B}\sum_{b=1}^{B} \mathrm{pbinom}(y, n-r_n, \widehat p^{\ast\ast}_b),
\end{equation*}
where $\widehat p_1^{\ast\ast},\dots, \widehat p_B^{\ast\ast}$ are computed from realized bootstrap samples. The $100(1-\alpha)\%$ lower and upper prediction bounds using the GPQ-bootstrap method can be obtained by replacing the predictive distribution $G_{Y_n}^{DB}(\cdot|\cdot)$ with $G_{Y_n}^{GPQ}(\cdot|\cdot)$ in (\ref{direct-bound}).

\subsection{Coverage Probability of the Proposed Methods}

This section shows that both the direct-bootstrap method (Section~\ref{bootpred}) and the GPQ-bootstrap method (Section~\ref{gpqpred}) produce asymptotically correct prediction bounds/intervals for the future count $Y_n$. Hence, these two methods yield asymptotically valid inference for within-sample prediction of $Y_n$, as does the calibration-bootstrap method (Theorem~\ref{theocali}, Section~\ref{calibration}), unlike the standard plug-in method (Theorem~\ref{first_theorem}, Section~\ref{plugin_not_regular}).
\begin{theorem}
Under the same conditions as Theorem~\ref{theocali},
\begin{enumerate}
\item The $100(1-\alpha)\%$ upper and lower prediction bounds using the direct-bootstrap method, respectively $\tilde{Y}^{DB}_{n, 1-\alpha}$ and $\underaccent{\tilde}{Y}^{DB}_{n, 1-\alpha}$, have asymptotically correct coverage. That is,
$$\lim_{n\rightarrow\infty}\Pr(Y_{n}\leq\tilde{Y}^{DB}_{n, 1-\alpha}) = 1-\alpha=\lim_{n\rightarrow\infty}\Pr(Y_n\geq\underaccent{\tilde}{Y}_{n, 1-\alpha}^{DB}).$$
\item If the parametric distribution $F(\cdot; \mu, \sigma)$ belongs to the log-location-scale distribution family (\ref{log-location-scale}), with standard cdf $\Phi(\cdot)$ differentiable on $\mathbb{R}$, the $100(1-\alpha)\%$ upper and lower prediction bounds using the GPQ-bootstrap method, respectively $\tilde{Y}^{GPQ}_{n, 1-\alpha}$ and $\underaccent{\tilde}{Y}^{GPQ}_{n,1-\alpha}$, have asymptotically correct coverage. That is,
$$\lim_{n\rightarrow\infty}\Pr(Y_{n}\leq\tilde{Y}^{GPQ}_{n, 1-\alpha}) = 1-\alpha=\lim_{n\rightarrow\infty}\Pr(Y_n\geq\underaccent{\tilde}{Y}_{n,1-\alpha}^{GPQ}).$$
\end{enumerate}
\label{predbound}
\end{theorem}
The proof of Theorem~\ref{predbound} is in the online supplementary material.

\section{Multiple Cohort Within-Sample Prediction}
\label{sec:multiple-cohort}

\subsection{Multiple Cohort Data}

So far, the focus has been on within-sample prediction for single-cohort data. Multiple-cohort data, however, are more common in applications. In this section, the results for single-cohort data are extended to multiple-cohort data. In multiple-cohort data (e.g.,
the bearing-cage data of Section~\ref{m_exampls}), units from different cohorts are placed into service at different times. The multiple-cohort data $\mathbb{D}$ can be seen as a collection of several single-cohort datasets, $\mathbb{D}=\{\boldsymbol{D}_{n_{s}}, s=1,...,S\}$, where $S$ is the number of cohorts and $n_s$ is the number of units in cohort $s$ (sometimes, with no grouping, many cohorts have size 1). Within each cohort $\boldsymbol{D}_{n_{s}}=(D_{s,1},...,D_{s,n_s})$, we may express an observation as $D_{s,i}=(\delta_i^s, T^{obs,s}_{i})$, where $T^{obs,s}_{i}=T_i^s\delta_i^s+(1-\delta_i^s)t_c^s$, $T_i^s$ is a random variable from a parametric distribution $F(\cdot;\boldsymbol{\theta})$, $t_c^s$ is the censoring time for cohort $s$, and $\delta_i^s=\mathrm{I}(T_i^s\leq t_c^s)$ is a random variable indicating whether a unit's value (e.g., failure time) is less than the censoring time $t_c^s$. Given the multiple-cohort data $\mathbb{D}$, the number of observed events (e.g., failures) within cohort $s$ is defined as $r_{n_s}=\sum_{i=1}^{n_s}\mathrm{I}(T_i^s\leq t_c^s),s=1,...,S$, where the total number of units is $n=\sum_{s=1}^{S}n_s$. The predictand in the multiple-cohort setting is the total number of events that will occur in a future time window of length $\Delta$; it is denoted by $Y_n=\sum_{s=1}^{S}\sum_{i=1}^{n_s}\mathrm{I}(t_c^s<T^s_i\leq t_w^s)$, where $t_w^s=t_c^s+\Delta$ for $s=1,\dots, S$. Within each cohort $s=1,...,S$, the number $Y_s =\sum_{i=1}^{n_s}\mathrm{I}(t_c^s < T_i^s \leq t_w^s)$ of future events has a binomial distribution. As in Section~\ref{single_cohort_within_sample_pred}, the conditional distribution of $Y_s$ is $\mathrm{binomial}(n_s-r_{n_s}, p_s)$, where $p_s$ is defined as
\begin{align*}
p_s\equiv\pi_{s}(\boldsymbol{\theta})=\frac{F(t_w^s;\boldsymbol{\theta})-F(t_c^s;\boldsymbol{\theta})}{1-F(t_c^s;\boldsymbol{\theta})},\quad s=1,\dots,S.
\end{align*}
Consequently, the predictand $Y_n=\sum_{s=1}^{S}Y_s$ has a Poisson-binomial distribution with probability vector $\boldsymbol{p}=(p_1,...,p_S)$ and weight vector $\boldsymbol{w}=(n_1-r_{n_1},...,n_S-r_{n_S})$. We denote this Poisson-binomial distribution by $\mathrm{Poibin}(\boldsymbol{p}, \boldsymbol{w})$, its cdf by $\mathrm{ppoibin}(\cdot, \boldsymbol{p}, \boldsymbol{w})$, and its quantile function by $\mathrm{qpoibin}(\cdot, \boldsymbol{p}, \boldsymbol{w})$; these functions are available in the \textbf{poibin} R package (described in \citet{hongpoisson2013}). If $\widehat{\boldsymbol{\theta}}_n$ is a consistent estimator of $\boldsymbol{\theta}$ based on the multiple-cohort data $\mathbb{D}$, an estimator $\widehat{\boldsymbol{p}}=(\widehat p^1_{n},... ,\widehat p^S_{n})$ of the conditional probabilities $\boldsymbol{p}$ follows by the substitution $\widehat{p}_n^s = \pi_s(\widehat{\boldsymbol{\theta}}_n)$, $s=1,\ldots,S$, similar to the single-cohort case.
Then, the $100(1-\alpha)\%$ plug-in lower and upper prediction bounds for $Y_n$ are
\begin{align*}
\underaccent{\tilde}{Y}^{PL}_{n, 1-\alpha}&= \sup \{ y\in \{0\}\cup\mathbb{Z}^{+}: \mathrm{ppoibin}\left(y-1, \widehat{\boldsymbol{p}}, \boldsymbol{w}\right) \leq\alpha\}\\
&=\begin{cases}
\mathrm{qpoibin}(\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w}), &\text{if } \mathrm{ppoibin}(\mathrm{qpoibin}(\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w}), \widehat{\boldsymbol{p}}, \boldsymbol{w})>\alpha,\\
\mathrm{qpoibin}(\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w})+1, &\text{if } \mathrm{ppoibin}(\mathrm{qpoibin}(\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w}), \widehat{\boldsymbol{p}}, \boldsymbol{w})=\alpha,
\end{cases}\\
\tilde{Y}_{n,1-\alpha}^{PL}&=\inf\{y\in\{0\}\cup\mathbb{Z}^{+}:\mathrm{ppoibin}(y, \widehat{\boldsymbol{p}}, \boldsymbol{w})\geq1-\alpha\}=\mathrm{qpoibin}(1-\alpha, \widehat{\boldsymbol{p}}, \boldsymbol{w}).
\end{align*}
Similar to the single-cohort case (Theorem~\ref{first_theorem}), the plug-in method also fails to provide an asymptotically correct coverage probability with multiple-cohort data; see the online supplementary material.

\subsection{The Calibration-Bootstrap Method for Multiple Cohort Data}
\label{calibration_multiple_cohort_data}

Formulating prediction bounds using the calibration-bootstrap method first requires simulation of bootstrap samples, where each bootstrap sample $\mathbb{D}^\ast$ matches the original data in terms of the number $S$ of cohorts as well as their respective sizes $n_s$ and censoring times $t_c^s$, $s=1,\ldots,S$. The bootstrap version of the estimator $\widehat{\boldsymbol{p}}=(\widehat p^1_{n},... ,\widehat p^S_{n})$ is $\widehat{\boldsymbol{p}}^{\ast}=(\widehat p^{1,\ast}_{n},... ,\widehat p^{S,\ast}_{n})$ from each bootstrap sample $\mathbb{D}^*$. Additionally, the numbers of events (e.g., failures) in the bootstrap sample, grouped by cohort, are $(r_{n_1}^{\ast},...,r_{n_S}^{\ast})$, from which we denote a bootstrap future count by $Y_n^{\dagger}\sim\mathrm{Poibin}(\widehat{\boldsymbol{p}}, \boldsymbol{w}^\ast)$ based on the weight vector from the bootstrap sample, $\boldsymbol{w}^{\ast}=(n_1-r_{n_1}^{\ast},...,n_S-r_{n_S}^{\ast})$. The bootstrap variable set $(Y_n^\dagger, \widehat{\boldsymbol{p}}^{\ast}, \boldsymbol{w}^{\ast})$ is then plugged into the Poisson-binomial cdf, leading to a transformed random variable $U^\ast=\mathrm{ppoibin}(Y_{n}^\dagger, \widehat{\boldsymbol{p}}^{\ast}, \boldsymbol{w}^\ast)\in[0,1]$ for deriving calibrated confidence levels $\alpha^\dagger_L$ and $\alpha^\dagger_U$ in the same way as in the single-cohort situation. Then, the $100(1-\alpha)\%$ calibrated lower prediction bound is $\underaccent{\tilde}{Y}^{C}_{n,1-\alpha}=\underaccent{\tilde}{Y}^{PL}_{n,1-\alpha^\dagger_L}$ and the corresponding upper prediction bound is $\tilde{Y}^{C}_{n,1-\alpha}=\tilde{Y}^{PL}_{n,1-\alpha^\dagger_U}$. The calibration-bootstrap method remains asymptotically correct for multiple-cohort within-sample prediction. The multiple-cohort extensions of Theorem~\ref{theocali} and of the algorithm are in the online supplementary material.
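For concreteness, one bootstrap draw of the calibrating quantity $U^\ast$ can be sketched in R as follows. The input names are ours (assumed to be computed from the data and one bootstrap sample), and the \texttt{poibin} function \texttt{ppoibin} is assumed to accept a weight argument \texttt{wts} as described in \citet{hongpoisson2013}.
\begin{verbatim}
## One draw of U* for the multiple-cohort calibration: p_hat and
## p_star are length-S vectors of estimated conditional failure
## probabilities (original data and one bootstrap sample), and
## w_star is the bootstrap weight vector of at-risk counts.
library(poibin)

one_u_star <- function(p_hat, p_star, w_star) {
  ## Y-dagger ~ Poibin(p_hat, w_star), simulated cohort by cohort
  y_dagger <- sum(rbinom(length(w_star), size = w_star, prob = p_hat))
  ppoibin(y_dagger, pp = p_star, wts = w_star)   # U* in [0, 1]
}
\end{verbatim}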
\subsection{The Direct- and GPQ-Bootstrap Methods for Multiple Cohort Data}

For multiple-cohort data, constructing prediction bounds for $Y_n$ based on the predictive-distribution-based methods also requires bootstrap data and, in particular, the distribution of a bootstrap version $\widehat{\boldsymbol{p}}^\ast$ of $\widehat{\boldsymbol{p}}$, as in Section~\ref{calibration_multiple_cohort_data}. The predictive distribution from the direct-bootstrap method is
\begin{equation}\label{bootppoi}
G^{DB}_{Y_n}(y|\mathbb{D})=\int \mathrm{ppoibin}(y, \widehat{\boldsymbol{p}}^{\ast}, \boldsymbol{w})\Pr{}_{\!\!\ast}(d \widehat{\boldsymbol{p}}^{\ast})\approx\frac{1}{B}\sum_{b=1}^{B}\mathrm{ppoibin}(y, \widehat{\boldsymbol{p}}^\ast_b, \boldsymbol{w}),
\end{equation}
where $\widehat{\boldsymbol{p}}^\ast_1,\dots, \widehat{\boldsymbol{p}}^\ast_B$ are realized bootstrap versions of $\widehat{\boldsymbol{p}}$ across independently generated bootstrap versions of the multiple-cohort data (e.g., $\mathbb{D}^*$). The $100(1-\alpha)\%$ direct-bootstrap lower and upper prediction bounds for $Y_n$ are defined as the modified $\alpha$ quantile and the $1-\alpha$ quantile of this predictive distribution, respectively, and are given by
\begin{align*}
\underaccent{\tilde}{Y}_{n, 1-\alpha}^{DB}&=\sup \left\{y \in\{0\} \cup \mathbb{Z}^{+}: G_{Y_{n}}^{DB}\left(y-1 | \mathbb{D}\right) \leq\alpha\right\},\\
\tilde{Y}_{n, 1-\alpha}^{DB}&=\inf \left\{y \in\{0\} \cup \mathbb{Z}^{+}: G_{Y_{n}}^{DB}\left(y | \mathbb{D}\right) \geq 1-\alpha\right\}.
\end{align*}
If $F(\cdot;\boldsymbol{\theta})=F(\cdot;\mu,\sigma)$ belongs to the log-location-scale family in (\ref{log-location-scale}), we use $\widehat{\boldsymbol{\theta}}_n^{\ast}=(\widehat\mu_n^\ast,\widehat\sigma_n^\ast)$ to compute approximate GPQs $\widehat{\boldsymbol{\theta}}_n^{\ast\ast}=(\widehat\mu_n^{\ast\ast},\widehat\sigma_n^{\ast\ast})$ using (\ref{gpq2}), and compute $\widehat{\boldsymbol{p}}^{\ast\ast}=(\widehat p_{n}^{1,\ast\ast},\dots,\widehat p_{n}^{S,\ast\ast})$, where $\widehat p_{n}^{s, \ast\ast}=\pi_s(\widehat{\boldsymbol{\theta}}^{\ast\ast}_n)$. Then the GPQ-bootstrap method can be implemented to obtain prediction bounds for $Y_n$ by replacing $\widehat{\boldsymbol{p}}^\ast$ with $\widehat{\boldsymbol{p}}^{\ast\ast}$ in the direct-bootstrap predictive distribution (\ref{bootppoi}) and analogously determining prediction bounds from the quantiles of this predictive distribution. The direct- and GPQ-bootstrap methods produce asymptotically correct prediction bounds with multiple-cohort data; the extension of Theorem~\ref{predbound} is provided in the online supplementary material.

\section{A Simulation Study}
\label{simu:study}

The purpose of this simulation study is to illustrate, for finite sample sizes, agreement with the theorems established in the previous sections and to provide insights into the finite-sample performance of the different methods. The details and results in this section are for Type~\Romannum{1} censored single-cohort data. Let the event of interest be the failure of a unit. We simulated Type~\Romannum{1} censored data using the two-parameter Weibull distribution and compared the coverage probabilities of the prediction bounds based on the plug-in, calibration-bootstrap, direct-bootstrap, and GPQ-bootstrap methods.
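Anticipating the Weibull parameterization given below, the data-generation step of the simulation can be sketched in a few lines of R (the function and variable names are ours, for illustration; $\eta=1$ is used without loss of generality because $\eta$ is a scale parameter):
\begin{verbatim}
## Sketch of one simulated Type I censored Weibull sample: the
## censoring time t_c is chosen so that a unit fails before t_c
## with a target probability p_f1, i.e., F(t_c; eta, beta) = p_f1.
sim_censored_weibull <- function(n, beta, p_f1, eta = 1) {
  t_c    <- qweibull(p_f1, shape = beta, scale = eta)
  t      <- rweibull(n, shape = beta, scale = eta)
  status <- as.numeric(t <= t_c)          # 1 = failure observed
  data.frame(time = pmin(t, t_c), status = status)
}
\end{verbatim}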
The Weibull cdf is $$ F(t;\beta, \eta) = 1-\exp\left[-\left(\frac{t}{\eta}\right)^{\beta}\right],\quad t>0, $$ with positive shape $\beta$ and scale $\eta$ parameters, and can also be parameterized as $$ F(t;\mu, \sigma) = \Phi_{\textrm{sev}}\left[\frac{\log (t)-\mu}{\sigma}\right],\quad t>0, $$ where $\Phi_{\textrm{sev}}(x)=1-\exp\left[-\exp(x)\right]$ is the cdf of the standard smallest extreme value distribution, with $\mu = \log (\eta)$ and $\sigma = 1/\beta$. The conditions in Theorems~1--3 can be verified for Type~I censored Weibull data, so that the Weibull distribution can be used to illustrate all of the aforementioned methods for within-sample prediction (e.g., the ML estimators of the Weibull parameters $\widehat{\boldsymbol{\theta}}_n = (\widehat{\mu}_n,\widehat{\sigma}_n)$ have sampling distributions with normal limits and can be validly approximated by the parametric bootstrap as described in \citet{scholz1996maximum}). \subsection{Simulation Setup} The factors for the simulation experiment are (i) $p_{f1} = F(t_c;\beta,\eta)$, the probability that a unit fails before the censoring time $t_{c}$; (ii) $\mathrm{E}(r)= np_{f1}$, the expected number of failures by the censoring time $t_c$, where $n$ is the total sample size (i.e., including both the censored and the uncensored observations); (iii) $d \equiv p_{f2}-p_{f1}$, the probability that a unit fails in the future time interval $(t_c,t_w]$, where $p_{f2} = F(t_w;\beta, \eta)$; (iv) $\beta = 1/\sigma$, the Weibull shape parameter. Because $\eta=\exp(\mu)$ is a scale parameter, without loss of generality, $\eta=1$ was used in the simulation. A simulation with all combinations of the following factor levels was conducted: (i) $p_{f1} = 0.05, 0.1, 0.2$; (ii) $\mathrm{E}(r) = 5, 15, 25, 35, 45$; (iii) $d = 0.1, 0.2$; (iv) $\beta = 0.5, 0.8, 2, 4$. For each combination of these four factors, 90\% and 95\% upper prediction bounds and 90\% and 95\% lower prediction bounds were constructed. The procedure for the simulation is as follows: \begin{enumerate} \itemsep-0.5em \item Simulate $N=5000$ Type~I censored samples for each factor-level combination. \item Use ML to estimate the parameters $\beta$ and $\eta$ from each censored sample. \item Compute prediction bounds using the different methods for each sample. \item Compute the conditional (i.e., binomial) coverage probability for each of the prediction bounds. \item Determine the unconditional coverage probability for each method by averaging the $N=5000$ conditional coverage probabilities. \end{enumerate} Within each of the $N=5000$ simulated Type~I censored samples, $B=5000$ bootstrap samples were generated by parametric bootstrap (i.e., as a random sample from the fitted Weibull distribution with Type~I censoring at $t_c$), and these samples were used for the calibration-bootstrap method and the two predictive-distribution-based methods. In the simulation, we excluded those samples having fewer than 2 failures to avoid estimability problems, so that all $N=5000$ original samples and all the $N\times B=25{,}000{,}000$ bootstrap samples in the simulation have at least 2 failures. The probability of a data sample with fewer than 2 failures for each factor-level combination is given in Table~\ref{table:droprate}. \begin{table}[ht!]
\centering \begin{tabular}{cccccc} \hline & $\mathrm{E}(r)=5$ & $\mathrm{E}(r)=15$ & $\mathrm{E}(r)=25$ & $\mathrm{E}(r)=35$ & $\mathrm{E}(r)=45$\\\hline $p_{f1}=0.05$ & 0.037 &0.000&0.000&0.000&0.000\\ $p_{f1}=0.1$ & 0.034 &0.000&0.000&0.000&0.000\\ $p_{f1}=0.2$& 0.027 &0.000&0.000&0.000&0.000\\\hline \end{tabular} \caption{Probability of an excluded sample (i.e., $r=0$ or 1 failures) for different factor-level combinations.} \label{table:droprate} \end{table} \subsection{Simulation Results} \setcounter{figure}{0} \renewcommand{\thefigure}{\arabic{figure}} A small subset of the plots displaying the complete simulation results is given here, as the results are generally consistent across the different factor-level combinations. Figure~\ref{threeMethods} shows the coverage probabilities from the plug-in, calibration-bootstrap, direct-bootstrap, and GPQ-bootstrap methods when $\beta = 2$ and $d = 0.2$. The horizontal dashed line in each subplot represents the nominal confidence level. Plots for the other factor-level combinations are given in the online supplementary material. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{threeMethods.pdf} \caption{Coverage probabilities versus expected number of events for the direct-bootstrap (DB), GPQ-bootstrap (GPQ), calibration-bootstrap (CB), and plug-in (PL) methods when $d=p_{f2}-p_{f1}=0.2$ and $\beta = 2$.} \label{threeMethods} \end{figure} Some observations from the simulation results are: \begin{enumerate} \itemsep-0.5em \item The plug-in method fails to have an asymptotically correct coverage probability. As $p_{f1}$ decreases, which entails less information (fewer events observed before the censoring time $t_c$), the coverage probability deviates more from the nominal level. \item The direct- and GPQ-bootstrap methods are close to each other in terms of coverage probabilities, except when $\mathrm{E}(r)=5$. The calibration-bootstrap method differs considerably from both: it tends to be more conservative than the other bootstrap-based methods for constructing lower prediction bounds and less conservative for constructing upper prediction bounds. \item For the lower bounds, the direct- and GPQ-bootstrap methods dominate the calibration-bootstrap method. For the upper bounds, the coverage probabilities of the former two bootstrap-based methods are slightly conservative but still close to the nominal level; the calibration-bootstrap method is better than the direct- and GPQ-bootstrap methods for only a few of these upper bounds. \item Compared with the calibration-bootstrap method, whose performance is highly related to the level of $p_{f1}$, the coverage probabilities of the direct- and GPQ-bootstrap methods are insensitive to the level of $p_{f1}$. As $p_{f1}$ decreases, the lower prediction bound from the calibration-bootstrap method exhibits over-coverage while the upper prediction bound exhibits under-coverage. This implies that, under heavy censoring (small $p_{f1}$), extremely large sample sizes $n$ (or a correspondingly large expected number of failures $\mathrm{E}(r)=n p_{f1}$) are required for the calibration-bootstrap method to attain coverage probabilities close to the nominal confidence level. \end{enumerate} From these observations, we can see that the direct- and GPQ-bootstrap methods (i.e., the predictive-distribution-based methods) tend to dominate the calibration-bootstrap method in terms of the performance of the prediction bounds, even though all three methods are asymptotically valid.
This is because the predictive-distribution-based methods target the single source $p$ of parameter uncertainty in the conditional $\text{binomial}(n-r_n,p)$ distribution of the predictand $Y_n$ (i.e., this uncertainty is addressed by applying bootstrap versions $\widehat{p}^\ast$ or $\widehat{p}^{\ast\ast}$ to ``smooth'' the estimation uncertainty for $p$), while the number $n-r_n$ of Bernoulli trials used in these predictive distributions matches that of the predictand. Due to its definition, however, the calibration-bootstrap method involves bootstrap approximation steps (i.e., $r^*_n$ and $\widehat{p}^*$) for both the number $r_n$ of failures and the binomial probability $p$. The calibration-bootstrap method essentially imposes an approximation $n-r^*_n$ of the known number $n-r_n$ of trials prescribing the predictand $Y_n$. As a consequence, coverages from the calibration-bootstrap method are generally less accurate than those from the predictive-distribution-based methods for within-sample prediction. \section{Application of the Methods} \label{sec:applications} \subsection{Examples} \noindent \textbf{Product-A Data}: The ML estimates of the Weibull shape and scale parameters are $\widehat\beta=1.518$ and $\widehat\eta=1152$, respectively, based on 80 failure times among 10,000 units before 48 months. Then, for the 9920 surviving units, the ML estimate of the probability that a unit will fail between 48 and 60 months of age is $ \widehat p_n = [F(60;\widehat\beta, \widehat\eta)-F(48;\widehat\beta, \widehat\eta)]/[1-F(48;\widehat\beta, \widehat\eta)]= 0.00323. $ Using the ML estimates of the Weibull parameters $(\widehat\beta, \widehat\eta)$, we simulate 10,000 bootstrap samples that are censored at 48 months and obtain ML estimates of $(\beta, \eta)$ from each bootstrap sample. Applying each interval method with these bootstrap estimates, Table~\ref{productAData} gives prediction bounds for the number of failures in the next 12 months. Consistent with our earlier results, even with a large number of failures, the plug-in intervals are too narrow compared with the bounds from the other methods. \begin{table}[!ht] \centering \begin{tabular}{c c c c c c} \hline Confidence Level &Bound Type& Plug-in & Direct & GPQ & Calibration \\ [0.5ex] \hline 95\% & Lower &\multicolumn{1}{r}{23} & \multicolumn{1}{r}{20} & \multicolumn{1}{r}{20} & \multicolumn{1}{r}{20} \\ 90\% & Lower &\multicolumn{1}{r}{25} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{23} \\ 90\% & Upper &\multicolumn{1}{r}{39} & \multicolumn{1}{r}{43} & \multicolumn{1}{r}{43} & \multicolumn{1}{r}{43} \\ 95\% & Upper &\multicolumn{1}{r}{42} & \multicolumn{1}{r}{47} & \multicolumn{1}{r}{47} & \multicolumn{1}{r}{46} \\ \hline \end{tabular} \caption{Product-A Data: Prediction bounds for the number of failures in the next 12 months using different methods.} \label{productAData} \end{table} \noindent \textbf{Heat Exchanger Data}: In this example, there are no exact failure times in the data. The data contain limited information: there were only 8 failures among the 20,000 exchanger tubes that were inspected (in censored data analysis, the informational content of the data is closely related to the number of failures), and the failure times are interval-censored (not exact).
The likelihood function under a Weibull model for the heat exchanger data is $$ L(\beta, \eta)=F(1; \beta, \eta)[F(2; \beta, \eta)-F(1; \beta, \eta)][F(3; \beta, \eta)-F(2; \beta, \eta)]^{6}[1-F(3; \beta, \eta)]^{19992}, $$ resulting in the ML estimates $\widehat\beta=2.531$ and $\widehat\eta=66.058$. The conditional probability of a tube failing between the third and tenth year, given that the tube has not failed by the end of the third year, is then estimated as $\widehat p_n = [F(10;\widehat\beta, \widehat\eta)-F(3;\widehat\beta, \widehat\eta)]/[1-F(3;\widehat\beta, \widehat\eta)]= 0.00797$. \begin{figure}[ht!] \centering \includegraphics[width=0.9\textwidth]{calibrationQuantile.pdf} \caption{The quantile function of $\mathrm{pbinom}(Y^\dagger_n, n-r^\ast_n, \widehat p^\ast_n)$ used for the calibration-bootstrap method with the heat exchanger data.} \label{calibrationquantile} \end{figure} The ML estimates from 10,000 bootstrap samples (parametric bootstrap with censoring at 3 years) are used in the calibration-bootstrap and the two predictive-distribution-based methods. However, the calibration-bootstrap method exhibits numerical instabilities with these data due to the small number of failures. To illustrate, Figure~\ref{calibrationquantile} shows the approximate quantile function of $U^\ast=\mathrm{pbinom}(Y_n^\dagger, n-r_n^\ast, \widehat p_n^\ast)$ used in the calibration-bootstrap method, involving the evaluation of the bootstrap count $Y_n^\dagger$ in the cdf $\mathrm{pbinom}$ of a $\text{binomial}(n-r^\ast_n,\widehat{p}_n^\ast)$ distribution, given the number $r^*_n$ of failures and the estimate $\widehat{p}_n^*$ from a bootstrap sample. This quantile function is also the calibration curve: the x-axis gives the desired confidence level $1-\alpha$, while the y-axis gives the corresponding calibrated confidence level ($\alpha^\dagger_L$ or $1-\alpha^\dagger_U$) to be used for determining the plug-in prediction bounds (i.e., quantiles from a $\text{binomial}(n-r_n=19992,\widehat{p}=0.00797)$ distribution). From Figure~\ref{calibrationquantile}, we can see that the $0.05$ and $0.1$ quantiles are nearly equal to $0$ while the $0.9$ and $0.95$ quantiles are nearly equal to 1. This creates complications in computing the prediction bounds; for example, there is numerical instability near the 100\% quantile of the $\text{binomial}(n-r_n=19992,\,\widehat{p}=0.00797)$ distribution. Consequently, the 90\% and 95\% bounds from the calibration-bootstrap method are computationally not available (NA). \begin{table}[ht] \centering \begin{tabular}{c c c c c c} \hline Confidence Level& Bound Type & Plug-in & Direct & GPQ & Calibration \\ [0.5ex] \hline 95\% & Lower &\multicolumn{1}{r}{138} & \multicolumn{1}{r}{28} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{NA}\\ 90\% & Lower &\multicolumn{1}{r}{142} & \multicolumn{1}{r}{43} & \multicolumn{1}{r}{34} & \multicolumn{1}{r}{NA}\\ 90\% & Upper &\multicolumn{1}{r}{176} & \multicolumn{1}{r}{1627} & \multicolumn{1}{r}{888} & \multicolumn{1}{r}{NA}\\ 95\% & Upper &\multicolumn{1}{r}{180} & \multicolumn{1}{r}{4343} & \multicolumn{1}{r}{1890} & \multicolumn{1}{r}{NA}\\ \hline \end{tabular} \caption{Heat Exchanger Data: Prediction bounds for the number of failures in the next 7 years using different methods.} \label{heatExchangerData} \end{table} Table~\ref{heatExchangerData} instead provides prediction bounds from the plug-in, direct-, and GPQ-bootstrap methods. The plug-in prediction bounds differ substantially from those of the two bootstrap-based methods.
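In passing, the ML estimates above are easy to reproduce numerically from the displayed likelihood; the following is a minimal R sketch using \texttt{optim}, with a log-parameterization to keep $\beta$ and $\eta$ positive.
\begin{verbatim}
## negative log-likelihood of the heat exchanger data:
## one failure in year 1, one in year 2, six in year 3,
## and 19,992 tubes surviving past year 3
negloglik <- function(par) {
  beta <- exp(par[1]); eta <- exp(par[2])
  F <- function(t) pweibull(t, shape = beta, scale = eta)
  -(log(F(1)) + log(F(2) - F(1)) + 6 * log(F(3) - F(2)) +
      19992 * log(1 - F(3)))
}
fit <- optim(c(log(2), log(50)), negloglik)
exp(fit$par)  # should be approximately (2.531, 66.058)
\end{verbatim}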
Unlike in the previous example (Product-A data), the direct- and GPQ-bootstrap bounds also differ appreciably here, owing to the limited failure information in the heat exchanger data; we return to these differences in Section~\ref{compare:gpq:boot}. The upper bounds involve a large amount of extrapolation and may not be practically meaningful other than to warn that there is a huge amount of uncertainty in the 10-year predictions. \noindent \textbf{Bearing Cage Data}: In this example, staggered entry data containing multiple cohorts are considered. While similar in spirit to the Product-A example, the predictand here differs by having a Poisson-binomial distribution, which can be computed with the R package \textbf{poibin}. This package is applied, with 10,000 bootstrap samples, to construct prediction bounds using the methods described in Section~\ref{calibration_multiple_cohort_data}. Table~\ref{bearingcage} gives the resulting prediction bounds for the bearing cage dataset. \begin{table}[ht] \centering \begin{tabular}{c c c c c c} \hline Confidence Level & Bound Type & Plug-in & Direct & GPQ & Calibration \\ [0.5ex] \hline 95\% & Lower &\multicolumn{1}{r}{2} & \multicolumn{1}{r}{1} & \multicolumn{1}{r}{1} & \multicolumn{1}{r}{1} \\ 90\% & Lower &\multicolumn{1}{r}{2} & \multicolumn{1}{r}{2} & \multicolumn{1}{r}{2} & \multicolumn{1}{r}{2} \\ 90\% & Upper &\multicolumn{1}{r}{8} & \multicolumn{1}{r}{10} & \multicolumn{1}{r}{13} & \multicolumn{1}{r}{10} \\ 95\% & Upper &\multicolumn{1}{r}{9} & \multicolumn{1}{r}{12} & \multicolumn{1}{r}{20} & \multicolumn{1}{r}{12} \\ \hline \end{tabular} \caption{Bearing Cage Data: Prediction bounds for the number of failures in the next 300 service hours using different methods.} \label{bearingcage} \end{table} \subsection{Comparing the Direct- and GPQ-Bootstrap Methods} \label{compare:gpq:boot} In the heat exchanger example, the prediction bounds obtained from the direct- and GPQ-bootstrap methods appear very different. This motivates us to investigate the cause of such differences in similar prediction applications involving limited information. A general simulation setting is first described to mimic the heat exchanger data. The heat exchanger data have two important features: the number of events is small (i.e., 8) and so is the proportion of observed events (i.e., $8/20{,}000=0.0004$). Hence, in the simulation, the expected number of events $\text{E}(r)$ is set to 5 while the proportion failing $p_{f1}$ is 0.001, with a Weibull shape parameter $\beta=2$ and scale parameter $\eta = 1$. Different levels of $d = p_{f2}-p_{f1}$ are used for the probability of events in the forecast window. The simulation results (available in the online supplementary material) reveal that, overall, the GPQ-bootstrap method has better coverage probability than the direct-bootstrap method in this setting. For the upper prediction bound, the direct-bootstrap method is generally more conservative than the GPQ-bootstrap method in terms of coverage probability, indicating that upper prediction bounds from the direct-bootstrap method are larger than their GPQ counterparts. On the other hand, the lower bound based on the direct-bootstrap method generally tends to have under-coverage compared to the GPQ-bootstrap method, again suggesting larger lower bounds from the direct-bootstrap method relative to the GPQ-bootstrap method.
These patterns in the prediction bounds (i.e., larger direct-bootstrap bounds compared to those from the GPQ-bootstrap method in a setting with a limited number of events) are consistent with the prediction bounds found in the heat exchanger example. \begin{figure}[ht!] \centering \includegraphics[width=0.85\textwidth]{compare_gpq_boot.pdf} \caption{A representative distribution of $\widehat p^{\ast}$ and $\widehat p^{\ast\ast}$.} \label{fig:2} \end{figure} To further illustrate, Figure~\ref{fig:2} shows the bootstrap distributions of $\widehat{p}^*$ and $\widehat{p}^{**}$ from a single Monte Carlo sample that represents the typical behavior in this simulation setting: the values of $\widehat{p}^{**}$ used in the predictive distribution of the GPQ-bootstrap method tend to be smaller and more concentrated than the $\widehat{p}^*$ values used in the direct-bootstrap predictive distribution. Note that the direct- and GPQ-bootstrap predictive distributions are approximated by $G^{DB}_{Y_n}(y|\boldsymbol{D}_n)\approx1/B\sum_{b=1}^{B}\text{pbinom}(y, n-r_n, \widehat p^\ast_b)$ and $G^{GPQ}_{Y_n}(y|\boldsymbol{D}_n)\approx1/B\sum_{b=1}^{B}\text{pbinom}(y, n-r_n, \widehat p^{\ast\ast}_b)$, respectively, and that the direct- and GPQ-bootstrap prediction bounds correspond to quantiles of these predictive distributions. Consequently, because $\widehat{p}_{b}^{*}$ and $\widehat{p}_b^{**}$ are small (e.g., less than 0.25) and $\widehat p^\ast_b$ is generally larger than $\widehat p^{\ast\ast}_b$ in Figure~\ref{fig:2}, $G^{DB}_{Y_n}(y|\boldsymbol{D}_n)$ is generally smaller than $G^{GPQ}_{Y_n}(y|\boldsymbol{D}_n)$, implying that quantiles from $G^{DB}_{Y_n}(y|\boldsymbol{D}_n)$ can be expected to exceed those from $G^{GPQ}_{Y_n}(y|\boldsymbol{D}_n)$ in data cases with a limited number of events. However, asymptotically, both $\widehat p_n^\ast$ and $\widehat p_n^{\ast\ast}$ are similarly normally distributed and symmetric around $\widehat p_n$ (as shown in the online supplementary material), so that the direct- and GPQ-bootstrap prediction bounds may be expected to behave alike in data situations with more events and larger sample sizes, as seen in Figure~\ref{threeMethods} (and in the Product-A application). \section{Choice of a Distribution} \label{choice-of-dist} Extrapolation is usually required when predicting the number of future events based on an on-going time-to-event process. For example, it may be necessary to predict the number of returns in a three-year warranty period based on field data for the first year of operation of a product. An exception arises when life can be modeled in terms of use (as opposed to time in service) and there is much variability in use rates among units in the population. The high-use units will fail early and provide good information about the upper tail of the amount-of-use return-time distribution (e.g., \citet{hong2010}). When extrapolation is required, predictions can depend strongly on the choice of distribution. In most applications, especially with heavy censoring, there is little or no useful information in the data to help choose a distribution. It is then best to choose a failure-time distribution based on knowledge of the failure mechanism and the related physics/chemistry of failure. In important applications, this would typically be done by consulting with experts who have such knowledge.
For example, the lognormal distribution can be justified for failure times that arise from the product of a large number of small, approximately independent positive random quantities. Two common applications where the lognormal distribution is often used are failures from crack initiation and growth due to cyclic stressing of metal components (e.g., in aircraft engines) and failures from chemical degradation such as corrosion (e.g., in microelectronics). \citet[][pages 36-37]{GnedenkoBelyayevSolovyev1969} provide mathematical justification for this physical/chemical motivation. \begin{figure}[t!] \begin{tabular}{cc} \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.05_d0.1}.pdf} & \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.05_d0.2}.pdf} \\ \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.1_d0.1}.pdf} & \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.1_d0.2}.pdf} \\ \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.2_d0.1}.pdf} & \includegraphics[width=0.5\linewidth]{{DistPredCompare_beta2_pfq0.2_d0.2}.pdf} \end{tabular} \caption{Distributional comparisons for $\beta=2$. The two vertical dotted lines on the left indicate the points in time where all three distributions have the same $0.01$ and $p_{f1}$ quantiles. The three vertical lines on the right indicate the times at which $p_{f2}=p_{f1}+d$ for the three distributions. } \label{figure:beta.two} \end{figure} Based on extreme value theory, the Weibull distribution can be used to model the distribution of the minimum of a large number of approximately iid positive random variables from certain classes of distributions. For example, the Weibull distribution may provide a suitable model for the time to first failure of a large number of similar components in a system. Consider a chain with many nominally identical links and suppose that the chain is subjected to cyclic stresses over time. As suggested in the previous paragraph, the number of cycles to failure for each link could be described adequately with a lognormal distribution. The chain, however, fails when the first link fails. The limiting distribution of (properly standardized) minima of iid lognormal random variables is a type 1 smallest extreme value (or Gumbel) distribution. For all practical purposes, however, the Weibull distribution provides a better approximation. For further information on this result from the penultimate theory of extreme values, see \citet{Green1976}, \citet[Section 3.11]{Castillo1988}, and \citet{GomesHaan1999}. Similarly, if failures are driven by the maximum of a large number of approximately iid positive random variables, a Fr\'{e}chet distribution would be suggested. The reciprocal of a Weibull random variable has a Fr\'{e}chet distribution. Of course, choosing a distribution based on failure-mechanism knowledge is not always possible. The alternative is to perform sensitivity analyses using different distributions. Figure~\ref{figure:beta.two} compares the Weibull, lognormal, and Fr\'{e}chet cdfs, where the Weibull distribution has shape parameter $\beta=2$ and the factor-level combinations of $p_{f1}$ and $d$ are those used in the simulation of Section~\ref{simu:study}. The scale parameter $\eta$ is determined by setting the 0.01 Weibull quantile to 1. The cdfs are plotted on lognormal probability scales, on which a lognormal cdf is a straight line.
The particular parameters for the lognormal and Fr\'{e}chet distributions were chosen such that the distributions cross at the 0.01 and $p_{f1}$ quantiles, so that the three distributions agree well over the range where data would be observed. Similar plots for $\beta=1$ and $\beta=4$ are provided in the online supplementary material. The Weibull distribution is always more pessimistic (conservative) than the lognormal, and the Fr\'{e}chet is always more optimistic than the lognormal. For example, if the true distribution is Weibull but a lognormal distribution is used to fit the data, the prediction intervals, regardless of the method, will underpredict the number of events. When in doubt, the Weibull distribution is often used because it is the conservative choice. \section{Concluding Remarks} \label{sec:conclusion} This paper studies the problem of predicting the future number of events based on censored time-to-event data (e.g., failure times). This type of prediction is known as within-sample prediction. A regular prediction problem, for which standard plug-in estimation commonly applies, is defined, and it is shown that within-sample prediction is not regular and that the plug-in method fails to produce asymptotically valid prediction bounds. The irregularity of within-sample prediction and the failure of the plug-in method motivated the study of the calibration method as an alternative approach for prediction bounds, although the previously established theory for calibration bounds does not apply to within-sample prediction. The calibration method is implemented via the bootstrap, called the calibration-bootstrap method here, and is proved to be asymptotically correct (i.e., to produce prediction bounds with asymptotically correct coverage). Then, turning to formulations of a predictive distribution, we study and validate two other methods for obtaining prediction bounds, namely the direct-bootstrap and GPQ-bootstrap methods. All prediction methods considered can be applied to both single-cohort and multiple-cohort data. While the theoretical results show that the calibration-bootstrap method and the two predictive-distribution-based methods are all asymptotically correct, the simulation study shows that the direct-bootstrap and GPQ-bootstrap methods outperform the calibration-bootstrap method in terms of coverage probability accuracy relative to a nominal coverage level. The two predictive-distribution-based methods are also easier to implement than the calibration-bootstrap method and can be computationally more stable (e.g., the heat exchanger data example). Thus, we recommend the predictive-distribution-based methods, especially the direct-bootstrap method, for general applications involving within-sample prediction. In this paper, all of the units in the population were assumed to have the same time-to-event distribution. In many applications, however, units are exposed to different operating or environmental conditions, resulting in different time-to-event distributions. For example, during 1996-2000, the Firestone tires installed on Ford Explorer SUVs experienced unusually high rates of failure, where problems first arose in Saudi Arabia, Qatar, and Kuwait because of the high temperatures in those countries (see \citet{national2001engineering}). Having prediction intervals that use covariate information (like temperature and moisture) could be useful for manufacturers and regulators in making decisions about a possible product recall, for example.
Similarly, there can be seasonality effects in time-to-event processes and within-sample predictions. The methods described in this paper can be extended to handle either constant covariates or time-varying covariates. \citet{hong2009} used constant covariates with calibration-bootstrap methods to predict power-transformer failures. Despite the complicated nature of their data (random right censoring, truncation, and combinations of categorical covariates with small counts in some cells), \citet{hong2009} were able to use the fractional random-weight method \citep[e.g.,][]{XuGotwaltHongKingMeeker2020} to generate bootstrap estimates. \citet{ShanHongMeeker2020} used time-varying covariates to account for seasonality in two different warranty prediction applications. As mentioned by one of the referees, there is a difficulty if there is seasonality and data from only part of one year are available. In such cases, it would be necessary to use past data on a similar process to provide information about the seasonality. Covariate information in reliability field data has not been common, but that is changing due to reduced costs and advances in sensor, communications, and storage technology. In the future, much more covariate information on various system operating/environmental variables will be available to make better predictions, as described in \citet{MeekerHong2014}. \section*{Acknowledgments} We would like to thank Luis A. Escobar for helpful comments on this paper. We are also grateful to the editorial staff, including two reviewers, for helpful comments that improved the manuscript. Research was partially supported by NSF DMS-2015390.
{ "timestamp": "2020-08-10T02:09:04", "yymm": "2007", "arxiv_id": "2007.08648", "language": "en", "url": "https://arxiv.org/abs/2007.08648" }
\section{Introduction} Discrete-valued time series arise in a wide variety of fields ranging from finance to molecular biology and public health. In finance, for instance, one may consider the number of transactions in stocks; see \cite{brannas:quoreshi:2010}. In molecular biology, modeling RNA-Seq kinetics data is a challenging issue; see \cite{Thorne:2018}. In the public health context, there is an interest in modeling daily asthma presentations in a given hospital; see \cite{SOUZA:2014}. The literature on modeling discrete-valued time series is becoming increasingly abundant; see \cite{handbook:2016} for a review. Different classes of models have been proposed, such as the Integer Autoregressive Moving Average (INARMA) models and the generalized state space models. The Integer Autoregressive process of order 1 (INAR(1)) was first introduced by \cite{McKenzie:1985} and the Integer-valued Moving Average (INMA) process is described in \cite{Al-Osh:1988}. One of the attractive features of INARMA processes is that their autocorrelation structure is similar to that of autoregressive moving average (ARMA) models. However, it has to be noticed that statistical inference in these models is generally complicated and requires the development of computationally intensive approaches, such as the efficient MCMC algorithm devised by \cite{Neal:rao:2007} for INARMA processes of known AR and MA orders. This strategy was extended to unknown AR and MA orders by \cite{enciso:nea:rao:2009}. For further references on INARMA models, we refer the reader to \cite{weiss:dts}. The other important class of models for discrete-valued time series is that of generalized state space models, which come in a parameter-driven and an observation-driven version; see \cite{davis:1999} for a review. The main difference between these two versions is that in parameter-driven models the state vector evolves independently of the past history of the observations, whereas in observation-driven models the state vector depends on the past observations. More precisely, in parameter-driven models, letting $(\nu_t)$ be a stationary process, the observations $Y_t$ are modeled as follows: conditionally on $(\nu_t)$, $Y_t$ has a Poisson distribution with parameter $\exp(\beta_0^\star+\sum_{i=1}^p\beta_i^\star x_{t,i}+\nu_t)$, where the $x_{t,i}$'s are the $p$ regressor variables (or covariates). Estimating the parameters in such models comes with a very high computational load; see \cite{jung:2001}. Observation-driven models, initially proposed by \cite{cox:1981} and further studied in \cite{zeger:qaqish:1988}, do not have this computational drawback and are thus considered a promising alternative to parameter-driven models. Different kinds of observation-driven models can be found in the literature: the Generalized Linear Autoregressive Moving Average (GLARMA) models introduced by \cite{davis:1999} and further studied in \cite{davis:dunsmuir:streett:2003}, \cite{davis:dunsmuir:street:2005} and \cite{dunsmuir:2015}, and the (log-)linear Poisson autoregressive models studied in \cite{fokianos:2009}, \cite{fokianos:2011} and \cite{fokianos:2012}. Note that GLARMA models cannot be seen as a particular case of the log-linear Poisson autoregressive models. In the following, we shall consider the GLARMA model introduced in \cite{davis:dunsmuir:street:2005} with additional covariates.
More precisely, given the past history $\mathcal{F}_{t-1}=\sigma(Y_s,s\leq t-1)$, we assume that \begin{equation}\label{eq:Yt} Y_t|\mathcal{F}_{t-1}\sim\mathcal{P}\left(\mu_t^\star\right), \end{equation} where $\mathcal{P}(\mu)$ denotes the Poisson distribution with mean $\mu$. In (\ref{eq:Yt}), \begin{equation}\label{eq:mut_Wt} \mu_t^\star=\exp(W_t^\star) \textrm{ with } W_t^\star=\beta_0^\star+\sum_{i=1}^p\beta_i^\star x_{t,i}+Z_t^\star, \end{equation} where the $x_{t,i}$'s are the $p$ regressor variables ($p\geq 1$), and \begin{equation}\label{eq:Zt} Z_t^\star=\sum_{j=1}^q \gamma_j^\star E_{t-j}^\star \textrm{ with } E_t^\star=\frac{Y_t-\mu_t^\star}{\mu_t^\star}=Y_t\exp(-W_t^\star)-1, \end{equation} with $1\leq q\leq\infty$ and $E_t^\star=0$ for all $t\leq 0$. Here, the $E_t^\star$'s correspond to the working residuals in classical Generalized Linear Models (GLM), which means that we limit ourselves to the case $\lambda=1$ in the more general definition $ E_t^\star=(Y_t-\mu_t^\star){\mu_t^{\star}}^{-\lambda} $. Note that in the case where $q=\infty$, $(Z_t^\star)$ satisfies the ARMA-like recursions given in Equation (4) of \cite{davis:dunsmuir:street:2005}. The model defined by (\ref{eq:Yt}), (\ref{eq:mut_Wt}) and (\ref{eq:Zt}) is thus referred to as a GLARMA model. The main goal of this paper is to introduce a novel variable selection approach for the deterministic part (covariates) of sparse GLARMA models, that is, models of the form (\ref{eq:Yt}), (\ref{eq:mut_Wt}) and (\ref{eq:Zt}) in which the vector of the $\beta_i^\star$'s is sparse, meaning that many $\beta_i^\star$'s are null. The novel approach that we propose consists in combining a procedure for estimating the ARMA part coefficients with regularized methods designed for GLMs. The paper is organized as follows. Firstly, in Section \ref{sec:estim}, we describe the classical estimation procedure in GLARMA models and, in Section \ref{sec:consistency}, we establish a consistency result in a specific case. Secondly, we propose a novel two-stage estimation procedure, described in Section \ref{sec:our_estim}, which consists in first estimating the ARMA coefficients and then estimating the regression coefficients by using a regularized approach. The practical implementation of our approach is given in Section \ref{sec:practical}. Thirdly, in Section \ref{sec:num}, we provide some numerical experiments to illustrate our method and to compare its performance with alternative approaches on finite-sample data. Finally, we give the proofs of the theoretical results in Section \ref{sec:proofs}.
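Before turning to inference, we note that the model defined by (\ref{eq:Yt}), (\ref{eq:mut_Wt}) and (\ref{eq:Zt}) is straightforward to simulate from when $q$ is finite. The following is a minimal R sketch; the inputs \texttt{beta} $=(\beta_0,\dots,\beta_p)'$, the $n\times p$ covariate matrix \texttt{x} and \texttt{gamma} $=(\gamma_1,\dots,\gamma_q)'$ are hypothetical.
\begin{verbatim}
simulate_glarma <- function(n, beta, x, gamma) {
  q <- length(gamma)
  Y <- numeric(n)
  E <- rep(0, n + q)  # E[t + q] stores E_t, so E_t = 0 for all t <= 0
  for (t in 1:n) {
    Z <- sum(gamma * E[(t + q - 1):t])          # Z_t = sum_j gamma_j E_{t-j}
    W <- beta[1] + sum(beta[-1] * x[t, ]) + Z   # W_t
    Y[t] <- rpois(1, exp(W))                    # Y_t | F_{t-1} ~ P(exp(W_t))
    E[t + q] <- Y[t] * exp(-W) - 1              # working residual E_t
  }
  Y
}
\end{verbatim}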
\section{Statistical inference}\label{sec:stat_inf} \subsection{Classical estimation procedure in GLARMA models}\label{sec:estim} Classically, for estimating the parameter $\boldsymbol{\delta}^\star=(\boldsymbol{\beta}^{\star\prime},\boldsymbol{\gamma}^{\star\prime})$, where $\boldsymbol{\beta}^\star=(\beta_0^\star,\beta_1^\star,\dots,\beta_p^\star)'$ is the vector of regressor coefficients defined in (\ref{eq:mut_Wt}) and $\boldsymbol{\gamma}^\star=(\gamma_1^\star,\dots,\gamma_q^\star)'$ is the vector of the ARMA part coefficients defined in (\ref{eq:Zt}), the following criterion, based on the conditional log-likelihood, is maximized with respect to $\boldsymbol{\delta}=(\boldsymbol{\beta}',\boldsymbol{\gamma}')$, with $\boldsymbol{\beta}=(\beta_0,\beta_1,\dots,\beta_p)'$ and $\boldsymbol{\gamma}=(\gamma_1,\dots,\gamma_q)'$: \begin{equation}\label{eq:likelihood} L(\boldsymbol{\delta})=\sum_{t=1}^n\left(Y_t W_t(\boldsymbol{\delta})-\exp(W_t(\boldsymbol{\delta}))\right). \end{equation} In (\ref{eq:likelihood}), \begin{equation}\label{eq:Wt} W_t(\boldsymbol{\delta})=\boldsymbol{\beta}'x_t+Z_t(\boldsymbol{\delta})=\beta_0+\sum_{i=1}^p\beta_i x_{t,i}+\sum_{j=1}^q \gamma_j E_{t-j}(\boldsymbol{\delta}), \end{equation} with $x_t=(x_{t,0},x_{t,1},\dots,x_{t,p})'$, $x_{t,0}=1$ for all $t$, and \begin{eqnarray} E_t(\boldsymbol{\delta})=Y_t\exp(-W_t(\boldsymbol{\delta}))-1,\mbox{ if }t>0\mbox{ and }E_t(\boldsymbol{\delta})=0\mbox{, if }t\leq 0. \label{eq:Et} \end{eqnarray} For further details on the choice of this criterion, we refer the reader to \cite{davis:dunsmuir:street:2005}. To obtain $\widehat{\boldsymbol{\delta}}$ defined by \begin{equation*} \widehat{\boldsymbol{\delta}}=\textrm{Argmax}_{\boldsymbol{\delta}} \; L(\boldsymbol{\delta}), \end{equation*} the first derivatives of $L$ are considered: \begin{equation}\label{eq:def:grad} \frac{\partial L}{\partial \boldsymbol{\delta}}(\boldsymbol{\delta})=\sum_{t=1}^n(Y_t-\exp(W_t(\boldsymbol{\delta})))\frac{\partial W_t}{\partial \boldsymbol{\delta}}(\boldsymbol{\delta}), \end{equation} where \begin{equation*} \frac{\partial W_t}{\partial \boldsymbol{\delta}}(\boldsymbol{\delta})=\frac{\partial\boldsymbol{\beta}' x_t}{\partial \boldsymbol{\delta}}+\frac{\partial Z_t}{\partial \boldsymbol{\delta}} (\boldsymbol{\delta}), \end{equation*} $\boldsymbol{\beta}$, $x_t$ and $Z_t$ being given in (\ref{eq:Wt}). The computations of the first derivatives of $W_t$ are detailed in Section \ref{subsub:first_derive}. Since Equation (\ref{eq:def:grad}) is nonlinear in $\boldsymbol{\delta}$ and has to be computed recursively, it is not possible to obtain a closed-form formula for $\widehat{\boldsymbol{\delta}}$. Thus, $\widehat{\boldsymbol{\delta}}$ is computed by using the Newton-Raphson algorithm. More precisely, starting from an initial value for $\boldsymbol{\delta}$ denoted by $\boldsymbol{\delta}^{(0)}$, the following recursion is used for $r\geq 1$: \begin{equation}\label{eq:newton_raphson} \boldsymbol{\delta}^{(r)}=\boldsymbol{\delta}^{(r-1)}-\left[\frac{\partial^2 L}{\partial \boldsymbol{\delta}'\partial \boldsymbol{\delta}}(\boldsymbol{\delta}^{(r-1)})\right]^{-1}\frac{\partial L}{\partial \boldsymbol{\delta}}(\boldsymbol{\delta}^{(r-1)}), \end{equation} where $\frac{\partial^2 L}{\partial \boldsymbol{\delta}'\partial \boldsymbol{\delta}}$ is the Hessian matrix of $L$, defined in (\ref{eq:def:hess}) below. Hence, the recursion requires the computation of the first and second derivatives of $L$.
We already explained how to compute the first derivatives of $L$. As for the second derivatives of $L$, they can be obtained as follows: \begin{equation}\label{eq:def:hess} \frac{\partial^2 L}{\partial \boldsymbol{\delta}'\partial \boldsymbol{\delta}}(\boldsymbol{\delta}) =\sum_{t=1}^n(Y_t-\exp(W_t(\boldsymbol{\delta})))\frac{\partial^2 W_t}{\partial \boldsymbol{\delta}'\partial\boldsymbol{\delta}}(\boldsymbol{\delta}) -\sum_{t=1}^n\exp(W_t(\boldsymbol{\delta}))\frac{\partial W_t}{\partial \boldsymbol{\delta}'}(\boldsymbol{\delta})\frac{\partial W_t}{\partial \boldsymbol{\delta}}(\boldsymbol{\delta}). \end{equation} The computations of the second derivatives of $W_t$ are detailed in Section \ref{subsub:second_derive}. However, in our sparse framework where many components of $\boldsymbol{\beta}^\star$ are null, this procedure provides poor estimation results; see Section \ref{sec:sparse_estim} for a numerical illustration. This is the reason why we devised the novel estimation procedure described in the next section. \subsection{Our estimation procedure}\label{sec:our_estim} For selecting the most relevant components of $\boldsymbol{\beta}^\star$, we propose the following two-stage procedure: first, we estimate $\boldsymbol{\gamma}^\star$ by using the Newton-Raphson algorithm described in Section \ref{sec:estim_gamma}; second, we estimate $\boldsymbol{\beta}^\star$ by using the regularized approach detailed in Section \ref{sec:variable}. \subsubsection{Estimation of $\boldsymbol{\gamma}^\star$}\label{sec:estim_gamma} To estimate $\boldsymbol{\gamma}^\star$, we propose using \begin{equation*} \widehat{\boldsymbol{\gamma}}=\textrm{Argmax}_{\boldsymbol{\gamma}} \; L({\boldsymbol{\beta}^{(0)}}',\boldsymbol{\gamma}'), \end{equation*} where $L$ is defined in (\ref{eq:likelihood}), $\boldsymbol{\beta}^{(0)}=(\beta_{0}^{(0)},\dots,\beta_{p}^{(0)})'$ is a given initial value for $\boldsymbol{\beta}^\star$ and $\boldsymbol{\gamma}=(\gamma_1,\dots,\gamma_q)'$. Similarly to the approach of Section \ref{sec:estim}, we use the Newton-Raphson algorithm to obtain $\widehat{\boldsymbol{\gamma}}$, based on the following recursion for $r\geq 1$, starting from the initial value $\boldsymbol{\gamma}^{(0)}=(\gamma_1^{(0)},\dots,\gamma_q^{(0)})'$: \begin{equation}\label{eq:newton_raphson:gamma} \boldsymbol{\gamma}^{(r)}=\boldsymbol{\gamma}^{(r-1)}-\left[\frac{\partial^2 L}{\partial \boldsymbol{\gamma}'\partial \boldsymbol{\gamma}}({\boldsymbol{\beta}^{(0)}}',{\boldsymbol{\gamma}^{(r-1)}}')\right]^{-1} \frac{\partial L}{\partial \boldsymbol{\gamma}}({\boldsymbol{\beta}^{(0)}}',{\boldsymbol{\gamma}^{(r-1)}}'), \end{equation} where the first and second derivatives of $L$ are obtained using the same strategy as the one used for deriving Equations (\ref{eq:def:grad}) and (\ref{eq:def:hess}) in Section \ref{sec:estim}. \subsubsection{Variable selection: Estimation of $\boldsymbol{\beta}^\star$}\label{sec:variable} To perform variable selection among the $\beta_i^\star$'s of Model (\ref{eq:mut_Wt}), that is, to obtain a sparse estimator of $\boldsymbol{\beta}^\star$, we shall use a methodology inspired by \cite{friedman:hastie:tibshirani:2010} for fitting generalized linear models with $\ell_1$ penalties. It consists in penalizing a quadratic approximation to the log-likelihood obtained by a Taylor expansion.
Using $\boldsymbol{\beta}^{(0)}$ and $\widehat{\boldsymbol{\gamma}}$ defined in Section \ref{sec:estim_gamma}, the quadratic approximation is obtained as follows: \begin{align*} \widetilde{L}(\boldsymbol{\beta})&:=L(\beta_0,\dots,\beta_p,\widehat{\boldsymbol{\gamma}})\\ &\approx\widetilde{L}(\boldsymbol{\beta}^{(0)}) +\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}})(\boldsymbol{\beta}-\boldsymbol{\beta}^{(0)}) +\frac12 (\boldsymbol{\beta}-\boldsymbol{\beta}^{(0)})' \frac{\partial^2 L}{\partial \boldsymbol{\beta}\partial \boldsymbol{\beta}'}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}}) (\boldsymbol{\beta}-\boldsymbol{\beta}^{(0)}), \end{align*} where $$\frac{\partial L}{\partial \boldsymbol{\beta}}=\left(\frac{\partial L}{\partial \beta_0},\dots,\frac{\partial L}{\partial \beta_p}\right) \textrm{ and } \frac{\partial^2 L}{\partial \boldsymbol{\beta}\partial \boldsymbol{\beta}'}=\left(\frac{\partial^2 L}{\partial \beta_j \partial \beta_k}\right)_{0\leq j,k\leq p}.$$ Thus, \begin{align}\label{eq:Ltilde} \widetilde{L}(\boldsymbol{\beta})\approx\widetilde{L}(\boldsymbol{\beta}^{(0)})+\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}}) U(\boldsymbol{\nu}-\boldsymbol{\nu}^{(0)})-\frac12 (\boldsymbol{\nu}-\boldsymbol{\nu}^{(0)})' \Lambda (\boldsymbol{\nu}-\boldsymbol{\nu}^{(0)}), \end{align} where $U\Lambda U'$ is the singular value decomposition of the positive semidefinite symmetric matrix $-\frac{\partial^2 L}{\partial \boldsymbol{\beta}\partial \boldsymbol{\beta}'}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}})$ and $\boldsymbol{\nu}-\boldsymbol{\nu}^{(0)}=U'(\boldsymbol{\beta}-\boldsymbol{\beta}^{(0)})$. In order to obtain a sparse estimator of $\boldsymbol{\beta}^\star$, we propose using $\widehat{\boldsymbol{\beta}}(\lambda)$ defined by \begin{equation}\label{eq:beta_hat} \widehat{\boldsymbol{\beta}}(\lambda)=\textrm{Argmin}_{\boldsymbol{\beta}}\left\{-\widetilde{L}_Q(\boldsymbol{\beta})+\lambda \|\boldsymbol{\beta}\|_1\right\}, \end{equation} for a positive $\lambda$, where $\|\boldsymbol{\beta}\|_1=\sum_{k=0}^p |\beta_k|$ and $\widetilde{L}_Q(\boldsymbol{\beta})$ denotes the quadratic approximation of the log-likelihood. This quadratic approximation is defined by \begin{equation}\label{eq:LQtilde} -\widetilde{L}_Q(\boldsymbol{\beta})=\frac12\|\mathcal{Y}-\mathcal{X}\boldsymbol{\beta}\|_2^2, \end{equation} with \begin{equation}\label{eq:def_Y_X} \mathcal{Y}=\Lambda^{1/2}U'\boldsymbol{\beta}^{(0)} +\Lambda^{-1/2}U'\left(\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}})\right)' ,\; \mathcal{X}=\Lambda^{1/2}U' \end{equation} and $\|\cdot\|_2$ denoting the $\ell_2$ norm in $\mathbb{R}^{p+1}$. Computational details for obtaining the expression (\ref{eq:LQtilde}) of $\widetilde{L}_Q(\boldsymbol{\beta})$ appearing in Criterion (\ref{eq:beta_hat}) are provided in Section \ref{sub:var_sec}. To obtain the final estimator $\widehat{\boldsymbol{\beta}}$ of $\boldsymbol{\beta}^\star$, we shall consider two different approaches: \begin{itemize} \item \textsf{Standard stability selection.} It consists in using the stability selection procedure devised by \cite{meinshausen:buhlmann:2010}, which guarantees the robustness of the selected variables. This approach can be described as follows.
The vector $\mathcal{Y}$ defined in (\ref{eq:def_Y_X}) is randomly split into several subsamples of size $(p+1)/2$, which corresponds to half of the length of $\mathcal{Y}$. For each subsample $\mathcal{Y}^{(s)}$ and the corresponding design matrix $\mathcal{X}^{(s)}$, the LASSO criterion (\ref{eq:beta_hat}) is applied with a given $\lambda$, where $\mathcal{Y}$ and $\mathcal{X}$ are replaced by $\mathcal{Y}^{(s)}$ and $\mathcal{X}^{(s)}$, respectively. For each subsample, the indices $i$ of the non null $\widehat{\beta}_i$ are stored and, for a given threshold, we keep in the final set of selected variables only those appearing a number of times larger than this threshold. Concerning the choice of $\lambda$, we shall consider either the value obtained by cross-validation (Chapter 7 of \cite{hastie2009elements}) or the smallest element of the grid of $\lambda$ values provided by the R \texttt{glmnet} package. \item \textsf{Fast stability selection.} It consists in applying the LASSO criterion (\ref{eq:beta_hat}) for several values of $\lambda$. For each $\lambda$, the indices $i$ of the non null $\widehat{\beta}_i(\lambda)$ are stored and, for a given threshold, we keep in the final set of selected variables only those appearing a number of times larger than this threshold. \end{itemize} These approaches will be further investigated in Section \ref{sec:num}. \subsection{Practical implementation}\label{sec:practical} In practice, the previous approach can be summarized as follows. \begin{itemize} \item\textsf{Initialization.} We take for $\boldsymbol{\beta}^{(0)}$ the estimator of $\boldsymbol{\beta}^\star$ obtained by fitting a GLM to the observations $Y_1,\dots,Y_n$, thus ignoring the ARMA part of the model, in the case where $n>p$. If $p$ is larger than $n$, then a regularized criterion for GLMs can be used; see, for instance, \cite{friedman:hastie:tibshirani:2010}. For $\boldsymbol{\gamma}^{(0)}$, we take the null vector. \item\textsf{Newton-Raphson algorithm.} We use the recursion defined in (\ref{eq:newton_raphson:gamma}) with the initialization $(\boldsymbol{\beta}^{(0)},\boldsymbol{\gamma}^{(0)})$ obtained in the previous step and we stop at the iteration $R$ such that $\|\boldsymbol{\gamma}^{(R)}-\boldsymbol{\gamma}^{(R-1)}\|_\infty<10^{-6}$. \item\textsf{Variable selection.} To obtain a sparse estimator of $\boldsymbol{\beta}^\star$, we use Criterion (\ref{eq:beta_hat}), where $\boldsymbol{\beta}^{(0)}$ and $\widehat{\boldsymbol{\gamma}}$ appearing in (\ref{eq:def_Y_X}) are replaced by the values $\boldsymbol{\beta}^{(0)}$ and $\boldsymbol{\gamma}^{(R)}$ obtained in the previous steps. We thus get $\widehat{\boldsymbol{\beta}}$ by using one of the approaches described at the end of Section \ref{sec:variable}. \end{itemize} This procedure can be improved by iterating the \textsf{Newton-Raphson algorithm} and \textsf{Variable selection} steps. More precisely, let us denote by $\boldsymbol{\beta}_{1}^{(0)}$, $\boldsymbol{\gamma}_{1}^{(R_1)}$ and $\widehat{\boldsymbol{\beta}}_1$ the values of $\boldsymbol{\beta}^{(0)}$, $\boldsymbol{\gamma}^{(R)}$ and $\widehat{\boldsymbol{\beta}}$ obtained in the three steps described above at the first iteration. At the second iteration, $(\boldsymbol{\beta}^{(0)},\boldsymbol{\gamma}^{(0)})$ appearing in the \textsf{Newton-Raphson algorithm} step is replaced by $(\widehat{\boldsymbol{\beta}}_1,\boldsymbol{\gamma}_{1}^{(R_1)})$.
At the end of this second iteration, $\widehat{\boldsymbol{\beta}}_2$ and $\boldsymbol{\gamma}_{2}^{(R_2)}$ denote the obtained values of $\widehat{\boldsymbol{\beta}}$ and $\boldsymbol{\gamma}^{(R)}$, respectively. This scheme is iterated until the stabilization of $\boldsymbol{\gamma}_{k}^{(R_k)}$. \subsection{Consistency results}\label{sec:consistency} In this section, we shall establish the consistency of the estimator of $\gamma_1^\star$ in the case where $q=1$, based on $Y_1,\dots,Y_n$ defined in (\ref{eq:Yt}) and (\ref{eq:Zt}), where (\ref{eq:mut_Wt}) is replaced by \begin{equation}\label{eq:mut_simple} \mu_t^\star=\exp(W_t^\star) \textrm{ with } W_t^\star=\beta_0^\star+Z_t^\star. \end{equation} We limit ourselves to this framework since, in the more general one, the consistency is much trickier to handle and is beyond the scope of this paper. Note that some theoretical results have already been obtained in this framework (no covariates and $q=1$) by \cite{davis:dunsmuir:streett:2003} and \cite{davis:dunsmuir:street:2005}. However, here, we provide, on the one hand, a more detailed version of the proof of these results and, on the other hand, a proof of the consistency of $\widehat{\gamma}_1$ based on a stochastic equicontinuity result. \begin{theo}\label{theo:MA1} Assume that $Y_1,\dots,Y_n$ satisfy the model defined by (\ref{eq:Yt}), (\ref{eq:mut_simple}) and (\ref{eq:Zt}) with $q=1$ and $\gamma_1^\star\in\Gamma$, where $\Gamma$ is a compact set of $\mathbb{R}$ which does not contain 0. Assume also that $(W_t^\star)$ starts with its stationary invariant distribution. Let $\widehat{\gamma}_1$ be defined by: $$ \widehat{\gamma}_1=\textrm{Argmax}_{\gamma_1\in\Gamma}\; L(\beta_0^\star,\gamma_1), $$ where \begin{equation}\label{eq:L:beta_0} L(\beta_0^\star,\gamma_1)=\sum_{t=1}^n\left(Y_t W_t(\beta_0^\star,\gamma_1)-\exp(W_t(\beta_0^\star,\gamma_1))\right), \end{equation} with \begin{equation}\label{eq:W_Z} W_t(\beta_0^\star,\gamma_1)=\beta_0^\star+Z_t(\gamma_1)=\beta_0^\star+\gamma_1 E_{t-1}(\gamma_1), \end{equation} $$ E_{t-1}(\gamma_1)=Y_{t-1}\exp(-W_{t-1}(\beta_0^\star,\gamma_1))-1, \textrm{ if } t>1 \textrm{ and } E_{t-1}(\gamma_1)=0, \textrm{ if } t\leq 1. $$ Then $\widehat{\gamma}_1\stackrel{p}{\longrightarrow}\gamma_1^\star$, as $n$ tends to infinity, where $\stackrel{p}{\longrightarrow}$ denotes the convergence in probability. \end{theo} The proof of Theorem \ref{theo:MA1} is based on the following propositions, which are proved in Section \ref{sec:proofs} and constitute the classical arguments for establishing consistency results for maximum likelihood estimators. Note that we shall explain in the proof of Proposition \ref{prop1} why a stationary invariant distribution for $(W_t^\star)$ does exist. The main tools used for proving Propositions \ref{prop1} and \ref{prop3} are the Markov property and the ergodicity of $(W_t^\star)$. \begin{prop}\label{prop1} For all fixed $\gamma_1$, under the assumptions of Theorem \ref{theo:MA1}, \begin{equation}\label{eq:conv} \frac1n L(\beta_0^\star,\gamma_1)\stackrel{p}{\longrightarrow} \mathcal{L}(\gamma_1):=\mathbb{E}\left[Y_3 W_3(\beta_0^\star,\gamma_1)-\exp(W_3(\beta_0^\star,\gamma_1))\right], \textrm{ as $n$ tends to infinity.} \end{equation} \end{prop} \begin{prop}\label{prop2} The function $\mathcal{L}$ defined in (\ref{eq:conv}) has a unique maximum at the true parameter $\gamma_1=\gamma_1^\star$.
\end{prop} \begin{prop}\label{prop3} Under the assumptions of Theorem \ref{theo:MA1}, $$\sup_{\gamma_1\in\Gamma}\left|\frac{L(\beta_0^\star,\gamma_1)}{n}-\mathcal{L}(\gamma_1)\right|\stackrel{p}{\longrightarrow}0, \textrm{ as $n$ tends to infinity,}$$ where $\mathcal{L}(\gamma_1)$ is defined in (\ref{eq:conv}). \end{prop} \section{Numerical experiments}\label{sec:num} The goal of this section is to investigate the performance of our method from both a statistical and a numerical point of view, using synthetic data generated by the model defined by (\ref{eq:Yt}), (\ref{eq:mut_Wt}) and (\ref{eq:Zt}). \subsection{Statistical performance} \subsubsection{Estimation of the parameters when $p=0$} In this section, we investigate the statistical performance of our methodology in the model defined by (\ref{eq:Yt}), (\ref{eq:mut_Wt}) and (\ref{eq:Zt}) for $n$ in $\{50,100,250,500,1000\}$ in the case where $p=0$, namely when there are no covariates, and for $q$ in $\{1,2,3\}$. The performance of our approach for estimating $\beta_0^\star$ and the $\gamma_k^\star$'s is displayed in Figures \ref{fig:estim_beta}, \ref{fig:estim:gam1} and \ref{fig:estim:gam2_3}. We can see from these figures that the accuracy of the parameter estimations improves when $n$ increases, which corroborates the consistency result of Theorem \ref{theo:MA1} in the case $q=1$. \begin{figure}[!htbp] \includegraphics[scale=0.28]{plot_beta_ma1.pdf} \includegraphics[scale=0.28]{plot_beta_ma2.pdf} \includegraphics[scale=0.28]{plot_beta_ma3.pdf} \caption{Boxplots for the estimations of $\beta_0^\star=3$ in Model (\ref{eq:mut_Wt}) with no regressor and $q=1$ (left), $q=2$ (middle) and $q=3$ (right). The horizontal lines correspond to the value of $\beta_0^\star$.\label{fig:estim_beta}} \end{figure} \begin{figure}[!htbp] \includegraphics[scale=0.28]{plot_gamma1_ma1.pdf} \includegraphics[scale=0.28]{plot_gamma1_ma2.pdf} \includegraphics[scale=0.28]{plot_gamma1_ma3.pdf} \caption{Boxplots for the estimations of $\gamma_1^\star=0.5$ in Model (\ref{eq:mut_Wt}) with no regressor and $q=1$ (left), $q=2$ (middle) and $q=3$ (right). The horizontal lines correspond to the value of $\gamma_1^\star$. \label{fig:estim:gam1}} \end{figure} \begin{figure}[!htbp] \includegraphics[scale=0.28]{plot_gamma2_ma2.pdf} \includegraphics[scale=0.28]{plot_gamma2_ma3.pdf} \includegraphics[scale=0.28]{plot_gamma3_ma3.pdf} \caption{Boxplots for the estimations of $\gamma_2^\star=1/4$ in Model (\ref{eq:mut_Wt}) with no regressor and $q=2$ (left), $\gamma_2^\star=1/3$ in Model (\ref{eq:mut_Wt}) with no regressor and $q=3$ (middle) and of $\gamma_3^\star=1/4$ in Model (\ref{eq:mut_Wt}) with no regressor and $q=3$ (right). The horizontal lines correspond to the true values of the parameters. \label{fig:estim:gam2_3}} \end{figure} Moreover, it has to be noticed that in this particular context where there are no covariates ($p=0$), the performance of our approach in terms of parameter estimation is similar to that of the \texttt{glarma} package described in \cite{glarma:package}. \subsubsection{Estimation of the parameters when $p\geq 1$ and $\boldsymbol{\beta}^\star$ is sparse}\label{sec:sparse_estim} In this section, we assess the performance of our methodology in terms of support recovery, namely the identification of the non null coefficients of $\boldsymbol{\beta}^\star$, and in terms of the estimation of $\boldsymbol{\gamma}^\star$.
We shall consider $Y_1,\dots,Y_n$ satisfying the model defined by (\ref{eq:Yt}), (\ref{eq:mut_Wt}) and (\ref{eq:Zt}) with covariates chosen in a Fourier basis, for $n=1000$ in the first two paragraphs, $q\in\{1,2,3\}$, $p=100$ and two sparsity levels (5\% or 10\% of non null coefficients in $\boldsymbol{\beta}^\star$). More precisely, when the sparsity level is 5\% (resp.\ 10\%), all the $\beta_i^\star$'s are assumed to be equal to zero except for five (resp.\ ten) of them, for which the values are given in the caption of Figure \ref{fig:TPR:FPR:1} (resp.\ in the caption of Figure \ref{fig:TPR:FPR:1:10} given in the Appendix). Other values of $n$ (150, 200, 500, 1000) will be considered in the third paragraph to evaluate the impact of $n$ on the performance of our approach. \textbf{Estimation of the support of $\boldsymbol{\beta}^\star$.} In this paragraph, we focus on the performance of our approach for retrieving the support of $\boldsymbol{\beta}^\star$ by computing the True Positive Rate (TPR) and the False Positive Rate (FPR). We shall consider the two methods proposed in Section \ref{sec:variable}: standard stability selection (\verb|ss_cv| and \verb|ss_min|, depending on the choice of $\lambda$) and fast stability selection (\verb|fast_ss|). For comparison purposes, we shall also consider the standard Lasso approach for GLMs proposed by \cite{friedman:hastie:tibshirani:2010}, where the parameter $\lambda$ is chosen either by standard cross-validation (\verb|lasso_cv|) or as the optimal value maximizing the difference between the TPR and the FPR (\verb|lasso_best|). Figures \ref{fig:TPR:FPR:1}, \ref{fig:TPR:FPR:2} and \ref{fig:TPR:FPR:3} display the TPR and the FPR of the previously mentioned approaches with respect to the threshold defined at the end of Section \ref{sec:variable} when $n=1000$, the sparsity level is equal to 5\% and $q=1$, 2 and 3, respectively. We can see from these figures that, when the threshold is well tuned, our approaches outperform the classical Lasso even when the parameter $\lambda$ is chosen in an optimal way. More precisely, the thresholds 0.4, 0.7 and 0.8 achieve a satisfactory trade-off between the TPR and the FPR for \verb|fast_ss|, \verb|ss_cv| and \verb|ss_min|, respectively. The conclusions are similar in the case where the sparsity level is equal to 10\%; the corresponding figures (\ref{fig:TPR:FPR:1:10}, \ref{fig:TPR:FPR:2:10} and \ref{fig:TPR:FPR:3:10}) are given in the Appendix. We can observe from these figures that the performance of \verb|fast_ss| is slightly better than that of \verb|ss_cv| and \verb|ss_min| when the sparsity level is equal to 5\%, and the reverse holds when the sparsity level is equal to 10\%. \begin{figure}[!htbp] \includegraphics[scale=0.28]{error_bars_1000_5_1.pdf} \caption{Error bars of the TPR and FPR associated with the support recovery of $\boldsymbol{\beta}^\star$ for five methods with respect to the thresholds when $n=1000$, $q=1$, $p=100$ and a 5\% sparsity level. All the $\beta_i^\star=0$ except for five of them: $\beta_1^\star=1.73$, $\beta_3^\star=0.38$, $\beta_{17}^\star=0.29$, $\beta_{33}^\star=-0.64$ and $\beta_{44}^\star=-0.13$. \label{fig:TPR:FPR:1}} \end{figure} \begin{figure}[!htbp] \includegraphics[scale=0.28]{error_bars_1000_5_2.pdf} \caption{Error bars of the TPR and FPR associated with the support recovery of $\boldsymbol{\beta}^\star$ for five methods with respect to the thresholds when $n=1000$, $q=2$, $p=100$ and a 5\% sparsity level.
All the $\beta_i^\star=0$ except for five of them: $\beta_1^\star=1.73$, $\beta_3^\star=0.38$, $\beta_{17}^\star=0.29$, $\beta_{33}^\star=-0.64$ and $\beta_{44}^\star=-0.13$.\label{fig:TPR:FPR:2}} \end{figure} \begin{figure}[!htbp] \includegraphics[scale=0.28]{error_bars_1000_5_3.pdf} \caption{Error bars of the TPR and FPR associated to the support recovery of $\boldsymbol{\beta}^\star$ for five methods with respect to the thresholds when $n=1000$, $q=3$, $p=100$ and a 5\% sparsity level. All the $\beta_i^\star=0$ except for five of them: $\beta_1^\star=1.73$, $\beta_3^\star=0.38$, $\beta_{17}^\star=0.29$, $\beta_{33}^\star=-0.64$ and $\beta_{44}^\star=-0.13$.\label{fig:TPR:FPR:3}} \end{figure} We also compare our approach with the method implemented in the \texttt{glarma} package of \cite{glarma:package} in the case where $q=1$ and the sparsity level is equal to 5\%. Since this method is not devised for performing variable selection, we consider that a given component of $\boldsymbol{\beta}^\star$ is estimated by 0 if its estimate obtained by the \texttt{glarma} package is smaller than a given threshold. The results are displayed in Figure \ref{fig:TPR:FPR:glarma} for different thresholds ranging from $10^{-9}$ to 0.1. We can see from this figure that, even for the best choice of the threshold, the variable selection results provided by the \texttt{glarma} package underperform our method. \begin{figure}[!htbp] \includegraphics[scale=0.28]{error_bars_glarma.pdf} \caption{Error bars of the TPR and FPR associated to the support recovery of $\boldsymbol{\beta}^\star$ obtained with the \texttt{glarma} package for different thresholds when $n=1000$, $q=1$, $p=100$ and a 5\% sparsity level. All the $\beta_i^\star=0$ except for five of them: $\beta_1^\star=1.73$, $\beta_3^\star=0.38$, $\beta_{17}^\star=0.29$, $\beta_{33}^\star=-0.64$ and $\beta_{44}^\star=-0.13$. \label{fig:TPR:FPR:glarma}} \end{figure} \textbf{Estimation of $\boldsymbol{\gamma}^\star$} Figures \ref{fig:gamma:5:cv}, \ref{fig:gamma:5:fast} and \ref{fig:gamma:5:min} display the boxplots for the estimations of $\boldsymbol{\gamma}^\star$ in Model (\ref{eq:mut_Wt}) with a 5\% sparsity level and $q=1,2,3$ obtained by \verb|ss_cv|, \verb|fast_ss| and \verb|ss_min|, respectively. The threshold chosen for each of these methods is the one achieving a satisfactory trade-off between the TPR and the FPR, namely 0.7, 0.4 and 0.8. We can see from these figures that all these approaches provide accurate estimations of $\boldsymbol{\gamma}^\star$ from the second iteration onward. The conclusions are similar in the case where the sparsity level is equal to 10\%; the corresponding figures \ref{fig:gamma:10:cv}, \ref{fig:gamma:10:fast} and \ref{fig:gamma:10:min} are given in the Appendix. \begin{figure}[!htbp] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_5_1_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_5_2_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_5_2_q2.pdf}\\ \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_5_3_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_5_3_q2.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_5_3_q3.pdf}\\ \end{tabular} \caption{Boxplots for the estimations of $\boldsymbol{\gamma}^\star$ in Model (\ref{eq:mut_Wt}) with a 5\% sparsity level and $q=1,2,3$ obtained by \texttt{ss\_cv}.
Top: $q=1$ and $\gamma_1^\star=0.5$ (left), $q=2$ and $\gamma_1^\star=0.5$ (middle), $q=2$ and $\gamma_2^\star=0.25$ (right). Bottom: $q=3$ and $\gamma_1^\star=0.5$ (left), $q=3$ and $\gamma_2^\star=1/3$ (middle), $q=3$ and $\gamma_3^\star=0.25$ (right). The horizontal lines correspond to the values of the $\gamma_i^\star$'s. \label{fig:gamma:5:cv}} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_5_1_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_5_2_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_5_2_q2.pdf}\\ \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_5_3_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_5_3_q2.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_5_3_q3.pdf}\\ \end{tabular} \caption{Boxplots for the estimations of $\boldsymbol{\gamma}^\star$ in Model (\ref{eq:mut_Wt}) with a 5\% sparsity level and $q=1,2,3$ obtained by \texttt{fast\_ss}. Top: $q=1$ and $\gamma_1^\star=0.5$ (left), $q=2$ and $\gamma_1^\star=0.5$ (middle), $q=2$ and $\gamma_2^\star=0.25$ (right). Bottom: $q=3$ and $\gamma_1^\star=0.5$ (left), $q=3$ and $\gamma_2^\star=1/3$ (middle), $q=3$ and $\gamma_3^\star=0.25$ (right). The horizontal lines correspond to the values of the $\gamma_i^\star$'s.\label{fig:gamma:5:fast}} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_5_1_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_5_2_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_5_2_q2.pdf}\\ \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_5_3_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_5_3_q2.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_5_3_q3.pdf}\\ \end{tabular} \caption{Boxplots for the estimations of $\boldsymbol{\gamma}^\star$ in Model (\ref{eq:mut_Wt}) with a 5\% sparsity level and $q=1,2,3$ obtained by \texttt{ss\_min}. Top: $q=1$ and $\gamma_1^\star=0.5$ (left), $q=2$ and $\gamma_1^\star=0.5$ (middle), $q=2$ and $\gamma_2^\star=0.25$ (right). Bottom: $q=3$ and $\gamma_1^\star=0.5$ (left), $q=3$ and $\gamma_2^\star=1/3$ (middle), $q=3$ and $\gamma_3^\star=0.25$ (right). The horizontal lines correspond to the values of the $\gamma_i^\star$'s.\label{fig:gamma:5:min}} \end{center} \end{figure} \textbf{Impact of the value of $n$} In this paragraph, we study the impact of the value of $n$ on the TPR and the FPR associated to the support recovery of $\boldsymbol{\beta}^\star$ and on the estimation of $\boldsymbol{\gamma}^\star$ for \texttt{ss\_min}, the other approaches providing similar results. Based on Figures \ref{fig:TPR:FPR:thresh_5} and \ref{fig:TPR:FPR:thresh_10}, we chose a threshold equal to 0.7 for both sparsity levels (5\% and 10\%), which provides a good trade-off between TPR and FPR for all values of $n$. We can see from Figure \ref{fig:TPR:FPR:n} that \texttt{ss\_min} with this threshold outperforms \texttt{lasso\_cv} for the 5\% sparsity level and all the values of $n$ considered. In the case where the sparsity level is equal to 10\%, \texttt{lasso\_cv} has a slightly larger TPR for $n=150$ and $n=200$. However, the FPR of \texttt{ss\_min} is much smaller.
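The TPR and FPR reported throughout this paragraph can be computed as in the following short Python sketch, where a coefficient is declared selected when its estimate exceeds a given threshold in absolute value; these are the usual definitions, and the names are ours.
\begin{verbatim}
import numpy as np

def tpr_fpr(beta_hat, beta_star, threshold=0.0):
    # True support = indices of the non-null components of beta_star.
    selected = np.abs(beta_hat) > threshold
    support = beta_star != 0
    tpr = (selected & support).sum() / max(support.sum(), 1)
    fpr = (selected & ~support).sum() / max((~support).sum(), 1)
    return tpr, fpr
\end{verbatim}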
\begin{figure}[!htbp] \includegraphics[scale=0.22]{TPR_FPR_by_threshold_5.pdf} \caption{Error bars of the TPR and FPR associated to the support recovery of $\boldsymbol{\beta}^\star$ for \texttt{ss\_min} with respect to the thresholds for different values of $n$, $q=1$, $p=100$ and a 5\% sparsity level. \label{fig:TPR:FPR:thresh_5}} \end{figure} \begin{figure}[!htbp] \includegraphics[scale=0.22]{TPR_FPR_by_threshold_10.pdf} \caption{Error bars of the TPR and FPR associated to the support recovery of $\boldsymbol{\beta}^\star$ for \texttt{ss\_min} with respect to the thresholds for different values of $n$, $q=1$, $p=100$ and a 10\% sparsity level. \label{fig:TPR:FPR:thresh_10}} \end{figure} \begin{figure}[!htbp] \includegraphics[scale=0.22]{TPR_FPR_by_n.pdf} \caption{Error bars of the TPR and FPR associated to the support recovery of $\boldsymbol{\beta}^\star$ for \texttt{ss\_min} and \texttt{lasso\_cv} for different values of $n$, $q=1$, $p=100$ and different sparsity levels. \label{fig:TPR:FPR:n}} \end{figure} Figure \ref{fig:gamma:iter} displays the boxplots for the estimations of $\boldsymbol{\gamma}^\star$ in Model (\ref{eq:mut_Wt}) for $q=1$, $p=100$, different values of $n$ (150, 200, 500, 1000) and sparsity levels (5\% and 10\%) obtained by \texttt{ss\_min} with a threshold of 0.7 for six iterations. We can see from this figure that this approach provides accurate estimations of $\gamma_1^\star$ from Iteration 2 onward, especially when $n$ is larger than 200. \begin{figure}[!htbp] \includegraphics[scale=0.22]{gamma_est_by_iter.pdf} \caption{Boxplots for the estimations of $\boldsymbol{\gamma}^\star$ in Model (\ref{eq:mut_Wt}) for $q=1$, $p=100$, different values of $n$ and sparsity levels (left: 5\%, right: 10\%) obtained by \texttt{ss\_min} with a threshold of 0.7 for different iterations (\texttt{iter}). \label{fig:gamma:iter}} \end{figure} \subsection{Numerical performance} Figure \ref{fig:time} displays the means of the computational times for \texttt{ss\_min} and \texttt{fast\_ss}. The performance of \texttt{ss\_cv} is not displayed since it is similar to that of \texttt{ss\_min}. We can see from this figure that it takes around 1 minute to process observations $Y_1,\dots,Y_n$ satisfying Model (\ref{eq:Yt}) for a given threshold and one iteration, when $n=1000$ and $p=100$. Moreover, we can observe that the computational burden of \texttt{fast\_ss} is slightly smaller than that of \texttt{ss\_min}. \begin{figure}[!htbp] \includegraphics[scale=0.2]{time_5_1iter.pdf} \caption{Means of the computational times in seconds for \texttt{ss\_min} and \texttt{fast\_ss} for $p=100$, different values of $n$ and $q$, a given threshold and one iteration.\label{fig:time}} \end{figure} \section{Proofs}\label{sec:proofs} \subsection{\textcolor{black}{Computation of the first and second derivatives of $W_t$ defined in (\ref{eq:Wt})}} The computations given below are similar to those provided in \cite{davis:dunsmuir:street:2005} but are specific to the parametrization $\boldsymbol{\delta}=(\boldsymbol{\beta}',\boldsymbol{\gamma}')$ considered in this paper.
\subsubsection{\textcolor{black}{Computation of the first derivatives of $W_t$}}\label{subsub:first_derive} By the definition of $W_t$ given in (\ref{eq:Wt}), we get \begin{equation*} \frac{\partial W_t}{\partial \boldsymbol{\delta}}(\boldsymbol{\delta})=\frac{\partial\boldsymbol{\beta}' x_t}{\partial \boldsymbol{\delta}}+\frac{\partial Z_t}{\partial \boldsymbol{\delta}} (\boldsymbol{\delta}), \end{equation*} where $\boldsymbol{\beta}$, $x_t$ and $Z_t$ are defined in (\ref{eq:Wt}). More precisely, for all $k\in\{0,\dots,p\}$, $\ell\in\{1,\dots,q\}$ and $t\in\{1,\dots,n\}$, by (\ref{eq:Et}), \begin{align}\label{eq:gradW_beta} \frac{\partial W_t}{\partial \beta_k}&=x_{t,k}+\frac{\partial Z_t}{\partial \beta_k}=x_{t,k}+\sum_{j=1}^{q\wedge (t-1)}\gamma_j\frac{\partial E_{t-j}}{\partial \beta_k}\nonumber\\ &=x_{t,k}-\sum_{j=1}^{q\wedge (t-1)}\gamma_j Y_{t-j}\frac{\partial W_{t-j}}{\partial \beta_k}\exp(-W_{t-j})=x_{t,k}-\sum_{j=1}^{q\wedge (t-1)}\gamma_j(1+E_{t-j})\frac{\partial W_{t-j}}{\partial \beta_k},\\ \frac{\partial W_t}{\partial \gamma_\ell}&=E_{t-\ell}+\sum_{j=1}^{q\wedge (t-1)} \gamma_j\frac{\partial E_{t-j}}{\partial\gamma_\ell}\nonumber\\\label{eq:gradW_gamma} &=E_{t-\ell}-\sum_{j=1}^{q\wedge (t-1)}\gamma_j Y_{t-j}\frac{\partial W_{t-j}}{\partial \gamma_\ell}\exp(-W_{t-j})=E_{t-\ell}-\sum_{j=1}^{q\wedge (t-1)}\gamma_j(1+E_{t-j})\frac{\partial W_{t-j}}{\partial \gamma_\ell}, \end{align} where we used that $E_t=0,\; \forall t\leq 0$. The first derivatives of $W_t$ are thus obtained from the following recursive expressions. For all $k\in\{0,\dots,p\}$ \begin{align*} \frac{\partial W_1}{\partial \beta_k}&=x_{1,k},\\ \frac{\partial W_2}{\partial \beta_k}&=x_{2,k}-\gamma_1(1+E_{1})\frac{\partial W_{1}}{\partial \beta_k}, \end{align*} where \begin{equation}\label{eq:E1} W_1=\boldsymbol{\beta}' x_1 \textrm{ and } E_1=Y_1\exp(-W_1)-1. \end{equation} Moreover, \begin{equation*} \frac{\partial W_3}{\partial \beta_k}=x_{3,k}-\gamma_1(1+E_{2})\frac{\partial W_{2}}{\partial \beta_k}-\gamma_2(1+E_{1})\frac{\partial W_{1}}{\partial \beta_k}, \end{equation*} where \begin{equation}\label{eq:E2} W_2=\boldsymbol{\beta}' x_2 +\gamma_1 E_{1},\; E_2=Y_2\exp(-W_2)-1, \end{equation} and so on. In the same way, for all $\ell\in\{1,\dots,q\}$ \begin{align*} \frac{\partial W_1}{\partial \gamma_\ell}&=0,\\ \frac{\partial W_2}{\partial \gamma_\ell}&=E_{2-\ell},\\ \frac{\partial W_3}{\partial \gamma_\ell}&=E_{3-\ell}-\gamma_1(1+E_{2})\frac{\partial W_{2}}{\partial \gamma_\ell}, \end{align*} and so on, where $E_t=0,\; \forall t\leq 0$ and $E_1$, $E_2$ are defined in (\ref{eq:E1}) and (\ref{eq:E2}), respectively.
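These recursions are straightforward to implement; the following Python sketch computes $\partial W_t/\partial\beta_k$ and $\partial W_t/\partial\gamma_\ell$ jointly with $W_t$ and $E_t$ along an observed series, directly from (\ref{eq:gradW_beta}) and (\ref{eq:gradW_gamma}). It is only meant to illustrate the recursions, and all names are ours.
\begin{verbatim}
import numpy as np

def gradients_W(beta, gamma, X, Y):
    # X: (n, p+1) design matrix, Y: observed counts; E_t = 0 for t <= 0.
    n, P = X.shape
    q = len(gamma)
    W, E = np.zeros(n), np.zeros(n)
    dW_db = np.zeros((n, P))   # dW_t / d beta_k,  cf. (gradW_beta)
    dW_dg = np.zeros((n, q))   # dW_t / d gamma_l, cf. (gradW_gamma)
    for t in range(n):
        m = min(q, t)
        W[t] = X[t] @ beta + sum(gamma[j] * E[t-1-j] for j in range(m))
        dW_db[t] = X[t] - sum(gamma[j] * (1 + E[t-1-j]) * dW_db[t-1-j]
                              for j in range(m))
        for l in range(q):
            lead = E[t-1-l] if t-1-l >= 0 else 0.0   # E_t = 0 for t <= 0
            dW_dg[t, l] = lead - sum(gamma[j] * (1 + E[t-1-j]) * dW_dg[t-1-j, l]
                                     for j in range(m))
        E[t] = Y[t] * np.exp(-W[t]) - 1.0
    return W, dW_db, dW_dg
\end{verbatim}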
\subsubsection{\textcolor{black}{Computation of the second derivatives of $W_t$}}\label{subsub:second_derive} Using (\ref{eq:gradW_beta}) and (\ref{eq:gradW_gamma}), we get that for all $j,k\in\{0,\dots,p\}$, $\ell,m\in\{1,\dots,q\}$ and $t\in\{1,\dots,n\}$, \begin{align*} \frac{\partial^2 W_t}{\partial \beta_j\partial \beta_k}&=-\sum_{i=1}^{q\wedge (t-1)}\gamma_i(1+E_{t-i})\frac{\partial^2 W_{t-i}}{\partial \beta_j\partial \beta_k} -\sum_{i=1}^{q\wedge (t-1)}\gamma_i\frac{\partial E_{t-i}}{\partial\beta_j}\frac{\partial W_{t-i}}{\partial \beta_k}\\ &=-\sum_{i=1}^{q\wedge (t-1)}\gamma_i(1+E_{t-i})\frac{\partial^2 W_{t-i}}{\partial \beta_j\partial \beta_k} +\sum_{i=1}^{q\wedge (t-1)}\gamma_i(1+E_{t-i})\frac{\partial W_{t-i}}{\partial \beta_j}\frac{\partial W_{t-i}}{\partial \beta_k},\\ \frac{\partial^2 W_t}{\partial \beta_k\partial\gamma_\ell}&=-(1+E_{t-\ell})\frac{\partial W_{t-\ell}}{\partial \beta_k} -\sum_{i=1}^{q\wedge (t-1)}\gamma_i\left\{\frac{\partial W_{t-i}}{\partial \beta_k}\frac{\partial E_{t-i}}{\partial\gamma_\ell} +(1+E_{t-i})\frac{\partial^2 W_{t-i}}{\partial \beta_k\partial\gamma_\ell}\right\}\\ &=-(1+E_{t-\ell})\frac{\partial W_{t-\ell}}{\partial \beta_k} -\sum_{i=1}^{q\wedge (t-1)}\gamma_i\left\{-(1+E_{t-i})\frac{\partial W_{t-i}}{\partial\beta_k}\frac{\partial W_{t-i}}{\partial \gamma_\ell} +(1+E_{t-i})\frac{\partial^2 W_{t-i}}{\partial \beta_k\partial\gamma_\ell}\right\},\\ \frac{\partial^2 W_t}{\partial \gamma_\ell\partial\gamma_m}&=\frac{\partial E_{t-\ell}}{\partial \gamma_m} -(1+E_{t-m})\frac{\partial W_{t-m}}{\partial \gamma_\ell} -\sum_{i=1}^{q\wedge (t-1)}\gamma_i\left\{\frac{\partial W_{t-i}}{\partial \gamma_\ell} \frac{\partial E_{t-i}}{\partial \gamma_m} +(1+E_{t-i})\frac{\partial^2 W_{t-i}}{\partial \gamma_\ell\partial \gamma_m}\right\}\\ &=-(1+E_{t-\ell})\frac{\partial W_{t-\ell}}{\partial \gamma_m}-(1+E_{t-m})\frac{\partial W_{t-m}}{\partial \gamma_\ell} \\ &-\sum_{i=1}^{q\wedge (t-1)}\gamma_i\left\{-(1+E_{t-i})\frac{\partial W_{t-i}}{\partial \gamma_\ell}\frac{\partial W_{t-i}}{\partial \gamma_m} +(1+E_{t-i})\frac{\partial^2 W_{t-i}}{\partial \gamma_\ell\partial \gamma_m}\right\}.\\ \end{align*} To compute the second derivatives of $W_t$, we shall use the following recursive expressions for all $j,k\in\{0,\dots,p\}$ \begin{align*} \frac{\partial^2 W_1}{\partial \beta_j\partial \beta_k}&=0,\\ \frac{\partial^2 W_2}{\partial \beta_j\partial \beta_k}&=\gamma_1(1+E_1)x_{1,j}x_{1,k}, \end{align*} where $E_1$ is defined in (\ref{eq:E1}) and so on. Moreover, for all $k\in\{0,\dots,p\}$ and $\ell\in\{1,\dots,q\}$ \begin{align*} \frac{\partial^2 W_1}{\partial \beta_k\partial\gamma_\ell}&=0,\\ \frac{\partial^2 W_2}{\partial \beta_k\partial\gamma_\ell}&=-(1+E_{2-\ell})\frac{\partial W_{2-\ell}}{\partial \beta_k}, \end{align*} where $E_t=0$ for all $t\leq 0$ and the first derivatives of $W_t$ are computed in (\ref{eq:gradW_beta}). Note also that \begin{align*} \frac{\partial^2 W_1}{\partial \gamma_\ell\partial\gamma_m}&=0,\\ \frac{\partial^2 W_2}{\partial \gamma_\ell\partial\gamma_m}&=0 \end{align*} and so on. 
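The second derivatives follow the same pattern; as an illustration, the $\beta\beta$-block can be accumulated as in the sketch below (reusing \texttt{E} and \texttt{dW\_db} from the previous sketch), since each term of the first recursion above has the form $\gamma_i(1+E_{t-i})\left(\frac{\partial W_{t-i}}{\partial\beta_j}\frac{\partial W_{t-i}}{\partial\beta_k}-\frac{\partial^2 W_{t-i}}{\partial\beta_j\partial\beta_k}\right)$. The other blocks can be coded analogously; again, this is only an illustrative sketch.
\begin{verbatim}
import numpy as np

def hessian_bb(gamma, E, dW_db):
    # beta-beta block of the second derivatives of W_t, per the recursion above.
    n, P = dW_db.shape
    q = len(gamma)
    H = np.zeros((n, P, P))
    for t in range(n):
        for i in range(min(q, t)):
            c = gamma[i] * (1 + E[t-1-i])
            H[t] += c * (np.outer(dW_db[t-1-i], dW_db[t-1-i]) - H[t-1-i])
    return H
\end{verbatim}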
\subsection{Computational details for obtaining Criterion (\ref{eq:beta_hat})}\label{sub:var_sec} By \eqref{eq:Ltilde}, \begin{align*} \widetilde{L}(\boldsymbol{\beta})=\widetilde{L}(\boldsymbol{\beta}^{(0)})+\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}}) U(\boldsymbol{\nu}-\boldsymbol{\nu}^{(0)})-\frac12 (\boldsymbol{\nu}-\boldsymbol{\nu}^{(0)})' \Lambda (\boldsymbol{\nu}-\boldsymbol{\nu}^{(0)}), \end{align*} where $\boldsymbol{\nu}-\boldsymbol{\nu}^{(0)}=U'(\boldsymbol{\beta}-\boldsymbol{\beta}^{(0)})$. Hence, \begin{align*} \widetilde{L}(\boldsymbol{\beta})&=\widetilde{L}(\boldsymbol{\beta}^{(0)})+\sum_{k=0}^p \left(\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}}) U\right)_k (\nu_k-\nu_{k}^{(0)}) -\frac12\sum_{k=0}^p\lambda_k (\nu_k-\nu_{k}^{(0)})^2\\ &=\widetilde{L}(\boldsymbol{\beta}^{(0)})-\frac12\sum_{k=0}^p\lambda_k\left(\nu_k-\nu_{k}^{(0)}-\frac{1}{\lambda_k} \left(\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}}) U\right)_k\right)^2 +\sum_{k=0}^p\frac{1}{2\lambda_k}\left(\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}}) U\right)_k^2, \end{align*} where the $\lambda_k$'s are the diagonal terms of $\Lambda$. Since the only term depending on $\boldsymbol{\beta}$ is the second one in the last expression of $\widetilde{L}(\boldsymbol{\beta})$, we define $\widetilde{L}_Q(\boldsymbol{\beta})$ appearing in Criterion (\ref{eq:beta_hat}) as follows: \begin{eqnarray*} -\widetilde{L}_Q(\boldsymbol{\beta})&=&\frac12\sum_{k=0}^p\lambda_k\left(\nu_k-\nu_{k}^{(0)}-\frac{1}{\lambda_k} \left(\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}}) U\right)_k\right)^2\\ &=&\frac12 \left\|\Lambda^{1/2}\left(\boldsymbol{\nu}-\boldsymbol{\nu}^{(0)}-\Lambda^{-1} \left(\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}}) U\right)' \right)\right\|_2^2\\ &=&\frac12 \left\|\Lambda^{1/2}U'(\boldsymbol{\beta}-\boldsymbol{\beta}^{(0)})-\Lambda^{-1/2} U' \left(\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}})\right)' \right\|_2^2\\ &=&\frac12 \left\|\Lambda^{1/2}U'(\boldsymbol{\beta}^{(0)}-\boldsymbol{\beta})+\Lambda^{-1/2} U' \left(\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}})\right)'\right\|_2^2\\ &=&\frac12\|\mathcal{Y}-\mathcal{X}\boldsymbol{\beta}\|_2^2, \end{eqnarray*} where \begin{equation*} \mathcal{Y}=\Lambda^{1/2}U'\boldsymbol{\beta}^{(0)} +\Lambda^{-1/2}U'\left(\frac{\partial L}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}^{(0)},\widehat{\boldsymbol{\gamma}})\right)' ,\; \mathcal{X}=\Lambda^{1/2}U'. \end{equation*} \subsection{Proofs of Propositions \ref{prop1}, \ref{prop2} and \ref{prop3} and of Lemma \ref{lem:aperiodic_doeblin}} This section contains the proofs of Propositions \ref{prop1}, \ref{prop2} and \ref{prop3}, as well as the proof of Lemma \ref{lem:aperiodic_doeblin}. \subsubsection{\textcolor{black}{Proof of Proposition \ref{prop1}}} \textcolor{black}{We first establish the following lemma, which is needed for proving Proposition \ref{prop1}.} \begin{lemma}\label{lem:aperiodic_doeblin} $(W_t^\star)$ is an aperiodic Markov process satisfying Doeblin's condition.
\end{lemma} \begin{proof}[Proof of Lemma \ref{lem:aperiodic_doeblin}] By (\ref{eq:mut_simple}) and (\ref{eq:Zt}), we observe that \begin{equation}\label{eq:Wtstar} W_t^\star=(\beta_0^\star-\gamma_1^\star)+\gamma_1^\star Y_{t-1}\exp(-W_{t-1}^\star). \end{equation} Thus, $\mathcal{F}_{t-2}=\mathcal{F}_{t-1}^{W^\star}:=\sigma(W_s^\star,\,s\leq t-1)$. By (\ref{eq:Yt}), the distribution of $Y_{t-1}$ conditionally on $\mathcal{F}_{t-2}$ is $\mathcal{P}(\exp(W_{t-1}^\star))$. Hence, the distribution of $W_t^\star$ conditionally on $\mathcal{F}_{t-1}^{W^\star}$ is the same as the distribution of $W_t^\star$ conditionally on $W_{t-1}^\star$, which means that $(W_t^\star)$ has the Markov property. Let us now prove that $(W_t^\star)$ is strongly aperiodic, which implies that it is aperiodic. We have $$ \mathbb{P}(W^\star_{t}=\beta_0^\star-\gamma_1^\star | W^\star_{t-1}=\beta_0^\star-\gamma_1^\star) =\mathbb{P}(Y_{t-1}=0 | W^\star_{t-1}=\beta_0^\star-\gamma_1^\star)=\exp(-\exp(\beta_0^\star-\gamma_1^\star))>0, $$ where the first equality comes from (\ref{eq:Wtstar}) and the last equality comes from (\ref{eq:Yt}) since $\mathcal{F}_{t-2}=\mathcal{F}_{t-1}^{W^\star}$. To prove that $(W_t^\star)$ satisfies Doeblin's condition, namely that there exists a probability measure $\nu$ with the property that, for some $m\geq 1$, $\varepsilon>0$ and $\delta >0$, \begin{equation}\label{eq:doeblin} \nu(B)>\varepsilon \Longrightarrow \mathbb{P}(W_{t+m-1}^\star\in B,W_{t+m-2}^\star\in B,\dots,W_{t+1}^\star\in B,W_{t}^\star\in B\,|\,W_{t-1}^\star=x)\geq\delta, \end{equation} for all $x$ in the state space $X$ of $W_t^\star$ and all $B$ in the Borel sets of $X$, we refer the reader to the proof of Proposition 2 in \cite{davis:dunsmuir:streett:2003}. \end{proof} \begin{proof}[Proof of Proposition \ref{prop1}] For proving Proposition \ref{prop1}, we shall use Theorems 1.3.3 and 1.3.5 of \cite{taniguchibook:2012}. In order to apply these theorems, it is enough to prove that $(W_t^\star)$ is a strictly stationary and ergodic process, since $Y_t W_t(\beta_0^\star,\gamma_1)-\exp(W_t(\beta_0^\star,\gamma_1))$ is a measurable function of $W_{t+1}^\star,W_t^\star,\dots,W_2^\star$. Note that the latter fact comes from (\ref{eq:mut_simple}) and (\ref{eq:Zt}) for $Y_t$ and from (\ref{eq:Wt}) with $q=1$ and $p=0$ for $W_t$. In order to prove that $(W_t^\star)$ is a strictly stationary and ergodic process, we first have to prove that $(W_t^\star)$ is an aperiodic Markov process satisfying Doeblin's condition, see Lemma \ref{lem:aperiodic_doeblin}. The statement of Lemma \ref{lem:aperiodic_doeblin} corresponds to Assertion (iv) of Theorem 16.0.2 of \cite{meyn:tweedie}, which is equivalent to Assertion (i) of this theorem, and implies that $(W_t^\star)$ is uniformly ergodic. Hence, by Definition (16.6) of uniform ergodicity given in \cite{meyn:tweedie}, there exists a unique stationary invariant measure for $(W_t^\star)$; see also the paragraph below Equation (1.3) of \cite{Sandric:2017} for an additional justification. Combining the existence of a unique stationary invariant measure for $(W_t^\star)$ with the following arguments shows that $(W_t^\star)$ is a strictly stationary process and also an ergodic Markov process. By Theorem 3.6.3, Corollary 3.6.1 and Definition 3.6.6 of \cite{stout:1974}, if the process $(W_t^\star)$ is started with its unique stationary invariant distribution, $(W_t^\star)$ is a strictly stationary process.
By Definition 3.6.8 of \cite{stout:1974}, the existence of a unique stationary invariant measure for $(W_t^\star)$ means that $(W_t^\star)$ is an ergodic Markov process; see also the paragraph below (b) in \cite[p.~717]{Sandric:2017}. Finally, by Theorem 3.6.5 of \cite{stout:1974}, since $(W_t^\star)$ is an ergodic Markov process and a strictly stationary process, $(W_t^\star)$ is an ergodic and strictly stationary process in the sense of the assumption of Theorem 1.3.5 of \cite{taniguchibook:2012}. \end{proof} \subsubsection{\textcolor{black}{Proof of Proposition \ref{prop2}}} Note that for all $\gamma_1$, \begin{align*} \mathcal{L}(\gamma_1)&=\mathbb{E}\left[Y_3 W_3(\beta_0^\star,\gamma_1)-\exp(W_3(\beta_0^\star,\gamma_1))\right] =\mathbb{E}\left[\mathbb{E}\left[Y_3 W_3(\beta_0^\star,\gamma_1)-\exp(W_3(\beta_0^\star,\gamma_1))|\mathcal{F}_2\right]\right]\\ &=\mathbb{E}\left[\exp(W_3^\star) W_3(\beta_0^\star,\gamma_1)-\exp(W_3(\beta_0^\star,\gamma_1))\right]\\ &=\mathbb{E}\left[\exp(W_3^\star) \left(W_3(\beta_0^\star,\gamma_1)-W_3^\star+W_3^\star-\exp(W_3(\beta_0^\star,\gamma_1)-W_3^\star)\right)\right]\\ &\leq \mathbb{E}\left[\exp(W_3^\star) \left(W_3^\star-1\right)\right]=\mathcal{L}(\gamma_1^\star), \end{align*} where the inequality comes from the fact that $x-\exp(x)\leq -1$ for all $x\in\mathbb{R}$, applied to $x=W_3(\beta_0^\star,\gamma_1)-W_3^\star$. This inequality is an equality only when $x=0$, which means that $\gamma_1=\gamma_1^\star$. \subsubsection{\textcolor{black}{Proof of Proposition \ref{prop3}}} The proof of this proposition comes from Proposition \ref{prop1} and the stochastic equicontinuity of $n^{-1}L(\beta_0^\star,\gamma_1)$. Thus, it is enough to prove that there exists a positive $\delta$ such that $$ \sup_{|\gamma_1-\gamma_2|\leq\delta}\left|\frac{L(\beta_0^\star,\gamma_1)}{n}-\frac{L(\beta_0^\star,\gamma_2)}{n}\right|\stackrel{p}{\longrightarrow}0, \textrm{ as $n$ tends to infinity.} $$ Observe that, by (\ref{eq:L:beta_0}), \begin{align*} \left|\frac{L(\beta_0^\star,\gamma_1)}{n}-\frac{L(\beta_0^\star,\gamma_2)}{n}\right| &\leq\frac1n\sum_{t=1}^n Y_t\left|W_t(\beta_0^\star,\gamma_1)-W_t(\beta_0^\star,\gamma_2)\right|\\ &+\frac1n\sum_{t=1}^n\left|\exp\left(W_t(\beta_0^\star,\gamma_1)\right)-\exp\left(W_t(\beta_0^\star,\gamma_2)\right)\right|. \end{align*} Let us first focus on bounding the following expression for $t\geq 2$ (since $W_1(\beta_0^\star,\gamma)=\beta_0^\star$ for all $\gamma$).
By (\ref{eq:W_Z}), \begin{align*} &\left|W_t(\beta_0^\star,\gamma_1)-W_t(\beta_0^\star,\gamma_2)\right|=\left|Z_t(\gamma_1)-Z_t(\gamma_2)\right|=\left|\gamma_1 E_{t-1}(\gamma_1)-\gamma_2 E_{t-1}(\gamma_2)\right|\\ &=\left|\gamma_1 \left[Y_{t-1}\exp(-W_{t-1}(\beta_0^\star,\gamma_1))-1\right]-\gamma_2 \left[Y_{t-1}\exp(-W_{t-1}(\beta_0^\star,\gamma_2))-1\right]\right|\\ &=\left|Y_{t-1}\textrm{e}^{-\beta_0^\star}\left[\gamma_1\exp(-Z_{t-1}(\gamma_1))-\gamma_2\exp(-Z_{t-1}(\gamma_2))\right]+\gamma_2-\gamma_1\right|\\ &\leq Y_{t-1}\textrm{e}^{-\beta_0^\star}\left[\left|\gamma_1-\gamma_2\right|\exp(-Z_{t-1}(\gamma_1))+|\gamma_2|\left|\exp(-Z_{t-1}(\gamma_1))-\exp(-Z_{t-1}(\gamma_2))\right|\right] +\left|\gamma_2-\gamma_1\right|\\ &\leq Y_{t-1}\textrm{e}^{-\beta_0^\star}\left|\gamma_1-\gamma_2\right|\exp(-Z_{t-1}(\gamma_1))\\ &+Y_{t-1}\textrm{e}^{-\beta_0^\star}|\gamma_2|\exp(-Z_{t-1}(\gamma_1))\left|Z_{t-1}(\gamma_1)-Z_{t-1}(\gamma_2)\right| \exp(|Z_{t-1}(\gamma_1)-Z_{t-1}(\gamma_2)|)\\ &+\left|\gamma_2-\gamma_1\right|, \end{align*} where we used in the last inequality that for all $x$ and $y$ in $\mathbb{R}$, \begin{equation}\label{eq:exp_x_y} |\textrm{e}^x-\textrm{e}^y|=\textrm{e}^x|1-\textrm{e}^{y-x}|\leq \textrm{e}^x |y-x|\textrm{e}^{|y-x|}. \end{equation} Observing that \begin{equation}\label{eq:exp_Zt} \exp(-Z_{t}(\gamma_1))=\exp\left(-\gamma_1\left[Y_{t-1}\textrm{e}^{-\beta_0^\star}\exp(-Z_{t-1}(\gamma_1))-1\right]\right), \end{equation} and that $|Z_{2}(\gamma_1)-Z_{2}(\gamma_2)|\leq\delta[Y_1\textrm{e}^{-\beta_0^\star}+1]$, we get, for $\gamma_1$ and $\gamma_2$ such that $|\gamma_1-\gamma_2|\leq\delta$, that \begin{equation}\label{eq:diff_W} \left|W_t(\beta_0^\star,\gamma_1)-W_t(\beta_0^\star,\gamma_2)\right| \leq\delta\; F(Y_{t-1},Y_{t-2},\dots,Y_1), \end{equation} where $F$ is a measurable function. By (\ref{eq:exp_x_y}), \begin{align*} &\left|\exp\left(W_t(\beta_0^\star,\gamma_1)\right)-\exp\left(W_t(\beta_0^\star,\gamma_2)\right)\right|\\ &\leq \exp\left(W_t(\beta_0^\star,\gamma_1)\right)\left|W_t(\beta_0^\star,\gamma_1)-W_t(\beta_0^\star,\gamma_2)\right|\exp\left(\left|W_t(\beta_0^\star,\gamma_1)-W_t(\beta_0^\star,\gamma_2)\right|\right)\\ &\leq \delta G(Y_{t-1},Y_{t-2},\dots,Y_1), \end{align*} where the last inequality comes from (\ref{eq:diff_W}), (\ref{eq:exp_Zt}) and (\ref{eq:W_Z}) and where $G$ is a measurable function. Thus, we get that $$ \left|\frac{L(\beta_0^\star,\gamma_1)}{n}-\frac{L(\beta_0^\star,\gamma_2)}{n}\right|\leq\frac{\delta}{n}\sum_{t=1}^n H(Y_t,Y_{t-1},\dots,Y_1), $$ where $H$ is a measurable function. This gives the result by using arguments similar to those given in the proof of Proposition \ref{prop1}, namely that $(Y_t)$ is strictly stationary and ergodic. Indeed, by Theorem 1.3.3 of \cite{taniguchibook:2012}, $H(Y_t,Y_{t-1},\dots,Y_1)$ is strictly stationary and ergodic since $(Y_t)$ has these properties; moreover, $\mathbb{E}[|H(Y_t,Y_{t-1},\dots,Y_1)|]<\infty$, which concludes the proof by Theorem 1.3.5 of \cite{taniguchibook:2012}. \clearpage \section*{Appendix} This appendix contains additional results for the support recovery of $\boldsymbol{\beta}^\star$ and for the estimation of $\boldsymbol{\gamma}^\star$ discussed in Section \ref{sec:sparse_estim}. \begin{figure}[!h] \includegraphics[scale=0.28]{error_bars_1000_10_1.pdf} \caption{Error bars of the TPR and FPR associated to the support recovery of $\boldsymbol{\beta}^\star$ for five methods with respect to the thresholds when $n=1000$, $q=1$, $p=100$ and a 10\% sparsity level.
All the $\beta_i^\star=0$ except for ten of them: $\beta_1^\star=1.73$, $\beta_3^\star=1.2$, $\beta_5^\star=0.67$, $\beta_{10}^\star=0.5$, $\beta_{14}^\star=-0.38$, $\beta_{17}^\star=0.29$, $\beta_{30}^\star=-0.64$, $\beta_{33}^\star=-0.13$, $\beta_{38}^\star=-0.1$ and $\beta_{44}^\star=-0.07$. \label{fig:TPR:FPR:1:10}} \end{figure} \begin{figure}[!h] \includegraphics[scale=0.28]{error_bars_1000_10_2.pdf} \caption{Error bars of the TPR and FPR associated to the support recovery of $\boldsymbol{\beta}^\star$ for five methods with respect to the thresholds when $n=1000$, $q=2$, $p=100$ and a 10\% sparsity level. All the $\beta_i^\star=0$ except for ten of them: $\beta_1^\star=1.73$, $\beta_3^\star=1.2$, $\beta_5^\star=0.67$, $\beta_{10}^\star=0.5$, $\beta_{14}^\star=-0.38$, $\beta_{17}^\star=0.29$, $\beta_{30}^\star=-0.64$, $\beta_{33}^\star=-0.13$, $\beta_{38}^\star=-0.1$ and $\beta_{44}^\star=-0.07$.\label{fig:TPR:FPR:2:10}} \end{figure} \begin{figure}[!h] \includegraphics[scale=0.28]{error_bars_1000_10_3.pdf} \caption{Error bars of the TPR and FPR associated to the support recovery of $\boldsymbol{\beta}^\star$ for five methods with respect to the thresholds when $n=1000$, $q=3$, $p=100$ and a 10\% sparsity level. All the $\beta_i^\star=0$ except for ten of them: $\beta_1^\star=1.73$, $\beta_3^\star=1.2$, $\beta_5^\star=0.67$, $\beta_{10}^\star=0.5$, $\beta_{14}^\star=-0.38$, $\beta_{17}^\star=0.29$, $\beta_{30}^\star=-0.64$, $\beta_{33}^\star=-0.13$, $\beta_{38}^\star=-0.1$ and $\beta_{44}^\star=-0.07$.\label{fig:TPR:FPR:3:10}} \end{figure} \begin{figure}[!h] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_10_1_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_10_2_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_10_2_q2.pdf}\\ \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_10_3_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_10_3_q2.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_cv_1000_10_3_q3.pdf}\\ \end{tabular} \caption{Boxplots for the estimations of $\boldsymbol{\gamma}^\star$ in Model (\ref{eq:mut_Wt}) with a 10\% sparsity level and $q=1,2,3$ obtained by \texttt{ss\_cv}. Top: $q=1$ and $\gamma_1^\star=0.5$ (left), $q=2$ and $\gamma_1^\star=0.5$ (middle), $q=2$ and $\gamma_2^\star=0.25$ (right). Bottom: $q=3$ and $\gamma_1^\star=0.5$ (left), $q=3$ and $\gamma_2^\star=1/3$ (middle), $q=3$ and $\gamma_3^\star=0.25$ (right). The horizontal lines correspond to the values of the $\gamma_i^\star$'s. \label{fig:gamma:10:cv}} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_10_1_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_10_2_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_10_2_q2.pdf}\\ \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_10_3_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_10_3_q2.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_fast_1000_10_3_q3.pdf}\\ \end{tabular} \caption{Boxplots for the estimations of $\boldsymbol{\gamma}^\star$ in Model (\ref{eq:mut_Wt}) with a 10\% sparsity level and $q=1,2,3$ obtained by \texttt{fast\_ss}.
Top: $q=1$ and $\gamma_1^\star=0.5$ (left), $q=2$ and $\gamma_1^\star=0.5$ (middle), $q=2$ and $\gamma_2^\star=0.25$ (right). Bottom: $q=3$ and $\gamma_1^\star=0.5$ (left), $q=3$ and $\gamma_2^\star=1/3$ (middle), $q=3$ and $\gamma_3^\star=0.25$ (right). The horizontal lines correspond to the values of the $\gamma_i^\star$'s.\label{fig:gamma:10:fast}} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_10_1_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_10_2_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_10_2_q2.pdf}\\ \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_10_3_q1.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_10_3_q2.pdf} & \includegraphics[width=0.32\textwidth, height=4.5cm]{gamma_est_min_1000_10_3_q3.pdf}\\ \end{tabular} \caption{Boxplots for the estimations of $\boldsymbol{\gamma}^\star$ in Model (\ref{eq:mut_Wt}) with a 10\% sparsity level and $q=1,2,3$ obtained by \texttt{ss\_min}. Top: $q=1$ and $\gamma_1^\star=0.5$ (left), $q=2$ and $\gamma_1^\star=0.5$ (middle), $q=2$ and $\gamma_2^\star=0.25$ (right). Bottom: $q=3$ and $\gamma_1^\star=0.5$ (left), $q=3$ and $\gamma_2^\star=1/3$ (middle), $q=3$ and $\gamma_3^\star=0.25$ (right). The horizontal lines correspond to the values of the $\gamma_i^\star$'s.\label{fig:gamma:10:min}} \end{center} \end{figure}
{ "timestamp": "2020-07-20T02:03:28", "yymm": "2007", "arxiv_id": "2007.08623", "language": "en", "url": "https://arxiv.org/abs/2007.08623" }
\section{Linear stability analysis: $\M_1$ and $\M_2$}\label{Stability} In this section, we present a linear stability analysis for the two paradigmatic models used in the main text, namely the Stuart-Landau system $(\M_1)$ and the Lorenz system $(\M_2)$. \subsection{Eigenvalue analysis of $\M_1$} \label{Eigen_M_1} \noindent The Stuart-Landau model $(\M_1)$ is described by the following governing equation of motion \cite{ott2002chaos,strogatz2001nonlinear} \begin{eqnarray} \begin{array}{l}\label{eq.s1} \dot{Z} = (a + i\Omega-|Z|^2)Z, \end{array} \end{eqnarray} where $Z=x+iy$ is the complex variable; $a$ and $\Omega$ are the intrinsic parameters of the system. The system has one equilibrium point at $(0,0)$. Now, the Jacobian matrix $J$ of the system $\M_1$ at the equilibrium point $(0,0)$ is given by \begin{eqnarray*} J(0,0)= \left[ {\begin{array}{ccc} a & -\Omega \\ \Omega & a \\ \end{array} } \right]. \end{eqnarray*} The characteristic roots of the above Jacobian are $\lambda_{\pm}=a\pm\Omega i=-0.01\pm i$, where $a=-0.01$ and $\Omega=1$. Therefore, the trivial equilibrium point $(0,0)$ is a stable spiral. Here, the system parameter $a$ determines the decay rate. On the other hand, the imaginary part of the eigenvalue, $\Omega$, determines the intrinsic frequency of this decaying oscillation. Thus, the time period of oscillatory behavior during the transient phase, for our current choice of parameters, is given by \begin{eqnarray} T(\M_1)\sim\frac{2\pi}{\Omega}\approx6.28318. \label{T_M1} \end{eqnarray} We note that the system experiences a critical transition (from a stable spiral to a stable limit cycle) at the critical value $a_c=0$. \subsection{Eigenvalue analysis of $\M_2$} \label{Eig_M2} \noindent The governing equation of motion for the Lorenz system ($\M_2$) is given by \cite{ott2002chaos,strogatz2001nonlinear} \begin{equation} \begin{array}{l}\label{eq.s2} \dot{x} = \sigma(y-x),\\ \dot{y} = \rho x-y-xz,\\ \dot{z} = -\beta z+xy,\\ \end{array} \end{equation} where the system parameters are $\sigma, \rho,$ and $\beta$ $(>0)$. It is easy to see that the system has a trivial equilibrium point $P_0: (0, 0, 0)$, which is stable for $\rho<1$. For $\rho>1$, two non-trivial equilibrium points emerge, which are given by $P_{1}:(\sqrt{\beta(\rho-1)},\sqrt{\beta(\rho-1)},\rho-1)$ and $P_{2}:(-\sqrt{\beta(\rho-1)},-\sqrt{\beta(\rho-1)},\rho-1)$. Now we proceed to calculate the Jacobian matrix $J$ of the system $\M_2$ at the equilibrium point $P_{1}$. This gives \begin{eqnarray}\label{jacobian} J(\sqrt{\beta(\rho-1)},\sqrt{\beta(\rho-1)},\rho-1)= \left[ {\begin{array}{ccc} -\sigma & \sigma & 0 \\ 1 & -1 & -\sqrt{\beta(\rho-1)} \\ \sqrt{\beta(\rho-1)} & \sqrt{\beta(\rho-1)} & -\beta \\ \end{array} } \right]. \end{eqnarray} One can now immediately write the characteristic equation coming from the Jacobian above, which reads \begin{equation}\label{jacobian1} \lambda^3+(\beta+\sigma+1)\lambda^2+\beta(\rho+\sigma)\lambda+2\beta\sigma(\rho-1)=0. \end{equation} For fixed parameters, e.g., $\sigma=10$, $\beta=\dfrac{8}{3}$ and $\rho=23$, the characteristic equation \eref{jacobian1} becomes \begin{equation}\label{jacobian2} \lambda^3+\dfrac{41}{3}\lambda^2+88\lambda+\dfrac{3520}{3}=0. \end{equation} The roots of the above equation are $\lambda=-13.5588$ and $\lambda=-0.054\pm 9.3024 i$. Therefore, linear stability analysis in the vicinity of $P_1$ shows that it is a stable spiral. In the same way, one can also show that $P_2$ is a stable spiral.
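Both eigenvalue computations are easy to verify numerically; a minimal Python check, using the parameter values quoted above, reads as follows.
\begin{verbatim}
import numpy as np

# M1: eigenvalues of the Jacobian at (0, 0) for a = -0.01, Omega = 1
a, Omega = -0.01, 1.0
J1 = np.array([[a, -Omega], [Omega, a]])
print(np.linalg.eigvals(J1))                # -0.01 + 1j, -0.01 - 1j

# M2: roots of lambda^3 + (41/3) lambda^2 + 88 lambda + 3520/3 = 0
print(np.roots([1.0, 41/3, 88.0, 3520/3]))  # -13.5588, -0.054 +/- 9.3024j
\end{verbatim}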
Note that the system has a transient chaos phase in the range $\rho\in(13.926,24.06)$ for $\sigma=10$ and $\beta=\dfrac{8}{3}$ \cite{yorkeprl1986}. Increasing $\rho$ towards the critical transition point ($\rho_c=24.06$), the duration of the chaotic transient phase follows a power law \cite{yorkeprl1986,lai2011transient,yorke1979}. At $\rho=\rho_c$, the critical transition occurs and the transient chaos becomes a chaotic attractor. \section{Distance and transient time density without resetting}\label{Distance} In this section, we discuss in detail the quantitative features of the transient time density $P(TT)$ in the absence of resetting. To obtain the histogram for each model, we have scanned $5\times10^6$ initial conditions from the basin of attraction $\mathcal{B}_{\A}$. \subsection{Transient time for $\M_1$} In the case of system $\M_1$, we choose our basin span to be $[-6,6]\times[-6,6]$, and collect the transient time. The resulting density is plotted in Fig.\ 2b in the main text. From the figure, it becomes evident that the density is bounded from above. Moreover, we observe that the probability of larger values of $TT$ is higher than that of smaller values. To gain deeper insights, we have investigated the relation between the transient time (of the trajectories from initial points in the basin to the stable equilibrium point) and the Euclidean distance (between initial and target states). The Euclidean distance $D$ is defined as \begin{eqnarray} D(\textbf{x},\textbf{y})=\sqrt{\sum_{i=1}^{n}(x_i-y_i)^2}~, \end{eqnarray} where $\textbf{x}=(x_1,x_2,...,x_n)\in \R^n$ and $\textbf{y}=(y_1,y_2,...,y_n)\in \R^n$. We collect all $D$ and $TT$ for both models and plot them in Fig.\ \ref{figs1}a. For $\M_1$, the transient time increases exponentially as we increase the Euclidean distance ($TT\sim e^D$) till some threshold $D^*< 0.6$. Beyond this distance ($D>D^*$), all the trajectories take a negligibly small time to reach the surface of the circle of radius $D \approx D^*$. In effect, $TT$ saturates around approximately $1800$ for the current choices of parameters. So, for $D<D^*$, $TT$ grows exponentially and, beyond, it saturates to a specific value. This essentially tells us that no matter where one starts in the basin, the maximum $TT$ that can be achieved is approximately the same (with some small fluctuations) as that obtained when starting from $D^*$. Thus, the probability density function of the transient time is bounded from above by this maximum value of $TT$. \subsection{Transient time for $\M_2$} In $\M_2$, we take the basin of attraction to be $[-20,20]\times[-20,20]\times[0,30]$. Performing an analysis similar to the above for the averaging, we have plotted the histogram for $TT$ in Fig.\ 2g in the main text. Here, we find that $P(TT)$ is an exponential distribution, which is a fingerprint of chaotic systems \cite{yorkeprl1986}. However, we did not find any direct relationship between $TT$ and $D$ for the Lorenz system. At larger $D$, $TT$ ranges from values as low as $500$ to as high as $4000$. It is clear that, for $D \gtrsim 5$, the scatter points are dense around $500$-$2500$ (see \fref{figs1}b), while fewer points appear at higher values of $TT$ ($TT \gtrsim 3000$). Therefore, higher values of $TT$ are less probable. This information is consistent with the form of $P(TT)$ [see Fig. 2g in the main text].
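To make the construction of Fig.~\ref{figs1} concrete, the following Python sketch shows how a single $(D,TT)$ pair can be produced for $\M_1$: a random initial point is drawn from the basin span, the flow is integrated with a fourth-order Runge-Kutta scheme, and the time needed to reach an $\epsilon$-vicinity of the target $(0,0)$ is recorded. The step length and tolerance follow Sec.~\ref{computational}; the implementation details are otherwise illustrative.
\begin{verbatim}
import numpy as np

def f(v, a=-0.01, Omega=1.0):
    # Stuart-Landau flow in real coordinates, Z = x + i y
    x, y = v
    r2 = x * x + y * y
    return np.array([a * x - Omega * y - r2 * x,
                     Omega * x + a * y - r2 * y])

def transient_time(v0, h=0.01, eps=1e-9, t_max=1e5):
    # RK4 integration until the trajectory is within eps of the target (0, 0)
    v, t = np.array(v0, dtype=float), 0.0
    while np.linalg.norm(v) > eps and t < t_max:
        k1 = f(v); k2 = f(v + 0.5 * h * k1)
        k3 = f(v + 0.5 * h * k2); k4 = f(v + h * k3)
        v = v + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return t

rng = np.random.default_rng(1)
v0 = rng.uniform(-6, 6, size=2)   # basin span used in the text
D, TT = np.linalg.norm(v0), transient_time(v0)
\end{verbatim}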
\begin{figure}[t] \centerline{ \includegraphics[scale=0.25]{figure_sm_1}} \caption{Variation between $D$ and $TT$: transient time as a function of $D$, the distance between the initial point and the target, for $\M_1$ (panel a) and $\M_2$ (panel b). For $\M_1$ (panel a), we find that $TT$ increases exponentially as a function of $D$ till it reaches a threshold and then saturates. The threshold value for $D$ is estimated to be $\sim 0.6$. On the other hand, it is clear from panel b ($\M_2$) that there is no such relationship between $TT$ and the distance $D$. Parameter values set for the simulations are (a) $a=-0.01$, $\Omega=1$, and (b) $\sigma=10$, $\rho=23$, $\beta=\dfrac{8}{3}$.} \label{figs1} \end{figure} \begin{figure}[h] \centerline{ \includegraphics[scale=0.69]{figure_sm_2}} \caption{Panel (a): table of the mean transient time as a function of $\langle R \rangle$ (in the case of sharp resetting) for model $\M_1$. Marked in red are the rows for which the period $\langle R \rangle$ of sharp resetting is approximately an integer multiple of half the intrinsic time period ($T$) of the original process. Time series (trajectory in the $x$-coordinate as a function of time) without (in red) and with (in blue) resetting: in panel (b), we have plotted the trajectory for $\langle R \rangle=4.5$ against the original trajectory. We see a clear distinction between the original and resetting-induced trajectories. In particular, the plot shows that the trajectory with resetting reaches the target much faster than the original one, thus resulting in a lower $\langle TT_R \rangle$. In panel (c), we have plotted the trajectories when $\langle R \rangle=6.5$ (recall $T \approx 6.3$). We see that the trajectories almost follow each other (also see the inset, where we have zoomed in on a part of both signals), clearly indicating that both take almost the same time to reach the target. Thus, in this case, the behavior of the resetting trajectory is clearly oscillatory, like the original process. In other words, resetting will have almost no effect on the underlying process. Parameter values set here are $a=-0.01$, $\Omega=1$.} \label{figs5} \end{figure} \section{Emergence of oscillatory behavior under sharp resetting in $\M_1$} In this section, we briefly discuss the origin of the oscillatory behavior of $\langle TT_R \rangle$ under the sharp resetting mechanism in $\M_1$. This protocol essentially asserts that one resets the system always after a fixed amount of time $\langle R \rangle$. Note that this oscillatory behavior is markedly different from that under exponential resetting, where we observed a simple non-monotonic behavior (Fig. 3 in the main text). To explain this, we first accumulated $\langle TT_R \rangle$ for different values of $\langle R \rangle$, shown in Fig.\ \ref{figs5}a (table). Moreover, we recall from Sec. \ref{Eigen_M_1} that the intrinsic periodicity of the $\M_1$ model is around $T=\frac{2\pi}{\Omega} \approx 6.3$ for $\Omega=1$ (Eq.\ \ref{T_M1}). We now identify from the table (Fig.\ \ref{figs5}a) the \textit{light red marked rows} that satisfy \begin{eqnarray} \langle R \rangle \approx \dfrac{nT}{2}, \quad n=1,2,3,..., \end{eqnarray} where $T$ is the intrinsic period. From the red marked rows of the table, we identify the mean resetting times $\langle R \rangle = 3, 6.5, 9.5$, for which we respectively find $\langle TT_R \rangle\approx(1343.21, 1310.01, 1716.4)$, which are notably much higher than the transient time one would expect under resetting.
Essentially, these mean resetting times are commensurate with the intrinsic time period, and we observe a significant increase in $\langle TT_R\rangle$. To further illustrate this behavior, we now choose two particular values of $\langle R \rangle$ from the table such that one lowers the transient time while the second one does not provide any significant improvement. First, we take $\langle R \rangle=4.5$, which reduces the transient time (in Fig.\ \ref{figs5}b, the blue line indicates the time signal under sharp resetting, which is placed in contrast to the original time series in the absence of resetting). Here, we clearly see a very quick convergence to the steady state for the trajectory subject to resetting. On the other hand, when $\langle R \rangle=6.5$, Fig. \ref{figs5}c clearly indicates that the blue line (which is the trajectory under resetting) is quite close to the original time signal (denoted by the red solid line). A short segment of the signal is zoomed in below Fig. \ref{figs5}c to further demonstrate the proximity between the trajectories. Thus, in effect, the resultant transient time becomes of the same order as that of the uninterrupted process. In summary, sharp restarts form a periodic temporal process in which resetting always occurs after a fixed time $\langle R \rangle$. When this period becomes an integer multiple of half the intrinsic time period of the system, a sudden rise in the mean transient time is observed, with the emergence of the consecutive oscillations seen in Fig. 3 (left panel for $\M_1$) in the main text. \begin{figure}[t] \centerline{ \includegraphics[width=14.5cm, height=11cm]{SMmeancontrollineV1.jpg}} \caption{Variation in the mean transient time for different control lines. We have chosen different types of control lines, as described in detail in Sec.\ V. For $\M_1$, we have considered three different control lines which pass through the points $A : (4,0), B: (-2,4), C: (-4,-2)$ and the equilibrium point $P:(0,0)$ respectively. For each of these cases, we have plotted the mean transient time as a function of $\langle R \rangle$ [panel (a) for exponential and panel (d) for sharp]. We find that there is no effect of different control lines on the mean transient time in $\M_1$. In $\M_2$, there are two equilibrium points $P_1: (7.65942, 7.65942, 22)$ and $P_2: (-7.65942, -7.65942, 22)$. We have also taken three points $A: (0,0,0)$, $B:(20,20,30)$, and $C:(-20,-20,30)$ through which control lines pass. Thus, there are two sets of control lines, each of which comprises three lines passing through $A, B, C$ and either $P_1$ or $P_2$. In panel (b) and panel (e), we have plotted $\langle TT_R \rangle$ as a function of $\langle R \rangle$ for exponential and sharp resetting respectively, using the control lines that pass through $A, B, C$ and $P_1$. We have prepared similar plots in panel (c) and panel (f), where the control lines pass through $A, B, C$ and $P_2$. Here, we see that the mean transient time depends on the choice of control lines. This is due to the nature of the basin for the Lorenz system, as discussed in detail in Sec.\ VB. Parameter values set here are: $a=-0.01, \Omega=1$ (for $\M_{1}$) and $\sigma=10,\rho=23,\beta=\dfrac{8}{3}$ (for $\M_{2}$).} \label{figs6} \end{figure} \begin{figure}[t] \centerline{ \includegraphics[width=10.5cm, height=9cm]{SMfluctuationcontrollineV1.jpg}} \caption{Variation in fluctuations for different choices of control lines.
In panel (a), we have shown a bar plot comparison of fluctuations between the original dynamics and the resetting-induced dynamics in $\M_1$. Resetting was conducted at $\langle R \rangle=1$ (both for exponential and sharp resetting) by taking the control lines which pass through $P$ and $A, B, C$ respectively (see Sec.\ VA). It is clear from the figure that (i) resetting reduces fluctuations and (ii) the magnitude of the fluctuations is almost the same, implying that the effect of resetting does not depend on the choice of control lines in $\M_1$. This observation is in accordance with Fig.\ 6a and Fig.\ 6d. Panel (b) and panel (c) show bar plot comparisons of fluctuations between the original dynamics and the resetting-induced dynamics in $\M_2$ (conducted at $\langle R \rangle=0.1$) when the control lines pass through $A, B, C$ and either $P_1$ or $P_2$ respectively (see Sec.\ VB). We again observe that resetting reduces fluctuations in this case. However, the magnitudes of the fluctuations are different in each case, as expected, due to the underlying non-uniform structure of the basin in the Lorenz system. Parameter values set here are: $a=-0.01, \Omega=1$ (for $\M_{1}$) and $\sigma=10,\rho=23,\beta=\dfrac{8}{3}$ (for $\M_{2}$).} \label{fluctuations} \end{figure} \section{Dependence of the mean and fluctuations of the transient time on the choice of control lines} In this section, we investigate in detail how the mean $\langle TT_{R} \rangle$ and the fluctuations $\sigma_R$ depend on the choice of control lines. Let us first recall that a control line is randomly chosen from the basin of attraction, but it always passes through the equilibrium point(s). Here, the analysis is done both for exponential and sharp resetting. In the following, we discuss the effects of the control lines on the mean and fluctuations, first for the Stuart-Landau system ($\M_1$) and then for the Lorenz system ($\M_2$). \subsection{Effect of control lines on $\M_1$} In system $\M_1$, the equilibrium point is located at $(0,0)$, which we denote by $P$. In the main text, we choose the control line randomly from the basin such that it passes through $(4,4)$ and the equilibrium point $P$. We have shown that this protocol yields a significant reduction in the mean and fluctuations of the transient time. To show that this behavior is invariant to the choice of the control line, we now construct the following control lines, which pass through the points mentioned below from the basin of attraction: \begin{enumerate} \item $P (0,0)$ and $A(4,0)$, \item $P (0,0)$ and $B(-2,4)$, \item $P (0,0)$ and $C(-4,-2)$. \end{enumerate} For each of the cases above, we have plotted $\langle TT_R \rangle$ as a function of $\langle R \rangle$ for exponential (Fig.\ \ref{figs6}a) and sharp resetting (Fig.\ \ref{figs6}d), respectively. First, we note that resetting indeed reduces the mean transient time. Second, it becomes evident from the plots that all the curves collapse, clearly indicating that the variation in the mean transient time does not depend on the choice of the control line. This is expected for $\M_1$, where the basin of attraction is homogeneous, so that the system cannot distinguish between the control lines. In Fig. \ref{fluctuations}a, we have shown a comparison between the fluctuations in the original dynamics and in the resetting dynamics (both for exponential and sharp) for a given $\langle R \rangle =1$. Note that the fluctuations are reduced due to the resetting.
Moreover, since the basin is uniform, the choice of control line has no impact on the fluctuations, just as for the mean seen above. \subsection{Effect of control lines on $\M_2$} To see the effects of control lines on $\M_2$, we first recall that $\M_2$ has two fixed points ($P_1$ and $P_2$) which are stable for a certain range of $\rho$ (see Sec. \ref{Eig_M2}). The system $\M_2$ has a riddled basin of attraction for the equilibrium points $P_1$ and $P_2$. As was mentioned in the main text, in this case we have some flexibility in choosing control lines: a control line can pass through one of the equilibrium points ($P_1$ or $P_2$) or through both. We discuss each of these cases in the following. \subsubsection{Effect of a fixed control line passing through both $P_1$ and $P_2$} We first discuss the case when the control line passes through both the equilibrium points $P_1$ and $P_2$. We compute the transient time when the trajectory reaches any of these points. This scenario was already discussed in the main text. In particular, we choose the control line such that it passes through $P_1 (7.65942, 7.65942, 22)$ and $P_2(-7.65942,-7.65942, 22)$. When conducted at $\langle R \rangle=0.1$, a net reduction in the mean and fluctuations was observed. \subsubsection{Effect of a fixed control line passing through $P_1$} In this case, we choose control lines that pass through the equilibrium point $P_1$, which is the only target. Here, we take three random control lines passing through the following points from the basin of attraction: \begin{enumerate} \item $P_{1}$ $(7.65942, 7.65942, 22)$ and $A(0,0,0)$, \item $P_{1}$ $(7.65942, 7.65942, 22)$ and $B(20,20,30)$, \item $P_{1}$ $(7.65942, 7.65942, 22)$ and $C(-20,-20,30)$. \end{enumerate} In Figs.\ \ref{figs6}b and \ref{figs6}e, we have plotted $\langle TT_R \rangle$ as a function of $\langle R \rangle$ for exponential and sharp resetting, respectively. The behavior is similar to Fig. 3 in the main text, which essentially reiterates the fact that resetting reduces the mean transient time. In Fig. \ref{fluctuations}b, we have shown a comparison between the fluctuations in the original dynamics and in the resetting dynamics (both for exponential and sharp) for a given $\langle R \rangle =0.1$ and the choice of control lines mentioned above. Here too, we see that resetting lowers the fluctuations. \subsubsection{Effect of a fixed control line passing through $P_2$} In this case, we take the control lines passing through the equilibrium point $P_2$ (which is the only target) and the following other points: \begin{enumerate} \item $P_{2}$ $(-7.65942, -7.65942, 22)$ and $A(0,0,0)$, \item $P_{2}$ $(-7.65942, -7.65942, 22)$ and $B(20,20,30)$, \item $P_{2}$ $(-7.65942, -7.65942, 22)$ and $C(-20,-20,30)$. \end{enumerate} Here too, we find that resetting using the control line technique reduces the mean transient time. These conclusions are in accordance with Figs.\ \ref{figs6}c and \ref{figs6}f, which show the variation of the mean transient time as a function of $\langle R \rangle$. In Fig. \ref{fluctuations}c, we have shown a comparison between the fluctuations in the original dynamics and in the resetting dynamics (both for exponential and sharp) for a given $\langle R \rangle =0.1$ and the choice of control lines mentioned above. Here too, we see that resetting lessens the fluctuations.
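For concreteness, the following minimal Python sketch summarizes the two ingredients used throughout this section: a control line through a basin point and an equilibrium point, and the inter-reset durations for the two strategies (whose densities are given in Sec.~\ref{computational}). The exact rule placing the reset state on the control line follows the main text; the uniform draw on the segment used below is only an illustrative assumption, as are the names.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
x_f = np.array([7.65942, 7.65942, 22.0])  # equilibrium P1 of the Lorenz system
x_c = np.array([0.0, 0.0, 0.0])           # point A from the lists above

def reset_point():
    # Illustrative assumption: the reset state lands uniformly on the
    # segment between x_f and x_c; the precise rule is in the main text.
    u = rng.uniform()
    return x_f + u * (x_c - x_f)

def next_reset_gap(mean_R, sharp=False):
    # Inter-reset duration: fixed <R> for sharp resetting, exponential
    # with mean <R> for exponential resetting (cf. Step III below).
    return mean_R if sharp else rng.exponential(mean_R)
\end{verbatim}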
\noindent As a final remark, we note that since the basin of $\M_2$ is non-homogeneous, we do not observe any collapse of the mean transient time curves for different choices of control lines, as was seen in the case of $\M_1$. \section{Impact of control parameters on the transient time near the critical transition}\label{parameter} \noindent It is well known that in non-linear systems, the parameters play a paramount role in determining the structure of the basin, the attractor, and the fixed points. For our current models, we have already discussed in Sec.\ \ref{Stability} that the controlling parameters can change the structure of the attractor qualitatively, i.e., transform the stable fixed points into a limit cycle or chaos (beyond critical values). But it is important to note that as the parameters are tuned to their critical values, the duration of transient states gradually increases. For example, it is known that the average lifetime or transient time of a chaotic transient depends critically upon the system parameter, i.e., it diverges as a power law near the critical point \cite{yorkeprl1986}. Naturally, the question arises of how the situation changes in the presence of resetting near the critical point and what the overall ramifications of the resetting strategies (exponential and sharp) are on the statistics of the transient time. In this section, we examine these issues in detail. \begin{figure}[t] \centerline{ \includegraphics[height=10cm, width=15cm]{SMFigparametersV2.jpg}} \caption{Mean transient time regulation by resetting near the critical transition. Panel (a) and panel (d) show four time series of the original dynamics for $\M_1$ and $\M_2$ respectively. In panel (a), the trajectories in coordinate $y$ are plotted as a function of time for different values of $a$ (shown in the plot) near the critical transition $a_c=0$. In panel (d), the trajectories in coordinate $x$ are plotted as a function of time for different values of $\rho$ (shown in the plot) near the critical transition $\rho_c=24.06$. The varying parameters are $a=-0.002$ (blue), $-0.009$ (red), $-0.02$ (black), $-0.04$ (cyan), and $\rho=15$ (cyan), $18$ (black), $21$ (red), $24$ (blue). In panels (b) and (c), we have plotted $\langle TT_R \rangle$ as a function of $\langle R \rangle$ for exponential and sharp resetting for the above mentioned values of $a$. Similarly, panel (e) and panel (f) depict the variation of $\langle TT_R \rangle$ as a function of $\langle R \rangle$ for exponential and sharp resetting for the above mentioned values of $\rho$. Other parameters set for the simulations are: $\Omega=1$ (for $\M_1$), and $\sigma=10$, $\beta=\dfrac{8}{3}$ (for $\M_2$).} \label{fig5} \end{figure} \subsection{System $\M_1$} In the Stuart-Landau oscillatory system, we regulate the decay parameter $a$, which determines whether the system has a limit cycle or a fixed point. Following the analysis of Sec. \ref{Stability}A, we know that this transition occurs exactly at $a_c=0$. In what follows, we scan $a$ over a range of values close to $a_c$ and examine the variations due to resetting. For a given initial condition, the transient time of the underlying process gradually increases as we increase $a$ towards $a_c$. This is shown in Fig. \ref{fig5}a, where $a$ assumes four different values $-0.002, -0.009, -0.02, -0.04$; clearly, as $|a|$ increases, the decay rate of the oscillation increases and we see a faster convergence (i.e., a shorter transient time) to the steady state (see inset in Fig. \ref{fig5}a).
To incorporate restarts, we apply to this dynamics the same protocol as outlined in the main text (resetting onto the control line that passes through the equilibrium point $P$), now with $a$ close to $a_c$. In Fig. \ref{fig5}b, we have plotted $\langle TT_R \rangle$ as a function of $\langle R \rangle$ [$a=-0.002$ (blue), $-0.009$ (red), $-0.02$ (black), and $-0.04$ (cyan)] when the resetting is exponential. The plot clearly shows that $\langle TT_R \rangle$ is significantly reduced near the critical transition. Moreover, in each of the cases above, we find an optimal resetting time $\langle R^* \rangle$ which minimizes $\langle TT_R \rangle$ (see Table I for exponential and Table II for sharp resetting, with the mean transient times at optimality). Fig. \ref{fig5}c shows the corresponding plot for the sharp resetting case, where we find the behavior of $\langle TT_R \rangle$ to be similar. The oscillatory behavior discussed in Sec. IV is again observed for sharp resetting. \subsection{System $\M_2$} In the Lorenz system, it is known that the Rayleigh number $\rho$ marks the critical transition between the chaotic transient phase and the chaotic attractor \cite{yorkeprl1986,yorke1979}. For fixed parameters $\sigma=10$ and $\beta=\frac{8}{3}$, the system exhibits transient chaos in the range $\rho \in (1.926,24.06)$, and the transition to a chaotic attractor takes place at $\rho_c=24.06$. To demonstrate the effects of resetting near the critical transition, we take four different values of $\rho$ and plot the trajectories for each of them. We show in Fig.\ \ref{fig5}d trajectories in the $x$-coordinate as a function of time for $\rho=24$ (blue), $21$ (red), $18$ (black), and $15$ (cyan). The chaotic transient phase persists longer as $\rho$ approaches $\rho_c$. To illustrate the effects of resetting, we plot $\langle TT_R \rangle$ as a function of $\langle R \rangle$ for each of these cases (taking a control line which passes through both equilibrium points $P_1$ and $P_2$). Both for exponential (Fig.\ \ref{fig5}e) and sharp resetting (Fig.\ \ref{fig5}f), we observe that resetting reduces the transient time, which would otherwise be significantly higher and even divergent close to $\rho_c$. Moreover, the emergence of an optimal resetting rate $\langle R^* \rangle$ was observed in each case (see Table I for exponential and Table II for sharp resetting, with the mean transient times at optimality). \begin{center} Table I: Mean transient time at optimality, $\langle TT_R^* \rangle$, for exponential resetting. \\ \begin{tabular}{ |c|c|c|c| } \hline $a$ & $\langle TT_R^* \rangle$ & $\rho$ & $\langle TT_R^* \rangle$\\ \hline {$-0.04$} & 35.40 & {$15$} & 6.16 \\ \hline {$-0.02$} & 36.63 & {$18$} & 5.70 \\ \hline {$-0.009$} & 37.34 & {$21$} & 5.30 \\ \hline {$-0.002$} & 37.81 & {$24$} & 5.05\\ \hline \end{tabular} \end{center} \begin{center} Table II: Mean transient time at optimality, $\langle TT_R^* \rangle$, for sharp resetting. \\ \begin{tabular}{ |c|c|c|c| } \hline $a$ & $\langle TT_R^* \rangle$ & $\rho$ & $\langle TT_R^* \rangle$\\ \hline {$-0.04$} & 12.905 & {$15$} & 3.59\\ \hline {$-0.02$} & 13.02 & {$18$} & 2.55\\ \hline {$-0.009$} & 13.07 & {$21$} & 1.74\\ \hline {$-0.002$} & 13.10 & {$24$} & 2.59\\ \hline \end{tabular} \end{center} \vspace{1cm} \noindent Finally, we conclude this section by reemphasizing that resetting has a strong impact on the average transient time even close to the critical transition. In particular, resetting lowers the mean transient time near the critical point, where it would otherwise be large or divergent. It is worth emphasizing that resetting also strongly regulates the fluctuations near the critical transition.
We refer to the barplot in Fig.\ \ref{fig6}, which clearly shows that there is a significant reduction in fluctuations even when we tune the parameters very close to the critical transition. A consistent limit is obtained for $\langle R \rangle \geq 10$, where the system behaves as it would in the absence of resetting. \begin{figure}[t] \centerline{ \includegraphics[height=8cm, width=12cm]{SMFigparametersfluctuationsV2.jpg}} \caption{Fluctuation regulation by resetting near the critical transition. In this figure, we present a bar plot comparison between the fluctuations of the original and the reset-induced dynamics. For $\M_1$, with resetting conducted at $\langle R \rangle=1$, we observe that both exponential (panel a) and sharp (panel b) resetting strategies reduce the fluctuations even when we are close to the critical transition $a=a_c=0$. A similar bar plot is laid out for $\M_2$, but resetting here was conducted at $\langle R \rangle=0.1$. Here too, we find that resetting remains beneficial for reducing fluctuations as we scan $\rho$ toward its critical value $\rho_c=24.06$. } \label{fig6} \end{figure} \section{Computational method}\label{computational} \noindent In this section, we briefly discuss the computational method that has been used to gather statistics and perform averaging of the transient time under the exponential (stochastic) and sharp (deterministic) resetting strategies. \begin{itemize} \item {\bf {\it Step I.} Fix the target:} First, we determine the equilibrium points $\mathcal{A}$ of the given differential equation. There can be many equilibrium points in the system, but we may choose one or more of them as target points. For brevity, let us denote the specific targeted fixed point by $\mathbf{x_f}$. \item {\bf {\it Step II.} Integration scheme:} To integrate the deterministic model, we choose a random initial condition, say $\bf x_{0}$, from the basin of attraction $\mathcal{B_A}$ at the initial time $T_0$. The $4$th-order Runge{-}Kutta method is used to simulate the system with fixed step length $h=0.01$. A sufficient number of data points is generated such that the trajectory reaches its target within a close vicinity measured by $\epsilon=10^{-9}$, i.e., it satisfies Eq.\ \ref{tt} in the main text. \item {\bf {\it Step III.} Generating resetting times:} Starting from $T_0$, we now evolve the dynamics under the resetting mechanism. Resetting events occur at times $T_{1}, T_{2}, T_{3},...$, where the durations between two consecutive events ($\Delta_T: \{T_1-T_0, T_2-T_1, T_3-T_2,...\}$) are extracted from an exponential distribution \begin{align} f_R(\Delta_T)=\langle R \rangle ^{-1} e^{- \frac{\Delta_T}{\langle R \rangle}},~~~\text{where}~\langle R \rangle~~\text{is the mean,} \end{align} and from a periodic distribution for sharp resetting \begin{align} f_R(\Delta_T)=\delta(\Delta_T-\langle R \rangle),~~~\text{where}~\langle R \rangle~~\text{is the fixed time period}~. \end{align} For the numerical scheme, the resetting times were generated at the discrete points $\frac{1}{h}\times \{T_{1}, T_{2}, T_{3},...\}$. \item {\bf {\it Step IV.} Fixing a control line:} An arbitrary point $\bf x_{c}$ is randomly chosen from the basin of attraction, and we draw a straight line passing through $\bf x_{c}$ and any of the equilibrium point(s), say, $\bf x_{f}$. This control line is kept fixed for the entire scanning process. We have scanned the transient times of $5 \times 10^6$ initial states for each $\langle R \rangle$.
\item {\bf {\it Step V.} Projection procedure:} To describe the projection, or resetting to the control line, let us first assume that a resetting event occurs at some time $T_i$, at which moment the coordinate of the trajectory is ${\bf x_1}$. To decide where on the control line to reset, we choose the point $\bf{x_2}$ on the control line such that the line passing through ${\bf x_1}$ and ${\bf x_2}$ is perpendicular to the control line; that is, ${\bf x_2}$ is the orthogonal projection of ${\bf x_1}$ onto the control line. We then project the coordinate ${\bf x_1}$ to ${\bf x_2}$. This process is repeated for the other resetting events. \item {\bf {\it Step VI.} Calculation of transient time:} We stop our simulation after reaching ${\bf x_n}$ at the $n$-th iteration only if the condition $||{\bf x_f}-{\bf x_{n}}||<\epsilon~(=10^{-9})$ (see Sec. I and Eq.\ (2) in the main text) is satisfied. The transient time is then $TT=n\times h$. This time is random, and we generate a histogram of the transient time from many such realizations. \end{itemize} Following Steps I-VI, we collect data for the required observables and investigate various statistical properties; a minimal end-to-end sketch of these steps is given at the end of this supplement. \begin{figure}[ht] \centerline{ \includegraphics[scale=0.325]{SMtablemaintextV2.jpg}} \caption{ Numerical values for the mean and fluctuations, as pointed out in the main text. In $\M_1$, both resetting strategies (exponential and sharp) were conducted at $\langle R \rangle=1$ (also see Fig.\ 2e in the main text for exponential resetting). In $\M_2$, everything was similar but we took $\langle R \rangle=0.1$ (also see Fig.\ 2j in the main text for exponential resetting). In both cases (exponential and sharp), the order of improvement in the mean and fluctuations is as mentioned in the main text.} \label{figtable} \end{figure} \section{Summary of the numerical values used in the main text} In this section, we provide the numerical values of the mean and fluctuations for exponential and sharp resetting, as discussed in the main text. We refer to Fig. \ref{figtable}, which contains a table listing the exact values.
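\noindent For concreteness, we close with a minimal end-to-end sketch of Steps I--VI for the Lorenz case $\M_2$. It is illustrative only: the system parameters, the control line (through $P_1$ and the origin), and the number of realizations are stand-ins, and the reader should substitute the settings quoted in the main text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigma, rho, beta = 10.0, 23.0, 8.0 / 3.0   # illustrative M_2 parameters
h, eps = 0.01, 1e-9                        # step length and target vicinity

def f(v):                                  # Lorenz vector field
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(v):                           # Step II: 4th-order Runge-Kutta
    k1 = f(v); k2 = f(v + 0.5 * h * k1)
    k3 = f(v + 0.5 * h * k2); k4 = f(v + h * k3)
    return v + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x_f = np.array([7.65942, 7.65942, 22.0])   # Step I: targeted fixed point P_1
x_c = np.array([0.0, 0.0, 0.0])            # Step IV: second point on the line
d = (x_f - x_c) / np.linalg.norm(x_f - x_c)  # unit direction of the line

def project(v):                            # Step V: orthogonal projection
    return x_c + np.dot(v - x_c, d) * d

def transient_time(v0, R, sharp=False, n_max=10**6):
    """Steps III+VI: evolve with resets; stop when ||v - x_f|| < eps."""
    draw = (lambda: R) if sharp else (lambda: rng.exponential(R))
    v = v0.copy()
    next_reset = max(1, round(draw() / h))   # reset times on the discrete grid
    for n in range(n_max):
        if np.linalg.norm(v - x_f) < eps:
            return n * h                     # Step VI: TT = n*h
        if n == next_reset:
            v = project(v)                   # reset onto the control line
            next_reset += max(1, round(draw() / h))
        v = rk4_step(v)
    return np.nan   # settled elsewhere (e.g., on P_2) or exceeded n_max

tts = [transient_time(rng.uniform(-10.0, 10.0, 3), R=0.1) for _ in range(10)]
print(np.nanmean(tts))                       # mean transient time <TT_R>
\end{verbatim}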
{ "timestamp": "2020-07-20T02:03:57", "yymm": "2007", "arxiv_id": "2007.08642", "language": "en", "url": "https://arxiv.org/abs/2007.08642" }
\section{Introduction} Debuting with the first detection of a stellar-mass black hole binary (SBHB) in 2015 \cite{Abbott:2016blz}, the LIGO/Virgo collaboration has issued the first catalog of the gravitational wave (GW) sources identified during the first and second observing runs (O1 and O2) \cite{LIGOScientific:2018mvr,Abbott:2017oio,Abbott:2017gyy,TheLIGOScientific:2016pea,TheLIGOScientific:2017qsa,Abbott:2017vtc} and four noteworthy detections from the third observing run (O3) \cite{LIGOScientific:2020stg,Abbott:2020uma,GW190814,Abbott:2020tfl}. These observations of GWs in the $10$--$1000$ Hz band, which include 11 SBHB mergers and two binary neutron star (BNS) mergers, have inaugurated the era of GW astronomy and opened a new window onto the Universe, allowing us to infer for the first time the properties of the population of compact binaries \cite{LIGOScientific:2018jsj,Abbott:2016ymx} and providing new tests of general relativity (GR) \cite{LIGOScientific:2019fpa,TheLIGOScientific:2016src,Abbott:2018lct}. The Laser Interferometer Space Antenna (LISA) \cite{AHO17}, scheduled for launch in 2034, will observe GWs in a different frequency band (the $\rm mHz$ band) and will therefore complement ground-based detectors. The strongest anticipated GW sources in the LISA data will be massive black hole binaries (MBHBs), with total mass in the range $10^4$--$10^7 M_{\odot}$ \cite{Klein:2015hvg}, and galactic white dwarf binaries (GBs). The latter are so numerous that they will form a stochastic foreground signal dominating over the instrumental noise, in addition to a smaller number ($\sim 10^4$) of individually resolvable binaries \cite{Korol:2017qcx}. SBHBs with total masses as large as those observed by LIGO and Virgo could also be detected by LISA during their early inspiral phase, long before entering the frequency band of ground-based detectors and merging \cite{Sesana:2016ljz}. SBHBs in the ${\rm mHz}$ band can be at very different stages of their evolution, ranging from almost monochromatic sources to chirping sources that leave the LISA band during the mission \cite{Sesana:2016ljz}. We focus on resolvable SBHBs, which are among the best candidates for multiband observations \cite{Sesana:2016ljz,AmaroSeoane:2009ui}. Although SBHBs are not LISA's main target, the scientific potential of multiband observations with LISA and ground-based detectors is considerable. These could be used to probe low-frequency modifications due to deviations from GR \cite{Toubiana:2020vtf,Sesana:2016ljz,Gnocchi:2019jzp,Carson:2019rda,Vitale:2016rfr,Barausse:2016eii,Chamberlain:2017fjl} or to environmental effects \cite{Caputo:2020irr,Barausse:2014tra,Barausse:2014pra,Tamanini:2019usx,Cardoso:2019rou}, to facilitate electromagnetic follow-up observations \cite{Caputo:2020irr}, or simply to improve parameter estimation (PE) over what is possible with ground-based interferometers alone \cite{Sesana:2016ljz,Vitale:2016rfr}. More precise measurements would also sharpen tests of competing astrophysical formation models. Several scenarios have been suggested for the formation of SBHBs, such as stellar evolution of field binaries versus dynamical formation channels \cite{Postnov:2014tza,Benacquista:2011kv}. Moreover, the possibility that these black holes (BHs) are of primordial origin~\cite{Bird:2016dcv} cannot be completely discarded.
The various possible formation channels typically predict different distributions for the parameters of SBHBs, especially the eccentricity and the spin orientations/magnitudes~\cite{Antonini:2012ad,Samsing:2017xmd,Gerosa:2018wbw}, providing discriminating power in astrophysical model selection. A recent study has suggested that even a few ``special'' events, binaries with high primary mass and/or spin, would have a huge discriminating power \cite{Baibhav:2020xdf}. The specific impact of observations of SBHBs with LISA on astrophysical inference was considered in \cite{Gerosa:2019dbe,Samsing:2018isx,Nishizawa:2016jji,Nishizawa:2016eza,Breivik:2016ddj}. LISA will observe MBHBs somewhere between a few days and a few months before their merger, i.e.,~in their final, rapidly evolving inspirals \cite{Klein:2015hvg}. By contrast, GBs are slowly evolving, almost monochromatic sources, and they will remain in band during the whole LISA mission \cite{Timpano:2005gm}. Resolvable SBHB signals will fall in between these two behaviors: they are long-lived sources, but they are not monochromatic, and some SBHBs can chirp and leave the LISA band. In addition, all resolvable SBHBs are clustered at the high end of LISA's sensitive band. Thus, SBHBs will produce very peculiar signals of great diversity. In this work we do not address the question of how to detect these sources, although it has been argued to be challenging~\cite{Moore:2019pke}. Instead, we assume that we have at our disposal an efficient detection method, and we focus on inferring the parameters of the detected signals. We also consider only one signal at a time, while in reality the SBHB signals will be superposed in the LISA data stream. The PE study presented here will also be valuable in building search tools, as we discuss at the end of the paper. Most previous studies of PE for SBHBs with LISA relied on the Fisher approach and used simple approximations to LISA's response to GW signals. While a quick and efficient method for forecasting studies, the Fisher approach might not be suited to systems with low signal-to-noise ratio (SNR) and non-Gaussian parameter distributions \cite{Vallisneri:2007ev}. In addition, SBHB signals are long-lived and emit at wavelengths comparable to LISA's size. As a result, the commonly used long-wavelength approximation, also called the low-frequency approximation \cite{Cutler:1997ta}, might not hold and could seriously bias the PE \cite{Vecchio:2004vt,Vecchio:2004ec}. In this work we thus consider the full LISA response as described in \cite{Marsat:2018oam, Marsat:2020rtl} and perform a full Bayesian analysis in zero noise for all the systems we consider. We also provide a comparison to Fisher-matrix-based PE and briefly comment on the impact of using the long-wavelength approximation. Despite the ongoing effort to infer the astrophysical formation channels of the SBHBs observed by LIGO/Virgo, a huge uncertainty still remains. The situation will improve as we detect more signals. We expect that third-generation detectors (Einstein Telescope \cite{Hild_2011,Punturo:2010zz,Ballmer:2015mvn}, Cosmic Explorer \cite{Abbott_2017_2}) will be operational in parallel with LISA, with SNR figures reaching hundreds or thousands, thus significantly improving the PE over current observations.
Given the current uncertainty in the population of SBHBs, we cannot reliably specify the properties of the most detectable sources \cite{Sesana:2016ljz,Samsing:2018nxk,Kremer:2018cir,Wong:2018uwb,Gerosa:2019dbe,Kyutoku:2016ppx,Moore:2019pke,Gupta:2020lxa}. In our study we focus on a fiducial system consistent with the population of currently observed SBHBs, instead of working with a randomized catalog of sources. We then perform a systematic scan of the parameter space by varying a few parameters at a time, investigating their qualitative impact on the PE. We consider quasicircular binaries consisting of spinning BHs, with the spins aligned or antialigned with the orbital angular momentum, merging no later than twenty years from the beginning of observations. We start with a GW150914-like system \cite{Abbott:2016blz} and explore the parameter space by varying at most three parameters at a time. For each of these systems we infer the posterior distribution, and we discuss the correlations between the parameters and the accuracy in measuring each parameter across the parameter space. The paper is organized as follows. In Sec.~\ref{analysis} we describe how we generate GW signals and our tools to perform PE. We then give details on how we choose all the systems on which we perform PE in Sec.~\ref{setups}. A detailed description and analysis of the PE results are given in Sec.~\ref{results}. There we also provide a comparison to a slightly modified version of the Fisher matrix analysis and assess the validity of the long-wavelength approximation for the PE. Finally, in Sec.~\ref{ccl} we discuss the scientific opportunities offered by LISA observations of SBHBs in light of our results. \section{Analysis method}\label{analysis} \subsection{Bayesian framework} Data measured by LISA ($d$) will consist of a superposition of GW signals ($s$) and a noise realization ($n$): $d=s+n$. The instantaneous amplitude of a GW signal is much lower than the noise, making its detection very challenging. We use matched filtering as the main detection technique: the main idea is to search for a specific pattern (a GW template) in the data \cite{Allen:2005fk}. This is done by correlating the data with a set of GW templates in the frequency domain ($\tilde{h}(f, \theta)$, which are functions of the source parameters $\theta$). This correlation is given by the matched-filter overlap \begin{equation} (d|h) = 4 {\mathcal Re} \left(\int_0^{+\infty} \frac{\tilde{d}(f) \ \tilde{h}^*(f)}{S_n(f)} {\rm d}f \right) \label{inner_product}, \end{equation} where $S_n(f)$ is the power spectral density (PSD) of the detector noise, assumed to be stationary. In this work, we use the LISA ``proposal'' noise model given in \cite{AHO17}. We do not discuss the detectability of SBHBs in this paper; we assume that all sources discussed here can be detected, and we focus on the parameter extraction/estimation. We should note that the detection itself could be a challenge, at least for the traditional method of template banks~\cite{Moore:2019pke}, and that some mergers might only be detectable retroactively, after being discovered by third-generation ground-based detectors \cite{Wong:2018uwb,Gupta:2020lxa,Ewing:2020brd}. We work in a Bayesian framework for the PE, treating the set of parameters of the source, $\theta$, as random variables. Bayes' theorem tells us that, given the observed data $d$, the posterior distribution $p(\theta|d)$ is given by: \begin{equation} p(\theta|d)=\frac{p(d|\theta)p(\theta)}{p(d)} \label{bayes} \,.
\end{equation} On the right-hand side of this equation, $p(d|\theta)$ is the likelihood, $p(\theta)$ is the prior distribution, and $p(d)$ is the evidence. As the noise and GW signal models will be fixed in this study, the evidence can be seen as a normalization constant that does not need an explicit calculation. Assuming the noise to be stationary and Gaussian, the likelihood is given by: \begin{equation} \mathcal{L} = p(d|\theta) = \exp \left[ -\frac{1}{2} (d-h(\theta)|d-h(\theta)) \right] \,. \end{equation} In order to speed up the computation, we set the noise realization to zero, $n=0$, so that $d=s$. The addition of noise to the GW signal is not expected to drastically affect the PE, leading at most to a displacement of the centroid of the posterior distribution within the confidence intervals (CIs) (with a probability set by the CI level). Thus, the analysis of the posterior distribution itself should remain representative in the presence of noise (still assuming Gaussianity). We consider only one source at a time and we neglect all possible systematic errors due to signal mismodeling: $s=h(\theta_0)$, with $\theta_{0}$ the parameters of the GW source. Under these simplifications, the log-likelihood is given by (up to a normalization constant in $\mathcal{L}$): \begin{equation} \log \mathcal{L}=-\frac{1}{2}(h(\theta_{0})-h(\theta)|h(\theta_0)-h(\theta)) \,. \label{loglike} \end{equation} \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline Symbol & Meaning & Expression \\ \hline $m_1$ & Mass of the primary BH & / \\ $m_2$ & Mass of the secondary BH & / \\ $\mathcal{M}_c$ & Chirp mass & $\mathcal{M}_c=\left ( \frac{m_1^{3}m_2^{3}}{m_1+m_2} \right )^{1/5}$ \\ $\eta$ & Symmetric mass ratio & $\eta=\frac{m_1m_2}{(m_1+m_2)^2}$ \\ $q$ & Mass ratio & $q=\frac{m_1}{m_2}$ \\ $M$ & Total mass & $M=m_1+m_2=\mathcal{M}_c \eta^{-3/5}$ \\ \multirow{2}{*}{$\chi_1$} & Spin of the primary BH along & \multirow{2}{*}{/} \\ & the orbital angular momentum & \\ \multirow{2}{*}{$\chi_2$} & Spin of the secondary BH along & \multirow{2}{*}{/} \\ & the orbital angular momentum & \\ $\chi_+$ & Effective spin & $\chi_+=\frac{m_1\chi_1+m_2\chi_2}{m_1+m_2}$ \\ $\chi_-$ & Antisymmetric spin combination & $\chi_-=\frac{m_1\chi_1-m_2\chi_2}{m_1+m_2}$ \\ \multirow{2}{*}{$\chi_{\rm PN}$} & \multirow{2}{*}{1.5 PN spin combination} & $\chi_{\rm PN}=\frac{\eta}{113} [ (113q+75)\chi_1 $ \\ & & $+(\frac{113}{q}+75)\chi_2 ]$ \\ $f_0$ & Initial frequency & / \\ $t_c$ & Time to coalescence & / \\ $\lambda$ & Longitude in the SSB frame & / \\ $\beta$ & Latitude in the SSB frame & / \\ $\psi$ & Polarization angle & / \\ $\varphi$ & Initial phase & / \\ $\iota$ & Inclination & / \\ $D_L$ & Luminosity distance in Mpc & / \\ $z$ & Redshift & $z(D_L)$ \cite{Aghanim:2018eyx} \\ \hline \end{tabular} \end{center} \caption{Parameters used throughout the paper and their explicit expressions when necessary.}\label{acronyms} \end{table} Since LISA will only observe the inspiral phase of these binaries, we expect the dominant 22 mode to be sufficient, and we neglect the contribution of all other subdominant harmonics. We use the model called PhenomD \cite{Husa:2015iqa,Khan:2015jqa} to generate $\tilde{h}_{2,\pm2}$ and compute the LISA response to generate the time delay interferometry (TDI) observables $A$, $E$, and $T$ (see, e.g., \cite{Tinto:2004wu}) as described in~\cite{Marsat:2018oam, Marsat:2020rtl}.
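On a discrete frequency grid, the overlap of Eq.~\eqref{inner_product} and the zero-noise log-likelihood of Eq.~\eqref{loglike} reduce to simple sums. The sketch below uses a toy power-law PSD and a Newtonian-amplitude toy template in place of the LISA PSD and PhenomD (both placeholders, not the models used in this work), and a single data stream rather than the sum over the $A$, $E$, $T$ observables:
\begin{verbatim}
import numpy as np

def inner(a, b, Sn, df):
    """Discrete version of Eq. (inner_product): 4 Re sum a b* / Sn * df."""
    return 4.0 * np.real(np.sum(a * np.conj(b) / Sn)) * df

def log_like(h_temp, h_inj, Sn, df):
    """Zero-noise log-likelihood, Eq. (loglike): -(1/2)(h0 - h | h0 - h)."""
    r = h_inj - h_temp
    return -0.5 * inner(r, r, Sn, df)

# Toy frequency grid, PSD, and template (placeholders only):
f = np.linspace(1e-2, 1e-1, 10**4)
df = f[1] - f[0]
Sn = 1e-40 * (1.0 + (f / 3e-2) ** 4)                      # hypothetical PSD
h0 = 1e-21 * f ** (-7.0 / 6.0) * np.exp(-1j * 1e4 * f ** (-5.0 / 3.0))
print("SNR =", np.sqrt(inner(h0, h0, Sn, df)))            # SNR^2 = (s|s)
print("logL at injection =", log_like(h0, h0, Sn, df))    # 0 by construction
\end{verbatim}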
The three TDI observables constitute independent datasets; therefore, the log-likelihood is actually a sum of three terms like Eq.~\eqref{loglike}, one per TDI observable. Similarly to the treatment of galactic binaries, we parametrize the sources by their initial frequency and phase at the start of the observation, instead of the time to coalescence and the phase at coalescence (more suitable in LIGO/Virgo data analysis or for MBHBs with LISA). We define the initial time as the moment LISA starts observing the system. A system is characterized by (i) five intrinsic parameters: the masses ($m_1$ and $m_2$), the GW frequency at which LISA starts observing the system ($f_0$), and the spins ($\chi_1$ and $\chi_2$); and (ii) six extrinsic parameters: the position in the sky defined in the solar system barycenter (SSB) frame ($\lambda$ and $\beta$), the polarization angle ($\psi$), the azimuthal angle of the observer in the source frame ($\varphi$), the inclination of the orbital angular momentum with respect to the line of sight ($\iota$), and the luminosity distance to the source ($D_L$). We need only two out of the six general spin parameters to describe the system because we assume each spin to be aligned (or antialigned) with the orbital angular momentum. We introduce a set of sampling parameters for which we expect the posterior distribution to be a simple function, i.e.~close to either a uniform or a Gaussian distribution, based on the properties of post-Newtonian (PN) inspiral waveforms \cite{Blanchet:2002av,Buonanno:2006ui,Buonanno:2009zt}. These parameters are $\theta=(\mathcal{M}_c, \eta, f_0, \chi_+, \chi_-, \lambda, \sin(\beta), \psi, \varphi, \cos (\iota), \log_{10}(D_L))$, where $\mathcal{M}_c=\frac{m_1^{3/5}m_2^{3/5}}{(m_1+m_2)^{1/5}}$ is the chirp mass, $\eta=\frac{q}{(1+q)^2}$ is the symmetric mass ratio with $q=\frac{m_1}{m_2} > 1$ being the mass ratio, $\chi_+=\frac{m_1 \chi_1+ m_2 \chi_2}{m_1+m_2}$ is the effective spin (often denoted $\chi_{\rm eff}$ in the literature), and $\chi_-=\frac{m_1 \chi_1-m_2 \chi_2}{m_1+m_2}$ is an antisymmetric spin combination. For easier reference, we list in Table \ref{acronyms} the parameters used throughout this paper, together with their explicit expressions. Some of these combinations will be introduced later in the paper. For the simulated data we assume a sampling rate of $1 \ {\rm Hz}$, so the Nyquist frequency is $f_{\rm Ny}=0.5 \ {\rm Hz}$. When computing the inner products given by Eq.~\eqref{inner_product}, we generate templates from $f_0$ up to $f_{\rm max} = {\rm min}(f_{\rm Ny},f_{T_{\rm obs}})$, where $f_{T_{\rm obs}}$ is the frequency reached by the system after the observation time $T_{\rm obs}$. We consider two mission durations: $T_{\rm obs}=4$ yr and $T_{\rm obs}=10$ yr. Details on the fast LISA response generation and likelihood computation are given in \cite{Marsat:2020rtl}. Due to the high dimensionality of the problem, we need an efficient way to explore the parameter space. We do this by means of a Markov chain Monte Carlo (MCMC) algorithm \cite{Karandikar2006}; more specifically, we designed a Metropolis-Hastings MCMC (MHMCMC) \cite{10.2307/2684568} for this purpose, which we present next. A less costly alternative is PE based on the Fisher information matrix. We will show in the following subsections how one can modify the Fisher matrix to make it a robust PE tool.
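Before moving on, the mapping from the physical parameters to the sampling combinations defined above is elementary to implement; as a quick worked example (our own check, using the detector-frame values $m_1=40\,M_\odot$, $m_2=30\,M_\odot$, $\chi_1=0.6$, $\chi_2=0.4$ of the \emph{Fiducial} system defined in Sec.~\ref{setups}):
\begin{verbatim}
import numpy as np

def sampling_params(m1, m2, chi1, chi2):
    """Map physical (m1, m2, chi1, chi2) to the combinations of Table I."""
    M = m1 + m2
    Mc = (m1 * m2) ** 0.6 / M ** 0.2        # chirp mass
    eta = m1 * m2 / M ** 2                  # symmetric mass ratio
    chi_p = (m1 * chi1 + m2 * chi2) / M     # effective spin chi_+
    chi_m = (m1 * chi1 - m2 * chi2) / M     # antisymmetric combination chi_-
    return Mc, eta, chi_p, chi_m

print(sampling_params(40.0, 30.0, 0.6, 0.4))
# -> (Mc ~ 30.09 Msun, eta ~ 0.2449, chi_+ ~ 0.514, chi_- ~ 0.171)
\end{verbatim}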
We exploit the metric interpretation of the Fisher matrix in the MHMCMC; thus, we start by reviewing some basics of the Fisher matrix approach and delegate the comparison with Bayesian PE to Sec.~\ref{results_fm}. \subsection{Fisher matrix}\label{fisher_mat} In the Fisher matrix approach, the likelihood is approximated by a multivariate Gaussian distribution \cite{Vallisneri:2007ev}: \begin{equation} p(d|\theta) \propto e^{-\frac{1}{2}F_{ij}(\theta_{0})\Delta \theta_i \Delta \theta_j} \,, \end{equation} where $\Delta \theta=\theta-\theta_{0}$ and $F$ is the Fisher matrix given by: \begin{equation} F_{ij}(\theta)= \left . (\partial_i h | \partial_j h) \right |_{\theta} \label{def_fisher} \,, \end{equation} where the partial derivative $\partial_i$ denotes the derivative with respect to $\theta_i$. Similarly to the likelihood, we actually have a sum of three terms, one for each TDI observable; we assume this to be implicit in the following. The inverse of $F$ is the Gaussian covariance matrix of the parameters, which gives an estimate of the error on each parameter. The Fisher approach has been used extensively in studies of LISA's scientific capability, thanks to its simplicity. However, for systems with low SNR such as the ones we consider, the Fisher approximation might not be valid \cite{Vallisneri:2007ev}, and we need to perform a full Bayesian analysis. The Fisher matrix has an alternative interpretation: it can be seen as a metric on the parameter space associated with the distance defined by the inner product \eqref{inner_product}: \begin{align} ||h(\theta+\delta \theta)-h(\theta)||^2 = &(h(\theta+\delta \theta)-h(\theta)|h(\theta+\delta \theta) -h(\theta)) \nonumber\\ \simeq& \left . (\partial_i h \ \delta \theta^i | \partial_j h \ \delta \theta^j) \right |_{\theta} \nonumber\\ =& F_{ij}(\theta) \delta \theta^i \delta \theta^j. \end{align} We exploit this property in our MHMCMC sampler. \subsection{Metropolis-Hastings MCMC}\label{mhmcmc} We sample the \emph{target} distribution $p(\theta|d)$ by means of a Markov chain, generated from a transition function $P(\theta,\theta')$ satisfying the detailed balance condition: \begin{equation} p(\theta|d)P(\theta,\theta')=p(\theta'|d)P(\theta',\theta). \end{equation} We build the transition function from a proposal function $\pi$ such that $P(\theta,\theta')=\pi(\theta,\theta')a(\theta,\theta')$, where $a(\theta,\theta')$ is the acceptance ratio defined as: \begin{equation} a(\theta,\theta') = \left \{ \begin{array}{ll} \mathrm{min} \left( 1,\frac{p(\theta'|d)\pi(\theta',\theta)}{p(\theta|d)\pi(\theta,\theta')} \right) \ {\rm if} \ \pi(\theta,\theta') \neq 0 \\ 0 \ {\rm otherwise.} \end{array} \right. \label{acc_ratio} \end{equation} It is easy to verify that $P$ satisfies the detailed balance condition for any choice of $\pi$. In practice, a jump from a point $\theta$ to $\theta'$ is proposed using the function $\pi(\theta,\theta')$, and the new point is accepted with probability $a(\theta,\theta')$. If the point is not accepted, the chain remains at $\theta$. In both cases, the current state of the chain is recorded as a sample. By repeating this procedure, we obtain a sequence of samples of the target distribution. We see from the expression of the acceptance ratio that points with higher posterior density are more likely to be accepted; thus the chain tends to move towards regions of higher posterior density, while exploring all regions of the parameter space compatible with the observed data.
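As an illustration of the algorithm just described, here is a minimal Metropolis sampler with a symmetric Gaussian proposal, run on a toy two-dimensional Gaussian target standing in for $\log p(\theta|d)$ (the actual analysis uses Fisher-based proposals and the full LISA likelihood):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    """Toy target standing in for log p(theta|d): a correlated 2D Gaussian."""
    Cinv = np.array([[2.0, -1.5], [-1.5, 2.0]])
    return -0.5 * theta @ Cinv @ theta

def metropolis(log_p, theta0, step, n_steps):
    """Metropolis MCMC: symmetric proposal, accept with prob min(1, p'/p)."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_p(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)  # symmetric pi
        lp_prop = log_p(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # acceptance ratio a
            theta, lp = prop, lp_prop
        chain[i] = theta            # on rejection, the chain repeats theta
    return chain

chain = metropolis(log_post, np.zeros(2), step=0.5, n_steps=20000)
print(np.cov(chain[5000:].T))       # approaches inv(Cinv)
\end{verbatim}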
In theory, the chain should converge regardless of the proposal function and the starting point, but in practice it may take an inconveniently long time to do so unless the proposal and starting point are chosen wisely. Since we are interested in high-posterior regions, we start the chain from the true signal parameters, i.e.~the maximum-likelihood point. The maximum-likelihood point coincides with the maximum-posterior point if all priors are flat, but this is not true in general, and the maximum-posterior point can depend on the adopted prior distribution. Even though the posterior does not depend on the proposal, the convergence, the efficiency, and the resolution of the tails of the distribution do depend very strongly on the particular choice; ideally, an efficient proposal should closely resemble the target posterior distribution. Thus, most of the work goes into building an efficient proposal function. Note that for a symmetric proposal ($\pi(\theta,\theta')=\pi(\theta',\theta)$), the acceptance ratio is simply given by the ratio of the posterior distributions. This specific case is called Metropolis MCMC \cite{Metropolis:1953am} and is the one we consider. Our runs are done in two steps: we first run a short MCMC chain ($\simeq 10^5$ points) to explore the parameter space, and then use the covariance matrix of the points obtained from this chain to build a multivariate Gaussian proposal that we use in a longer chain. During the first stage, called burn-in, we use a block-diagonal covariance matrix. We split the set of parameters into three groups: the intrinsic parameters ($\mathcal{M}_c$, $\eta$, $f_0$, $\chi_+$, $\chi_-$), the angles except the inclination ($\lambda$, $\sin(\beta)$, $\psi$, $\varphi$), and the inclination and distance ($\cos (\iota)$, $\log_{10}(D_L)$). Each block is computed by inverting the Fisher matrix of that group of parameters. This separation is based on the intuition, well verified in practice, that the stronger correlations are within these groups of parameters, and it is intended to avoid numerical instabilities that may arise when dealing with full Fisher matrices. Note that by making this choice we do not discard possible correlations between parameters of different groups; we are simply not taking them into account when proposing points based on the Fisher matrix. If those correlations exist, they should appear in the resulting covariance matrix that we use to build the proposal for the main chain. Failing to include existing correlations could reduce the efficiency of our sampler in its exploratory, or burn-in, phase; however, the splitting can easily be adapted if needed. We rotate the current state vector $\theta$ to the basis of the covariance matrix's eigenvectors. In this basis the covariance matrix is diagonal, formed by the eigenvalues of the covariance matrix in the original basis. Because for some parameters the distribution is very flat, the eigenvalues of the covariance matrix predicted by the Fisher approach can be very large, reducing the efficiency of the sampler. This is usually the case for poorly constrained but bounded parameters like $\cos (\iota)$ and the spins. To avoid this issue, we truncate the eigenvalues of the ($\cos (\iota)$, $\log_{10}(D_L)$) block and define an effective Fisher matrix accounting for the finite extent of the prior on the spins: $F_{\rm eff} = F + F^{\rm p}$. We take $F^{\rm p}_{\chi_+,\chi_+} = F^{\rm p}_{\chi_-,\chi_-} = \frac{1}{\sigma^2}$ with $\sigma=0.5$. We motivate this choice in Sec.~\ref{results_fm}.
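The construction of the Gaussian proposal from a Fisher block, including the prior regularization $F_{\rm eff}=F+F^{\rm p}$ and the eigenvalue truncation, can be sketched as follows (a toy, nearly degenerate $2\times 2$ spin block; the numbers are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def fisher_proposal(F, prior_sigma=None, max_eigval=None):
    """Build a Gaussian jump proposal from one Fisher block.

    prior_sigma : dict {index: sigma}, adds F^p_ii = 1/sigma^2 (finite prior);
    max_eigval  : cap on covariance eigenvalues (truncates flat directions).
    """
    F_eff = F.copy()
    for i, s in (prior_sigma or {}).items():
        F_eff[i, i] += 1.0 / s ** 2               # effective Fisher F + F^p
    w, V = np.linalg.eigh(np.linalg.inv(F_eff))   # covariance eigenbasis
    if max_eigval is not None:
        w = np.minimum(w, max_eigval)             # eigenvalue truncation
    return lambda: V @ (np.sqrt(w) * rng.standard_normal(w.size))

# Toy block for (chi_+, chi_-), regularized by sigma = 0.5 as in the text:
F_spin = np.array([[40.0, 39.0], [39.0, 38.5]])
jump = fisher_proposal(F_spin, prior_sigma={0: 0.5, 1: 0.5})
print(np.cov(np.array([jump() for _ in range(20000)]).T))
\end{verbatim}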
In order to improve the sampling efficiency in the presence of complicated correlations between intrinsic parameters, we exploit the metric interpretation of the Fisher matrix and occasionally recompute the covariance matrix of the first group of parameters, with a given probability. By doing so we might violate the detailed balance condition, but this is only done during the burn-in stage (exploration of the parameter space); the resulting points are then discarded from the analysis. We test the convergence of the chains by running multiple chains with different random number generator seeds, checking that they all give similar distributions, and computing the Gelman-Rubin diagnostic \cite{Gelman:1992zz} for all the parameters. Potential scale reduction factors below 1.2, such as the ones we obtain, indicate that the chains have converged \cite{Gelman:1992zz}. For each chain we accumulate $10^3$--$10^4$ independent samples (by thinning the full chain by the autocorrelation length), which takes $4$--$7$ hours on a single CPU thanks to the fast likelihood computation and LISA response generation presented in \cite{Marsat:2020rtl}. \section{Setups}\label{setups} \subsection{Systems} We start by considering a system with masses and spins compatible with the first detected GW signal (GW150914) and label it the \emph{Fiducial} system. Its parameters are given in Table \ref{params_fiducial}, along with its SNR assuming LISA mission lifetimes $T_{\rm obs}=4\mathrm{yr}$ and $T_{\rm obs}=10\mathrm{yr}$. We give both the detector-frame and source-frame masses, related by $m=(1+z)m_{s}$, where $z$ is the cosmological redshift and the subscript $s$ denotes parameters in the source frame. We adopt the cosmology reported by the Planck mission (2018) \cite{Aghanim:2018eyx}. Note that $T_{\rm obs}$ is the mission duration, not the time spent by the system in the LISA band, and we assume an ideal 100\% duty cycle. The initial frequency is derived from the time to coalescence from the beginning of LISA observation ($t_c$), which we fix to eight years for the \emph{Fiducial} system. Thus, with $T_{\rm obs} = 4$ yr the \emph{Fiducial} system is observed for a fraction of its inspiral, while with $T_{\rm obs} = 10$ yr the same system is observed for eight years before exiting the LISA band and coalescing. The sky location is given in the SSB frame. In the following, the subscript $f$ refers to the \emph{Fiducial} system. We explore the parameter space of SBHBs by changing a few parameters of the \emph{Fiducial} system at a time. We list all the systems we consider in the following subsections, specifying the changes with respect to the \emph{Fiducial} system and the corresponding labels. For all systems we consider the two possible mission durations quoted above, unless another choice is specified. In Table \ref{systems} we show the considered systems and their respective SNRs. Note that we chose to use the LISA proposal noise level \cite{AHO17}, which does not include a 50\% margin introduced to form the ``science requirements'' SciRDv1~\cite{scirdv1}. The SNRs would thus be significantly lower with SciRDv1. From the point of view of the PE, using one or the other noise model amounts to a constant rescaling of the noise PSD $S_{n}$, with the same effect as rescaling the distance to the source.
\begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline $m_1$ (M\textsubscript{\(\odot\)}) & \multicolumn{2}{|c|}{$40$} \\ \hline $m_2$ (M\textsubscript{\(\odot\)}) & \multicolumn{2}{|c|}{$30$} \\ \hline $m_{1,s}$ (M\textsubscript{\(\odot\)}) & \multicolumn{2}{|c|}{$36.2$} \\ \hline $m_{2,s}$ (M\textsubscript{\(\odot\)}) & \multicolumn{2}{|c|}{$27.2$} \\ \hline $t_c$ (yrs) & \multicolumn{2}{|c|}{$8$} \\ \hline $f_0$ (mHz) & \multicolumn{2}{|c|}{$12.7215835397$} \\ \hline $\chi_1$ & \multicolumn{2}{|c|}{$0.6$} \\ \hline $\chi_2$ & \multicolumn{2}{|c|}{$0.4$} \\ \hline $\lambda$ (rad) & \multicolumn{2}{|c|}{$1.9$} \\ \hline $\beta$ (rad) & \multicolumn{2}{|c|}{$\pi/3$} \\ \hline $\psi$ (rad) & \multicolumn{2}{|c|}{$1.2$} \\ \hline $\varphi$ (rad) & \multicolumn{2}{|c|}{$0.7$}\\ \hline $\iota$ (rad) & \multicolumn{2}{|c|}{$\pi/6$} \\ \hline $D_L$ (Mpc) & \multicolumn{2}{|c|}{$250$} \\ \hline $z$ & \multicolumn{2}{|c|}{$0.054$} \\ \hline $T_{\rm obs}$ (yrs) & $4$ & $10$ \\ \hline SNR & $13.5$ & $21.5$ \\ \hline \end{tabular} \end{center} \caption{Parameters of a representative SBHB system labeled \emph{Fiducial}. The masses and spins of this system are compatible with GW150914 \cite{Abbott:2016izl}. The initial frequency is computed such that the system merges eight years after the start of LISA observations. We consider two possible durations of the LISA mission: four and ten years (in the latter case, the signal stops after eight years at coalescence). Subscripts $s$ denote quantities in the source frame; bare quantities are in the detector frame. The sky location is given in the SSB frame.}\label{params_fiducial} \end{table} \subsubsection{Intrinsic parameters} Unless specified otherwise, we take $t_c=8 \ {\rm years}$ and compute the initial frequency corresponding to the chosen $t_c$. Changing $t_c$ (or equivalently $f_0$) amounts to shifting the GW signal in frequency and also defines its frequency bandwidth (within the chosen observation time). We consider the following variations in the intrinsic parameters: \begin{itemize} \item Time left to coalescence at the beginning of LISA observations: \emph{Earlier}: $t_c=20$ yr, \emph{Later}: $t_c=2$ yr \item Chirp mass, keeping the mass ratio unchanged: \emph{Heavy}: $\mathcal{M}_c =1.5 \mathcal{M}_{c,f}$, $q=q_f$, $D_L=445 \ {\rm Mpc}$ \emph{Light}: $\mathcal{M}_c =\frac{\mathcal{M}_{c,f}}{1.5} $, $q=q_f$, $D_L=150 \ {\rm Mpc}$ \item Mass ratio, keeping the chirp mass unchanged: \emph{q3}: $q=\frac{m_1}{m_2}=3$, $\mathcal{M}_c =\mathcal{M}_{c,f}$ \emph{q8}: $q=\frac{m_1}{m_2}=8$, $\mathcal{M}_c =\mathcal{M}_{c,f}$ \item Spins: \emph{SpinUp}: $\chi_1=0.95$, $\chi_2=0.95$ \emph{SpinDown}: $\chi_1=-0.95$, $\chi_2=-0.95$ \emph{SpinOp12}: $\chi_1=0.95$, $\chi_2=-0.95$ \emph{SpinOp21}: $\chi_1=-0.95$, $\chi_2=0.95$. \end {itemize} For the \emph{Heavy} and \emph{Light} systems we rescaled the distance so that the SNR remains the same as for the \emph{Fiducial} system in the $T_{\rm obs}=10\mathrm{yr}$ case. Changing the spins or the mass ratio barely affects the SNR, so we do not change the distance for those systems. Since the \emph{Later} system merges in two years, increasing the observation time from four to ten years has no impact on it. \subsubsection{Extrinsic parameters} Changes in the extrinsic parameters do not affect the time to coalescence, so all systems below have the same initial frequency as the \emph{Fiducial} system.
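The mapping between $t_c$ and $f_0$ can be checked at leading (Newtonian) order, where $t_c = \frac{5}{256}\,(\pi f_0)^{-8/3}\,(G\mathcal{M}_c/c^3)^{-5/3}$; the sketch below (leading order only, whereas the initial frequencies in this paper follow the full PhenomD phasing) reproduces the fiducial $f_0$ to a fraction of a percent:
\begin{verbatim}
import numpy as np

GMSUN_C3 = 4.92549e-6          # G*Msun/c^3 in seconds
YEAR = 3.15576e7               # Julian year in seconds

def f0_from_tc(tc_yr, m1, m2):
    """Newtonian-order initial frequency for coalescence in tc (Msun, yr)."""
    Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 * GMSUN_C3  # chirp mass, seconds
    tc = tc_yr * YEAR
    return (256.0 / 5.0 * tc) ** (-3.0 / 8.0) * Mc ** (-5.0 / 8.0) / np.pi

def tc_from_f0(f0, m1, m2):
    Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 * GMSUN_C3
    return 5.0 / 256.0 * (np.pi * f0) ** (-8.0 / 3.0) * Mc ** (-5.0 / 3.0) / YEAR

# Fiducial system (detector-frame masses 40 + 30 Msun, tc = 8 yr):
print(f0_from_tc(8.0, 40.0, 30.0))     # ~12.74 mHz, vs 12.7215... mHz (PhenomD)
print(tc_from_f0(12.7215835397e-3, 40.0, 30.0))   # ~8 yr at leading order
\end{verbatim}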
We consider the following variations in the extrinsic parameters (those depending on the relative orientation of the source with respect to the observer): \begin{itemize} \item Sky location in the SSB frame: \emph{Polar}: $\beta=\frac{\pi}{2}-\frac{\pi}{36}$, $\lambda=\lambda_f$ \emph{Equatorial}: $\beta=\frac{\pi}{36}$, $\lambda=\lambda_f$ \item Inclination: \emph{Edgeon}: $\iota=\frac{\pi}{2}-\frac{\pi}{36}$, $D_L=150 \ {\rm Mpc}$, $T_{\rm obs}=10\mathrm{yr}$ \item Distance: \emph{Close}: $D_L=190 \ {\rm Mpc}$, $T_{\rm obs}=4\mathrm{yr}$ \emph{Far}: $D_L=350 \ {\rm Mpc}$, $T_{\rm obs}=10\mathrm{yr}$ \emph{Very Far}: $D_L=500 \ {\rm Mpc}$, $T_{\rm obs}=10\mathrm{yr}$ \end{itemize} Since the drop in SNR is very large for an almost edge-on system, we decrease the distance of the \emph{Edgeon} system to maintain a reasonably high SNR. For the same reason, we use only $T_{\rm obs}=10\mathrm{yr}$ in this case. The goal of the variation in distance is to assess the impact of the SNR on the PE, all other things being equal. This also mimics the effect of varying the noise level and the duty cycle. For the \emph{Close} system we only consider the $T_{\rm obs}=4\mathrm{yr}$ case, and for the \emph{Far} and \emph{Very Far} systems we only consider the $T_{\rm obs}=10\mathrm{yr}$ case. \begin{table} \begin{center} \begin{tabular}{c|c|c|} \cline{2-3} & $T_{\rm obs}=4\mathrm{yr}$ & $T_{\rm obs}=10\mathrm{yr}$ \\ \hline \multicolumn{1}{|c|}{\emph{Fiducial}} & $13.5$ & $21.1$ \\ \hline \multicolumn{1}{|c|}{\emph{Earlier}} & $10.3$ & $17.2$ \\ \hline \multicolumn{1}{|c|}{\emph{Later}} & $11.8$ & / \\ \hline \multicolumn{1}{|c|}{\emph{Heavy}} & $12.8$ & $20.9$ \\ \hline \multicolumn{1}{|c|}{\emph{Light}} & $14.1$ & $21.1$ \\ \hline \multicolumn{1}{|c|}{\emph{q3}} & $13.5$ & $21.1$ \\ \hline \multicolumn{1}{|c|}{\emph{q8}} & $13.5$ & $21.1$ \\ \hline \multicolumn{1}{|c|}{\emph{SpinUp}} & $13.5$ & $21.1$ \\ \hline \multicolumn{1}{|c|}{\emph{SpinDown}} & $13.5$ & $21.1$ \\ \hline \multicolumn{1}{|c|}{\emph{SpinOp12}} & $13.5$ & $21.1$ \\ \hline \multicolumn{1}{|c|}{\emph{SpinOp21}} & $13.5$ & $21.1$ \\ \hline \multicolumn{1}{|c|}{\emph{Polar}} & $12.8$ & $20.1$ \\ \hline \multicolumn{1}{|c|}{\emph{Equatorial}} & $14.9$ & $23.1$ \\ \hline \multicolumn{1}{|c|}{\emph{Edgeon}} & / & $14.7$ \\ \hline \multicolumn{1}{|c|}{\emph{Close}} & $17.8$ & / \\ \hline \multicolumn{1}{|c|}{\emph{Far}} & / & $15.1$ \\ \hline \multicolumn{1}{|c|}{\emph{Very Far}} & / &$10.6$ \\ \hline \end{tabular} \end{center} \caption{SNR of all the systems considered, computed with the LISA proposal noise level given in \cite{AHO17}. The different systems are derived from the \emph{Fiducial} system by varying a few parameters at a time. We use the \emph{Full} response.}\label{systems} \end{table} \subsection{Prior}\label{priors} For the Bayesian analysis, we take our fiducial prior to be flat in $m_1$ and $m_2$ with $m_1 \geq m_2$, flat in the spin magnitudes between $-1$ and $1$, flat in the initial frequency, uniform in volume for the source location, and flat in the source orientation, polarization, and initial phase. For the phase and polarization, since only $2\varphi$ (for a 22-mode waveform) and $2\psi$ enter the waveform, we restrict them to an interval of $\pi$.
We obtain the prior probability density function (PDF) in terms of the sampling parameters by computing the Jacobian of the transformation from ($m_1,m_2,\chi_1,\chi_2,D_L$) to ($\mathcal{M}_c,\eta,\chi_+,\chi_-,\log_{10}(D_L)$), which gives: \begin{equation} p_f(\theta) = \begin{cases} & N \frac{\mathcal{M}_c \eta^{-11/5}D_L^3}{\sqrt{1-4\eta}} \; {\rm if} \; 0.05 \leq \eta \leq 0.25 \,, \\ & 0 \; {\rm otherwise.} \end{cases} \label{fid_prior} \end{equation} Just like the evidence in Eq.~\eqref{bayes}, $N$ acts only as a normalization constant and is thus of no importance for us. The lower limit for $\eta$ was set according to the maximum mass ratio up to which PhenomD is calibrated ($q=18$)~\cite{Husa:2015iqa,Khan:2015jqa}. The ranges of chirp mass, initial frequency, and distance are orders of magnitude larger than the posterior support, so they do not affect the posterior. We label this prior \emph{Flatphys} and use it by default unless we specify some other choice. We will also consider two additional priors: \begin{itemize} \item \emph{Flatmag}: uniform prior for the spin orientations and magnitudes \item \emph{Flatsampl}: flat prior in $\mathcal{M}_c$, $\eta$ and $\log_{10}(D_L)$. \end{itemize} In the \emph{Flatmag} case we start from a full 3D spin prior, uniform in $[0,1]$ for the spin amplitudes and uniform on the sphere for the spin orientations. We then consider only the spin projections onto the orbital angular momentum, thus ignoring the in-plane spin components. The resulting prior is $p(\chi_i)=-\frac{1}{2}\ln(|\chi_i|)$. The \emph{Flatmag} PDF is $p(\theta)=p_f(\theta)p(\chi_1)p(\chi_2)$, where $p_f(\theta)$ is given in Eq.~\eqref{fid_prior}. This is the prior generally used by the LIGO/Virgo collaboration \cite{LIGOScientific:2018mvr}. The \emph{Flatsampl} PDF is given by: \begin{equation} p(\theta) = \begin{cases} & N\frac{1}{\eta} \; {\rm if} \; 0.05 \leq \eta \leq 0.25 \,, \\ & 0 \; {\rm otherwise.} \end{cases} \label{flatsampl_prior} \end{equation} This prior has no astrophysical motivation; we will use it to compare the Fisher-based PE to our full Bayesian inference in Sec.~\ref{results_fm}. We find it instructive to illustrate what the nontrivial priors look like. As we will show later in Sec.~\ref{results}, the chirp mass can be constrained by the Bayesian analysis to a fractional error of $10^{-4}$, so we can impose a narrow constraint on the prior. The chirp mass is nontrivially coupled to the other parameters (as we will show in great detail in the following sections), and constraining it to a narrow interval introduces a nonlinear slicing in the other parameters. Note that the imposed interval ($10^{-3}$ in relative terms) is still much broader than the typical measurement error. In Fig.~\ref{comp_priors} we display the \emph{Flatphys}, \emph{Flatmag}, and \emph{Flatsampl} prior distributions for $\eta$, $\chi_+$, and $\chi_-$ obtained by restricting the chirp mass to the specified interval. The remarkable features of our fiducial prior, the \emph{Flatphys} prior, are the double peak at $\eta=0.25$ and at the lower end of the $\eta$ range, and the bell-like shape of the $\chi_+$ and $\chi_-$ priors, with almost zero support at extreme values. The \emph{Flatmag} prior is singled out by the strong peak at $\chi_{+,-}=0$. As we will discuss in Sec.~\ref{results}, these nontrivial shapes of the priors can strongly affect the resulting posterior distributions in some cases.
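The induced shapes can be reproduced by direct Monte Carlo: draw from the flat physical prior, map to the sampling parameters, and keep only samples in a narrow chirp-mass slice. The following is a sketch (the broad mass range below is an arbitrary stand-in; only the shape of the induced distributions matters):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N = 2 * 10**6

# Flatphys in physical variables: flat in (m1, m2) with m1 >= m2,
# flat in the aligned spins on [-1, 1]:
m1 = rng.uniform(5.0, 100.0, N)
m2 = rng.uniform(5.0, 100.0, N)
m1, m2 = np.maximum(m1, m2), np.minimum(m1, m2)
chi1, chi2 = rng.uniform(-1.0, 1.0, (2, N))

M = m1 + m2
Mc = (m1 * m2) ** 0.6 / M ** 0.2
eta = m1 * m2 / M ** 2
chi_p = (m1 * chi1 + m2 * chi2) / M

# Narrow chirp-mass slice (10^-3 in relative terms), around the fiducial value:
cut = (np.abs(Mc / 30.09 - 1.0) < 5e-4) & (eta >= 0.05)
print(cut.sum(), "samples kept")
# eta piles up at the ends of its range; chi_+ is bell-shaped with little
# support at extreme values:
print(np.histogram(eta[cut], bins=10, range=(0.05, 0.25))[0])
print(np.histogram(chi_p[cut], bins=10, range=(-1.0, 1.0))[0])
\end{verbatim}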
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{comp_priors.png}\\ \centering \caption{Comparison between the \emph{Flatphys} (blue), \emph{Flatmag} (green), and \emph{Flatsampl} (orange) priors for $\eta$, $\chi_+$, and $\chi_-$.}\label{comp_priors} \end{figure} \subsection{LISA response} We briefly review how the LISA response is computed and refer to \cite{Marsat:2020rtl} for a more extensive discussion. We recall that LISA is composed of three spacecraft linked by lasers across arms of length $L=2.5 \ {\rm Gm}$. The TDI observables $A$, $E$, and $T$ are time-delayed linear combinations of the single-link observables $y_{slr}$, which measure the laser frequency shift due to an incoming GW across the link $l$ between spacecraft $s$ and $r$. We consider only the dominant 22 mode of the waveform and, following~\cite{Marsat:2020rtl}, we exploit a mode symmetry (for nonprecessing systems) between $h_{22}$ and $h_{2,-2}$ to write the signal in terms of $h_{22}$ only. The single-link observables can then be written with a transfer function: \begin{equation} \tilde{y}_{slr}=\mathcal{T}^{22}_{slr}\tilde{h}_{22} \,. \end{equation} Denoting the amplitude and phase of the 22 mode as $\tilde{h}_{22}=A_{22}(f)e^{-i\Psi_{22}(f)}$, and working at leading order in the separation of timescales in the formalism of~\cite{Marsat:2018oam}, the transfer functions are given by (we set $c=1$): \begin{align} \mathcal{T}^{22}_{slr}&=G^{22}_{slr}(f,t_{f}^{22}) \\ G^{22}_{slr}(f,t)&=\frac{i\pi fL}{2}{\rm sinc}\left[ \pi f L(1- {\bf k}\cdot {\bf n}_l) \right ] \nonumber \\ & \cdot \exp \left [i\pi f \left(L+ {\bf k} \cdot ({\bf p}^L_{r}+{\bf p}^L_{s}) \right) \right ] \nonumber \\ & \cdot \exp(2i\pi f {\bf k} \cdot {\bf p}_0) \; {\bf n}_l \cdot {\bf P}_{22} \cdot {\bf n}_l \label{kernel} \\ t_{f}^{22}&=-\frac{1}{2\pi}\frac{{\rm d}\Psi_{22}}{{\rm d}f}, \label{eq:tf} \end{align} where ${\bf k}$ is the unit GW propagation vector, ${\bf n}_l(t)$ is the link unit vector pointing from spacecraft $s$ to $r$, ${\bf p}_0(t)$ is the position vector of the center of the LISA constellation in the SSB frame, ${\bf p}^L_{r}(t)$ is the position of spacecraft $r$ measured from the center of the LISA constellation, ${\bf P}_{22}$ is the polarization tensor defined in \cite{Marsat:2020rtl}, and we adopt the convention ${\rm sinc}(x)=\sin(x)/x$. We dropped the $t$ dependence in Eq.~\eqref{kernel} for clarity. The global factor $\exp(2i\pi f {\bf k} \cdot {\bf p}_0)$ is the Doppler modulation of the GW phase, and the ${\bf n}_l \cdot {\bf P}_{22} \cdot {\bf n}_l$ term is the projection of the GW tensor onto the interferometer arms, which is associated with the antenna pattern function. Note that both the Doppler modulation in phase and the antenna pattern are time dependent due to LISA's motion. Moreover, they depend on the sky position of the source, so that the annual variation in the phase and amplitude allows us to localize the source. \begin{figure*} \centering \includegraphics[width=\textwidth]{comp_tobs_combined.png}\\ \centering \caption{Inferred parameter distributions for the \emph{Fiducial} system, both in the $T_{\rm obs}= 4\mathrm{yr}$ case (blue) and in the $T_{\rm obs}=10\mathrm{yr}$ case (orange). The true parameters are indicated by black lines and squares. Masses are in the detector frame.}\label{comp_tobs} \end{figure*} All our results are obtained using the full LISA response, but we also assess the impact of using the long-wavelength approximation, a simplified version of the LISA response \cite{Cutler:1997ta}.
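The high-frequency behavior of the transfer function in Eq.~\eqref{kernel} is controlled by the ${\rm sinc}$ prefactor; a sketch that evaluates $|G| \propto (\pi f L/2)\,{\rm sinc}[\pi f L(1-{\bf k}\cdot{\bf n}_l)]$, with the geometric projection factors suppressed and illustrative values of ${\bf k}\cdot{\bf n}_l$, makes the damping explicit:
\begin{verbatim}
import numpy as np

C_LIGHT = 299792458.0              # m/s
L = 2.5e9 / C_LIGHT                # armlength 2.5 Gm in seconds (c = 1 units)

def transfer_mag(f, k_dot_n):
    """|G| up to geometric projection factors:
       (pi*f*L/2) * sinc[pi*f*L*(1 - k.n)], with sinc(x) = sin(x)/x."""
    x = np.pi * f * L * (1.0 - k_dot_n)
    return (np.pi * f * L / 2.0) * np.sinc(x / np.pi)  # np.sinc(y)=sin(pi y)/(pi y)

f = np.array([1e-3, 1e-2, 1e-1, 0.5])   # Hz; SBHBs sit at the high end
for k_dot_n in (-0.5, 0.0, 0.5):
    print(k_dot_n, transfer_mag(f, k_dot_n))
# In the long-wavelength limit 2*pi*f*L << 1 the sinc tends to 1, leaving the
# prefactor pi*f*L/2; at 0.1-0.5 Hz the sinc suppression is clearly visible.
\end{verbatim}
We now describe this long-wavelength approximation in more detail.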
In this approximation, LISA is somewhat similar to two LIGO/Virgo-type detectors rotated with respect to each other by $\pi/4$, with opening angles of $\pi/3$ between the arms. It is obtained by taking the $2\pi fL \ll 1$ limit in the LISA response, so that: \begin{align} G^{22}_{slr}(f,t) &= \frac{i\pi fL}{2} \exp(2i\pi f {\bf k} \cdot {\bf p_0}){\bf n}_l \cdot {\bf P}_{22} \cdot {\bf n}_l. \label{kernel_lw} \end{align} The ${\rm sinc}$ function appearing in Eq.~\eqref{kernel} leads to a damping of the signal amplitude at high frequencies. In the long-wavelength approximation, however, it is replaced by 1, leading to unrealistically high SNRs. To compensate for this, inspired by the computation of the sky-averaged sensitivity \cite{Cornish:2018dyw}, we introduce a degradation function that multiplies the GW amplitude: \begin{equation} R(f)=\frac{1}{1+0.6(2\pi fL)^2}. \label{degrad_highf} \end{equation} To explore the validity of this approximation for SBHBs, we will compare the PE for the \emph{Fiducial}, \emph{Polar} and \emph{Equatorial} systems using the full response and the long-wavelength approximation, labeled \emph{Full} and \emph{LW} respectively. We will only use the leading order in the separation of timescales in the framework of~\cite{Marsat:2018oam}, keeping in mind that corrections could be needed in general, in particular for almost-monochromatic signals. \section{Parameter estimation of SBHBs}\label{results} In order to test the performance of our MHMCMC sampler, we compared it to our well-tested parallel-tempering MCMC code \texttt{PTMCMC} \footnote{https://github.com/JohnGBaker/ptmcmc}. The similarity of two distributions $p_1$ and $p_2$ can be quantified by computing their Kullback-Leibler (KL) divergence \cite{kullback1951}: \begin{equation} D_{KL}=\sum_{\theta}p_1(\theta)\log \left ( \frac{p_1(\theta)}{p_2(\theta)} \right ). \end{equation} The KL divergence is zero if the two distributions are identical. We computed $D_{KL}$ for the marginalized distributions of each parameter obtained with the two samplers using the \emph{Flatsampl} prior, assuming four and ten years of observation. Apart from the polarization and the initial phase, all divergences were below $0.1$ for four years of observation and below $0.01$ for ten years of observation, showing a very good agreement between the samplers. For $\psi$ and $\varphi$, which are less well determined in general, we get slightly higher values (up to $\simeq 0.6$), but these still show a good agreement. The results presented in this paper were obtained with our MHMCMC code and, unless otherwise specified, we use the \emph{Flatphys} prior and the \emph{Full} response. In our discussion, we use redshifted (detector-frame) masses rather than source-frame masses, because they are directly inferred from the observed data. We give the full ``corner plot'' \cite{corner} for our fiducial system, comparing results for the two observation times, in Fig.~\ref{comp_tobs}; this plot shows the pairwise correlations between parameters and the fully marginalized posterior for each parameter. The inset at the top right of the figure shows the posterior distributions for ($m_1,m_2,\chi_1,\chi_2$). It would be difficult to present the posterior distributions for all the possible variations (deviations from the \emph{Fiducial} system) discussed above. Instead, we summarize our results by underlining qualitative differences whenever we observe them and show comparative corner plots only when necessary.
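As an aside on the convergence checks above, the quoted $D_{KL}$ values were computed from binned one-dimensional marginals; a minimal sketch of such a histogram-based estimate (with Gaussian toy chains standing in for the two samplers' outputs):
\begin{verbatim}
import numpy as np

def kl_divergence(samples1, samples2, bins=50):
    """Histogram-based estimate of D_KL(p1 || p2) for 1D marginals."""
    lo = min(samples1.min(), samples2.min())
    hi = max(samples1.max(), samples2.max())
    p1, _ = np.histogram(samples1, bins=bins, range=(lo, hi))
    p2, _ = np.histogram(samples2, bins=bins, range=(lo, hi))
    p1, p2 = p1 / p1.sum(), p2 / p2.sum()
    mask = (p1 > 0) & (p2 > 0)        # restrict to jointly populated bins
    return np.sum(p1[mask] * np.log(p1[mask] / p2[mask]))

rng = np.random.default_rng(4)
a = rng.normal(0.0, 1.0, 100000)      # stand-ins for the two samplers' chains
b = rng.normal(0.0, 1.0, 100000)
print(kl_divergence(a, b))            # ~0: same underlying distribution
print(kl_divergence(a, rng.normal(0.5, 1.0, 100000)))  # clearly nonzero
\end{verbatim}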
We start by discussing the structure of the correlations between intrinsic parameters, move on to the extrinsic parameters, then compare the full Bayesian analysis with predictions from the Fisher matrix, and finally show the effect of the \emph{LW} approximation to the response. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{fig_PN_phase.png}\\ \centering \caption{Individual PN phase contributions $\Delta \Psi_{n}$ for the \emph{Fiducial} system. The linestyle indicates the nature of the term, nonspinning (NS), spin-orbit (SO), or spin-spin (SS), while the color indicates the PN order. Note that these contributions are individually aligned at $f_{0}$, as explained in the text, and that interpreting the magnitude of these terms is not easy due to the alignment freedom. The vertical line shows $f_{4\mathrm{yr}}$, and the greyed area shows the frequency range contributing less than 1 to $\mathrm{SNR}^{2}$.}\label{fig:phiPN} \end{figure} \begin{figure*}[!ht] \centering \includegraphics[width=0.75\textwidth]{degen_chirp_eta.png}\\ \centering \caption{Analysis of the degeneracy between $\mathcal{M}_c$ and $\eta$. The blue (orange) dots were obtained by running a PE on the \emph{Fiducial} system in the $T_{\rm obs}=4\mathrm{yr}$ ($T_{\rm obs}=10\mathrm{yr}$) case, allowing only $\mathcal{M}_c$ and $\eta$ to vary. The injection point is indicated by the black dashed lines. The orange dotted and the green dashed curves are given by Eq.~\eqref{eq_degen} using the full PhenomD phase and the 1.5 PN truncation of the phase, respectively. The red solid line was obtained by minimizing the phase difference between the injected signal and the templates over the whole frequency range spanned during four years of observation.}\label{degen} \end{figure*} \subsection{Intrinsic parameters}\label{res_int} One of the main features appearing in Fig.~\ref{comp_tobs} is the strong correlation between intrinsic parameters, in particular between $\mathcal{M}_c$ and $\eta$, which is especially pronounced for four years of observation. The main reason for this degeneracy is the limited evolution of the GW frequency: in four years of observation, the \emph{Fiducial} system spans a very narrow range, from $f_0=12.7 \ {\rm mHz}$ to $f_{4 \mathrm{yr}}=16.5 \ {\rm mHz}$. First, it is instructive to consider the magnitude of the different PN orders appearing in the phasing (see~\cite{Blanchet:2002av} for a review). We can write formally \begin{equation} \Psi (f) = \frac{3}{128 \eta v^{5}} \sum_{i} a_{i} v^{i} \,, \end{equation} where $v = (\pi M f)^{1/3}$ (with $M=m_1+m_2=\mathcal{M}_c \eta^{-3/5}$) and where the $a_{i}$ are PN coefficients (we scaled out the leading term, so that $a_{0}=1$) that depend on the mass ratio and on the spins, and can be separated into nonspinning terms (NS), spin-orbit terms (SO), and spin-spin terms (SS). It was argued in~\cite{Mangiagli:2018kpu} that most SBHBs would require terms up to the 2 PN order. In Fig.~\ref{fig:phiPN}, we show the magnitude of the known PN terms in the phasing for our \emph{Fiducial} system. In general, the magnitude of the phase contributions is delicate to interpret because of the alignment freedom, as some of the phasing error can typically be absorbed in a time and phase shift. In Fig.~\ref{fig:phiPN} we align the contributions individually at $f_{0}$, with a zero phase and zero time according to~\eqref{eq:tf}.
We see that, for $T_{\rm obs} = 4\mathrm{yr}$, PN orders beyond 1.5 PN appear negligible due to the limited chirping in frequency, while more terms become relevant for $T_{\rm obs} = 10\mathrm{yr}$ where much higher frequencies are reached. We also gray out the area ($f>123 \ {\rm mHz}$) beyond which the signal contributes less than 1 in $\mathrm{SNR}^{2}$, which we take as a somewhat conventional limit to indicate that ignoring the signal beyond this point would not affect the log-likelihood~\eqref{loglike} and therefore the PE. In order to provide an explanation for the strong correlation between chirp mass and symmetric mass ratio, we consider a simplified problem by reducing the dimensionality: we fix $f_0$, $\chi_+$, $\chi_-$ and all extrinsic parameters to the ``true'' values and investigate the correlation between the chirp mass and the symmetric mass ratio for the \emph{Fiducial} system. Keeping these parameters fixed will collapse some of the degeneracies seen in the full analysis, but this exercise will serve as an illustration of the differences between a nonchirping and chirping system. Since for $T_{\rm obs} = 4\mathrm{yr}$ the GW frequency changes little from $f_0$ to $f_{4 \mathrm{yr}}$, we can Taylor expand the phase around $f_0$: \begin{equation} \Psi(f) \simeq \Psi(f_0)+ \left . \frac{{\rm d} \Psi}{{\rm d}f} \right |_{f_0}(f-f_0)+\frac{1}{2} \left . \frac{{\rm d^2} \Psi}{{\rm d}f^2} \right |_{f_0}(f-f_0)^2. \end{equation} We consider the inner product between the data $d=A_d(f) e^{-i\Psi_d(f)}$ and the template $h=A_h(f) e^{-i\Psi_h(f)}$. By our convention, the initial phase at $f_{0}$ is the same, $\Psi_d(f_0) = \Psi_h(f_0)$. The initial time is zero at $f_{0}$, so the stationary phase approximation Eq.~\eqref{eq:tf} gives: $ \frac{{\rm d} \Psi}{{\rm d}f} |_{f_0}=0$. The inner product becomes: \begin{align} (d|h)& = 4\mathrm{Re}\int_{f_0}^{f_{4 \mathrm{yr}}} \frac{A_d(f)A_h(f) e^{i(\Psi_d(f)-\Psi_h(f))} }{S_n(f)} {\rm d}f \nonumber \\ & \simeq A_d(f_0) A_h(f_0) \; 4 \mathrm{Re} \left[ \int_{f_0}^{f_{4 \mathrm{yr}}} \frac{df}{S_n(f)} e^{i(\frac{{\rm d^2} \Psi_d}{{\rm d}f^2} |_{f_0} - \frac{{\rm d^2} \Psi_h}{{\rm d}f^2} |_{f_0})\frac{(f-f_0)^2}{2}} \right], \end{align} where we used the fact that the amplitude is a slowly varying function of the frequency. The overlap is maximized when the template is in phase with the data, making the integrand nonoscillatory. In our quadratic approximation to the dephasing, this defines a curve in the ($\mathcal{M}_c$, $\eta$) plane according to \begin{equation} \left . \frac{{\rm d^2} \Psi}{{\rm d}f^2} \right |_{f_0} = \left . \frac{{\rm d^2} \Psi (\mathcal{M}_{c,0},\eta_{0})}{{\rm d}f^2} \right |_{f_0} \label{eq_degen}. \end{equation} In Fig.~\ref{degen} we display as blue (orange) dots the points from the sampling in the $(\mathcal{M}_c,\eta)$ plane in the $T_{\rm obs}=4\mathrm{yr}$ ($T_{\rm obs}=10\mathrm{yr}$) case and overplot (as an orange dotted line) the curve obtained by solving \eqref{eq_degen}. The true (injection) value is indicated by black dashed lines. The curve closely follows the shape obtained from PE in the $T_{\rm obs}=4\mathrm{yr}$ case. The green dashed line is obtained by solving \eqref{eq_degen} truncating the phase to 1.5 PN order. We verified that adding higher PN terms does not produce any noticeable changes, which is in good agreement with \cite{Mangiagli:2018kpu} and Fig.~\ref{fig:phiPN}.
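For concreteness, the curve defined by Eq.~\eqref{eq_degen} can be traced numerically as in the following sketch, which matches the curvature of a 1.5 PN TaylorF2-like phase at $f_0$ by root finding in $\eta$ for each trial $\mathcal{M}_c$; the injected values and the fixed spin combination are placeholders rather than the \emph{Fiducial} parameters, and the root bracket can fail away from the degeneracy ridge, hence the guard.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

MSUN_S = 4.925491e-6  # GM_sun / c^3 in seconds

def psi_15pn(f, mc, eta, chi_pn):
    # TaylorF2 phase truncated at 1.5 PN (nonspinning + spin-orbit).
    M = mc * eta ** (-3.0 / 5.0)
    v = (np.pi * M * MSUN_S * f) ** (1.0 / 3.0)
    a2 = 3715.0 / 756.0 + 55.0 * eta / 9.0
    a3 = -16.0 * np.pi + 113.0 / 3.0 * chi_pn
    return 3.0 / (128.0 * eta * v ** 5) * (1.0 + a2 * v ** 2 + a3 * v ** 3)

def d2psi(f0, mc, eta, chi_pn, df=1e-6):
    # Central finite difference for the curvature of the phase at f0.
    return (psi_15pn(f0 + df, mc, eta, chi_pn)
            - 2.0 * psi_15pn(f0, mc, eta, chi_pn)
            + psi_15pn(f0 - df, mc, eta, chi_pn)) / df ** 2

# Placeholder injection (chirp mass in Msun, f0 in Hz):
mc0, eta0, chi_pn0, f0 = 30.0, 0.247, 0.05, 12.7e-3
target = d2psi(f0, mc0, eta0, chi_pn0)

for mc in np.linspace((1 - 5e-5) * mc0, (1 + 5e-5) * mc0, 5):
    g = lambda eta: d2psi(f0, mc, eta, chi_pn0) - target
    try:
        print(mc, brentq(g, 0.03, 0.2499))  # eta solving Eq. (eq_degen)
    except ValueError:
        print(mc, "no solution in the eta bracket")
\end{verbatim}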
We can even better reproduce the degeneracy by minimizing the phase difference between injection and template over the whole frequency range spanned by the injected signal. More specifically, defining: \begin{equation} \delta_{I} \Psi(\mathcal{M}_c,\eta) =\max_{f \in I}\left|\Psi(\mathcal{M}_{c,0},\eta_{0})(f)-\Psi(\mathcal{M}_c,\eta)(f)\right|, \end{equation} for each value of $\mathcal{M}_c$ we find $\eta$ such that $\delta_{I} \Psi$ is minimized. Note that all parameters are kept fixed in the dephasing measure we use here; in particular, there is no optimization over a constant phase or time shift. The subscript $I$ stands for the frequency interval, and we plot this curve for $I= [f_0,f_{4\mathrm{yr}}]$ in Fig.~\ref{degen}. One can see that we almost perfectly reproduce the shape of the correlation between the chirp mass and the symmetric mass ratio in the $T_{\rm obs}=4\mathrm{yr}$ case. In the $T_{\rm obs}= 10\mathrm{yr}$ case, the system evolves until it leaves the band, so it spans a broader frequency range. In Fig.~\ref{diff_psi} we show the value of the minimized $\delta_I \Psi$ for $I= [f_0,f_{4\mathrm{yr}}]$ and for $I=[f_0,f_{\rm max}^{\rm LISA}]$ with $f_{\rm max}^{\rm LISA}=0.5 \ {\rm Hz}$ taken at the conventional end of the LISA frequency band. In practice, in the latter case the maximal dephasing typically occurs around $0.1\ {\rm Hz}$. For the observation span of four years, we find that $\delta_I \Psi$ remains quite small ($<0.5 \ {\rm rad}$) over a large range of $\eta$. As the bandwidth of the signal becomes broader, we cannot efficiently compensate for a change in the chirp mass by varying $\eta$, which results in a significant reduction of the degeneracy and a great improvement in measuring those two parameters, as seen from the narrower region covered by the orange dots in Fig.~\ref{degen}. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{delta_psi.png}\\ \centering \caption{Value of $\delta_{I} \Psi$ along the curve in the $(\mathcal{M}_c,\eta)$ plane that minimizes it for $I=[f_0,f_{4\mathrm{yr}}]$ (blue) and $I=[f_0,f_{\rm max}^{\rm LISA}]$ (orange). When LISA observes the system at low frequencies, the phase difference can be kept small over an extended region far from the injection. When LISA observes the chirp of the system, the phase difference becomes very large immediately in the vicinity of the injection point, reducing the extent of the degeneracy between $\mathcal{M}_c$ and $\eta$.}\label{diff_psi} \end{figure} We now come back to the full Bayesian analysis and consider the estimation of the BH spins. Following \cite{Poisson:1995ef,Khan:2015jqa} we introduce the 1.5 PN spin combination: \begin{align} \chi_{\rm PN}&=\frac{1}{113} \left ( 94\chi_++19 \frac{q-1}{q+1}\chi_- \right ) \\ &=\frac{\eta}{113} \left ( (113q+75)\chi_1+\left(\frac{113}{q}+75\right)\chi_2 \right ).\label{chipn} \end{align} This term defines how the spins enter the GW phase at the leading (1.5 PN) order~\cite{Blanchet:2002av} and, therefore, should be the most precisely measured spin combination. We found this to be indeed the case. As an illustration, we plot samples obtained for the \emph{q3}, \emph{q8}, \emph{SpinUp}, \emph{SpinDown}, \emph{SpinOp12} and \emph{SpinOp21} systems in the $T_{\rm obs}=10\mathrm{yr}$ case in Fig.~\ref{chischia}. The points are the samples obtained in the PE analysis, and the lines show $\chi_{\rm PN}=\chi_{\rm PN,0}$ (fixing the mass ratio to its ``true'' (injection) value) for all those systems in the ($\chi_1$, $\chi_2$) plane.
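The lines drawn in Fig.~\ref{chischia} follow directly from Eq.~\eqref{chipn}: fixing $\chi_{\rm PN}$ and $q$ defines a straight line in the ($\chi_1$, $\chi_2$) plane with slope $-(113q+75)/(113/q+75)$, which steepens as $q$ grows (hence $\chi_1$ being the better constrained spin). A minimal sketch, with illustrative mass ratio and spins:
\begin{verbatim}
import numpy as np

def chi_pn(q, chi1, chi2):
    # 1.5 PN spin combination of Eq. (chipn); q = m1/m2 >= 1.
    eta = q / (1.0 + q) ** 2
    return eta / 113.0 * ((113.0 * q + 75.0) * chi1
                          + (113.0 / q + 75.0) * chi2)

def chi2_on_line(q, chi1, chi_pn_0):
    # chi2 such that chi_PN(q, chi1, chi2) = chi_pn_0: the straight
    # lines overlaid on the posterior samples in Fig. (chischia).
    eta = q / (1.0 + q) ** 2
    return ((113.0 * chi_pn_0 / eta - (113.0 * q + 75.0) * chi1)
            / (113.0 / q + 75.0))

# Illustrative q3-like configuration (the spin values are assumptions):
q, chi1_0, chi2_0 = 3.0, 0.6, 0.4
c0 = chi_pn(q, chi1_0, chi2_0)
chi1 = np.linspace(-1.0, 1.0, 5)
print(np.allclose(chi_pn(q, chi1, chi2_on_line(q, chi1, c0)), c0))  # True
\end{verbatim}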
In all these cases $\chi_{\rm PN}$ is extremely well measured, within $10^{-2}$, but the combination of spins orthogonal to $\chi_{\rm PN}$ is constrained only by the prior boundaries. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{plot_chi1_chi2.png} \centering \caption{Samples of $\chi_1$ and $\chi_2$ obtained for different systems (defined in Sec.~\ref{setups}) in the $T_{\rm obs}=10\mathrm{yr}$ case. The black solid lines indicate the boundaries of the physically allowed region $-1 \leq \chi_{1,2} \leq 1$ and the $\chi_{\rm PN}=\chi_{\rm PN,0}$ lines. The samples follow the $\chi_{\rm PN}=\chi_{\rm PN,0}$ lines, showing that this is the specific combination of spins that can be measured. The orthogonal combination of spins is constrained only due to the boundaries of the physically allowed region. Due to the orientation of the $\chi_{\rm PN}={\rm const}$ lines, $\chi_1$ is better constrained than $\chi_2$. High values of spins with the same (opposite) sign are better (worse) constrained.}\label{chischia} \end{figure} For slowly evolving binaries, only terms up to 1.5 PN in the GW phase are found to be relevant. At this order we expect a strong correlation between the 1.5 PN spin combination and the symmetric mass ratio: any change in $\chi_{\rm PN}$ can be efficiently compensated by a change in $\eta$ such that the 1.5 PN term $(-16 \pi + \frac{113}{3}\chi_{\rm PN})\eta^{-3/5}$ is kept (almost) constant. We have verified this by plotting the curve $(-16 \pi + \frac{113}{3}\chi_{\rm PN})\eta^{-3/5}={\rm const}$ on top of the samples obtained for the \emph{Fiducial} system and reproducing the shape formed by the posterior samples. Thus, we obtained and explained the three-way correlation between chirp mass, mass ratio and spins for the mildly relativistic systems spanning a narrow frequency band during the observation time. The increase in the observation time allows further chirping of the system, making the contribution of the 1 and 1.5 PN corrections in the phasing significant, thus breaking strong correlations between intrinsic parameters; however, the effect of higher-order PN terms is weak, consistent with~\cite{Mangiagli:2018kpu} and Fig.~\ref{fig:phiPN}, which leads to only the 1.5 PN spin combination being measured. This study also suggests that $\chi_{\rm PN}$, being the most relevant mass-weighted spin combination for PE, should be used as a sampling parameter. The component of $\chi_{\rm PN}$ along $\chi_+$ is always much larger than the one along $\chi_-$ (at least by a factor $\frac{94}{19}\simeq 5$), so we find that $\chi_+$ is also measured reasonably well. The effective spin $\chi_+$ is frequently used in the GW literature and has a clear astrophysical interpretation, as opposed to the 1.5 PN spin combination; therefore, we will alternate between $\chi_{\rm PN}$ and $\chi_+$ in the discussion below. \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{etas.png}\\ \centering \caption{Distribution of $\eta$ and $\chi_+$ for the \emph{Later} system ($t_{c} = 2\mathrm{yr}$) and the \emph{Fiducial} system ($t_{c} = 8\mathrm{yr}$) for both observation times ($T_{\rm obs} = 4\mathrm{yr}$ and $T_{\rm obs} = 10\mathrm{yr}$). Since we observe the \emph{Later} system chirping, the determination of $\eta$ and $\chi_+$ is much better than for the \emph{Fiducial} system in the $T_{\rm obs}=4\mathrm{yr}$ case. But because of its low SNR ($\mathrm{SNR}=11.8$), the posterior distribution still peaks at $\eta=0.25$, as an effect of the prior.
This is in contrast to the \emph{Fiducial} system in the $T_{\rm obs}=10\mathrm{yr}$ case ($\mathrm{SNR}=21.1$), which peaks at the injected value indicated by black lines and squares.}\label{etas} \end{figure} \begin{table*} \begin{center} \begin{tabular}{c *{17}{c|}} \cline{3-17} & & \multicolumn{5}{|c|}{\emph{Fiducial}} & \multicolumn{5}{|c|}{\emph{Earlier}} & \multicolumn{5}{|c|}{\emph{Later}} \\ \cline{3-17} & & $\mathcal{M}_c$ & $\eta$ & $\chi_+$ & $\chi_-$ & $\chi_{\rm PN}$ & $\mathcal{M}_c$ & $\eta$ & $\chi_+$ & $\chi_-$ & $\chi_{\rm PN}$ & $\mathcal{M}_c$ & $\eta$ & $\chi_+$ & $\chi_-$ & $\chi_{\rm PN}$ \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\emph{Flatphys}}} & \multicolumn{1}{|c|}{$T_{\rm obs}=4\mathrm{yr}$} & $3.6$ & $0.4$ & $0.2$ & $0.1$ & $0.3$ & $2.7$ &$0.04$ & $0.04$ & $0.03$ & $0.04$ &$6.1$ & $1.7$ & $3.1$ & $0.4$ & $3.6$ \\ \cline{2-17} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{$T_{\rm obs}=10\mathrm{yr}$} & $7.6$ & $2.5$ & $3.7$ & $0.5$ & $4.3$ & $4.5$ & $0.7$ & $0.5$ & $0.2$ & $0.6$ &/ & / & / & / & /\\ \hline\hline \multicolumn{1}{|c|}{\multirow{2}{*}{\emph{Flatmag}}} & \multicolumn{1}{|c|}{$T_{\rm obs}=4\mathrm{yr}$} & $3.4$ & $0.6$ & $0.07$ & $0.04$ & $0.08$ & / & / & / & / & / & / & / & / & / & / \\ \cline{2-17} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{$T_{\rm obs}=10\mathrm{yr}$} & $7.5$ & $2.5$ & $4.4$ & $0.4$ & $4.8$ & / & / & / & / & / & / & / & / & / & /\\ \hline\hline \multicolumn{1}{|c|}{\multirow{2}{*}{\emph{Flatsampl}}} & \multicolumn{1}{|c|}{$T_{\rm obs}=4\mathrm{yr}$} & $3.7$ & $0.4$ & $0.3$ & $0.2$ & $0.3$ & / & / & / & / & / & / & / & / & / & / \\ \cline{2-17} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{$T_{\rm obs}=10\mathrm{yr}$} & $7.3$ & $3.2$ & $3.7$ & $0.5$ & $4.4$ &/ & / & / & / & / & / & / & / & / & / \\ \hline \end{tabular} \end{center} \caption{Kullback-Leibler divergences between the marginalized posterior and prior distribution of the intrinsic parameters for different systems and choices of prior. When observing the system at low frequencies, only $\mathcal{M}_c$ shows an appreciable deviation from the prior. The likelihood is informative on $\eta$ and $\chi_+$ (and $\chi_{\rm PN}$) only for chirping systems. Different choices of prior give similar results.}\label{kl_test} \end{table*} In order to further quantify the dependence of PE on the frequency bandwidth spanned by the signal during the observation time, we consider the \emph{Earlier}, \emph{Fiducial} and \emph{Later} systems, which differ in the initial frequency, chosen so that the SBHBs merge in 20, 8 and 2 years respectively. We compute the KL divergence between the marginalized posterior and the marginalized prior for each intrinsic parameter, and report our findings in Table~\ref{kl_test}. Larger values of $D_{KL}$ indicate that knowledge has been gained from the GW observations as compared to the prior. The results show a strong dependence on the observation time (therefore on the frequency bandwidth), especially for spins, for which $D_{KL}$ varies by an order of magnitude. For the \emph{Earlier} system we find that only the chirp mass measurement is truly informative. Note that the extent of the frequency evolution plays a bigger role than the SNR. For instance, \emph{Later}, which leaves the LISA band after two years with $\mathrm{SNR}=11.8$, is more informative than \emph{Earlier} with $T_{\rm obs}=10\mathrm{yr}$, which has $\mathrm{SNR}=17.2$. We repeated this analysis using the \emph{Flatmag} and \emph{Flatsampl} priors for the \emph{Fiducial} system.
For all choices of prior, the KL divergences are similar, showing that the $\eta$, $\chi_+$, $\chi_-$ distributions are prior dominated when observing slowly evolving systems. Notice that the KL divergences for spins are slightly smaller when using the \emph{Flatmag} prior, meaning that the posterior is even more dominated by the prior. This is because the \emph{Flatmag} prior peaks strongly at $\chi_+=\chi_-=0$, as discussed in Sec.~\ref{priors}. Note that the values of $D_{KL}$ are always larger for $\chi_{\rm PN}$ than for the other spin combinations, reflecting the fact that it is the best measured spin combination. Still, for systems evolving through a narrow frequency interval, the $\chi_{\rm PN}$ distribution is also prior dominated. The effect of the prior is especially well seen for the \emph{Fiducial} system and $T_{\rm obs}= 4\mathrm{yr}$ in Fig.~\ref{etas}: the strong peak of the symmetric mass ratio at 0.25 is what we expect due to the prior (see Sec.~\ref{priors}). The same peak is also observed for the \emph{Later} system (predominantly due to low SNR), but $\eta$ is much better constrained for this system: the likelihood is informative enough to reduce the width of the distribution, but not enough to suppress the prior. {\it Let us reiterate this important finding: for intrinsic parameters beyond the chirp mass, the chirping (extent of the frequency evolution) of the observed SBHB has a stronger influence on PE than the SNR or observation time per se.} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{degen_eta_chis.png} \includegraphics[width=0.49\textwidth]{masses_eta_chis.png} \centering \caption{The left panel shows the inferred distribution on $\eta$ and $\chi_+$ for the \emph{SpinUp} system. Because of a ``competition'' between the prior and the likelihood, the distributions of $\eta$ and $\chi_-$ peak away from the true value indicated by the black lines and the square. The $\mathcal{M}_c$ distribution, not shown, is marginally affected. Because of the bias in $\eta$, the inferred distribution of masses is significantly biased.
However, with our definitions, the true value is within the $90 \%$ CI.}\label{degen_chis_eta} \end{figure*} \begin{table} \begin{center} \begin{tabular}{c|c|c|} \cline{2-3} & $T_{\rm obs}=4\mathrm{yr}$ & $T_{\rm obs}=10\mathrm{yr}$ \\ \hline \multicolumn{1}{|c|}{$\mathcal{M}_c/ \mathcal{M}_{c,0}$} & $1^{+1\times 10^{-4}}_{-4 \times 10^{-5}}$ & $1^{+2 \times 10^{-6}}_{-1 \times 10^{-6}}$ \\ \hline \multicolumn{1}{|c|}{$\mathcal{M}_{c,s}/ \mathcal{M}_{c,s,0}$} & $0.99^{+0.01}_{-0.01}$ & $1.00^{+0.01}_{-0.01}$ \\ \hline \multicolumn{1}{|c|}{$q$} & $2.6^{+4.7}_{-1.6}$ & $1.3^{+0.1}_{-0.3}$ \\ \hline \multicolumn{1}{|c|}{$m_1 / m_{1,0}$} & $1.4^{+1.1}_{-0.6}$ & $0.99^{+0.04}_{-0.13}$ \\ \hline \multicolumn{1}{|c|}{$m_2 / m_{2,0}$} & $0.7^{+0.4}_{-0.3}$ & $1.06^{+0.14}_{-0.04}$ \\ \hline \multicolumn{1}{|c|}{$m_{1,s} / m_{1,s,0}$} & $1.5^{+1.2}_{-0.6}$ & $0.99^{+0.04}_{-0.13}$ \\ \hline \multicolumn{1}{|c|}{$m_{2,s} / m_{2,s,0}$} & $0.7^{+0.5}_{-0.3}$ & $1.06^{+0.15}_{-0.04}$ \\ \hline \multicolumn{1}{|c|}{$\chi_+$} & $0.2^{+0.5}_{-0.7}$ & $0.52^{+0.01}_{-0.02}$ \\ \hline \multicolumn{1}{|c|}{$\chi_-$} & $0.03^{+0.7}_{-0.6}$ & $0.1^{+0.4}_{-0.4}$ \\ \hline \multicolumn{1}{|c|}{$\chi_{\rm PN}$} & $0.2^{+0.4}_{-0.7}$ & $0.433^{+0.008}_{-0.009}$ \\ \hline \multicolumn{1}{|c|}{$\chi_1$} & $0.2^{+0.8}_{-0.6}$ & $0.6^{+0.4}_{-0.3}$ \\ \hline \multicolumn{1}{|c|}{$\chi_2$} & $0.2^{+0.7}_{-1.0}$ & $0.4^{+0.6}_{-0.5}$ \\ \hline \multicolumn{1}{|c|}{$\Delta t_c \ ({\rm s})$} & $10^4$ & $20$ \\ \hline \multicolumn{1}{|c|}{$\Delta \Omega \ ({\rm deg}^2) $} & $0.18$ & $0.03 $ \\ \hline \multicolumn{1}{|c|}{$D_L/D_{L,0}$} & $1.1^{+0.2}_{-0.3}$ & $1.0^{+0.2}_{-0.2}$ \\ \hline \multicolumn{1}{|c|}{$z$} & $0.060^{+0.012}_{-0.014}$ & $0.055^{+0.009}_{-0.012}$ \\ \hline \end{tabular} \end{center} \caption{$90 \%$ CI on the parameters of the \emph{Fiducial} system, whose true values are given in Table \ref{params_fiducial}, using the \emph{Flatphys} prior. For masses and distance, we give the relative errors. The redshifted chirp mass is extremely well determined for both mission durations, but individual masses can be measured only if the mission is long enough and we can observe the system chirping. The measurement of the source frame chirp mass is worse, being dominated by the error on the distance measurement and therefore the redshift in~\eqref{rel_source_mass}. The error on individual masses is dominated by their intrinsic degeneracy. For chirping systems, we can also measure $\chi_{\rm PN}$, which translates into a good constraint on the effective spin $\chi_+$. The error on individual spins remains large for the chirping system, but we can start to constrain the spin of the primary BH (in our example, excluding negative values). As a consequence of the overall improvement in the determination of the intrinsic parameters, the inference of the time to coalescence improves drastically. The sky location (given by Eq.~\eqref{eq_omega}) is very well determined for both mission durations, within the field of view of next generation electromagnetic instruments like Athena and SKA \citep{WinNT,2018CoSka..48..498M}. }\label{errors} \end{table} We note that, although the frequency is slowly evolving, the signal is far from monochromatic, unlike most galactic binaries (e.g., double white dwarf binaries).
As an element of comparison, using the quadrupole formula to compute the frequency derivative at $f_0$, for the \emph{Earlier} system we find $\dot{f}_0=1.9 \times 10^{-11} \ {\rm Hz}^2$, which is four orders of magnitude higher than for the fastest evolving galactic binaries \cite{Korol:2017qcx}. Thus, despite the strong correlation between intrinsic parameters, the chirp mass is always well measured, with a relative error of order $10^{-4}$ for the \emph{Earlier} system when observing for four years and below $10^{-6}$ for the chirping systems. The tight constraint on $\mathcal{M}_c$ leads to the bananalike correlation between $m_1$ and $m_2$ seen in the top right part of Fig.~\ref{comp_tobs}. As a result, we can determine individual masses (within $20$--$30 \%$) only for chirping systems. We give the $90\%$ CI for parameters of the \emph{Fiducial} system in Table \ref{errors}. Whenever the marginalized distribution of a given parameter leans against the upper (lower) boundary of the prior, as for $m_1$ ($m_2$), we define the $90 \%$ CI as the values between the 0.1 and 1 quantiles (0 and 0.9 quantiles). In all other situations we define the $90 \%$ CI as the values between the 0.05 and 0.95 quantiles. In all cases we report the median as a point estimate. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{priors_eta_chis.png} \includegraphics[width=0.49\textwidth]{priors_m2_chi1.png} \centering \caption{The left (right) panel shows the inferred distribution on $\eta$ and $\chi_+$ ($m_2$, $\chi_1$) for the \emph{Fiducial} system using the \emph{Flatphys}, \emph{Flatmag} and \emph{Flatsampl} priors. Under the effect of the prior, the posterior distribution can be significantly shifted away from the true value indicated by black lines and squares.}\label{priors_post} \end{figure*} Systems with a higher mass ratio (\emph{q3} and \emph{q8}, keeping the chirp mass the same as for \emph{Fiducial}) give an error on the chirp mass similar to the \emph{Fiducial} system, but the mass ratio is better determined. This is because, when keeping the chirp mass fixed, the PN expansion of the GW phase features negative powers of $\eta$, notably in the 1 PN term. Moreover, what should matter is the derivative of the phase with respect to $\eta$, which contains only negative powers ($\eta^{-7/5}$, $\eta^{-2/5}$), making the phase more sensitive to $\eta$ for asymmetric systems than for equal-mass systems. For an observation time of four years, the uncertainty on individual masses is still of the order of $100\%$, but for an observation time of ten years, it reaches below $10 \%$ and $1\%$ for the \emph{q3} and \emph{q8} systems, respectively. We now discuss the effect of priors on PE for high-spin systems. Consider the \emph{SpinUp} system in the $T_{\rm obs}=4\mathrm{yr}$ case shown in Fig.~\ref{degen_chis_eta}. As discussed above, in this case we have a correlation between the spin ($\chi_{\rm PN}$), the mass ratio $\eta$ and the chirp mass. In the posterior we observe the interplay between the symmetric mass ratio and effective spin priors, which push samples towards $\eta=0.25$ and $\chi_+=0$, and the likelihood, which peaks at the true value of $\chi_+$ (0.95). This, together with the correlation between parameters, leads to a posterior distribution with a double peak in $\eta$ and a broad distribution for $\chi_+$ (the 2D histogram is more informative).
The distribution (overall) is shifted away from the true values (clearly visible in the right panel of Fig.~\ref{degen_chis_eta}), though they are still contained within the 90\% CI. In the case of $T_{\rm obs}=10\mathrm{yr}$, the system chirps, so the information provided by the likelihood dominates over the prior; this bias is then corrected and most of the degeneracies are (at least partially) broken. In general, the posteriors for the spins of weakly chirping systems are poorly constrained and closely resemble the priors. For chirping systems, the determination of spins can be understood from Fig.~\ref{chischia}. Because of the orientation of the lines $\chi_{\rm PN}={\rm const}$, $\chi_1$ is better constrained than $\chi_2$. As the mass ratio increases, the slope of these lines changes, accentuating this difference. Spins of the same (opposite) sign are better (worse) determined as their magnitudes increase, because of the narrowing (broadening) of the allowed region. For the \emph{Fiducial} system, the error on the spin of the primary BH is quite large, but we can infer that the spin is positive, with $0$ (and negative values) being outside the $90 \%$ CI given in Table \ref{errors}. The effective spin is measured within $0.1$ for chirping systems. All mass results so far were given in terms of redshifted masses. Since $\mathcal{M}_{c,s}=\mathcal{M}_c/(1+z)$, we get: \begin{equation} \frac{\Delta \mathcal{M}_{c,s}}{\mathcal{M}_{c,s}}= \frac{\Delta z}{1+z} + \frac{\Delta \mathcal{M}_c}{\mathcal{M}_c}. \label{rel_source_mass} \end{equation} As we discuss in Sec.~\ref{ext2}, $D_L$ is typically measured within $40$--$60 \%$, which implies a measurement of the redshift $z$ within $\sim 40$--$60 \%$ (at the low redshifts we are considering, $D_L$ and $z$ are linearly related). Thus, the second term on the right-hand side of Eq.~\eqref{rel_source_mass} is clearly subdominant and the error on the source frame chirp mass is dominated by the error on the redshift. As a result we get: \begin{equation} \frac{\Delta \mathcal{M}_{c,s}}{\mathcal{M}_{c,s}} \simeq \frac{0.5 z}{1+z}. \end{equation} This error is typically of the order of a few percent for systems detectable by LISA (up to $z \sim \mathcal{O}(10^{-1})$), which is better than current LIGO/Virgo measurements \cite{LIGOScientific:2018mvr}. This estimate is in good agreement with the results presented in Table \ref{errors}. The errors on individual masses in the source frame are dominated by the errors on the redshifted masses (the analog of the second term on the right-hand side of Eq.~\eqref{rel_source_mass}) due to the poorly constrained mass ratio. The initial frequency is always extremely well determined, with relative errors below $10^{-5}$. Its determination improves for chirping systems due to the reduced correlation with other intrinsic parameters. The frequency of the system at the beginning of the LISA observations is of particular interest, as it is directly linked to the time left before the system coalesces. We apply the stationary phase approximation~\eqref{eq:tf} to the full GW phase to infer $t_c$. This transformation involves all intrinsic parameters, so the error on $t_c$ is typically smaller for chirping systems. We find an error of the order of 1 day for systems far from merger, while for more strongly chirping systems $t_c$ can be determined to within $30 \ {\rm s}$. We find that increasing or decreasing the total mass of the system (while preserving the SNR) as in the \emph{Heavy} and \emph{Light} systems has little consequence on the estimation of intrinsic parameters.
The errors on the spins and the symmetric mass ratio are the same as in the \emph{Fiducial} case. The relative error on the chirp mass and the initial frequency is slightly smaller for lighter systems (a factor $\simeq 1.4$ between the \emph{Heavy} and \emph{Light} systems) because of the larger number of cycles. However, we do not find a simple scaling with the chirp mass of the system for a fixed level of SNR. In particular, we do not find the error on the chirp mass to scale with $\mathcal{M}_c^{5/3}$ as computed in \cite{Finn:1992xs,Cutler:1994ys}. This was to be expected since, as discussed in this section, the errors on intrinsic parameters depend crucially on the frequency interval through which we observe the binary. Finally, the choice of prior only marginally affects the posterior distribution for chirping systems. On the other hand, it can have a significant impact for nonchirping systems, as can be seen in Fig.~\ref{priors_post}. For example, the \emph{Flatmag} prior completely dominates the posterior distribution of spins, as the KL divergences suggested and as shown in Fig.~\ref{priors_post}. Because of the noted correlations, the prior on spins propagates into the determination of the mass ratio and the individual masses. \subsection{Extrinsic parameters} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{dampl_fisher.png}\\ \centering \caption{We plot $f|\partial_i \tilde{h}|^2/S_n$ (normalized to its maximum value) as a function of the frequency, with the corresponding time to coalescence indicated on the upper axis. As discussed in Sec.~\ref{res_int}, most of the information on intrinsic parameters comes from the high end of the frequency band, whereas the contribution to sky parameters mainly comes from the low frequencies.}\label{dampl} \end{figure} \subsubsection{Sky location}\label{skyloc} The sky location of the source is very well determined and, except for systems close to the equator, its posterior distribution is very similar to a unimodal Gaussian distribution. We define the solid angle as in \cite{Cutler:1997ta}: \begin{equation} \Delta \Omega=2 \pi \sqrt{(\Sigma^{\lambda,\lambda}) (\Sigma^{\sin(\beta),\sin(\beta)})-(\Sigma^{\lambda,\sin(\beta)})^2}\,, \label{eq_omega} \end{equation} where $\Sigma$ is the covariance matrix. This defines a $63\%$ confidence region around the true location. The error for the \emph{Fiducial} system, reported in Table \ref{errors}, is below $0.4 \ {\rm deg^2}$, which is within the field of view of most planned electromagnetic instruments such as Athena and SKA \citep{WinNT,2018CoSka..48..498M}. With the exception of the \emph{Equatorial} system, the sky position is constrained with a similar precision for all systems considered in this work. The good localization comes from the complicated modulations imprinted on the signal by the orbital motion of LISA, according to~\eqref{kernel}. To understand how the sky localization evolves as a function of the frequency band in which we observe a system, in Fig.~\ref{dampl} we plot $f |\partial_\lambda \tilde{h}|^2/S_n$ (normalized with respect to its maximum value) as a function of the frequency. The quantity $f |\partial_i \tilde{h}|^2/S_n$ is the integrand entering the computation of the diagonal elements of the Fisher matrix~\eqref{def_fisher} and indicates (for each parameter) the most informative frequency range. Using a logarithmic scale for frequencies, the factor $f$ ensures that we can visualize the contributions to the integral as the area under the curve (up to a normalization factor).
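To make Eq.~\eqref{eq_omega} concrete, here is a minimal Python sketch of how the solid angle can be evaluated from posterior samples; the covariance is computed over ($\lambda$, $\sin\beta$), and the synthetic chains stand in for real posterior samples (their widths are assumptions).
\begin{verbatim}
import numpy as np

def sky_area_deg2(lam_samples, beta_samples):
    # Solid angle of Eq. (eq_omega) from samples of the ecliptic
    # longitude lambda and latitude beta; returns deg^2.
    cov = np.cov(np.vstack([lam_samples, np.sin(beta_samples)]))
    det = cov[0, 0] * cov[1, 1] - cov[0, 1] ** 2
    return 2.0 * np.pi * np.sqrt(det) * np.degrees(1.0) ** 2

# Synthetic, well-localized posterior (widths are assumptions):
rng = np.random.default_rng(1)
lam = rng.normal(1.0, 2e-3, 50_000)   # rad
beta = rng.normal(0.5, 2e-3, 50_000)  # rad
print(sky_area_deg2(lam, beta), "deg^2")
\end{verbatim}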
We also indicate the corresponding values of the time to coalescence $t_c$ on the upper x axis. We indicate the initial (dashed line) and end (solid line) frequencies of the \emph{Later} (red), \emph{Fiducial} (red) and \emph{Earlier} (black) systems for $T_{\rm obs}=10\mathrm{yr}$. The behavior for $\sin(\beta)$ is similar to that for $\lambda$. For comparison we show the same quantity for the chirp mass; the behavior for other intrinsic parameters is similar. As discussed in the previous section, most of the information on intrinsic parameters comes from high frequencies. On the other hand, there is more information on the sky location at low frequencies, where a given range of frequencies corresponds to more orbital cycles of the LISA constellation. However, this is to be balanced with the narrower frequency range spanned by systems evolving at lower frequencies, for a fixed observation time. For this reason, the \emph{Later} system gives a better localization than the \emph{Earlier} system even in the $T_{\rm obs}=10\mathrm{yr}$ case, as reported in Table \ref{comp_err_sky} (0.05 against 0.2 ${\rm deg}^2$). We can distinguish two main effects in~\eqref{kernel} informing us about the sky localization: the time dependency (through $t_{f}$, see Eq.~\eqref{eq:tf}) of the response reflects the orbital cycles of LISA, and the Doppler modulation $\exp(2i\pi f {\bf k} \cdot {\bf p}_0)$ of the phase. The Doppler modulation shows this time dependency, but also scales with $f$, so this term is larger for chirping signals reaching high frequencies. We find a better sky localization for lighter systems: $\Delta\Omega_{\emph{Light}}<\Delta\Omega_{\emph{Fiducial}}<\Delta\Omega_{\emph{Heavy}}$ (ranging from $0.1$ to $0.3$ ${\rm deg}^2$ in the $T_{\rm obs}=4\mathrm{yr}$ case and from $0.02$ to $0.05$ ${\rm deg}^2$ in the $T_{\rm obs}=10\mathrm{yr}$ case). This is a result of keeping fixed the time to coalescence $t_c$ and the SNR (by adjusting the distance) for those systems. The GW signal from the lighter and heavier systems is displaced to higher and lower frequencies respectively, since the evolution rate of the inspiral depends primarily on the chirp mass. Namely, $f_{0}=9.9,\ 12.7,\ 16.4 \ \mathrm{mHz}$ for \emph{Heavy}, \emph{Fiducial}, and \emph{Light}, respectively. Since we keep the SNR fixed in this comparison, this means that the lighter system has a stronger sky-dependent Doppler modulation of the phase, helping with the localization. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{comp_beta.png}\\ \centering \caption{Inferred distribution on the angles parametrizing the position of the source for the \emph{Polar}, \emph{Fiducial} and \emph{Equatorial} systems, with $T_{\rm obs} = 4\mathrm{yr}$. As explained in the main text, to avoid coordinate effects near the pole we do not compare the angles in the SSB frame ($\lambda,\beta$) but transformed angles ($\mu,\gamma$) defined by placing the injection point at the equator in each case (note that the scale of the two axes is not the same). The injection corresponds to $\mu=\gamma=0$ as indicated by the black solid lines and squares. $\mu$ is equally well recovered in the three cases.
For the \emph{Equatorial} system we find a tail extending to the position $\beta \to -\beta$.}\label{comp_beta} \end{figure} \begin{table} \begin{center} \begin{tabular}{c|c|c|} \cline{2-3} & \multicolumn{2}{|c|}{$\Delta \Omega \ {\rm (deg^2)}$} \\ \cline{2-3} & $T_{\rm obs}=4\mathrm{yr}$ & $T_{\rm obs}=10\mathrm{yr}$ \\ \hline \multicolumn{1}{|c|}{\emph{Fiducial}} & $0.18$ & $0.03$ \\ \hline \multicolumn{1}{|c|}{\emph{Earlier}} & $0.70$ & $0.20$ \\ \hline \multicolumn{1}{|c|}{\emph{Later}} & $0.05$ & / \\ \hline \multicolumn{1}{|c|}{\emph{Polar}} & $0.14$ & $0.02$ \\ \hline \multicolumn{1}{|c|}{\emph{Equatorial}} & $2.74$ & $0.24$ \\ \hline \end{tabular} \end{center} \caption{Solid angle around the injection point corresponding to a $63\%$ confidence region, computed with~\eqref{eq_omega}. The sky localization is slightly better for the \emph{Polar} sky position $\beta \simeq \pm \pi/2$, but much worse for the \emph{Equatorial} sky position $\beta \simeq 0$. The sky localization is better for the \emph{Later} system than for the \emph{Earlier} system (despite a lower SNR in the $T_{\rm obs}=10\mathrm{yr}$ case) due to the broader frequency range spanned during its observation.}\label{comp_err_sky} \end{table} When comparing the \emph{Polar}, \emph{Fiducial} and \emph{Equatorial} systems, a direct comparison of the sky localization could be quite misleading because the metric on a sphere depends on the latitude, with a singularity at the pole. To avoid this issue we define a system of coordinates on the sphere ($\mu,\gamma$) such that the injection point is always on the equator. The transformation from the ecliptic coordinates to this frame is source dependent. The spherical coordinates at the equator are locally Cartesian and simplify the comparison of the results. We show the results of the sky localization in Fig.~\ref{comp_beta} for the \emph{Polar}, \emph{Equatorial} and \emph{Fiducial} systems in the ($\mu,\gamma$) frame and for $T_{\rm obs}=4 \mathrm{yr}$. All three systems recover $\mu$ (the azimuthal angle) similarly well, but the determination of $\gamma$ worsens as $\beta \to 0$. Furthermore, for the \emph{Equatorial} system we find a tail extending all the way to a secondary sky position corresponding to $\beta \to -\beta$. This behavior is due to the dominant Doppler phase in the frequency response, which goes as $\cos(\beta)$: although the amplitude of the effect itself is maximized, its variation with the latitude is minimal, as $\cos(\beta)$ is flat around $\beta=0$. For $T_{\rm obs}=10\mathrm{yr}$ this partial degeneracy is broken thanks to a combination of effects: there are more cycles of LISA's orbit contributing, the signal reaches high frequencies where the $f$ dependent terms in the response~\eqref{kernel} are larger, and the total SNR itself is larger. The solid angle for the \emph{Equatorial} system is larger as compared to other systems (as reported in Table \ref{comp_err_sky}) but remains much tighter than the current sky localization with ground-based observatories \cite{LIGOScientific:2018mvr}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{comp_edgeon.png}\\ \centering \caption{Comparison of the inferred distribution on $\psi$, $\varphi$, $\iota$ and $D_L$ for the \emph{Far} and \emph{Edgeon} systems; both have similar SNRs. We normalized the distance to the injection value. Black lines and squares indicate the true values common to both systems, and colored lines and squares the value of $\iota$ for each system.
For the \emph{Edgeon} system the degeneracies between $\varphi$ and $\psi$ and between $\iota$ and $D_L$ are broken, giving a better estimation of each of these parameters. However, close-to-edge-on systems will usually have much lower SNR. Indeed, in order to keep a comparable SNR, the distance to the \emph{Edgeon} system is less than half the distance to the \emph{Far} system. }\label{comp_edgeon} \end{figure} \subsubsection{Other extrinsic parameters} \label{ext2} We find strong correlations between inclination and distance, and between the polarization and the initial phase. These degeneracies are commonly seen in the analysis of LIGO/Virgo sources when using only the dominant $2,\pm 2$ mode. With only the dominant $2,\pm 2$ mode, the GW in the radiation frame is given as: \begin{subequations} \begin{align} \tilde{h}_{+} (f) &= \tilde{A}(f) \frac{1+\cos^{2}(\iota)}{2} e^{2 i \varphi} e^{-2 i \Psi(f)} \,,\\ \tilde{h}_{\times} (f) & = i \tilde{A}(f) \cos (\iota) e^{2 i \varphi} e^{-2 i \Psi(f)} \,, \end{align} \end{subequations} where $\tilde{h}_{22}(f) = A(f) \exp(-i\Psi (f))$ is the frequency domain amplitude and phase decomposition of the mode $h_{22}$, with $\tilde{A} \equiv \sqrt{5/16\pi} A(f) $ absorbing conventional factors. We refer to~\cite{Marsat:2020rtl} for notation; in particular we exploit the symmetry between $h_{22}$ and $h_{2,-2}$ for nonprecessing systems to write the waveform in terms of $h_{22}$ only. Going to the SSB frame we rotate by the polarization angle $\psi$: \begin{subequations}\label{eq:hpcSSB} \begin{align} \tilde{h}_{+}^{\rm SSB} &= \tilde{h}_{+}\cos(2\psi)-\tilde{h}_{\times}\sin(2\psi) \,,\\ \tilde{h}_{\times}^{\rm SSB} &= \tilde{h}_{+}\sin(2\psi)+\tilde{h}_{\times}\cos(2\psi) \,. \end{align} \end{subequations} For a face-on system, $\iota=0$, leading to: \begin{subequations} \begin{align} \tilde{h}_{+}^{\rm SSB}(f) &= \tilde{A}(f) e^{2 i (\varphi - \psi)} e^{-2 i \Psi(f)} \,, \\ \tilde{h}_{\times}^{\rm SSB}(f) &= i\tilde{A}(f) e^{2 i (\varphi - \psi)} e^{-2 i \Psi(f)} \,. \end{align} \end{subequations} Thus we see that the initial phase and the polarization appear only through the combination $\psi-\varphi$, yielding a true degeneracy corresponding to $\psi-\varphi={\rm const}$. For systems close to face-on/face-off, like the \emph{Fiducial} system, this gives the strong correlation between $\psi$ and $\varphi$ clearly visible in Fig.~\ref{comp_tobs}. For edge-on systems ($\iota=\pi/2$) we have instead: \begin{subequations} \begin{align} \tilde{h}_{+}^{\rm SSB}(f) &= \tilde{A}(f) \cos (2\psi) e^{2 i \varphi} e^{-2 i \Psi(f)} \,, \\ \tilde{h}_{\times}^{\rm SSB}(f) &= \tilde{A}(f) \sin (2\psi) e^{2 i \varphi} e^{-2 i \Psi(f)} \,, \end{align} \end{subequations} and the degeneracy between $\psi$ and $\varphi$ is then broken, as also shown in Fig.~\ref{comp_edgeon}. There we compare the distributions of $\psi$, $\varphi$, $\iota$ and $D_L$ for the \emph{Edgeon} system to the \emph{Far} system (which is almost face-on). Those systems have similar SNRs. When the degeneracy between $\psi$ and $\varphi$ is broken, we observe a correlation between the initial phase and the initial frequency. This is an artificial correlation due to relating $\varphi$ to the value of the phase at $f_0$ for each template. Using a fixed reference frequency, such as the initial frequency of the injected signal, eliminates this correlation.
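The face-on limit can be checked numerically; in the minimal sketch below (single-frequency toy values, not the waveform code used in this paper), shifting $\varphi$ and $\psi$ by the same amount leaves both SSB polarizations unchanged at $\iota=0$.
\begin{verbatim}
import numpy as np

def hpc_ssb(A, Psi, iota, varphi, psi):
    # 22-mode polarizations rotated to the SSB frame, Eq. (eq:hpcSSB).
    hp = A * (1 + np.cos(iota) ** 2) / 2 * np.exp(2j * varphi - 2j * Psi)
    hc = 1j * A * np.cos(iota) * np.exp(2j * varphi - 2j * Psi)
    return (hp * np.cos(2 * psi) - hc * np.sin(2 * psi),
            hp * np.sin(2 * psi) + hc * np.cos(2 * psi))

A, Psi = 1.0, 0.3  # toy amplitude and phase at a single frequency
h1 = hpc_ssb(A, Psi, iota=0.0, varphi=0.7, psi=0.2)
h2 = hpc_ssb(A, Psi, iota=0.0, varphi=1.0, psi=0.5)  # same varphi - psi
print(np.allclose(h1, h2))  # True: only psi - varphi matters at iota = 0
\end{verbatim}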
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{comp_prior_dl.png}\\ \centering \caption{Distribution of $\cos (\iota)$ and $D_L$ using the \emph{Flatphys} and \emph{Flatsampl} priors for $T_{\rm obs}=4\mathrm{yr}$. Although the distributions look rather different, the width of the $90\%$ CI for $D_L$ is barely affected and the true point, indicated by black lines and squares, is well within the CI.}\label{prior_dl} \end{figure} In Fig.~\ref{comp_edgeon} we also plot distance and inclination, which show a significant correlation for the \emph{Far} system that is absent for the \emph{Edgeon} system. Distance and inclination are purely extrinsic parameters, and the degeneracy features, when subdominant (higher-order) modes are negligible, appear in the same way for LIGO/Virgo and LISA. For LISA, see, e.g., the discussion in the context of galactic binaries in~\cite{Shah:2012vc}. In short, in the limit of face-on/off systems the inclination acts as a scaling factor over a rather broad range of inclination values, so changes in $\cos (\iota)$ can be compensated by changes in $D_L$. For close-to-edge-on systems, the $\times$ polarization of the wave is suppressed (in the wave frame, before transforming to the SSB frame as in~\eqref{eq:hpcSSB}). The important point is that this suppression of $h_{\times}$ depends quite sensitively on the inclination, so that reproducing the injected signal leads to a rather tight constraint on $\iota$, and as a consequence on $D_L$. For MBHB observations with LISA, higher modes play an important role and help break these degeneracies \cite{Marsat:2020rtl}; but SBHBs are observed by LISA far from coalescence and higher modes are negligible for these signals. \begin{figure*} \centering \includegraphics[width=0.58\textwidth]{comp_fm_tobs4.png} \centering \caption{Comparison between the inferred distribution for the \emph{Fiducial} system using the \emph{Flatsampl} prior and our Fisher analysis with $T_{\rm obs}=4\mathrm{yr}$. Black lines and squares indicate the true values.}\label{comp_fm4} \end{figure*} In Fig.~\ref{prior_dl} we show the effect of the distance prior on the posterior distribution for $\cos (\iota)$ and $D_L$ using the \emph{Flatphys} and \emph{Flatsampl} priors for $T_{\rm obs}=4\mathrm{yr}$. The former favors larger distances and, to keep the correct overall signal amplitude, compensates by preferring the face-on configuration. In the case of the \emph{Flatsampl} prior, the posterior distribution of $\cos (\iota)$ is flat because the likelihood itself is very flat around $\iota=0,\pi$ ($\cos (\iota)$ is a slowly varying function around its extrema). Thus, the choice of prior shifts the peak of the posterior, but the $90\%$ CI still contains the true value and its width is largely unaffected. Among all the cases we have considered, $D_L$ can at best be determined within $40 \%$, with the exception of the \emph{Edgeon} system, for which we can determine the distance to within $20 \%$. However, edge-on systems will have lower SNR for a fixed distance to the source, and therefore there is an observational selection effect favoring face-on/off systems (that is what we observe with LIGO/Virgo). If we fix all other parameters of the \emph{Fiducial} system and set $\iota=\pi/2-\pi/36$, the SNR drops from $21$ to $9$ for $T_{\rm obs}=10\mathrm{yr}$.
For fixed inclination, time to coalescence and source position, the errors on intrinsic parameters, distance and sky position scale, to first approximation, as $1/\mathrm{SNR}$. \subsection{Fisher matrix analysis}\label{results_fm} In this subsection we consider PE using a slightly improved version of the Fisher information matrix analysis, inspired by \cite{Vallisneri:2007ev}. We have introduced the Fisher matrix in Sec.~\ref{fisher_mat} and discussed its augmented version, the effective Fisher matrix, in Sec.~\ref{mhmcmc} for computing the covariance matrix. As we mentioned in Sec.~\ref{mhmcmc} and showed in Sec.~\ref{ext2}, the likelihood is very flat around $\iota=0,\pi$, leading Fisher-based PE to overestimate the errors on $\cos (\iota)$ and $D_L$. To correct for this, we add an additional term ($F^{\rm t}$) to the effective Fisher matrix: $F_{\rm eff}=F+F^{\rm p}+F^{\rm t}$, where $F$ is the ``original'' Fisher matrix given by Eq.~\eqref{def_fisher} and $F^{\rm p}$ is introduced to account for the prior on spins. Empirically, we found the choice $F^{\rm t}_{\cos (\iota),\cos (\iota)} = \frac{1}{(0.2(20/\mathrm{SNR}))^2}$ and 0 elsewhere to give good results for $\cos (\iota)$ and $\log_{10}(D_L)$. The prior matrix $F^{\rm p}$ does more than truncate the error on spins: it mimics the nontrivial prior on $\chi_{+}$ and $\chi_{-}$. Indeed, requiring the spins to be in the physically allowed range ($-1 \leq \chi_{1,2} \leq 1$) leads to a parabola-shaped prior on $\chi_{+,-}$, as seen in Fig.~\ref{comp_priors}. We approximate this nontrivial prior by a Gaussian distribution centered at $\chi_{+,-}=0$ with standard deviation $\sigma = 0.5$. We invert the effective Fisher matrix to obtain the covariance matrix and use it to draw points from a multivariate Gaussian distribution. To fully account for the effect of the prior on spins, the point at which the Gaussian distribution is centered is shifted to $\theta_{\rm eff}=F_{\rm eff}^{-1}F\theta_{0}$. We only keep points within the boundaries given in Eq.~\eqref{flatsampl_prior}. For $\psi$ and $\varphi$ we draw points in an interval of width $\pi$ around the central value. In Figs.~\ref{comp_fm4} and \ref{comp_fm10} we compare our Fisher analysis to the inferred distribution for the \emph{Fiducial} system using the \emph{Flatsampl} prior. \begin{figure*} \centering \includegraphics[width=0.58\textwidth]{comp_fm_tobs10.png} \centering \caption{Similar to Fig.~\ref{comp_fm4} but with $T_{\rm obs}=10\mathrm{yr}$. }\label{comp_fm10} \end{figure*} We find very good agreement despite the rather low SNR of this system, especially in the $T_{\rm obs}=4\mathrm{yr}$ case. In particular, the sky localization is the same for the full PE and the effective Fisher analysis. Naturally, this method cannot reproduce the secondary maximum we found for the \emph{Equatorial} system, but it does predict a higher error as the system approaches the equatorial plane. The good agreement for $\chi_+$ and $\chi_-$ for $T_{\rm obs}=4\mathrm{yr}$ arises because the effective Fisher and the posterior distribution are both prior dominated. For $\mathcal{M}_c$ and $\eta$, Fisher agrees with the full PE at the 2-sigma level but cannot reproduce the bananalike correlation. In the case of $T_{\rm obs}=10\mathrm{yr}$, the likelihood becomes more informative for the effective spin, reducing the error predicted by the ``original'' Fisher matrix, while the $\chi_-$ distribution is still prior dominated.
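A minimal sketch of this effective-Fisher sampling under the stated choices (the $\sigma=0.5$ Gaussian stand-in for the spin prior, the empirical $F^{\rm t}$ entry, and the shifted center $\theta_{\rm eff}=F_{\rm eff}^{-1}F\theta_{0}$); the toy Fisher matrix and the parameter ordering are assumptions, and the truncation to the prior boundaries is omitted.
\begin{verbatim}
import numpy as np

def effective_fisher_samples(F, theta0, i_chip, i_chim, i_cosi,
                             snr, n_samples=100_000, seed=0):
    n = F.shape[0]
    Fp = np.zeros((n, n))                # Gaussian stand-in for the
    Fp[i_chip, i_chip] = 1.0 / 0.5 ** 2  # parabola-shaped spin prior,
    Fp[i_chim, i_chim] = 1.0 / 0.5 ** 2  # centered at chi_{+,-} = 0
    Ft = np.zeros((n, n))                # empirical inclination term
    Ft[i_cosi, i_cosi] = 1.0 / (0.2 * (20.0 / snr)) ** 2
    cov = np.linalg.inv(F + Fp + Ft)
    theta_eff = cov @ (F @ theta0)       # prior-shifted central point
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(theta_eff, cov, size=n_samples)

# Toy 3-parameter example (chi_+, chi_-, cos iota); F is an assumption:
F = np.diag([30.0, 5.0, 0.5])
theta0 = np.array([0.4, 0.1, 0.9])
samples = effective_fisher_samples(F, theta0, 0, 1, 2, snr=13.5)
print(samples.mean(axis=0))  # pulled toward 0 relative to theta0
\end{verbatim}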
Without adding $F^{\rm t}$ to the effective Fisher matrix, the direction of the correlation between $\cos(\iota)$ and $D_L$ is predicted well, but the Fisher matrix severely overestimates the error for nearly face-on/face-off systems. For the \emph{Edgeon} system, the likelihood is not as flat, so the error predicted by the ``original'' Fisher matrix is already small (in agreement with the Bayesian analysis) and adding $F^{\rm t}$ does not affect the PE. Based on the rather good agreement we found with Bayesian PE, we can exploit the simplicity of the Fisher analysis to further explore how the PE evolves with the time left to coalescence. In Fig.~\ref{err_fm}, assuming $T_{\rm obs}=4\mathrm{yr}$ and $T_{\rm obs}=10\mathrm{yr}$, we plot the errors on $\mathcal{M}_c$, $\eta$, $\chi_+$ and $\Delta \Omega$ as a function of the time to coalescence $t_c$, keeping all the parameters of the \emph{Fiducial} system fixed but varying the initial frequency in accordance with the chosen $t_c$. We plot the corresponding evolution of the SNR in the top panel, with the lowest SNR of 8 being reached for $t_c\simeq1\mathrm{yr}$. Dashed lines mark $t_c=T_{\rm obs}$ in each case, which corresponds to the maximum achievable SNR given the observation time, and also to the best estimation of parameters. Note the two different regimes on the two sides of the dashed line: to the left, the PE is governed by the decrease in the signal duration in the LISA band and the reduction in SNR, while to the right the PE is determined mainly by the bandwidth of the signal spanned over the observation time. As discussed in Sec.~\ref{skyloc}, the sky localization comes mainly from modulations caused by the motion of LISA; therefore it worsens rapidly if the system spends too little time in band (below 1 year). \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{errors_fisher.png} \centering \caption{Evolution of the error as a function of the time before merger at which we start observing the system, in the $T_{\rm obs}=4\mathrm{yr}$ and $T_{\rm obs}=10\mathrm{yr}$ cases. The SNR is given in the upper panel. The errors on $\mathcal{M}_c$, $\eta$, and $\chi_+$ correspond to the width of the $90 \%$ CIs, and $\Delta \Omega$ is defined in \eqref{eq_omega}.}\label{err_fm} \end{figure*} \subsection{Long-wavelength approximation} \begin{figure*} \centering \includegraphics[width=0.5\textwidth]{comp_lowfl_comb.png} \centering \caption{Comparison of inferred distributions of intrinsic parameters and sky location using the \emph{Full} and \emph{LW} responses for the \emph{Fiducial} system in the $T_{\rm obs}=10\mathrm{yr}$ case. Black lines and squares indicate the true values.} \label{int_lw} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{comp_dt_std_tobs10.png}\\ \centering \caption{Comparison of the inferred distributions for the \emph{Fiducial} system in the $T_{\rm obs}= 10\mathrm{yr}$ case using the \emph{Full} and the \emph{LW} response in the Bayesian analysis. In both cases, data was generated with the \emph{Full} response.
Black lines and squares indicate the true values.}\label{comp_dt} \end{figure*} \begin{table} \begin{center} \begin{tabular}{c|c|c|c|c|} \cline{2-5} & \multicolumn{2}{|c|}{$T_{\rm obs}=4\mathrm{yr}$} & \multicolumn{2}{|c|}{$T_{\rm obs}=10\mathrm{yr}$} \\ \cline{2-5} & \emph{Full} & \emph{LW} & \emph{Full} & \emph{LW} \\ \hline \multicolumn{1}{|c|}{\emph{Fiducial}} & $13.5$ & $12.9$ & $21.1$ & $21.4$\\ \hline \multicolumn{1}{|c|}{\emph{Polar}} & $12.8$ & $12.2$ & $20.1$ & $20.0$ \\ \hline \multicolumn{1}{|c|}{\emph{Equatorial}} & $14.9$ & $14.2$ & $23.1$ & $23.4$\\ \hline \end{tabular} \end{center} \caption{Comparison between the SNRs for the \emph{Fiducial}, \emph{Polar} and \emph{Equatorial} systems using the \emph{Full} and the \emph{LW} response.}\label{snrs_lw} \end{table} \begin{table*} [!ht] \begin{center} \begin{tabular}{c|c|c|c|c|c|c|} \cline{2-7} & \multicolumn{3}{|c|}{$T_{\rm obs}=4\mathrm{yr}$} & \multicolumn{3}{|c|}{$T_{\rm obs}=10\mathrm{yr}$} \\ \cline{2-7} & $\log \mathcal{L}(\theta_0)$ & max($\log \mathcal{L}$) & $\tilde{\rho}$ & $\log \mathcal{L}(\theta_0)$ & max ($\log \mathcal{L}$) & $\tilde{\rho}$ \\ \hline \multicolumn{1}{|c|}{\emph{Fiducial}} & $-50$ & $-2$ & $0.99$ & $-268$ & $-38$ & $0.91$ \\ \hline \multicolumn{1}{|c|}{\emph{Polar}} & $-45$ & $-3$ & $0.99$ & $-234$ & $-30$ & $0.92$ \\ \hline \multicolumn{1}{|c|}{\emph{Equatorial}} & $-55$ & $-2$ & $0.99$ & $-288$ & $-34$ & $0.94$ \\ \hline \end{tabular} \end{center} \caption{Log-likelihood at the true point, maximum log-likelihood and maximum overlap $\tilde{\rho}$ (defined in Eq.~\eqref{rel_snr}) when using the \emph{LW} approximation in the Bayesian analysis for data generated with the \emph{Full} response. }\label{snr_eff} \end{table*} In Table \ref{snrs_lw} we compare the SNR for the \emph{Fiducial}, \emph{Polar} and \emph{Equatorial} systems using the \emph{Full} and \emph{LW} responses for two observation times. We find that, once the degradation at high frequencies (Eq.~\eqref{degrad_highf}) is accounted for, the \emph{LW} approximation barely affects the PE, as can be seen in Fig.~\ref{int_lw}. We find similar behavior for the \emph{Polar} and \emph{Equatorial} systems. Some care is needed in interpreting this result: this comparison shows that the high frequency terms neglected in the \emph{LW} approximation have little impact on the posterior of the sky position if the likelihood is computed self-consistently (signal and template are produced using the same response, either \emph{LW} or \emph{Full}). However, when analyzing real data these high frequency terms cannot be neglected. In other words, these effects in the full response can indeed be subdominant in the parameter recovery, if more information comes from other effects like the LISA motion and the main Doppler modulation, while not being negligible in the signal itself. To illustrate this, we simulate data for the \emph{Fiducial}, \emph{Polar} and \emph{Equatorial} systems using the full response and perform a Bayesian analysis using the \emph{LW} approximation to compute templates. In Table \ref{snr_eff}, we give the log-likelihood evaluated at the true point, the maximum likelihood and the maximum overlap: \begin{equation} \tilde{\rho}={\rm max}_h \left ( \frac{(d|h)}{\sqrt{(d|d)(h|h)}} \right ). \label{rel_snr} \end{equation} In practice, we compute the maximum overlap by optimizing over our samples. The quantity $1-\tilde{\rho}$ indicates how much SNR would be lost if the wrong templates were used for signal detection.
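A minimal sketch of how Eq.~\eqref{rel_snr} can be evaluated on a discrete frequency grid, maximizing over a finite set of templates as we do over posterior samples; the toy signals and flat PSD below are stand-ins for the actual waveforms and noise model.
\begin{verbatim}
import numpy as np

def inner(a, b, Sn, df):
    # Discretized noise-weighted inner product 4 Re int a b* / Sn df.
    return 4.0 * df * np.real(np.sum(a * np.conj(b) / Sn))

def max_overlap(d, templates, Sn, df):
    # Eq. (rel_snr), maximized over a finite set of templates, as is
    # done in practice over the posterior samples.
    dd = inner(d, d, Sn, df)
    return max(inner(d, h, Sn, df) / np.sqrt(dd * inner(h, h, Sn, df))
               for h in templates)

# Toy frequency series: pure time shifts of a reference signal.
f = np.linspace(0.01, 0.1, 1000)
df, Sn = f[1] - f[0], np.ones_like(f)
d = np.exp(-2j * np.pi * 1.0e4 * f)
templates = [np.exp(-2j * np.pi * (1.0e4 + dt) * f) for dt in (0.0, 5.0)]
print(max_overlap(d, templates, Sn, df))  # 1.0: one template matches d
\end{verbatim}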
We find that up to $\sim 10\%$ of the SNR could be lost; given the already low SNR of SBHBs in LISA, this would severely compromise our chances of detecting such sources. The very small value of the likelihood at the true point by itself shows that using the \emph{LW} approximation will have an impact on the PE. In Fig.~\ref{comp_dt}, we compare posterior distributions obtained by using templates generated with the \emph{Full} or \emph{LW} response to analyze the \emph{Fiducial} system in the $T_{\rm obs}=10\mathrm{yr}$ case, the data being generated with the \emph{Full} response. This system has a significant bandwidth and the \emph{LW} template cannot fit simultaneously the low and high frequency content of the signal, causing severe biases in the PE and loss of SNR. The same system with $T_{\rm obs}=4\mathrm{yr}$ shows a different result: the \emph{LW} template is effectual enough to fit the signal rather well, with the largest bias appearing only in the $\psi-\varphi$ distribution, as a compensation for the terms neglected in the response, and with a mild drop in the SNR. However, those signals are quite weak and we do not have the luxury to lose even a small portion of the SNR. Thus, our findings seem to validate the \emph{LW} approximation for prospective PE studies, if it is used consistently for injecting and recovering the signal, while it would be inappropriate for analyzing real data. However, we should remember that we did not explore the full parameter space, while \eqref{degrad_highf} is valid as an average over orientations, so a different choice of parameters could yield worse results. We also note that the full response~\eqref{kernel} is actually quite simple and not more expensive computationally, while being unambiguous and eliminating the need for the averaging entering \eqref{degrad_highf}. \section{Discussion}\label{ccl} Merging stellar-mass BH binaries are detected almost weekly during the third LIGO/Virgo observing run (O3). In this work we explored what LISA will be able to tell us about those binaries. While ground-based detectors observe the last seconds before the merger, LISA will see the early inspiral evolution of those systems. The results of the O3 run are not publicly available yet, so we used a GW150914-like system as a fiducial system in our study. We varied the parameters of the system in turn, investigating the corresponding changes in PE. We constructed and analyzed simulated (noiseless) data applying the full LISA response. We employed a Bayesian PE analysis and cross-checked our results using two independent samplers. We have found that PE results are most sensitive to the frequency span of the GW and its extent within the LISA sensitivity band given the observation duration, or in other words, how much the signal chirps during the observation time. For weakly chirping systems that do not reach high frequencies during LISA's observations, the GW phase is dominated by the leading PN order, with smaller contributions from higher PN terms. As a result, the best measured parameter is the chirp mass (entering at the leading order) with typical relative error below $10^{-4}$. The weak contributions of subleading terms up to 1.5 PN lead to a three-way correlation between spins, symmetric mass ratio and the chirp mass. The mass ratio is very poorly constrained and the posterior for the spins is dominated by the priors. We nonetheless recover the sky position very well (typically within $0.4$ deg$^2$) thanks to the amplitude and phase modulation of the GW signal due to LISA's motion.
Such an area in the sky is within the field of view of electromagnetic instruments such as Athena and SKA. For chirping systems that reach the high end of the LISA frequency band and coalesce during the observation, higher-order PN terms become more important and help break the correlations between intrinsic parameters, thus leading to a significant improvement in PE. The individual masses for chirping systems are measured within $20$--$30\%$, and even better for systems with higher mass ratio. The constraints on individual spins result from the combination of the measurement of the 1.5 PN spin combination and the physical boundaries of the prior on spins: $-1 \leq \chi_{1,2} \leq 1$. This suggests using $\chi_{\rm PN}$ (specified in Eq.~\eqref{chipn}) as a sampling parameter. We find that the measurement of the time to coalescence improves as we observe the systems closer to merger, from $\mathcal{O}(1 \ {\rm day})$ (for mildly chirping binaries) to $\mathcal{O}(30 \ {\rm s})$. We note that the best way to increase our chances of observing chirping SBHBs is to increase LISA's mission duration. The measurement of the luminosity distance is less impacted by whether the systems are chirping, and is essentially a function of their SNR. Much as in LIGO/Virgo observations when higher modes can be neglected, the degeneracy between distance and inclination is important. In our example, the distance is typically measured within $40$--$60\%$ if the system is close to face-on/off and within $20\%$ if the system is edge-on (when adjusting the distance to keep the same SNR). The uncertainty on the distance, and therefore on the redshift, dominates the error on the source-frame chirp mass at the percent level. The precision on individual masses in the source frame is dominated by intrinsic parameter degeneracies. We have suggested an augmentation of the usual Fisher matrix approach, which we called the effective Fisher matrix, and we have shown that it gives rather reliable results for the sky position and intrinsic parameters of the system when compared to Bayesian PE. We also showed that combining the use of the long-wavelength approximation for LISA with the introduction of a degradation factor at high frequencies yields results very similar to using the full response for computing likelihoods self-consistently (using the same response for injected data and templates). However, using the long-wavelength approximation to analyse real data could decrease the effective SNR by $10\%$, drastically reducing our chances of detecting the signals, and would have a significant impact on the PE, particularly on the measurement of intrinsic parameters. Since the computational cost of the full response is essentially the same as that of the long-wavelength approximation, we recommend its use in future work.
We can utilize the knowledge and understanding obtained in the study of PE for the development of search tools: (i) the PE for these systems is mainly unimodal, with secondary modes appearing either in special cases (like the \emph{Equatorial} system) or under the effect of priors where the likelihood is weakly informative (like the symmetric mass ratio for nonchirping systems); (ii) the chirp mass and the sky coordinates are the best measured parameters, so we can build a hierarchical search starting with those parameters and taking into account the correlations which are explored and understood in Sec.~\ref{results}; (iii) the effective Fisher matrix provides an efficient proposal for a Bayesian search once we start to find indications of a candidate GW signal in the data. In addition, we can perform an incremental analysis starting with a half-year-long data segment and progressively increasing it. This works as a natural annealing scheme and should help in detecting (especially) chirping systems. Detection by LISA of even a few SBHBs that merge somewhat later, with a very high SNR, in the band of ground-based detectors \cite{Gerosa:2018wbw} would constitute ``golden events''. Beyond all the benefits of multiband detections {\it per se}, the information provided by LISA itself will be very valuable. For example, \cite{Tso:2018pdv} suggested that the measurement of the time to coalescence could be used to inform ground-based detectors and improve BH spectroscopy. The good estimates of the time to coalescence and the sky location could be used for electromagnetic follow-up of the source, as suggested in \cite{Caputo:2020irr}. Finally, these measurements could be used to tighten the constraints on the Hubble constant ($H_0$) even if no electromagnetic counterparts are detected, using galaxy catalogues \cite{Schutz:1986gp,DelPozzo:2017kme,Kyutoku:2016zxn}. Moreover, we expect our results to extend to more massive systems such as ``light'' intermediate-mass black hole binaries (IMBHBs), i.e., with component masses $\mathcal{O}(10^2 M_{\odot})$, and to systems similar to the massive binary recently announced by the LIGO/Virgo collaboration \cite{Abbott:2020tfl,Toubiana:2020drf}. Modifications to the GW phase induced by either modified theories of gravity or environmental effects will generically involve additional coefficients parametrizing the underlying mechanisms and their correlations with the parameters of the system ($\mathcal{M}_c$, $\eta$, \dots). As a consequence, even for low-frequency modifications, the best constraints or measurements will come from chirping systems, as we found in \cite{Toubiana:2020vtf} in the context of testing modified theories of gravity with LISA observations. A major improvement to this work would be the inclusion of orbital eccentricity. Astrophysical formation models predict that binaries formed dynamically should have large eccentricities \cite{Antonini:2012ad,Samsing:2017xmd}. However, by the time these binaries reach the frequency band of ground-based detectors, they will have circularized. Thus, LISA could play an important role in discriminating between different formation channels \cite{Samsing:2018isx,Nishizawa:2016jji,Nishizawa:2016eza,Breivik:2016ddj}. Furthermore, neglecting eccentricity could affect the PE and the detection efficiency. We are currently limited by the lack of fast eccentric waveforms, but work is ongoing in this direction \cite{Cao:2017ndf,Ireland:2019tao,Hinder:2017sxy,Hinderer:2017jcs,Huerta:2017kez}.
Concerning spins, binaries formed dynamically are expected to have misaligned spins \cite{Gerosa:2018wbw}, causing the binary's orbit to precess. The system might undergo a sizeable number of precession cycles over the lifetime of LISA, albeit with a small opening angle of the precession cone for a binary in its early inspiral. Precession effects can become more important close to merger and therefore should be considered carefully when relating the signal in the LISA band with the signal in the band of ground-based detectors. We leave the investigation of precession effects for future work. We conclude that this work, together with \cite{Toubiana:2020vtf, Caputo:2020irr}, confirms the scientific potential of the observation of SBHBs with LISA and should be seen as a first step towards an extensive study of the PE for multiband observations. \section*{Acknowledgements} A.T. is grateful to Nikolaos Karnesis for his help at the start of the project and for his support throughout. We thank the anonymous referee for making useful suggestions. This work has been supported by the European Union's Horizon 2020 research and innovation program under the Marie Sk\l{}odowska-Curie Grant Agreement No.\ 690904. A.T. acknowledges financial support provided by Paris-Diderot University (now part of Université de Paris). The authors would also like to acknowledge networking support from the COST Action CA16104. \FloatBarrier
{ "timestamp": "2021-01-07T02:01:16", "yymm": "2007", "arxiv_id": "2007.08544", "language": "en", "url": "https://arxiv.org/abs/2007.08544" }
\section{Additional Results} \label{appendix:results} \noindent{\bf Short-term temporal video consistency.} For each sequence, we first take two neighboring frames from the ground truth images and compute the optical flow between them using FlowNet2~\cite{ilg2017flownet}. We then use the optical flow to warp the corresponding synthesized images and compute the L1 distance between the warped image and the target image, in RGB space, normalized by the number of pixels and channels. This process is repeated for all pairs of neighboring frames in all sequences and averaged. The result is shown below in Table~\ref{table:short_term_consistency}. As can be seen, Ours w/o World Consistency (W.C.) consistently performs better than vid2vid~\cite{wang2018video}, and Ours (with world consistency) again consistently outperforms Ours w/o W.C. \begin{table}[h!] \centering \caption{Short-term temporal consistency scores. Lower is better.} \label{table:short_term_consistency} \begin{tabular}{l|c|c|c} \hline Dataset & vid2vid~\cite{wang2018video} & Ours w/o W.C. & Ours \\\hline Cityscapes & 0.0036 & 0.0032 & {\bf 0.0029} \\ MannequinChallenge & 0.0397 & 0.0319 & {\bf 0.0312} \\ ScanNet & 0.0351 & 0.0278 & {\bf 0.0192} \\\hline \end{tabular} \end{table} \section{Network architecture} \label{appendix:network} \input{figuretex/encoder} As described in the main paper, our framework contains four components: a label embedding network (Fig.~\ref{fig:embedding}), an image encoder (Fig.~\ref{fig:encoder}), a flow embedding network (Fig.~\ref{fig:embedding}), and an image generator (Fig.~\ref{fig:generator}). \medskip \noindent{\bf Label embedding network (Fig.~\ref{fig:embedding}).} We adopt an encoder-decoder style network to embed the input labels into different feature representations, which are then fed to the Multi-SPADE modules in the image generator. \medskip \noindent{\bf Image / segmentation encoder (Fig.~\ref{fig:encoder}).} These networks generate the input to the main image generator. The segmentation encoder is used when generating the first frame in the sequence, while the image encoder is used when generating the subsequent frames. The segmentation encoder encodes the input semantics of the first frame, while the image encoder encodes the previously generated frame. \input{figuretex/generator} \input{figuretex/multispade} \medskip \noindent{\bf Flow embedding network (Fig.~\ref{fig:embedding}).} This network embeds the optical flow-warped previous frame, and adopts the same architecture as the label embedding network except for the number of input channels. The embedded features are again fed to the Multi-SPADE layers in the main image generator. \medskip \noindent{\bf Image generator (Fig.~\ref{fig:generator}).} The generator consists of a series of Multi-SPADE residual blocks (M-SPADE ResBlks) and upsampling layers. The structure of each M-SPADE ResBlk is shown in Fig.~\ref{fig:mspade_resblk}; it replaces the SPADE layers in the original SPADE ResBlks with Multi-SPADE layers. \medskip \noindent{\bf Discriminators.} We use the same image and video discriminators as vid2vid~\cite{wang2018video}. \section{Objective functions} \label{appendix:objective_functions} Our objective functions contain five losses: an image GAN loss, a video GAN loss, a perceptual loss, a flow-warping loss, and a world-consistency loss. Except for the world-consistency loss, the others are inherited from vid2vid~\cite{wang2018video}.
Note that, for the GAN losses, we replace the least-squares losses used in vid2vid with the hinge losses used in SPADE~\cite{park2019semantic}. We describe these terms in detail in the following. \medskip \noindent{\bf GAN losses.} Let ${\mathbf{s}}_1^T \equiv \{ {\mathbf{s}}_{1},{\mathbf{s}}_{2},...,{\mathbf{s}}_{T}\}$ be a sequence of input semantic frames. Let ${\mathbf{x}}_1^T \equiv \{{\mathbf{x}}_{1},{\mathbf{x}}_{2},...,{\mathbf{x}}_{T}\}$ be the sequence of corresponding real video frames, and $\tilde{\mathbf{x}}_1^T \equiv \{\tilde{\mathbf{x}}_{1},\tilde{\mathbf{x}}_{2},...,\tilde{\mathbf{x}}_{T}\}$ be the frames synthesized by our generator. Define $({\mathbf{x}}_t,{\mathbf{s}}_t)$ as one pair of frames at a particular time instance where ${\mathbf{x}}_t\in{\mathbf{x}}_1^T$ and ${\mathbf{s}}_t\in{\mathbf{s}}_1^T$. The image GAN loss ($\mathcal{L}_{I}^t$) and the video GAN loss ($\mathcal{L}_{V}^t$) for time $t$ are then defined as \begin{align} \mathcal{L}_{I}^t =& E_{({\mathbf{x}}_t,{\mathbf{s}}_t)}[\min(0,-1+D_I({\mathbf{x}}_t,{\mathbf{s}}_t))] + \\ &E_{(\tilde{\mathbf{x}}_t,{\mathbf{s}}_t)}[\min(0,-1-D_I(\tilde{\mathbf{x}}_t,{\mathbf{s}}_t))] \\ \mathcal{L}_{V}^t =& E_{{\mathbf{x}}_{t-K+1}^{t}}[\min(0,-1+D_V({\mathbf{x}}_{t-K+1}^{t}))] + \\ &E_{\tilde{\mathbf{x}}_{t-K+1}^{t}}[\min(0,-1-D_V(\tilde{\mathbf{x}}_{t-K+1}^{t}))] \end{align} where $D_I$ and $D_V$ are the image and video discriminators, respectively. The video discriminator takes $K$ consecutive frames and concatenates them together for discrimination. For both GAN losses, we also accompany them with the feature matching loss ($\mathcal{L}_{FM}^t$) as in pix2pixHD~\cite{wang2018high}, \begin{equation} \mathcal{L}_{FM,I/V}^t = \sum_{i}\frac{1}{P_i}\left[ ||D_{\{I/V\}}^{(i)}({\mathbf{x}}_t)-D_{\{I/V\}}^{(i)}(\tilde{\mathbf{x}}_t)||_1 \right], \end{equation} where $D_{\{I/V\}}^{(i)}$ denotes the $i$-th layer with $P_i$ elements of the discriminator network $D_I$ or $D_V$. \input{figuretex/embedding} \medskip \noindent{\bf Perceptual loss.} We use the VGG-16 network~\cite{simonyan2014very} as a feature extractor and minimize L1 losses between the features extracted from the real and the generated images. In particular, \begin{equation} \mathcal{L}_{P}^t = \sum_{i}\frac{1}{P_i}\left[ ||\psi^{(i)}({\mathbf{x}}_t)-\psi^{(i)}(\tilde{\mathbf{x}}_t)||_1 \right], \end{equation} where $\psi^{(i)}$ denotes the $i$-th layer of the VGG network. \medskip \noindent{\bf Flow-warping loss.} We first warp the previous frame to the current frame using optical flow. We then encourage the warped frame to be similar to the current frame by using an L1 loss, \begin{equation} \mathcal{L}_{F}^t = ||\tilde{\mathbf{x}}_t-{\mathbf{w}}_t(\tilde{\mathbf{x}}_{t-1})||_1, \end{equation} where ${\mathbf{w}}_t$ is the warping function derived from optical flow.
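For concreteness, the hinge GAN terms above are typically implemented as minimization objectives; the following minimal PyTorch-style sketch (our illustrative code, not the actual training implementation) shows one way to write them, together with the L1 warping penalty:
\begin{verbatim}
import torch

def d_hinge_loss(d_real, d_fake):
    # Discriminator hinge objective written as a quantity to minimize:
    # E[relu(1 - D(real))] + E[relu(1 + D(fake))], i.e., the negative of
    # E[min(0, -1 + D(real))] + E[min(0, -1 - D(fake))] above.
    return torch.relu(1.0 - d_real).mean() + torch.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    # Generator hinge objective: minimize -E[D(fake)].
    return -d_fake.mean()

def l1_warp_loss(x_t, x_prev_warped):
    # Flow-warping style L1 penalty between the current frame and the
    # optical flow-warped previous frame.
    return torch.abs(x_t - x_prev_warped).mean()
\end{verbatim}
Here \texttt{d\_real} and \texttt{d\_fake} denote discriminator outputs on real and generated inputs (for $D_V$, on concatenated $K$-frame clips).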
\medskip \noindent{\bf World-consistency loss.} Finally, we add the world consistency by enforcing the generated image to be similar to our guidance image. It is achieved by \begin{equation} \mathcal{L}_{WC}^t = ||\tilde{\mathbf{x}}_t-\tilde{\mathbf{g}}_t||_1, \end{equation} where $\tilde{\mathbf{g}}_t$ is our estimated guidance image.\\ \noindent The overall objective function is then \begin{align} \mathcal{L} = \displaystyle\sum_t & \min_{G} \left( \max_{D_I,D_V} (\lambda_{I}\mathcal{L}_{I}^t + \lambda_{V}\mathcal{L}_{V}^t) \right) + \\ & \min_{G} \left(\lambda_{FM}\mathcal{L}_{FM}^t + \lambda_{P}\mathcal{L}_{P}^t + \lambda_{F}\mathcal{L}_{F}^t + \lambda_{W}\mathcal{L}_{WC}^t \right) \end{align} where the $\lambda$'s are the weights for the individual terms, which are set to 1, 1, 10, 10, 10, 10 in all of our experiments. \medskip \noindent{\bf Optimization details.} We use the ADAM optimizer~\cite{kingma2014adam} with $(\beta_1, \beta_2) = (0, 0.999)$ for all experiments and network components. We use a learning rate of $10^{-4}$ for the encoder and generator networks (which are described below) and $4\times 10^{-4}$ for the discriminators. \section{World-consistent video-to-video synthesis} \label{sec:method} \input{src/method/background} \input{src/method/guidance_map_generation.tex} \input{src/method/framework.tex} \section{Related work} {\bf Semantic Image Synthesis}~\cite{chen2017photographic,liu2019learning,park2019semantic,qi2018semi,wang2018high} refers to the problem of converting a single input semantic representation to an output photorealistic image. Built on top of the generative adversarial network (GAN)~\cite{goodfellow2014generative} framework, existing methods~\cite{liu2019learning,park2019semantic,wang2018high} propose various novel network architectures to advance the state-of-the-art. Our work is built on the SPADE architecture proposed by Park \emph{et al.}~\cite{park2019semantic} but focuses on the temporal stability issue in video synthesis. \smallskip \noindent{\bf Conditional GANs} synthesize data conditioned on user input. This stands in contrast to unconditional GANs, which synthesize data solely based on random variable inputs~\cite{goodfellow2014generative,gulrajani2017improved,karras2017progressive,karras2018style}. Based on the input type, there exist label-conditional GANs~\cite{brock2018large,miyato2018cgans,odena2016conditional,zhang2019self}, text-conditional GANs \cite{reed2016generative,xu2018attngan,zhang2017stackgan}, image-conditional GANs \cite{benaim2018one,bousmalis2016unsupervised,choi2017stargan,huang2018multimodal,isola2017image,lee2018diverse,liu2016unsupervised,liu2019few,shrivastava2016learning,taigman2016unsupervised,zhu2017unpaired}, scene-graph conditional GANs \cite{johnson2018image}, and layout-conditional GANs \cite{zhao2019image}. Our method is a video-conditional GAN, where we generate a video conditioned on an input video. We address the long-term temporal stability issue that the state-of-the-art overlooks~\cite{chan2019everybody,wang2019few,wang2018video}.
\smallskip \noindent{\bf Video synthesis} exists in many forms, including 1) unconditional video synthesis~\cite{saito2017temporal,tulyakov2017mocogan,vondrick2016generating}, which converts random variable inputs to video clips, 2) future video prediction~\cite{denton2017unsupervised,finn2016unsupervised,hao2018controllable,hu2018video,kalchbrenner2016video,lee2018stochastic,li2018flow,liang2017dual,lotter2016deep,mathieu2015deep,pan2019video,srivastava2015unsupervised,villegas2017decomposing,walker2016uncertain,walker2017pose,xue2016visual}, which generates future video frames based on the observed ones, and 3) video-to-video synthesis~\cite{chan2019everybody,chen2019mocycle,gafni2019vid2game,wang2019few,wang2018video,zhou2019dance}, which converts an input semantic video to a real video. Our work belongs to the last category. Our method treats the input video as one from a self-consistent world, so that when the agent returns to a spot it has previously visited, the newly generated frames are consistent with the previously generated ones. While a few works have focused on improving the temporal consistency of an input video~\cite{bonneel2015blind,lai2018learning,yao2017occlusion}, our method does not treat consistency as a post-processing step, but rather as a core part of the video generation process. \smallskip \noindent{\bf Novel-view synthesis} aims to synthesize images at unseen viewpoints given some viewpoints of the scene. Most of the existing works require images at multiple reference viewpoints as input~\cite{choi2019extreme,flynn2019deepview,flynn2016deepstereo,hedman2018deep,kalantari2016learning,mildenhall2019local,zhou2018stereo}. While some works can synthesize novel views based on a single image~\cite{srinivasan2017learning,wiles2019synsin,xie2016deep3d}, the synthesized views are usually close to the reference views. Our work differs from these works in the sense that our input is different -- instead of using a set of RGB images, our network takes in a sequence of semantic maps. If we directly treated all past synthesized frames as reference views, the memory requirement would grow linearly with the video length. If we only use the latest frames, the system cannot handle long-term consistency, as shown in Fig.~\ref{fig:teaser}. Instead, in this work we propose a novel framework to keep track of the synthesis history. The closest related works are those on neural rendering~\cite{aliev2019neural,meshry2019neural,sitzmann2019deepvoxels,thies2019deferred}, which can re-render a scene from arbitrary viewpoints after training on a set of given viewpoints. However, note that these methods still require RGB images from different viewpoints as input, making them unsuitable for applications such as game engines. On the other hand, our method can directly generate RGB images from semantic inputs, so rendering a virtual world becomes much easier. Moreover, they need to train a separate model (or part of the model) for each scene, while we only need one model per dataset, or domain. \section{Introduction} Video-to-video synthesis~\cite{wang2018video} concerns generating a sequence of photorealistic images given a sequence of semantic representations extracted from a source 3D world. For example, the representations can be the semantic segmentation masks rendered by a graphics engine while driving a car in a virtual city~\cite{wang2018video}.
The representations can also be the pose maps extracted from a source video of a person dancing, and the application is to create a video of a different person performing the same dance~\cite{chan2019everybody}. From the creation of a new class of digital artworks to applications in computer graphics, the video-to-video synthesis task has many exciting practical use-cases. A key requirement of any such video-to-video synthesis model is the ability to generate images that are not only individually photorealistic, but also temporally smooth. Moreover, the generated images have to follow the geometric and semantic structure of the source 3D world. While we have observed steady improvement in photorealism and short-term temporal stability in the generation results, we argue that one crucial aspect of the problem has been largely overlooked: the \emph{long-term temporal consistency} problem. As a specific example, when visiting the same location in the virtual city, an existing vid2vid method~\cite{wang2019few,wang2018video} could generate an image that is very different from the one it generated when the car first visited the location, despite using the same semantic inputs. Existing vid2vid methods rely on optical flow warping and generate an image conditioned on the past few generated images. While such operations can ensure short-term temporal stability, they cannot guarantee long-term temporal consistency. Existing vid2vid models have no knowledge of what they have rendered in the past. Even for a short round-trip in a virtual room, these methods fail to preserve the appearances of the wall and the person in the generated video, as illustrated in Fig.~\ref{fig:teaser}. In this paper, we attempt to address the long-term temporal consistency problem by bolstering vid2vid models with memories of the past frames. By combining ideas from scene flow~\cite{vedula1999three} and conditional image synthesis models~\cite{park2019semantic}, we propose a novel architecture that explicitly enforces consistency in the entire generated sequence. We perform extensive experiments on several benchmark datasets, with comparisons to the state-of-the-art methods. Both quantitative and visual results verify that our approach achieves significantly better image quality and long-term temporal stability. On the application side, we also show that our approach can be used to generate videos consistent across multiple viewpoints, enabling simultaneous multi-agent world creation and exploration. \section{Conclusions and discussion} We presented a video-to-video synthesis framework that can achieve world consistency. By using a novel guidance image extracted from the generated 3D world, we are able to synthesize the current frame conditioned on all the past frames. The conditioning was implemented using a novel Multi-SPADE module, which not only led to better visual quality, but also made it possible to transplant a single-image generator into a video generator. Comparisons on several challenging datasets showed that our method improves upon prior state-of-the-art methods. While advancing the state-of-the-art, our framework still has several limitations. For example, the guidance image generation is based on SfM. When SfM fails to register the 3D content, our method will also fail to ensure consistency. Also, we do not consider a possible change in the time of day or lighting in the current framework.
In the future, our framework can benefit from improved guidance images enabled by better 3D registration algorithms. Furthermore, the albedo and shading of the 3D world may be disentangled to better model such temporal effects. We leave these to future work. \medskip\noindent{\bf Acknowledgements.} We would like to thank Jan Kautz, Guilin Liu, Andrew Tao, and Bryan Catanzaro for their feedback, and Sabu Nadarajan, Nithya Natesan, and Sivakumar Arayandi Thottakara for helping us with the compute, without which this work would not have been possible. \section{Experiments} \label{sec:experiments} \noindent\textbf{Implementation details.} We train our network in two stages. In the first stage, we only train our network to generate single images. This means that only the first SPADE layer of each Multi-SPADE block (visualized in Fig.~\ref{fig:method_overview}) is trained. Following this, we have a network that can generate high-quality single-frame outputs. In the second stage, we train on video clips, progressively doubling the generated video length every epoch, starting from 8 frames and stopping at 32 frames. In this stage, all 3 SPADE layers of each Multi-SPADE block are trained. We found that this two-stage pipeline makes the training faster and more stable. We observed that the ordering of the flow and guidance SPADEs did not make a significant difference in the output quality. We train the network for 20 epochs in each stage, and this takes about 10 days on an NVIDIA DGX-1 (8 V-100 GPUs) for an output resolution of $1024\times 512$. We train our generator with the multi-scale image discriminator using perceptual and GAN feature matching losses as in SPADE~\cite{park2019semantic}. Following vid2vid~\cite{wang2018video}, we add a temporal video discriminator at two temporal scales and a warping loss that encourages the output frame to be similar to the optical flow-warped previous frame. We also add a loss term to encourage the output frame to correspond to the guidance image; this is necessary to ensure view consistency. Additional details about the architecture and loss terms can be found in Appendices~\ref{appendix:objective_functions} and~\ref{appendix:network}. Code and trained models will be released upon publication. \medskip \noindent\textbf{Datasets.} We train and evaluate our method on three datasets, Cityscapes~\cite{Cordts2016cityscapes}, MannequinChallenge~\cite{li2019learning}, and ScanNet~\cite{dai2017scannet}, as they have mostly static scenes where existing SfM methods perform well. \begin{itemize}[label=\textbullet, topsep=2pt, itemsep=2pt] \item \textbf{Cityscapes}~\cite{Cordts2016cityscapes}. This dataset consists of driving videos of $2048\times 1024$ resolution captured in several German cities, using a pair of stereo cameras. We split this dataset into a training set of 3500 videos with 30 frames each, and a test set of 3 long sequences with 600--1200 frames each, similar to vid2vid~\cite{wang2018video}. As not all the images are labeled with segmentation masks, we annotate the images using the network from Zhu \emph{et al.}~\cite{zhu2019improving}, which is based on a DeepLabv3-Plus~\cite{chen2018encoder}-like architecture with a WideResNet38~\cite{wu2019wider} backbone. \item \textbf{MannequinChallenge}~\cite{li2019learning}. This dataset contains video clips captured using hand-held cameras, of people pretending to be frozen in a large variety of poses, imitating mannequins.
We resize all frames to $1024\times 512$ and randomly split this dataset into 3040 train sequences and 292 test sequences, with sequence lengths ranging from 5 to 140 frames. We generate human body segmentation and part-specific UV coordinate maps using DensePose~\cite{Guler2018DensePose,wu2019detectron2}, and body poses using OpenPose~\cite{cao2018openpose}. \item \textbf{ScanNet}~\cite{dai2017scannet}. This dataset contains multiple video clips captured in a total of 706 indoor rooms. We set aside 50 rooms for testing, and the rest for training. From each video sequence, we extracted 3 sub-sequences of length at most 100, resulting in 4000 train sequences and 289 test sequences, with images of size $512\times 512$. We used the provided segmentation maps based on the NYUDv2~\cite{silberman2012indoor} 40-label set. \end{itemize} For all datasets, we also use MegaDepth~\cite{li2018megadepth} to generate depth maps and add the visualized inverted depth images as input. As the MannequinChallenge and ScanNet datasets contain a large variety of objects and classes which are not fully annotated, we use edge maps produced by HED~\cite{xie2015holistically} in order to better represent the input content. In order to generate guidance images, we performed SfM on all the video sequences using OpenSfM~\cite{opensfm}, which provided 3D point clouds and estimated camera poses and parameters as output. \medskip \noindent\textbf{Baselines.} We compare our method against the following strong baselines. \begin{itemize}[label=\textbullet, topsep=2pt, itemsep=2pt] \item vid2vid~\cite{wang2018video}. This is the prior state-of-the-art method for video-to-video synthesis. For the comparison on Cityscapes, we use the publicly available pretrained model. For the other two datasets, we train vid2vid from scratch using the public code, while providing the same input labels (semantic segmentation, depth, edge maps, etc.) as to our method. \item Inpainting~\cite{liu2018image}. We train a state-of-the-art partial convolution-based inpainting method to fill in the pixels missing from our guidance images. We train the models from scratch for each dataset, using masks obtained from the corresponding guidance images. \item Ours w/o W.C.\,(World Consistency). As an ablation, we also compare against our model that does not use guidance images. In this case, only the first two SPADE layers in each Multi-SPADE block are trained (the label and flow-warped previous output SPADEs). Other details are the same as for our full model. \end{itemize} \input{tables/fid.tex} \medskip \noindent\textbf{Evaluation metrics.} We use both objective and subjective metrics for evaluating our model against the baselines. \begin{itemize}[label=\textbullet, topsep=2pt, itemsep=2pt] \item \textit{Segmentation accuracy and Fr\'echet Inception Distance (FID)}. We adopt metrics widely used in prior work on image synthesis~\cite{chen2017photographic,park2019semantic,wang2018high} to measure the quality of generated video frames. We evaluate the output frames based on how well they can be segmented by a trained segmentation network. We report both the mean Intersection-Over-Union (mIOU) and Pixel Accuracy (P.A.) using PSPNet~\cite{zhao2017pyramid} (Cityscapes) and DeepLabv2~\cite{chen2017deeplab} (MannequinChallenge \& ScanNet).
We also use the Fr\'echet Inception Distance (FID)~\cite{heusel2017gans} to measure the distance between the distributions of the generated and real images, using the standard Inception-v3 network. \item \textit{Human preference score}. Using Amazon Mechanical Turk (AMT), we perform a subjective visual test to gauge the relative quality of videos. We evaluate videos on two criteria: 1) \textit{photorealism} and 2) \textit{temporal stability}. The first aims to find which generated video looks more like a real video, while the second aims to find which one is more temporally smooth and has less flickering. For each question, an AMT participant is shown two videos synthesized by two different methods, and asked to choose the better one according to the current criterion. We generate several hundred questions for each dataset, each of which is answered by 3 different workers. We evaluate an algorithm by the fraction of times its outputs are preferred. \item \textit{Forward-Backward consistency}. A major contribution of our work is generating outputs that are consistent, over a long duration, with the world that was previously generated. All our datasets have videos that explore new parts of the world over time, rarely revisiting previously explored parts. However, a simple way to revisit a location is to play the video forward and then in reverse, i.e.\ arrange frames from time $t=0, 1, \cdots, N-1, N, N-1, \cdots, 1, 0$. We can then compare the first produced and last produced frames and measure their difference. We measure the difference per pixel in both RGB and LAB space; a lower value indicates better long-term consistency. \end{itemize} \input{figures/vid2vid_flow_guidance.tex} \input{figures/fb_consistency.tex} \medskip \noindent\textbf{Main results.} In Table~\ref{table:comparison}, we compare our proposed approach against vid2vid~\cite{wang2018video}, as well as SPADE~\cite{park2019semantic}, which is the single-image generator that our method builds upon. We also compare against a version of our method that does not use guidance images and is thus not world-consistent (Ours w/o W.C.). Inpainting~\cite{liu2018image} could not provide meaningful output images without large artifacts, as shown in Fig.~\ref{fig:cityscapes_results}. We can observe that our method consistently beats vid2vid on all three metrics on all three datasets, indicating superior image quality. Interestingly, our method also improves upon SPADE in FID, probably as a result of reducing temporal variance across an output video sequence. We also see improvements over Ours w/o W.C.\ on almost all metrics. In Table~\ref{table:human_pref}, we show human evaluation results on the metrics of image realism and temporal stability. We observe that the majority of workers rank our method better on both metrics. In Fig.~\ref{fig:cityscapes_results}, we visualize some sequences generated by the various methods (please zoom in and play the videos in Adobe Acrobat). We can observe that in the first row, vid2vid~\cite{wang2018video} produces temporal artifacts in the cars parked to the side and in the patterns on the road. SPADE~\cite{park2019semantic}, which generates one frame at a time, produces very unstable videos, as shown in the second row. The third row shows outputs from the partial convolution-based inpainting~\cite{liu2018image} method. It clearly has a hard time producing visually and semantically meaningful outputs.
The fourth row shows Ours w/o W.C., an intermediate version of our method that uses labels and the optical flow-warped previous output as input. While this clearly improves upon vid2vid in image quality and upon SPADE in temporal stability, it still causes flickering in trees, cars, and signboards. The last row shows our method. Note how the textures of the cars, roads, and signboards, which are areas where we have guidance images, are stable over time. We also provide high-resolution, uncompressed videos for all three datasets on our website. In Table~\ref{table:fb_consistency}, we compare the forward-backward consistency of the different methods; our method beats vid2vid~\cite{wang2018video} by a large margin, especially on the MannequinChallenge and ScanNet datasets (by more than a factor of 3). Figure~\ref{fig:fb_consistency} visualizes some frames at the start and end of generation. As can be seen, the outputs of vid2vid change dramatically, while ours are consistent. We show additional qualitative examples in Fig.~\ref{fig:more_results}. We also provide additional quantitative results on short-term consistency in Appendix~\ref{appendix:results}. \input{figures/more_fb_results.tex} \subsection{Background} Recent image-to-image translation methods perform extremely well when turning semantic images into realistic outputs. To produce videos instead of images, simply translating frame-by-frame usually results in severe flickering artifacts~\cite{wang2018video}. To resolve this, vid2vid~\cite{wang2018video} proposes to take both the semantic inputs and $L$ previously generated frames as input to the network (e.g.\ $L=3$). The network then generates three outputs -- a hallucinated frame, a flow map, and a (soft) mask. The flow map is used to warp the previous frame, which is then linearly combined with the hallucinated frame using the soft mask. Ideally, the network should reuse the content in the warped frame as much as possible, and only use the disoccluded parts from the hallucinated frame. While the above framework reduces flickering between neighboring frames, it still struggles to ensure long-term consistency. This is because it only keeps track of the past $L$ frames, and cannot memorize everything in the past. Consider the scenario in Fig.~\ref{fig:teaser}, where an object moves out of and back into the field-of-view. In this case, we would want to make sure its appearance is similar during the revisit, but that cannot be handled by existing frameworks like vid2vid~\cite{wang2018video}. In light of this, we propose a new framework to handle \emph{world-consistency}. It is a superset of \emph{temporal consistency}, which only ensures consistency between frames in a video. A world-consistent video should not only be temporally stable, but also be consistent across the entire 3D world the user is viewing. This not only makes the output look more realistic, but also enables applications such as the multi-player scenario where different players can view the same scene from different viewpoints. We achieve this by using a novel \emph{guidance image} conditional scheme, which is detailed below. \subsection{Framework for generating videos using guidance images} \input{figures/method_overview.tex} Once the guidance images are generated, we are able to utilize them to synthesize the next frame.
Our generator network is based on the SPADE architecture proposed by Park \emph{et al.}~\cite{park2019semantic}, which accepts a random vector encoding the image style as input and uses a series of SPADE blocks and upsampling layers to generate an output image. Each SPADE block takes a semantic map as input and learns to modulate the incoming feature maps through an affine transform $y = x \cdot \gamma_\textrm{seg} + \beta_\textrm{seg}$, where $x$ is the incoming feature map, and $\gamma_\textrm{seg}$ and $\beta_\textrm{seg}$ are predicted from the input segmentation map. An overview of our method is shown in Fig.~\ref{fig:method_overview}. At a high level, our method consists of four sub-networks: 1) an input label embedding network (\textcolor{overview_orange}{\bf orange}), 2) an image encoder (\textcolor{overview_red}{\bf red}), 3) a flow embedding network (\textcolor{overview_green}{\bf green}), and 4) an image generator (\textcolor{overview_gray}{\bf gray}). In our method, we make two modifications to the original SPADE network. First, we feed the concatenated labels (semantic segmentation, edge maps, etc.) to a label embedding network (\textcolor{overview_orange}{\bf orange}), and extract features at the corresponding output layers as input to each SPADE block in the generator. Second, to keep the image style consistent over time, we encode the previously synthesized frame using the image encoder (\textcolor{overview_red}{\bf red}), and provide this embedding to our generator (\textcolor{overview_gray}{\bf gray}) in place of the random vector\footnote{When generating the first frame, where no previous frame exists, we use an encoder which accepts the semantic map as input.}. \medskip\noindent{\it Utilizing guidance images.} Although using this modified SPADE architecture produces output images with better visual quality than vid2vid~\cite{wang2018video}, the outputs are not temporally stable, as shown in Sec.~\ref{sec:experiments}. To ensure world-consistency of the output, we want to incorporate information from the introduced guidance images. Simply linearly combining a guidance image with the hallucinated frame from the SPADE generator is problematic, since the hallucinated frame may contain something very different from the guidance image. Another way is to directly concatenate the guidance images with the input labels. However, the semantic inputs and the guidance images have different physical meanings. Besides, unlike the semantic inputs, which are labeled densely (per pixel), the guidance images are labeled sparsely. Directly concatenating them would require the network to compensate for the difference. Hence, to avoid these potential issues, we choose to treat these two types of inputs differently. To handle the sparsity of the guidance images, we first apply partial convolutions~\cite{liu2018image} on these images to extract features. Partial convolutions only convolve valid regions in the input with the convolution kernels, so the output features remain uncontaminated by the holes in the image. These features are then used to generate affine transformation parameters $\gamma_\textrm{guidance}$ and $\beta_\textrm{guidance}$, which are \emph{inserted} into the existing SPADE blocks while keeping the rest of the blocks untouched.
This results in a \emph{Multi-SPADE} module, which allows us to use multiple conditioning inputs in sequence, so we can condition not only on the current input labels, but also on our guidance images, \begin{equation} \begin{aligned} y &= (x \cdot \gamma_\textrm{label} + \beta_\textrm{label}) \cdot \gamma_\textrm{guidance} + \beta_\textrm{guidance}. \end{aligned} \label{eq:multi_spade} \end{equation} Using this module yields several benefits. First, conditioning on these maps generates more temporally smooth and higher quality frames than simple linear blending techniques. Separating the two types of input (semantic labels and guidance images) also allows us to adopt different types of convolutions (i.e.\ normal vs.\ partial). Second, since most of the network architecture remains unchanged, we can initialize the weights of the generator with those of one trained for single-image generation. It is easy to collect large training datasets for single-image generation by crawling the internet, while video datasets can be harder to collect and annotate. After the single-image generator is trained, we can train a video generator by training just the newly added layers (i.e.\ the layers generating $\gamma_\textrm{guidance}$ and $\beta_\textrm{guidance}$) and only finetuning the other parts of the network. \medskip \noindent{\it Handling dynamic objects.} The guidance image allows us to generate world-consistent outputs over time. However, since the guidance is generated based on SfM for real-world scenes, it has the inherent limitation that SfM cannot handle dynamic objects. To resolve this issue, we resort to optical flow-warped frames, which serve as additional maps on top of the guidance images we have from SfM. The complete Multi-SPADE module then becomes \begin{equation} \begin{aligned} y &= \big((x \cdot \gamma_\textrm{label} + \beta_\textrm{label}) \cdot \gamma_\textrm{flow} + \beta_\textrm{flow}\big) \cdot \gamma_\textrm{guidance} + \beta_\textrm{guidance}, \end{aligned} \label{eq:multi_spade_full} \end{equation} where $\gamma_\textrm{flow}$ and $\beta_\textrm{flow}$ are generated using a flow embedding network (\textcolor{overview_green}{\bf green}) applied on the optical flow-warped previous frame. This provides the additional constraint that the generated frame be consistent even in the dynamic regions. Note that this is needed only due to the limitations of SfM, and can potentially be removed when ground-truth / high-quality 3D registrations are available, for example in the case of game engines or RGB-D data capture. \input{figures/input_output.tex} Figure~\ref{fig:input_output} shows a sample set of inputs and outputs generated by our method on the Cityscapes dataset. \subsection{Guidance images and their generation} \input{figures/guidance_generation.tex} The lack of knowledge about the world structure being generated limits the ability of vid2vid to generate view-consistent outputs. As shown in Fig.~\ref{fig:cityscapes_results} and Sec.~\ref{sec:experiments}, the color and structure of the objects generated by vid2vid~\cite{wang2018video} tend to drift over time. We believe that in order to produce realistic outputs that are consistent over time and viewpoint change, an ideal method must be aware of the 3D structure of the world.
To achieve this, we introduce the concept of ``\emph{guidance images}'', which are physically-grounded estimates of what the next output frame should look like, based on how the world has been generated so far. As their name suggests, the role of these ``\emph{guidance images}'' is to guide the generative model to produce colors and textures that respect previous outputs. Prior works including vid2vid~\cite{wang2018video} rely on optical flow to warp the previous frame to produce an estimate of the next frame. Our guidance image differs from this warped frame in two aspects. First, instead of using optical flow, the guidance image should be generated using the motion field, or scene flow, which describes the true motion of each 3D point in the world\footnote{As an example, consider a textureless sphere rotating under constant illumination. In this case, the optical flow would be zero, but the motion field would be nonzero.}. Second, the guidance image should aggregate information from \emph{all} past viewpoints (and thus frames), instead of only the immediately preceding frames as in vid2vid. This makes sure that the generated frame is consistent with the entire history. While estimating motion fields without an RGB-D sensor~\cite{golyanik2017multiframe} or a rendering engine~\cite{dosovitskiy2017carla} is not easy, we can obtain motion fields for the static parts of the world by reconstructing part of the 3D world using structure from motion (SfM)~\cite{longuet1981computer,tomasi1992shape}. This enables us to generate guidance images, as shown in Fig.~\ref{fig:guidance_generation}, for training our video-to-video synthesis method using datasets captured by regular cameras. Once we have the 3D point cloud of the world, the video synthesis process can be thought of as a camera moving through the world and texturing every new 3D point it sees. Consider a camera moving through space and time as shown in the left part of Fig.~\ref{fig:guidance_generation}. Suppose we generate an output image at $t=0$. This image can be back-projected to the 3D point cloud and colors can be assigned to the points, so as to create a persistent representation of the world. At a later time step, $t=N$, we can obtain the projection of the 3D point cloud to the camera and create a guidance image leveraging the estimated motion fields. Our method can then generate an output frame based on the guidance image. Although we generate guidance images using the projection of 3D point clouds, they can also be generated by any other method that gives a reasonable estimate. This makes the concept powerful, as we can use different sources to generate guidance images at training and test time. For example, at test time we can generate guidance images using a graphics engine, which can provide ground-truth 3D correspondences. This enables just-in-time colorization of a virtual 3D world with real-world colors and textures, as we move through the world. Note that our guidance image also differs from the projected image used in prior works like Meshry~\emph{et al.}~\cite{meshry2019neural} in several aspects. First, in their case, the 3D point cloud is fixed once constructed, while in our case it is constantly being ``colorized'' as we synthesize more and more frames. As a result, our guidance image is blank at the beginning, and becomes denser over time, depending on the viewpoint. Second, the way we use these guidance images to generate outputs is also different.
The guidance images can have misalignments and holes due to limitations of SfM, for example in the background and on the person's head in Fig.~\ref{fig:guidance_generation}. As a result, our method also differs from DeepFovea~\cite{kaplanyan2019deepfovea}, which inpaints sparsely but accurately rendered video frames. In the following subsection, we describe a method that is robust to noise in the guidance images, so that it can produce outputs consistent over time and across viewpoints.
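To make the Multi-SPADE modulation of Eqs.~\eqref{eq:multi_spade} and~\eqref{eq:multi_spade_full} concrete, the following is a minimal PyTorch-style sketch of one modulation chain (illustrative only: the normalization layers and the partial convolutions used for the sparse guidance maps are omitted, and all channel sizes and module names are placeholders, not the actual implementation):
\begin{verbatim}
import torch
import torch.nn as nn

class MultiSPADE(nn.Module):
    # Sketch of y = ((x*g_label+b_label)*g_flow+b_flow)*g_guide+b_guide.
    def __init__(self, feat_ch, label_ch, flow_ch, guide_ch, hidden=128):
        super().__init__()
        self.heads = nn.ModuleList([
            self._head(c, feat_ch, hidden)
            for c in (label_ch, flow_ch, guide_ch)])

    @staticmethod
    def _head(in_ch, feat_ch, hidden):
        # Small conv head predicting (gamma, beta) from a conditioning map.
        return nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2 * feat_ch, 3, padding=1))

    def forward(self, x, label_map, warped_prev, guidance):
        # Conditioning maps are assumed already resized to x's spatial size.
        for head, cond in zip(self.heads, (label_map, warped_prev, guidance)):
            gamma, beta = head(cond).chunk(2, dim=1)
            x = x * gamma + beta  # sequential affine modulation
        return x
\end{verbatim}
In the real model, each added conditioning stage only introduces the layers producing its $(\gamma, \beta)$ pair, which is what allows a pretrained single-image generator to be extended to a video generator as described above.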
{ "timestamp": "2020-07-17T02:22:39", "yymm": "2007", "arxiv_id": "2007.08509", "language": "en", "url": "https://arxiv.org/abs/2007.08509" }
\section{Conclusions} We proposed a novel loss function which improves semantic coherence for cross-modal retrieval. Our approach leverages a latent space learned on text alone in order to enforce proximity between samples of the same modality in the learned cross-modal space. We constrain text and image embeddings to be close in the joint space if they or their partners were close in the unimodal text space. We experimentally demonstrate that our approach significantly improves upon several state-of-the-art loss functions on multiple challenging datasets. We presented qualitative results demonstrating the increased semantic homogeneity of retrieval results. Applications of our method include improving retrieval of abstract, non-literal text, visual question answering over news and multimodal media, news curation, and learning general-purpose robust visual-semantic embeddings. \noindent \textbf{Acknowledgements:} This material is based upon work supported by the National Science Foundation under Grant No. 1718262. It was also supported by Adobe and Amazon gifts, and an NVIDIA hardware grant. We thank the reviewers and AC for their valuable suggestions. \section{Introduction} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figs/coco_vs_politics_v4.pdf} \caption{ Image-text pairs from COCO \cite{lin2014microsoft} and Politics \cite{thomas2019predicting}. Traditional image captions (top) are descriptive of the image, while we focus on the more challenging problem of aligning images and text with a non-literal, complementary relationship (bottom). } \label{fig:coco_vs_politics} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figs/complementarity.pdf} \caption{ The image on the left symbolizes justice and may be paired with text about a variety of subjects (e.g.~abortion, same-sex marriage). Similarly, text regarding immigration (right) may be paired with visually dissimilar images. Our approach enforces that \textit{semantically} similar content (images on the right) is close in the learned space. To discover such content, we use semantic neighbors of the text and their paired images. } \label{fig:complementarity} \end{figure*} Vision-language tasks such as image captioning \cite{you2016image, anderson2018bottom, lu2018neural} and cross-modal generation and retrieval \cite{reed2016generative, zhang2017stackgan, zhang2018photographic} have seen increased interest in recent years. At the core of methods in this space are techniques to bring together images and their corresponding pieces of text. However, most existing cross-modal retrieval methods only work on data where the two modalities (images and text) are well aligned and provide fairly redundant information. As shown in Fig.~\ref{fig:coco_vs_politics}, captioning datasets such as COCO contain samples where the overlap between images and text is significant (both image and text mention or show the same objects). In this setting, cross-modal retrieval means finding the manifestation of a single concept in two modalities (e.g. learning embeddings such that the word ``banana'' and the pixels for ``banana'' project close by in a learned space). In contrast, real-world news articles contain image and text pairs that cover the same topic, but show complementary information (protest signs vs information about the specific event; guns vs discussion of rights; rainbow flag vs LGBT rights).
While a human viewer can still guess which images go with which text, the alignment between image and text is abstract and symbolic. Further, images in news articles are ambiguous \emph{in isolation}. We show in Fig.~\ref{fig:complementarity} that an image might illustrate multiple related texts (shown in green), and each text in turn could be illustrated with multiple visually distant images (e.g. the four images on the right-hand side could appear with the border wall text). Thus, we must first resolve any ambiguities in the image, and figure out ``what it means''. We propose a metric learning approach where we use the semantic relationships between text segments to guide the embedding learned for the corresponding images. In other words, to understand what an image ``means'', we look at what articles it appeared with. Unlike prior approaches, we capture this information not only across modalities, but within the image modality itself. If texts $y_i$ and $y_j$ are semantically similar, we learn an embedding where we explicitly encourage their paired images $x_i$ and $x_j$ to be similar, using a new unimodal loss. Note that in general $x_i$ and $x_j$ need not be similar in the original visual space (Fig.~\ref{fig:complementarity}). In addition, we encourage texts $y_i$ and $y_j$, which were close in the unimodal space, to remain close. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figs/loss_effect_on_learned_space_v4.pdf} \caption{We show how our method enforces cross-modal semantic coherence. Circles represent text and squares images. In (a), we show the untrained cross-modal space. Note $y_i$ and $y_j$ are neighbors in Doc2Vec space and thus semantically similar. (b) shows the space after triplet loss training. $y_i$ and $x_i$, and $y_j$ and $x_j$, are now close as desired, but $y_i$ and $y_j$ have moved apart, and $x_i$ and $x_j$ remain distant. (c) shows our loss's effect. Now, all semantic neighbors (both images and text) are pulled closer.} \label{fig:loss_effect_on_learned_space} \end{figure*} Our novel loss formulation explicitly encourages \textit{within-modality semantic coherence}. Fig.~\ref{fig:loss_effect_on_learned_space} shows the effect. On the left, we show the proximity of samples before cross-modal learning; specifically, while two texts are close in the document space, their paired images may be far from the texts. In the middle, we show the effect of using a standard triplet loss, which pulls image-text pairs close, but does not necessarily preserve the similarity of related articles; they are now further apart than they used to be in the original space. In contrast, on the right, we show how our method brings paired images and text closer, while also preserving a semantically coherent region, i.e. the texts remain close. In our approach, we use neighborhoods in the original text document space to compute semantic proximity. We also experiment with an alternative approach where we compute neighborhoods using the visual space, then guide the corresponding texts to be close. This approach is a variant of ours, and is novel in the sense that it uses proximity in one unimodal space to guide the other space/modality. While unimodal losses based on visual similarity are helpful over a standard cross-modal loss (e.g. triplet loss), our main approach is superior. Next, we compare to a method \cite{wang2016learning} which utilizes the \emph{set} of text annotations available for an image in COCO to guide the structure of the learned space.
We show that when these ground-truth annotations are available, using them to compute neighborhoods in the textual space is the most reliable. However, on many datasets, such sets of annotations (more than one for the same image) are not available. We show that our approach offers a comparable alternative. Finally, we test the contribution of our additional losses using PVSE \cite{song2019polysemous}, a state-of-the-art visual semantic embedding model, as a backbone. We show that our proposed loss further improves the performance of this model. To summarize, our contributions are as follows. \begin{itemize}[nolistsep,noitemsep] \item We preserve relationships in the original \textit{semantic} space. Because images do not clearly capture semantics, we use the semantic space (from text) to guide the image representation, through a unimodal (within-modality) loss. \item We perform detailed experimental analysis of our proposed loss function, including ablations, on four recent large-scale image-text datasets. One \cite{biten2019good} contains multimodal articles from the New York Times, and another contains articles from far-left/right media \cite{thomas2019predicting}. We also conduct experiments on \cite{sharma2018conceptual,lin2014microsoft}. Our approach significantly improves the state-of-the-art in most cases. The more abstract the dataset/alignment, the more beneficial our approach. \item We tackle a new cross-modal retrieval problem where the visual space is much less concrete. This scenario is quite practical, and has applications ranging from automatic caption generation for news images to detection of fake multimodal articles (i.e. detecting whether an image supports the text). \end{itemize} \section{Method} \label{sec:method} Consider two image-text pairs, $\{x_i, y_i\}$ and $\{x_j, y_j\}$. To ground the ``meaning'' of the images, we use proximity between the texts $y_i$ and $y_j$ in a generic, pre-trained textual space. If $y_i$ and $y_j$ are semantically close, we expect that they will also be relatively close in the learned space, and further, that $x_i$ and $x_j$ will be close also. We observed that, while intuitive, this expectation does not actually hold in the learned cross-modal space. The problem becomes more severe when image and paired text do not exhibit literal alignment, as shown in Fig.~\ref{fig:coco_vs_politics}, because images paired via text neighbors could be visually different. We describe how several common existing loss functions tackle cross-modal retrieval, and discuss their limitations. We then propose two constraints which pull within-modality semantic neighbors close to each other. Fig.~\ref{fig:combined} illustrates how our approach differs from standard metric learning losses. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figs/combined_figs_v3.pdf} \caption{(a): $\mathcal{L}_{text}$ and $\mathcal{L}_{img}$ pull semantic neighbors of the same modality closer. The images are visually distinct, but semantically similar. (b): Pull connections are shown in green, and push connections in red. $\mathcal{L}_{trip}$ and $\mathcal{L}_{ang}$ operate cross-modally, but impose no within-modality constraints. (c): $\mathcal{L}_{ours}$ (which combines all three losses above) exploits the paired nature of the data to enforce the expected inter/intra-modal relationships. Solid lines indicate connections that our loss enforces but triplet/angular do not.
} \label{fig:combined} \end{figure*} \subsection{Problem formulation and existing approaches} We assume a dataset $\mathbfcal{D} = \left\{\mathbf{I}, \mathbf{T} \right\}$ of $n$ image-text pairs, where $\textbf{I} = \left\{ x_1,x_2,\ldots,x_n\right\}$ and $\textbf{T} = \left\{ y_1,y_2,\ldots,y_n\right\}$ denote the set of paired images and text, respectively. By pairs, we mean $y_i$ is text related to or co-occurring with image $x_i$. Let $f_I$ denote a convolutional neural network which projects images into the joint space and $f_T$ a recurrent network which projects text. We use the notational shorthand $f_T\left(y\right)=y$ and $f_I\left(x\right)=x$. The goal of training $f_I$ and $f_T$ is to learn a cross-modal manifold $\mathcal{M}$ where semantically similar samples are close. At inference time, we wish to retrieve a ground-truth paired text given an input image, or vice versa. One common technique is the triplet loss \cite{schroff2015facenet}, which posits that paired samples should be closer to one another than they are to non-paired samples. Let $\mathcal{T} = \left(x_i^a, y_i^p, y_j^n \right)$ denote a triplet of samples consisting of an anchor ($a$), positive or paired sample ($p$), and negative or non-paired sample ($n$) chosen randomly such that $i \neq j$. Let $m$ denote a margin. The triplet loss $\mathcal{L}_{trip}$ is then: \begin{equation} \label{eq:triplet_loss} \mathcal{L}_{trip}\left(\mathcal{T}\right) = \left[ \norm{x_i^a - y_i^p}_2^2 - \norm{x_i^a - y_j^n}_2^2 + m \right]_+ \end{equation} This loss is perhaps the most common one used in cross-modal retrieval tasks, but it has some deficiencies. For example, the gradient of the triplet loss wrt.~each point involves only the other two points and ignores the relative geometry of all three, e.g.~$\frac{\partial \mathcal{L}_{trip}}{\partial x_i^a} = 2\left(y_j^n-y_i^p\right)$. This allows for degenerate cases, so the angular loss $\mathcal{L}_{ang}$ \cite{wang2017deep} accounts for the angular relationship of all three points: \begin{equation} \label{eq:angular_loss} \mathcal{L}_{ang} \left(\mathcal{T}\right) = \left[ \norm{x_i^a - y_i^p}_2^2 - 4 \tan^2 \alpha \norm{y_j^n - \mathcal{C}_i}_2^2 \right]_+ \end{equation} where $\mathcal{C}_i = \left( x_i^a + y_i^p \right) / 2$ is the center of a circle through the anchor and positive. One challenging aspect of these losses is choosing a good negative term in the triplet. If the negative is too far from the anchor, the loss becomes 0 and no learning occurs. In contrast, if negatives are chosen too close, the model may have difficulty converging to a reasonable solution as it continuously tries to move samples to avoid overlap with the negatives. How to best sample triplets to avoid these issues is an active area of research \cite{duan2018deep}.
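To make these baseline objectives concrete, here is a minimal sketch of Eq.~\ref{eq:triplet_loss} and Eq.~\ref{eq:angular_loss} (PyTorch; the function names and the margin/angle defaults are our illustrative choices, not taken from any released code; \texttt{anchor}, \texttt{pos}, and \texttt{neg} are $(B, D)$ batches of $L_2$-normalized embeddings):
\begin{verbatim}
import math
import torch
import torch.nn.functional as F

def triplet_loss(anchor, pos, neg, margin=0.2):
    # Eq. (1): ||a - p||^2 - ||a - n||^2 + m, hinged at zero.
    d_pos = (anchor - pos).pow(2).sum(dim=1)
    d_neg = (anchor - neg).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

def angular_loss(anchor, pos, neg, alpha_deg=45.0):
    # Eq. (2): ||a - p||^2 - 4 tan^2(alpha) ||n - C||^2,
    # where C = (a + p) / 2 is the anchor-positive center.
    tan2 = math.tan(math.radians(alpha_deg)) ** 2
    center = (anchor + pos) / 2
    d_ap = (anchor - pos).pow(2).sum(dim=1)
    d_nc = (neg - center).pow(2).sum(dim=1)
    return F.relu(d_ap - 4 * tan2 * d_nc).mean()
\end{verbatim}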
One recent technique, the N-pairs loss \cite{sohn2016improved}, proposes that instead of a single negative sample, all negatives within the minibatch should be used. The N-pairs loss $\mathcal{L}_{ang}^{NP}$ pushes the anchor and positive embedding away from \textit{multiple} negatives simultaneously: \begin{equation} \label{eq:npairs_triplet_loss} \mathcal{L}_{ang}^{NP} \left(\mathcal{T}\right) = \sum_{y_j \in \mathrm{minibatch}, ~ j \neq i} \mathcal{L}_{ang}\left(x_i^a, y_i^p, y_j^n \right) \end{equation} The symmetric constraint \cite{zhou2017point} can also be added to explicitly account for bidirectional retrieval, i.e.~text-to-image, by swapping the roles of images and text to form symmetric triplets $\mathcal{T}_{sym} = \left(y_i^a, x_i^p, x_j^n \right)$: \begin{equation} \label{eq:npairs_symmetric_loss} \mathcal{L}_{ang}^{NP+SYM} \left(\mathcal{T}, \mathcal{T}_{sym}\right) = \mathcal{L}_{ang}^{NP}\left(\mathcal{T}\right) + \mathcal{L}_{ang}^{NP}\left(\mathcal{T}_{sym}\right) \end{equation} \paragraph{Limitations.} While these loss functions have been used for cross-modal retrieval, they do not take advantage of several unique aspects of the multi-modal setting. Only the dashed pull/push connections in Fig.~\ref{fig:combined} (c) are part of triplet/angular loss. The solid connections are intuitive, but only enforced in our novel formulation. We argue the lack of explicit \textit{within-modality} constraints allows discontinuities in the space for semantically related content from the same modality. \subsection{Our proposed loss} \label{sec:method:semantic_locality} The text domain provides a semantic fingerprint for the image-text pair, since vastly dissimilar visual content may still be semantically related (e.g.~an image of the White House, an image of a protest), while similar visual content (e.g.~a crowd in a church, a crowd at a mall) could be semantically unrelated. We thus use the text domain to constrain within-modality semantic locality for both images and text. To measure ground-truth semantic similarity, we pretrain a Doc2Vec \cite{le2014distributed} model $\Omega$ on the train set of text. Specifically, let $d$ be the document embedding of article $y_i$, $T$ denote the number of words in $y_i$, $w_{t}$ represent the embedding learned for word $t$, $p(\cdot)$ be the probability of the given word, and $k$ denote the look-around window. $\Omega$ learns word embeddings and document embeddings which maximize the average log probability: $\frac{1}{T} \sum_{t=1}^{T} \log p\left(w_{t} | d, w_{t-k}, \ldots, w_{t+k} \right) $. After training $\Omega$, we use iterative backpropagation to compute the document embedding which maximizes the log probability for every article in the dataset: $\Omega(\mathbf{T})=\left\{\Omega\left(y_1\right),\ldots,\Omega\left(y_n\right)\right\}$. Because Doc2Vec has been shown to capture latent topics within text documents well \cite{niu2015topic2vec}, we seek to enforce that locality originally captured in $\Omega(\mathbf{T})$'s space also be preserved in the cross-modal space $\mathcal{M}$. Let \begin{equation} \Psi\left(\Omega(y_i)\right) = \left< x_{i^\prime}, y_{i^\prime}\right> \label{eq:neighbors} \end{equation} denote a nearest neighbor function over $\Omega(\mathbf{T})$, where $\left< \cdot, \cdot\right>$ is an image-text pair in the train set randomly sampled from the $k=200$ nearest neighbors to $y_i$, and $i \neq i^\prime$. $\Psi\left(\Omega(y_i)\right)$ thus returns an image-text pair semantically related to $y_i$.
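The following is a hedged sketch of this neighbor-sampling pipeline, assuming gensim for Doc2Vec and hnswlib for the approximate nearest-neighbor index of \cite{malkov2016efficient}; the helper names are ours, and the hyperparameters mirror the text ($d=200$, window $k=20$, minimum count 20, 20 epochs, distributed memory with hierarchical softmax):
\begin{verbatim}
import numpy as np
import hnswlib
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def build_neighbor_fn(tokenized_articles, k=200):
    docs = [TaggedDocument(words, [i])
            for i, words in enumerate(tokenized_articles)]
    omega = Doc2Vec(docs, vector_size=200, window=20, min_count=20,
                    dm=1, hs=1, epochs=20)
    # Iterative inference of a document vector for every article.
    vecs = np.stack([omega.infer_vector(w) for w in tokenized_articles])
    index = hnswlib.Index(space='cosine', dim=200)
    index.init_index(max_elements=len(vecs))
    index.add_items(vecs, np.arange(len(vecs)))
    index.set_ef(2 * k)  # query-time breadth must exceed k

    def psi(i, rng=np.random):
        # Psi(Omega(y_i)): a random draw from the k nearest articles.
        labels, _ = index.knn_query(vecs[i], k=k + 1)  # +1 skips y_i itself
        return rng.choice([j for j in labels[0] if j != i])
    return psi
\end{verbatim}
The returned index $i^\prime$ identifies the pair $\left< x_{i^\prime}, y_{i^\prime}\right>$ of Eq.~\ref{eq:neighbors}.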
We formulate two loss functions to enforce within-modality semantic locality in $\mathcal{M}$. The first, $\mathcal{L}_{text}$, enforces locality of the text's projections: \begin{equation} \label{eq:text_loss} \begin{gathered} \mathcal{T}_{text}^\prime = \left(y_i^a, y_{i^\prime}^p, y_j^n \right) \\ \mathcal{L}_{text}\left(\mathcal{T}_{text}^\prime\right) = \mathcal{L}_{ang} \left(\mathcal{T}_{text}^\prime\right) \\ \mathcal{L}_{ang} \left(\mathcal{T}_{text}^\prime\right) = \left[ \norm{y_i^a - y_{i^\prime}^p}_2^2 - 4 \tan^2 \alpha \norm{y_j^n - \mathcal{C}_i}_2^2 \right]_+ \end{gathered} \end{equation} where $y_j^n$ is the negative sample chosen randomly such that $i \neq j$ and $\mathcal{C}_i = \left( y_i^a + y_{i^\prime}^p \right) / 2$. $\mathcal{L}_{text}$ is the most straightforward transfer of semantics from $\Omega(\mathbf{T})$'s space to the joint space: nearest neighbors in $\Omega$ should remain close in $\mathcal{M}$. As Fig.~\ref{fig:combined} (c) shows, $\mathcal{L}_{text}$ also indirectly causes semantically related images to move closer in $\mathcal{M}$: there is now a weak connection between $x_i$ and $x_{i^\prime}$ through the now-connected $y_i$ and $y_{i^\prime}$. To directly ensure smoothness and semantic coherence between $x_i$ and $x_{i^\prime}$, we propose a second constraint, $\mathcal{L}_{img}$: \begin{equation} \label{eq:image_loss} \begin{gathered} \mathcal{T}_{img}^\prime = \left(x_i^a, x_{i^\prime}^p, x_j^n \right) \\ \mathcal{L}_{img}\left(\mathcal{T}_{img}^\prime\right) = \mathcal{L}_{ang} \left(\mathcal{T}_{img}^\prime\right) \\ \mathcal{L}_{ang} \left(\mathcal{T}_{img}^\prime\right) = \left[ \norm{x_i^a - x_{i^\prime}^p}_2^2 - 4 \tan^2 \alpha \norm{x_j^n - \mathcal{C}_i}_2^2 \right]_+ \end{gathered} \end{equation} where $x_j^n$ is the randomly chosen negative sample such that $i \neq j$ and $\mathcal{C}_i = \left( x_i^a + x_{i^\prime}^p \right) / 2$. Note that $x_i$ and $x_{i^\prime}$ are often not going to be neighbors in the original visual space. We use N-pairs over all terms to maximize discriminativity, and symmetric loss to ensure robust bidirectional retrieval: \begin{gather} \mathcal{L}_{ang}^{OURS}\left(\mathcal{T}, \mathcal{T}_{sym}, \mathcal{T}_{text}^\prime, \mathcal{T}_{img}^\prime \right) = \\ \mathcal{L}_{ang}^{NP+SYM} \left(\mathcal{T}, \mathcal{T}_{sym}\right) + \alpha \mathcal{L}_{text}^{NP}\left(\mathcal{T}_{text}^\prime\right) + \beta \mathcal{L}_{img}^{NP}\left( \mathcal{T}_{img}^\prime \right) \notag \end{gather} where $\alpha, \beta$ are hyperparameters controlling the importance of each constraint. \vspace{-0.3cm} \paragraph{Second variant.} We also experiment with a variant of our method where the nearest neighbor function in Eq.~\ref{eq:neighbors} (computed in Doc2Vec space) is replaced with one that computes nearest neighbors in the space of visual (e.g. ResNet) features. Now $x_i, x_{i^\prime}$ are neighbors in the original visual space before cross-modal training, and $y_i, y_{i^\prime}$ are their paired articles (which may not be neighbors in the original Doc2Vec space). We denote this method as \textsc{Ours} (Img NNs) in Table \ref{tab:main_result}, and show that while it helps over a simple triplet- or angular-based baseline, it is inferior to our main method variant described above. \vspace{-0.3cm} \paragraph{Discussion.} At a low level, our method combines three angular losses. However, note that our losses in Eq.~\ref{eq:text_loss} and Eq.~\ref{eq:image_loss} do not exist in prior literature.
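To illustrate the full objective, here is a compact sketch in the same batched style (our rendering, not the authors' released code; the N-pairs sums are realized by treating all other in-batch positives as negatives, and the loss weights are written \texttt{a} and \texttt{b} to avoid a clash with the angle $\alpha$ of the angular loss):
\begin{verbatim}
import math
import torch
import torch.nn.functional as F

def angular_np(anchor, pos, alpha_deg=45.0):
    # N-pairs angular loss over a batch: for each anchor i,
    # every pos[j] with j != i serves as a negative.
    B = anchor.size(0)
    tan2 = math.tan(math.radians(alpha_deg)) ** 2
    center = (anchor + pos) / 2                        # (B, D)
    d_ap = (anchor - pos).pow(2).sum(1, keepdim=True)  # (B, 1)
    d_nc = torch.cdist(center, pos).pow(2)             # (B, B)
    loss = F.relu(d_ap - 4 * tan2 * d_nc)
    off_diag = ~torch.eye(B, dtype=torch.bool, device=loss.device)
    return loss[off_diag].mean()

def ours(img, txt, img_nn, txt_nn, a=0.2, b=0.3):
    # img/txt: paired embeddings; img_nn/txt_nn: sampled neighbors i'.
    cross = angular_np(img, txt) + angular_np(txt, img)  # L_ang^{NP+SYM}
    return cross + a * angular_np(txt, txt_nn) + b * angular_np(img, img_nn)
\end{verbatim}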
While \cite{wang2016learning} leverages ground-truth neighbors (sets of neighbors provided together for the same image sample in a dataset), we are not aware of prior work that estimates such neighbors. Importantly, we are not aware of prior work that uses the text space to construct a loss over the image space, as Eq.~\ref{eq:image_loss} does. We show that the choice of space in which semantic coherency is computed is important; doing this in the original textual space is superior to using the original image space. We show the contribution of both of these losses in our experiments. \subsection{Implementation details} All methods use a two-stream architecture, with the image stream using ResNet-50 \cite{he2016deep} initialized with ImageNet features, and the text stream using Gated Recurrent Units \cite{cho2014properties} with hidden state size 512. We use image size $224\times224$ and random horizontal flipping, and initialize all non-pretrained learnable weights via Xavier init.~\cite{glorot2010understanding}. Text models are initialized with word embeddings of size 200 learned on the target dataset. We apply a linear transformation to each model's output features ($\mathbb{R}^{2048\times256}$ for image, $\mathbb{R}^{512\times256}$ for text) to get the final embedding, and perform $L_2$ normalization. We use Adam \cite{kingma2014adam} with minibatch size 64, learning rate 1e-4, and weight decay 1e-5. We decay the learning rate by a factor of 0.1 after every 5 epochs of no decrease in val.~loss. We use a train-val-test split of 80-10-10. For Doc2Vec, we use \cite{rehurek_lrec} with $d \in \mathbb{R}^{200}$ and train using distributed memory \cite{le2014distributed} for 20 epochs with window $k=20$, ignoring words that appear fewer than 20 times. We use hierarchical softmax \cite{morin2005hierarchical} to compute $p(\cdot)$. To efficiently compute approximate nearest neighbors for $\Psi$, we use \cite{malkov2016efficient}; our method adds negligible computational overhead as neighbors are computed prior to training. We choose $\alpha=0.3, \beta=0.1$ for $\mathcal{L}_{trip}^{OURS}$, and $\alpha=0.2, \beta=0.3$ for $\mathcal{L}_{ang}^{OURS}$, on a held-out val.~set. \section{Related Work} \textit{Cross-modal learning.} A fundamental problem in cross-modal inference is the creation of a shared semantic manifold on which multiple modalities may be represented. The goal is to learn a space where content about related semantics (e.g. images of ``border wall'' and text about ``border wall'') projects close by, regardless of which modality it comes from. Many image-text embedding methods rely on a two-stream architecture, with one stream handling visual content (e.g. captured by a CNN) and the other stream handling textual content (e.g. through an RNN). Both streams are trained with paired data, e.g.~an image and its captions, and a variety of loss functions are used to encourage both streams to produce similar embeddings for paired data. Recently, purely attention-based approaches have been proposed \cite{lu2019vilbert,chen2019uniter}. One common loss used to train retrieval models is triplet loss, which originates in the (single-modality) metric learning literature, e.g. for learning face representations \cite{schroff2015facenet}. In cross-modal retrieval, the triplet loss has been used broadly \cite{murrugarra2019cross,Zhu_2019_CVPR,Mithun_2019_CVPR,Pang_2019_CVPR,ye2018advise,faghri2017vse++}.
Alternative choices include angular loss \cite{wang2017deep}, N-pairs loss \cite{sohn2016improved}, hierarchical loss \cite{ge2018deep}, and clustering loss \cite{oh2017deep}. While single-modality losses like triplet, angular and N-pairs have been used across and within modalities, they are not sufficient for cross-modal retrieval. These losses do not ensure that the general semantics of the text are preserved in the new cross-modal space; thus, the cross-modal matching task might distort them too much. This phenomenon resembles forgetting \cite{lwf,goodfellow2013empirical} but in the cross-modal domain. Our method preserves within-modal structure, and a similar effect can be achieved by leveraging category labels as in \cite{Zhen_2019_CVPR,wang2017adversarial,marin2019recipe1m+,carvalho2018cross}; however, such labels are not available in the datasets we consider, nor is it clear how to define them, since matches lie beyond the presence of objects. Importantly, classic retrieval losses do not tackle the complementary relationship between images and text, which makes the space of topically related images more visually diffuse. In other words, two images might depict substantially \textit{different visual} content but nonetheless be \textit{semantically related}. Note that we do not propose a new \emph{model} for image-text alignment, but instead propose cross-modal embedding \emph{constraints} which can be used to train any such model. For example, we compare to Song et al. \cite{song2019polysemous}'s recent polysemous visual semantic embedding (PVSE) model, which uses global and local features to compute self-attention residuals. Our loss improves \cite{song2019polysemous}'s performance. Our work is also related to cross-modal distillation \cite{frome2013devise, socher2013zero, Gupta_2016_CVPR, girdhar2019distinit}, which transfers supervision across modalities, but none of these approaches exploit the semantic signal that text neighborhoods carry to constrain the visual representations. Finally, \cite{Zhang_2018_BMVC,kruk2019integrating,alikhani2020clue} detect different types of image-text relationships (e.g. parallel, complementary) but do not retrieve across modalities. \textit{Metric learning} approaches learn distance metrics which meaningfully measure the similarity of objects. These can be broadly categorized into: 1) sampling-based methods \cite{han2015matchnet, simo2015discriminative, wang2015unsupervised, shrivastava2016training, yuan2017hard, lu2017discriminative, oh2017deep, harwood2017smart, wu2017sampling, lu2019sampling, wang2019multi}, which intelligently choose easy/hard samples or weight samples; or 2) loss functions \cite{hadsell2006dimensionality, weinberger2009distance, schroff2015facenet, sohn2016improved, wang2017deep, duan2018deep, ge2018deep} which impose intuitions regarding neighborhood structure, data separation, etc. Our method relates to the second category. Triplet loss \cite{schroff2015facenet, hoffer2015deep} takes into account the \textit{relative} similarity of positives and negatives, such that positive pairs are closer to each other than positives are to negatives. \cite{zhang2016embedding} generalize triplet loss by fusing it with classification loss. \cite{oh2016deep} integrate all positive and negative pairs within a minibatch, such that all pair combinations are updated jointly. Similarly, \cite{sohn2016improved}'s N-pair loss pushes multiple negatives away in each triplet.
\cite{wang2016learning} propose a structural loss, which pulls multiple texts paired with the same image together, but requires more than one ground-truth caption per image (which most datasets lack). In contrast, our approach pulls semantically similar images \textit{and} text together and only requires a single caption per image. More recently, \cite{wang2017deep} propose an angular loss which leverages the triangle inequality to constrain the angle between points within triplets. We show how cross-modal complementary information (semantics paired with diverse visuals) can be leveraged to improve the learned embedding space, regardless of the specific loss used. \section{Experiments} We compare our method to five baselines on four recent large-scale datasets. Our results consistently demonstrate the superiority of our approach at bidirectional retrieval. We also show our method better preserves within-modality semantic locality by keeping neighboring images and text closer in the joint space. \subsection{Datasets} Two datasets feature challenging indirect relations between image and text, compared to standard captioning data. These also exhibit longer text paired with images: 59 and 18 words on average, compared to 11 in COCO. \textbf{Politics} \cite{thomas2019predicting} consists of images paired with news articles. In some cases, multiple images were paired with boilerplate text (website headliner, privacy policy) due to failed data scraping. We removed duplicates using MinHash \cite{broder1997resemblance}. We were left with 246,131 unique image-text pairs. Because the articles are lengthy, we only use the first two sentences of each. \cite{thomas2019predicting} do not perform retrieval. \textbf{GoodNews} \cite{biten2019good} consists of $\sim$466k images paired with their captions. All data was harvested from the New York Times. Captions often feature abstract or indirect text in order to relate the image to the article it appeared with. The method in \cite{biten2019good} takes image and text as input, hence cannot serve as a baseline. We also test on two large-scale standard image captioning datasets, where the relationship between image and text is typically more direct: \textbf{COCO} \cite{lin2014microsoft} is a large dataset containing numerous annotations, such as objects, segmentations, and captions. The dataset contains $\sim$120k images with captions. Unlike our other datasets, COCO contains more than one caption per image, with each image paired with four to seven captions. \textbf{Conceptual Captions} \cite{sharma2018conceptual} is composed of $\sim$3.3M image-text pairs. The text comes from automatically cleaned alt-text descriptions paired with images harvested from the internet and has been found to represent a much wider variety of style and content compared to COCO. \subsection{Baselines} We compare to N-Pairs Symmetric Angular Loss (\textsc{Ang+NP+Sym}, a combination of \cite{wang2017deep,sohn2016improved,zhou2017point}, trained with $\mathcal{L}_{ang}^{NP+SYM}$). For a subset of results, we also replace the angular loss with the weaker but more common triplet loss (\textsc{Trip+NP+Sym}). We also show the result of enforcing within-modality coherency using neighborhoods computed in the image space rather than the text space; this is the second variant of our method, denoted \textsc{Ours} (Img NNs).
We also compare our approach against the deep structure preserving loss \cite{wang2016learning} (\textsc{Struc}), which enforces that captions paired with the same image are closer to each other than to non-paired captions. Finally, we show how our approach can improve the performance of a state-of-the-art cross-modal retrieval model. \textsc{PVSE} \cite{song2019polysemous} uses both images and text to compute a self-attention residual before producing embeddings. \subsection{Quantitative results} We formulate a cross-modal retrieval task such that given a query image or text, the embedding of the paired text/image must be closer to the query embedding than non-paired samples also of the target modality. We sample random (non-paired) samples from the test set, along with the ground-truth paired sample. We then compute Recall@1 within each task: that is, whether the ground-truth paired sample is closer to its cross-modal embedding than the non-paired embeddings. For our most challenging datasets (GoodNews and Politics), we use a 5-way task. For COCO and Conceptual Captions, we found this task to be too simple and that all methods easily achieved very high performance due to the literal image-text relationship. Because we wish to distinguish meaningful performance differences between methods, we used a 20-way task for Conceptual Captions and a 100-way task for COCO. Task complexities were chosen based on the baseline's performance, before our method's results were computed. \input{table_main} We report the results in Table \ref{tab:main_result}. The first and second group of results all use angular loss, while the third set uses triplet loss. We observe that our method significantly outperforms all baselines tested for both directions of cross-modal retrieval for three of the four datasets. Our method achieves a 2\% relative boost in accuracy (on average across both retrieval tasks) vs.~the strongest baseline on GoodNews, and a 4\% boost on Politics. We also observe recall is much worse for all tasks on the Politics dataset compared to GoodNews, likely because the images and article text are much less well-aligned. The performance gap may seem small, but note that, given the figurative use of images in these datasets, there often may not be a clear ground-truth answer. In Fig.~\ref{fig:complementarity}, the Themis image may be constrained to be close to the protest or border-wall texts. At test time, the ground-truth text paired with Themis may be about the Supreme Court, but one of the ``incorrect'' answers could be about immigration or freedom, which would still make sense. Our method keeps more neighbors closer to the query point, as shown next, and thus may retrieve plausible but technically ``incorrect'' neighbors for a query. Importantly, we see that while the variant of our method using neighborhoods computed in image space (\textsc{Ours} Img NNs) does outperform \textsc{Ang+NP+Sym}, it is weaker than our main method variant (\textsc{Ours}). We also observe that when adding our loss on top of the PVSE model \cite{song2019polysemous}, accuracy of retrieval improves. In other words, our loss is complementary to advancements accomplished by network model-based techniques such as attention. Our method outperforms the baselines on ConcCap also, but not on COCO, since COCO is the easiest, least abstract of all datasets, with the most literal image-text alignment.
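For concreteness, here is a minimal sketch of the $N$-way Recall@1 protocol described above (NumPy; names are ours), shown for the image-to-text direction:
\begin{verbatim}
import numpy as np

def recall_at_1(img_emb, txt_emb, n_way=5, seed=0):
    # img_emb, txt_emb: (n, d) L2-normalized test embeddings, row i paired.
    rng = np.random.default_rng(seed)
    n = img_emb.shape[0]
    idx = np.arange(n)
    hits = 0
    for i in range(n):
        distractors = rng.choice(idx[idx != i], size=n_way - 1, replace=False)
        cands = np.concatenate(([i], distractors))
        sims = txt_emb[cands] @ img_emb[i]  # cosine, since embeddings are unit norm
        hits += int(np.argmax(sims) == 0)   # is the ground truth ranked first?
    return hits / n
\end{verbatim}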
Our approach constrains neighboring texts and their images to be close, and for datasets where matching is on a more abstract, challenging level, the benefit of neighbor information outweighs the disadvantage of this inexact similarity. However, for more straightforward tasks (e.g. in COCO), it may introduce noise. For example, for the caption ``a man on a bicycle with a banana'', the model may pull that image and text closer to images with a banana in a bowl of fruit. Overall, our approach of enforcing within-modality semantic neighborhoods substantially improves cross-view retrieval, particularly when the relationship between image and text is complementary, rather than redundant. To better ground our method's performance in datasets typically used for retrieval, we also conducted an experiment on Flickr30K \cite{plummer2015flickr30k}. Since that dataset does not exhibit image-text complementarity, we do not expect our method to improve performance, but it should not significantly reduce it. We compared the original PVSE against PVSE with our novel loss. We observed that our method slightly outperformed the original PVSE, on both text-to-image and image-to-text retrieval (0.5419 and 0.5559 for ours, vs.~0.5405 and 0.5539 for PVSE). \input{table_deep_structure} \input{table_consistency.tex} \input{table_ablation.tex} \input{figs_qualitative.tex} In Table \ref{tab:deep_structure}, we show a result comparing our method to Deep Structure Preserving Loss \cite{wang2016learning}. Since this method requires a \textit{set} of annotations (captions) for an image, i.e. it requires \emph{ground-truth neighbor relations} for texts, we can only apply it on COCO. In the first column, we show our method. In the second, we show \cite{wang2016learning} using ground-truth neighbors. Next, we show using \cite{wang2016learning} with \emph{estimated neighbors}, as in our method. We see that as expected, using estimated rather than ground-truth text neighbors reduces performance (third vs.~second columns). When estimated neighbors are used in \cite{wang2016learning}'s structural constraint, our method performs better (third vs.~first columns). Interestingly, we observe that defining \cite{wang2016learning}'s structural constraint in image rather than text space is better (fourth vs.~third columns). In both cases, neighborhoods are computed in \emph{text} space (Eq.~\ref{eq:neighbors}). This may be because the structural constraint, which requires the group of neighbors to be closer together than to others, is too strict for estimated text neighbors. That is, the constraint may require the text embeddings to lose useful discriminativity to be closer to neighboring text. Neighboring images are likely to be much more visually similar in COCO than in GoodNews or Politics, as they will contain the same objects. We next test how well each method preserves the \textit{semantic neighborhood} given by $\Omega$, i.e.~Doc2Vec space. We begin by computing the embeddings in $\mathcal{M}$ (cross-modal space) for all test samples. For each such sample $s_i$ (either image or text), we compute $\Psi_{\mathcal{M}}\left(s_i\right)$, that is, we retrieve the neighbors (of the same modality as $s_i$) in $\mathcal{M}$. We next retrieve the neighbors of $s_i$ in $\Omega$, $\Psi_\Omega\left(s_i\right)$, described in Sec.~\ref{sec:method:semantic_locality}. For each sample, we compute $\lvert \Psi_{\mathcal{M}}\left(s_i\right) \cap \Psi_{\Omega}\left(s_i\right) \rvert \, / \, \lvert \Psi_{\Omega}\left(s_i\right) \rvert$, i.e.~the percentage of the nearest neighbors of the sample in $\Omega$ which are also its neighbors in $\mathcal{M}$. That is, we measure how well each method preserves within-modality semantic locality through the number of neighbors in Doc2Vec space which remain neighbors in the learned space. We consider the 200 nearest neighbors.
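A sketch of this measure (brute-force NumPy for clarity; a real run over a large test set would use the approximate index instead of the $O(n^2)$ distance matrix):
\begin{verbatim}
import numpy as np

def neighborhood_overlap(emb_M, emb_Omega, k=200):
    # Fraction of each sample's k nearest neighbors in Omega (Doc2Vec space)
    # that remain among its k nearest within-modality neighbors in M.
    def knn(emb):
        d = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d, np.inf)          # exclude the sample itself
        return np.argsort(d, axis=1)[:, :k]
    nn_M, nn_O = knn(emb_M), knn(emb_Omega)
    return float(np.mean([len(set(nn_M[i]) & set(nn_O[i])) / k
                          for i in range(len(emb_M))]))
\end{verbatim}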
We report the result for competitive baselines in Table~\ref{tab:semantic_consistency}. We find that our constraints are, indeed, preserving within-modality semantic locality, as sample proximity in $\Omega$ is more preserved in $\mathcal{M}$ with our approach than without it, i.e. we better reconstruct the semantic neighborhood of $\Omega$ in $\mathcal{M}$. We believe this allows our model to ultimately perform better at cross-modal retrieval. We finally test the contribution of each component of our proposed loss. We test two variants of our method, where we remove either $\mathcal{L}_{text}$ or $\mathcal{L}_{img}$. We present our results in Table \ref{tab:ablation}. In every case, \textit{combining} our losses for our full method performs the best, suggesting that each loss plays a complementary role in enforcing semantic locality for its target modality. \subsection{Qualitative results} In this section, we present qualitative results illustrating how our constraints both improve semantic proximity and produce superior retrieval results. \textbf{Semantic proximity: } In Fig.~\ref{fig:relative_distance}, we perform an experiment to discover what samples our constraints affect the most. We randomly sampled 10k image-image and text-text neighbor pairs (in $\Omega$) and computed their distance in $\mathcal{M}$ using features from our method vs.~the baseline \textsc{Ang+NP+Sym}. Small ratios indicate the samples were closer in $\mathcal{M}$ using our method, relative to the baseline, while larger ratios indicate the opposite. We show the samples with the \emph{two smallest} ratios for images and text. We observe that visually dissimilar, but semantically similar images have the smallest ratio (e.g.~the E.U.~flag and Merkel, a judge's gavel and the Supreme Court), which suggests our $\mathcal{L}_{img}$ constraint has moved the samples closer than the baseline places them. For text, we observe articles about the same issue are brought closer even though specifics differ. \textbf{Cross-modal retrieval results: } In Fig.~\ref{fig:cross_modal_retrieval} we show the top-3 results for a set of queries, retrieved by our method vs.~\textsc{Ang+NP+Sym}. We observe increased semantic homogeneity in the returned samples compared with the baseline. For example, images retrieved for ``drugs'' using our method consistently feature marijuana, while the baseline returns images of pills, smoke, and incorrect retrievals; ``wall'' results in consistent images of the border wall; ``immigration'' features arrests. For text retrieval, we find that our method consistently performs better at recognizing public figures and returning related articles.
{ "timestamp": "2020-07-20T02:03:19", "yymm": "2007", "arxiv_id": "2007.08617", "language": "en", "url": "https://arxiv.org/abs/2007.08617" }
\section{Introduction} \label{sec:Intro} What do boiling water and ferromagnets have in common? At first sight: not much. However, near the critical point in their phase diagrams, water and ferromagnets exhibit similar behavior: various physical quantities scale in the same way, with exactly the same critical exponents. This is an example of universality. Despite their widely different microscopics, systems in the same universality class can be described at the critical point by the same scale-invariant theory. Another very different class of physics is explored by scattering experiments, such as those at the Large Hadron Collider (LHC) at CERN. In order to compare with experimental data, particle theorists compute scattering amplitudes that encode the probability that a given initial state interacts and scatters to a particular final state. A simple example is the scattering of pions $\pi$, a light hadron associated with the strong nuclear force. For a process $\pi \pi \to \pi\pi$, the amplitude $A_4(\pi \pi \to \pi\pi)$ encodes the probability of the process as a function of the center-of-mass energy and the scattering angle. Specifically, the (differential) scattering cross-section is proportional to a phase-space integral over the norm-squared of the amplitude $|A_4|^2$. The theory that describes interacting massless pions is absolutely not scale-invariant, and it is therefore completely different from the type of theory that describes the critical point of boiling water and ferromagnets. Despite their obvious differences, boiling water, ferromagnets, and pion scattering are part of a vastly broader class of physical systems that are explored using a set of powerful methods in modern theoretical physics. The basic idea is to ``bootstrap'' the physical observables directly from physical and mathematical consistency constraints rather than calculating them from detailed microscopic descriptions. One then uses the observables --- subject to desired properties and symmetries --- to learn about the landscape of possible theoretical models that can give rise to such observables. A specific goal is to understand the structure of {\em quantum field theories (QFTs)}. QFT is a mathematical framework for theoretical physics. It has a plethora of applications and direct experimental relevance. QFT is relevant for particle physics, condensed matter systems, string theory, gravity, gravitational waves, and beyond. There is not {\em one} quantum field theory but {\em many}. Some QFTs describe particles that are weakly interacting and one can use Lagrangian techniques to study them. Other QFTs are always strongly coupled; in those cases words such as ``particles'' and their ``interactions'' are not useful, and they may have no Lagrangian descriptions. Some QFTs describe physics that depends heavily on the energy scale (or length scale) while other QFTs do not care a whit about scale. The subject of QFT is incredibly rich. QFTs describe the critical points of water and ferromagnets as well as the scattering of pions. The set of consistent QFTs can be thought of as a landscape: an abstract landscape that is so vast and complex and interesting that theorists constantly venture into its unknowns to explore and discover new features, new connections, and new properties.
It can be hazardous to venture out on a hike into unknown terrain, so we consult maps in order to know the local topographical features of the landscape, such as the beautiful Rocky Mountains.\footnote{This article is based on a colloquium given August 8, 2019, at the Aspen Center for Physics, Aspen, Colorado, USA.} The peaks of the mountains, the valleys, and the saddle points are the most prominent features and they guide our choice of path. Likewise, we wish to map out the landscape of QFTs. There are special places in the QFT landscape that can help us understand it better and navigate it. It is very useful to determine these special QFTs and understand their properties. Examples of such special points in the QFT landscape are known as {\em conformal field theories (CFTs)}: the CFTs are the metaphorical peaks, ridges, and valleys of the QFT landscape. Two modern approaches to explore the landscape of QFTs are: \begin{itemize} \item the Conformal Bootstrap Program focused on the CFTs, and \item the Scattering Amplitudes Program. \end{itemize} The goal of this article is to give colloquium-level introductions to these two highly active research areas and describe how they share a common approach to physics that leads to powerful and novel results. It is my hope that this will be useful for researchers in other fields of physics and math as well as for students. For those with more background in QFT, I have included two sections with technical details beyond the colloquium level because I wanted to illustrate the ideas concretely and explicitly. \vspace{2mm} \noindent {\bf Overview.} The presentation has two main parts: the first part --- Sections \ref{s:watermagnet} through \ref{s:introbootstrap} --- is intended for a general physics audience with no prior knowledge of the subjects. The second part --- Sections \ref{s:ampbootex} and \ref{s:confboot} --- provides technical details that put more equations behind the words in the first part. We begin with the description of the critical points of water and ferromagnets in Section \ref{s:watermagnet}: we discuss critical exponents and scale invariance. It has been proposed that a 3d conformal field theory describes the physics at the critical point. Before exploring that 3d theory further, we illustrate in Section \ref{s:relQFT} the richness of the landscape of 4d relativistic quantum field theories by describing a few examples of QFTs, and we then introduce conformal field theories. In Section \ref{s:amps}, we introduce the modern amplitudes program with focus on the ideas of using scattering amplitudes to explore the landscape of QFTs. Section \ref{s:introbootstrap} offers an introduction to the conformal bootstrap program with particular emphasis on what it teaches us about the critical points of boiling water and ferromagnets. The presentations in Sections \ref{s:amps} and \ref{s:introbootstrap} attempt to avoid the full technical detail, but for those who want to see more, please see Sections \ref{s:ampbootex} and \ref{s:confboot}. In particular, Section \ref{s:ampbootex} provides the full details of how to bootstrap a scalar model from very simple assumptions about the behavior of the scattering processes, and we shall see how a global symmetry emerges from the construction. We show how the Lagrangian interaction terms can be reconstructed from the bootstrapped amplitudes and that they can be re-summed to the Fubini-Study metric.
Section \ref{s:confboot} presents technical details about the conformal bootstrap setup, and as an example it is shown how the crossing relations require an interacting 4d CFT to have an infinite number of primary operators. We conclude in Section \ref{s:conclude} with very brief closing remarks. \section{Water \& Magnets} \label{s:watermagnet} \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (0,0)--(19,0); \draw (0,0)--(0,16); \draw[orange] (0,1.5) .. controls (3,2) and (13, 3) .. (18, 13); \draw[orange] (3.3,2.1) .. controls (2.3,6) .. (1.5, 15.5); \draw[dotted,gray] (0,13)--(18, 13); \draw[dotted,gray] (18,0)--(18, 13); \draw[dotted,gray] (0,2.1)--(3.3,2.1); \draw[dotted,gray] (3.3,0)--(3.3,2.1); \draw[dotted,gray] (0,5)--(10.8,5); \draw[dotted,gray] (10.8,0)--(10.8,5); \draw[dotted,gray] (2.6,0)--(2.6,5); \fill[orange] (3.3,2.1) circle (1.5ex); \fill[orange] (18, 13) circle (1.5ex); \fill[orange] (10.8,5) circle (1.5ex); \fill[orange] (2.6,5) circle (1.5ex); \node[rotate=90] at (-3,7.5) {\small pressure in atm}; \node at (16,-2) {\small temperature in $^\circ$C}; \node at (18, 13.7) {\tiny critical point}; \node at (5.4,1.8) {\tiny triple point}; \node at (14.1,4.7) {\tiny normal boiling point}; \node at (5.9,5.5) {\tiny normal freezing point}; \node at (2.6,-0.6) {\footnotesize $0$}; \node at (4,-0.6) {\footnotesize $0.01$}; \node at (10.8,-0.6) {\footnotesize $100$}; \node at (18,-0.6) {\footnotesize $374$}; \node at (-1.7,2.1) {\footnotesize $0.0060$}; \node at (-0.7,5) {\footnotesize $1$}; \node at (-1.7,13) {\footnotesize $217.75$}; \node[rotate=90,gray] at (1, 9) {\footnotesize solid}; \node[gray] at (9, 10) {\footnotesize liquid}; \node[gray] at (14, 3) {\footnotesize gas}; \end{tikzpicture} \end{center} \caption{\label{waterphases} Sketch of the phase diagram of water. Across the orange lines, the changes of phase require latent heat and are first order phase transitions. At the critical point, the phase transition becomes second order; at this point the system becomes scale invariant. It is the scale invariant model at the critical point that is of interest here.} \end{figure} Consider the phase diagram of water in Figure \ref{waterphases}. Under normal conditions of pressure at about 1\,atm, water freezes at $0^\circ$C and boils at $100^\circ$C, so as the temperature is varied at constant pressure, water exhibits three phases: solid, liquid, and gas. As is well known to people in mountainous regions and students in thermodynamics classes, the boiling point is lower at higher altitude. In Aspen, at about 8,000\,ft = 2440\,m, the air pressure drops to around 0.75\,atm and the boiling point of water is $92^\circ$C. So it takes a little longer to boil your pasta al dente. The familiar solid-liquid and liquid-gas phase transitions of water involve latent heat and are called {\em first order phase transitions}. As pressure increases, the boiling point of water goes up and at high enough pressure, $p > 217$\,atm, the phases of liquid and gas are no longer distinguishable. The liquid-gas transition curve in the phase diagram ends at a point called the {\em critical point} with $p_c \sim 217$\,atm and $T_c \sim 374^\circ$C. As the critical point is approached, the latent heat needed to transition between liquid and gas goes to zero, and at the critical point the phase transition becomes {\em continuous} (also called {\em second order}). OK, so what? Well, near the critical point, something special happens to the correlation length $\xi$ in the system.
The correlation length says something about how strongly coupled disparate parts of the system are to each other. As $T \to T_c$, the correlation length diverges as \begin{equation} \label{xi} \xi \sim |(T-T_c)/T_c|^{-\nu} \,. \end{equation} Points at large spatial separation $r$ are correlated with strength $e^{-r/\xi}$, so an infinite correlation length, $\xi \to \infty$, means that every part of the system couples with equal strength to every other part. Not just nearest-neighbor friendliness here, everybody is coupled to everybody else. Moreover, when $\xi\to\infty$ there is no distinguishing scale in the system: it has become {\em scale invariant}. Physically, the phenomenon of scale invariance can be seen as critical opalescence: at the critical point the liquid becomes milky in appearance as the correlation lengths that govern the fluctuations in the system become of the same order as the wavelength of visible light, so it scatters and the substance looks cloudy. The number $\nu$ in equation \reef{xi} is an example of a critical exponent. These exponents characterize the approach to the critical point and play an important role for critical systems. For the critical point of a liquid-vapor transition, the value of $\nu$ is \begin{equation} \nu \approx 0.63\,. \end{equation} Now let us switch gears and discuss a physically very different system, namely ferromagnets. They exhibit critical behavior at the Curie temperature $T_c$, which separates the ordered ferromagnetic phase at $T<T_c$ from the disordered non-magnetic phase at $T>T_c$. As an example, the Curie temperature of iron is $T_c =1043$\,K = $770^\circ$C; for reference, iron melts at $1811$\,K =$1538^\circ$C. At the critical point set by the Curie temperature, the correlation length between dipoles in the ferromagnet diverges just as in equation \reef{xi}. And here comes the best part: {\em the critical exponent $\nu$ for ferromagnets takes the same value as for water, $\nu \approx 0.63$.} This is quite amazing: these are different physical phenomena whose microscopics are totally unrelated. Nonetheless, at their critical points, these two systems --- water and ferromagnets --- behave alike. This is an example of {\em universality}. There is a theoretical model, the 3d Ising model, that describes both water and ferromagnets near their critical points. For the ferromagnet, this is a model in which each site of a 3d lattice can either be spin up or spin down. For water, replace spin up/down with occupation number: a site either has a molecule or not. Block spin techniques allow one to study such a lattice model at greater and greater length scales, meaning lower and lower energies. In the deep infrared, one approaches a fixed point of scale invariance (called the critical 3d Ising model) and finds that the correlation length diverges with a critical exponent $\nu \approx 0.63$. It appears that the value of $\nu \approx 0.63$ has not been measured directly experimentally for the critical point of regular water H$_2$O, but it can be inferred from other measurements of critical exponents. There are, however, multiple other direct measurements of $\nu$ in other systems: small angle neutron scattering in heavy water D$_2$O \cite{Sullivan_2000}, light-scattering experiments in an electrolyte solution \cite{Sengers2009}, as well as other systems in the same 3d Ising universality class, see for example Table 7 in \cite{Pelissetto:2000ek}.
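For readers who like to tinker, the divergence in equation \reef{xi} can be explored with a toy Metropolis simulation of the 3d Ising model. The sketch below (Python) measures the spin-spin correlation along one axis, whose exponential decay rate gives a rough estimate of $\xi$; be warned that a small lattice and a few hundred sweeps are nowhere near sufficient for a quantitative determination of $\nu$, which requires careful finite-size scaling analyses:
\begin{verbatim}
import numpy as np

def metropolis(L=16, T=4.6, sweeps=200, seed=1):
    # Ferromagnetic 3d Ising model with J = 1, k_B = 1; T_c is roughly 4.51.
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L, L))
    for _ in range(sweeps * L**3):          # single-spin-flip updates
        x, y, z = rng.integers(0, L, size=3)
        nb = (s[(x+1) % L, y, z] + s[(x-1) % L, y, z] +
              s[x, (y+1) % L, z] + s[x, (y-1) % L, z] +
              s[x, y, (z+1) % L] + s[x, y, (z-1) % L])
        dE = 2 * s[x, y, z] * nb            # energy cost of flipping this spin
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[x, y, z] *= -1
    return s

def correlation(s):
    # <s(0) s(r)> averaged over the lattice, along one axis.
    L = s.shape[0]
    return np.array([np.mean(s * np.roll(s, r, axis=0))
                     for r in range(L // 2)])
\end{verbatim}
Fitting $\log \langle s(0)s(r)\rangle$ to $-r/\xi$ at a few temperatures above $T_c$ shows $\xi$ growing as $T \to T_c$.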
Now you may have started reading this article in the hope of learning about the landscape of QFTs and instead you have gotten an earful about the phase diagrams of water and ferromagnets. Fear not, there is a purpose behind the madness. Polyakov conjectured \cite{Polyakov:1970xd} that at critical points, the symmetries of the system are enhanced to conformal symmetry: they can be described by conformal field theories. We have started out with the critical continuous phase transitions in water and ferromagnets because these are real-world examples of the power of quantum field theory and a case where the conformal bootstrap techniques are directly applicable. The point here is that by studying the 3d Ising model (henceforth understood to be at its fixed point) as a conformal field theory using the techniques we describe in Section \ref{s:introbootstrap}, one can extract, to an incredible precision, information about the critical exponents of the system, such as $\nu$. \section{Field Theories: A Brief Survey} \label{s:relQFT} In this article we study {\em relativistic} QFT.\footnote{Non-relativistic QFT is an important subject too that is highly relevant in condensed matter contexts. The examples of water and ferromagnets are non-relativistic, but at the critical points the symmetries are expected to be enhanced, as discussed at the end of Section \ref{s:watermagnet}.} This means that we consider QFTs that are invariant under Poincar\'e symmetry: spatial rotations, Lorentz boosts, and spacetime translations. Moreover, we assume that the theories we study are local and unitary. Loosely speaking, locality means that there is no action at a distance. Technically, this means that local fluctuations can be described in terms of local operators that depend on a single point of spacetime. Locality manifests itself in the physical observables through the kinds of singularity structures they are allowed to have. We discuss this further in Section \ref{s:amps}. As stated in the Introduction, there is not just one QFT but a vast and rich landscape of QFTs. To illustrate how different QFTs can be --- and to set the stage for the later discussion --- I will now briefly discuss some examples of relativistic QFTs. \subsection{Examples of Relativistic QFTs} \label{s:exQFTs} Here follow some key examples of 4d quantum field theories: \begin{itemize} \item {\em Quantum Electrodynamics (QED)} describes the interaction of photons and electrons/positrons. The classical equations of motion are Maxwell's equations for the photons and the Dirac equation for the electrons/positrons. At the quantum level, the strength of the coupling of photons to electrons/positrons depends on the energy scale. At atomic-level energies the effective coupling, the fine structure constant $\alpha$, is $\alpha \approx 1/137$. However, at energies around the mass of the weak force mediators $W^\pm$ and $Z$, which is about $90$ times the proton mass, the strength of the coupling increases to around $\alpha \approx 1/127$. Thus the coupling runs with scale: it approaches zero at very low energies where the theory becomes trivial, but at high energies it becomes stronger and perturbation theory breaks down. \item {\em Quantum Chromodynamics (QCD)} describes the strong nuclear force. More precisely, QCD is the theory that describes gluons, $N_f$ flavors of quarks (with $N_f=6$ in Nature for the $d$, $u$, $s$, $c$, $b$, and $t$ quarks), and their interactions.
The dynamics of gluons is captured by {\em Yang-Mills theory}. The coupling strength $\alpha_s$ of QCD also depends on the energy scale. Famously, QCD in our world behaves oppositely to QED in that it becomes free (coupling goes to zero) at high energies while it is strongly coupled and confining at low energies. \item {\em The Standard Model of Particle Physics} combines QED with three generations of leptons (electrons, muons, taus, and their antiparticles, as well as the neutrinos), QCD with six flavors of quarks, the electroweak force, and the Higgs mechanism to give an incredibly successful quantum field theory in which one can calculate physical observables to high precision and compare with experimental data. An example is the multi-digit precision agreement in the measurements of the electron and muon Land\'e $g$-factors. The Higgs mechanism and spontaneously broken symmetry are key ingredients in the Standard Model, and the discovery of the Higgs boson announced in 2012 was a tremendous success of both long-term experimental perseverance and the power of theoretical studies of quantum field theories. \vspace{0.8mm} In the Standard Model, the couplings also run with energy scale. The electromagnetic force and the weak nuclear force unify at about 246 times the proton mass. Grand Unification models seek to unify the electroweak force with the strong force at an even higher scale. \item {\em Gravity} is not included in the Standard Model. However, it is successfully described by its own field theory, namely General Relativity. As a field theory, General Relativity differs from those discussed above in that the gravitational coupling $\kappa = G^{1/2}$ is dimensionful. Specifically, Newton's constant $G$ has dimension of (mass)$^{-2}$. This is in contrast with the dimensionless fine structure constant $\alpha$ of QED. As a consequence, the effective dimensionless coupling in gravity is $E \kappa = E \sqrt{G}$, where $E$ is the energy scale in the particular problem. Thus gravity is a weak force (i.e.~perturbative) only at energy scales much smaller than $\kappa^{-1} = G^{-1/2} \sim M_\text{Planck} \sim 10^{19}$ GeV. In a sense this is like QED in that gravity is weak at low energies and strong at high energies. But gravity is actually much more complicated than QED. QED is a renormalizable theory, meaning that it is a sensible predictive theory at the quantum level. Gravity, on the contrary, is non-renormalizable and therefore, broadly speaking, it is non-predictive at the quantum level at energies approaching the Planck scale. \vspace{0.8mm} It is useful to think of General Relativity as a {\em low-energy effective field theory}: it gives a description of gravity that makes sense only at sufficiently low energies $E \ll M_\text{Planck}$. Within this regime of validity it is, as we know well, a highly successful theory of gravitational phenomena. At energies above the Planck scale, it needs a UV completion in order to make sense. String theory is a theoretical framework that, among many other properties, is the most promising candidate for a theory of quantum gravity that gives a UV completion of General Relativity. \item {\em Effective Field Theories (EFTs)} describe physical phenomena in an expansion in some small parameter(s). In the context here, we focus on low-energy EFTs.
The idea is to work in a low-energy regime, where powers of energy-momentum are suppressed by a particular ``cut-off'' scale $\Lambda_\text{UV}$ above which the expansion in small $E/\Lambda_\text{UV}$ is no longer valid. In terms of a Lagrangian, derivatives are (via Fourier transform) directly counting powers of energy-momentum, so in an EFT one includes higher-derivative terms with increasing suppression by powers of $1/\Lambda_\text{UV}$. This means that the $1/\Lambda_\text{UV}$-expansion becomes a derivative expansion. The principle of EFTs is to include all Lagrangian terms that are allowed by symmetries of the system up to a given order in $1/\Lambda_\text{UV}$. The arbitrary coefficients in this expansion parameterize the (potentially unknown) UV physics. \vspace{0.8mm} As an example, General Relativity is the leading 2-derivative interactions of a gravitational EFT in which higher-derivative corrections are suppressed by inverse powers of the Planck mass. The historical successes of General Relativity --- as well as the frequent use you make of it via the GPS built into your smart phone --- tell you that effective field theories are incredibly useful. \vspace{0.8mm} Physics beyond the Standard Model, such as proton decay or dark matter, is often encoded in terms of EFTs. The scale of the EFT is then associated with the scale of the new physics at higher energies. \item {\em Non-Linear Sigma Models (NLSM)} describe scalar fields that take values in some manifold, called a target space, and the model inherits the symmetries from this target space. An important class of NLSMs consists of the EFTs that govern the low-energy dynamics of massless Goldstone bosons arising from spontaneously broken global symmetries. In these cases, the scale built into the EFT is associated with the symmetry-breaking scale. \vspace{0.8mm} As an example, the Standard Model has an approximate chiral symmetry that is spontaneously broken at a scale of about $1$\,GeV (i.e.~about the mass of the proton). The symmetry breaking gives rise to Goldstone bosons that are identified as the pions. While Goldstone bosons are exactly massless, pions in nature are not; they have low mass (135--140\,MeV) and because of that the chiral symmetry is only approximate. Nonetheless, the EFT that describes (massless) pions is quite useful. It is called {\em chiral perturbation theory} and is an example of a NLSM. \vspace{0.8mm} In our presentation of the amplitude approach to bootstrap the landscape of QFTs in Sections \ref{s:ampboot} and \ref{s:ampbootex}, we focus on pion NLSMs. \end{itemize} We have given a few examples of QFTs, but this is far from the whole picture. In the examples, I emphasized in each case the dependence on energy scale. The way these theories behave with change in scale is governed by {\em renormalization group (RG) flow}. RG flow is a way to move around in theory space; some theories are connected by RG flows, others are not. We have seen that some theories (like QED) become trivial at low energies, while others (like QCD) become strongly coupled. Some theories may also flow to non-trivial fixed points at low energies. This, for example, is the case for QCD with a sufficiently high number of families of quarks. At an RG fixed point, the physics no longer changes with scale. In many cases, scale invariance is associated with enhanced spacetime symmetry, namely {\em conformal symmetry}.
\subsection{Conformal Field Theories} \label{s:CFT} Conformal Field Theories (CFTs) are characterized by having, in addition to Poincar\'e symmetry, conformal boost symmetry. One way to describe it is that special conformal boosts are transformations that preserve angles. Another way is to consider the inversion transformation that sends a spacetime coordinate $x^\mu$ to $x^\mu/x^2$, where $x^2=-t^2+|\vec{x}|^2$ is the relativistic invariant spacetime distance. (We use conventions of $c=1$ throughout.) Then a special conformal boost is what you get when you do inversion, followed by a translation, followed by another inversion. Sounds a little complicated? OK, let us try to get a little intuition. Inversion sends $x^2$ to $1/x^2$. Ignoring the time-component of $x^\mu$, we then see that inversion trades long distance with short distance, and vice versa. In a relativistic theory, this then interchanges low energy and high energy. For a theory to have such a property,\footnote{CFTs need not have inversion symmetry, for example chiral CFTs do not, but for the purpose of this introduction let us just consider CFTs that are invariant under inversion.} it cannot have any preferred scales because inversion symmetry requires that the physics above and below a given scale has to be the same. For instance, if a particle has mass $m$, then for energies $E \ge 2 m c^2$ such particles can be pair-produced, but for $E < 2 m c^2$ they cannot. Thus physics is different above and below $2mc^2$ and hence masses cannot be allowed with inversion symmetry. Similarly no other dimensionful parameters are allowed. Hence, a relativistic theory with inversion symmetry is necessarily also {\em scale invariant}. This is the aspect of CFTs that is most relevant for us in this presentation: CFTs are scale invariant.\footnote{The reverse may in general not be true: scale invariance does not in general imply conformal invariance.} Scale invariance is very different from our everyday experience. It is not the same to take an ice bath as it is to put your finger in boiling water (and neither is pleasant for very long). People age differently over the time scale of a year than they do over ten years. So it may seem like conformal symmetry, or even just scale invariance, is a property very different from what we encounter in our everyday world. That is true; nonetheless, scale invariance does happen in nature --- and it can be found in the lab. We already encountered an example of scale invariance in Section \ref{s:watermagnet}. Recall that near the critical point of water (or the Curie point for the ferromagnet), the correlation length $\xi$ diverges so that all length scales become of equal relevance and the system becomes scale invariant at the critical point. There is no proof that it also becomes conformal; however, in Section \ref{s:introbootstrap} we describe how modeling the critical point using conformal field theory techniques (specifically here for the {3d Ising model}) makes it possible to determine critical exponents such as $\nu$ in \reef{xi}. Other examples of CFTs arise in 4-dimensional supersymmetric gauge theories. In Section \ref{s:exQFTs}, we noted that gluons, the spin-1 massless particles of the strong nuclear force, are described by Yang-Mills theory. Yang-Mills theory is not a conformal theory, but the gluons can be coupled to other particles in such a way that the resulting theory is conformal.
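Before moving on, the inversion construction of special conformal transformations described above is easy to verify numerically. A short check (Python, Euclidean signature for simplicity) confirms that inversion, translation by $b$, and a second inversion compose to the standard closed form $x^\mu \to (x^\mu + b^\mu x^2)/(1 + 2\, b\cdot x + b^2 x^2)$:
\begin{verbatim}
import numpy as np

def inv(x):
    return x / np.dot(x, x)

def sct_composed(x, b):   # inversion, then translation, then inversion
    return inv(inv(x) + b)

def sct_closed(x, b):     # the textbook closed form
    x2, b2, bx = np.dot(x, x), np.dot(b, b), np.dot(b, x)
    return (x + b * x2) / (1 + 2 * bx + b2 * x2)

x = np.array([0.3, -1.2, 0.7, 2.0])
b = np.array([0.1, 0.05, -0.2, 0.0])
assert np.allclose(sct_composed(x, b), sct_closed(x, b))
\end{verbatim}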
A useful tool in this context is supersymmetry, a symmetry that partners bosons and fermions into supermultiplets in which all particles must have the same mass and their interactions are quite restricted. The massless fermion superpartners of gluons are called gluinos, and the QFTs describing them are called {\em super Yang-Mills (SYM) theories}. In models without gravity, the gluons can have either $\mathcal{N}=0,1,2$ or $4$ gluino partners.\footnote{\label{footieN3} Why not $\mathcal{N}=3$, you ask? Sure, $\mathcal{N}=3$ is fine too. For a Lagrangian theory, CPT invariance (charge conjugation, parity, time-reversal) implies that $\mathcal{N}=3$ SYM theory is equivalent to $\mathcal{N}=4$ SYM.} For the $\mathcal{N}=2$ or $4$ cases, there also need to be massless scalars coupled supersymmetrically to the gluons and gluinos. The massless $\mathcal{N}=4$ SYM theory in 4d is very special: its Lagrangian is completely fixed by supersymmetry, and the couplings do not run at all with scale. Not only is {\em $\mathcal{N}=4$ super Yang-Mills theory} scale-invariant, it is in fact also conformally invariant, even at the quantum level. It is a theory that in many respects is considered ``the simplest'' QFT, due to the strong constraints from its symmetries that allow a degree of calculational control one often lacks in other theories. $\mathcal{N}=4$ super Yang-Mills theory appears in many areas of theoretical high energy physics, often as a ``testing lab'' for developing new techniques and gaining insights into similar systems, for example QCD. There is a multitude of other known CFTs. Coupling $\mathcal{N}=2$ SYM supersymmetrically to matter multiplets can give rise to superconformal (supersymmetric and conformal) theories (SCFTs). The subject of $\mathcal{N}=2$ SCFTs is in itself very rich, with some $\mathcal{N}=2$ SCFTs described by Lagrangians and others having only strongly coupled dynamics and no Lagrangians. Understanding the landscape of $\mathcal{N}=2$ SCFTs is in itself an actively investigated research subject. Further examples of CFTs and SCFTs arise in condensed matter systems and in string theory. Recall that (S)CFTs are the light beacons in the much vaster landscape of QFTs, so it is noteworthy that they themselves make up a rich landscape that is far from fully explored. \subsection{QFT Observables} The key observables in QFTs are {\em correlation functions}. Correlation functions are familiar from cosmology, where they measure the correlations between different locations in the map of the cosmic microwave background. They are familiar from condensed matter physics, where we may be interested in whether interactions are short distance or long distance; in fact, the correlation length $\xi$ we discussed in Section \ref{s:watermagnet} is the characteristic length associated with a 2-point (connected) correlation function \begin{equation} \< \sigma(x) \sigma(y) \> \sim e^{-|x-y|/\xi}\, \end{equation} for $|x-y| \gg \xi$. It measures the correlations between the degrees of freedom at locations $x$ and $y$. In a lattice model, the field $\sigma(x)$ can be thought of as designating whether the site at position $x$ has spin up or down (say $\sigma(x) = \pm 1$). In general, the $n$-point correlation function measures the correlation between quantities at $n$ different spacetime locations.
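For the reader who wants to see such a 2-point function in practice, here is a minimal Monte Carlo sketch (in Python; it is purely illustrative and not part of the original discussion, and all parameters are arbitrary choices) estimating the connected spin-spin correlator of the 2d Ising model on a small lattice:
\begin{verbatim}
import numpy as np

# Minimal sketch: Metropolis Monte Carlo for the 2d Ising model,
# estimating the connected 2-point function <s(0) s(r)> along a row.
# Lattice size, temperature, and sweep counts are illustrative choices.
rng = np.random.default_rng(0)
L, T = 16, 3.0                       # T above the critical temperature
spins = rng.choice([-1, 1], size=(L, L))

def sweep(s, beta):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        # energy cost of flipping spin (i, j), periodic boundaries
        nn = (s[(i+1) % L, j] + s[(i-1) % L, j]
              + s[i, (j+1) % L] + s[i, (j-1) % L])
        dE = 2 * s[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

beta = 1.0 / T
for _ in range(200):                 # thermalization sweeps
    sweep(spins, beta)

n_meas, corr, mag = 400, np.zeros(L // 2), 0.0
for _ in range(n_meas):
    sweep(spins, beta)
    mag += spins.mean()
    for r in range(L // 2):
        corr[r] += np.mean(spins * np.roll(spins, r, axis=1))
connected = corr / n_meas - (mag / n_meas) ** 2
print(np.round(connected, 4))        # decays roughly like exp(-r/xi)
\end{verbatim}
Fitting the decay of the output to $e^{-r/\xi}$ gives an estimate of the correlation length $\xi$; lowering $T$ toward the critical temperature makes $\xi$ grow, as discussed in Section \ref{s:watermagnet}.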
One approach to correlation functions is to reduce them from their general form to an ``on-shell'' form in momentum space; this gives the on-shell scattering amplitudes from which one can compute the observable scattering cross-sections. The amplitudes are the observables of interest in the Scattering Amplitudes Program. Another approach to studying correlation functions is to impose on them symmetries and mathematical consistency conditions: that is what is done in the Conformal Bootstrap Program in the context of CFTs. \section{Introduction to the Modern Scattering Amplitudes Program} \label{s:amps} A scattering experiment consists of banging things together to see what comes out. For example, at the LHC protons are collided with protons at energies of about 10,000 times their rest mass. At the microscopic level, the partons (gluons and quarks) inside the protons interact with each other in the collisions. A process representative of the physics we discuss here is two gluons interacting to produce a new set of gluons, e.g.~ \begin{equation} \label{6gluons} g + g ~\to~ g + g + g + g \end{equation} The gluons are described by Yang-Mills theory. At sufficiently high energies (such as at the LHC) it is weakly coupled, and it therefore makes sense to study the scattering of gluons (and other partons) perturbatively. Eventually the partons hadronize and form jets of mesons and baryons. Here we focus on the high-energy part of the process that just involves gluons scattering inelastically to gluons. The probability for a scattering process to occur is encoded in the scattering amplitude $A_n(i \to f)$, where $n$ is the total number of initial- and final-state particles in the process $i \to f$. It is related to the experimentally measurable observable, the differential scattering cross-section, as \begin{equation} \frac{d\sigma}{d\Omega} \sim \int |A_n(i \to f)|^2\,. \end{equation} The integral here is taken suitably over phase space. For a given initial state $i$, $d\sigma/d\Omega$ gives the probability of measuring the final state $f$ as a function of scattering angles and energies. Amplitudes are traditionally calculated as the sum of Feynman diagrams constructed from the vertices and propagators of a theory described by some Lagrangian. The perturbative expansion in small couplings is organized diagrammatically in a loop-expansion where the leading order is tree-level, the first correction is the sum of one-loop diagrams, the next correction consists of the two-loop diagrams, etc. A general rule of thumb: the more loops and the more particles, the harder it is to calculate the amplitudes. For more particles, this is because the number of diagrams tends to grow combinatorially. For higher loops, the number of diagrams is a significant issue too, but not the only one; the evaluation of highly non-trivial integrals over the energy-momentum running in the closed loops can be a challenging roadblock as well. Amplitudes depend on {\em ``external data''}: the momenta $p_i^\mu$ for each of the external particles, polarization vectors for spin-1 particles, and spin wavefunctions for fermions. The external momenta $p_i^\mu$ are subject to the requirements of being on-shell and satisfying momentum conservation, i.e.~($c=1$) \begin{equation} \label{genmomcons} p_i^2 \equiv -E_i^2 + |\vec{p}_i|^2 = -m_i^2 ~~~~\text{and}~~~~ \sum_{i\in \text{incoming}} p_i^\mu = \sum_{i\in \text{outgoing}} p_i^\mu\,.
\end{equation} Amplitudes for which \reef{genmomcons} is satisfied, and for which polarizations or wavefunctions are included as appropriate for all external particles, are called {\em on-shell amplitudes}. To calculate the process \reef{6gluons} at the leading order in perturbation theory, one needs to add up all the tree Feynman diagrams with 6 external gluons built from the Feynman rules extracted from the Yang-Mills Lagrangian. There are cubic and quartic gluon self-interactions, so one gets \begin{equation} A_6 = \raisebox{-0.55cm}{ \begin{tikzpicture}[scale=0.3, line width=1 pt] \draw [gluon] (-2,2)--(0,0); \draw [gluon] (-2,-2)--(0,0); \draw [gluon] (0,0)--(4,0); \draw [gluon] (4,2)--(4,0); \draw [gluon] (4,-2)--(4,0); \draw [gluon] (9,2)--(8,0); \draw [gluon] (9,-2)--(8,0); \draw [gluon] (8,0)--(4,0); \end{tikzpicture} } ~+ 219~\text{more diagrams}. \end{equation} For scattering of 7 gluons one needs 2485 diagrams, and for 8 gluons it would be 34,300 diagrams. Each diagram translates into a rather complicated mathematical expression via the Feynman rules. Calculating these amplitudes by hand using Feynman diagrams is not a great way to spend your time. And this is still only the leading order in perturbation theory; imagine the complications at loop-level! One of the goals of the modern on-shell amplitudes program is indeed to come up with better and more efficient ways to calculate scattering amplitudes, but there are also several other avenues of progress. \subsection{Modern Amplitudes Program} Early modern approaches to scattering amplitudes were pioneered by Bern, Dixon, and Kosower, often driven by applications in particle phenomenology. This thrust continues, and the field has broadened significantly in the past $\sim 17$ years and attracted many more people. We outline five directions in modern research on scattering amplitudes: \begin{enumerate} \item {\bf New Computational Techniques.} One goal of the amplitudes program is to develop new calculational techniques to facilitate more efficient computation of scattering amplitudes --- and to perform calculations of amplitudes that may even be impossible using traditional Feynman diagrammatics. At tree-level, examples of such new techniques are {\em on-shell recursion relations} (they go under names such as BCFW \cite{Britto:2004ap,Britto:2005fq}, CSW expansion \cite{Cachazo:2004kj}, all-line shifts \cite{Cohen:2010mi,Cheung:2015cba}, soft shift recursion \cite{Cheung:2015ota,Elvang:2018dco}, etc.). Some on-shell recursions are quite general and others are more closely adapted to the field theory they are applied to. The general idea is to recycle lower-point amplitudes into higher-point ones. For example, 3-particle scattering determines 4-particle scattering. Then 3- and 4-particle amplitudes can be recycled into 5-particle scattering, etc. Sometimes the recursion relations can be solved exactly; for example, there are closed-form expressions for the tree-level scattering of any number $n$ of gluons \cite{Drummond:2008cr}. \vspace{0.8mm} The derivation of on-shell recursion relations exploits knowledge of the analytic structure of amplitudes. Tree amplitudes are rational functions of the external data, and they have simple poles where physical particles can be exchanged. On these poles, unitarity guarantees that the tree amplitudes factorize into products of lower-point amplitudes. The information about the location of the poles in momentum space and the factorized form of their residues forms the basis of the on-shell recursion relations.
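To give a flavor of how this works (conventions vary in the literature), the BCFW construction deforms two of the external momenta by a complex parameter $z$, e.g.~$\hat{p}_1 = p_1 + z\,q$ and $\hat{p}_n = p_n - z\,q$ with $q^2 = q \cdot p_1 = q \cdot p_n = 0$, so that the shifted momenta remain on-shell and momentum conservation is preserved. If the shifted amplitude vanishes as $z \to \infty$, Cauchy's theorem expresses the amplitude as
\begin{equation}
A_n ~=~ \sum_{I} \hat{A}_L(z_I)\, \frac{1}{P_I^2}\, \hat{A}_R(z_I)\,,
\end{equation}
where the sum is over factorization channels $I$ that separate particles 1 and $n$, and $z_I$ is the value of the shift at which the internal momentum $\hat{P}_I(z_I)$ goes on-shell, so that the two shifted subamplitudes $\hat{A}_L$ and $\hat{A}_R$ are themselves on-shell amplitudes.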
\vspace{0.8mm} Loop-amplitudes have a more complicated analytic structure. While they can have rational terms, they generally also involve more complicated analytic functions such as logarithms, polylogarithms, and worse. One powerful technique is the method of {\em generalized unitarity} (pioneered in \cite{Bern:1994zx} and since applied in countless contexts; see the review \cite{Bern:2011qt} and references therein). Here one uses the fact that the integrand of a loop amplitude is a rational function that develops poles for specific choices of the loop-momenta. On these poles, the integrand factorizes into amplitudes with fewer loops. At 1-loop level, such ``cuts'' (and their $d$-dimensional generalizations) can be used to determine the integrand from tree-level amplitudes. At 2-loop level, the 1-loop and tree amplitudes are recycled to determine the cuts, and so on. \vspace{0.8mm} The fact that the loop-integrand is rational can be used to construct on-shell recursion relations at loop-level in certain models, such as the planar limit of $\mathcal{N}=4$ SYM \cite{ArkaniHamed:2010kv}. \item {\bf Mathematical Structure and Geometry.} We have already alluded to how the mathematical structure of amplitudes informs the development of calculational tools. But on-shell amplitudes themselves also harbor hidden structures that cannot be inferred from the Lagrangian. For example, many amplitudes have analytic expressions that are much simpler than the Feynman diagram representations would suggest. What is responsible for such simplifications? Elucidating the mathematical structure of amplitudes, uncovering hidden symmetries, and reformulating the scattering problem in novel mathematical terms is another goal of the amplitudes program. \vspace{0.8mm} Recent ideas include representations of amplitudes in terms of contour integrals in Grassmannian spaces (which are spaces of $k$-planes in $n$-dimensional space) \cite{ArkaniHamed:2009dn} and geometrizations such as polytopes \cite{Hodges:2009hk,ArkaniHamed:2010gg}, amplituhedrons \cite{Arkani-Hamed:2013jha}, associahedrons, and more generally positive geometry \cite{Arkani-Hamed:2017mur}. The idea is that the amplitude is related to a volume form for a geometric object in some abstract mathematical space. The boundaries of the geometric object correspond to the locations of poles. Different triangulations of the volume of this object can be mapped to different, equivalent mathematical formulas for the amplitudes: one is the Feynman diagram representation, some correspond to the results of on-shell recursion relations, and others again are inherently different. This is explored at both tree- and loop-level. \vspace{0.8mm} Through the connection to interesting mathematical structures, there are now fruitful collaborations between the amplitudes community and mathematicians on subjects such as positive geometry and cluster algebras. \item {\bf Exploring the Space of QFTs: Amplitude Bootstrap.} Traditionally, one starts with a Lagrangian, writes down the Feynman rules, and uses them to compute the amplitudes. Any symmetries of the Lagrangian manifest themselves on the amplitudes as ``Ward identities''. Here is an example: if the Lagrangian has a symmetry that gives charge conservation, the associated Ward identity says that the amplitude of any process that violates charge conservation has to vanish.
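Another standard example, stated here schematically: in a gauge theory, gauge invariance implies that an on-shell amplitude vanishes when the polarization vector $\epsilon_i^\mu$ of an external gluon or photon is replaced by its momentum $p_i^\mu$,
\begin{equation}
A_n\big(\epsilon_i \to p_i\big) ~=~ 0\,.
\end{equation}
Beyond its conceptual role, this on-shell Ward identity is a very useful practical check on any explicit amplitude calculation.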
\vspace{0.8mm} A new approach to QFTs is to turn this logic on its head: instead of starting with the Lagrangian, one takes the physical observables, the amplitudes in particular, as the starting point, imposes assumptions about the particle spectrum and symmetries on the amplitudes, and subjects them to tests of mathematical consistency. This then allows one to explore the existence of QFTs with the assumed properties. In particular, it gives a systematic way to explore the landscape of possible theories with a set of specified symmetries. We describe this further in Section \ref{s:ampboot} and in more detail in the technical Section \ref{s:ampbootex}. \item {\bf Double-Copy.} In the mid-80s it was realized that tree-level closed string amplitudes can be written as sums of products of tree-level open-string amplitudes \cite{Kawai:1985xq} --- these are known as the Kawai-Lewellen-Tye (KLT) relations. In the limit of infinite string tension, this becomes the field theory statement that the graviton tree amplitudes can be obtained as a sum of products of gluon scattering amplitudes. This relation is sometimes written \begin{equation} \text{``gravity~~=~~(gauge theory)$^2$\,''} \end{equation} and is by now referred to as an example of the {\em double copy}. \vspace{0.8mm} Starting in 2008, it became clear that there is more to this story. Bern, Carrasco, and Johansson \cite{Bern:2008qj} found that tree-level gauge theory amplitudes of gluon scattering could be written in a form where certain kinematic numerators obey the same Jacobi identities as the algebraic color-factors of the non-abelian gauge group of the theory. This is called {\em color-kinematics duality}. Moreover, if one replaces the color factors in this representation of the amplitude with the kinematic numerators of the gauge theory, remarkably the result is the gravity tree amplitude! This is the {\em BCJ double copy}, and it has since been generalized to amplitudes of other field theories (e.g.~\cite{Cachazo:2014xea}). BCJ conjectured (and it has been tested in multiple contexts) that a similar color-kinematics prescription and double-copy also hold at the level of the loop-integrand \cite{Bern:2010ue}. \vspace{0.8mm} Since tree-level scattering represents the classical physics, it is natural to explore whether there is a similar way to double-copy solutions to the classical equations of motion; for example, a Coulomb-type solution in gauge theory double-copies to a black hole solution in General Relativity in a weak-field expansion. This direction has thus attracted the attention of researchers from other fields, such as theorists studying classical solutions in General Relativity and supergravity, and cosmologists. For a recent review of the double-copy and its applications, see \cite{Bern:2019prr}. \item {\bf Gravitational Wave Physics.} With the recent detection of gravitational waves from black hole inspirals made by the LIGO detector, and the 2017 Nobel Prize to the pioneers of experimental gravitational wave physics, the field of gravitational waves has received intense interest. Remarkably, amplitude techniques prove very useful here too. Starting with the Einstein equation, it is not hard to derive the linearized solution for a freely propagating gravitational wave. It is, however, a highly non-trivial matter to model the gravitational wave radiation resulting from the inspiral of two heavy objects such as black holes or neutron stars. Numerical breakthroughs have been an essential part of the study.
There are also many other techniques, such as effective field theory formulations \cite{Goldberger:2004jt}. \vspace{0.8mm} On the analytic side, one uses a post-Minkowskian or post-Newtonian expansion for a Hamiltonian with an effective potential; the corrections here are in powers of Newton's constant and of the orbital speed $v/c$, respectively. The inspiral problem corresponds to the elliptic-orbit solutions of this Hamiltonian: they are the bound states. At first sight this has absolutely nothing to do with scattering amplitudes. However, there are also hyperbolic solutions: a classic example of the hyperbolic problem is the famous deflection of light by a heavy object. \vspace{0.8mm} The deflection (i.e.~hyperbolic) problem is basically a scattering process: two massive objects are in the initial state, they interact, and then they fly apart again after some exchange of energy-momentum. We can compute the scattering of massive particles under exchange of gravitons. The higher-order graphs in such a calculation are in direct correspondence with the higher-order corrections in the effective Hamiltonian, so the effective Hamiltonian can be reconstructed from the scattering amplitudes and then used to study the bound-state problem. If this were done with Feynman rules, there would be limited calculational advantage. However, with the modern on-shell amplitude machinery, very promising progress has been made. At this stage, examples in this direction include the calculation of the Hamiltonian for massive spinless binary systems to 3rd post-Minkowskian order (meaning order $G^3$ in the Newton coupling) \cite{Bern:2019nnu,Bern:2019crd,Bern:2020buy}. There are also approaches \cite{Kalin:2019rwq,Kalin:2019inp} that try to circumvent the construction of the effective potential and directly extract gravitational wave information from the scattering amplitudes, as well as related EFT approaches \cite{Kalin:2020fhe}. This is a rapidly developing field that has facilitated fruitful interactions between the General Relativity and amplitudes communities. \end{enumerate} There are many other very interesting developments in the field of scattering amplitudes. One is the system of {\em scattering equations} that has led to the so-called CHY construction of amplitudes, which comes with its own formulation of the double-copy and has led to new examples of its application \cite{Cachazo:2013hca,Cachazo:2013iea,Cachazo:2014xea}. Another direction has been the connection between the universal {\em soft behavior} of gravitons and the infinite-dimensional BMS symmetry in General Relativity \cite{He:2014laa}. There are other types of amplitude bootstraps too, such as the {\em integrability} approach \cite{Basso:2013vsa}, the {\em loop-amplitude bootstrap} using cluster-algebraic structures \cite{Caron-Huot:2020bkp}, and the {\em S-matrix bootstrap} that exploits the conformal bootstrap \cite{Paulos:2016fap}. The latter is an example of the fruitful overlap between the conformal bootstrap and amplitudes communities. Furthermore, phenomenologists are increasingly using amplitude techniques to study physics beyond the Standard Model, for example to organize higher-derivative operators in {\em Standard Model Effective Field Theory (SMEFT)} \cite{Shadmi:2018xan,Craig:2019wmo}.
Finally, using basic properties of amplitudes, one can prove a number of interesting general theorems about QFTs, such as \cite{Benincasa:2007xk,McGady:2013sga,Elvang:2016qvq,Elvang:2013cua,Elvang:2015rqa}: \begin{itemize} \item There can be no theories in flat space with massless particles of spin greater than 2 interacting with gravity. \item There can only be one graviton field (i.e.~only one massless spin-2 particle); it must self-interact, and it must couple in exactly the same way to any other particle (the equivalence principle). \item A spin-3/2 particle must couple supersymmetrically to the spin-2 graviton. \item Massless spin-1 fields can only self-interact if there is a Lie algebra structure with 3-index fully antisymmetric structure constants, and \item An $\mathcal{N}=8$ superconformal 3d theory requires the existence of fully antisymmetric 4-index structure constants \cite{Huang:2010rn}. \end{itemize} As this hopefully illustrates, the field of amplitudes concerns a diverse range of subjects. At this point, the annual Amplitudes conference, now in its 12th year, attracts a few hundred international participants (and even more in its recent online Zoomplitudes version). We have highlighted here some general directions and current areas of interest, but of course this is in no way complete. The interested reader may want to consult one of the several recent reviews and textbooks on modern methods in amplitudes, see for example \cite{Elvang:2013cua,Elvang:2015rqa,Henn:2014yza,Dixon:2013uaa,Cheung:2017pzi}. I am now switching gears to discuss in a little more detail one direction that shares ideology with the conformal bootstrap program. In the following, I outline the ideas; then in Section \ref{s:ampbootex} I provide a very detailed example of its implementation and results. \subsection{Amplitude Bootstrap on the Space of QFTs} \label{s:ampboot} Suppose someone asks: \begin{quote} {\em ``Does there exist a local relativistic QFT with two massless real scalars such that every tree amplitude vanishes in the limit where a single momentum is taken soft? Is such a model unique? Must it have any particular symmetries, such as an interchange symmetry that requires the scalar particles to be on equal footing?''} \end{quote} The vanishing soft limit means that $A_n \to 0$ as $p_i^\mu \to 0$ for any on-shell external momentum $i=1,2,\ldots,n$. These soft limits are called Adler zeros and were first discovered in the context of pion scattering \cite{Adler:1964um}. The physical context of this question is that such a model describes the low-energy dynamics of two massless Goldstone scalar particles arising from some spontaneous symmetry breaking. For any explicitly given symmetry-breaking pattern of some symmetry group $G$ to a subgroup $H$, there are techniques for the systematic construction of Lagrangians for the Goldstone modes. But for more open-ended questions aimed at understanding the space of possible theories and any additional emergent symmetries they may have, the Lagrangian approach is limited and often complicated by field redefinitions that can obscure symmetry properties. A traditional `bottom-up' approach to such a question is to try to write down a Lagrangian with kinetic terms for the two scalar fields and some local interactions that preserve the desired symmetries. Suppose we fail to construct a Lagrangian with these properties: does it mean that such a theory does not exist? Or did we miss out on some smart way to do this?
Or suppose we did succeed in writing down a Lagrangian with these properties: is it unique? Or did we miss some other allowed interactions? Are there different ways to write the Lagrangian that nonetheless result in exactly the same observables, for example by being related by field redefinitions? A modern approach to such questions is to start with the physical observables, namely the amplitudes. A clear advantage of this approach is that the amplitudes are independent of field redefinitions (and in the case of gauge fields, the amplitudes are gauge invariant, so gauge choices and so on do not matter). The symmetries of the model manifest themselves on the amplitudes via Ward identities: linear relationships among amplitudes, valid either at all generic momenta or in certain momentum limits. In the on-shell approach, one imposes on the amplitudes a set of assumptions about the particle spectrum of the model and its symmetries (exact or spontaneously broken), as well as mathematical consistency. At tree-level, the consistency conditions refer to properties like: \begin{tabular}{lcp{11cm}} {locality} ~~&$\implies$&{correct simple poles corresponding to exchanges of physical particles; no spurious (unphysical) poles.} \\ {unitarity} ~~&$\implies$&{the residues on the simple poles factorize into lower-point on-shell amplitudes.} \end{tabular} One starts with the most general ansatz for the lowest-point amplitudes subject to the symmetries. As the lowest-point amplitudes in the model, they cannot have any physical poles, since there are no lower-point amplitudes they could factorize into. So they must be polynomials in the external data (momenta, polarizations, etc.), and each independent polynomial corresponds to an independent interaction term in an associated Lagrangian. Independence here means independence under the use of momentum conservation and other algebraic identities; at the level of the Lagrangian, momentum conservation simply translates to integration-by-parts. Next, fuse the lowest-point amplitudes together to make higher-point amplitudes, for example via a recursion relation of some valid form. The higher-point amplitudes must have the required symmetries too. This may fix constants in the parametrization of the amplitudes. It may even set the amplitudes to zero. If all constants are set to zero by the mathematical consistency conditions, it means that there are no amplitudes that respect the requested symmetries, and hence there can be no such non-trivial QFT. For if there were, it would produce non-vanishing scattering amplitudes. On the other hand, if all imposed mathematical consistency checks are satisfied with some non-vanishing scattering amplitudes, then it is evidence that {\em perhaps} such a QFT may exist. It cannot be a definitive ``yes'' because there could be further restrictions arising at higher points, and one would have to work harder to {\em prove} the existence of a field theory. What we have described here is the ``amplitude bootstrap''. It is very powerful as a tool to rule out the existence of theories with overly strong symmetry requirements, but it cannot say ``yes, it does exist'' without further input. As it turns out, this ability to answer ``no'' or ``maybe'' is one thing it has in common with the conformal bootstrap, as we shall see in Section \ref{s:introbootstrap}. Let us illustrate the idea briefly using a combination of Lagrangian reasoning and on-shell amplitudes.
We want a model of two massless scalars $\phi_1$ and $\phi_2$ such that the amplitudes vanish in the limit where any one of the particle momenta goes soft, i.e.~$p^\mu_i \to 0$ for any one of the external momenta in the process. Any model with a $\phi^4$-type interaction would fail the criterion, since the 4-point amplitude would be a constant and therefore would not vanish in any soft limit. What about an interaction term like $\phi_1^2 (\partial \phi_2)^2$? Since $\partial_\mu \to i p_\mu$ and $p_i^2 = 0$, this gives a 4-point amplitude $A_4(\phi_1 \phi_1 \phi_2 \phi_2) = 2 p_3 \cdot p_4 = (p_3 + p_4)^2$ (the momenta are labeled 1, 2, 3, 4, matching the order in which the particles are listed in the amplitude). This vanishes when any one of the momenta is taken to zero, since the massless particles have $p_i^2=0$ and momentum conservation ensures $(p_1 + p_2)^2 = (p_3 + p_4)^2$. So we are good, right? Not so fast! At 6-point, the amplitude $A_6(\phi_1\phi_1\phi_1\phi_1\phi_2\phi_2)$ includes diagrams like \begin{equation} \label{A6diagex} \raisebox{-0.58cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw[dashed] (-1.5,0)--(3.5,0); \draw (2,0)--(3,1); \draw (2,0)--(3,-1); \node at (-1.4,1) {\small $1$}; \node at (-1.9,0) {\small $5$}; \node at (-1.4,-1) {\small$2$}; \node at (3.5,-1) {\small$4$}; \node at (3.9,0) {\small $6$}; \node at (3.5,1) {\small $3$}; \node at (1.17,0.55) {\small $P$}; \end{tikzpicture} } =(p_1 + p_2)^2\frac{1}{(p_1 + p_2 + p_5)^2}(p_3 + p_4)^2\,, \end{equation} where the solid lines indicate the $\phi_1$-particles and the dashed line the $\phi_2$-particles. In the limit where $p_5^\mu \to 0$, the diagram gives $(p_3 + p_4)^2 = 2 p_3\cdot p_4$, which is non-zero for generic momenta. So even though we had a 4-particle interaction that did the job we wanted for the 4-point amplitude, it failed at 6-point. Now there are of course other diagrams that contribute too: for example, we can exchange $(2 \leftrightarrow 3)$ and $(2 \leftrightarrow 4)$ in \reef{A6diagex}. Together with \reef{A6diagex}, these then contribute \begin{equation} \label{interm1} 2 p_3\cdot p_4 + 2 p_2\cdot p_4 + 2 p_3\cdot p_2 = (p_2 + p_3 + p_4)^2 \end{equation} to the $p_5 \to 0$ soft limit of the amplitude (using $p_i^2 = 0$). But there are also diagrams where line 1 is not on the same vertex as the soft line 5. They contribute \begin{equation} \label{interm2} 2 p_1\cdot p_2 + 2 p_1\cdot p_3 + 2 p_1\cdot p_4 = 2\, p_1 \cdot (p_2 + p_3 + p_4). \end{equation} Thus the pole diagrams contribute in total the sum of \reef{interm1} and \reef{interm2}, which, again using $p_1^2 = 0$, equals \begin{equation} (p_1+ p_2 + p_3 + p_4)^2 = (p_5 + p_6)^2 \end{equation} by momentum conservation. This is generically non-zero, so there is no way to get zero if only these 4-point interactions are included. We need 6-particle interactions in order to cancel the non-vanishing results from diagrams such as \reef{A6diagex}. This can for example be engineered from a Lagrangian interaction term of the form $\phi_1^4(\partial \phi_2)^2$ whose coefficient is tuned such that its contribution to the 6-point amplitude exactly cancels that of the pole terms like \reef{A6diagex} in the soft limit. Along with other terms needed at 6-point order, the continued consistency with vanishing soft limits at higher points ends up fully dictating the model! As for being on equal footing, it would appear from the above construction that $\phi_1$ and $\phi_2$ are not interchangeable.
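As a sanity check, the algebra above is easy to verify symbolically. Here is a minimal sketch (in Python with sympy; it is purely illustrative and not part of the original argument) confirming that the sum of \reef{interm1} and \reef{interm2} equals $(p_5+p_6)^2$ once momentum conservation is imposed and $p_1,\ldots,p_4$ are massless:
\begin{verbatim}
import sympy as sp

# Check: [interm1] + [interm2] - (p5+p6)^2 = -(p1^2+p2^2+p3^2+p4^2),
# which vanishes for massless external momenta.
eta = sp.diag(-1, 1, 1, 1)                      # mostly-plus metric
p = [sp.Matrix(sp.symbols(f'p{i}_0:4')) for i in range(1, 6)]
p.append(-sum(p, sp.zeros(4, 1)))               # p6 fixed by conservation

def dot(a, b):
    return (a.T * eta * b)[0, 0]

p1, p2, p3, p4, p5, p6 = p
lhs = 2*(dot(p3, p4) + dot(p2, p4) + dot(p3, p2)) + 2*dot(p1, p2 + p3 + p4)
rhs = dot(p5 + p6, p5 + p6)
masses = sum(dot(q, q) for q in [p1, p2, p3, p4])
print(sp.expand(lhs - rhs + masses))            # -> 0
\end{verbatim}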
However, when one is more careful about setting up the problem, a symmetry between $\phi_1$ and $\phi_2$ does in fact emerge. This is shown in full detail in Section \ref{s:ampbootex}. For now, what I wanted to illustrate here was the logic of how the constraints are imposed on the amplitudes and how this allows us to learn about the structure of the model. In the more general approach, we parameterize all 4-particle amplitudes in the most general way and then impose the soft constraints on the 4- and higher-point amplitudes (Section \ref{s:ampbootex}). This allows us to get ``no, it does not exist'' or ``maybe'' as the answer to whether such a QFT exists. This is very similar to how the conformal bootstrap works, as we now explain. \section{Introduction to the Modern Conformal Bootstrap} \label{s:introbootstrap} To set up a scattering problem, we start at time $t = - \infty$ with initial-state particles that are infinitely far apart and non-interacting (i.e.~free). Then the particles come together, scatter through some (weak) interaction process, and end up far apart as free particles in the far future, $t= +\infty$. These so-called initial and final asymptotic states of the far past and future are the in and out states of the perturbative scattering amplitudes. In a conformal field theory, there is no scale, so there is no sense of ``far apart'' or ``far into the past/future''. A CFT is scale invariant and has no asymptotic states. For that reason, a scattering amplitude is a priori not a good observable.\footnote{Nonetheless, amplitudes in some CFTs, such as in $\mathcal{N}=4$ SYM, play a central role in the amplitudes program. They can be understood as limits of amplitudes in a non-conformal QFT.} Moreover, if we wish to ask ``what CFTs exist?'' we have to include also theories without weakly coupled limits, i.e.~those in which we cannot talk about ``free'' particle states because the interactions are always strong. The {\em conformal bootstrap} is a method to explore the landscape of CFTs without the need for Lagrangians, weak couplings, or the concepts of particles or scattering amplitudes; instead, physical and mathematical consistency constraints are imposed on the observables of the CFT, namely the correlation functions of local operators. \subsection{Observables and CFT Data} Correlation functions are the central observables in a CFT. So, what does that mean? Classical fields are familiar from 4d electromagnetism: at each point in space and time, the electric $\vec{E}(\vec{x},t)$ and magnetic $\vec{B}(\vec{x},t)$ fields give a vector-valued result for the strength and direction of the electromagnetic field. They are local fields in that they depend on a single point in spacetime. Under Lorentz transformations, the electric and magnetic fields are mixed, and in relativistic contexts it is useful to combine them into the field strength $F_{\mu\nu}$, where in a given inertial frame the components of the electric and magnetic fields are $E_i = F_{ti}$ and $B_i = \tfrac{1}{2}\sum_{j,k=1}^3 \epsilon_{ijk} F_{jk}$. Here $\epsilon_{ijk}$ is fully antisymmetric in its indices and $\epsilon_{123}=1$. The field $F_{\mu\nu}$ is an example of an operator with non-zero spin. We can form other operators from the field strength, such as $F^2 = F_{\mu\nu} F^{\mu\nu}$ and $F^4$, etc. Examples of other operators are spin-0 scalar fields $\phi(x)$ and spin-1/2 fermion fields $\psi(x)$, and powers thereof, i.e.~$\phi^n$ and $\psi^2$.
We can take derivatives of operators to get new ``descendant'' operators, such as $\partial_\mu \phi = \partial \phi/\partial x^\mu$, etc. These are examples of operators based on local fields, but more generally we consider local operators in a completely abstract sense. We denote such operators as $\mathcal{O}_i$, where $i$ is a collective index that includes both the operator type and any Lorentz indices it may have. A property of operators that is important for our discussion is their {\em scaling dimension}. Under a scaling $x^\mu \to \lambda x^\mu$ of the spacetime coordinates, an operator scales homogeneously as \begin{equation} \label{defDelta} \mathcal{O}_i(x) \to \lambda^{\Delta_i} \mathcal{O}_i(\lambda x)\,. \end{equation} Consider a scalar field $\phi$ in a $d$-dimensional free theory. The action just has the kinetic term \begin{equation} S = \int d^d x\, \partial_\mu \phi \partial^\mu \phi \,. \end{equation} The scalar field has mass-dimension $[\phi]=(d-2)/2$ in order for the action to be dimensionless ($\hbar=c=1$). Performing a trivial change of integration variable $x \to x' = \lambda x$, we see that we must also have $\Delta_\phi = (d-2)/2$. As in this example, the scaling dimension $\Delta_i$ is the same as the mass-dimension of an operator $\mathcal{O}_i$ in a free theory. However, in an interacting quantum theory, $\Delta_i$ receives quantum corrections and generally departs from the free value. Hence $\Delta_i$ is treated as a real number in the following. In a conformal field theory, the symmetries are so powerful that the 1-, 2-, and 3-point correlation functions are completely fixed up to a set of constants in the 3-point correlators. 1-point functions vanish, $\<\mathcal{O}_i(x) \> = 0$. We write the 2-point and 3-point correlators of operators $\mathcal{O}_i$ diagrammatically as \begin{equation} \big \< \mathcal{O}_i(x) \mathcal{O}_j(y) \big\> = \raisebox{-0.2cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,0)--(1.2,0); \node at (-1.9,0) {\small $i$}; \node at (2.3,-.1) {\small $j$}; \end{tikzpicture} } ~~~~ \text{and} ~~~~~ \big \< \mathcal{O}_i(x) \mathcal{O}_j(y) \mathcal{O}_k(z) \big\> = \raisebox{-0.6cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw (-0,0)--(1.2,0); \node at (-1.4,1) {\small $i$}; \node at (-1.4,-1) {\small$j$}; \node at (1.6,0) {\small $k$}; \end{tikzpicture} } \end{equation} Operators with different scaling dimensions are orthogonal in the sense that their 2-point functions vanish. If there are degenerate operators with the same scaling dimension, they can be organized in a basis such that the 2-point correlators of distinct operators vanish: \begin{equation} \big \< \mathcal{O}_i(x) \mathcal{O}_j(y) \big\> = 0 ~~~ \text{for}~~~i \ne j \,. \end{equation} The unfixed constant in the 3-point correlator is called the ``OPE coefficient''. This name comes from the idea of the Operator Product Expansion (OPE), which states that in a local theory, the fluctuation generated by a product of operators in close proximity should be expressible as a sum of local operators. The coefficients of the operators in that sum are the OPE coefficients $c_{ijn}$: as $y \to x$, \begin{equation} \label{simpleOPE} \mathcal{O}_i(x) \mathcal{O}_j(y) \sim \sum_{n} c_{ijn} \,\mathcal{O}_n(x) \,. \end{equation} (We defer discussion of some of the spacetime dependence to Section \ref{s:confboot}.) The subscripts $i,j,n,\dots$ are collective indices that label the operators of the CFT.
The sum on the right is in principle over all operators in the theory, but some coefficients may be zero. For example, a fermionic operator cannot appear in the OPE of two bosonic operators. Multiplying \reef{simpleOPE} by $\mathcal{O}_k(z)$ and taking the expectation value (think of the analogue in undergraduate quantum mechanics here) selects on the RHS the coefficient $c_{ijk}$ via the ``orthogonality'' property of the 2-point correlation function, $\big \< \mathcal{O}_k(x) \mathcal{O}_n(y) \big\> \sim \delta_{kn} \times$(function of $|x-y|$). Hence one finds that the LHS $\big \< \mathcal{O}_i(x) \mathcal{O}_j(y) \mathcal{O}_k(z) \big\>$ is determined by the OPE coefficient $c_{ijk}$. In the above, we are completely glossing over the details of the dependence on the spacetime coordinates $x,y,z$; those who wish to see a more detailed account can find it in the technical review in Section \ref{s:confboot}. In order to explore the landscape of CFTs, we have to describe what characterizes a CFT. For the purpose here, we take a CFT to be determined by \begin{itemize} \item all operators $\mathcal{O}_i$ with their {\em spin} $s$ ($s= 0,\tfrac{1}{2}, 1,\ldots$) and {\em scaling dimension} $\Delta_{i}$, and \item the OPE coefficients $c_{ijk}$. \end{itemize} This is jointly called the CFT data: $\{ (s_i,\Delta_i) , c_{ijk} \}$. This data defines a CFT, assuming that it satisfies the mathematical consistency constraints of a conformal field theory. We now proceed to discuss one such key constraint used in the conformal bootstrap. The 4-point correlators are not completely fixed by conformal symmetry, but we still have a certain handle on them. Suppressing the spacetime dependence, $\big \< \mathcal{O}_1 \mathcal{O}_2 \mathcal{O}_3 \mathcal{O}_4\big\>$ can be expressed via the OPE. For example, we can use the OPE on $\mathcal{O}_1 \mathcal{O}_2$ and $\mathcal{O}_3 \mathcal{O}_4$, or alternatively on $\mathcal{O}_1 \mathcal{O}_4$ and $\mathcal{O}_2 \mathcal{O}_3$. The two expansions have to give the same result. Diagrammatically, we can illustrate this as \begin{equation} \label{crossingsimple} \big \< \mathcal{O}_1 \mathcal{O}_2 \mathcal{O}_3 \mathcal{O}_4\big\> ~~=~~ \sum_\mathcal{O} \raisebox{-0.6cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw (0,0)--(2,0); \draw (2,0)--(3,1); \draw (2,0)--(3,-1); \node at (-1.4,1) {$1$}; \node at (-1.4,-1) {$2$}; \node at (3.4,-1) {$3$}; \node at (3.4,1) {$4$}; \node at (1.2,0.55) {$\mathcal{O}$}; \end{tikzpicture} } ~~=~~ \sum_{\mathcal{O}'} \hspace{-0.3cm} \raisebox{-1.cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,2)--(0,1); \draw (1,2)--(0,1); \draw (0,-1)--(0,1); \draw (-1,-2)--(0,-1); \draw (1,-2)--(0,-1); \node at (-1.4,2) {$1$}; \node at (-1.4,-2) {$2$}; \node at (1.45,-2) {$3$}; \node at (1.45,2) {$4$}; \node at (0.8,0.1) {$\mathcal{O}'$}; \end{tikzpicture} } \,. \end{equation} This is called the {\em crossing relation}. The idea of the conformal bootstrap is to assume some CFT data and then subject it to the constraints of the crossing relation. It sounds perhaps too simple that the equivalence of two infinite sums can lead to significant and powerful constraints on the CFT data, but this is nonetheless the case. \subsection{Bootstrap of the 3d Ising Model} Consider, as an example, a scalar field $\sigma(x)$ in a 3d CFT. It has spin 0, and its scaling dimension is classically equal to its mass-dimension: $\Delta_\sigma= \tfrac{1}{2}$.
In a free theory, i.e.~with no interactions, the operator $\varepsilon(x) = (\sigma(x))^2$ has scaling dimension twice that of $\sigma(x)$, so $\Delta_\varepsilon= 1$. Now suppose quantum corrections in some putative interacting CFT make $\Delta_\sigma= 0.53$. Then $\Delta_\varepsilon$ no longer has to be $2\Delta_\sigma$; it has a quantum life of its own. But what possible values can it take? Would it be realistic to think that a small quantum correction that increases $\Delta_\sigma$ from $0.5$ to $0.53$ allows $\Delta_\varepsilon$ to become as big as, say, 7? Our intuition says that this is unreasonable: if the correction to $\Delta_\sigma$ is small, then the correction to $\Delta_\varepsilon$ should also be small. Indeed, by analyzing the crossing relation \reef{crossingsimple} for the 4-point correlator $\<\sigma \sigma \sigma \sigma \>$, it can be shown numerically \cite{ElShowk:2012ht} that there exists no 3d CFT with $\Delta_\sigma = 0.53$ and $\Delta_\varepsilon \gtrsim 1.45$. The numerical implementation of the conformal bootstrap takes advantage of the fact that the crossing relation for $\<\sigma \sigma \sigma \sigma \>$ can be reformulated into a statement that schematically looks like \begin{equation} \label{schm} \sum_{\mathcal{O}} c_{\sigma\sigma\mathcal{O}}^2\, \vec{v}_{\sigma\sigma\mathcal{O}} ~=~ 0\,, \end{equation} where the $\vec{v}_{\sigma\sigma\mathcal{O}}$ are vectors in a multi-dimensional abstract space (for more details, see Section \ref{s:confboot}). In a unitary CFT, the OPE coefficients $c_{\sigma\sigma\mathcal{O}}$ are real-valued, so the coefficients in \reef{schm} are non-negative. Thus, consistency with the crossing relation requires that a sum of vectors with non-negative coefficients vanishes. This is a non-trivial constraint. The key point is that the functional form of the vectors $\vec{v}_{\sigma\sigma\mathcal{O}}$ is known, and the only unknowns in \reef{schm} are the scaling dimensions $\Delta_\mathcal{O}$ that the $\vec{v}_{\sigma\sigma\mathcal{O}}$ depend on and the OPE coefficients $c_{\sigma\sigma\mathcal{O}}$. So for a given input, say $\Delta_\sigma = 0.53$ in 3d, one can scan over values of $\Delta_\varepsilon$ and test numerically whether there exist solutions to \reef{schm}. A ``no'' means NO: there is no 3d CFT with scalar operators of those scaling dimensions. This is how the bound $\Delta_\varepsilon \gtrsim 1.45$ for $\Delta_\sigma = 0.53$ was found in \cite{ElShowk:2012ht}. A ``yes'' means MAYBE: the putative CFT is not ruled out, but that does not mean it exists. One would have to study operators of higher dimension and a broader set of 4-point correlators to find out if they are consistent with crossing too. So this analysis does not state whether or not there exists any 3d CFT with $\Delta_\sigma = 0.53$ and $\Delta_\varepsilon < 1.45$.\footnote{I'm glossing over much detail here. For example, the sum in \reef{schm} is over infinitely many operators, but it is controlled by knowing that the contributions from operators with large scaling dimension $\Delta_{\mathcal{O}}$ tend to be exponentially suppressed. There are also typically other assumptions made about the spectrum, for example the existence of a symmetry $\sigma \to -\sigma$ that implies that $\<\sigma\sigma\sigma\> = 0$, and hence that $\sigma$ cannot appear in the $\sigma\sigma$ OPE. Further, it is assumed that $\varepsilon$ is the lowest-dimension operator that can appear in the $\sigma\sigma$ OPE.} \begin{figure}[t!]
\centerline{\includegraphics[width=11cm]{Fig-kink-1203-6064.pdf}} \caption[Ising kink]{Plot showing the bounds on the scaling dimensions $\Delta_\sigma$ and $\Delta_\varepsilon$ of the two lowest-dimension operators in 3d CFTs with $\mathbb{Z}_2$ symmetry. The white region is excluded. This is based on numerical examination of the crossing constraints on a single correlator $\< \sigma\sigma\sigma\sigma\>$. The kink in the boundary between the excluded and non-excluded regions occurs near the expected location of the 3d Ising model. This plot was originally presented in \cite{ElShowk:2012ht} by El-Showk, Paulos, Poland, Rychkov, Simmons-Duffin, and Vichi, and is reproduced here with the permission of the authors.} \label{fig:kink} \end{figure} When the crossing constraints are simultaneously applied to multiple correlators, the numerical bootstrap becomes even more powerful. For example, when applied \cite{Kos:2014bka} simultaneously to the three correlators $\<\sigma \sigma \sigma \sigma \>$, $\<\sigma \sigma \varepsilon \varepsilon \>$, and $\<\varepsilon \varepsilon \varepsilon \varepsilon \>$, the conformal bootstrap with $\sigma \to - \sigma$ symmetry {\em rules out} any interacting 3d CFT with $\Delta_\sigma = 0.53$! While this is nice, there is an even more impressive and important result coming out of this analysis. To back up, consider again the numerical bootstrap applied to the single correlator $\<\sigma \sigma \sigma \sigma \>$. As a function of $\Delta_\sigma$, a scan over possible values of $\Delta_\varepsilon$ gives a bound $\Delta_\varepsilon < f(\Delta_\sigma)$ for some function $f$. The plot is shown in Figure \ref{fig:kink}. The white region is ruled out; the shaded region is not ruled out by this analysis. The key property to note here is the indication of a kink in the boundary curve near $\Delta_\sigma \approx 0.52$ and $\Delta_\varepsilon \approx 1.42$ \cite{ElShowk:2012ht}. Those values are in fact close to the ones expected for the two lowest-dimension operators in the 3d Ising model! Now, beefing up the analysis to apply the numerical bootstrap to the three correlators simultaneously, it was found \cite{Kos:2014bka} that a small island around the expected `location' of the 3d Ising model is cut out: see Figure \ref{fig:island}. This means that a large region of parameter space (white in the plot) is ruled out by the crossing constraints, and there is a small shaded island-region that is not ruled out. \begin{figure}[t!] \centerline{\includegraphics[width=11cm]{Fig-island-1406-4858}} \caption{Plot showing the excluded regions of the scaling dimensions $\Delta_\sigma$ and $\Delta_\varepsilon$ of the two lowest-dimension operators in 3d CFTs with $\mathbb{Z}_2$ symmetry. This is based on numerical examination of the crossing constraints on three correlators: $\< \sigma\sigma\sigma\sigma\>$, $\< \sigma\sigma\varepsilon\varepsilon\>$, and $\< \varepsilon\varepsilon\varepsilon\varepsilon\>$. Where the kink occurred in Figure \ref{fig:kink}, there is now a small island of non-excluded theory-space, narrowing in on the 3d Ising model. This plot was originally presented in \cite{Kos:2014bka} by Kos, Poland, and Simmons-Duffin, and is reproduced here with the authors' permission.} \label{fig:island} \end{figure} Precision numerics has made it possible to zoom in on this island and constrain it much further.
The authors of \cite{Kos:2014bka,Kos:2016ysd} used this to determine \begin{equation} (\Delta_\sigma, \Delta_\varepsilon) = (0.5181489(10),1.412625(10)) \,, \end{equation} which is of higher precision than the available Monte Carlo results; see the comparison in Figure \ref{fig:bootMC}. Moreover, the numerical bootstrap determines to high precision the scaling dimensions and OPE coefficients of several of the lowest-dimension operators in the 3d Ising model, not just those of $\sigma$ and $\varepsilon$; see for example Table II in \cite{Poland:2018epd}. This offers evidence that the 3d Ising model may be a CFT, as proposed by Polyakov \cite{Polyakov:1970xd}. Earlier in this presentation we mentioned the 3d Ising model: recall from Section \ref{s:watermagnet} that the critical point of water (and of ferromagnets) is expected to be described by the 3d Ising model. The critical exponents of the approach to the critical point are directly related to the scaling dimensions $\Delta_\sigma$ and $\Delta_\varepsilon$. For example, the critical exponent $\nu$ of the correlation length in \reef{xi} is determined by $\Delta_\varepsilon$ as \begin{equation} \nu = \frac{1}{d-\Delta_\varepsilon} \,. \end{equation} Hence, for $d=3$ and the numerical bootstrap value of $\Delta_\varepsilon$, one finds $\nu = 0.62999(5)$ \cite{El-Showk:2014dwa}. Compare that with the experimental value of $\nu = 0.63$. It is clear that the formal theory exploration of the landscape of CFTs is relevant also for observable physics in our world. The idea of the conformal bootstrap dates back about 50 years \cite{Ferrara:1973yt,Polyakov:1974gs}, but it had a revival starting around 2008, when the problem was phrased in terms of bounding operator dimensions and especially when it was realized that the condition \reef{crossingsimple} can be phrased in a form that makes it particularly well-suited for numerical implementation. Initial work was driven by particle physics applications (see e.g.~\cite{Rattazzi:2008pe} and \cite{Poland:2010wg}), and the recent application to the 3d Ising model was initiated in \cite{Rychkov:2011et,ElShowk:2012ht}. Practically, this means that the crossing relation is rephrased as a statement about whether a set of vectors can add to zero using only non-negative coefficients. Semidefinite programming techniques \cite{Simmons-Duffin:2015qma} have turned out to be powerful approaches for assessing such questions numerically. By now, quite a number of analytical results have also been developed to further enhance the exploration of the CFT landscape. \begin{figure}[t!] \centerline{\includegraphics[width=11cm]{Fig-BootvsMC-1603-04436}} \caption{Comparison of the multi-correlator bootstrap results for the 3d Ising model versus Monte Carlo. This plot was originally presented in \cite{Kos:2016ysd} by Kos, Poland, Simmons-Duffin, and Vichi, and is reproduced here with the permission of the authors. } \label{fig:bootMC} \end{figure} \vspace{3mm} \noindent {\bf Other examples: Helium, QED$_3$, SCFTs Across Dimensions}\\[1mm] There have been a multitude of applications of --- and other developments closely related to --- the conformal bootstrap. A very nice example is the class of 3d CFTs with $O(N)$ global symmetry studied using the conformal bootstrap in \cite{Kos:2015mba,Kos:2016ysd}; in particular, the $O(2)$ model is expected to describe the $\lambda$-line superfluid transition in Helium-4.
The physics of this critical transition lies in a different universality class (sometimes also called the 3d XY universality class) from that of the liquid-vapor critical point that we described in Section \ref{s:watermagnet} for water and the Curie point of ferromagnets. The difference is manifest in the values of the critical exponents such as $\nu$, associated with the divergence of the correlation length: for the liquid-vapor critical point $\nu = 0.63\ldots$, but for the $\lambda$-line transition it is approximately $\nu = 0.67\ldots$. Experimental results \cite{Lipa:2003zz}, Monte Carlo simulations \cite{Campostrini_2006,Hasenbusch_2019}, and conformal bootstrap methods \cite{Chester:2019ifh} agree on the value of $\nu$ to two decimal places. The theoretical methods are in agreement beyond the leading digits within the uncertainties, with the conformal bootstrap giving the highest-precision result \cite{Chester:2019ifh,Ryckkovpage} \begin{equation} \nu = 0.67175(10) \,. \end{equation} However, there is an 8$\sigma$ tension between the theoretical values of $\nu$ and the experimental one \cite{Campostrini_2006,Chester:2019ifh,Ryckkovpage}. There are many other applications of the conformal bootstrap. Supersymmetric CFTs have been proposed to be relevant in the context of topological superconductors and for the description of a critical point on the surface of topological insulators. Another class of 3d CFTs arises as fixed points of 3d models with gauge fields. Two main classes have been examined: 3d Chern-Simons fields coupled conformally to matter, and bosonic QED$_3$ models. A nice overview of these topics is given in the review \cite{Poland:2018epd}. There are also several explorations of CFTs in 4d, as well as in 5d and 6d. The theories of particular interest to explore with the conformal bootstrap are those without Lagrangian descriptions, for example the 6d (2,0) supersymmetric CFT of M5-branes \cite{Beem:2015aoa} or the non-Lagrangian $\mathcal{N}=2$ supersymmetric CFTs in 4d \cite{Beem:2014zpa}. We mentioned in footnote \ref{footieN3} that Lagrangian $\mathcal{N}=3$ SYM is equivalent to the $\mathcal{N}=4$ SYM theory. However, non-Lagrangian $\mathcal{N}=3$ theories without $\mathcal{N}=4$ SUSY are not ruled out \cite{Aharony:2015oyb,Garcia-Etxebarria:2015wns}, and conformal bootstrap techniques have been applied to place bounds on the space of $\mathcal{N}=3$ SCFTs \cite{Lemos:2016xke}. The conformal bootstrap has also been applied in conjunction with another modern technique, supersymmetric localization, to derive results about the M2-brane theory in M-theory \cite{Agmon:2017xes}. The conformal bootstrap is a powerful method with wide applicability. Its numerical implementation has produced new results, not only mapping out the space of possible CFTs but also pinpointing the properties of known ones, such as their critical exponents. There are very nice reviews about CFTs and the conformal bootstrap, see for example \cite{Poland:2016chs}, \cite{Simmons-Duffin:2016gjk}, \cite{Rychkov:2016iqz}, \cite{Poland:2018epd}, and \cite{Chester:2019wfx}. The presentations here and in Section \ref{s:confboot} rely heavily on those references.
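Before turning to the technical sections, here is a toy illustration of the exclusion logic described above (in Python; the vectors are random stand-ins, not actual conformal-block data, and state-of-the-art implementations use semidefinite programming over continuous spectra rather than a finite linear program). Separating out the contribution of the identity operator, whose coefficient is fixed, crossing demands $\vec{v}_\text{id} + \sum_{\mathcal{O}} c_{\mathcal{O}}^2\, \vec{v}_{\mathcal{O}} = 0$ with $c_{\mathcal{O}}^2 \ge 0$; if a functional $\alpha$ exists with $\alpha \cdot \vec{v}_\text{id} = 1$ and $\alpha \cdot \vec{v}_{\mathcal{O}} \ge 0$ for every allowed operator, no non-negative coefficients can satisfy crossing and the assumed spectrum is ruled out:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Toy exclusion test: does a functional alpha exist with
#   alpha . v_id = 1   and   alpha . v_O >= 0 for all allowed operators?
# If yes, the assumed spectrum is excluded.
rng = np.random.default_rng(1)
ncomp = 5                                  # components of the functional
v_id = rng.normal(size=ncomp)              # stand-in for the identity vector
v_ops = rng.normal(size=(40, ncomp))       # stand-ins for operator vectors

res = linprog(c=np.zeros(ncomp),           # pure feasibility problem
              A_ub=-v_ops, b_ub=np.zeros(len(v_ops)),  # alpha.v_O >= 0
              A_eq=v_id.reshape(1, -1), b_eq=[1.0],    # alpha.v_id = 1
              bounds=[(None, None)] * ncomp)
print("excluded" if res.success else "not excluded (maybe a CFT)")
\end{verbatim}
In the actual bootstrap, the components of $\alpha$ are derivatives of the crossing equation evaluated at a symmetric point, and the positivity must hold for continuous families of operator dimensions, which is why semidefinite programming is the tool of choice.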
\section{Technical: Amplitude Bootstrap} \label{s:ampbootex} In this section, we revisit the question posed at the beginning of Section \ref{s:ampboot} and address it in full technical detail: \begin{quote} {\em ``Does there exist a local relativistic QFT with two massless real scalars such that every tree amplitude vanishes in the limit where a single momentum is taken soft? Is such a model unique? Must it have any particular symmetries, such as an interchange symmetry that requires the scalar particles to be on equal footing?''} \end{quote} Let us begin by phrasing the question in the traditional QFT language, i.e.~in terms of fields and symmetries of a Lagrangian. First, it is convenient to combine the two real scalar fields $\phi_1$ and $\phi_2$ into a complex scalar \begin{equation} Z = \phi_1 + i \phi_2\,,~~~~~~ \bar{Z} = \phi_1 - i \phi_2\,. \end{equation} A canonical kinetic term can be written as $\partial_\mu Z \partial^\mu \bar{Z}$. In the bottom-up Lagrangian approach, one assumes such a kinetic term and then builds up interaction terms. Secondly, the vanishing soft limit is related to (but not identical to) the Lagrangian having a shift symmetry $Z \to Z + c + \ldots$, where $c$ is a complex-valued constant and the ellipses stand for field-dependent terms. (For a more precise statement of this relation, see Section \ref{s:coset}.) Models in which there is at least one derivative on every scalar field trivially have such a shift symmetry and also have amplitudes with vanishing soft limits. For example, an interaction like $(\partial Z)^4$ gives Feynman rules in which each term is a product of the four momenta, and therefore the vertex vanishes when any one of them is taken soft. This trivial way of realizing the shift symmetry gives contributions to amplitudes at $O(p^4)$, so the question here really is whether there are models that realize the symmetries at lower order in the low-energy expansion. With no derivatives on the fields in the interactions, the model cannot have a shift symmetry or vanishing soft limits. A 2-derivative theory gives $O(p^2)$ amplitudes, and it could have 4-point interactions such as $Z^2 (\partial {Z})^2$ and similar terms with $\bar{Z}$'s too. Such terms do not obviously have a shift symmetry or give amplitudes with vanishing soft limits. So we have to add other interaction terms to achieve this, and the question is how much freedom there is to do so. Thirdly, the question of whether the two real scalars $\phi_1$ and $\phi_2$ are on equal footing translates to whether the model has an $SO(2)$ symmetry that acts on $(\phi_1,\phi_2)$ as a rotation. In the language of the complex scalar, this is equivalent to asking if there is a $U(1)$ symmetry that takes $Z \to e^{i\alpha}Z$ and $\bar{Z} \to e^{-i\alpha} \bar{Z}$ for some constant $\alpha$. We do {\em not} assume a $U(1)$ symmetry, but we will see that it emerges in the leading-order model. Rather than attempting to build a Lagrangian whose Feynman rules give amplitudes that vanish in the soft limit, the modern on-shell amplitude approach starts with the amplitudes, in order to systematically determine which models can be ruled out and to map out which ones may exist. \subsection{Setup} We rephrase the problem in terms of on-shell amplitudes. \vspace{2mm} \noindent {\bf Variables}. We consider scattering of $n$ massless particles, so the external 4-momenta $p_i^\mu$, where $i=1,2,3,\dots, n$, are required to satisfy the on-shell condition $p_i^2 = p_{i\mu} p_i^\mu = 0$.
Momentum conservation requires\footnote{The conservation of momentum is really that the sum of incoming momenta must equal the sum of outgoing momenta. As a technical tool, crossing symmetry is often used to trade incoming particles for outgoing ones, so that the amplitude has all external particles on equal footing as outgoing. This helps make the calculations a bit simpler, and once a result is obtained, particles can simply be ``crossed'' back to let some of them be incoming again for the purpose of computing the cross-section.} \begin{equation} \label{mymomcons} \sum_{i=1}^n p_i^\mu = 0\,. \end{equation} The amplitude must depend on the external momenta in a Lorentz invariant way, namely as dot-products $p_i.p_j = p_{i\mu} p_j^\mu$, where $i$ is the particle label $i=1,2,3,\dots, n$. Since $p_i^2=0$ for all $i$, we have \begin{equation} s_{ij} \equiv (p_i + p_j)^2 = 2 p_i.p_j \,,~~~~~~~ s_{ijk} \equiv (p_i + p_j+p_k)^2 = 2 p_i.p_j +2 p_i.p_k+2 p_j.p_k \,,~~~~\text{etc.} \end{equation} The Mandelstam variables $s_{ij\ldots}$ are not all independent due to the constraints of momentum conservation \reef{mymomcons}. For example, for $n=4$, we have \begin{equation} \label{n4momcons} n=4\text{:}~~~~~~ s_{12}=s_{34}\,,~~~~ s_{13}=s_{24} \,,~~~~ s_{14}=s_{23}\,,~~~~ \text{and}~~~~ s_{12} + s_{13}+ s_{14} = 0\,. \end{equation} Thus the scalar amplitudes are functions of Mandelstam variables subject to the constraints of momentum conservation. \vspace{2mm} \noindent {\bf Analytic Structure.} Tree amplitudes must be rational functions of Mandelstam variables. In a local theory of massless scalars, they can have simple poles (and no higher-order poles) at locations where $s_{ij\ldots k} \to 0$ and the residue of such a pole is, by unitarity, a product of lower-point amplitudes. There can also be polynomial terms in the amplitude; in an $n$-point amplitude such polynomial terms can arise from $n$-point interactions in the Lagrangian. \vspace{2mm} \noindent {\bf Bose Symmetry.} Bose symmetry requires that the amplitude is invariant under exchanges of identical bosons. Specifically, $A_n(Z \dots Z \bar{Z} \dots \bar{Z})$ must be symmetric under all permutations of the momenta of the $Z$'s, and likewise for those of the $\bar{Z}$'s. \vspace{2mm} \noindent {\bf Vanishing Single Soft Limits}. As any one of the external momenta is taken to zero, the amplitude has to vanish in that limit: \begin{equation} \label{softlimit} A_n(Z \dots Z \bar{Z} \dots \bar{Z}) \to 0 ~~~\text{for any}~~~p_i\to 0\,. \end{equation} As noted in Section \ref{s:ampboot}, this is called a vanishing soft theorem or an Adler zero \cite{Adler:1964um}. \vspace{2mm} \noindent {\bf $U(1)$ Symmetry}. The $U(1)$ symmetry can be understood as the statement that $Z$ particles have charge $+1$ and $\bar{Z}$ particles have charge $-1$. The associated Ward identity then says \begin{equation} A_n(\underbrace{Z \dots Z}_{n_Z} \underbrace{\bar{Z} \dots \bar{Z}}_{n_{\bar{Z}}}) ~=~ 0 ~~~~\text{for}~~~~ n_Z \ne n_{\bar{Z}}\,. \end{equation} We do not assume $U(1)$ symmetry; we shall see it emerge in the leading-order theory in the sense that it only has non-vanishing amplitudes with $n_Z = n_{\bar{Z}}$. \vspace{2mm} The goal is to construct the model subject to the constraints of vanishing soft limit at the lowest possible order in the energy-momentum expansion; higher-order terms are considered in Section \ref{s:hdcorrections}. To streamline the discussion, we first look at amplitudes that violate the $U(1)$, then those that respect it. 
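\vspace{2mm} \noindent {\bf Aside: a numerical sanity check.} As a quick illustration of the kinematic setup, the following Python snippet (our own addition; the explicit back-to-back $2\to2$ parameterization is an illustrative assumption, not part of the analysis) builds massless all-outgoing 4-point kinematics and verifies the relations \reef{n4momcons}.
\begin{verbatim}
import numpy as np

# Sanity check of the n=4 kinematics: all particles outgoing (crossed),
# so momentum conservation reads p1 + p2 + p3 + p4 = 0 and p_i^2 = 0.
# The explicit 2->2 parameterization (energy E, angle theta) is an
# illustrative assumption.
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus Minkowski metric

def dot(p, q):
    return p @ eta @ q

E, theta = 1.0, 0.7
p1 = -np.array([E, 0.0, 0.0,  E])        # crossed incoming particle
p2 = -np.array([E, 0.0, 0.0, -E])        # crossed incoming particle
p3 =  np.array([E,  E*np.sin(theta), 0.0,  E*np.cos(theta)])
p4 =  np.array([E, -E*np.sin(theta), 0.0, -E*np.cos(theta)])
p = [p1, p2, p3, p4]

assert np.allclose(sum(p), 0.0)                    # momentum conservation
assert all(abs(dot(q, q)) < 1e-12 for q in p)      # on-shell conditions

def s(i, j):
    return dot(p[i-1] + p[j-1], p[i-1] + p[j-1])

assert np.isclose(s(1, 2), s(3, 4))
assert np.isclose(s(1, 3), s(2, 4))
assert np.isclose(s(1, 4), s(2, 3))
assert np.isclose(s(1, 2) + s(1, 3) + s(1, 4), 0.0)
print("n=4 Mandelstam relations verified")
\end{verbatim}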
\subsection{Bootstrapping the Model: $U(1)$-Violating Amplitudes} \label{s:cp1bootnoU1} Amplitudes that do not obey the $U(1)$ symmetry are those with an unequal number of $Z$ and $\bar{Z}$'s. \begin{itemize} \item {\bf 3-point.} Momentum conservation for 3 massless particles sets all Mandelstams to zero, e.g.~$s_{12} = (p_1+p_2)^2 = p_3^2 = 0$. So there can be no momentum-dependence in the 3-particle scalar amplitudes; they have to be constants. For example, $A_3(ZZZ) = d_0$ and $A_3(ZZ\bar{Z}) = d_0'$. But this is at odds with the assumption of vanishing soft behavior \reef{softlimit}.\footnote{The special kinematics associated with taking soft limits of a 3-particle amplitude may appear tricky; however, a constant 3-particle amplitude necessarily implies a divergence of the 4-particle amplitudes via pole diagrams \cite{Elvang:2016qvq}, so the requirement of a vanishing soft limit rules out the 3-particle amplitudes.} We conclude that there can be no 3-point amplitudes. \item {\bf 4-point.} At 4-points, the $U(1)$-violating amplitudes are $A_4(Z Z Z Z )$, $A_4(Z Z Z \bar{Z} )$, and their conjugates. They cannot have poles, since there are no 3-particle amplitudes they could factor into (unitarity), so they have to be polynomial in the Mandelstam variables. Constant terms are excluded by the soft limit \reef{softlimit}. At $O(p^2)$, the only Mandelstam polynomial compatible with the Bose symmetry of 3 or 4 identical scalars is $s_{12}+s_{13}+s_{23}$, but that vanishes by momentum conservation \reef{n4momcons}. Hence $A_4(Z Z Z Z )$ and $A_4(Z Z Z \bar{Z} )$ start at $O(p^4)$. \item {\bf 5-point.} In the absence of 3-particle amplitudes, the 5-particle amplitudes cannot have poles, so they must be polynomial in the Mandelstam variables. A constant is incompatible with the vanishing soft limit. At order $O(p^2)$, Bose symmetry requires $A_5(ZZ Z ZZ)$ and $A_5(ZZ Z Z\bar{Z} )$ to be proportional to $\sum_{1\le i<j\le 5} s_{ij}$, but this is zero due to momentum conservation. The amplitude with three $Z$'s and two $\bar{Z}$'s is uniquely determined at $O(p^2)$ to be $A_5(ZZ Z \bar{Z}\bar{Z} ) = a s_{45}$, but this does not vanish in the limit of taking (say) $p_1 \to 0$. So we must set $a=0$ and hence 5-point amplitudes are at least $O(p^4)$. \item {\bf 6-point and above.} Just as in the 5-point case, any amplitude with all $Z$'s or a single $\bar{Z}$ vanishes at $O(p^2)$. With at least two of both $Z$ and $\bar{Z}$, there is a unique Bose symmetric Mandelstam polynomial at $O(p^2)$, namely $s_{12\ldots n_Z}$ where we have chosen the $Z$ particles to have momentum labels $i=1,2,\ldots,n_Z$. However, this is non-vanishing in the soft limits of any $\bar{Z}$ momenta. \end{itemize} Based on the above, we conclude that any $U(1)$-violating amplitude obeying the vanishing soft limit condition has to be at least $O(p^4)$. Such higher-order corrections are considered in Section \ref{s:hdcorrections}. The lesson here is that any 2-derivative (i.e.~$O(p^2)$ amplitudes) complex scalar model with vanishing single soft limits {\em must} have $U(1)$ symmetry: this is an example of a --- perhaps surprising --- emergent symmetry. \subsection{Bootstrapping the Model: $U(1)$-Conserving Amplitudes} \label{s:cp1boot} Amplitudes with $U(1)$ symmetry have an equal number of $Z$ and $\bar{Z}$'s, so in particular they must be even-point. \begin{itemize} \item {\bf 4-point $U(1)$ conserving.} Being the lowest-point amplitude, $A_4(Z \bar{Z} Z \bar{Z})$ cannot have poles and must be polynomial in the Mandelstam variables. 
A constant is excluded by the vanishing soft limit constraint, so we write the most general ansatz with the appropriate Bose symmetry: \begin{equation} A_4(Z \bar{Z} Z \bar{Z} ) = \frac{a_1}{\Lambda^2} \, s_{13} +O(p^4)\,. \end{equation} At order $O(p^2)$, the polynomial $s_{12}+s_{23} = s_{34}+s_{14}$ is also compatible with Bose symmetry, but by momentum conservation \reef{n4momcons} it is equal to $-s_{13}$. We have included a scale $\Lambda$ of mass-dimension $1$ so that $a_1$ is a pure number. The amplitude vanishes in the soft limit of any one of the momenta, so there are no constraints on $a_1$. \item {\bf 6-point.} The argument in Section \ref{s:cp1bootnoU1} shows that any polynomial term at $O(p^2)$ in a scalar amplitude with more than 4 external legs is non-vanishing in the single soft limit. This means that to achieve a vanishing soft limit beyond 4-points, there must be cancellations among the pole terms and the contact terms. \vspace{0.8mm} At 6-points, the pole terms must have residues that are products of two 4-point amplitudes, for example on the 123-channel: \begin{equation} \label{poleterm} \raisebox{-0.58cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw (-1.5,0)--(3.5,0); \draw (2,0)--(3,1); \draw (2,0)--(3,-1); \node at (-1.4,1) {\small $1$}; \node at (-1.9,0) {\small $\bar{2}$}; \node at (-1.4,-1) {\small$3$}; \node at (3.5,-1) {\small$\bar{4}$}; \node at (3.9,0) {\small $5$}; \node at (3.5,1) {\small $\bar{6}$}; \node at (1.17,0.55) {\small $P$}; \end{tikzpicture} } =~ \frac{A_4(Z_1 \bar{Z}_2 Z_3 \bar{Z}_P ) A_4(Z_{-P} \bar{Z}_4 Z_5 \bar{Z}_6 )}{s_{123}} = \frac{a_1^2}{\Lambda^4} \frac{s_{13} s_{46}}{s_{123}} \,. \end{equation} The most general contact terms can be parameterized as \begin{equation} \label{CT6} \raisebox{-0.58cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw (-1.5,0)--(1.5,0); \draw (1,1)--(0,0); \draw (1,-1)--(0,0); \node at (-1.4,1) {\small $1$}; \node at (-1.9,0) {\small$\bar{2}$}; \node at (-1.4,-1) {\small$3$}; \node at (1.5,-1) {\small$\bar{4}$}; \node at (1.9,0) {\small $5$}; \node at (1.5,1) {\small $\bar{6}$}; \end{tikzpicture} } = ~b_0 + b_1 \,s_{246} + O(p^4)\,. \end{equation} There are no other independent polynomials with Bose symmetry at $O(p^2)$. So we can write the 6-point amplitude as the sum of all the pole diagrams and the possible contact terms: \begin{equation} \label{A6} A_6(Z \bar{Z} Z \bar{Z}Z \bar{Z} ) = \frac{a_1^2}{\Lambda^4} \bigg( \frac{s_{13} s_{46}}{s_{123}} +\frac{s_{13} s_{26}}{s_{143}} +\frac{s_{13} s_{24}}{s_{163}} + (1 \leftrightarrow 5)+ (3 \leftrightarrow 5) \bigg) + b_0 + b_1\, s_{246} + O(p^4)\,. \end{equation} In the soft limit $p_6 \to 0$, the first two pole terms in \reef{A6} vanish while the third one reduces to $s_{24}$. Under the exchanges $(1 \leftrightarrow 5)$ and $(3 \leftrightarrow 5)$ two more $s_{24}$'s are generated. So we get \begin{equation} A_6(Z \bar{Z} Z \bar{Z}Z \bar{Z} ) \to 3 \frac{a_1^2}{\Lambda^4} s_{24} + b_0 + b_1 s_{24} + O(p^4)\,. \end{equation} Therefore, in order to have vanishing soft limits \reef{softlimit} at 6-points, we must have \begin{equation} \label{b1fixed} b_0=0 ~~~~\text{and}~~~~ b_1 =- 3 \frac{a_1^2}{\Lambda^4}\,. \end{equation} Hence, the 4-point and 6-point amplitudes are completely fixed at order $O(p^2)$ in terms of just one number, the coupling constant $a_1$. 
\item {\bf 8-point.} The above pattern continues to higher-point amplitudes: at $O(p^2)$ the whole model is uniquely fixed by the symmetry requirements in terms of a number, $a_1$, and a single dimensionful scale $\Lambda$. \vspace{0.8mm} Consider, at 8-points, the $p_8 \to 0$ soft limit. Some diagrams directly vanish in this limit. Among the diagrams that do not vanish, those that produce terms with poles cancel among themselves. To see this, consider the three diagrams with a $1/s_{123}$ pole that do not vanish in the $p_8 \to 0$ limit: \begin{eqnarray} \label{8ptA} \hspace{-1cm} \phantom{33} \raisebox{-0.78cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw (-1.5,0)--(3.5,0); \draw (2,0)--(3.2,0.9); \draw (2,0)--(3.2,-0.9); \draw (2,0)--(2.5,1.3); \draw (2,0)--(2.5,-1.3); \node at (-1.4,1) {\small $1$}; \node at (-1.9,0) {\small$\bar{2}$}; \node at (-1.4,-1) {\small$3$}; \node at (2.9,1.5) {\small $\bar{8}$}; \node at (3.6,-1) {\small$7$}; \node at (3.9,0) {\small $\bar{6}$}; \node at (3.6,1) {\small $5$}; \node at (2.9,-1.5) {\small $\bar{4}$}; \end{tikzpicture} } &=&~ \frac{\big(\frac{a_1}{\Lambda^2}s_{13}\big)\big(-\tfrac{3a_1^2}{\Lambda^4}s_{468}\big)}{s_{123}} ~\to~ -3 \frac{a_1^3}{\Lambda^6} \frac{s_{13} s_{46}}{s_{123}} \,, \\ \label{8ptB} \hspace{-1cm} \raisebox{-0.83cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw (-1.5,0)--(5.5,0); \draw (4,0)--(5,1); \draw (4,0)--(5,-1); \draw (2,-1)--(2,1); \node at (-1.4,1) {\small $1$}; \node at (-1.9,0) {\small $\bar{2}$}; \node at (-1.4,-1) {\small$3$}; \node at (5.5,-1) {\small${5}$}; \node at (5.9,0) {\small $\bar{8}$}; \node at (5.5,1) {\small ${7}$}; \node at (2,1.6) {\small $\bar{4}$}; \node at (2,-1.6) {\small $\bar{6}$}; \end{tikzpicture} } &=&~ \frac{\big(\frac{a_1}{\Lambda^2}s_{13}\big) \big(\frac{a_1}{\Lambda^2}s_{46}\big) \big(\frac{a_1}{\Lambda^2}s_{57}\big)}{s_{123}\,s_{578} } ~\to~ \frac{a_1^3}{\Lambda^6} \frac{s_{13} s_{46}}{s_{123}} \,, \end{eqnarray} and \begin{equation} \label{8ptC} \begin{split} \raisebox{-0.83cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw (-1.5,0)--(5.5,0); \draw (4,0)--(5,1); \draw (4,0)--(5,-1); \draw (2,-1)--(2,1); \node at (-1.4,1) {\small $1$}; \node at (-1.9,0) {\small $\bar{2}$}; \node at (-1.4,-1) {\small$3$}; \node at (5.5,-1) {\small$\bar{4}$}; \node at (5.9,0) {\small $5$}; \node at (5.5,1) {\small $\bar{6}$}; \node at (2,1.6) {\small $7$}; \node at (2,-1.6) {\small $\bar{8}$}; \end{tikzpicture} } ~+~(5 \leftrightarrow 7) &~=~ \frac{\big(\frac{a_1}{\Lambda^2}s_{13}\big) \big(\frac{a_1}{\Lambda^2}s_{4568}\big) \big(\frac{a_1}{\Lambda^2}s_{46}\big)}{s_{123}\,s_{456} } ~+~(5 \leftrightarrow 7)\\[-2mm] &~\to ~2 \frac{a_1^3}{\Lambda^6} \frac{s_{13} s_{46}}{s_{123}} \,. \end{split} \end{equation} The three contributions \reef{8ptA}, \reef{8ptB}, and \reef{8ptC} cancel, and it is clear why: this is guaranteed by the vanishing of the 6-point amplitude in the soft limit. 
\vspace{0.8mm} Finally, there are pole diagrams with non-vanishing soft limits that do not cancel among themselves but leave behind polynomial terms: for example \begin{equation} \raisebox{-0.78cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw (-1.5,0)--(3.5,0); \draw (2,0)--(3.2,0.9); \draw (2,0)--(3.2,-0.9); \draw (2,0)--(2.5,1.3); \draw (2,0)--(2.5,-1.3); \node at (-1.4,1) {\small $7$}; \node at (-1.9,0) {\small$\bar{8}$}; \node at (-1.4,-1) {\small$1$}; \node at (2.9,1.5) {\small $\bar{6}$}; \node at (3.6,-1) {\small$3$}; \node at (3.9,0) {\small $\bar{4}$}; \node at (3.6,1) {\small $5$}; \node at (2.9,-1.5) {\small $\bar{2}$}; \end{tikzpicture} } = ~\frac{\big(\frac{a_1}{\Lambda^2} s_{17}\big)\big(-3\frac{a_1^2}{\Lambda^4}s_{246}\big)}{s_{178}} ~\to~ -3 \frac{a_1^3}{\Lambda^6} s_{246} ~~~~\text{as}~~~p_8 \to 0\,. \end{equation} There are 6 distinct such diagrams (from the 6 possible pairings of the odd-numbered momenta 1357 on the LHS), so \begin{equation} A_8(Z \bar{Z} Z \bar{Z}Z \bar{Z} Z \bar{Z}) \to -18 \frac{a_1^3}{\Lambda^6} s_{246} ~~~~\text{as}~~~ p_8 \to 0\,. \end{equation} This can be canceled by an 8-point local contact term $+18\tfrac{a_1^3}{\Lambda^6} s_{2468}$ in order to ensure the vanishing soft theorem. \item {\bf 10-point.} The pattern continues. The local terms that arise in the soft limit $p_{10} \to 0$ come from the pole diagrams containing the 8-point contact term, each of which gives $18 \tfrac{a_1^4}{\Lambda^8} s_{2468}$ in the limit. There are $\binom{5}{2}=10$ such diagrams, so the $p_{10}$ soft limit of all the pole terms can be cancelled by a contribution $-180 \tfrac{a_1^4}{\Lambda^8} s_{2468\,10}$ from a local 10-point interaction. \end{itemize} To summarize, we have found that the assumptions have fixed the amplitudes in the theory completely at the leading order of the low-energy expansion: the lowest-point non-vanishing amplitudes are at order $O(p^2)$ \begin{equation} \begin{split} A_4(Z \bar{Z} Z \bar{Z} ) &= \frac{1}{\Lambda^2} \, s_{13} \,, \\ A_6(Z \bar{Z} Z \bar{Z}Z \bar{Z} ) &= \frac{1}{\Lambda^4} \bigg( \frac{s_{13} s_{46}}{s_{123}} +\frac{s_{13} s_{26}}{s_{143}} +\frac{s_{13} s_{24}}{s_{163}} + (1 \leftrightarrow 5)+ (3 \leftrightarrow 5) -3s_{246}\bigg) \,, \\ A_8(Z \bar{Z} Z \bar{Z}Z \bar{Z} Z \bar{Z} ) &= \frac{1}{\Lambda^6}\bigg(\text{pole terms} +18 s_{2468}\bigg) \,, \\ A_{10}(Z \bar{Z} Z \bar{Z}Z \bar{Z} Z \bar{Z}Z \bar{Z} ) &= \frac{1}{\Lambda^8}\bigg(\text{pole terms} -180 s_{2468\,10}\bigg) \,. \end{split} \end{equation} Here we have set the 4-point coupling $a_1=1$ without any loss of generality. For those who love Lagrangians, one can retro-engineer the interaction terms based on the polynomial terms above to find \begin{equation} \label{CP1lag} \begin{split} \mathcal{L} = -\pa_\mu Z \pa^\mu \bar{Z} &+ \frac{1}{\Lambda^2}Z \bar{Z}\pa_\mu Z \pa^\mu \bar{Z} -\frac{3}{4} \frac{1}{\Lambda^4}Z^2 \bar{Z}^2\pa_\mu Z \pa^\mu \bar{Z} \\ & + \frac{1}{2}\frac{1}{\Lambda^6}Z^3 \bar{Z}^3\pa_\mu Z \pa^\mu \bar{Z} - \frac{5}{16}\frac{1}{\Lambda^8}Z^4 \bar{Z}^4\pa_\mu Z \pa^\mu \bar{Z} + \ldots \,, \end{split} \end{equation} where the dots stand for interactions with more than 10 fields. 
Here we have used that the $(2n)$-point matrix element of \begin{equation} \label{2nptint} Z^{n-1} \bar{Z}^{n-1} \pa_\mu Z \pa^\mu \bar{Z} = \frac{1}{n^2} (\pa_\mu Z^n)(\pa^\mu \bar{Z}^n) \end{equation} is \begin{equation} \label{2nptFeyn} \frac{1}{n^2} (n!)^2 \, i^2\bigg( \sum_{i~\text{odd}} p_i \bigg) \bigg( \sum_{j~\text{even}} p_j \bigg) ~=~ \big[(n-1)!\big]^2 s_{246 \dots 2n} \,, \end{equation} using momentum conservation. This was used to normalize the interaction terms so they exactly reproduce the local terms in the amplitudes above. For example, for the 10-point term in \reef{CP1lag}, the overall numerical factor of the local contact term contribution is $- \frac{5}{16} (4!)^2 = -180$. We can extend this reasoning to $2n$-points. Suppose the numerical coefficient of the $2n$-point interaction term in the Lagrangian is $\a_n$; e.g.~$\a_4 = \tfrac{1}{2}$. Then from \reef{2nptint} and \reef{2nptFeyn}, the polynomial term it contributes to the amplitude is $\a_n \big[(n-1)!\big]^2 s_{246 \dots 2n}$. On the other hand, the purpose of this term will be to cancel the local contribution from the soft limit of the $\binom{n}{2}$ pole diagrams involving the $2(n-1)$-point local contribution, which has coefficient $\a_{n-1} \big[(n-2)!\big]^2$. So we see that \begin{equation} \label{alpharecrel} \a_n \big[(n-1)!\big]^2 = -\binom{n}{2} \, \a_{n-1} \big[(n-2)!\big]^2 ~~~~\implies~~~~ \a_n = -\frac{1}{2} \frac{n}{(n-1)}\a_{n-1} \,. \end{equation} With $\a_1 = -1$ (the kinetic term), we get $\a_2 = 1$ and likewise we reproduce the numerical coefficients of other terms in \reef{CP1lag}. The recursive formula \reef{alpharecrel} is straightforward to solve and one finds \begin{equation} \a_n = (-1)^n \frac{n}{2^{n-1}} \,. \end{equation} Up to the overall sign of the Lagrangian, these are exactly the series coefficients of $1/{(1+\tfrac{1}{2}Z \bar{Z})^2}$ expanded around zero! Thus, including the scale $\Lambda$, we have discovered that the Lagrangian can be re-summed to \begin{equation} \label{CP1Lag} \mathcal{L} = -\frac{\pa_\mu Z \pa^\mu \bar{Z}}{\big(1+\tfrac{1}{2\Lambda^2}Z \bar{Z}\big)^2} = -G_{Z\bar{Z}} \pa_\mu Z \pa^\mu \bar{Z}\,, \end{equation} where $G_{Z\bar{Z}} = 1/{(1+\tfrac{1}{2\Lambda^2}Z \bar{Z})^2}$ can be identified as the Fubini-Study K\"ahler metric on $\mathbb{CP}^1$. This 2-scalar model is well-known: it is the $\mathbb{CP}^1$ nonlinear sigma model (NLSM). It describes two real Goldstone modes arising from the spontaneous symmetry breaking of $SU(2)$ to $U(1)$. The real Goldstones are paired into the complex scalar $Z$ that ``lives" in the symmetric coset space $SU(2)/U(1) \sim \mathbb{CP}^1$. The coset has the $U(1)$ symmetry: this is exactly the $U(1)$ symmetry that emerged in our on-shell analysis. At this point, the engaged reader may complain: but you said that the vanishing soft limits were associated with a shift symmetry $Z \to Z + c + \dots$\,!!!? This does not appear to be a symmetry of the Lagrangian \reef{CP1Lag}. However, a Lagrangian is not unique but can take a different form under field redefinitions; this cannot change the physical observables, the amplitudes. So the shift symmetry can be accompanied by a field redefinition, and in fact the Lagrangian \reef{CP1Lag} is invariant under the infinitesimal shift symmetry \begin{equation} Z \to Z + c + \bar{c} \frac{a_1}{2 \Lambda^2} Z^2 \,, ~~~~~ \bar{Z} \to \bar{Z} + \bar{c} + {c} \frac{a_1}{2 \Lambda^2} \bar{Z}^2 \,. 
\end{equation} It is one of the appealing features of the on-shell amplitudes approach that one does not have to deal with redundancies such as those arising from field redefinitions (or gauge transformations). The very simple amplitude analysis shows that at the leading 2-derivative order, the model had to have $U(1)$ symmetry; in that sense it is {\em emergent}! Recognizing the leading-order model as the $\mathbb{CP}^1 \sim SU(2)/U(1)$ NLSM, it is clear why the $U(1)$ had to be there. Next we discuss interactions beyond the leading order. \subsection{Beyond Leading Order} \label{s:hdcorrections} In the $\mathbb{CP}^1$ NLSM, it is fairly easy to see that there is only one possible 2-derivative operator at 4-point: $Z \bar{Z} \partial_\mu Z \partial^\mu \bar{Z} = \tfrac{1}{4}(\partial_{\mu} Z^2) (\partial^{\mu} \bar{Z}^2)$. Any other way of arranging the derivatives on the four fields is equivalent to this using integration by parts and the leading order equations of motion (EOM) $\Box Z = \partial_\mu \partial^\mu Z = 0$ and $\Box \bar{Z} = 0$. But what about higher derivative corrections? There are many ways of sprinkling four derivatives on the four fields, but how many are actually independent under integration by parts and use of the EOM? And with $2k$-derivatives? This is very easy to answer using the on-shell amplitude methods as we now demonstrate. \vspace{2mm} \noindent {\bf Higher Order Corrections.} A $2k$-derivative term generates contributions to the amplitudes at $O(p^{2k})$. So the question of the number of independent $2k$-derivative operators is simply rephrased as: how many Bose symmetric degree-$k$ polynomials in the Mandelstam variables are independent under the relations of momentum conservation (translates to integration by parts) and on-shellness (translates to EOM)? For the 4-point $U(1)$-conserving $O(p^4)$ case we find two such independent polynomials, \begin{equation} A_4^{O(p^4)}(Z \bar{Z} Z \bar{Z} ) = \frac{a_1}{\Lambda^2} \, s_{13} + \frac{a_2}{\Lambda^4} \, s_{13}^2 + \frac{a_2'}{\Lambda^4} \, (s_{12}^2 + s_{14}^2) + O(p^6)\,. \end{equation} The two new terms correspond to the two independent Lagrangian terms $\partial_\mu Z \partial_\nu \bar{Z} \partial^\mu Z \partial^\nu \bar{Z}$ and $Z \bar{Z} \partial_\mu \partial_\nu Z \partial^\mu \partial^\nu \bar{Z}$. For the case of $2k$ derivatives one similarly finds \begin{equation} \begin{array}{r|cccccccccc} 2k~& 0 & 2 & 4& 6 & 8 & 10 & 12 \\ \hline \# ~\text{independent $\pa^{2k} Z^2 \bar{Z}^2$ operators } & 1& 1 & 2 & 2& 3 & 3 & 4 \end{array} \end{equation} As $k$ or $n$ grows, it gets harder to determine by brute force the number of independent Mandelstam polynomials subject to the given constraints. However, there are powerful mathematical tools, such as Gr\"obner bases, for solving such problems, and they have indeed been applied for these purposes \cite{Beisert:2010jx}. At 4-point, the matrix elements trivially satisfy the vanishing soft-limit condition, simply due to the special 3-particle kinematics of the resulting limit. So one has to analyze more carefully at the 6-point level (and higher) whether cancellations can occur to ensure that the soft limit gives zero. \vspace{3mm} \noindent {\bf Lowest order $U(1)$-Violating Operators.} At leading order, the model we consider has $U(1)$ symmetry, but at subleading orders, our formulation of the problem allows for $U(1)$-violating terms. We found in Section \ref{s:cp1bootnoU1} that at 4-point, the $U(1)$-violating amplitudes start at $O(p^4)$. 
The explicit matrix elements at this order are \begin{equation} \label{u1viol4pt} A_4(ZZZZ) = \frac{d_1}{\Lambda^4} \big(s_{12}^2+s_{13}^2+s_{23}^2\big)+ O(p^6)\,, ~~~~~ A_4(ZZZ \bar{Z}) = \frac{d_2}{\Lambda^4} \big(s_{12}^2+s_{13}^2+s_{23}^2\big)+ O(p^6)\,, \end{equation} and similarly for those with conjugate states. At 5-point, the all-$Z$ matrix element is non-vanishing at $O(p^4)$, \begin{equation} \label{u1viol5pt} A_5(ZZZZZ) = \frac{d_3}{\Lambda^5}\sum_{i<j} s_{ij}^2 + O(p^6)\,, \end{equation} however, this amplitude does not vanish in the soft limit. Likewise, $A_5(ZZZZ\bar{Z})$ and $A_5(ZZZ\bar{Z}\bar{Z})$ have 2 and 3 independent matrix elements, respectively, but no linear combination of them vanishes in the soft limit. At $O(p^6)$, there are two independent terms in the 5-point amplitude with vanishing soft limit: \begin{equation} \begin{split} A_5(ZZZ\bar{Z}\bar{Z}) = &\frac{e_1}{\Lambda^7} s_{45} \big[ (s_{14}+s_{15})^2+(s_{24}+s_{25})^2 + (s_{34}+s_{35})^2 - 2s_{45}^2\big] \\ &+\frac{e_2}{\Lambda^7} \big[ s_{145} s_{14} s_{15} + s_{245} s_{24} s_{25} +s_{345} s_{34} s_{35}\big]\,. \end{split} \end{equation} Both vanish trivially as $p_4$ or $p_5 \to 0$. When one of the $Z$ particles goes soft, say $p_1 \to 0$, the resulting 4-particle kinematics $p_2+p_3+p_4+p_5 = 0$ ensures that both expressions in $[\ldots]$ vanish. These $O(p^6)$ operators are subleading to the $O(p^4)$ $U(1)$-violating ones from \reef{u1viol4pt}. There are of course many other operators one can construct. These examples simply illustrate the amplitude-based method. \subsection{Postscript: Coset Story} \label{s:coset} The context of the problem studied here is the low-energy physics of Goldstone bosons arising from spontaneous breaking of an internal symmetry group $G$ to a subgroup $H$. The number of Goldstone bosons is equal to the number of broken symmetry generators, i.e.~$\dim(G/H)=\dim(G)-\dim(H)$. The scalars `live' in the coset space $G/H$. When $G/H$ is a symmetric space and there are no cubic interactions, it can be shown that the amplitudes of the Goldstone bosons vanish in the single scalar soft limit. For specific symmetry breaking patterns $G \to H$, there are techniques for systematic construction of Lagrangians of the Goldstone modes \cite{Coleman:1969sm,Callan:1969sn,Volkov:1973vd}. But for more open-ended questions aimed at understanding the space of possible theories and any additional emergent symmetries they may have, the Lagrangian approach is limited. In our particular example with two real Goldstone modes, there are two obvious candidate theories: 1) $SU(2)$ broken to $U(1)$, or 2) $U(1) \times U(1)$ completely broken. As we have seen, the vanishing soft limit criterion selects the former. The example serves to illustrate how the bottom-up approach to construction of theories via amplitudes can have symmetries emerge that were not part of the input assumptions. There are examples where the emergence of symmetries is perhaps more surprising. This is, for example, the case with supersymmetric extensions of the $\mathbb{CP}^1$ model. When constructed from the on-shell amplitudes approach, one finds \cite{Elvang:2018dco} that not only does the $\mathcal{N}=1$ supersymmetric $\mathbb{CP}^1$ model have the $U(1)$ symmetry under which the scalars $Z$ and $\bar{Z}$ are charged (and their fermions are uncharged), it also has a second global $U(1)$ symmetry under which the scalars and fermions in the same supermultiplet have the same charge. 
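\vspace{2mm} \noindent {\bf Aside: checking the coupling recursion.} As a small numerical postscript to Section \ref{s:cp1boot} (a sketch we add for illustration, with $\Lambda=1$), the recursion \reef{alpharecrel}, its closed-form solution, and the Taylor expansion of the resummed $\mathbb{CP}^1$ metric in \reef{CP1Lag} can be checked against one another in a few lines of Python:
\begin{verbatim}
import sympy as sp

# Check the recursion alpha_n = -(1/2) n/(n-1) alpha_{n-1} with
# alpha_1 = -1 (kinetic term) against the closed form
# alpha_n = (-1)^n n / 2^(n-1) and against the Taylor coefficients of
# the resummed metric G = 1/(1 + x/2)^2, where x stands for Z*Zbar.
x = sp.symbols('x')
G = 1 / (1 + x/2)**2
coeffs = sp.Poly(sp.series(G, x, 0, 8).removeO(), x).all_coeffs()[::-1]

alpha = {1: sp.Integer(-1)}
for n in range(2, 9):
    alpha[n] = -sp.Rational(1, 2) * sp.Rational(n, n - 1) * alpha[n - 1]

for n in range(1, 9):
    closed = sp.Integer(-1)**n * sp.Rational(n, 2**(n - 1))
    assert alpha[n] == closed
    # L = -G dZ dZbar, so alpha_n is minus the coefficient of x^(n-1) in G.
    assert alpha[n] == -coeffs[n - 1]
print("recursion, closed form, and resummed metric agree")
\end{verbatim}
The sign flip relative to the series coefficients simply reflects the overall minus sign in $\mathcal{L} = -G_{Z\bar{Z}}\, \pa_\mu Z \pa^\mu \bar{Z}$.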
\section{Technical: Conformal Bootstrap} \label{s:confboot} This section gives a more technical account of the conformal bootstrap, offering some details that were left out of the introduction to the method given in Section \ref{s:introbootstrap}. We start with some CFT background, then discuss the bootstrap method. \subsection{CFT background} As described in Section \ref{s:introbootstrap}, an operator $\mathcal{O}_\Delta(x)$ is characterized by its spin $s$ (how it transforms under the Lorentz group) and its scaling dimension $\Delta$ introduced via the homogeneous scaling property \reef{defDelta}. Unitarity enforces a lower bound on the possible value of $\Delta$ for given spin $s$. For a scalar operator ($s=0$) in $d$ dimensions, this bound is \begin{equation} \label{Deltabound} \Delta \ge d/2 -1 \,. \end{equation} The bound is exactly saturated for a free scalar field, which has $\Delta =d/2 -1$. The correlation functions $\< \mathcal{O}_1(x_1) \mathcal{O}_2(x_2)\ldots \>$, i.e.~the vacuum expectation values of strings of local operators at different spacetime locations $x_i$, have to respect Poincar\'e symmetry; in particular, translation invariance means that they depend on the spacetime coordinates only via the differences $x_{ij}^\mu = (x_i -x_j)^\mu$. In particular, this means that a 1-point function $\< \mathcal{O}(x)\>$ must be a constant. By dimensionality, it therefore has to vanish in a CFT since a conformal theory has no dimensionful constants. One can use scale invariance to prove that a 2-point correlation function $\< \mathcal{O}_{1}(x_1) \mathcal{O}_{2}(x_2) \>$ vanishes unless $\Delta_1 = \Delta_2$. Moreover, the form of the correlation function is completely fixed by symmetries and one can organize the operators such that the 2-point correlation functions of scalar operators take the form \begin{equation} \label{2ptcorr} \< \mathcal{O}_i(x_i) \mathcal{O}_{j}(x_j) \> = \frac{\delta_{ij}}{|x_{ij}|^{2\Delta_i}} \,, \end{equation} where $|x_{ij}|^2 = x_{ij}^\mu x_{ij\mu}$. This expression has the correct scaling behavior under \reef{defDelta}. Conformal invariance fixes a 3-point correlation function up to a constant $\lambda_{ijk}$ as \begin{equation} \label{3ptcorr} \<\mathcal{O}_i(x_i) \mathcal{O}_j(x_j) \mathcal{O}_k(x_k) \> = \frac{\lambda_{ijk}}{|x_{ij}|^{\Delta_i + \Delta_j - \Delta_k} |x_{ik}|^{\Delta_i + \Delta_k - \Delta_j} |x_{jk}|^{\Delta_j + \Delta_k - \Delta_i}} \,. \end{equation} In a unitary theory, the $\lambda_{ijk}$'s are real. Now it feels like we are on a good roll with scale invariance determining almost everything for us, but the fun stops at 3-point. Or maybe we should say that the fun begins at 4-points? Starting with 4-point correlation functions, one can build conformal cross-ratios like \begin{equation} \label{uv} u = \frac{|x_{12}|^2 |x_{34}|^2}{|x_{13}|^2 |x_{24}|^2} ~~~~~\text{and}~~~~ v = \frac{ |x_{14}|^2 |x_{23}|^2}{|x_{13}|^2 |x_{24}|^2} \,, \end{equation} which are scale-invariant. One can also check that they are invariant under inversion; since conformal boosts are inversion-translation-inversion, they are also conformally invariant. 
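A quick numerical check of these invariances (our own illustrative sketch, with randomly chosen points in Euclidean signature) is the following:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def cross_ratios(xs):
    # u, v built from squared distances |x_ij|^2 as in the text.
    d2 = lambda i, j: float(np.sum((xs[i] - xs[j]) ** 2))
    u = d2(0, 1) * d2(2, 3) / (d2(0, 2) * d2(1, 3))
    v = d2(0, 3) * d2(1, 2) / (d2(0, 2) * d2(1, 3))
    return u, v

xs = rng.normal(size=(4, 4))      # four random points in Euclidean R^4
u0, v0 = cross_ratios(xs)

# Rotations, dilatations, and translations leave u and v unchanged...
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))      # random orthogonal map
assert np.allclose(cross_ratios(1.7 * xs @ Q.T + rng.normal(size=4)),
                   (u0, v0))

# ...and so does the inversion x -> x / |x|^2.
inv = xs / np.sum(xs ** 2, axis=1, keepdims=True)
assert np.allclose(cross_ratios(inv), (u0, v0))

# Exchanging points 2 and 4 swaps u and v.
assert np.allclose(cross_ratios(xs[[0, 3, 2, 1]]), (v0, u0))
print("u, v conformally invariant; u <-> v under 2 <-> 4")
\end{verbatim}
The last assertion also previews the $2 \leftrightarrow 4$ exchange property used next.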
As far as scale invariance is concerned, a 4-point correlation function can depend in arbitrary ways on $u$ and $v$; for example, for four identical scalar operators with scaling dimension $\Delta$, the expression \begin{equation} \label{4ptA} \<\mathcal{O}(x_1) \mathcal{O}(x_2) \mathcal{O}(x_3) \mathcal{O}(x_4) \> = \frac{1}{|x_{12}|^{2\Delta}|x_{34}|^{2\Delta}} \, g(u,v) \, \end{equation} has the correct scaling for any function $g$ of the conformal cross-ratios \reef{uv}. We could also have exchanged $2 \leftrightarrow 4$ and written \begin{equation} \label{4ptB} \<\mathcal{O}(x_1) \mathcal{O}(x_2) \mathcal{O}(x_3) \mathcal{O}(x_4) \> = \frac{1}{|x_{14}|^{2\Delta}|x_{23}|^{2\Delta}} \, g(v,u) \,. \end{equation} Note that $u \leftrightarrow v$ under $2 \leftrightarrow 4$, hence the exchange of the arguments in the function $g$. The two expressions \reef{4ptA} and \reef{4ptB} must give rise to the same correlation function, so the function $g$ is in fact not completely arbitrary but must obey \begin{equation} \label{gflip} g(u,v) = \bigg(\frac{u}{v}\bigg)^{\Delta}\,g(v,u) \,. \end{equation} This constraint plays a central role in the following. Let us now dive a little more deeply into the Operator Product Expansion (OPE) than we did in Section \ref{s:introbootstrap}. When $x^\mu$ is close to $y^\mu$, the product $\mathcal{O}_i(x) \mathcal{O}_j(y)$ of two local operators creates a local fluctuation, and as such it should be possible to describe it by some linear combination of local operators. In particular, in the limit $x^\mu \to 0$, the OPE takes the form \begin{equation} \label{OPE} \mathcal{O}_i(x) \mathcal{O}_j(0) \sim \sum_{n} c_{ijn}(x) \, \mathcal{O}_n(0)\,. \end{equation} The OPE functions $c_{ijn}(x)$ depend on $x$ and can in general be expected to be divergent as $x\to 0$. Applied to the 3-point correlation function $\<\mathcal{O}_i(x_i) \mathcal{O}_j(0) \mathcal{O}_k(x_k) \>$ as $x_i \to 0$, the OPE \reef{OPE} reduces it to a (sum of) simple 2-point correlators \reef{2ptcorr}. If we compare that result with the expression \reef{3ptcorr} with $x_j = 0$ and expand around $x_i = 0$, we find \begin{equation} c_{ijk}(x) = \frac{\lambda_{ijk}}{|x|^{\Delta_i+\Delta_j-\Delta_k}} \Big (1 + \alpha \, x^\mu \frac{\pa}{\pa x_k^\mu} + \ldots\Big) \end{equation} where the dots stand for terms with subleading powers in small $x$ with coefficients (like $\alpha$) that depend on the operator scaling dimensions $\Delta_i$, $\Delta_j$, and $\Delta_k$. This means that the OPE can be written \begin{equation} \label{OPE2} \mathcal{O}_i(x_i) \mathcal{O}_j(x_j) = \sum_{n} \lambda_{ijn} \,C_{ijn}(x_{ij},\tfrac{\pa}{\pa x_j}) \, \mathcal{O}_n(x_j)\,, \end{equation} where the sum is over {\em primary operators} $\mathcal{O}_n$. One can think of primary operators as the operators that cannot be obtained as derivatives (descendants) of other operators. Because of their appearance in \reef{OPE2}, the constants $\lambda_{ijk}$ in the 3-point correlation functions \reef{3ptcorr} are called {\em OPE coefficients}. One can apply the OPE to obtain expressions for higher-point correlation functions in terms of the so-called {\em CFT data}: \begin{itemize} \item A list $\{\Delta_i, R_i\}$ of all the operator scaling dimensions $\Delta_i$ and spin representations $R_i$ of the local primary operators of the theory. \item A list of all OPE coefficients $\lambda_{ijk}$. \end{itemize} Not just any list of $\{\Delta_i, R_i\}$ and OPE coefficients defines a CFT; there are certain constraints that must be obeyed. 
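As a concrete illustration of the constraint \reef{gflip} (an example we add for illustration; it is not needed for the argument), consider a generalized free field, whose 4-point function is the sum of the three products of 2-point functions. In the normalization of \reef{4ptA} this gives $g(u,v) = 1 + u^{\Delta} + (u/v)^{\Delta}$, and the crossing property can be verified symbolically:
\begin{verbatim}
import sympy as sp

# Symbolic check that the generalized-free-field correlator satisfies
# the crossing constraint g(u,v) = (u/v)^Delta g(v,u).
u, v, D = sp.symbols('u v Delta', positive=True)

def g(a, b):
    return 1 + a**D + (a / b)**D

assert sp.simplify((u / v)**D * g(v, u) - g(u, v)) == 0
print("GFF 4-point function satisfies the crossing relation")
\end{verbatim}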
At the core, the conformal bootstrap is about how to impose such a constraint and the remarkable mileage gained from it. \subsection{Bootstrap} Suppose we apply the OPE \reef{OPE2} twice in a 4-point correlation function to get \begin{equation} \label{4ptOPE12} \big\< \mathcal{O}_1(x_1) \mathcal{O}_2(x_2) \mathcal{O}_3(x_3) \mathcal{O}_4(x_4) \big\> = \! \sum_{ \mathcal{O}^a, \mathcal{O}^b} \lambda_{12a} \lambda_{34b} \, C_{12a}(x_{12},\pa_2) C_{34b}(x_{34},\pa_4) \big\< \mathcal{O}^a(x_2) \mathcal{O}^b(x_4) \big\>. \end{equation} The sum is over primary operators and $a, b$ are collective indices that include both operators and the index structure associated with their spin. Here we made a choice to pair 1 and 2 in the OPE and 3 with 4, but obviously there are 2 other possible choices. These three choices have to be equivalent since they are simply different representations of the same correlation function. The requirement of equivalence is very non-trivial and gives rise to the {\em crossing relations}, sometimes also called {\em OPE associativity}. Pictorially we can illustrate the crossing relations for the 12 pairing and the 14 pairing as \begin{equation} \sum_\mathcal{O} \lambda_{12\mathcal{O}} \lambda_{34\mathcal{O}} \raisebox{-0.6cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,1)--(0,0); \draw (-1,-1)--(0,0); \draw (0,0)--(2,0); \draw (2,0)--(3,1); \draw (2,0)--(3,-1); \node at (-1.4,1) {$1$}; \node at (-1.4,-1) {$2$}; \node at (3.4,-1) {$3$}; \node at (3.4,1) {$4$}; \node at (1.2,0.55) {$\mathcal{O}$}; \end{tikzpicture} } ~~=~~ \sum_{\mathcal{O}'} \lambda_{14\mathcal{O}'} \lambda_{23\mathcal{O}'} \hspace{-0.3cm} \raisebox{-1.cm}{ \begin{tikzpicture}[scale=0.4, line width=1 pt] \draw (-1,2)--(0,1); \draw (1,2)--(0,1); \draw (0,-1)--(0,1); \draw (-1,-2)--(0,-1); \draw (1,-2)--(0,-1); \node at (-1.4,2) {$1$}; \node at (-1.4,-2) {$2$}; \node at (1.45,-2) {$3$}; \node at (1.45,2) {$4$}; \node at (0.8,0.1) {$\mathcal{O}'$}; \end{tikzpicture} } \,. \end{equation} It is not hard to see that if the 4-point correlation functions satisfy crossing relations, then the same is true of the $n$-point functions. For the purpose of this presentation, we consider a CFT to be defined as a set of CFT data ($\{\Delta_i, R_i\}$ and $\lambda_{ijk}$'s) that satisfy the crossing relations for 4-point correlators.\footnote{~Other constraints may also be useful or needed, such as modular invariance in 2d or constraints arising from supersymmetry or the presence of boundaries, interfaces, and line operators \cite{Simmons-Duffin:2016gjk}.} As described in Section \ref{s:introbootstrap}, the idea of the {\em conformal bootstrap} is to take a set of CFT data of a putative CFT with specified symmetries and apply the constraints of the crossing relations to find out if such a CFT may exist. Take all four operators in the 4-point correlator to be identical scalar operators $\mathcal{O}$ with scaling dimension $\Delta$. 
The expression \reef{4ptOPE12} can then be written \begin{equation} \big\< \mathcal{O}(x_1) \mathcal{O}(x_2) \mathcal{O}(x_3) \mathcal{O}(x_4) \big\> = \frac{1}{|x_{12}|^{2\Delta}|x_{34}|^{2\Delta}} \sum_{ \mathcal{O}^a} \lambda_{\mathcal{O}\mathcal{O}a}^2 \, g_{\Delta_a,\ell_a}(u,v)\,, \end{equation} where we have introduced the {\em conformal blocks} \begin{equation} g_{\Delta_a,\ell_a}(u,v) = |x_{12}|^{2\Delta}|x_{34}|^{2\Delta} \,C_{12a}(x_{12},\pa_2) \,C_{34b}(x_{34},\pa_4) \, \frac{Y_{ab}}{|x_{24}|^{2\Delta_a}} \end{equation} using \begin{equation} \big\< \mathcal{O}_a(x_2) \mathcal{O}_b(x_4) \big\> = \frac{Y_{ab}(x_{24})}{|x_{24}|^{2\Delta_a}} \end{equation} in which $Y_{ab}$ captures the appropriate index structure for operators with non-vanishing spin and is simply $\delta_{ab}$ for scalar operators. (For the case here, only operators with even spin appear in the OPE.) Comparing with \reef{4ptA}, we see that the OPE has decomposed the function $g(u,v)$ as \begin{equation} g(u,v) = \sum_{ \mathcal{O}^a} \lambda_{\mathcal{O}\mathcal{O}a}^2~ g_{\Delta_a,\ell_a}(u,v) \,. \end{equation} This sum over primary operators is called the conformal block decomposition. Now we saw already in the previous subsection that the different ways of decomposing the 4-point correlation function required the function $g$ to satisfy \reef{gflip}, i.e.~$v^{\Delta} g(u,v) - u^{\Delta} g(v,u) = 0$. This means that the crossing relation becomes the mathematical requirement \begin{equation} \label{crossing1} \sum_{ \mathcal{O}^a} \lambda_{\mathcal{O}\mathcal{O}a}^2~ \Big( \underbrace{v^{\Delta} g_{\Delta_a,\ell_a}(u,v) - u^{\Delta} g_{\Delta_a,\ell_a}(v,u)}_{F^{\Delta}_{\Delta_a,\ell_a}(u,v)} \Big) ~=~0 \,. \end{equation} The conformal blocks $g_{\Delta_a,\ell_a}(u,v)$ are fixed by conformal symmetry, for example via a differential equation derived from the quadratic Casimir (or by series expansion or by recursion relations). In 2d and 4d, they can be expressed in terms of hypergeometric functions ${}_2F_1$. Hence, the functional form of $F^{\Delta}_{\Delta_a,\ell_a}(u,v)$ is known, and the only unknown ingredients in \reef{crossing1} are the scaling dimensions and the OPE coefficients. Thus for given input CFT data, the crossing relation \reef{crossing1} is a consistency condition. It is the central work-horse of the conformal bootstrap. We mentioned in \reef{schm} that the bootstrap equation \reef{crossing1} has a geometric interpretation: for a given list of vectors $\vec{v}_{\sigma\sigma\mathcal{O}}$, which we now identify as $F^{\Delta}_{\Delta_a,\ell_a}(u,v)$, do there exist real non-negative coefficients\footnote{ The OPE coefficients were called $c_{ijk}$ in Section \ref{s:introbootstrap} because we glossed over the dependence on the spacetime coordinates.} $\lambda_{\mathcal{O}\mathcal{O}a}^2$ such that the linear combination \reef{crossing1} vanishes? If one can show that for given assumptions on the conformal dimensions, all $F^{\Delta}_{\Delta_a,\ell_a}(u,v)$'s lie on one side of some plane, then no CFT can exist with that data because it could never satisfy the crossing relations. The basic approach in the (numerical) bootstrap can be described algorithmically: \begin{itemize} \item Make assumptions about the spectrum of the lowest dimension operators of a putative CFT: their scaling dimensions and spin. \item Test the crossing relation. Numerically, this can be done by searching for a linear functional $\alpha$ such that $\alpha \big(F^{\Delta}_{\Delta_a,\ell_a}(u,v)\big) \ge 0$ for all operators allowed by the assumptions. 
\end{itemize} If such an $\alpha$ exists, then the crossing relations can never be satisfied and no such CFT can exist. This means that the bootstrap algorithm is especially powerful for ruling theories out. If no $\alpha$ is found, this does not prove that a theory exists; the answer is at best a ``maybe". Scanning through the space of possible scaling dimensions of the lowest dimension operators numerically using semidefinite programming techniques has resulted in powerful bounds on the existence of CFTs in diverse dimensions, as we described for the 3d Ising model and other applications in Section \ref{s:introbootstrap}. We finish with one other example. \subsection{Example: Infinite Number of Primary Operators} As an example (borrowed from \cite{Rattazzi:2008pe,Rychkov:2009ij}, see also the review \cite{Simmons-Duffin:2016gjk}) of analytic bootstrap, we ask whether there exist any 4d CFTs with a finite number of primary operators. To answer this question, introduce complex variables $z$ and $\bar{z}$ such that the conformal cross-ratios \reef{uv} are \begin{equation} u = z\bar{z} ~~~\text{and}~~~ v = (1-z)(1-\bar{z})\,. \end{equation} Using the representation of the conformal blocks in terms of hypergeometric functions one can show that in the limit $z \to 0$, taken along the line $\bar{z} = z$, one has \begin{equation} g_{\Delta_a,\ell_a}(u,v) \to z^{\Delta_a} + \ldots ~~~~\text{and}~~~~ g_{\Delta_a,\ell_a}(v,u) \to \log(z) + \ldots \,. \end{equation} This means that $g(u,v)$ is dominated by the operator with the smallest scaling dimension, namely the identity operator which has $\Delta=0$, so that \begin{equation} g(u,v) =\lambda_{\mathcal{O}\mathcal{O}1}^2 \cdot 1 + \ldots\,. \end{equation} On the other hand, as $z \to 0$ along the line $\bar{z} = z$, the other side of the crossing relation \reef{gflip} behaves as \begin{equation} \bigg(\frac{u}{v}\bigg)^{\Delta} g(v,u) \to z^{2\Delta} \sum_{\mathcal{O}^a} \lambda_{\mathcal{O}\mathcal{O}a}^2 \log(z) + \ldots \,. \end{equation} Each individual term vanishes as $z \to 0$, so no finite sum could ever reproduce the constant identity contribution on the other side; this is only possible with infinitely many terms in the sum. So in order for crossing to hold in the form \reef{gflip}, {\em there must be an infinite number of primary operators in any 4d CFT.} \section{Concluding Remarks} \label{s:conclude} The amplitudes program and the conformal bootstrap share a common `philosophy': at the center of the explorations are the physical observables, the on-shell amplitudes and the correlation functions, respectively. The overlap goes beyond that, for example in the increasing use of common tools. Joint workshops, like the two summer programs on the Conformal Bootstrap and Amplitudes in 2015 and 2019 at the Aspen Center for Physics, contribute to the increased communication between the communities of researchers. And that will hopefully continue in the future. \section*{Acknowledgements} I would like to thank Gordon Baym for the invitation to write up my Aspen Colloquium. I am grateful to Huan-Hang Chi, Aidan Herderschee, Callum Jones, Matt Mitchell, and Shruti Paranjape for discussions and detailed feedback on the manuscript. I would like to thank Yu-tin Huang, David Poland, Rafael Porto, Silviu Pufu, and Slava Rychkov for helpful comments and corrections. This work was initiated at the Aspen Center for Physics which is supported by National Science Foundation grant PHY-1607611. The author is supported in part by the US Department of Energy under Grant No.~DE-SC0007859. \section*{References} \bibliographystyle{JHEP}
{ "timestamp": "2020-08-07T02:18:29", "yymm": "2007", "arxiv_id": "2007.08436", "language": "en", "url": "https://arxiv.org/abs/2007.08436" }
\section{Introduction} A \emph{distance oracle} is a data structure that answers distance queries (or approximate distance queries) w.r.t.~some underlying graph or metric space. On general graphs there are many well known distance oracles that pit space against multiplicative approximation~\cite{ThorupZ05}, space against mixed multiplicative/additive approximation~\cite{PatrascuR14,AbrahamG11}, and, in sparse graphs, space against query time~\cite{Agarwal14,SommerVY09}. Refer to Sommer \cite{Sommer14} for a survey on distance oracles. Whereas \emph{approximation} seems to be a necessary ingredient to achieve any reasonable space/query time on general graphs, structured graph classes may admit \emph{exact} distance oracles with attractive time-space tradeoffs. In this paper we continue a long line of work~\cite{ArikatiCCDSZ96,Djidjev96,ChenX00,FakcharoenpholR06,Klein05,Wulff-Nilsen10,Nussbaum11,Cabello12,MozesS2012,Cohen-AddadDW17,GawrychowskiMWW18,CharalampopoulosGMW19} focused on exact distance oracles for weighted, directed planar graphs. \paragraph{History.} Between 1996 and 2012, work of Arikati et al.~\cite{ArikatiCCDSZ96}, Djidjev~\cite{Djidjev96}, Chen and Xu~\cite{ChenX00}, Fakcharoenphol and Rao~\cite{FakcharoenpholR06}, Klein~\cite{Klein05}, Wulff-Nilsen~\cite{Wulff-Nilsen10}, Nussbaum~\cite{Nussbaum11}, Cabello~\cite{Cabello12}, and Mozes and Sommer~\cite{MozesS2012} achieved space $\tilde{O}(S)$ and query time $\tilde{O}(n/\sqrt{S})$, for various ranges of $S$ that ultimately covered the full range $[n,n^2]$. In 2017, Cabello~\cite{Cabello19} introduced \emph{planar} Voronoi diagrams as a tool for solving metric problems in planar graphs, such as diameter and sum-of-distances. This idea was incorporated into new planar distance oracles, leading to $\tilde{O}(n^{5/2}/S^{3/2})$ query time~\cite{Cohen-AddadDW17} for $S\in [n^{3/2},n^{5/3}]$ and $\tilde{O}(n^{3/2}/S)$ query time~\cite{GawrychowskiMWW18} for $S\in [n,n^{3/2}]$. Finally, in a major breakthrough, Charalampopoulos, Gawrychowski, Mozes, and Weimann~\cite{CharalampopoulosGMW19} demonstrated that up to $n^{o(1)}$ factors, \emph{there is no tradeoff} between space and query time, i.e., space $n^{1+o(1)}$ and query time $n^{o(1)}$ can be achieved simultaneously. In more detail, they proved that space $O(n^{4/3}\sqrt{\log n})$ allows for query time $O(\log^2 n)$, space $\tilde{O}(n^{1+\epsilon})$ allows for query time $O(\log n)^{1/\epsilon-1}$, and space $O(n\log^{2+1/\epsilon} n)$ allows for query time $O(n^{2\epsilon})$. The Charalampopoulos et al.~structure is based on a hierarchical $\vec{r}$-decomposition of the graph, $\vec{r}=(n,n^{(m-1)/m},\ldots,n^{1/m})$. (See Section~\ref{sect:prelims}.) Given $u,v$, it iteratively finds the last boundary vertex $u_i$ on the shortest $u$-$v$ path that lies on the boundary of the level-$i$ region containing $u$. Given $u_{i-1}$, finding $u_i$ amounts to solving a \emph{point location} problem on an \emph{external} Voronoi diagram, i.e., a Voronoi diagram of the \emph{complement} of a region in the hierarchy. Each point location query is solved via a kind of binary search, and each step of the binary search involves 3 \emph{recursive} distance queries that begin at a ``higher'' level in the hierarchy. This leads to a tradeoff between space $\tilde{O}(n^{1+1/m})$ and query time $O(\log n)^{m-1}$. See Table~\ref{tab:priorwork} for a summary of the space-time tradeoffs of exact and approximate planar distance oracles. 
\begin{table}[] \centering \begin{tabular}{|l|l|l|} \multicolumn{1}{l}{\large\sc Reference} & \multicolumn{1}{l}{\large\sc Space} & \multicolumn{1}{l}{\large\sc Query Time}\\ \hline\hline \istrut[4]{5.5}\parbox{110pt}{\small Arikati, Chen, Chew\\ Das, Smid \& Zaroliagis}\hfill{\small 1996} & $S\in[n^{3/2},n^{2}]$ & $O\paren{\frac{n^{2}}{S}}$ \\ \hline \multirow{2}*{\rb{-1}{\small Djidjev}}\hfill\multirow{2}*{\rb{-1}{\small 1996}} & \istrut[2.5]{4.5}$S\in[n,n^{2}]$ & $O\paren{\frac{n^{2}}{S}}$ \\\cline{2-3} ~ & \istrut[2.5]{4.5}$S\in[n^{4/3},n^{3/2}]$ & $O\paren{\frac{n}{\sqrt{S}}\log n}$ \\ \hline \small Chen \& Xu \hfill{\small 2000} & \istrut[2.5]{4.5}$S\in[n^{4/3},n^{2}]$ & $O\paren{\frac{n}{\sqrt{S}}\log\paren{\frac{n}{\sqrt{S}}}}$ \\ \hline \small Fakcharoenphol \& Rao \hfill{\small 2006} & \istrut[2.5]{4.5}$O(n\log n)$ & $O(\sqrt{n}\log^{2} n)$ \\\hline \small Wulff-Nilsen\hfill{\small 2010} & \istrut[2.5]{4.5}$O(n^{2}\frac{\log^4\log n}{\log n})$ & $O(1)$ \\ \hline \multirow{2}*{\small Nussbaum}\hfill\multirow{2}*{\small 2011} & \istrut[2.5]{4.5} $O(n)$ & \istrut[2.5]{4.5} $O(n^{1/2+\epsilon})$ \\\cline{2-3} ~ & \istrut[2.5]{4.5} $S\in[n^{4/3},n^{2}]$ & $O\paren{\frac{n}{\sqrt{S}}}$\\ \hline \small Cabello\hfill{\small 2012} & \istrut[2.5]{4.5}$S\in[n^{4/3}\log^{1/3}n,n^{2}]$ & $O\paren{\frac{n}{\sqrt{S}}\log^{3/2} n}$ \\ \hline \multirow{2}*{\small Mozes \& Sommer}\hfill\multirow{2}*{\small 2012} & \istrut[2.5]{4.5}$S\in[n\log\log n, n^{2}]$ & $O\paren{\frac{n}{\sqrt{S}}\log^{2}n\log^{3/2}\log n}$ \\\cline{2-3} ~ & \istrut[2.5]{4.5}$O(n)$ & $O(n^{1/2+\epsilon})$ \\ \hline \istrut[4]{5}\parbox{110pt}{\small Cohen-Addad, Dahlgaard\\ \& Wulff-Nilsen}\hfill{\small 2017} & $S\in[n^{3/2},n^{5/3}]$ & $O\paren{\frac{n^{5/2}}{S^{3/2}}\log n}$ \\ \hline \istrut[4]{5}\parbox{110pt}{\small Gawrychowski, Mozes,\\ Weimann \& Wulff-Nilsen}\hfill{\small 2018} & $\tilde{O}(S)$ for $S\in[n,n^{3/2}]$ & $\tilde{O}\paren{\frac{n^{3/2}}{S}}$ \\ \hline \multirow{2}*{\parbox{110pt}{\small Charalampopoulos, Gawrychowski, Mozes\\ \& Weimann}}\hfill\multirow{2}*{\small 2019} & \istrut[3]{4}$O(n^{4/3}\sqrt{\log n})$ & $O(\log^2 n)$ \\\cline{2-3} & \istrut[2]{4}$n^{1+o(1)}$ & $n^{o(1)}$ \\ \hline \multirow{2}*{{\bf new}}\hfill\multirow{2}*{\small 2020} & $n^{1+o(1)}$ & $\log^{2+o(1)} n$ \\\cline{2-3} & $n\log^{2+o(1)} n$ & $n^{o(1)}$ \\\hline\hline \multicolumn{3}{l}{}\\ \multicolumn{1}{l}{\large\sc $(1+\epsilon)$-Approx. Oracles} & \multicolumn{1}{l}{\large\sc Space} & \multicolumn{1}{l}{\large\sc Query Time}\\\hline\hline \multirow{2}*{\small Thorup}\hfill \multirow{2}*{\small 2001} & \istrut[2.5]{4.5}$O(n\epsilon^{-1}\log^2 n)$ & $O(\log\log n + \epsilon^{-1})$\\\cline{2-3} ~ & \istrut[2.5]{4.5}$O(n\epsilon^{-1}\log n)$ & $O(\epsilon^{-1})$\hfill (Undir.)\\\hline {\small Klein}\hfill {\small 2002} & \istrut[2.5]{4.5}$O(n(\log n + \epsilon^{-1}\log\epsilon^{-1}))$ & $O(\epsilon^{-1})$\hfill (Undir.)\\\hline \parbox{110pt}{\small Kawarabayashi,\\ Klein, \& Sommer}\hfill{\small 2011} & \istrut[3.5]{5.5}$O(n)$ & $O(\epsilon^{-2}\log^2 n)$\hfill (Undir.)\\\hline \multirow{2}*{\parbox{110pt}{\small Kawarabayashi,\\ Sommer, \& Thorup}}\hfill \multirow{2}*{\small 2013} & \istrut[2]{4.5}$\overline{O}(n\log n)$ & $\overline{O}(\epsilon^{-1})$\hfill (Undir.)\\\cline{2-3} ~ & \istrut[2]{4.5}$\overline{O}(n)$ & $\overline{O}(\epsilon^{-1})$\hfill \ \ \ (Undir.,Unweight.)\\\hline\hline \end{tabular} \caption{Space-query time tradeoffs for exact and approximate planar distance oracles. 
$\overline{O}$ hides $\log(\epsilon^{-1}\log n)$ factors.} \label{tab:priorwork} \end{table} \nocite{Thorup04} \nocite{KawarabayashiKS11} \nocite{KawarabayashiST13} \nocite{Klein02} \paragraph{New Results.} In this paper we develop a more direct and more efficient way to do point location in external Voronoi diagrams. It uses a new persistent data structure for maintaining sets of non-crossing systems of \emph{chords}, which are paths that begin and end at the boundary vertices of a region, but are internally vertex disjoint from the region. By applying this point location method in the framework of Charalampopoulos et al.~\cite{CharalampopoulosGMW19}, we obtain a better time-space tradeoff, which is most noticeable at the ``extremes'' when $\tilde{O}(n)$ space or $\tilde{O}(1)$ query time is prioritized. \begin{theorem}\label{thm:maintheorem} Let $G$ be an $n$-vertex weighted planar digraph with no negative cycles, and let $\kappa,m\geq 1$ be parameters. A distance oracle occupying space $O(m\kappa n^{1+1/m+1/\kappa})$ can be constructed in $\tilde{O}(n^{3/2+1/m} + n^{1+1/m+1/\kappa})$ time that answers exact distance queries in $O(2^{m}\kappa\log^{2}n\log\log n)$ time. At the two extremes of the space-time tradeoff curve, we can construct oracles in $n^{3/2+o(1)}$ time with either \begin{itemize} \item $n^{1+o(1)}$ space and $\log^{2+o(1)}n$ query time, or \item $n\log^{2+o(1)}n$ space and $n^{o(1)}$ query time. \end{itemize} \end{theorem} Our new point-location routine suffices to get the query time down to $O(\log^3 n)$. In order to reduce it further to $O(\log^{2+o(1)} n)$, we develop a new dynamic tree data structure based on Euler-Tour trees~\cite{HenzingerK99} with $O(\kappa n^{1/\kappa})$ update time and $O(\kappa)$ query time. This allows us to generate \textsf{MSSP}{} (multiple-source shortest paths) structures with a similar space-query tradeoff, specifically, $O(\kappa n^{1+1/\kappa})$ space and $O(\kappa\log\log n)$ query time. Our \textsf{MSSP}{} construction follows Klein~\cite{Klein05} (see also~\cite{GawrychowskiMWW18}), but uses our new dynamic tree in lieu of Sleator and Tarjan's Link-Cut trees~\cite{SleatorT83}, and uses persistent arrays~\cite{Dietz89} in lieu of~\cite{DriscollSST89} to make the data structure persistent. \paragraph{Organization.} In Section~\ref{sect:prelims} we review background on planar embeddings, planar separators, multiple-source shortest paths, and weighted Voronoi diagrams. In Section~\ref{sect:DistanceOracle} we introduce key parts of the data structure and describe the query algorithm, assuming a certain point location problem can be solved. Section~\ref{sect:NavigationOracle} introduces several more components of the data structure, and shows how they can be applied to solve this particular point location problem in near-logarithmic time. The space and query-time claims of Theorem~\ref{thm:maintheorem} are proved in Section~\ref{sect:analysis}. The construction time claims of Theorem~\ref{thm:maintheorem} are proved in Appendix~\ref{sect:construction}. Appendix~\ref{sect:Euler} gives the \textsf{MSSP}{} construction based on Euler Tour trees. Appendix~\ref{sect:MultipleHoles} explains how to remove a simplifying assumption made throughout the paper, that the boundary vertices of every region in the $\vec{r}$-decomposition lie on a \emph{single} hole, which is bounded by a \emph{simple} cycle. 
\section{Preliminaries}\label{sect:prelims} \subsection{The Graph and Its Embedding} A weighted planar graph $G=(V,E,\ell)$ is represented by an abstract embedding: for each $v\in V(G)$ we list the edges incident to $v$ according to a clockwise order around $v$. We assume the graph has no negative weight cycles and further assume the following, without loss of generality. \begin{itemize} \item All the edge-weights can be made non-negative ($\ell : E\rightarrow \mathbb{R}_{\ge 0}$)~\cite{Johnson77}. Furthermore, via randomized or deterministic perturbation~\cite{EricksonFL18}, we can assume there are no zero weight edges, and that shortest paths are \emph{unique} in \emph{any} subgraph of $G$. \item The graph is connected and triangulated. Assign all artificial edges weight $n\cdot \max_{e\in E(G)}\{\ell(e)\}$ so as not to affect any finite distances. \item If $(u,v)\in E(G)$ then $(v,u)\in E(G)$ as well. (In the circular ordering around $v$, they are represented as a single element $\{u,v\}$.) \end{itemize} Suppose $P=(v_0,v_1,\ldots,v_k)$ is a path oriented from $v_0$ to $v_k$, and $e=(v_i,u)$ is an edge not on $P$, $i\in [1,k-1]$. Then $e$ is to the right of $P$ if $e$ appears between $(v_i,v_{i+1})$ and $(v_{i-1},v_i)$ in the clockwise order around $v_i$, and left of $P$ otherwise. \subsection{Separators and Divisions} Lipton and Tarjan~\cite{LiptonT80} proved that every planar graph contains a \emph{separator} of $O(\sqrt{n})$ vertices that, once removed, breaks the graph into components of at most 2/3 the size. Miller~\cite{Miller86} showed that every triangulated planar graph has an $O(\sqrt{n})$-size separator that consists of a simple cycle. Frederickson~\cite{Frederickson87} defined a \emph{division} to be a set of edge-induced subgraphs whose union is $G$. A vertex in more than one region is a \emph{boundary} vertex; the boundary of a region $R$ is denoted $\partial R$. Edges along the boundary between two regions appear in both regions. The $r$-divisions of~\cite{Frederickson87} have $\Theta(n/r)$ regions each with $O(r)$ vertices and $O(\sqrt{r})$ boundary vertices. We use a linear-time algorithm of Klein, Mozes, and Sommer~\cite{KleinMS13} for computing a hierarchical $\vec{r}$-division, where $\vec{r}=(r_m,\ldots,r_1)$ and $n = r_m > \cdots > r_1 = \Omega(1)$. Such an $\vec{r}$-division has the following properties: \begin{itemize} \item (Division \& Hierarchy) For each $i$, $\mathcal{R}_i$ is the set of regions in an $r_i$-division of $G$, where $\mathcal{R}_m = \{G\}$ consists of the graph itself. For each $i<i'\leq m$ and $R_i \in \mathcal{R}_i$, there is a unique $R_{i'}\in \mathcal{R}_{i'}$ such that $E(R_i) \subseteq E(R_{i'})$. The $\vec{r}$-division is therefore represented as a rooted tree of regions. \item (Boundaries and Holes) The $O(\sqrt{r_i})$ boundary vertices of any $R_i\in \mathcal{R}_{i}$ lie on a constant number of faces of $R_i$ called \emph{holes}, each bounded by a cycle (not necessarily simple). \end{itemize} We supplement the $\vec{r}$-division with a zeroth level. The level-0 set $\mathcal{R}_0 = \{\{v\} \mid v\in V(G)\}$ consists of singleton sets, and each $\{v\}$ is attached as a (leaf) child of an arbitrary $R\in \mathcal{R}_1$ for which $v\in R$. Suppose $f$ is one of the $O(1)$ holes of region $R$ and $C_f$ the cycle around $f$. The cycle $C_f$ partitions $E(G) - C_f$ into two parts. Let $R^{f,\operatorname{out}}$ be the graph induced by the part disjoint from $R$, together with $C_f$, i.e., $C_f$ appears in both $R$ and $R^{f,\operatorname{out}}$. 
To keep the description of the algorithm as simple as possible, \emph{we will assume that $\partial R$ lies on a single simple cycle (hole)} $f_R$, and let $R^{\operatorname{out}}$ be short for $R^{f_R,\operatorname{out}}$. The modifications necessary to deal with multiple holes and non-simple boundary cycles are explained in Appendix~\ref{sect:MultipleHoles}. \subsection{Multi-source Shortest Paths} Suppose $H$ is a weighted planar graph with a distinguished face $f$ on vertices $S$. Klein's \textsf{MSSP}{} algorithm takes $O(|H|\log |H|)$ time and produces an $O(|H|\log |H|)$-size data structure that, given $s\in S$ and $v\in V(H)$, returns $\operatorname{dist}_H(s,v)$ in $O(\log |H|)$ time. Klein's algorithm can be viewed as continuously moving the source vertex around the boundary face $f$, recording all changes to the SSSP tree in a dynamic tree data structure~\cite{SleatorT83}. It is shown~\cite{Klein05} that each edge in $H$ enters and leaves the SSSP tree exactly once, meaning the number of changes is $O(|H|)$. Each change to the tree is effected in $O(\log |H|)$ time~\cite{SleatorT83}, and the generic persistence method of~\cite{DriscollSST89} allows for querying any state of the SSSP tree. The important point is that the total space is linear in the number of updates to the structure ($O(|H|)$) times the update time ($O(\log|H|)$). As observed in~\cite{GawrychowskiMWW18}, this structure can also answer other useful queries in $O(\log |H|)$ time. Lemma~\ref{lem:MSSP} is similar to~\cite{Klein05,GawrychowskiMWW18} except that we use a dynamic tree data structure based on Euler Tour trees~\cite{HenzingerK99} rather than Link-Cut trees~\cite{SleatorT83}, which allows for a more flexible tradeoff between update and query time. Because our data structure does not satisfy the criteria of Driscoll et al.'s~\cite{DriscollSST89} persistence method for pointer-based data structures, we use the folklore implementation of persistent arrays\footnote{Dietz~\cite{Dietz89} credits this method to an oral presentation of Dietzfelbinger et al.~\cite{DietzfelbingerKMHRT88}, which highlighted it as an application of dynamic perfect hashing.} to make any RAM data structure persistent, with doubly-logarithmic slowdown in the query time. See Appendix~\ref{sect:Euler} for a proof of Lemma~\ref{lem:MSSP}. \begin{lemma}\label{lem:MSSP} (Cf.~Klein~\cite{Klein05}, Gawrychowski et al.~\cite{GawrychowskiMWW18}) Let $H$ be a planar graph, $S$ be the vertices on some distinguished face $f$, and $\kappa \ge 1$ be a parameter. An $O(\kappa |H|^{1+1/\kappa})$-space data structure can be computed in $O(\kappa|H|^{1+1/\kappa})$ time that answers the following queries in $O(\kappa\log\log|H|)$ time. \begin{itemize} \item Given $s\in S, v\in V(H)$, return $\operatorname{dist}_H(s,v)$. \item Given $s\in S, u,v\in V(H)$, return $(x,e_u,e_v)$, where $x$ is the least common ancestor of $u$ and $v$ in the SSSP tree rooted at $s$ and $e_z$ is the edge on the path from $x$ to $z$ (if $x\neq z$), $z\in \{u,v\}$. \end{itemize} \end{lemma} The purpose of the second query is to tell whether $u$ lies on the shortest $s$-$v$ path ($x=u$) or vice versa, or to tell in which direction the $s$-$u$ path branches from the $s$-$v$ path. Once we retrieve the LCA $x$ and the edges $e_u,e_v$, we get the edge $e_x$ from $x$ to its parent. The clockwise order of $e_x,e_u,e_v$ around $x$ tells us whether $s$-$u$ branches from $s$-$v$ to the left or right. See Figure~\ref{fig:LCA}.
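The folklore persistence technique referred to above can be illustrated with a minimal fat-node persistent array, sketched below in Python. This simplified version answers reads by binary search in $O(\log n)$ time; the doubly-logarithmic overhead quoted above requires dynamic perfect hashing and faster predecessor search, which we omit here.
\begin{verbatim}
import bisect

class PersistentArray:
    # Folklore "fat node" persistent array (a sketch).  Each cell keeps
    # its entire history in parallel (version, value) lists; reading
    # cell i at version t is a predecessor search over that history.
    def __init__(self, n, default=None):
        self.versions = [[0] for _ in range(n)]
        self.values = [[default] for _ in range(n)]
        self.now = 0

    def write(self, i, value):
        # Each write creates a new global version.
        self.now += 1
        self.versions[i].append(self.now)
        self.values[i].append(value)
        return self.now

    def read(self, i, t):
        # Value of cell i as of version t.
        j = bisect.bisect_right(self.versions[i], t) - 1
        return self.values[i][j]
\end{verbatim}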
\begin{figure} \centering \scalebox{.4}{\includegraphics{LCA.pdf}} \caption{The clockwise order of $e_x,e_u,e_v$ around $x$ tells us whether the shortest $s$-$u$ path branches from the shortest $s$-$v$ path to the right or left.} \label{fig:LCA} \end{figure} \subsection{Additively Weighted Voronoi Diagrams} Let $H$ be a weighted planar graph, $f$ a distinguished face whose vertices $S$ are called \emph{sites}, and $\omega : S \to \mathbb{R}_{\ge 0}$ a weight function on sites. We augment $H$ with large-weight edges so that it is triangulated, except for $f$. For $s\in S, v\in V(H)$, define \[ d^\omega(s,v) \stackrel{\operatorname{def}}{=} \omega(s) + \operatorname{dist}_H(s,v). \] The \emph{Voronoi diagram} $\mathrm{VD}[H,S,\omega]$ is a partition of $V(H)$ into \emph{Voronoi cells}, where for $s\in S$, \[ \operatorname{Vor}(s) \stackrel{\operatorname{def}}{=} \{v \in V(H) \mid \forall s'\neq s.\; (d^\omega(s,v), -\omega(s)) < (d^\omega(s',v), -\omega(s'))\} \] In other words, $\operatorname{Vor}(s)$ is the set of vertices that are closer to $s$ than to any other site, breaking ties in favor of larger $\omega$-values. We usually work with the dual representation of a Voronoi diagram. It is constructed as follows. \begin{itemize} \item Define $\hat{S}$ to be the set of sites with nonempty Voronoi cells, i.e., $\hat{S} = \{s\in S \mid s\in \operatorname{Vor}(s)\}$. The case $|\hat{S}|=1$ is trivial, so assume $|\hat{S}|\ge 2$. \item Add large-weight dummy edges to $H$ so that the vertices of $\hat{S}$ appear on the boundary of a single face $\hat{f}$, and the graph is otherwise triangulated. Observe that this has no effect on the Voronoi cells. \item An edge is \emph{bichromatic} if its endpoints are in different cells. In particular, the edges bounding $\hat{f}$ are all bichromatic. Define $\mathrm{VD}_0^*$ to be the (undirected) subgraph of $H^*$ consisting of the duals of bichromatic edges. \item Obtain $\mathrm{VD}_1^*$ from $\mathrm{VD}_0^*$ by repeatedly contracting edges incident to a degree-2 vertex, terminating when there are no degree-2 vertices, or when it becomes a self-loop.\footnote{The latter case only occurs when $|\hat{S}|=2$.} Observe that in $\mathrm{VD}_1^*$, $\hat{f}^*$ has degree $|\hat{S}|$ and all other vertices have degree 3; moreover, the faces of $\mathrm{VD}_1^*$ are in one-to-one correspondence with the Voronoi cells. \item We obtain $\mathrm{VD}^* = \mathrm{VD}^*[H,S,\omega]$ by splitting $\hat{f}^*$ into $|\hat{S}|$ degree-1 vertices, each taking an edge formerly incident to $\hat{f}^*$. It was proved in~\cite[Lemma 4.1]{GawrychowskiMWW18} that $\mathrm{VD}^*$ is a single tree.\footnote{If we skipped the step of forming the face $\hat{f}$ on the site-set $\hat{S}$ and triangulating the rest, $\mathrm{VD}^*$ would still be acyclic, but perhaps disconnected. See~\cite{GawrychowskiMWW18,CharalampopoulosGMW19}.} \item We store with $\mathrm{VD}^*$ supplementary information useful for point location. Each degree-3 vertex $g^*$ in $\mathrm{VD}^*$ corresponds to a \emph{trichromatic} face $g$ whose three vertices, say $y_0,y_1,y_2$, belong to different Voronoi cells. We store in $\mathrm{VD}^*$ the sites $s_0,s_1,s_2\in S$ such that $y_i \in \operatorname{Vor}(s_i)$. We also store a \emph{centroid decomposition} of $\mathrm{VD}^*$. A centroid of a tree $T$ is a vertex $c$ that partitions the edge set of $T$ into disjoint subtrees $T_1,\ldots,T_{\deg(c)}$, each containing at most $(|E(T)|+1)/2$ edges, and each containing $c$ as a leaf.
The decomposition is a tree rooted at $c$, whose subtrees are the centroid decompositions of $T_1,\ldots,T_{\deg(c)}$. The recursion bottoms out when $T$ consists of a single edge, which is represented as a single (leaf) node in the centroid decomposition.\footnote{I.e., internal nodes correspond to vertices of $T$; leaf nodes correspond to edges of $T$.} \end{itemize} \begin{figure} \centering \begin{tabular}{c@{\hspace{1.5cm}}c} \multicolumn{2}{c}{\scalebox{.30}{\includegraphics{VD.pdf}}}\\ &\\ \multicolumn{2}{c}{\bf (a)}\\ \scalebox{.25}{\includegraphics{VDdual-tree.pdf}} &\scalebox{.3}{\includegraphics{VDdual-centroid-decomp.pdf}}\\ &\\ {\bf (b)} &{\bf (c)} \end{tabular} \caption{{\bf (a)} The original $H$ is a triangulated grid, with $f$ being the exterior face. The boundary vertices $\hat{S}$ with non-empty Voronoi cells are marked with colored halos. Edges are added so that $\hat{S}$ are on the exterior face $\hat{f}$. The vertices of $\mathrm{VD}^*$ are the duals of trichromatic faces, and those derived by splitting $\hat{f}^*$ into $|\hat{S}|$ vertices. The edges of $\mathrm{VD}^*$ correspond to paths of duals of bichromatic edges. {\bf (b)} The dual representation $\mathrm{VD}^*$. {\bf (c)} A centroid decomposition of $\mathrm{VD}^*$.} \label{fig:VD} \end{figure} The most important query on Voronoi diagrams is \emph{point location}. \begin{lemma}\label{lem:PointLocate} (Gawrychowski et al.~\cite{GawrychowskiMWW18}) The $\boldsymbol{\mathsf{PointLocate}}(\mathrm{VD}^*[H,S,\omega],v)$ function is given the dual representation of a Voronoi diagram $\mathrm{VD}^*[H,S,\omega]$ and a vertex $v\in V(H)$ and reports the $s\in S$ for which $v\in \operatorname{Vor}(s)$. Given access to an \textsf{MSSP}{} data structure for $H$ with source-set $S$ and query time $\tau$, we can answer $\boldsymbol{\mathsf{PointLocate}}(\mathrm{VD}^*[H,S,\omega],v)$ queries in $O(\tau \cdot \log |H|)$ time. \end{lemma} The challenge in our data structure (as in~\cite{CharalampopoulosGMW19}) is to do point location when our space budget precludes storing all the relevant \textsf{MSSP}{} structures. Nonetheless, we do make use of $\boldsymbol{\mathsf{PointLocate}}$ when the \textsf{MSSP}{} data structures are available. \section{The Distance Oracle} \label{sect:DistanceOracle} As in~\cite{CharalampopoulosGMW19}, the distance oracle is based on an $\vec{r}$-division, $\vec{r}=(r_m,\ldots,r_1)$, where $r_i = n^{i/m}$ and $m$ is a parameter. Suppose we want to compute $\operatorname{dist}_G(u,v)$. Let $R_0 =\{u\}$ be the artificial level-0 region containing $u$ and $R_i \in \mathcal{R}_i$ be the level-$i$ ancestor of $R_0$. (Throughout the paper, we will use ``$R_i$'' to refer specifically to the level-$i$ ancestor of $R_0=\{u\}$, as well as to a \emph{generic} region at level $i$. Surprisingly, this will cause no confusion.) Let $t$ be the unique index for which $v\not\in R_{t}$ but $v\in R_{t+1}$. Define $u_i$ to be the \emph{last} vertex on $\partial R_i$ encountered on the shortest path from $u$ to $v$. The main task of the distance query algorithm is to compute the sequence $(u=u_0,\ldots,u_t)$. Suppose we know the identity of $u_{i}$ and that $t > i$. Finding $u_{i+1}$ now amounts to a point location problem in $\mathrm{VD}^*[R_{i+1}^{\operatorname{out}},\partial R_{i+1},\omega]$, where $\omega(s)$ is the distance from $u_i$ to $s\in \partial R_{i+1}$.
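Schematically, and deferring the question of how each point location is implemented, the query proceeds as in the following Python sketch. The helpers bundled in \texttt{O} (\texttt{in\_region}, \texttt{point\_locate}, \texttt{dist}, \texttt{mssp\_dist}) are hypothetical stand-ins for the parts of the data structure described in the remainder of this section; the sketch only illustrates the loop structure.
\begin{verbatim}
def distance_query(u, v, O):
    # Idealized query loop.  Invariant: u_i lies on the shortest
    # u-v path and d = dist_G(u, u_i).
    u_i, d, i = u, 0, 0
    while not O.in_region(v, i + 1, u):   # v not in R_{i+1}: climb a level
        # u_{i+1} is the site of v's cell in
        # VD*[R_{i+1}^out, bd(R_{i+1}), omega], omega(s) = dist_G(u_i, s)
        u_next = O.point_locate(u_i, i + 1, v)
        d += O.dist(u_i, u_next)          # equals dist_G(u_i, u_{i+1})
        u_i, i = u_next, i + 1
    return d + O.mssp_dist(u_i, v, i)     # v in R_i^out cap R_{i+1}: part (A)
\end{verbatim}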
However, we cannot apply the fast $\boldsymbol{\mathsf{PointLocate}}$ routine because we cannot afford to store an \textsf{MSSP}{} structure for every $(R_{i+1}^{\operatorname{out}},\partial R_{i+1})$, since $|R_{i+1}^{\operatorname{out}}|=\Omega(|G|)$. Our point location routine narrows down the number of possibilities for $u_{i+1}$ to at most two candidates in $O(\kappa \log^{2+o(1)} n)$ time, then decides between them using two recursive distance queries, but starting at a higher level in the hierarchy. There are about $2^m$ recursive calls in total, leading to an $O(2^m \kappa \log^{2+o(1)} n)$ query time. The data structure is composed of several parts. Parts (A) and (B) are explained below, while parts (C)--(E) will be revealed in Section~\ref{sect:MoreDataStructures}. \begin{enumerate} \item[(A)] {\bf (\textsf{MSSP}{} Structures)} For each $i\in[0,m-1]$ and each region $R_i\in\mathcal{R}_i$ with parent $R_{i+1}\in\mathcal{R}_{i+1}$, we store an \textsf{MSSP}{} data structure (Lemma~\ref{lem:MSSP}) for the graph $R_i^{\operatorname{out}}$, with source set $\partial R_i$. However, the structure only answers queries for $s\in\partial R_i$ and $u,v \in R_i^{\operatorname{out}} \cap R_{i+1}$. Rather than represent the \emph{full} SSSP tree from each root $s\in \partial R_i$, the \textsf{MSSP}{} data structure only stores the tree induced by $R_i^{\operatorname{out}} \cap R_{i+1}$, i.e., the parent of any vertex $v\in R_i^{\operatorname{out}} \cap R_{i+1}$ is its nearest ancestor $v'$ in the SSSP tree such that $v' \in R_i^{\operatorname{out}} \cap R_{i+1}$. If $(v',v)$ is a ``shortcut'' edge corresponding to a path in $R_{i+1}^{\operatorname{out}}$, it has weight $\operatorname{dist}_{R_i^{\operatorname{out}}}(v',v)$. We fix a $\kappa$ and let the update time in the dynamic tree data structure be $O(\kappa n^{1/\kappa})$. Thus, the space for this structure is $O((|R_i^{\operatorname{out}}\cap R_{i+1}| + |\partial R_i|\cdot|\partial R_{i+1}|)\cdot \kappa n^{1/\kappa}) = O(r_{i+1}\cdot \kappa n^{1/\kappa})$ since each edge in $R_i^{\operatorname{out}} \cap R_{i+1}$ is swapped into and out of the SSSP tree once~\cite{Klein05}, and the number of shortcut edges on $\partial R_{i+1}$ swapped into and out of the SSSP tree is at most $|\partial R_{i+1}|$ for each of the $|\partial R_i|$ sources. Over all $i$ and $\Theta(n/r_i)$ choices of $R_i$, the space is $O(m\kappa n^{1 + 1/m + 1/\kappa})$ since $r_{i+1}/r_i = n^{1/m}$. \item[(B)] {\bf (Voronoi Diagrams)} For each $i\in [0,m-1]$ and $R_{i}\in\mathcal{R}_{i}$ with parent $R_{i+1}\in \mathcal{R}_{i+1}$, and each $q \in \partial R_{i}$, define $\VD^*_{\out}(q,R_{i+1})$ to be $\mathrm{VD}^*[R_{i+1}^{\operatorname{out}},\partial R_{i+1},\omega]$, with $\omega(s) = \operatorname{dist}_{G}(q,s)$. The space to store the dual diagram and its centroid decomposition is $O(|\partial R_{i+1}|)=O(\sqrt{r_{i+1}})$. Over all choices for $i,R_i,$ and $q$, the space is $O(mn^{1+1/(2m)})$ since $\sqrt{r_{i+1}/r_i}=n^{1/(2m)}$. \end{enumerate} Due to our tie-breaking rule in the definition of $\operatorname{Vor}(\cdot)$, locating $u_{i+1}$ ($t\ge i+1$) is tantamount to performing a point location on a Voronoi diagram in part (B) of the data structure. \begin{lemma}\label{lemma:lastsite} Suppose that $q\in \partial R_i$ and $v\not\in R_{i+1}$. Consider the Voronoi diagram associated with $\VD^*_{\out}(q,R_{i+1})$ with sites $\partial R_{i+1}$ and additive weights defined by distances from $q$ in $G$.
Then $v\in \operatorname{Vor}(s)$ if and only if $s$ is the \underline{last} $\partial R_{i+1}$-vertex on the shortest path from $q$ to $v$ in $G$, in which case $d^\omega(s,v) = \operatorname{dist}_G(q,v)$. \end{lemma} \begin{proof} By definition, $d^\omega(s,v)$ is the length of the shortest path from $q$ to $v$ that passes through $s$ and whose $s$-$v$ suffix does not leave $R_{i+1}^{\operatorname{out}}$. Thus, $d^\omega(s,v) \geq \operatorname{dist}_G(q,v)$ for every $s$, and $d^\omega(s,v) = \operatorname{dist}_G(q,v)$ for some $s$, namely the last $\partial R_{i+1}$-vertex on the shortest $q$-$v$ path. Because of our assumption that all edge weights are strictly positive, and our tie-breaking rule for preferring larger $\omega$-values in the definition of $\operatorname{Vor}(\cdot)$, if $v\in \operatorname{Vor}(s)$ then $s$ must be the \emph{last} $\partial R_{i+1}$-vertex on the shortest $q$-$v$ path. \end{proof} \subsection{The Query Algorithm}\label{sect:queryalg} A distance query is given $u,v\in V(G)$. We begin by identifying the level-0 region $R_{0}=\{u\}\in \mathcal{R}_{0}$ and calling the function $\boldsymbol{\mathsf{Dist}}(u,v,R_0)$. In general, the function $\boldsymbol{\mathsf{Dist}}(u_i,v,R_i)$ takes as arguments a region $R_i$, a source vertex $u_i$ on the boundary $\partial R_i$, and a target vertex $v\not\in R_i$. It returns a value $d$ such that \begin{equation}\label{eqn:Spec} \operatorname{dist}_{G}(u_i,v) \leq d \leq \operatorname{dist}_{R_{i}^{\operatorname{out}}}(u_i,v). \end{equation} Note that $R_{0}^{\operatorname{out}} = G$, so the initial call to this function correctly computes $\operatorname{dist}_G(u,v)$. When $v$ is ``close'' to $u_i$ ($v\in R_i^{\operatorname{out}}\cap R_{i+1}$) it computes $\operatorname{dist}_{R_i^{\operatorname{out}}}(u_i,v)$ without recursion, using part (A) of the data structure. When $v\in R_{i+1}^{\operatorname{out}}$ it performs point location using the function $\boldsymbol{\mathsf{CentroidSearch}}$, which culminates in up to two recursive calls to $\boldsymbol{\mathsf{Dist}}$ on the level-$(i+1)$ region $R_{i+1}$. Thus, the correctness of $\boldsymbol{\mathsf{Dist}}$ hinges on whether $\boldsymbol{\mathsf{CentroidSearch}}$ correctly computes distances when $v\in R_{i+1}^{\operatorname{out}}$. \begin{algorithm}[H] \caption{$\boldsymbol{\mathsf{Dist}}(u_{i},v,R_{i})$\label{alg:DistGlobal}} \begin{algorithmic}[1] \Require A region $R_{i}$, a source $u_{i}\in\partial R_{i}$, and a destination $v\in R_{i}^{\operatorname{out}}$. \Ensure A value $d$ such that $\operatorname{dist}_G(u_i,v)\leq d\leq \operatorname{dist}_{R_i^{\operatorname{out}}}(u_i,v)$. \If{$v \in R_i^{\operatorname{out}}\cap R_{i+1}$} \Comment{I.e., $i=t$} \State \Return $d \gets \operatorname{dist}_{R_{i}^{\operatorname{out}}}(u_{i},v)$ \Comment{Part (A)} \EndIf \Comment{$v\in R_{i+1}^{\operatorname{out}}$} \State $f^* \gets$ root of the centroid decomposition of $\VD^*_{\out}(u_{i},R_{i+1})$ \State \Return $d \gets \boldsymbol{\mathsf{CentroidSearch}}(\VD^*_{\out}(u_{i},R_{i+1}),v,f^*)$ \end{algorithmic} \end{algorithm} The procedure $\boldsymbol{\mathsf{CentroidSearch}}$ is given $u_i\in \partial R_i$, $v\in R_{i+1}^{\operatorname{out}}$, $\VD^*_{\out} = \VD^*_{\out}(u_i,R_{i+1})$, and a node $f^*$ in the centroid decomposition of $\VD^*_{\out}$.
It ultimately computes $u_{i+1} \in \partial R_{i+1}$ for which $v\in \operatorname{Vor}(u_{i+1})$ and returns \begin{align*} &\omega(u_{i+1}) + \boldsymbol{\mathsf{Dist}}(u_{i+1},v,R_{i+1}) & \mbox{Line 5 or 9 of $\boldsymbol{\mathsf{CentroidSearch}}$}\\ &\leq \operatorname{dist}_G(u_i,u_{i+1}) + \operatorname{dist}_{R_{i+1}^{\operatorname{out}}}(u_{i+1},v) & \mbox{Defn. of $\omega$; guarantee of $\boldsymbol{\mathsf{Dist}}$ (Eqn.~(\ref{eqn:Spec}))}\\ &= \operatorname{dist}_{G}(u_i,v). & \mbox{Lemma~\ref{lemma:lastsite}} \end{align*} The algorithm is recursive, and bottoms out in one of two base cases (Line 5 or Line 9). The first way the recursion can end is if we reach the bottom of the centroid decomposition. If $f^*$ is a leaf of the decomposition, it corresponds to an edge in $\VD^*_{\out}$ separating the Voronoi cells of two sites, say $s_1$ and $s_2$. At this point we know that either $u_{i+1} = s_1$ or $u_{i+1} = s_2$, and we determine which case is true with two recursive calls to $\boldsymbol{\mathsf{Dist}}(s_j,v,R_{i+1})$, $j\in\{1,2\}$ (Lines 2--5). In general, $f^*$ is dual to a trichromatic face $f$ composed of three vertices $y_0,y_1,y_2$ in clockwise order, which are, respectively, in distinct Voronoi cells of $s_0,s_1,s_2$. The three shortest $s_j$-$y_j$ paths and $f$ partition the vertices of $R_{i+1}^{\operatorname{out}}$ into six parts, namely the three shortest $s_j$-$y_j$ paths themselves, and the interiors of the three regions bounded by $\partial R_{i+1}$, two of the $s_j$-$y_j$ paths, and an edge of $f$. See Figure~\ref{fig:CentroidSearch}. The $\boldsymbol{\mathsf{Navigation}}$ function returns a pair $(\operatorname{flag},a^*)$ that identifies which part $v$ is in. If $\operatorname{flag}=\operatorname{terminal}$ then $a^* \in \{s_0,s_1,s_2\}$ is interpreted as a site, indicating that $v$ lies on the shortest path from $a^*$ to its $f$-vertex. In this case we return $\omega(a^*) + \boldsymbol{\mathsf{Dist}}(a^*,v,R_{i+1}) = \operatorname{dist}_G(u_i,v)$ with just one call to $\boldsymbol{\mathsf{Dist}}$. If $\operatorname{flag}=\operatorname{nonterminal}$ then $a^*$ is the correct child of $f^*$ in the centroid decomposition. In particular, $f^*$ is incident to three edges $e_0^*, e_1^*, e_2^*$ dual to $\{y_0,y_2\},\{y_1,y_0\},\{y_2,y_1\}$. The children of $f^*$ in the centroid decomposition are $f_0^*,f_1^*,f_2^*$, with $f_j^*$ ancestral to $e_j^*$. We have $a^* = f_j^*$ if $v$ lies to the right of the chord $(s_j,\ldots,y_j,y_{j-1},\ldots,s_{j-1})$ in $R_{i+1}^{\operatorname{out}}$. For example, in Figure~\ref{fig:CentroidSearch}, $v$ lies to the right of the $(s_0,\ldots,y_0,y_2,\ldots,s_2)$ path. In this case we continue the search recursively from $a^* = f_0^*$. \begin{figure} \centering \scalebox{.35}{\includegraphics{CentroidSearch.pdf}} \caption{Here $f^*$ is a degree-3 vertex in $\VD^*_{\out}(u_i,R_{i+1})$, corresponding to a trichromatic face $f$ on vertices $y_0,y_1,y_2$, which are in the Voronoi cells of $s_0,s_1,s_2$ on the boundary $\partial R_{i+1}$. The shortest $s_j$-$y_j$ paths partition $V(R_{i+1}^{\operatorname{out}})$ into six parts: the three shortest paths and the three regions bounded by them and $f$. Let $e_0^*,e_1^*,e_2^*$ be the edges in $\VD^*_{\out}$ dual to $\{y_0,y_2\},\{y_1,y_0\},\{y_2,y_1\}$. In the centroid decomposition $e_0^*,e_1^*,e_2^*$ are in separate subtrees of $f^*$.
Let $f_j^*$ be the child of $f^*$ ancestral to $e_j^*$, which is either $e_j^*$ itself, or a trichromatic face to the right of the ``chord'' $(s_j,\ldots, y_j,y_{j-1}, \ldots, s_{j-1})$. $\boldsymbol{\mathsf{CentroidSearch}}$ locates the site whose Voronoi cell contains $v$ via recursion. It calls $\boldsymbol{\mathsf{Navigation}}$, a function that finds which of the 6 parts contains $v$. If $v$ lies on an $s_j$-$y_j$ path the $\boldsymbol{\mathsf{CentroidSearch}}$ recursion terminates; otherwise it recurses on the correct child $f_j^*$ of $f^*$.} \label{fig:CentroidSearch} \end{figure} \begin{algorithm}[H] \caption{$\boldsymbol{\mathsf{CentroidSearch}}(\VD^*_{\out}(u_{i},R_{i+1}),v,f^{*})$} \label{alg:SpecificCentroidSearch} \begin{algorithmic}[1] \Require The dual representation $\VD^*_{\out} = \VD^*_{\out}(u_{i},R_{i+1})$ of a Voronoi diagram with additive weights $\omega(s)=\operatorname{dist}_{G}(u_i,s)$, a vertex $v\in R_{i+1}^{\operatorname{out}}$, and a node $f^{*}$ in the centroid decomposition of $\VD^*_{\out}$. \Ensure The distance $\operatorname{dist}_G(u_i,v)$. \If {$f^{*}$ is a leaf in the centroid decomposition (an edge in $\VD^*_{\out}$)} \State $s_{1},s_{2} \gets$ sites whose Voronoi cells are bounded by $f^{*}$ \Comment{Candidates for $u_{i+1}$} \State $d_{1}\gets \omega(s_{1}) + \boldsymbol{\mathsf{Dist}}(s_{1},v,R_{i+1})$ \State $d_{2}\gets \omega(s_{2}) + \boldsymbol{\mathsf{Dist}}(s_{2},v,R_{i+1})$ \State \Return $\min(d_{1},d_{2})$ \EndIf \State $(\operatorname{flag}, a^*)\gets \boldsymbol{\mathsf{Navigation}}(\VD^*_{\out}(u_{i},R_{i+1}), v, f^{*})$ \If {$\operatorname{flag} = \operatorname{terminal}$} \Comment{$a^*$ is interpreted as a site} \State \Return $\omega(a^*) + \boldsymbol{\mathsf{Dist}}(a^*, v, R_{i+1})$ \Comment{$a^* = u_{i+1}$} \Else \; (i.e., $\operatorname{flag} = \operatorname{nonterminal}$) \Comment{$a^*$ is the centroid child of $f^*$} \State \Return $\boldsymbol{\mathsf{CentroidSearch}}(\VD^*_{\out}(u_{i},R_{i+1}),v,a^*)$ \EndIf \end{algorithmic} \end{algorithm} \begin{lemma} $\boldsymbol{\mathsf{CentroidSearch}}$ correctly computes $\operatorname{dist}_{G}(u_i,v)$. \end{lemma} \begin{proof} Define $f,y_j,s_j,e_j^*,f_j^*$ as usual, and let $u_{i+1}$ be such that $v\in \operatorname{Vor}(u_{i+1})$. The loop invariant is that in the subtree of the centroid decomposition rooted at $f^*$, there is \emph{some} leaf edge on the boundary of the cell $\operatorname{Vor}(u_{i+1})$. This is clearly true in the initial recursive call, when $f^*$ is the root of the centroid decomposition. Suppose that $\boldsymbol{\mathsf{Navigation}}$ tells us that $v$ lies to the right of the oriented chord $C^\star = (s_j,\ldots,y_j,y_{j-1},\ldots,s_{j-1})$. Observe that since the $s_j$-$y_j$ and $s_{j-1}$-$y_{j-1}$ shortest paths are monochromatic, all edges of the centroid decomposition correspond to paths in $G^*$ that lie strictly to the left or right of $C^\star$, with the exception of $e_j^*$. Moreover, since $v\in \operatorname{Vor}(u_{i+1})$, $\operatorname{Vor}(u_{i+1})$ must be bounded by \emph{some} edge that is either $e_j^*$ or one entirely to the right of $C^\star$, from which it follows that $f_j^* = a^*$ is ancestral to at least one edge bounding $\operatorname{Vor}(u_{i+1})$. When $f^*$ is a single edge on the boundary between $\operatorname{Vor}(s_1)$ and $\operatorname{Vor}(s_2)$, the loop invariant guarantees that either $u_{i+1}=s_1$ or $u_{i+1}=s_2$; suppose that $u_{i+1}=s_1$.
It follows from the specification of $\boldsymbol{\mathsf{Dist}}$ (Eqn.~(\ref{eqn:Spec})) and Lemma~\ref{lemma:lastsite} that \[ d_1 = \omega(s_1) + \boldsymbol{\mathsf{Dist}}(s_1,v,R_{i+1}) \leq \operatorname{dist}_G(u_i,s_1) + \operatorname{dist}_{R_{i+1}^{\operatorname{out}}}(s_1,v) = \operatorname{dist}_G(u_i,v). \] Furthermore, \[ d_2 = \omega(s_2) + \boldsymbol{\mathsf{Dist}}(s_2,v,R_{i+1}) \geq \operatorname{dist}_G(u_i,s_2) + \operatorname{dist}_{G}(s_2,v) \geq \operatorname{dist}_G(u_i,v), \] so in this base case $\boldsymbol{\mathsf{CentroidSearch}}$ correctly returns $d_1=\operatorname{dist}_G(u_i,v)$. If $\boldsymbol{\mathsf{Navigation}}$ ever reports that $v$ is on an $s_j$-$y_j$ path, then by definition $v\in \operatorname{Vor}(s_j)$. By the specification of $\boldsymbol{\mathsf{Dist}}$ (Eqn.~(\ref{eqn:Spec})) and Lemma~\ref{lemma:lastsite} we have \[ \omega(s_j) + \boldsymbol{\mathsf{Dist}}(s_j,v,R_{i+1}) \leq \operatorname{dist}_G(u_i,s_j) + \operatorname{dist}_{R_{i+1}^{\operatorname{out}}}(s_j,v) = \operatorname{dist}_G(u_i,v) \] and the base case on Lines 8--9 also works correctly. \end{proof} \medskip Thus, the main challenge is to design an efficient $\boldsymbol{\mathsf{Navigation}}$ function, i.e., to solve the restricted point location problem in $R_{i+1}^{\operatorname{out}}$ depicted in Figure~\ref{fig:CentroidSearch}. Whereas Charalampopoulos et al.~\cite{CharalampopoulosGMW19} solve this problem using several \emph{more} recursive calls to $\boldsymbol{\mathsf{Dist}}$, we give a new method to do this point location directly, in $O(\kappa \log^{1+o(1)} n)$ time per call to $\boldsymbol{\mathsf{Navigation}}$. \section{The Navigation Oracle}\label{sect:NavigationOracle} The input to $\boldsymbol{\mathsf{Navigation}}$ is the same as that of $\boldsymbol{\mathsf{CentroidSearch}}$, except that $f^*$ is guaranteed to correspond to a trichromatic face $f$. Define $y_j,s_j,e_j^*,f_j^*$, $j\in\{0,1,2\}$, as in the discussion of $\boldsymbol{\mathsf{CentroidSearch}}$. The $\boldsymbol{\mathsf{Navigation}}$ function determines the location of $v$ relative to $f$ and the shortest $s_j$-$y_j$ paths. It delegates nearly all the actual computation to two functions: $\boldsymbol{\mathsf{SitePathIndicator}}$, which returns a boolean indicating whether $v$ is on the shortest $s_j$-$y_j$ path, and $\boldsymbol{\mathsf{ChordIndicator}}$, which indicates whether $v$ lies strictly to the right of the oriented chord $(s_j,\ldots,y_j,y_{j-1},\ldots,s_{j-1})$. If so, we return $f_j^*$, the centroid child of $f^*$ lying in this region. Three calls each to $\boldsymbol{\mathsf{SitePathIndicator}}$ and $\boldsymbol{\mathsf{ChordIndicator}}$ suffice to determine the location of $v$. \begin{algorithm}[H] \caption{$\boldsymbol{\mathsf{Navigation}}(\VD^*_{\out}(u_{i},R_{i+1}),v,f^{*})$} \label{alg:Navigation} \begin{algorithmic}[1] \Require The dual representation $\VD^*_{\out}(u_{i},R_{i+1})$ of a Voronoi diagram, a vertex $v\in R_{i+1}^{\operatorname{out}}$, and a centroid $f^{*}$ in the centroid decomposition. The face $f$ is on $y_0,y_1,y_2$, which are in the Voronoi cells of $s_0,s_1,s_2$, and $f_j^*$ is the child of $f^*$ containing the edge dual to $\{y_j,y_{j-1}\}$. \Ensure $(\operatorname{terminal}, s_{j})$ if $v$ is on the shortest $s_j$-$y_j$ path, or $(\operatorname{nonterminal}, f_j^*)$ where $f_j^*$ is the child of $f^*$ ancestral to an edge bounding $v$'s Voronoi cell.
\State $s_{0},s_{1},s_{2}\gets$ sites corresponding to $f^{*}$ \For {$j = 0,1,2$} \If {$\boldsymbol{\mathsf{SitePathIndicator}}(\VD^*_{\out}(u_{i},R_{i+1}),v,f^{*},j)$ returns \textbf{True}} \State \Return $(\operatorname{terminal}, s_{j})$ \EndIf \EndFor \For {$j = 0,1,2$} \If {$\boldsymbol{\mathsf{ChordIndicator}}(\VD^*_{\out}(u_{i},R_{i+1}),v,f^{*},j)$ returns \textbf{True}} \State \Return $(\operatorname{nonterminal}, f_{j}^{*})$ \EndIf \EndFor \end{algorithmic} \end{algorithm} In Section~\ref{sect:chords} we formally introduce the notion of \emph{chords} used informally above, as well as some related concepts like \emph{laminar} sets of chords and \emph{maximal} chords. In Section~\ref{sect:MoreDataStructures} we introduce parts (C)--(E) of the data structure used to support $\boldsymbol{\mathsf{Navigation}}$. The functions $\boldsymbol{\mathsf{SitePathIndicator}}$ and $\boldsymbol{\mathsf{ChordIndicator}}$ are presented in Sections~\ref{sect:SitePathIndicator} and \ref{sect:ChordIndicator}. \subsection{Chords and Pieces}\label{sect:chords} We begin by defining the key concepts of our point location method: \emph{chords}, \emph{laminar chord sets}, \emph{pieces}, and the \emph{occludes} relation. \begin{definition}\label{def:chord} {\bf (Chords)} Fix an $R$ in the $\vec{r}$-division and two vertices $c_0,c_1\in \partial R$. An oriented simple path $\chord{c_0c_1}$ is a \emph{chord} of $R^{\operatorname{out}}$ if it is contained in $R^{\operatorname{out}}$ and is internally vertex-disjoint from $\partial R$. When the orientation is irrelevant we write it as $\overline{c_0c_1}$. \end{definition} \begin{definition}\label{def:laminar} {\bf (Laminar Chord Sets)} A set of chords $\mathcal{C}$ for $R^{\operatorname{out}}$ is \emph{laminar} (non-crossing) if for any two chords $C=\chord{c_0c_1},C'=\chord{c_2c_3}$ and any $v\in (C\cap C')-\partial R$, the subpaths from $c_0$ to $v$ and from $c_2$ to $v$ are identical; in particular $c_0=c_2$. \end{definition} The orientation of chords does not always coincide with a natural orientation of paths defined by the algorithm. For example, in Figure~\ref{fig:CentroidSearch}, the oriented chord $\chord{s_0s_2} = (s_0,\ldots,y_0,y_2,\ldots,s_2)$ is composed of three parts: a shortest $s_0$-$y_0$ path (whose natural orientation coincides with that of $\chord{s_0s_2}$), the edge $\{y_0,y_2\}$ (which has no natural orientation in this context), and the shortest $s_2$-$y_2$ path (whose natural orientation is the reverse of its orientation in $\chord{s_0s_2}$). The orientation serves two purposes. In Definition~\ref{def:chord} we can speak unambiguously about the parts of $R^{\operatorname{out}}$ to the \emph{right} and \emph{left} of $\chord{s_0s_2}$. In Definition~\ref{def:laminar} the role of the orientation is to ensure that the partition of $R^{\operatorname{out}}$ into \emph{pieces} induced by $\mathcal{C}$ can be represented by a \emph{tree}, as we show in Lemma~\ref{lem:piecetree}. \begin{definition}\label{def:pieces} {\bf (Pieces)} A laminar chord set $\mathcal{C}$ for $R^{\operatorname{out}}$ partitions the faces of $R^{\operatorname{out}}$ into pieces, excluding the face on $\partial R$. Two faces $f,g$ are in the same piece iff $f^*$ and $g^*$ are connected by a path in $(R^{\operatorname{out}})^*$ that avoids the duals of edges in $\mathcal{C}$ and edges along the boundary cycle on $\partial R$. A piece is regarded as the subgraph induced by its faces, i.e., it includes their constituent vertices and edges.
Two pieces $P_1,P_2$ are \emph{adjacent} if there is an edge $e$ on the common boundary of $P_1$ and $P_2$ such that $e$ is in a \underline{\emph{unique}} chord of $\mathcal{C}$. See Figure~\ref{fig:PieceTree}. \end{definition} \begin{figure} \centering \scalebox{.4}{\includegraphics{PieceTree.pdf}} \caption{A laminar set of chords partitions $R^{\operatorname{out}}$ into pieces. Observe that the chords separating pieces $P_5$--$P_9$ overlap in certain prefixes. The piece tree is indicated by diamond vertices and pink edges. Note that two pieces (e.g., $P_5$ and $P_9$) may share a boundary, but not be \emph{adjacent}.} \label{fig:PieceTree} \end{figure} \begin{lemma}\label{lem:piecetree} Suppose $\mathcal{C}$ is a laminar chord set for $R^{\operatorname{out}}$, $\mathcal{P}=\mathcal{P}(\mathcal{C})$ is the corresponding piece set, and $\mathcal{E}$ is the set of pairs of adjacent pieces. Then $\mathcal{T}=(\mathcal{P},\mathcal{E})$ is a tree, called the \emph{piece tree} induced by $\mathcal{C}$. \end{lemma} \begin{proof} The claim is clearly true when $\mathcal{C}$ contains at most one chord, so we will reduce the general case to this case via a peeling argument. We will find a piece $P$ with degree 1 in $\mathcal{T}$, remove it and the chord bounding it, and conclude by induction that the truncated instance is a tree. Reattaching $P$ implies $\mathcal{T}$ is a tree. Let $C=\chord{c_0c_1}\in\mathcal{C}$ be a chord such that no edge of any other chord appears strictly to one side of $C$, say to the right of $C$. Let $P$ be the piece to the right of $C$. (In Figure~\ref{fig:PieceTree} the chords bounding $P_1,P_2,P_{11},P_{12}$ would be eligible to be $C$.) Let $C=(c_0=v_0,v_1,v_2,\ldots,v_k=c_1)$ and let $v_{j^\star}$ be such that the edges of the suffix $(v_{j^\star},\ldots,v_k)$ are on no other chord, meaning the vertices $\{v_{j^\star+1},\ldots,v_{k-1}\}$ are on no other chord. Let $g_j$ be the face to the left of $(v_j,v_{j+1})$. It follows that there is a path from $g_{j^\star}^*$ to $g_{k-1}^*$ in $(R^{\operatorname{out}})^*$ that avoids the duals of all edges in $\mathcal{C}$ and along $\partial R$. All pieces adjacent to $P$ contain some face among $\{g_{j^\star},\ldots,g_{k-1}\}$, but these faces all lie in a single piece, hence $P$ corresponds to a degree-1 vertex in $\mathcal{T}$. Note that $P$ is bounded by $C$ and an interval $B$ of the boundary cycle on $\partial R$. Obtain the ``new'' $R^{\operatorname{out}}$ by cutting along $C$ and removing $P$, the new $\partial R$ by substituting $C$ for $B$, and the new chord-set $\mathcal{C}$ by removing $C$ and trimming any chords that shared a non-empty prefix with $C$. By induction the resulting piece-adjacency graph is a tree; reattaching $P$ as a degree-1 vertex shows $\mathcal{T}$ is a tree. \end{proof} \begin{definition}\label{def:occludes} {\bf (Occludes Relation)} Fix $R^{\operatorname{out}}$, a chord $C$, and two faces $f,g$, neither of which is the hole defined by $\partial R$. If $f$ and $g$ are on opposite sides of $C$, we say that from vantage $f$, $C$ \emph{occludes} $g$. Let $\mathcal{C}$ be a set of chords. We say $C\in \mathcal{C}$ is \emph{maximal} in $\mathcal{C}$ with respect to a vantage $f$ if there is no $C' \in \mathcal{C}$ such that $C'$ occludes a \emph{strict} superset of the faces that $C$ occludes. (Note that the orientation of chords is irrelevant to the occludes relation.)
\end{definition} It follows from Definition~\ref{def:occludes} that if $\mathcal{C}$ is laminar, the set of maximal chords with respect to $f$ is exactly the set of chords intersecting the boundary of $f$'s piece in $\mathcal{P}(\mathcal{C})$. We can also speak unambiguously about a chord $C$ occluding a \emph{vertex} or \emph{edge} not on $C$, from a certain vantage. Specifically, we can say that from some vantage, $C$ occludes an \emph{interval} of the boundary cycle on $\partial R$, say according to a clockwise traversal around the hole on $\partial R$ in $R^{\operatorname{out}}$.\footnote{This is one place where we use the assumption that all boundary holes are simple cycles.} This will be used in the $\boldsymbol{\mathsf{ChordIndicator}}$ procedure of Section~\ref{sect:ChordIndictor-procedure}. \subsection{Data Structures for Navigation}\label{sect:MoreDataStructures} Parts (C)--(E) of the data structure are used to implement the $\boldsymbol{\mathsf{SitePathIndicator}}$ and $\boldsymbol{\mathsf{ChordIndicator}}$ functions. \begin{enumerate} \item[(C)] \textbf{(More Voronoi Diagrams)} For each $i$, each $R_{i}\in \mathcal{R}_{i}$, and each $q\in\partial R_{i}$, we store $\VD^*_{\out}(q,R_i)$, which is $\mathrm{VD}^*[R_i^{\operatorname{out}},\partial R_i,\omega]$, where $\omega(s)=\operatorname{dist}_G(q,s)$. The total space for these diagrams is $\tilde{O}(n)$ and is dominated by part (B). \item[(D)] \textbf{(Chord Trees; Piece Trees)} For each $i$, each $R_i\in\mathcal{R}_i$, and each source $q\in \partial R_i$, we store the SSSP tree from $q$ induced by $\partial R_i$ as a \emph{chord tree} $T_{q}^{R_i}$. In particular, the parent of $x\in\partial R_i$ in $T_{q}^{R_i}$ is the nearest ancestor in the SSSP tree from $q$ that lies on $\partial R_i$. Every edge of $T_{q}^{R_i}$ is designated a \emph{chord} if the corresponding path is contained in $R_{i}^{\operatorname{out}}$ but not in $R_i$, or a \emph{non-chord} otherwise. Define $\mathcal{C}_{q}^{R_i}$ to be the set of all chords in $T_{q}^{R_i}$, oriented away from $q$; this is clearly a laminar set since shortest paths are unique and all prefixes of shortest paths are shortest paths. Define $\mathcal{P}_{q}^{R_i}$ to be the corresponding partition of $R_{i}^{\operatorname{out}}$ into pieces, and $\mathcal{T}_{q}^{R_i}$ the corresponding piece tree. Define $T_{q}^{R_i}[x]$ to be the path from $q$ to $x$ in $T_{q}^{R_i}$, $\mathcal{C}_{q}^{R_i}[x]$ the corresponding chord-set, and $\mathcal{P}_{q}^{R_i}[x]$ the corresponding piece-set. The data structure answers the following queries: \begin{description} \item[$\boldsymbol{\mathsf{MaximalChord}}(R_i,q,x,P,P')$:] We are given $R_i$, $q,x\in\partial R_i$, a piece $P\in \mathcal{P}_{q}^{R_i}$, and possibly another piece $P'\in \mathcal{P}_{q}^{R_i}$ (which may be \textbf{Null}). If $P'$ is \textbf{Null}, return any maximal chord in $\mathcal{C}_{q}^{R_i}[x]$ from vantage $P$. If $P'$ is not \textbf{Null}, return the maximal chord in $\mathcal{C}_{q}^{R_i}[x]$ (if any) that occludes $P'$ from vantage $P$. \item[$\boldsymbol{\mathsf{AdjacentPiece}}(R_i,q,e)$:] Here $e$ is an edge on the boundary cycle on $\partial R_i$. Return the unique piece in $\mathcal{P}_{q}^{R_i}$ with $e$ on its boundary.\footnote{This is another place where we use the assumption that holes are bounded by simple cycles.} \end{description} \item[(E)] \textbf{(Site Tables; Side Tables)} Fix an $i$ and a diagram $\VD^*_{\out} = \VD^*_{\out}(u',R_i)$ from part (B) or (C).
Let $f^*$ be any node in the centroid decomposition of $\VD^*_{\out}$, with $y_j,s_j$, $j\in\{0,1,2\}$, defined as usual, and let $R_{i'}\in \mathcal{R}_{i'}$ be the level-$i'$ ancestor of $R_i$. Fix $j\in\{0,1,2\}$ and $i'>i$. Define $q$ and $x$ to be the \emph{first} and \emph{last} vertices on the shortest $s_j$-$y_j$ path that lie on $\partial R_{i'}$. We store $(q,x)$ and $\operatorname{dist}_{G}(u',x)$. We also store whether $R_{i'}^{\operatorname{out}}$ lies to the left or right of the site-centroid-site chord $\chord{s_jy_jy_{j-1}s_{j-1}}$ in $R_i^{\operatorname{out}}$, or {\bf Null} if the relationship cannot be determined, i.e., if the chord crosses $\partial R_{i'}$. These tables increase the space of (B) and (C) by a negligible $O(m)$ factor. \end{enumerate} Part (D) of the data structure is the only one that is non-trivial to store compactly. Our strategy is as follows. We fix $R_i$ and $q\in \partial R_i$ and build a dynamic data structure for these operations relative to a dynamic subset $\hat{\mathcal{C}} \subseteq \mathcal{C}_{q}^{R_i}$, subject to the insertion and deletion of chords in $O(\log|\partial R_i|)$ time. By inserting/deleting $O(|\partial R_i|)$ chords in the correct order, we can arrange that $\hat{\mathcal{C}} = \mathcal{C}_{q}^{R_i}[x]$ at some point in time, for every $x\in \partial R_i$. Using the generic persistence technique for RAM data structures (see~\cite{Dietz89}) we can answer $\boldsymbol{\mathsf{MaximalChord}}$ queries relative to $\mathcal{C}_{q}^{R_i}[x]$ in $O(\log|\partial R_i|\log\log |\partial R_i|)$ time.\footnote{Our data structure works in the pointer machine model, but it has unbounded in-degrees, so the theorem of Driscoll et al.~\cite{DriscollSST89,SarnakT86} cannot be applied directly. It is probably possible to improve the bound to $O(\log|\partial R_i|)$ but this is not a bottleneck in our algorithm.} \begin{lemma}\label{lem:partD} Part (D) of the data structure can be stored in $O(mn\log n)$ total space and answer $\boldsymbol{\mathsf{MaximalChord}}$ queries in $O(\log n\log\log n)$ time and $\boldsymbol{\mathsf{AdjacentPiece}}$ queries in $O(1)$ time. \end{lemma} \begin{proof} We first address $\boldsymbol{\mathsf{MaximalChord}}$. Let $\mathcal{T} = \mathcal{T}_{q}^{R_i}$ be the piece tree. The edges of $\mathcal{T}$ are in 1-1 correspondence with the chords of $\mathcal{C}=\mathcal{C}_{q}^{R_i}$, and if $P,P'\in \mathcal{P}=\mathcal{P}_{q}^{R_i}$ are two pieces, the path from $P$ to $P'$ in $\mathcal{T}$ crosses exactly those chords that occlude $P'$ from vantage $P$ (and vice versa). We will argue that to implement $\boldsymbol{\mathsf{MaximalChord}}$ it suffices to design an efficient dynamic data structure for the following problem; initially all edges are \emph{unmarked}. \begin{description} \item[Mark$(e)$] Mark an edge $e\in E(\mathcal{T})$. \item[Unmark$(e)$] Unmark $e$. \item[LastMarked$(P',P)$] Return the \emph{last} marked edge on the path from $P'$ to $P$, or \textbf{Null} if all are unmarked. \end{description} By doing a depth-first traversal of the chord tree $T_{q}^{R_i}$, marking/unmarking chords as they are encountered, the set $\{e\in E(\mathcal{T}) \mid e \mbox{ is marked}\}$ will be equal to $\mathcal{C}_{q}^{R_i}[x]$ precisely when $x$ is first encountered in the DFS. To answer a $\boldsymbol{\mathsf{MaximalChord}}(R_i,q,x,P,P')$ query we interact with the state of the data structure when the marked set is $\mathcal{C}_{q}^{R_i}[x]$. If $P'$ is not \textbf{Null} we return {\bf LastMarked}$(P',P)$.
Otherwise we pick an arbitrary (marked) chord $C\in \mathcal{C}_{q}^{R_i}[x]$, get the adjacent pieces $P_1',P_2'$ on either side of $C$, then query {\bf LastMarked}$(P_1',P)$ and {\bf LastMarked}$(P_2',P)$. At least one of these queries will return a chord, and that chord is maximal w.r.t.~vantage $P$. (Note that $C$ must separate $P$ from either $P_1'$ or $P_2'$.) We now explain how all three operations can be implemented in $O(\log n)$ worst-case time. The ideas are standard, so we do not go into great detail. Root $\mathcal{T}$ arbitrarily and subdivide every edge; the resulting tree is also called $\mathcal{T}$. Every node in $\mathcal{T}$ knows its depth. The \emph{vertices} corresponding to subdivided edges may carry marks. In order to answer {\bf LastMarked} queries it suffices to be able to find least common ancestors, and, given nodes $P_d,P_a$, where $P_a$ is an ancestor of $P_d$, to find the \emph{first \underline{and} last} marked node on the path from $P_d$ to $P_a$. Decompose the vertices of $\mathcal{T}$ using a heavy path decomposition. Each vertex points to the path that it is in. Each path in the decomposition is a data structure that maintains an ordered set of its marked nodes, a pointer to the most ancestral marked node in the path, and a pointer to the parent, in $\mathcal{T}$, of the root of the path. It is straightforward to find LCAs in this structure in $O(\log n)$ time.\footnote{Of course, $O(1)$ time is also possible~\cite{HarelT84,BenderF00} but this is not the bottleneck in the algorithm.} Suppose we want the first and last marked node on the path from $P_d$ to $P_a$, an ancestor of $P_d$. Let $Z_0,Z_1,\ldots,Z_{\ell}$, $\ell=O(\log n)$, be the heavy paths ancestral to $P_d$ such that $P_d\in Z_0, P_a\in Z_{\ell}$. Let $v_j\in Z_j$ be the nearest ancestor of $P_d$ on $Z_j$. We can find the index $j^\star$ such that $Z_{j^\star}$ contains the first marked node on the $P_d$-$P_a$ path in $O(\ell)=O(\log n)$ time by comparing $v_j$ against the most ancestral marked node in $Z_j$, $j=0,1,\ldots$. We can then find the first marked node by finding the marked predecessor of $v_{j^\star}$ in $Z_{j^\star}$, in $O(\log n)$ time. Finding the last marked node on the path from $P_d$ to $P_a$ is similar. {\bf Mark} and {\bf Unmark} are implemented by keeping a balanced binary search tree over the marked nodes in each heavy path. For fixed $R_i$ and $q\in \partial R_i$ there are $O(|\partial R_i|)$ \textbf{Mark} and \textbf{Unmark} operations, each taking $O(\log n)$ time. Over all choices of $i, R_i$, and $q$ the total update time is $O(mn\log n)$. After applying generic persistence for RAM data structures (see~\cite{Dietz89}) the space becomes $O(mn\log n)$ and the query time for \textbf{LastMarked} becomes $O(\log n\log\log n)$. Turning to $\boldsymbol{\mathsf{AdjacentPiece}}(R_i,q,e)$, there are $|\partial R_i|^2$ choices of $(q,e)$. Hence all answers can be precomputed in a lookup table in $O(mn)$ space.
\end{proof} \subsection{The $\boldsymbol{\mathsf{SitePathIndicator}}$ Function}\label{sect:SitePathIndicator} The $\boldsymbol{\mathsf{SitePathIndicator}}$ function is relatively simple. We are given $\VD^*_{\out}(u_i,R_{i+1})$, a vertex $v\in R_{i+1}^{\operatorname{out}}$, a centroid node $f^*$, where $f$ is a trichromatic face on $y_0,y_1,y_2$, which are, respectively, in the Voronoi cells of $s_0,s_1,s_2\in \partial R_{i+1}$, and an index $j\in\{0,1,2\}$. We would like to know if $v$ is on the shortest $s_j$-to-$y_j$ path. Recall that $t$ is such that $v\not\in R_t$ but $v\in R_{t+1}$. \begin{figure} \centering \begin{tabular}{c@{\hspace*{1cm}}c} \scalebox{.30}{\includegraphics{SitePathIndicator1.pdf}} & \scalebox{.30}{\includegraphics{SitePathIndicator2.pdf}}\\ & \\ {\bf (a)} & {\bf (b)} \end{tabular} \caption{{\bf (a)} If $z=x$ and $y_j$ is not in $R_{t+1}$, $x'$ is the last vertex of $\partial R_{t+1}$ on the $s_j$-$y_j$ path. {\bf (b)} If $z=x$ and $y_j$ is in $R_t^{\operatorname{out}}\cap R_{t+1}$ then $x'=y_j$. (Not depicted: if $y_j\in R_t$ then $x'=x$.) We test whether $v$ is on the shortest $x$-$x'$ path. If $z\neq x$ then $z'$ is well defined and the position of $y_j$ is immaterial; we test whether $v$ is on the shortest $z$-$z'$ path (depicted in {\bf (a)}).} \label{fig:SitePathIndicator} \end{figure} Using the lookup tables in part (E) of the data structure, we find the first and last vertices ($q$ and $x$) of $\partial R_t$ on the $s_j$-$y_j$ path. If $q,x$ do not exist then $v$ is certainly not on the $s_j$-$y_j$ path (Line 4). Using parts (A,C,E) of the data structure, we invoke $\boldsymbol{\mathsf{PointLocate}}$ to find the last vertex $z$ of $\partial R_t$ on the shortest path (in $G$) from $q$ to $v$. (See Lemma~\ref{lemma:lastsite}.)
If $z$ is not on the path from $q$ to $x$ in $G$ (which corresponds to it not being on the path from $q$ to $x$ in $T_q^{R_t}$, stored in part (D)), then once again $v$ is certainly not on the $s_j$-$y_j$ path (Line 8). So we may assume $z$ lies on the $q$-$x$ path. If $z=x$ then there are three cases to consider, depending on whether the destination $y_j$ of the path is in $R_t^{\operatorname{out}}\cap R_{t+1}$, or in $R_{t+1}^{\operatorname{out}}$, or in $R_t$. If $y_j\in R_t^{\operatorname{out}}\cap R_{t+1}$ we let $x' = y_j$; if $y_j\in R_{t+1}^{\operatorname{out}}$ we let $x'$ be the last vertex of $\partial R_{t+1}$ encountered on the shortest $s_j$-$y_j$ path (part (E)); and if $y_j\in R_t$ we let $x'=x$. In all cases, $x'$ is the last vertex of the shortest $s_j$-$y_j$ path that is contained in the relevant subgraph $R_t^{\operatorname{out}}\cap R_{t+1}$. (Figure~\ref{fig:SitePathIndicator}(a,b) illustrates the first two possibilities for $x'$.) Now $v$ is on the $s_j$-$y_j$ path iff it is on the $x$-$x'$ shortest path, which can be checked using part (A) of the data structure (Lines 19, 21). (Figure~\ref{fig:SitePathIndicator}(b) illustrates one way for $v$ to appear on the $x$-$x'$ path.) In the remaining case $z$ is on the shortest $q$-$x$ path but is not $x$, meaning the child $z'$ of $z$ on $T_q^{R_t}[x]$ is well defined. If $\chord{zz'}$ is a chord (corresponding to a path in $R_t^{\operatorname{out}}$) then $v$ is on the shortest $s_j$-$y_j$ path iff it is on the shortest $z$-$z'$ path in $R_t^{\operatorname{out}}$, which, once again, can be checked with part (A) of the data structure (Lines 26, 28). See Figure~\ref{fig:SitePathIndicator}(a) for an illustration of this case. \begin{remark}\label{remark:PointLocate} Strictly speaking, we cannot apply Lemma~\ref{lem:PointLocate} (Gawrychowski et al.~\cite{GawrychowskiMWW18}) since we do not have an \textsf{MSSP}{} structure for all of $R_{t}^{\operatorname{out}}$. Part (A) only handles distance/LCA queries when the query vertices are in $R_t^{\operatorname{out}}\cap R_{t+1}$. It is easy to make Gawrychowski et al.'s algorithm work using parts (A) and (E) of the data structure. See the discussion at the end of Section~\ref{sect:sidequeries}. \end{remark} \begin{algorithm}[H] \caption{$\boldsymbol{\mathsf{SitePathIndicator}}(\VD^*_{\out}(u_{i},R_{i+1}),v,f^{*},j)$} \label{alg:SiteIndicator} \begin{algorithmic}[1] \Require The dual representation $\VD^*_{\out}(u_{i},R_{i+1})$ of a Voronoi diagram, a vertex $v\in R_{i+1}^{\operatorname{out}}$, and an index $j\in\{0,1,2\}$ identifying the $s_{j}$-to-$y_{j}$ site-centroid shortest path in $\mathrm{VD}^{*}$ ($s_{j},y_{j}$ are with respect to $f^{*}$). \Ensure \textbf{True} if $v$ is on the $s_{j}$-to-$y_{j}$ shortest path, or \textbf{False} otherwise. \State $R_{t}\gets$ the ancestor of $R_{i}$ s.t. $v\notin R_{t},v\in R_{t+1}$. \State $(q,x) \gets$ first and last $\partial R_t$ vertices on the shortest $s_j$-$y_j$ path. \Comment{Part (E) of the data structure} \If {$q,x$ are \textbf{Null}} \State \Return \textbf{False} \EndIf \State $z\gets \boldsymbol{\mathsf{PointLocate}}(\VD^*_{\out}(q, R_{t}),v)$ \Comment{Uses parts (A,C,E) of the data structure} \If {$z$ is not on $T_{q}^{R_{t}}[x]$} \State \Return \textbf{False} \EndIf \If {$z=x$} \If {$y_j$ is in $R_t^{\operatorname{out}} \cap R_{t+1}$} \State $x' \gets y_j$ \ElsIf{$y_j \not\in R_{t+1}$} \State $x' \gets $ last $\partial R_{t+1}$ vertex on the $s_j$-$y_j$ path.
\Comment{Part (E)} \Else \State $x' \gets x$ \Comment{I.e., $y_j \not\in R_t^{\operatorname{out}}$} \EndIf \If {$v$ is on the shortest $x$-$x'$ path} \Comment{Part (A)} \State \Return \textbf{True} \Else \State \Return \textbf{False} \EndIf \EndIf \State $z'\gets$ the child of $z$ on $T_{q}^{R_{t}}[x]$ \Comment{Part (D)} \If {$\chord{zz'}$ is a chord in $\mathcal{C}_q^{R_t}[x]$ and $v$ is on the shortest $z$-$z'$ path in $R_t^{\operatorname{out}}$} \Comment{Part (A)} \State \Return \textbf{True} \EndIf \State \Return \textbf{False} \end{algorithmic} \end{algorithm} \subsection{The $\boldsymbol{\mathsf{ChordIndicator}}$ Function}\label{sect:ChordIndicator} The $\boldsymbol{\mathsf{ChordIndicator}}$ function is given $\VD^*_{\out}(u_i,R_{i+1})$, $v\in R_{i+1}^{\operatorname{out}}$, a centroid $f^*$, with $\{y_j,s_j\}$ defined as usual, and an index $j\in\{0,1,2\}$. The goal is to report whether $v$ lies to the right of the oriented \emph{site-centroid-site} chord \[ C^\star = \chord{s_jy_jy_{j-1}s_{j-1}}, \] which is composed of the shortest $s_j$-$y_j$ and $s_{j-1}$-$y_{j-1}$ paths, and the single edge $\{y_j,y_{j-1}\}$. It is guaranteed that $v$ does not lie on $C^\star$, as this case is already handled by the $\boldsymbol{\mathsf{SitePathIndicator}}$ function. Figure~\ref{fig:SiteCentroidSite} illustrates why this point location problem is so difficult. Since we know $v\in R_{t+1}$ but not in $R_t$, we can narrow our attention to $R_t^{\operatorname{out}}\cap R_{t+1}$. However, the projection of $C^\star$ onto $R_t^{\operatorname{out}}$ can touch the boundary $\partial R_t$ an arbitrary number of times. Define $\mathcal{C}$ to be the set of oriented chords of $R_t^{\operatorname{out}}$ obtained by projecting $C^\star$ onto $R_t^{\operatorname{out}}$. \begin{figure} \centering \begin{tabular}{ccc} \multicolumn{3}{c}{\scalebox{.4}{\includegraphics{SiteCentroidSite.pdf}}}\\ &&\\ \multicolumn{3}{c}{{\bf (a)}}\\ &&\\ &&\\ \scalebox{.3}{\includegraphics{C1.pdf}} & \scalebox{.3}{\includegraphics{C2.pdf}} & \scalebox{.3}{\includegraphics{C3.pdf}}\\ &&\\ {\bf (b)} & {\bf (c)} & {\bf (d)} \end{tabular} \caption{{\bf (a)} The projection of a site-centroid-site chord $C^\star = \protect\overrightarrow{s_jy_jy_{j-1}s_{j-1}}$ of $R_{i+1}^{\operatorname{out}}$ onto $R_t^{\operatorname{out}}$ yields a set $\mathcal{C}$ of chords of $R_t^{\operatorname{out}}$, partitioned into three classes. Let $q_j,x_j$ and $q_{j-1},x_{j-1}$ be the first and last $\partial R_t$-vertices on the $s_j$-$y_j$ and $s_{j-1}$-$y_{j-1}$ paths. {\bf (b)} $\mathcal{C}_1$: all chords in $T_{q_j}^{R_t}[x_j]$. {\bf (c)} $\mathcal{C}_2$: all chords in $T_{q_{j-1}}^{R_t}[x_{j-1}]$. Their orientation is the reverse of their counterparts in $C^\star$. {\bf (d)} $\mathcal{C}_3$: the single chord $\protect\overrightarrow{x_jy_jy_{j-1}x_{j-1}}$. } \label{fig:SiteCentroidSite} \end{figure} Luckily $\mathcal{C}$ has some structure. Let $(q_j,x_j)$ and $(q_{j-1},x_{j-1})$ be the first and last $\partial R_t$ vertices on the shortest $s_j$-$y_j$ and $s_{j-1}$-$y_{j-1}$ paths, respectively. (One or both of these pairs may not exist.) The chords of $\mathcal{C}$ are in one-to-one correspondence with the chords of $\mathcal{C}_1 \cup \mathcal{C}_2 \cup \mathcal{C}_3$, defined below, but as we will see, sometimes with their orientation reversed. \begin{enumerate} \item[$\mathcal{C}_1$:] By definition $\mathcal{C}_1 = \mathcal{C}_{q_j}^{R_t}[x_j]$ contains all the chords on the path from $q_j$ to $x_j$, stored in part (D) of the data structure.
Moreover, the orientation of $\mathcal{C}_1$ agrees with the orientation of $C^\star$. The blue chords of Figure~\ref{fig:SiteCentroidSite}(a) are isolated as $\mathcal{C}_1$ in Figure~\ref{fig:SiteCentroidSite}(b). \item[$\mathcal{C}_2:$] By definition $\mathcal{C}_2 = \mathcal{C}_{q_{j-1}}^{R_t}[x_{j-1}]$ contains all the chords on the path from $q_{j-1}$ to $x_{j-1}$. The red chords of $\mathcal{C}$ in Figure~\ref{fig:SiteCentroidSite}(a) are \emph{represented} by the chords of $\mathcal{C}_2$, but with reversed orientation. Figure~\ref{fig:SiteCentroidSite}(c) depicts $\mathcal{C}_2$. \item[$\mathcal{C}_3:$] This is the singleton set containing the oriented chord $\chord{x_jx_{j-1}}$ consisting of the shortest $x_j$-$y_j$ and $x_{j-1}$-$y_{j-1}$ paths and the edge $\{y_j,y_{j-1}\}$. \end{enumerate} The chord-set $\mathcal{C}$ partitions $R_t^{\operatorname{out}}$ into a piece-set $\mathcal{P}$, with one such piece $P\in \mathcal{P}$ containing $v$. (Remember that $v$ is not on $C^\star$.) We can also consider the piece-sets $\mathcal{P}_1,\mathcal{P}_2,\mathcal{P}_3$ generated by $\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3$. Let $P_1\in\mathcal{P}_1, P_2\in \mathcal{P}_2, P_3\in\mathcal{P}_3$ be the pieces containing $v$. Since, ignoring orientation, $\mathcal{C}=\mathcal{C}_1\cup\mathcal{C}_2\cup\mathcal{C}_3$, it must be that $P=P_1\cap P_2 \cap P_3$. In order to determine whether $v$ is to the right of $C^\star$, it suffices to find some chord $C\in\mathcal{C}$ bounding $P$ and ask whether $v$ is to the right of $C$. Such a $C$ must also be on the boundary of one of $P_1,P_2,$ or $P_3$. The high-level strategy of $\boldsymbol{\mathsf{ChordIndicator}}$ is as follows. First, we will find some piece $P_1' \in \mathcal{P}_{q_j}^{R_t}$ that is contained in $P_1$ using the procedure $\boldsymbol{\mathsf{PieceSearch}}$ described below, in Section~\ref{sect:PieceSearchprocedure}. The chords of $\mathcal{C}_1$ bounding $P_1$ are precisely the \emph{maximal} chords in $\mathcal{C}_1$ from vantage $P_1'$. Using $\boldsymbol{\mathsf{MaximalChord}}$ (part (D)) we will find a candidate chord $C_1\in\mathcal{C}_1$, and one edge $e$ on the boundary cycle of $\partial R_t$ occluded by $C_1$ from vantage $P_1'$. Turning to $\mathcal{C}_2$, we use $\boldsymbol{\mathsf{AdjacentPiece}}$ to find the piece $P_e \in \mathcal{P}_{q_{j-1}}^{R_t}$ adjacent to $e$. Then, using $\boldsymbol{\mathsf{PieceSearch}}$ and $\boldsymbol{\mathsf{MaximalChord}}$ again, we find a $P_2'\in\mathcal{P}_{q_{j-1}}^{R_t}$ contained in $P_2$ and the maximal chord $C_2$ occluding $P_e$ from vantage $P_2'$. Let $C_3$ be the singleton chord in $\mathcal{C}_3$. We determine the ``best'' chord $C_\ell \in \{C_1,C_2,C_3\}$, decide whether $v$ lies to the right of $C_{\ell}$, and return this answer if $\ell\in\{1,3\}$ or reverse it if $\ell=2$. Recall that chords in $\mathcal{C}_2$ have the opposite orientation to their counterparts in $\mathcal{C}$. $\boldsymbol{\mathsf{PieceSearch}}$ is presented in Section~\ref{sect:PieceSearchprocedure} and $\boldsymbol{\mathsf{ChordIndicator}}$ in Section~\ref{sect:ChordIndictor-procedure}. \subsubsection{$\boldsymbol{\mathsf{PieceSearch}}$}\label{sect:PieceSearchprocedure} We are given a region $R_t$, a vertex $v\in R_t^{\operatorname{out}}\cap R_{t+1}$, and two vertices $q,x\in \partial R_t$. We must locate \emph{any} piece $P' \in\mathcal{P}_q^{R_t}$ that is contained in the unique piece $P \in \mathcal{P}_q^{R_t}[x]$ containing $v$.
The first thing we do is find the \emph{last} $\partial R_t$ vertex $z$ on the shortest path from $q$ to $v$, which can be found with a call to $\boldsymbol{\mathsf{PointLocate}}$ on $\VD^*_{\out}(q,R_t)$. (This uses parts (A,C,E) of the data structure.) The shortest path from $z$ to $v$ cannot cross any chord in $\mathcal{C}_q^{R_t}[x]$ (since they are part of a shortest path), but it can coincide with a prefix of some chord in $\mathcal{C}_q^{R_t}[x]$. Thus, if no chord of $\mathcal{C}_q^{R_t}[x]$ is incident to $z$, then we are free to return \emph{any} piece containing $z$. (There may be multiple options if $z$ is an endpoint of a chord in $\mathcal{C}_q^{R_t}$. This case is depicted in Figure~\ref{fig:PieceSearch}. When $z = z_0$, we know that $v\in P_5\cup \cdots \cup P_9$ and return any piece containing $z$.) In general $z$ may be incident to up to two chords $C_1,C_2 \in \mathcal{C}_q^{R_t}[x]$. (This occurs when the shortest $q$-$x$ path touches $\partial R_t$ at $z$ without leaving $R_t^{\operatorname{out}}$.) In this case we determine which side of $C_1$ and $C_2$ $v$ is on (using parts (A) and (E) of the data structure; see Lemma~\ref{lem:chordside} in Section~\ref{sect:sidequeries} for details) and return the appropriate piece adjacent to $C_1$ or $C_2$. This case is depicted in Figure~\ref{fig:PieceSearch} with $z=z_1$; the three possible answers coincide with $v\in\{v_1,v_2,v_3\}$. \begin{algorithm} \caption{$\boldsymbol{\mathsf{PieceSearch}}(R_{t},q,x,v)$} \label{alg:PieceSearch} \begin{algorithmic}[1] \Require A region $R_{t}$, two vertices $q,x \in\partial R_{t}$, and a vertex $v$ not on the $q$-to-$x$ shortest path in $G$. \Ensure A piece $P'\in\mathcal{P}_{q}^{R_{t}}$, which is a subpiece of the unique piece $P\in\mathcal{P}_{q}^{R_{t}}[x]$ containing $v$. \State $z \gets \boldsymbol{\mathsf{PointLocate}}(\VD^*_{\out}(q, R_{t}),v)$ \Comment{Uses parts (A,C,E) of the data structure} \If {$z$ is not the endpoint of any chord in $\mathcal{C}_{q}^{R_{t}}[x]$} \State \Return any piece in $\mathcal{P}_{q}^{R_{t}}$ containing $z$. \EndIf \State $C_{1},C_{2}\gets$ two chords in $\mathcal{C}_{q}^{R_{t}}[x]$ adjacent to $z$ ($C_2$ may be {\bf Null}) \State Determine whether $v$ is to the left or right of $C_1$ and $C_2$. \Comment{Part (A); see Lemma~\ref{lem:chordside}} \State \Return a piece adjacent to $C_1$ or $C_2$ that respects the queries of Line 6. \end{algorithmic} \end{algorithm} \begin{figure} \centering \scalebox{.45}{\includegraphics{PieceSearch.pdf}} \caption{Solid chords are in $\mathcal{C}_q^{R_t}[x]$. Dashed chords are in $\mathcal{C}_q^{R_t}$ but not $\mathcal{C}_q^{R_t}[x]$. When $z = z_0, v = v_0$, the piece in $\mathcal{P}_q^{R_t}[x]$ containing $v$ is the union of $P_5$--$P_9$. $\boldsymbol{\mathsf{PieceSearch}}$ reports any piece containing $z_0$. When $z=z_1, v\in \{v_1,v_2,v_3\}$, $z$ is incident to two chords $C_1,C_2$. $\boldsymbol{\mathsf{PieceSearch}}$ decides which side of $C_1,C_2$ $v$ is on (see Lemma~\ref{lem:chordside}), and returns the appropriate piece adjacent to $C_1$ or $C_2$.} \label{fig:PieceSearch} \end{figure} \subsubsection{$\boldsymbol{\mathsf{ChordIndicator}}$}\label{sect:ChordIndictor-procedure} Let us walk through the $\boldsymbol{\mathsf{ChordIndicator}}$ function. If $C^\star = \chord{s_jy_jy_{j-1}s_{j-1}}$ does not touch the interior of $R_t^{\operatorname{out}}$ then the left-right relationship between $C^\star$ and $v\not\in R_t$ is known, and stored in part (E) of the data structure. 
If this is the case, the answer is returned immediately at Line 3. A relatively simple case is when $\mathcal{C}_1$ and $\mathcal{C}_2$ are empty, and $\mathcal{C}=\mathcal{C}_3$ consists of just one chord $C_3 = \chord{x_jy_jy_{j-1}x_{j-1}}$. We determine whether $v$ is to the right or left of $C_3$ and return this answer (Line 8). (Lemma~\ref{lem:chordside} in Section~\ref{sect:sidequeries} explains how to test whether $v$ is to one side of a chord.) Thus, without loss of generality we can assume $\mathcal{C}_1\neq \emptyset$ and $\mathcal{C}_2$ may or may not be empty. Recall that $P_1$ is $v$'s piece in $\mathcal{P}_{q_j}^{R_t}[x_j]$. Using $\boldsymbol{\mathsf{PieceSearch}}$ we find a piece $P_1'\subseteq P_1$ in the more refined partition $\mathcal{P}_{q_j}^{R_t}$ and use $\boldsymbol{\mathsf{MaximalChord}}$ to find a maximal chord $C_1\in\mathcal{C}_1$ from vantage $P_1'$, and hence from vantage $v$ as well. We regard $\partial R_t$ as circularly ordered according to a clockwise walk around the hole on $\partial R_t$ in $R_t^{\operatorname{out}}$. The chord $C_1$ occludes an interval $I_1$ of $\partial R_t$ from vantage $v$. If $C_1$ is \emph{not} one of the chords bounding $P$, then $C_3$ or some $C_2\in\mathcal{C}_2$ must occlude a superset of $I_1$, so we will attempt to find such a $C_2$, as follows. Let $e$ be the first edge on the boundary cycle occluded by $C_1$, i.e., $e$ joins the first two elements of $I_1$. Using $\boldsymbol{\mathsf{AdjacentPiece}}$ we find the unique piece $P_e\in \mathcal{P}_{q_{j-1}}^{R_t}$ with $e$ on its boundary. Using $\boldsymbol{\mathsf{PieceSearch}}$ again we find $P_2'\in\mathcal{P}_{q_{j-1}}^{R_t}$ contained in $P_2$, and using $\boldsymbol{\mathsf{MaximalChord}}$ again, we find the maximal chord $C_2 \in \mathcal{C}_2$ that occludes $P_e$ from vantage $P_2'$, and hence from vantage $v$ as well. Observe that since all chords in $\mathcal{C}_2$ are vertex-disjoint from $C_1$, if $C_2\neq $ \textbf{Null} then $C_2$ must occlude a strictly larger interval $I_2 \supset I_1$ of $\partial R_t$. (If $C_2$ is \textbf{Null} then $I_2=\emptyset$.) It may be that $C_1$ and $C_2$ are both not on the boundary of $P$, but the only way that could occur is if $C_3 \in \mathcal{C}_3$ occludes a superset of $I_1$ and $I_2$ on the boundary $\partial R_t$. We check whether $v$ lies to the right or left of $C_3$ and let $I_3$ be the interval of $\partial R_t$ occluded by $C_3$ from vantage $v$. If $I_3$ does not cover $e$, then we cannot conclude that $C_3$ is superior to $C_1/C_2$. Thus, we find the chord $C_{\ell} \in \{C_1,C_2,C_3\}$ that covers $e$ and maximizes $|I_\ell|$. $C_\ell$ must be on the boundary of $P$, so the left-right relationship between $v$ and $C^\star$ is exactly the same as the left-right relationship between $v$ and $C_\ell$, if $\ell\in\{1,3\}$, and the reverse of this relationship if $\ell=2$, since chords in $\mathcal{C}_2$ have the opposite orientation to their subpath counterparts in $C^\star$. Figure~\ref{fig:ChordIndicator} illustrates how $\ell$ could take on all three values. \begin{figure} \begin{tabular}{ccc} \hspace{-.3cm}\scalebox{.26}{\includegraphics{ChordIndicator-ell3.pdf}} &\hspace{-.35cm}\scalebox{.26}{\includegraphics{ChordIndicator-ell2.pdf}} &\hspace{-.4cm}\scalebox{.26}{\includegraphics{ChordIndicator-ell1.pdf}}\\ (a) & (b) & (c) \end{tabular} \caption{The intervals $I_1,I_2,I_3$ are represented as pink circular arcs. In (a) $C_2$ exists and $C_3$ is better than $C_1,C_2$ since $I_3 \supset I_2 \supset I_1$.
In (b) $C_2$ exists, but $C_3$ occludes an interval $I_3$ that does not contain $e$, so $C_2$ is the best chord. In (c) $C_2$ is \textbf{Null}, and $C_3$ does not occlude $e$ from $v$, so $C_1$ is the only eligible chord. (In the figure $I_3 \subset I_1$ but it could also be as in (b), with $I_3$ disjoint from $I_1$.)} \label{fig:ChordIndicator} \end{figure} \begin{algorithm} \caption{$\boldsymbol{\mathsf{ChordIndicator}}(\VD^*_{\out}(u_{i},R_{i+1}),v,f^{*},j)$} \label{alg:CentroidIndicator} \begin{algorithmic}[1] \Require The dual representation $\VD^*_{\out} = \VD^*_{\out}(u_{i},R_{i+1})$ of a Voronoi diagram, a centroid $f^*$ in $\VD^*_{\out}$ with face $f$ on vertices $y_0,y_1,y_2$, which are in the Voronoi cells of $s_0,s_1,s_2$, an index $j\in\{0,1,2\}$, and a vertex $v \in R_{i+1}^{\operatorname{out}}$ that does not lie on the site-centroid-site chord $C^\star = \overrightarrow{s_{j}y_{j}y_{j-1}s_{j-1}}$. \Ensure \textbf{True} if $v$ lies to the right of $C^\star$, and $\textbf{False}$ otherwise. \State $R_{t}\gets$ the ancestor of $R_{i}$ s.t. $v\notin R_{t}, v\in R_{t+1}$. $\mathcal{C}$ is the projection of $C^\star$ onto $R_t^{\operatorname{out}}$. \If {the left/right relationship between $R_t^{\operatorname{out}}$ and $C^\star = \chord{s_{j}y_{j}y_{j-1}s_{j-1}}$ is known} \State \Return stored \textbf{True}/\textbf{False} answer. \Comment{Part (E)} \EndIf \Comment{(It follows that $C^\star$ crosses $\partial R_t$ and that $\mathcal{C}\neq\emptyset$)} \State $(q_{j},x_{j}) \gets $ first and last $\partial R_t$-vertices on shortest $s_{j}$-$y_{j}$ path. \Comment{Part (E)} \State $(q_{j-1},x_{j-1}) \gets$ first and last $\partial R_t$-vertices on shortest $s_{j-1}$-$y_{j-1}$ path. \Comment{Part (E)} \If{$\mathcal{C}_1 = \mathcal{C}_2 = \emptyset$} \State \Return \textbf{True} if $v$ is to the right of the $\mathcal{C}_3$-chord $\chord{x_jy_jy_{j-1}x_{j-1}}$, or \textbf{False} otherwise. \EndIf \Comment{\emph{W.l.o.g., continue under the assumption that $\mathcal{C}_1 \neq \emptyset$.}} \State $P_1' \gets \boldsymbol{\mathsf{PieceSearch}}(R_{t},q_{j},x_{j},v)$ \Comment{Uses parts (A,C)} \State $C_1 \gets \boldsymbol{\mathsf{MaximalChord}}(R_t,q_j,x_j,P_1',\perp)$ \Comment{Part (D)} \State $I_1 \gets $ the clockwise interval of hole $\partial R_t$ occluded by $C_1$ from vantage $v$. \State $e \gets $ edge joining first two elements of $I_1$. \State $P_e \gets \boldsymbol{\mathsf{AdjacentPiece}}(R_t,q_{j-1},e)$ \Comment{Part (D)} \State $P_2' \gets \boldsymbol{\mathsf{PieceSearch}}(R_t,q_{j-1},x_{j-1},v)$ \Comment{Uses parts (A,C)} \State $C_2 \gets \boldsymbol{\mathsf{MaximalChord}}(R_t,q_{j-1},x_{j-1},P_2',P_e)$ \Comment{Part (D); may return {\bf Null}} \State $I_2 \gets $ the clockwise interval of hole $\partial R_t$ occluded by $C_2$ from vantage $v$. \Comment{$\emptyset$ if $C_2=$ {\bf Null}} \State $C_3 \gets $ single chord in $\mathcal{C}_3$, if any. \Comment{May be {\bf Null}} \State $I_3 \gets $ the clockwise interval of hole $\partial R_t$ occluded by $C_3$ from vantage $v$. \Comment{$\emptyset$ if $C_3=$ {\bf Null}} \State $\ell \gets $ index such that $I_\ell$ covers $e$, and $|I_\ell|$ is maximum.
\If{$v$ is to the right of $C_\ell$ and $\ell\in\{1,3\}$ or $v$ is to the left of $C_\ell$ and $\ell=2$} \State \Return \textbf{True} \EndIf \State \Return \textbf{False} \end{algorithmic} \end{algorithm} \subsubsection{Side Queries}\label{sect:sidequeries} Lemma~\ref{lem:chordside} explains how we test whether $v$ is to the right or left of a chord, which is used in both $\boldsymbol{\mathsf{PieceSearch}}$ and $\boldsymbol{\mathsf{ChordIndicator}}$. \begin{lemma}\label{lem:chordside} For any $C\in\mathcal{C}_1\cup \mathcal{C}_2\cup \mathcal{C}_3$ and $v$ not on $C$, we can test whether $v$ lies to the right or left of $C$ in $O(\kappa\log\log n)$ time, using parts (A) and (E) of the data structure. \end{lemma} \begin{proof} There are several cases. \paragraph{Case 1.} Suppose that $C = \chord{c_0c_1} \in \mathcal{C}_1\cup\mathcal{C}_2$ corresponds to the shortest path from $c_0$ to $c_1$ in $R_t^{\operatorname{out}}$, $c_0,c_1\in \partial R_t$. Let $c_0',c_1'$ be pendant vertices attached to $c_0,c_1$ embedded inside the face of $R_t^{\operatorname{out}}$ bounded by $\partial R_t$. The shortest $c_0'$-$v$ paths and $c_0'$-$c_1'$ paths branch at some point. We ask the \textsf{MSSP}{} structure (part (A)) for the least common ancestor, $w$, of $v$ and $c_1'$ in the shortcutted SSSP tree rooted at $c_0'$. This query also returns the two tree edges $e_{v},e_{c_1'}$ leading to $v$ and $c_1'$, respectively. Let $e_w$ be the edge connecting $w$ to its parent.\footnote{The purpose of adding $c_0',c_1'$ is to make sure all three edges $e_w,e_v,e_{c_1'}$ exist. The vertices $c_0',c_1'$ are not represented in the \textsf{MSSP}{} structure. The edges $(c_0',c_0)$ and $(c_1,c_1')$ can be simulated by inserting them between the two boundary edges on $\partial R_t$ adjacent to $c_0$ and $c_1$, respectively.} If the clockwise order around $w$ is $e_w,e_{c_1'},e_v$ then $v$ lies to the right of $\chord{c_0c_1}$; otherwise it lies to the left. Note that if the shortest $c_0'$-$c_1'$ and $c_0'$-$v$ paths in $G$ branch at a point in $R_{t+1}^{\operatorname{out}}$, then $w$ will be the nearest ancestor of the branchpoint on $\partial R_{t+1}$ and one or both of $e_v,e_{c_1'}$ may be ``shortcut'' edges in the \textsf{MSSP}{} structure. See Figure~\ref{fig:leftright}(a) for a depiction of this case. \begin{figure}[h] \centering \begin{tabular}{cc} \multicolumn{2}{c}{\scalebox{.33}{\includegraphics{leftright1.pdf}}}\\ \multicolumn{2}{c}{ }\\ \multicolumn{2}{c}{(a)}\\ \scalebox{.32}{\includegraphics{leftright2.pdf}} &\scalebox{.32}{\includegraphics{leftright3.pdf}}\\ &\\ (b) & (c) \end{tabular} \caption{(a) The chord $C\in \mathcal{C}_1\cup\mathcal{C}_2$ corresponds to a shortest path, which may pass through $R_{t+1}^{\operatorname{out}}$, in which case it is represented in the \textsf{MSSP}{} structure with shortcut edges (solid, angular edges). (b) The chord $C = \protect\overrightarrow{x_jy_jy_{j-1}x_{j-1}}$ is in $\mathcal{C}_3$, and $f$ lies in $R_{t}^{\operatorname{out}}\cap R_{t+1}$. This is handled similarly to (a). (c) Here $f$ lies in $R_{t+1}^{\operatorname{out}}$, $\hat{x}_j,\hat{x}_{j-1}$ are the last $\partial R_{t+1}$ vertices on the $s_j$-$y_j$ and $s_{j-1}$-$y_{j-1}$ paths. If the shortest $x_j'$-$\hat{x}_j$ and $x_j'$-$v$ paths branch, we can answer the query as in (b). 
If $x_j'$-$\hat{x}_j$ is a prefix of $x_j'$-$v$, $e_v = (\hat{x}_j,\hat{v})$, and $\hat{v}\in \partial R_{t+1}$, then we can use the clockwise order of $\hat{x}_j,\hat{v},\hat{x}_{j-1}$ around the hole on $\partial R_{t+1}$ to determine whether $v$ lies to the right of $C$. (Not depicted: the case when $\hat{v}\not\in \partial R_{t+1}$.) } \label{fig:leftright} \end{figure} \paragraph{Case 2.} Now suppose $C = \chord{x_jy_jy_{j-1}x_{j-1}}$ is the one chord in $\mathcal{C}_3$. Consider the following distance function $\hat{d}$ for vertices in $z\in R_t^{\operatorname{out}}$: \[ \hat{d}(z) = \min\Big\{ \operatorname{dist}_{G}(u_i,x_j) + \operatorname{dist}_{G}(x_j,z),\; \; \operatorname{dist}_{G}(u_i,x_{j-1}) + \operatorname{dist}_{G}(x_{j-1},z)\Big\}. \] Observe that the terms involving $u_i$ are stored in part (E) and, if $z\in R_t^{\operatorname{out}}\cap R_{t+1}$, the other terms can be queried in $O(\kappa\log\log n)$ time using part (A). It follows that the shortest path forest w.r.t.~$\hat{d}$ has two trees, rooted at $x_j$ and $x_{j-1}$. Using part (A) of the data structure we compute $\hat{d}(v)$, which reveals the $j^\star \in \{j,j-1\}$ such that $v$ is in $x_{j^\star}$'s tree. At this point we break into two cases, depending on whether $f$ is in $R_{t}^{\operatorname{out}}\cap R_{t+1}$, or in $R_{t+1}^{\operatorname{out}}$. We assume $j^\star = j$ without loss of generality and depict only this case in Figure~\ref{fig:leftright}(b,c). \paragraph{Case 2a.} Suppose that $f$ is in $R_t^{\operatorname{out}}\cap R_{t+1}$. Let $y'_{j}$ be a pendant vertex attached to $y_{j}$ embedded inside $f$ and let $x_j'$ be a pendant vertex attached to $x_j$ embedded in the face on $\partial R_t$. The shortest $x_{j}'$-$y_{j}'$ and $x_{j}'$-$v$ paths diverge at some point. We query the \textsf{MSSP}{} structure (part (A)) to get the least common ancestor $w$ of $y_j'$ and $v$ and the three edges $e_{y_j'},e_v,e_w$ around $w$, then determine the left/right relationship as in Case 1. (If $j^\star = j-1$ then we would reverse the answer due to the reversed orientation of the $x_{j-1}$-$y_{j-1}$ subpath w.r.t.~$C$.) Once again, some of $e_{y_j'},e_v,e_w$ may be shortcut edges between $\partial R_{t+1}$-vertices or artificial pendant edges. See Figure~\ref{fig:leftright}(b). \paragraph{Case 2b.} Now suppose $f$ lies in $R_{t+1}^{\operatorname{out}}$. We get from part (E) the last vertices $\hat{x}_j, \hat{x}_{j-1} \in \partial R_{t+1}$ that lie on the $s_j$-$y_j$ and $s_{j-1}$-$y_{j-1}$ shortest paths. We ask the \textsf{MSSP}{} structure of part (A) for the least common ancestor $w$ of $\hat{x}_j$ and $v$ in the shortcutted SSSP tree rooted at $x_j'$, and also get the three incident edges $e_{\hat{x}_j},e_v,e_w$. The edges $e_v$ and $e_w$ exist and are different, but $e_{\hat{x}_j}$ may not exist if $w=\hat{x}_j$, i.e., if $v$ is a descendant of $\hat{x}_j$. If all three edges $\{e_{\hat{x}_j},e_v,e_w\}$ exist we can determine whether $v$ lies to the right of $C$ as in Case 1 or 2a. \paragraph{Case 2b(i).} Suppose $w = \hat{x}_j$ and $e_{\hat{x}_j}$ does not exist. Let $e_v = (\hat{x}_j,\hat{v})$. If $\hat{v} \in \partial R_{t+1}$ then $e_v$ represents a path that is completely contained in $R_{t+1}^{\operatorname{out}}$. Thus, if we walk clockwise around the hole of $R_{t+1}^{\operatorname{out}}$ on $\partial R_{t+1}$ and encounter $\hat{x}_j,\hat{v},\hat{x}_{j-1}$ in that order then $v$ lies to the right of $C$, and if we encounter them in the reverse order then $v$ lies to the left of $C$.
See Figure~\ref{fig:leftright}(c). \paragraph{Case 2b(ii).} Finally, suppose $\hat{v} \not\in \partial R_{t+1}$ and $e_v = (\hat{x}_j,\hat{v})$ is a normal edge in $G$. Redefine $e_{\hat{x}_j}$ to be the first edge on the path from $\hat{x}_j$ to $y_j$.\footnote{We could store $e_{\hat{x}_j}$ in part (E) of the data structure but that is not necessary. If $e_0,e_1$ are the edges adjacent to $\hat{x}_j$ on the boundary cycle of $\partial R_{t+1}$, then we can use any member of $\{e_0,e_1\}\backslash \{e_w\}$ as a proxy for $e_{\hat{x}_j}$.} Now we can determine if $v$ is to the right of $C$ by looking at the clockwise order of $e_{w},e_{v},e_{\hat{x}_j}$ around $\hat{x}_j$. \end{proof} \medskip As pointed out in Remark~\ref{remark:PointLocate}, Lemma~\ref{lem:MSSP} does not immediately imply that Line 6 of $\boldsymbol{\mathsf{SitePathIndicator}}$ and Line 1 of $\boldsymbol{\mathsf{PieceSearch}}$ can be implemented efficiently. Gawrychowski et al.'s~\cite{GawrychowskiMWW18} implementation of $\boldsymbol{\mathsf{PointLocate}}$ requires \textsf{MSSP}{} access to $R_t^{\operatorname{out}}$, whereas part (A) only lets us query vertices in $R_t^{\operatorname{out}}\cap R_{t+1}$. Gawrychowski et al.'s algorithm is identical to $\boldsymbol{\mathsf{CentroidSearch}}$, except that $\boldsymbol{\mathsf{Navigation}}$ is done directly with \textsf{MSSP}{} structures. Suppose we are currently at $f^*$ in the centroid decomposition, with $y_j,s_j$ defined as usual. Gawrychowski's algorithm finds $j$ minimizing $\omega(s_j)+\operatorname{dist}_{R_t^{\operatorname{out}}}(s_j,v)$ using three distance queries to the \textsf{MSSP}{} structure, then decides whether the $s_j'$-$v$ shortest path is a prefix of the $s_j'$-$y_j'$ shortest path, and if not, which direction it branches in.\footnote{$s_j',y_j'$ being pendant vertices attached to $s_j,y_j$, as in Lemma~\ref{lem:chordside}.} If $f$ is in $R_{t}^{\operatorname{out}}\cap R_{t+1}$ we can proceed exactly as in Gawrychowski et al.~\cite{GawrychowskiMWW18}. If not, we retrieve from part (E) the last vertex $\hat{x}$ of $\partial R_{t+1}$ on the $s_j$-$y_j$ shortest path, use $\hat{x}$ in lieu of $y_j'$ for the LCA queries, and tell whether the $s_j'$-$v$ path branches to the right exactly as in Lemma~\ref{lem:chordside}, Case 2b. \section{Analysis}\label{sect:analysis} This section constitutes a proof of the claims of Theorem~\ref{thm:maintheorem} concerning space complexity and query time; refer to Appendix~\ref{sect:construction} for an efficient construction algorithm. Combining Lemmas~\ref{lem:MSSP} and \ref{lem:PointLocate} (see Section~\ref{sect:sidequeries}), $\boldsymbol{\mathsf{PointLocate}}$ runs in $O(\kappa\log n\log\log n)$ time. Together with Lemma~\ref{lem:chordside} it follows that $\boldsymbol{\mathsf{PieceSearch}}$ also takes $O(\kappa\log n\log\log n)$ time. $\boldsymbol{\mathsf{SitePathIndicator}}$ uses $\boldsymbol{\mathsf{PointLocate}}$, the $\textsf{MSSP}$ structure, and $O(1)$-time tree operations on $T_q^{R_i}$ and the $\vec{r}$-hierarchy like least common ancestors and level ancestors~\cite{HarelT84,BenderF00,BenderF04,Hagerup20}. Thus $\boldsymbol{\mathsf{SitePathIndicator}}$ also takes $O(\kappa\log n\log\log n)$ time. The calls to $\boldsymbol{\mathsf{MaximalChord}}$ and $\boldsymbol{\mathsf{AdjacentPiece}}$ in $\boldsymbol{\mathsf{ChordIndicator}}$ take $O(\log n\log\log n)$ time by Lemma~\ref{lem:piecetree}, and testing which side of a chord $v$ lies on takes $O(\kappa\log\log n)$ time by Lemma~\ref{lem:chordside}. 
The bottleneck in $\boldsymbol{\mathsf{ChordIndicator}}$ is still $\boldsymbol{\mathsf{PieceSearch}}$, which takes $O(\kappa\log n\log\log n)$ time. The only non-trivial parts of $\boldsymbol{\mathsf{Navigation}}$ are calls to $\boldsymbol{\mathsf{SitePathIndicator}}$ and $\boldsymbol{\mathsf{ChordIndicator}}$, so it, too, takes $O(\kappa\log n\log\log n)$ time. An initial call to $\boldsymbol{\mathsf{CentroidSearch}}$ (Line 5 of $\boldsymbol{\mathsf{Dist}}$) generates at most $\log n$ recursive calls to $\boldsymbol{\mathsf{CentroidSearch}}$, culminating in the last recursive call making 1 or 2 calls to $\boldsymbol{\mathsf{Dist}}$ with the ``$i$'' parameter incremented. Excluding the cost of recursive calls to $\boldsymbol{\mathsf{Dist}}$, the cost of $\boldsymbol{\mathsf{CentroidSearch}}$ is dominated by calls to $\boldsymbol{\mathsf{Navigation}}$, i.e., an initial call to $\boldsymbol{\mathsf{CentroidSearch}}$ costs $\log n \cdot O(\kappa\log n\log\log n) = O(\kappa\log^2 n\log\log n)$ time. Let $T(i)$ be the cost of a call to $\boldsymbol{\mathsf{Dist}}(u_i,v,R_i)$. We have \begin{align*} T(m-1) &= O(\kappa\log\log n) & \mbox{$\boldsymbol{\mathsf{Dist}}$ returns at Line 2 with one \textsf{MSSP}{} query}\\ T(i) &= 2T(i+1) + O(\kappa\log^2 n\log\log n) \end{align*} Unrolling the recurrence over its at most $m$ levels, $T(0) = \sum_{i=0}^{m-2} 2^{i}\cdot O(\kappa\log^2 n\log\log n) + 2^{m-1}\cdot O(\kappa\log\log n)$, so the time to answer a distance query is $T(0) = O(2^m\cdot \kappa\log^2 n\log\log n)$. \medskip The space complexity of each part of the data structure is as follows. (A) is $O(\kappa m n^{1+1/m+1/\kappa})$ by Lemma~\ref{lem:MSSP} and the fact that $r_{i+1}/r_i = n^{1/m}$. (B) is $O(mn^{1+1/(2m)})$ since $\sqrt{r_{i+1}/r_i} = n^{1/(2m)}$. (C) is $O(mn)$ since $\sum_{i} n/r_i\cdot (\sqrt{r_i})^2 = O(mn)$. (D) is $O(mn\log n)$ by Lemma~\ref{lem:partD}, and (E) is $O(m)$ times the space cost of (B) and (C), namely $O(m^2 n^{1+1/(2m)})$. The bottleneck is (A). \medskip We now explain how $m,\kappa$ can be selected to achieve the extreme space and query complexities claimed in Theorem~\ref{thm:maintheorem}. To optimize for query time, pick $\kappa = m$ to be any function of $n$ that is $\omega(1)$ and $o(\log\log n)$. Then the query time is \[ O(2^m\kappa \log^2 n\log\log n) = \log^{2+o(1)} n \] and the space is \[ O(m\kappa n^{1+1/m+1/\kappa})=n^{1+o(1)}. \] To optimize for space, choose $\kappa = \log n$ and $m$ to be a function that is $\omega(\log n/\log\log n)$ and $o(\log n)$. Then the space is \[ O\left(m\kappa n^{1+1/m+1/\kappa}\right) = o\left(n^{1+1/m}\log^{2}n\right) = n\cdot 2^{o(\log\log n)}\cdot \log^{2} n = n\log^{2+o(1)}n, \] and the query time is \[ O(2^{m}\kappa\log^{2}n \log\log n) = 2^{o(\log n)}\log^{3}n\log\log n = n^{o(1)}. \] \subsection{Speeding Up the Query Time} Observe that the space of (B) is asymptotically smaller than the space of (A). Replace (B) with (B'): \begin{enumerate} \item[(B')] {\bf (Voronoi Diagrams)} Fix $i$, a region $R_{i}\in\mathcal{R}_{i}$ with ancestors $R_{i+1}\in \mathcal{R}_{i+1}$ and $R_{i+4} \in \mathcal{R}_{i+4}$. For each $q \in \partial R_{i}$ store \begin{align*} \VD^*_{\out}(q,R_{i+1}) &= \mathrm{VD}^*[R_{i+1}^{\operatorname{out}},\partial R_{i+1},\omega]\\ \VD^*_{\operatorname{farout}}(q,R_{i+4}) &= \mathrm{VD}^*[R_{i+4}^{\operatorname{out}},\partial R_{i+4},\omega] & \mbox{only if $i<m-4$} \end{align*} with $\omega(s)=\operatorname{dist}_G(q,s)$ in both cases.
Over all regions $R_i$, the space for storing all $\VD^*_{\out}$s is $\tilde{O}(n^{1+1/(2m)})$ since $\sqrt{r_{i+1}/r_i} = n^{1/(2m)}$ and the space for $\VD^*_{\operatorname{farout}}$s is $\tilde{O}(n^{1+2/m})$ since $\sqrt{r_{i+4}/r_i}=n^{2/m}$. \end{enumerate} Now the space for (A), $\tilde{O}(n^{1+1/m+1/\kappa})=\tilde{O}(n^{1+2/m})$ (taking $\kappa=m$), is balanced with (B'). In the $\boldsymbol{\mathsf{Dist}}$ function we now consider three possibilities. If $v\in R_{i+1}$ we use part (A) to solve the problem without recursion. If $v\not\in R_{i+1}$ but $v\in R_{i+4}$ we proceed as usual, calling $\boldsymbol{\mathsf{CentroidSearch}}(\VD^*_{\out}(u_i,R_{i+1}),v,\cdot)$, and if $v\not\in R_{i+4}$ we call $\boldsymbol{\mathsf{CentroidSearch}}(\VD^*_{\operatorname{farout}}(u_i,R_{i+4}),v,\cdot)$. Observe that the depth of the $\boldsymbol{\mathsf{Dist}}$-recursion is now at most $t/4+O(1) < m/4+O(1)$, giving us a query time of $O(m2^{m/4}\log^2 n\log\log n)$ with space $\tilde{O}(n^{1+2/m})$. \section{Conclusion} In this paper we have proven that it is possible to achieve optimality, up to a $\log^{2+o(1)} n$ factor, in either space \underline{\emph{or}} query time, while simultaneously achieving near-optimality, up to an $n^{o(1)}$ factor, in the other complexity measure. The main open question in this area is whether there exists an exact distance oracle with $\tilde{O}(n)$ space and $\tilde{O}(1)$ query time. This will likely require new insights into the structure of shortest paths, which could lead, for example, to storing correlated versions of Voronoi diagrams more efficiently, or avoiding the binary branching recursion in our query algorithm. \medskip \paragraph{Acknowledgements.} We thank Danny Sleator and Bob Tarjan for discussing update/query time tradeoffs for dynamic trees. \bibliographystyle{plain}
\section{Proof of Theorem~\ref{thm:main_result_rmax_pg}} We need to understand the error $\ensuremath{\mathbb{E}}_{(s,a)\sim d} [\ensuremath{\mathds{1}}(s \in \Kcal)(A^\pi(s,a) - \theta^\star\cdot \bar \phi(s,a))]$, where $\pi$ is some policy and $d$ is some distribution. $\bar \phi$ refers to the centered features $\bar \phi(s,a) = \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi(\cdot\mid s)}[\phi(s,a')]$. Here $\theta^\star$ is the minimizer: \[ \theta^\star = \argmin_{\|\theta \|\leq W} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho} [(Q^\pi(s,a) - \theta\cdot \phi(s,a))^2]. \] Breaking the expectation up over aggregate states $z$, we see that \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho} [(Q^\pi(s,a) - \theta\cdot \phi(s,a))^2] &= \sum_{z\in \Zcal} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho}[\ensuremath{\mathds{1}}(\phi(s,a) = z) (Q^\pi(s,a) - \theta\cdot \phi(s,a))^2]\\ &= \sum_{z\in \Zcal} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho}[\ensuremath{\mathds{1}}(\phi(s,a) = z) (Q^\pi(s,a) - \theta_z)^2]. \end{align*} Consequently, we see that the optimal solution sets $\theta^\star_z = \ensuremath{\mathbb{E}}_{(s,a)\sim \rho}[Q^\pi(s,a) \mid \phi(s,a) = z]$, which we further summarize as $\theta^\star_z = Q_\rho^\pi(z)$. Note that we have assumed here that the norm bound is large enough to make this solution feasible, for which $W \geq \sqrt{|\Zcal|}/(1-\gamma)$ suffices. Now we see that \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim d} [\ensuremath{\mathds{1}}(s \in \Kcal)(A^\pi(s,a) - \theta^\star\cdot \bar \phi(s,a))] \\ &= \ensuremath{\mathbb{E}}_{(s,a)\sim d} [\ensuremath{\mathds{1}}(s \in \Kcal)(Q^\pi(s,a) - \theta^\star\cdot \phi(s,a))] - \ensuremath{\mathbb{E}}_{s\sim d, a\sim \pi} [\ensuremath{\mathds{1}}(s \in \Kcal)(Q^\pi(s,a) - \theta^\star\cdot \phi(s,a))]. \end{align*} Let us simplify the first term, as the second one will follow similarly. We see that \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim d} [\ensuremath{\mathds{1}}(s \in \Kcal)(Q^\pi(s,a) - \theta^\star\cdot \phi(s,a))] \\ &= \sum_{z\in \Zcal}\ensuremath{\mathbb{E}}_{(s,a)\sim d}[\ensuremath{\mathds{1}}(\phi(s,a) = z) \ensuremath{\mathds{1}}(s \in \Kcal)(Q^\pi(s,a) \pm Q^\pi_\rho(z) - \theta^\star\cdot \phi(s,a))] \\ &\leq \sum_{z\in \Zcal} \ensuremath{\mathbb{E}}_{(s,a)\sim d} [\ensuremath{\mathds{1}}(\phi(s,a) = z)\, |Q^\pi(s,a) - Q^\pi_\rho(z)|] + 0\\ &\leq \sum_{z\in \Zcal}\ensuremath{\mathbb{E}}_{(s,a)\sim d} [\ensuremath{\mathds{1}}(\phi(s,a) = z)\, \epsilon_z] = \ensuremath{\mathbb{E}}_{(s,a)\sim d} [\epsilon_{\phi(s,a)}]. \end{align*} Here the first inequality uses the value of $\theta^\star$ (the remaining term vanishes since $\theta^\star\cdot\phi(s,a) = Q^\pi_\rho(z)$ whenever $\phi(s,a)=z$), while the second follows from the definition of $\epsilon_z$. Proceeding similarly for the second term gives the overall bound: \[ \ensuremath{\mathbb{E}}_{(s,a)\sim d} [\epsilon_{\phi(s,a)}] + \ensuremath{\mathbb{E}}_{s\sim d,a\sim \pi} [\epsilon_{\phi(s,a)}]. \] \section{The Exploration for Policy Optimization with learned policy Covers\xspace (EPOC\xspace) Algorithm} \label{section:alg} To motivate the algorithm, first consider the original objective function: \begin{equation}\label{eq:obj1} \textrm{Original objective:} \quad \max_{\pi\in\Pi} V^\pi(s_0;r) \end{equation} where $r$ is the true reward function.
Simply doing policy gradient ascent on this objective function may easily lead to poor stationary points due to lack of coverage (i.e., lack of exploration). For example, if the initial visitation measure $d^{\pi^0}$ has poor coverage over the state space (say $\pi^0$ is a random initial policy), then $\pi^0$ may already be a stationary point of poor quality (e.g., see Lemma 4.3 in~\cite{agarwal2019optimality}). In such cases, a more desirable objective function is of the form: \begin{equation}\label{eq:obj2} \textrm{A wide coverage objective:} \quad \max_{\pi\in\Pi} \ensuremath{\mathbb{E}}_{s_0,a_0\sim\rho_{\mix}}\left[ Q^\pi(s_0,a_0;r)\right] \end{equation} where $\rho_{\mix}$ is some initial state-action distribution which has wider coverage over the state space. As argued in~\citep{agarwal2019optimality,kakade2002approximately,scherrer2014local,Scherrer:API}, wide coverage initial distributions $\rho_{\mix}$ are critical to the success of policy optimization methods. However, in the RL setting, our agent can only start from $s_0$. \input{pseudo_epoc} The idea of our iterative algorithm, EPOC\xspace (Algorithm~\ref{alg:epoc}), is to successively improve \emph{both} the current policy $\pi$ and the coverage distribution $\rho_{\mix}$. The algorithm starts with some policy $\pi^0$ (say random), and works in episodes. At episode $n$, we have $n+1$ previous policies $\pi^0, \ldots, \pi^n$. Each of these policies $\pi^i$ induces a distribution $d^i := d^{\pi^i}$ over the state space. Let us consider the average state-action visitation measure over \emph{all} of these previous policies: \begin{align} \rho_{\mix}^n(s,a) = \sum_{i=0}^{n} d^i(s,a)/(n+1) \label{eq:cover_def} \end{align} Intuitively, $\rho_{\mix}^n$ reflects the coverage the algorithm has over the state-action space at the start of the $n$-th episode. EPOC\xspace then uses $\rho_{\mix}^n$ in the previous objective~\eqref{eq:obj2} with two modifications: EPOC\xspace modifies the instantaneous reward function $r$ with a bonus $b^n$ in order to encourage the algorithm to find a policy $\pi^{n+1}$ which covers a novel part of the state-action space. It also modifies the policy class from $\Pi$ to $\Pi_{\text{bonus}}$, where all policies $\pi \in \Pi_{\text{bonus}}$ are constrained to simply choose uniformly at random among the positive-bonus actions in those states where such actions exist (random exploration is reasonable when the exploration bonus is already large; see Eq.~\ref{eq:pg_update} in Alg.~\ref{alg:npg}). With this, EPOC\xspace's objective at the $n$-th episode is: \begin{equation}\label{eq:obj3} \textrm{EPOC\xspace's objective:} \quad \max_{\pi\in\Pi_{\text{bonus}}} \ensuremath{\mathbb{E}}_{s_0,a_0\sim\rho^n_{\mix}}\left[ Q^\pi(s_0,a_0;r+b^n)\right] \end{equation} The idea is that EPOC\xspace can effectively optimize over the region where $\rho^n_{\mix}$ has coverage. Furthermore, by construction of the bonus, the algorithm is encouraged to escape the current region of coverage to discover novel parts of the state-action space. We now describe the bonus and optimization steps in more detail.
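To make sampling from the policy cover concrete: since $\rho_{\mix}^n$ in \eqref{eq:cover_def} is a uniform mixture of the visitation measures $d^0,\dots,d^n$, one can sample from it by picking a past policy uniformly at random and then sampling from its discounted visitation measure via a geometrically distributed horizon, as described later for the NPG critic. The following is a minimal NumPy sketch of this sampling step, not EPOC\xspace's actual implementation; the toy chain environment and the \texttt{env\_step} and \texttt{policy} callables are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_visitation(policy, env_step, s0, gamma):
    # d^pi places mass proportional to gamma^h on the state-action pair
    # visited at step h, so drawing h ~ Geometric(1 - gamma) (shifted to
    # support {0, 1, ...}) and rolling pi out for h steps samples from d^pi.
    h = rng.geometric(1.0 - gamma) - 1
    s = s0
    for _ in range(h):
        s = env_step(s, policy(s))
    return s, policy(s)

def sample_cover(policies, env_step, s0, gamma):
    # rho_mix^n = sum_i d^i / (n+1): pick one of the n+1 past policies
    # uniformly at random, then sample from its visitation measure.
    pi = policies[rng.integers(len(policies))]
    return sample_visitation(pi, env_step, s0, gamma)

# toy 6-state chain: action 1 moves right, action 0 stays put
env_step = lambda s, a: min(s + a, 5)
policy = lambda s: int(rng.integers(2))   # a uniformly random policy
print(sample_cover([policy, policy], env_step, s0=0, gamma=0.9))
\end{verbatim}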
\input{pseudo_npg} \paragraph{Reward bonus construction.} At each episode $n$, EPOC\xspace maintains an estimate of feature covariance of the policy cover $\rho^n_\mix$ (Line \ref{line:feature_cov_mix} of \pref{alg:epoc}). Next we use this covariance matrix to identify state-action pairs which are adequately covered by $\rho^n_\mix$. The goal of the reward bonus is to identify state-action pairs whose features are less explored by $\rho^n_{\mix}$ and incentivize visiting them. The bonus $b^n(s,a)$ defined in Line~\ref{line:bonus} achieves this. If $\widehat{\Sigma}^n_{\mix}$ has a small eigenvalue along $\phi(s,a)$, then we assign the largest possible reward-to-go (i.e., $1/(1-\gamma)$) for this $(s,a)$ pair to encourage exploration.\footnote{For an infinite dimensional RKHS, the bonus can be computed in the dual using the kernel trick (e.g., \cite{valko2013finite}).} \paragraph{Policy Optimization.} With the bonus, we update the policy via $T$ steps of natural policy gradient (\pref{alg:npg}). In the NPG update, we first approximate the value function $Q^{\pi^t}(s,a; r+ b^n)$ under the policy cover $\rho^n_\mix$ (\pref{line:learn_critic}). Specifically, we use a linear function approximator to approximate $Q^{\pi^t}(s,a; r+b^n) - b^n(s,a)$ via constrained linear regression (\pref{line:learn_critic}), and then approximate $Q^{\pi^t}(s,a;r+b^n)$ by adding the bonus back: \begin{align*} \overline{Q}^t_{b^n}(s,a) := b^n(s,a) + \theta^t\cdot \phi(s,a). \end{align*} Note that the error of $\overline{Q}^t_{b^n}(s,a)$ to $Q^{\pi^t}(s,a;r+b^n)$ is simply the prediction error of $\theta^t\cdot \phi(s,a)$ to the regression target $Q^{\pi^t}(s,a;r+b^n) - b^n$. The purpose of structuring the value function estimation this way, instead of directly approximating $Q^t(s,a;r+b^n)$ with a linear function, for instance, is that the regression problem defined in \pref{line:learn_critic} will have a good linear solution for the special case of linear MDPs, while we cannot guarantee the same for $Q^t(s,a;r+b^n)$ due to the non-linearity of the bonus. We then use the critic $\overline{Q}^t_{b^n}$ for updating the policy (Eq.~\pref{eq:pg_update}). These are the exponential gradient updates (as in~\cite{Kakade01,agarwal2019optimality}), but are constrained for $s\in\Kcal^n$ (see \pref{line:known} for the definition of $\Kcal^n$). The initialization and the update ensure that $\pi^t$ chooses actions uniformly from $\{a: b^n(s,a) > 0\}\subseteq\mathcal{A}$ at any state $s$ with $\left\lvert \{ a: b^n(s,a) > 0\}\right\rvert > 0$ (the policy is restricted to act uniformly among positive bonus actions). \paragraph{Intuition for tabular setting.} In tabular MDPs (with ``one-hot'' features for each state-action pair), $(\widehat \Sigma^n_{\mix})^{-1}$ is a diagonal matrix with entries proportional to $1/n_{s,a}$, where $n_{s,a}$ is the number of times $(s,a)$ is observed in the data collected to form the matrix $\widehat\Sigma^n_{\mix}$. Hence the bonus simply rewards infrequently visited state-action pairs, and thus encourages reaching new state-action pairs.
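As a concrete illustration of the bonus and of the tabular intuition above, the following NumPy sketch is a simplified stand-in for Lines~\ref{line:feature_cov_mix} and~\ref{line:bonus} of \pref{alg:epoc}; the visit counts and the values of $\beta$, $\lambda$, and $\gamma$ are made up for illustration. With one-hot features the quadratic form reduces to $1/(n_{s,a}+\lambda)$, so the bonus fires exactly on rarely visited state-action pairs.
\begin{verbatim}
import numpy as np

def bonus(phi_sa, Sigma_mix_inv, beta, gamma):
    # b^n(s,a) = 1{ phi^T (Sigma_mix)^{-1} phi >= beta } / (1 - gamma)
    width = float(phi_sa @ Sigma_mix_inv @ phi_sa)
    return (1.0 / (1.0 - gamma)) if width >= beta else 0.0

d, lam, beta, gamma = 4, 1.0, 0.2, 0.99
counts = np.array([50.0, 9.0, 2.0, 0.0])       # visits n_{s,a} per pair
Sigma_mix = np.diag(counts) + lam * np.eye(d)  # one-hot feature covariance
Sigma_mix_inv = np.linalg.inv(Sigma_mix)

for sa in range(d):
    phi = np.eye(d)[sa]
    # the quadratic form equals 1/(n_{s,a} + lam): large only for rare pairs
    print(sa, round(float(phi @ Sigma_mix_inv @ phi), 3),
          bonus(phi, Sigma_mix_inv, beta, gamma))
\end{verbatim}
Running this prints a nonzero bonus only for the two under-visited pairs, matching the tabular intuition.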
The idea of the proof is that either the algorithm makes progress in finding a wider cover or it will have found a good policy (from the original start state of interest, $s_0$). In terms of model agnostic learning, as we shall see, EPOC\xspace is always optimizing an objective function (in \eqref{eq:obj3}) regardless of whether or not the linear MDP modelling assumptions hold; it is not evident if there is an underlying objective function for methods such as UCB or Q-learning.
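To make the actor step concrete, here is a minimal NumPy sketch of the constrained exponential-gradient update used in \pref{alg:npg} for a softmax linear policy: in covered states the update $\pi^{t+1}(a|s)\propto\pi^t(a|s)\exp(\eta\,\theta^t\cdot\phi(s,a))$ amounts to the parameter step $w^{t+1}=w^t+\eta\,\theta^t$, while in states with positive-bonus actions the policy plays uniformly among them. The feature matrix, critic vector, and bonus values below are illustrative assumptions, not EPOC\xspace's actual implementation.
\begin{verbatim}
import numpy as np

def npg_update(w, theta, eta):
    # pi_w(a|s) proportional to exp(w . phi(s,a)); multiplying pi^t by
    # exp(eta * theta . phi) is the same as the step w <- w + eta * theta.
    return w + eta * theta

def action_probs(w, Phi_s, bonus_s):
    # Phi_s: |A| x d features at state s; bonus_s: per-action bonus values.
    hot = np.flatnonzero(bonus_s > 0)
    if hot.size > 0:                     # under-explored actions exist:
        p = np.zeros(len(bonus_s))       # play uniformly among them
        p[hot] = 1.0 / hot.size
        return p
    z = Phi_s @ w
    z -= z.max()                         # for numerical stability
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)
d, A = 3, 4
w, theta = np.zeros(d), rng.normal(size=d)   # theta^t from the critic
Phi_s = rng.normal(size=(A, d))
w = npg_update(w, theta, eta=0.5)
print(action_probs(w, Phi_s, bonus_s=np.zeros(A)))
print(action_probs(w, Phi_s, bonus_s=np.array([0, 100.0, 100.0, 0])))
\end{verbatim}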
\subsection{The EPOC\xspace Algorithm} \input{main_pseudocode} At a high level, our algorithm iteratively constructs a policy cover, and uses policy optimization from the cover's state distribution as the initial distribution. We design a reward bonus to encourage exploration of novel states and actions not adequately covered. For the bonus, we borrow ideas from linear bandits~\citep{dani2008stochastic,abbasi2011improved} to reward $(s,a)$ pairs whose features $\phi(s,a)$ have a small projection on the covariance matrix of the policy cover. The policy optimizer uses Natural Policy Gradient \citep{Kakade01,Bagnell:2003:CPS:1630659.1630805,Peters:2008:NA:1352927.1352986}. \textbf{Policy cover and reward bonus.} At each episode $n$, we maintain an estimate of the feature covariance matrix of $\pi^n$, denoted $\widehat\Sigma^n$: the empirical average of $\phi(s,a)\phi(s,a)^{\top}$ over state-action pairs sampled i.i.d.\ from $d^{\pi^n}$. Note that $\widehat\Sigma^n$ is an unbiased estimate of $\Sigma^n:= \ensuremath{\mathbb{E}}_{(s,a)\sim d^n} \phi(s,a)\phi(s,a)^{\top}$.
To sample a state-action pair from $d^{\pi}_{\rho}$ with $\rho\in\Delta(S)$ being some initial state distribution, we can sample an $h \in\mathbb{N}$ with probability proportional to $\gamma^h$, and then execute $\pi$ to step $h$ starting from the initial distribution $\rho$. At the beginning of each episode, we form a \emph{policy cover} $\rho_{\mix}^n$ by averaging all previous policies' distributions and form an estimated covariance matrix $\widehat{\Sigma}_{\mix}$ (\pref{line:feature_cov}). Next we use this covariance matrix and the linear MDP structure to identify state-action pairs which are adequately covered by $\rho$. Since the $Q$-functions of all policies are linear, if we visit features like $\phi(s,a)$ often under $\rho_{\mix}$, then we can estimate $Q$ values of any policy at $(s,a)$ easily, say by rollouts. Hence, the goal of the reward bonus is to identify state-action pairs whose features are less explored by $\rho_{\mix}$ and incentivize visiting them. The definition of the bonus $b^n(s,a)$ at iteration $n$ of the algorithm shown below achieves this: \begin{equation} b^n(s,a) = \frac{\one\{(s,a)~:~ \phi(s,a)^\top (\widehat{\Sigma}_{\mix})^{-1}\phi(s,a) \geq \beta\}}{1-\gamma}. \label{eq:bonus} \end{equation} If the quadratic form defining the bonus is large, then $\Sigma_{\mix}$ has a small eigenvalue along $\phi(s,a)$ and we assign the largest possible future reward from this $(s,a)$ pair to encourage exploration.\footnote{For an infinite dimensional RKHS, Eq.~\ref{eq:bonus} can be computed in the dual using the kernel trick (e.g., \cite{valko2013finite}).} We note that the use of the largest reward in an underexplored state is reminiscent of the Rmax algorithm~\cite{brafman2002r} which does the same in the tabular setting. The remaining steps of the algorithm perform a slightly modified Natural Policy Gradient (NPG) update to optimize the combined reward $r+b^n$ with $\rho_{\mix}^n$ as the restart distribution. Note we do not need to precompute bonuses for every $(s,a)$: we just need to compute bonuses along the rollouts generated from the NPG procedure. \textbf{Policy Optimization.} We denote $Q^{\pi}_{b^n}$ and $V^{\pi}_{b^n}$ as the $Q$ and value function of policy $\pi$ under reward $r+b^n$. In the $t$-th iteration of the policy optimization inner loop (lines~\ref{line:for_beg}-\ref{line:for_end}), we learn a linear function $\theta^t\cdot \phi(s,a)$ to approximate $Q^{t}_{b^n}(s,a)$ (\pref{line:learn_critic}). Specifically, we draw $M$ many samples $(s_i,a_i)\iidsim \rho_{\mix}^n$ and for each $(s_i,a_i)$, we compute an unbiased estimate of $Q^t_{b^n}(s_i,a_i)$. Drawing a state-action pair from $\rho_{\mix}^n$ can be implemented by first uniformly sampling a policy $\pi$ from $\{\pi_0,\pi_1,\dots \pi_n\}$, and then sampling a state-action pair from $d^{\pi}$. For any $(s,a)$, to get an unbiased estimate $\widehat{Q}^{\pi}(s,a)$ of $Q^{\pi}(s,a)$, we can sample a time step $h\geq 0$ with probability proportional to $\gamma^h$, and roll out our policy $\pi$ from $(s,a)$ to time step $h$. We set $\widehat{Q}^{\pi}(s,a)$ as the undiscounted sum of the rewards along the rollout and then perform constrained linear regression using the $M$ data points to estimate $\theta^t$. Using $\theta^t$, we update the policy per-state as shown in~\pref{line:learn_actor}. Note that the first equation in~\pref{line:learn_actor} can be rewritten in an equivalent policy parameter update step: \begin{center} $w^{t+1} := w^{t} + \eta \theta^t$ with $\pi^t(s,a) \propto \exp(w^t\cdot \phi(s,a))$.
\end{center} That is, $\theta^t$ is the natural gradient direction and $\eta$ is the learning rate. For states which have non-zero bonuses, we simply take actions with positive bonuses uniformly at random (the second equation in \pref{line:learn_actor}). \textbf{Intuition for tabular setting.} For a more intuitive understanding of the bonus, let us consider tabular MDPs. In this case, $(\widehat \Sigma_{\mix})^{-1}$ is a diagonal matrix with entries proportional to $1/n_{s,a}$, where $n_{s,a}$ is the number of times $(s,a)$ is observed in the data collected to form the matrix $\widehat\Sigma_{\mix}$. Hence the bonus simply rewards state-action pairs with a small number of samples when executing the policy cover, and thereby encourages reaching new states. Policy optimization under NPG for tabular problems decouples across states~\cite{Kakade01,agarwal2019optimality}. \textbf{On model-free exploration.} Typical exploration algorithms for tabular~\cite{kearns2002near,brafman2002r,agrawal2017posterior} as well as linear MDPs~\cite{jin2019provably} implicitly or explicitly build an empirical MDP model and then plan in the model. State-action pairs which are less explored are typically modified to have dynamics leading to a maximally rewarding absorbing state. In contrast, EPOC\xspace plans in the real environment through its policy optimization subroutine. Dynamics of the real world cannot be altered, and adding a bonus to rewards is the only natural mechanism. This model-free and on-policy nature of our approach, where we do not store all the past data but only incrementally update the policy from its execution in the environment, presents technical challenges in the analysis, but also affords robustness to model misspecification as we discuss in the next section. \subsection{Main Results} \label{sec:analysis} For the analysis, we focus on proving sample complexity results for linear MDPs with potentially infinite dimensional feature $\phi$. We then show how our algorithm can work under model misspecification in arguably one of the most basic models, that of state-aggregation. \textbf{Well specified case: Linear MDPs.} To measure the sample complexity, we define the \emph{intrinsic dimension} of the underlying MDP $\mathcal{M}$. First, denote the covariance matrix of any policy $\pi$ as $\Sigma^{\pi} = \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\pi}}\left[\phi(s,a)\phi(s,a)^{\top}\right]$. We define the intrinsic dimension below: \begin{definition}[Intrinsic Dimension $\widetilde{d}$] $\widetilde{d} := \max_{n\in\mathbb{N}^+} \max_{\{\pi^i\}_{i=1}^n } \frac{ \log\det\left(\sum_{i=1}^n \Sigma^{\pi^i} + \mathbf{I}\right) }{\log(n + 1)}$. \label{def:int_dim} \end{definition} This quantity is identical to the intrinsic dimension in Gaussian Process bandits \citep{srinivas2010gaussian}; one viewpoint of this quantity is as the information gain from a Bayesian perspective \citep{srinivas2010gaussian}. A related quantity occurs in a more restricted linear MDP model, in \citet{yang2019reinforcement}. Note that when $\phi(s,a)\in \mathbb{R}^d$, we have that $\log\det\left(\sum_{i=1}^n \Sigma^{\pi^i} + \mathbf{I}\right) \leq d \log(n + 1)$ (as $\|\phi(s,a)\|_2\leq 1$), which means that the intrinsic dimension is at most $d$. Note that $\widetilde{d}\ll d$ if the covariance matrices from a sequence of policies are concentrated only in a low-dimensional subspace. \begin{theorem}[Sample Complexity of EPOC\xspace for Linear MDPs] Fix $\epsilon, \delta \in (0,1)$. Suppose that $\mathcal{M}$ is a linear MDP.
There exists a setting of the parameters such that EPOC\xspace uses a number of samples at most $\text{poly}\left( \frac{1}{1-\gamma},\log(A), \frac{1}{\epsilon}, \widetilde{d}, W, \ln\left(\frac{1}{\delta}\right) \right)$ and, with probability greater than $1-\delta$, returns a policy $\widehat \pi$ such that: $V^{\widehat\pi}(s_0) \geq \max_{\pi\in\Pi_{linear}}V^{\pi}(s_0) - \epsilon$. \label{thm:linear_mdp} \end{theorem} The detailed bound with the rates, along with the settings of the hyperparameters $\beta$, $\lambda$, $T$, $M$, and $\eta$ (the learning rate in NPG), is given in~\pref{app:rmaxpg_sample}. Our theorem assumes discrete actions, but the sample complexity only scales polylogarithmically with respect to the number of actions $A$, making the algorithm scalable to large action spaces. \begin{rmk}For tabular MDPs, as $\phi$ is a $|\mathcal{S}||\mathcal{A}|$-dimensional indicator vector, the theorem above immediately extends to tabular MDPs with $\widetilde{d}$ replaced by $|\mathcal{S}||\mathcal{A}|$ and $\Pi_{linear}$ replaced by $\Pi_{tab}$. \end{rmk} \begin{rmk} In contrast with LSVI-UCB \citep{jin2019provably}, EPOC\xspace works for infinite dimensional $\phi$ with a polynomial dependency on the intrinsic dimension $\widetilde{d}$. To the best of our knowledge, this is the first infinite dimensional result for the model proposed by \citet{jin2019provably}. \end{rmk} \begin{rmk} EPOC\xspace can run in the reward-free setting in linear MDPs (\pref{thm:reward_free} in \pref{app:reward_free_explore}): $r(s,a) = 0$, $\forall (s,a)$. Here, EPOC\xspace identifies a subset $\Kcal^n$, such that the features of every $(s,a)\in\Kcal^n$ are well explored, while no policy can escape $\Kcal^n$ with probability more than $\epsilon$. \end{rmk} \input{state_aggregation} \section{Theory and Examples} \label{sec:analysis} For the analysis, we first state sample complexity results for linear MDPs. Specifically, we focus on analyzing linear MDPs with infinite dimensional features (i.e., the transition and reward live in an RKHS) and show that EPOC\xspace's sample complexity scales polynomially with respect to the maximum information gain \citep{srinivas2010gaussian}. We then demonstrate the robustness of EPOC\xspace to model misspecification in two concrete ways. We first provide a result for state aggregation, showing that the error incurred is only the model error from aggregation averaged over the fixed comparator's abstracted state distribution, as opposed to an $\ell_{\infty}$ model error (i.e., the maximum possible model error over the entire state-action space due to state aggregation). We then move to a more general agnostic setting and show that our algorithm is robust to model misspecification as measured by the notion of \emph{transfer error} recently introduced by \citet{agarwal2019optimality}. Compared to the Q-NPG analysis from \cite{agarwal2019optimality}, we show that EPOC\xspace eliminates the assumption of having access to a well conditioned initial distribution (recall that in our setting the agent can only reset to a fixed initial state $s_0$), as our algorithm actively maintains a policy cover. We also provide other examples where the linear MDP assumption is only valid for a sub-part of the MDP, and the algorithm competes with the best policy on this sub-part, while most prior approaches fail due to the delusional bias of Bellman backups under function approximation and model misspecification~\citep{lu2018non}.
\subsection{Well specified case: Linear MDPs} \label{sec:linear} Let us define linear MDPs first \citep{jin2019provably}. Rather than focusing on finite feature dimension as \citet{jin2019provably} did, we directly work with linear MDPs in a general Reproducing Kernel Hilbert Space (RKHS). \begin{definition}[Linear MDP] Let $\ensuremath{\mathcal{H}}$ be a Reproducing Kernel Hilbert Space (RKHS), and define a feature mapping $\phi:\mathcal{S}\times\mathcal{A}\to \ensuremath{\mathcal{H}}$. An MDP $(\mathcal{S}, \mathcal{A}, P, r, \gamma, s_0)$ is called a linear MDP if the reward function lives in $\ensuremath{\mathcal{H}}$: $r(s,a) = \langle \theta, \phi(s,a) \rangle_{\ensuremath{\mathcal{H}}}$, and the transition operator $P(s'|s,a)$ also lives in $\ensuremath{\mathcal{H}}$: $P(s'|s,a) = \langle \mu(s'), \phi(s,a) \rangle_{\ensuremath{\mathcal{H}}}$ for all $(s,a,s')$. Denote by $\mu$ the matrix whose row corresponding to $s'$ is $\mu(s')$. We assume the parameter norms\footnote{The norms are induced by the inner product in the Hilbert space $\ensuremath{\mathcal{H}}$, unless stated otherwise.} are bounded as $\|\theta\| \leq \omega$, $ \|v^{\top}\mu\| \leq \xi $ for all $v\in\mathbb{R}^{|\mathcal{S}|}$ with $\|v\|_{\infty} \leq 1$. \label{def:linear_mdp} \end{definition} As our feature vector $\phi$ could be infinite dimensional, to measure the sample complexity, we define the \emph{maximum information gain} of the underlying MDP $\mathcal{M}$. First, denote the covariance matrix of any policy $\pi$ as $\Sigma^{\pi} = \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\pi}}\left[\phi(s,a)\phi(s,a)^{\top}\right]$. We define the maximum information gain below: \begin{definition}[Maximum Information Gain $\mathcal{I}_N(\lambda)$] We define the maximum information gain as: \begin{align*} \mathcal{I}_N(\lambda) := \max_{\{\pi^i\}_{i=0}^{N-1}} \log\det\left( \frac{1}{\lambda}\sum_{i=0}^{N-1} \Sigma^{\pi^i} + I \right), \end{align*} where $\lambda \in\mathbb{R}^+$. \label{def:int_dim} \end{definition} \begin{rmk} This quantity is identical to the maximum information gain in Gaussian Process bandits \citep{srinivas2010gaussian} from a Bayesian perspective. A related quantity occurs in a more restricted linear MDP model, in \citet{yang2019reinforcement}. Note that when $\phi(s,a)\in \mathbb{R}^d$, we have that $\log\det\left(\frac{1}{\lambda}\sum_{i=0}^{N-1} \Sigma^{\pi^i} + I\right) \leq d \log(NB^2/\lambda + 1)$ assuming $\|\phi(s,a)\|_2\leq B$, which means that the information gain is always at most $\widetilde{O}(d)$. Note that $\mathcal{I}_N(\lambda)\ll d$ if the covariance matrices from a sequence of policies are concentrated in a low-dimensional subspace (e.g., $\phi$ is infinite dimensional while all policies only visit a two-dimensional subspace). \end{rmk} For linear MDPs, we leverage the following key observation: we have that $Q^{\pi}(s,a;r+b^n) - b^n(s,a)$ is linear with respect to $\phi(s,a)$ for any possible bonus function $b^n$ and policy $\pi$, which we prove in \pref{claim:linear_property}. The intuition is that the transition dynamics are still linear (as we do not modify the underlying transitions) with respect to $\phi$, so a Bellman backup $r(s,a) + \ensuremath{\mathbb{E}}_{s'\sim P_{(s,a)}} V^{\pi}(s' ; r+b^n)$ is still linear with respect to $\phi(s,a)$ (recall that a linear MDP has the property that a Bellman backup on any function $f(s')$ yields a linear function in features $\phi(s,a)$).
Given this linearity, we can successfully find a linear critic to approximate $Q^{\pi^t}(s,a; r+b^n) - b^n(s,a)$ under $\rho^n_\mix$ up to a statistical error, i.e., \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix} \left(\theta^t \cdot \phi(s,a) - \left(Q^{\pi^t}(s,a;r+b^n) -b^n(s,a) \right) \right)^2 = O\left( 1/\sqrt{M} \right), \end{align*} where $M$ is the number of samples used for the constrained linear regression (\pref{line:learn_critic}). This further implies that $\theta^t\cdot \phi(s,a) + b^n(s,a)$ approximates $Q^{\pi^t}(s,a;r+b^n)$ up to the same statistical error. With this intuition, the following theorem states the sample complexity of EPOC\xspace under the linear MDP assumption. \begin{theorem}[Sample Complexity of EPOC\xspace for Linear MDPs] Fix $\epsilon, \delta \in (0,1)$ and an arbitrary comparator policy $\pi^\star$ (not necessarily an optimal policy). Suppose that $\mathcal{M}$ is a linear MDP (\pref{def:linear_mdp}). There exists a setting of the parameters such that EPOC\xspace uses a number of samples at most $\text{poly}\left( \frac{1}{1-\gamma},\log(A), \frac{1}{\epsilon}, \mathcal{I}_N(1), \omega, \xi, \ln\left(\frac{1}{\delta}\right) \right)$ and, with probability greater than $1-\delta$, returns a policy $\widehat \pi$ such that: \begin{align*} V^{\widehat\pi}(s_0) \geq V^{\pi^\star}(s_0) - \epsilon. \end{align*} \label{thm:linear_mdp} \end{theorem} A few remarks are in order: \begin{remark}For tabular MDPs, as $\phi$ is a $|\mathcal{S}||\mathcal{A}|$-dimensional indicator vector, the theorem above immediately extends to tabular MDPs with $\mathcal{I}_N(1)$ replaced by $|\mathcal{S}||\mathcal{A}| \log(N+1)$. \end{remark} \begin{remark} In contrast with LSVI-UCB \citep{jin2019provably}, EPOC\xspace works for infinite-dimensional $\phi$ with a polynomial dependency on the maximum information gain $\mathcal{I}_N(1)$. To the best of our knowledge, this is the first efficient model-free on-policy policy gradient result for linear MDPs, and also the first infinite-dimensional result for the linear MDP model proposed by \citet{jin2019provably}. \end{remark} Instead of proving \pref{thm:linear_mdp} directly, we will state and prove a general theorem for EPOC\xspace on general MDPs with model misspecification measured by the \emph{transfer error} (\pref{ass:transfer_bias}) introduced by \citet{agarwal2019optimality}; see \pref{sec:agnostic_result}. \pref{thm:linear_mdp} can then be understood as a corollary of the more general agnostic theorem (\pref{thm:agnostic}). The detailed proof of \pref{thm:linear_mdp} is included in \pref{app:app_to_linear_mdp}.
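To make the critic-fitting step concrete, here is a minimal numpy sketch of the norm-constrained least-squares fit: given features $x = \phi(s,a)$ and regression targets $y \approx Q^{\pi^t}(s,a;r+b^n) - b^n(s,a)$ (however they are produced; the synthetic data below is ours), projected SGD keeps $\|\theta\| \leq W$, in the spirit of the dimension-free regression lemma in \pref{app:tech_lemmas}:
\begin{verbatim}
import numpy as np

def fit_linear_critic(samples, W=10.0, n_epochs=5, lr=0.05):
    """Norm-constrained least squares: fit theta with ||theta|| <= W to
    targets y ~= Q(s,a; r+b) - b(s,a) given features x = phi(s,a).

    `samples` is a list of (x, y) pairs; how they are produced (e.g. by a
    hypothetical Monte-Carlo rollout oracle) is outside this sketch."""
    d = len(samples[0][0])
    theta = np.zeros(d)
    for _ in range(n_epochs):
        for x, y in samples:
            grad = 2.0 * (theta @ x - y) * x   # gradient of the squared loss
            theta -= lr * grad
            norm = np.linalg.norm(theta)
            if norm > W:                       # project back onto the W-ball
                theta *= W / norm
    return theta

# Usage with synthetic data: targets linear in features plus noise.
rng = np.random.default_rng(1)
theta_true = rng.normal(size=5)
data = [(x, x @ theta_true + 0.1 * rng.normal())
        for x in rng.normal(size=(500, 5))]
theta_hat = fit_linear_critic(data)
print(np.linalg.norm(theta_hat - theta_true))  # small estimation error
\end{verbatim}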
\input{state_aggregation}
\input{transfer_bias_no_ind}
\input{misspecified_example}
\section{Experimental Details} \label{app:exp} \subsection{Algorithm Implementation} We implemented two versions of the algorithm: one with a reward bonus which is added to the environment reward (shown in Algorithm \ref{alg:rmaxpg_implemented_reward_bonus}), and one which performs reward-free exploration, optionally followed by reward-based exploitation using the policy cover as a start distribution (shown in Algorithm \ref{alg:rmaxpg_implemented_reward_free}). Both of these use NPG as a subroutine, which performs policy optimization using the restart distribution induced by a policy mixture $\Pi_\mathrm{mix}$. The implementation of NPG is described in Algorithm \ref{alg:npg_implemented}. We sample states from the restart distribution by randomly sampling a roll-in policy from the cover and a horizon length $h'$, and following the sampled policy for $h'$ steps. Rewards gathered during these roll-in steps are not used for optimization. With probability $\epsilon$, a random action is taken at the beginning of the rollout. We then roll out using the current policy being optimized, and use the rewards gathered for optimization. The policy parameters can be updated using any policy gradient method; we used PPO \citep{schulman2017proximal} in our experiments. For all experiments, we optimized the policy mixture weights $\alpha_1,..., \alpha_n$ at each episode using $2000$ steps of gradient descent with the Adam optimizer and a learning rate of $0.001$. All implementations are done in PyTorch \citep{PyTorch} and build on the codebase of \citet{deeprl}. Experiments were run on a GPU cluster which consisted of a mix of 1080Ti, TitanV, K40, P100 and V100 GPUs. \begin{algorithm}[h!] \begin{algorithmic}[1] \State \textbf{Require}: kernel function $\phi: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}^d$ \State Initialize policy $\pi_1$ randomly \State Initialize policy mixture $\Pi_\mathrm{mix} \leftarrow \{\pi_1\}$ \State Initialize episode buffer: $\mathcal{R} \leftarrow \emptyset$ \For{episode $n = 1, \dots, K$} \For{trajectory $k = 1, \dots, K$} \State Gather trajectory $\tau_k = \{s_h^{(k)}, a_h^{(k)}\}_{h=1}^H$ following $\pi_n$ \State $\mathcal{R} \leftarrow \mathcal{R} \cup \{(s_h^{(k)}, a_h^{(k)})\}_{h=1}^H$ \EndFor \State Compute empirical covariance matrix: $\hat{\Sigma}_n = \sum_{(s, a) \in \mathcal{R}} \phi(s, a) \phi(s, a)^\top$ \State Define exploration bonus: $b_n(s, a) = \phi(s, a)^\top \hat{\Sigma}_n^{-1} \phi(s, a)$ \State Optimize policy mixture weights: $\alpha^{(n)} = \argmin_{\alpha=(\alpha_1, ..., \alpha_n), \alpha_i \geq 0, \sum_i \alpha_i = 1} \log \det \Big[ \sum_{i=1}^n \alpha_i \hat{\Sigma}_i \Big]$ \State $\pi_{n+1} \leftarrow \mathrm{NPG}(\pi_n, \Pi_\mathrm{mix}, \alpha^{(n)}, N_\mathrm{update}, r + b_n)$ \State $\Pi_\mathrm{mix} \leftarrow \Pi_\mathrm{mix} \cup \{\pi_{n+1}\}$ \EndFor \end{algorithmic} \caption{EPOC\xspace (reward bonus version)} \label{alg:rmaxpg_implemented_reward_bonus} \end{algorithm} \begin{algorithm}[h!]
\begin{algorithmic}[1] \State \textbf{Require}: kernel function $\phi: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}^d$ \State Initialize policy $\pi_1$ randomly \State Initialize policy mixture $\Pi_\mathrm{mix} \leftarrow \{\pi_1\}$ \State Initialize episode buffer: $\mathcal{R} \leftarrow \emptyset$ \For{episode $n = 1, \dots, K$} \For{trajectory $k = 1, \dots, K$} \State Gather trajectory $\tau_k = \{s_h^{(k)}, a_h^{(k)}\}_{h=1}^H$ following $\pi_n$ \State $\mathcal{R} \leftarrow \mathcal{R} \cup \{(s_h^{(k)}, a_h^{(k)})\}_{h=1}^H$ \EndFor \State Compute empirical covariance matrix: $\hat{\Sigma}_n = \sum_{(s, a) \in \mathcal{R}} \phi(s, a) \phi(s, a)^\top$ \State Define exploration bonus: $b_n(s, a) = \phi(s, a)^\top \hat{\Sigma}_n^{-1} \phi(s, a)$ \State Optimize policy mixture weights: $\alpha^{(n)} = \argmin_{\alpha=(\alpha_1, ..., \alpha_n), \alpha_i \geq 0, \sum_i \alpha_i = 1} \log \det \Big[ \sum_{i=1}^n \alpha_i \hat{\Sigma}_i \Big]$ \State $\pi_{n+1} \leftarrow \mathrm{NPG}(\pi_n, \Pi_\mathrm{mix}, \alpha^{(n)}, N_\mathrm{update}, b_n)$ \State $\Pi_\mathrm{mix} \leftarrow \Pi_\mathrm{mix} \cup \{\pi_{n+1}\}$ \EndFor \State Initialize policy $\pi_\mathrm{exploit}$ randomly \State $\pi_\mathrm{exploit} \leftarrow \mathrm{NPG}(\pi_\mathrm{exploit}, \Pi_\mathrm{mix}, \alpha^{(K)}, N_\mathrm{update}, r)$ \end{algorithmic} \caption{EPOC\xspace (reward-free exploration version)} \label{alg:rmaxpg_implemented_reward_free} \end{algorithm} \begin{algorithm}[h!] \begin{algorithmic}[1] \State \textbf{Input} policy $\pi$, policy mixture $\Pi_\mathrm{mix}=\{\pi_1, ..., \pi_n\}$, mixture weights $(\alpha_1, ..., \alpha_n)$, optional reward bonus $b: \mathcal{S} \times \mathcal{A} \rightarrow [0, 1]$ \For{policy update $j = 1, \dots, N_\mathrm{update}$} \State Sample roll-in policy index $i \sim \mathrm{Multinomial}\{\alpha_1, ..., \alpha_n\}$ \State Sample roll-in horizon index $h' \sim \mathrm{Uniform}\{0, ..., H-1\}$ \State Sample start state $s_0 \sim P(s_0)$ \For{$h=0, \dots, h'-1$} \State $a_h \sim \pi_i(\cdot | s_h), s_{h+1} \sim P(\cdot | s_h, a_h)$ \EndFor \For{$h=h', \dots, H$} \State $a_h \sim \pi(\cdot | s_h)$ ($\epsilon$-greedy if $h=h'$) \State $s_{h+1}, r_{h+1} \sim P(\cdot | s_h, a_h)$ \EndFor \State Perform policy gradient update on the return $R = \sum_{h=h'}^H r(s_h, a_h)$ \EndFor \State Return $\pi$ \end{algorithmic} \caption{$\mathrm{NPG}(\pi, \Pi_\mathrm{mix}, \alpha, N_\mathrm{update}, r)$} \label{alg:npg_implemented} \end{algorithm} \subsection{Environments} \subsubsection{Bidirectional Diabolical Combination Lock} \label{app:combolock} The environment consists of a start state $s_0$ where the agent is placed (deterministically) at the beginning of every episode. The action space consists of $10$ discrete actions, $\mathcal{A} = \{1, 2, ..., 10\}$. In $s_0$, actions $1-5$ lead the agent to the initial state of the first lock and actions $6-10$ lead the agent to the initial state of the second lock. Each lock $l$ consists of $3H$ states, indexed by $s_{1, h}^l, s_{2, h}^l, s_{3, h}^l$ for $h \in \{1, ..., H\}$. A high reward of $R_l$ is obtained at the last states $s_{1, H}^l, s_{2, H}^l$. The states $\{s_{3, h}^l\}_{h=1}^H$ are all ``dead states'' which yield $0$ reward. Once the agent is in a dead state $s_{3, h}^l$, it transitions deterministically to $s_{3, h+1}^l$; thus entering a dead state at any time makes it impossible to obtain the final reward $R_l$.
At each ``good'' state $s_{1, h}^l$ or $s_{2, h}^l$, a single action leads the agent (stochastically, with equal probability) to one of the next good states $s_{1, h+1}^l, s_{2, h+1}^l$. All other $9$ actions lead the agent to the dead state $s_{3, h+1}^l$. The correct action changes at every step $h$, and the stochastic nature of the transitions precludes algorithms which plan deterministically. In addition, the agent receives a negative reward of $-1/H$ for transitioning to a good state, and a reward of $0$ for transitioning to a dead state. Therefore, a locally optimal solution is to learn a policy which transitions to a dead state as quickly as possible, since this avoids the $-1/H$ penalty. States are encoded using binary vectors. The start state $s_0$ is simply the zero vector. In each lock, the state $s_{i, h}^l$ is encoded as a binary vector which is the concatenation of one-hot encodings of $i$, $h$, and $l$. One of the locks (randomly chosen) gives a final reward of $5$, while the other lock gives a final reward of $2$. Therefore, in addition to the locally optimal policy of quickly transitioning to the dead state (with return $0$), another locally optimal solution is to explore the lock with reward $2$ and gather the reward there. This leads to a return of $V = 2 - \sum_{h=1}^H \frac{1}{H} = 1$, whereas the optimal return for going to the end of the lock with reward $5$ is $V^\star = 5 - \sum_{h=1}^H \frac{1}{H} = 4$. To ensure that the optimal reward is discovered, the agent must therefore explore both locks to the end. We used Algorithm \ref{alg:rmaxpg_implemented_reward_free} for this environment. \subsubsection{Mountain Car} \label{app:mountaincar} We used the \texttt{MountainCarContinuous-v0} OpenAI Gym environment at \url{https://gym.openai.com/envs/MountainCarContinuous-v0/}. This environment has a 2-dimensional continuous state space and a 1-dimensional continuous action space. We used Algorithm \ref{alg:rmaxpg_implemented_reward_bonus} for this environment. \subsubsection{Mazes} We used the source code from \url{https://github.com/junhyukoh/value-prediction-network/blob/master/maze.py} to implement the maze environment, with the following modifications: i) the blue channel (originally representing the goal) is set to zero; ii) the same maze is used across all episodes; iii) the reward is set to a constant $0$. We set the maze size to $20 \times 20$. There are $5$ actions: \texttt{\{up, down, left, right, no-op\}}. We used Algorithm \ref{alg:rmaxpg_implemented_reward_free} for this environment, omitting the exploitation step. \subsection{Hyperparameters} All methods were based on the PPO implementation of \citet{deeprl}. For the Diabolical Combination Lock and Mountain Car environments, we used the same policy network architecture: a 2-layer fully connected network with 64 hidden units per layer and ReLU non-linearities. For the Diabolical Combination Lock environment, the last layer outputs a softmax over the 10 actions, and for Mountain Car it outputs the parameters of a 1D Gaussian. For the Maze environments, we used a convolutional network with 2 convolutional layers ($32$ kernels of size $3 \times 3$ for the first, $64$ kernels of size $3 \times 3$ for the second, both with stride 2), followed by a single fully-connected layer with $512$ hidden units, and a final linear layer mapping to a softmax over the $5$ actions.
In all cases the RND network has the same architecture as the policy network, except that the last linear layer mapping hidden units to actions is removed. We found that tuning the intrinsic reward coefficient was important for getting good performance with RND. Hyperparameters are shown in Tables \ref{table:ppo-hparams-cont} and \ref{table:ppo-hparams-maze}. \begin{table}[h!] \caption{PPO+RND Hyperparameters for Combolock and Mountain Car} \centering \begin{tabular}{llll} \toprule Hyperparameter & Values Considered & Final Value (Combolock) & Final Value (Mountain Car) \\ \hline Learning Rate & $10^{-3}, 5\cdot 10^{-4}, 10^{-4}$ & $10^{-3}$ & $10^{-4}$ \\ Hidden Layer Size & $64$ & $64$ & $64$ \\ $\tau_\mathrm{GAE}$ & 0.95 & 0.95 & 0.95 \\ Gradient Clipping & $5.0$ & $5.0$ & $5.0$ \\ Entropy Bonus & $0.01$ & $0.01$ & $0.01$ \\ PPO Ratio Clip & $0.2$ & $0.2$ & $0.2$ \\ PPO Minibatch Size & $160$ & $160$ & $160$ \\ PPO Optimization Epochs & $5$ & $5$ & $5$ \\ Intrinsic Reward Normalization & true, false & false & false \\ Intrinsic Reward coefficient & $0.5, 1, 10, 10^2, 10^3, 10^4$ & $10^3$ & $10^3$ \\ Extrinsic Reward coefficient & $1.0$ & $1.0$ & $1.0$ \\ \bottomrule \end{tabular} \label{table:ppo-hparams-cont} \end{table} \begin{table}[h!] \caption{PPO+RND Hyperparameters for Mazes} \centering \begin{tabular}{lll} \toprule Hyperparameter & Values Considered & Final Value \\ \hline Learning Rate & $10^{-3}, 5\cdot 10^{-4}, 10^{-4}$ & $10^{-3}$ \\ Hidden Layer Size & $512$ & $512$ \\ $\tau_\mathrm{GAE}$ & 0.95 & 0.95 \\ Gradient Clipping & $0.5$ & $0.5$ \\ Entropy Bonus & $0.01$ & $0.01$ \\ PPO Ratio Clip & $0.1$ & $0.1$ \\ PPO Minibatch Size & $128$ & $128$ \\ PPO Optimization Epochs & $10$ & $10$ \\ Intrinsic Reward Normalization & true, false & true \\ Intrinsic Reward coefficient & $1, 10, 10^2, 10^3, 10^4$ & $10^3$\\ \bottomrule \end{tabular} \label{table:ppo-hparams-maze} \end{table} The hyperparameters used for EPOC\xspace are given in Tables \ref{table:rmaxpg-hparams-cont} and \ref{table:rmaxpg-hparams-maze}. For the Diabolical Combination Lock experiments, we used the kernel $\phi(s, a) = s$, where $s$ is the binary vector encoding of the state described in Section \ref{app:combolock}. For Mountain Car, we used a Random Kitchen Sinks kernel \citep{randomkitchensinks} with 10 features, using the following implementation: \url{https://scikit-learn.org/stable/modules/generated/sklearn.kernel_approximation.RBFSampler.html}. For the Maze environments, we used a randomly initialized convolutional network with the same architecture as the RND network as the kernel.
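For concreteness, the following PyTorch sketch shows one way to implement the mixture-weight step of Algorithms \ref{alg:rmaxpg_implemented_reward_bonus} and \ref{alg:rmaxpg_implemented_reward_free}: the log-determinant objective, as written in the algorithms, is optimized with Adam over the simplex via a softmax reparameterization (the $2000$ steps and learning rate $0.001$ match the description above; the random covariances are illustrative only):
\begin{verbatim}
import torch

def optimize_mixture_weights(covs, n_steps=2000, lr=1e-3):
    """Optimize simplex weights alpha for log det [ sum_i alpha_i Sigma_i ],
    as in the mixture-weight step of the implemented algorithm.  The
    simplex constraint is enforced by a softmax reparameterization."""
    logits = torch.zeros(len(covs), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        alpha = torch.softmax(logits, dim=0)
        mix = sum(a * c for a, c in zip(alpha, covs))
        loss = torch.logdet(mix)   # objective as written in the algorithm
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()

# Illustrative usage with random empirical covariance matrices.
torch.manual_seed(0)
d, n = 8, 4
feats = [torch.randn(50, d) for _ in range(n)]
covs = [f.T @ f / 50 + 1e-3 * torch.eye(d) for f in feats]
print(optimize_mixture_weights(covs))
\end{verbatim}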
\begin{table}[h] \caption{EPOC\xspace Hyperparameters for Combolock and Mountain Car} \centering \begin{tabular}{llll} \toprule Hyperparameter & Values Considered & Final Value (Combolock) & Final Value (MountainCar) \\ \hline Learning Rate & $10^{-3}, 5\cdot 10^{-4}, 10^{-4}$ & $10^{-3}$ & $5 \cdot 10^{-4}$ \\ Hidden Layer Size & $64$ & $64$ & $64$ \\ $\tau_\mathrm{GAE}$ & 0.95 & 0.95 & 0.95 \\ Gradient Clipping & $5.0$ & $5.0$ & $5.0$ \\ Entropy Bonus & $0.01$ & $0.01$ & $0.01$ \\ PPO Ratio Clip & $0.2$ & $0.2$ & $0.2$ \\ PPO Minibatch Size & $160$ & $160$ & $160$ \\ PPO Optimization Epochs & $5$ & $5$ & $5$ \\ $\epsilon$-greedy sampling & $0, 0.01, 0.05$ & $0.05$ & $0.05$ \\ \bottomrule \end{tabular} \label{table:rmaxpg-hparams-cont} \end{table} \begin{table}[h] \caption{EPOC\xspace Hyperparameters for Mazes} \centering \begin{tabular}{lll} \toprule Hyperparameter & Values Considered & Final Value \\ \hline Learning Rate & $10^{-3}, 5\cdot 10^{-4}, 10^{-4}$ & $5 \cdot 10^{-4}$ \\ Hidden Layer Size & $512$ & $512$ \\ $\tau_\mathrm{GAE}$ & 0.95 & 0.95 \\ Gradient Clipping & $0.5$ & $0.5$ \\ Entropy Bonus & $0.01$ & $0.01$ \\ PPO Ratio Clip & $0.1$ & $0.1$ \\ PPO Minibatch Size & $128$ & $128$ \\ PPO Optimization Epochs & $10$ & $10$ \\ $\epsilon$-greedy sampling & $0.05$ & $0.05$\\ \bottomrule \end{tabular} \label{table:rmaxpg-hparams-maze} \end{table} \clearpage \section{Discussion on Transfer Bias and Concentrability Coefficient} \label{app:transfer_bias_concentrability} We discuss the relationship between the transfer bias and the usual concentrability coefficient here. To compare with existing analyses of NPG/CPI/TRPO, we do not restrict the discussion to linear function approximation. Given a function class $\mathcal{F} = \{f: \mathcal{S}\times \mathcal{A}\to [0, 1/(1-\gamma)]\}$ and a policy $\pi$, let us define the transfer bias as: \begin{align*} \varepsilon_{bias} : = \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \left( A^\pi(s,a) - f^\star(s,a) \right), \end{align*} where $f^\star\in \argmin_{f\in\Fcal} \ensuremath{\mathbb{E}}_{(s,a)\sim \mu_0} \left( A^\pi(s,a) - f(s,a) \right)^2$ is the best on-policy fit under $\mu_0$. Denote the concentrability coefficient as $\|d^\star/\mu_0\|_{\infty}$. Note that to ensure $\|d^\star/\mu_0\|_{\infty} < \infty$, one needs the assumption that $d^\star$ is absolutely continuous with respect to $\mu_0$, which does not always hold (e.g., imagine $\mu_0$ concentrated on a single initial state, as we assume in this work). \begin{claim}[Transfer bias versus Concentrability] Assume $d^\star$ happens to be absolutely continuous with respect to the initial distribution $\mu_0$. Then we always have: $\varepsilon_{bias} \leq \sqrt{\left\|\frac{ d^\star }{ \mu_0} \right\|_{\infty} \epsilon}$, where $\epsilon$ is the error of the best critic under $\mu_0$, i.e., $\epsilon := \min_{f\in\Fcal }\ensuremath{\mathbb{E}}_{(s,a)\sim \mu_0}\left( A^{\pi}(s,a) - f(s,a) \right)^2$. \end{claim} \begin{proof} The proof is an application of the change-of-variables trick.
\begin{align*} &\varepsilon_{bias} := \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \left( A^\pi(s,a) - f^\star(s,a) \right) \leq \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \left\lvert A^\pi(s,a) - f^\star(s,a) \right\rvert \\ & \leq \sqrt{ \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} (A^\pi(s,a) - f^\star(s,a))^2} \leq \sqrt{ \sup_{(s,a)} \frac{ d^\star(s,a) }{ \mu_0(s,a) } \, \ensuremath{\mathbb{E}}_{(s,a)\sim \mu_0} (A^\pi(s,a) - f^\star(s,a))^2 } \leq \sqrt{ \left\|\frac{ d^\star }{ \mu_0} \right\|_{\infty} \epsilon }. \end{align*} This concludes the proof. \end{proof} Note that $\varepsilon_{bias}$ is always finite, while the concentrability coefficient can easily be infinite in the continuous-state setting (e.g., linear MDPs), in which case algorithms like NPG/CPI with $\mu_0$ as the reset distribution can converge to arbitrarily suboptimal solutions. As we show in \pref{app:app_to_linear_mdp}, the transfer bias is zero in linear MDPs with $\Fcal$ being linear function approximation, with no assumptions on $\mu_0$. In \pref{app:examples} we show some additional examples where the transfer bias is zero while the concentrability coefficient is infinite. \section{Auxiliary Lemmas} \label{app:tech_lemmas} \begin{lemma}[Dimension-free Least Squares Guarantee] \label{lemma:least_square_dim_free} Consider the following learning process. Initialize $\theta_1 = \mathbf{0}$. For $i = 1,\dots, N$: draw $(x_i, y_i) \sim \nu$ with $y_i \in [0, H]$ and $\|x_i\| \leq 1$, and set $\theta_{i+1} =\prod_{\Theta:=\{\theta:\|\theta\|\leq W\}} \left(\theta_{i} - \eta_i (\theta_i\cdot x_i - y_i) x_i\right)$ with $\eta_i = W/((W+H)\sqrt{N})$. Setting $\hat{\theta} = \frac{1}{N} \sum_{i=1}^{N} \theta_i$, we have that with probability at least $1-\delta$: \begin{align*} \ensuremath{\mathbb{E}}_{x\sim \nu}\left[\left(\hat\theta\cdot x - \ensuremath{\mathbb{E}}\left[y|x\right]\right)^2\right] \leq \ensuremath{\mathbb{E}}_{x\sim \nu}\left[\left( \theta^\star\cdot x - \ensuremath{\mathbb{E}}\left[y|x\right] \right)^2\right] +\frac{R\sqrt{\ln(1/\delta)}}{\sqrt{N}}, \end{align*} for any $\theta^\star$ with $\|\theta^\star\|\leq W$, where $R = 3(W^2 + WH)$ is dimension-free and depends only on the norm bounds on the features, $\theta^\star$, and $y$. \label{lem:sgd_dim_free} \end{lemma} \begin{proof} Note that we compute $\theta_i$ using Projected Online Gradient Descent \citep{zinkevich2003online} on the sequence of loss functions $(\theta\cdot x_i - y_i)^2$. Using the projected online gradient descent regret guarantee, we have: \begin{align*} \sum_{i=1}^{N} (\theta_i\cdot x_i - y_i)^2 \leq \sum_{i=1}^{N}(\theta^\star\cdot x_i -y_i)^2 + \underbrace{W(W+H)}_{:=Q}\sqrt{N}. \end{align*} Denote the random variable $z_i = (\theta_i\cdot x_i - y_i)^2 - (\theta^\star\cdot x_i - y_i)^2$, and denote by $\ensuremath{\mathbb{E}}_{i}$ the expectation over the randomness at step $i$ conditioned on the history up to step $i-1$.
Note that for $\ensuremath{\mathbb{E}}_{i}[z_i]$, we have: \begin{align*} &\ensuremath{\mathbb{E}}_{i} \left[ (\theta_i\cdot x - y)^2 - (\theta^\star\cdot x - y)^2 \right]\\ & = \ensuremath{\mathbb{E}}_{i} \left[ (\theta_i\cdot x - \ensuremath{\mathbb{E}}[y|x])^2 - (\theta^\star\cdot x - \ensuremath{\mathbb{E}}[y|x])^2 \right] + \ensuremath{\mathbb{E}}_{i}\left[ 2\left( \theta_i\cdot x - \theta^\star\cdot x \right)\left( \ensuremath{\mathbb{E}}[y|x] - y \right) \right]\\ & = \ensuremath{\mathbb{E}}_{i}\left[ (\theta_i\cdot x - \ensuremath{\mathbb{E}}[y|x])^2 - ( \theta^\star\cdot x - \ensuremath{\mathbb{E}}[y|x])^2 \right], \end{align*} where the cross term vanishes since $\ensuremath{\mathbb{E}}\left[\ensuremath{\mathbb{E}}[y|x] - y \mid x\right] = 0$ and $\theta_i$ is determined by the history. For $|z_i|$, we have: \begin{align*} \left\lvert z_i\right\rvert = \left\lvert (\theta_i\cdot x_i - \theta^\star\cdot x_i)(\theta_i\cdot x_i +\theta^\star\cdot x_i - 2y_i) \right\rvert \leq W( 2W + 2H ) = 2W(W+H). \end{align*} Note that $\{z_i - \ensuremath{\mathbb{E}}_i[z_i]\}_i$ forms a martingale difference sequence. Using the Azuma--Hoeffding inequality, we have that with probability at least $1-\delta$: \begin{align*} \left\lvert\sum_{i=1}^N z_i - \sum_{i=1}^N \ensuremath{\mathbb{E}}_{i}\left[ (\theta_i \cdot x - \ensuremath{\mathbb{E}}[y|x])^2 - (\theta^\star \cdot x - \ensuremath{\mathbb{E}}[y|x])^2\right]\right\rvert \leq 2W(W+H) \sqrt{N\ln(1/\delta)}, \end{align*} which implies that: \begin{align*} &\sum_{i=1}^N \ensuremath{\mathbb{E}}_{i}\left[ (\theta_i \cdot x - \ensuremath{\mathbb{E}}[y|x])^2 - (\theta^\star \cdot x - \ensuremath{\mathbb{E}}[y|x])^2\right] \leq \sum_{i=1}^N z_i + 2W(W+H) \sqrt{N\ln(1/\delta)} \\ &\leq Q\sqrt{N} + 2W(W+H) \sqrt{N\ln(1/\delta)}. \end{align*} Applying Jensen's inequality to the left-hand side of the above inequality and dividing by $N$, we have: \begin{align*} \ensuremath{\mathbb{E}}\left( \hat{\theta}\cdot x - \ensuremath{\mathbb{E}}[y|x]\right)^2 \leq \ensuremath{\mathbb{E}}\left(\theta^\star\cdot x - \ensuremath{\mathbb{E}}[y|x]\right)^2 + (Q+2W(W+H)) \sqrt{\frac{\ln(1/\delta)}{N}}. \end{align*} \end{proof} \begin{lemma} \label{lem:trace_tele} Consider the following process: for $n = 1, \dots, N$, $M_n = M_{n-1} + \Sigma_{n}$, with $M_{0} = \lambda \mathbf{I}$ for some $\lambda \geq 1$ and each $\Sigma_n$ a PSD matrix with eigenvalues upper bounded by $1$. We have: \begin{align*} 2 \log\det( M_N) - 2 \log\det( \lambda\mathbf{I}) \geq \sum_{n=1}^N \textsc{Trace}\left( \Sigma_{n} M_{n-1}^{-1} \right). \end{align*} \end{lemma} \begin{proof} Note that $M_0$ is PD, and since $\Sigma_n$ is PSD for all $n$, each $M_n$ is PD as well.
Since $M_{n+1} = M_n^{1/2}\left( \mathbf{I} + M_n^{-1/2} \Sigma_{n+1} M_n^{-1/2} \right) M_n^{1/2}$, the multiplicativity of the determinant gives: \begin{align*} \det(M_{n+1}) = \det(M_n) \det( \mathbf{I} + M_n^{-1/2} \Sigma_{n+1} M_n^{-1/2} ). \end{align*} Taking $\log$ on both sides of the above equality, we have: \begin{align*} &\log\det(M_{n+1}) = \log\det(M_n) + \log\det( \mathbf{I} + M_n^{-1/2}\Sigma_{n+1} M_n^{-1/2}). \end{align*} Denoting the eigenvalues of $M_n^{-1/2}\Sigma_{n+1} M_n^{-1/2}$ by $\sigma_1,\dots, \sigma_d$, we have: \begin{align*} &\log\det(M_{n+1}) = \log\det( M_n) + \sum_{i=1}^{d} \log \left( 1 + \sigma_i \right). \end{align*} Note that $\sigma_i \leq 1$, since the eigenvalues of $\Sigma_{n+1}$ are at most $1$ and $M_n \succeq \lambda \mathbf{I}$ with $\lambda \geq 1$; also, $\log(1+x) \geq x/2$ for $x\in [0,1]$. Hence, we have: \begin{align*} &\log\det(M_{n+1}) \geq \log\det( M_n) + \sum_{i=1}^d \sigma_i / 2 = \log\det(M_n) + \frac{1}{2} \textsc{Trace}\left( M_n^{-1/2}\Sigma_{n+1} M_n^{-1/2} \right) \\ &= \log\det(M_n) + \frac{1}{2}\textsc{Trace}\left( \Sigma_{n+1} M_n^{-1} \right), \end{align*} where we use the fact that $\textsc{Trace}(AB) = \textsc{Trace}(BA)$ and that the trace of a PSD matrix is the sum of its eigenvalues. Summing from $n = 0$ to $N-1$ and telescoping, we conclude the proof. \end{proof} \begin{lemma}[Covariance Matrix Concentration] \label{lemma:covariance_concentration} Given $\nu\in \Delta(\mathcal{S}\times\mathcal{A})$ and $N$ i.i.d.\ samples $\{(s_i,a_i)\}\sim \nu$, with $\|\phi(s,a)\|_2 \leq 1$ for all $(s,a)$. Denote $\Sigma = \ensuremath{\mathbb{E}}_{(s,a)\sim \nu}\phi(s,a)\phi(s,a)^{\top}$, $X_i = \phi(s_i,a_i)\phi(s_i,a_i)^{\top}$, and $X = \sum_{i=1}^N X_i$. Note that $N \Sigma = \ensuremath{\mathbb{E}}[X] = \sum_{i=1}^N \ensuremath{\mathbb{E}}[X_i]$. Then, with probability at least $1-\delta$, for all $x$ with $\|x\|_2 \leq 1$: \begin{align*} \left\lvert x^{\top} \left( \sum_{i=1}^N \phi(s_i,a_i)\phi(s_i,a_i)^{\top}/N - \Sigma \right) x \right\rvert \leq \frac{2 \ln(8\widetilde{d}/\delta)}{ 3N } + \sqrt{ \frac{ 2\ln(8\widetilde{d}/\delta) }{N} }, \end{align*} with $\widetilde{d} = \textsc{Trace}(\Sigma)/\|\Sigma\|$ being the intrinsic dimension of $\Sigma$. \end{lemma} \begin{proof} Consider the centered random matrices $Y_i = \phi(s_i,a_i)\phi(s_i,a_i)^{\top} - \Sigma$. Note that the maximum eigenvalue of $Y_i$ is upper bounded by 1, and that $\ensuremath{\mathbb{E}}[Y_i] = 0$ for all $i$. Denote $V = \sum_{i=1}^N \ensuremath{\mathbb{E}}[ Y_i^2]$. For any $i$, consider $Y_i^2$. Denote the eigendecomposition of $Y_i$ as $U_i \Lambda_i U_i^{\top}$; then $Y_i^2 = U_i \Lambda_i^2 U_i^{\top}$. Since the maximum absolute value of the eigenvalues of $Y_i$ is bounded by 1, the maximum eigenvalue of $Y_i^2$ is bounded by 1 as well, and hence so is the maximum eigenvalue of $\ensuremath{\mathbb{E}}[Y_i^2]$. This implies that $\| V \| \leq N$. Now applying the matrix Bernstein inequality \citep{tropp2015introduction}, we have that for any $t \geq \sqrt{N} + 1/3$, \begin{align*} \Pr\left( \sigma_{\max}\left(\sum_{i=1}^N Y_i\right) \geq t \right) \leq 4\widetilde{d} \exp\left( \frac{-t^2/2}{N + t/3} \right). \end{align*} Since $\sigma_{\max}\left( \sum_{i=1}^N Y_i \right) = N \sigma_{\max}\left(\sum_{i=1}^N Y_i / N\right)$, we get that: \begin{align*} \Pr\left( \sigma_{\max}\left(\sum_{i=1}^N Y_i / N \right) \geq \epsilon \right) \leq 4\widetilde{d}\exp\left( \frac{-\epsilon^2 N / 2 }{ 1 + \epsilon / 3} \right), \end{align*} for any $\epsilon \geq \frac{1}{\sqrt{N}} + \frac{1}{3N}$.
Setting $4\widetilde{d}\exp(-\epsilon^2 N / (2(1+\epsilon/3))) = \delta$, it suffices to take: \begin{align*} \epsilon = \frac{2 \ln(4\widetilde{d}/\delta)}{ 3N } + \sqrt{ \frac{ 2\ln(4\widetilde{d}/\delta) }{N} }, \end{align*} which is trivially bigger than $1/\sqrt{N} + 1/(3N)$ as long as $\widetilde{d}\geq 1$ and $\delta \leq 1$. This shows that with probability at least $1-\delta$: \begin{align*} \sigma_{\max}\left(\sum_{i=1}^N \phi(s_i,a_i)\phi(s_i,a_i)^{\top}/ N - \Sigma \right) \leq \frac{2 \ln(4\widetilde{d}/\delta)}{ 3N } + \sqrt{ \frac{ 2\ln(4\widetilde{d}/\delta) }{N} }. \end{align*} We can repeat the same analysis for the random matrices $\{Y_i := \Sigma - \phi(s_i,a_i)\phi(s_i,a_i)^{\top}\}$ to show that with probability at least $1-\delta$: \begin{align*} \sigma_{\max}\left( \Sigma - \sum_{i=1}^N \phi(s_i,a_i)\phi(s_i,a_i)^{\top}/N \right) \leq \frac{2 \ln(4\widetilde{d}/\delta)}{ 3N } + \sqrt{ \frac{ 2\ln(4\widetilde{d}/\delta) }{N} }. \end{align*} Applying a union bound over the two events (with $\delta/2$ each), we have that with probability at least $1-\delta$, for all $x$ with $\|x\|_2 \leq 1$: \begin{align*} &x^{\top}\left(\Sigma - \sum_{i=1}^N \phi(s_i,a_i)\phi(s_i,a_i)^{\top}/N \right) x \leq \frac{2 \ln(8\widetilde{d}/\delta)}{ 3N } + \sqrt{ \frac{ 2\ln(8\widetilde{d}/\delta) }{N} }, \\ & x^{\top}\left( \sum_{i=1}^N \phi(s_i,a_i)\phi(s_i,a_i)^{\top}/N - \Sigma \right)x \leq \frac{2 \ln(8\widetilde{d}/\delta)}{ 3N } + \sqrt{ \frac{ 2\ln(8\widetilde{d}/\delta) }{N} }. \end{align*} This concludes the proof. \end{proof}
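As a numerical sanity check of \pref{lem:trace_tele}, the following numpy sketch (illustrative only; the random PSD matrices stand in for the $\Sigma_n$) verifies the log-det potential inequality:
\begin{verbatim}
import numpy as np

# Check: 2 logdet(M_N) - 2 logdet(lam*I) >= sum_n Trace(Sigma_n M_{n-1}^{-1}),
# for PSD Sigma_n with eigenvalues <= 1 and M_0 = lam*I, lam >= 1.
rng = np.random.default_rng(0)
d, N, lam = 6, 30, 1.0

M = lam * np.eye(d)
logdet0 = np.linalg.slogdet(M)[1]
trace_sum = 0.0
for _ in range(N):
    A = rng.normal(size=(d, d))
    Sigma = A @ A.T
    Sigma /= max(np.linalg.eigvalsh(Sigma).max(), 1.0)  # eigenvalues <= 1
    trace_sum += np.trace(Sigma @ np.linalg.inv(M))     # uses M_{n-1}
    M = M + Sigma

lhs = 2 * (np.linalg.slogdet(M)[1] - logdet0)
print(lhs, ">=", trace_sum, ":", lhs >= trace_sum)
\end{verbatim}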
\begin{lemma}[Concentration with the Inverse of Covariance Matrix] \label{lem:inverse_covariance} Consider a fixed $N$. Given $N$ distributions $\nu_1,\dots, \nu_N$ with $\nu_i\in\Delta(\mathcal{S}\times\mathcal{A})$, assume we draw $K$ i.i.d.\ samples from each $\nu_i$ and form $\widehat{\Sigma}^i = \sum_{j=1}^K \phi_j\phi_j^{\top}/ K$ for all $i$. Denote $\Sigma = \sum_{i=1}^N \ensuremath{\mathbb{E}}_{(s,a)\sim \nu_i} \phi(s,a)\phi(s,a)^{\top}$ and $\widehat\Sigma = \sum_{i=1}^N \widehat{\Sigma}^i$, and fix $\lambda\in (0,1]$. Setting $K = 32 N^2 \log\left(8 N \widetilde{d}/\delta\right)/\lambda^2$, with probability at least $1-\delta$, we have: \begin{align*} \frac{1}{2} x^T \left({\Sigma} + \lambda I \right)^{-1} x \leq x^T \left(\widehat{\Sigma} + \lambda I \right)^{-1} x \leq 2 x^T \left({\Sigma} + \lambda I \right)^{-1} x, \end{align*} for all $x$ with $\|x \|_2 \leq 1$. \end{lemma} \begin{proof} Denote $\Sigma^i = \ensuremath{\mathbb{E}}_{(s,a)\sim \nu_i} \phi(s,a)\phi(s,a)^{\top}$ and $\eta(K) = \frac{2 \ln(8N \widetilde{d}/\delta)}{ 3K } + \sqrt{ \frac{ 2\ln(8 N \widetilde{d}/\delta) }{K} }$. From Lemma~\ref{lemma:covariance_concentration}, we know that with probability $1-\delta$, for all $i$: \begin{align*} \Sigma^i + \eta(K)\mathbf{I} + (\lambda/N) \mathbf{I} \succeq \widehat\Sigma^i + (\lambda/N)\mathbf{I} \succeq \Sigma^i - \eta(K)\mathbf{I} + (\lambda/N)\mathbf{I}, \end{align*} which implies that: \begin{align*} \Sigma + N\eta(K) \mathbf{I} + \lambda \mathbf{I} \succeq \widehat{\Sigma} + \lambda \mathbf{I} \succeq \Sigma - N\eta(K)\mathbf{I} + \lambda\mathbf{I}, \end{align*}which further implies that: \begin{align*} \left(\Sigma - N \eta(K)\mathbf{I} + \lambda \mathbf{I} \right)^{-1} \succeq \left(\widehat\Sigma + \lambda\mathbf{I} \right)^{-1} \succeq \left(\Sigma + N \eta(K)\mathbf{I} + \lambda\mathbf{I}\right)^{-1}, \end{align*} under the condition that $N\eta(K) \leq \lambda/2$, which holds by our choice of $K$. Let $U\Lambda U^{\top}$ be the eigendecomposition of $\Sigma$.
\begin{align*} &x^{\top}\left( \widehat{\Sigma} + \lambda \mathbf{I} \right)^{-1} x - x^{\top} \left( \Sigma + \lambda \mathbf{I} \right)^{-1} x \leq x^{\top} \left(\left(\Sigma +(- N \eta(K)+ \lambda)\mathbf{I}\right)^{-1} - \left({\Sigma} + \lambda \mathbf{I}\right)^{-1}\right)x \\ & = \sum_{i} \left( (\sigma_i+\lambda - N\eta(K))^{-1} - (\sigma_i + \lambda )^{-1} \right)(x\cdot u_i)^2. \end{align*} Since $\sigma_i + \lambda \geq 2N \eta(K)$ (as $\sigma_i\geq 0$ and $N\eta(K)\leq \lambda/2$), we have $2(\sigma_i + \lambda - N\eta(K) )\geq \sigma_i + \lambda$, which implies that $(1/2) (\sigma_i + \lambda - N\eta(K))^{-1} \leq (\sigma_i + \lambda )^{-1}$. Hence, we have: \begin{align*} &x^{\top}\left( \widehat{\Sigma} + \lambda \mathbf{I} \right)^{-1} x - x^{\top} \left( \Sigma + \lambda \mathbf{I} \right)^{-1} x \leq \sum_{i} (u_i\cdot x)^2 (\sigma_i + \lambda)^{-1} = x^{\top}(\Sigma + \lambda\mathbf{I})^{-1} x. \end{align*} The analysis for the other direction is similar. This concludes the proof. \end{proof} \section{Analysis of EPOC\xspace} \label{app:analysis} In this section, we first prove \pref{thm:agnostic} under \pref{ass:transfer_bias}. We then prove \pref{thm:linear_mdp} and \pref{thm:state_aggregation} by bounding the transfer error. The following theorem states the detailed sample complexity of EPOC\xspace (a detailed version of \pref{thm:agnostic}). \begin{theorem}[Sample Complexity of EPOC\xspace] Fix $\delta\in (0,1/2)$ and $\epsilon\in (0, \frac{1}{1-\gamma})$, and consider an arbitrary comparator policy $\pi^\star$ (not necessarily an optimal policy). Set the hyperparameters as follows: \begin{align*} & \lambda = 1, \quad \beta = \frac{\epsilon^2(1-\gamma)^2}{4W^2}, \quad N = \frac{4W^2\log(A) \mathcal{I}_N(1)}{ \epsilon^3(1-\gamma)^3} \cdot \ln\left( \frac{4W^2\log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right),\\ & M = \frac{ 576 W^4 \mathcal{I}_N(1)^2 }{\epsilon^6(1-\gamma)^{10}} \cdot \ln(NT/\delta )\ln\left( \frac{2\mathcal{I}_N(1)}{\beta\epsilon(1-\gamma)}\right)^2 , \quad K = N^2 \log\left(\frac{N\widetilde{d}}{\delta}\right). \end{align*} Then, under \pref{ass:transfer_bias}, with probability at least $1-2\delta$, we have: \begin{align*} \max_{n\in [0,\dots, N-1]} V^{\pi^n} \geq V^{\pi^\star} - \frac{\sqrt{2A\epsilon_{bias}}}{1-\gamma} - 4\epsilon, \end{align*} using at most the following total number of samples: \begin{align*} \frac{c \nu W^6 \mathcal{I}_N(1)^{3} \ln^3(A) }{ \epsilon^{9}(1-\gamma)^{13} }, \end{align*} where $c$ is a universal constant and $\nu$ contains only logarithmic terms: \begin{align*} \nu & = \ln\left( \frac{4W^2\log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \left( \ln\left( { \frac{ 4W^2 \log(A) \mathcal{I}_N(1)}{ \epsilon^3(1-\gamma)^3\delta } \ln\left(\frac{4W^2\log(A)\mathcal{I}_N(1)}{ \epsilon^3(1-\gamma)^3 }\right) } \right)\ln\left( \frac{ 4W^2\mathcal{I}_N(1) }{ \epsilon^3(1-\gamma)^3} \right)\right)\\ & \qquad + \ln^3\left( \frac{4W^2 \log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \ln\left( \frac{ 4\log(A)W^2 \widetilde{d}\mathcal{I}_N(1) }{ \epsilon^3(1-\gamma)^3\delta}\ln\left( \frac{4W^2\log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \right). \end{align*} \label{thm:detailed_bound_rmax_pg_new} \end{theorem} In the rest of this section, we prove this theorem. At a high level, the proof requires the following steps: \begin{enumerate} \item Bounding the number of outer iterations $N$ needed to obtain a desired accuracy $\epsilon$.
Intuitively, this requires showing that the probability with which we can reach an \emph{unknown state} carrying a positive reward bonus is appropriately small. We carry out this bounding using arguments from the analysis of linear bandits~\citep{dani2008stochastic}. At a high level, if there is a good probability of reaching unknown states, then NPG finds them, as these states carry a high reward. But every time we find such states, the covariance matrix of the resulting policy contains, with large probability, directions not visited by the previous cover (or else the quadratic form defining the unknown states would be small). In a $d$-dimensional linear space, the number of times we can keep finding significantly new directions is roughly $O(d)$ (or, more precisely, is governed by the information gain), which allows us to bound the number of required outer episodes. \item Bounding the statistical error of the critic. This can be done by a standard regression analysis; we use a specific dimension-free result for stochastic gradient descent (SGD) to fit the critic. \item Errors from using empirical covariance matrices instead of their population counterparts have to be accounted for as well; this is done using standard matrix concentration inequalities~\citep{tropp2015introduction}. \end{enumerate} \subsection{Setup of the Augmented MDPs} \label{app:setup_mdps} To analyze NPG, we construct an augmented MDP $\mathcal{M}^n$ which is \emph{only} used in the analysis. The construction is as follows. We add an extra action, denoted $a^{\dagger}$. For any $s\not\in\mathcal{K}^n$, we add $a^\dagger$ to the set of actions available at $s$. We set rewards and transitions as follows: \begin{align} &r^n(s,a) = r(s,a) + b^n(s,a) + \one\{a = a^\dagger\}; \\ & P^n(\cdot | s,a) = P(\cdot | s,a), \forall (s,a), \quad P^n(s|s,a^\dagger) = 1, \label{eq:constructed_mdp} \end{align} where we set $r(s,a^\dagger) = b^n(s,a^\dagger) = 0$ for all $s$ (so that $r^n(s,a^\dagger) = 1$). Note that at this point we have three different kinds of MDPs that appear in the analysis: \begin{enumerate} \item the original MDP $\mathcal{M}$---the one that EPOC\xspace ultimately cares to optimize; \item the MDP with reward bonus $b^n(s,a)$---the one optimized by EPOC\xspace in each episode $n$ \emph{in the algorithm}, which we denote $\mathcal{M}_{b^n} = \{P, r(s,a) + b^n(s,a)\}$, with $P$ and $r$ being the original transition and reward from $\mathcal{M}$; \label{item:mdp_2} \item the MDP $\mathcal{M}^n$ constructed in Eq.~\pref{eq:constructed_mdp}, which is \emph{only used in the analysis, not in the algorithm}. \label{item:mdp_3} \end{enumerate} The relationship between $\mathcal{M}_{b^n}$ (item \pref{item:mdp_2}) and $\mathcal{M}^n$ (item \pref{item:mdp_3}) is that NPG (\pref{alg:npg}) runs on $\mathcal{M}_{b^n}$ (NPG is not even aware of the existence of $\mathcal{M}^n$), while we use $\mathcal{M}^n$ to analyze the performance of NPG below. We focus on a fixed comparator policy $\widetilde\pi \in \Pi$. We denote by $\widetilde\pi^n$ the policy such that $\widetilde\pi^n(\cdot |s) = \widetilde{\pi}(\cdot |s)$ for $s\in\mathcal{K}^n$, and $\widetilde\pi^n(a^\dagger |s ) = 1$ for $s\not\in\mathcal{K}^n$. This means that the comparator policy $\widetilde\pi^n$ self-loops at any state $s\not\in\mathcal{K}^n$ and collects the maximum reward. We denote $\widetilde{d}_{\mathcal{M}^n}$ as the state-action distribution of $\widetilde{\pi}^n$ under $\mathcal{M}^n$.
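To make Eq.~\pref{eq:constructed_mdp} concrete, the following numpy sketch builds the augmented tabular MDP $\mathcal{M}^n$ from $(P, r, b^n, \mathcal{K}^n)$; this is purely an illustration of the analysis device (it plays no role in the algorithm), and the array layout is an assumption of the sketch:
\begin{verbatim}
import numpy as np

def augment_mdp(P, r, b, known):
    """Analysis-only augmented MDP M^n from Eq. (constructed_mdp).

    P: (S, A, S) transitions, r: (S, A) rewards, b: (S, A) bonuses,
    known: boolean mask over states (the set K^n).  The extra action
    a^dagger (index A) is available only at unknown states, where it
    self-loops and pays reward 1; elsewhere it is masked out."""
    S, A = r.shape
    P_aug = np.zeros((S, A + 1, S))
    r_aug = np.zeros((S, A + 1))
    avail = np.ones((S, A + 1), dtype=bool)

    P_aug[:, :A, :] = P          # P^n(.|s,a) = P(.|s,a) for original actions
    r_aug[:, :A] = r + b         # r^n(s,a) = r(s,a) + b^n(s,a)
    for s in range(S):
        if known[s]:
            avail[s, A] = False  # a^dagger is not added at known states
        else:
            P_aug[s, A, s] = 1.0 # self-loop: P^n(s|s,a^dagger) = 1
            r_aug[s, A] = 1.0    # indicator term 1{a = a^dagger}
    return P_aug, r_aug, avail
\end{verbatim}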
We denote by $V^{\pi}_{\mathcal{M}^n}$, $Q^\pi_{\mathcal{M}^n}$, and $A^\pi_{\mathcal{M}^n}$ the value, Q, and advantage functions of $\pi$ under $\mathcal{M}^n$. \begin{remark} \label{remark:relationship_two_mdps} Note that policies in our policy class $\Pi$ never pick $a^\dagger$. Hence for any policy $\pi\in\Pi$, we have $V^{\pi}_{\mathcal{M}^n}(s) = V^{\pi}_{b^n}(s)$ for all $s$, and $Q^{\pi}_{\mathcal{M}^n}(s,a) = Q^{\pi}_{b^n}(s,a)$ and $A^{\pi}_{\mathcal{M}^n}(s,a) = A^{\pi}_{b^n}(s,a)$ for all $s$ and all $a\neq a^\dagger$, where we write $Q^{\pi}_{b^n}$ as shorthand for $Q^{\pi}_{\mathcal{M}_{b^n}}$ (and similarly $A^{\pi}_{b^n}$ for $A^{\pi}_{\mathcal{M}_{b^n}}$). This fact is important, as our algorithm runs on $\mathcal{M}_{b^n}$ while the progress of the algorithm is tracked under $\mathcal{M}^n$. \end{remark} \subsection{Proof of \pref{thm:detailed_bound_rmax_pg_new}} Recall the performance difference lemma \citep{kakade2003sample}: for any policy $\pi$, we have \begin{align*} V^{\widetilde\pi^n}_{\mathcal{M}^n} - V^{\pi}_{\mathcal{M}^n} = \frac{1}{1-\gamma} \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}} \left[A^{\pi}_{\mathcal{M}^n}(s,a)\right]. \end{align*} For notational simplicity, given a policy $\pi$ and a state $s$, we write $\pi_s$ in place of $\pi(\cdot | s)$. The next lemma quantifies the progress made by EPOC\xspace over $N$ episodes. \begin{lemma}[NPG Progress] Setting $\eta = \sqrt{\frac{\log(A)}{ W^2 N }}$, \pref{alg:epoc} outputs a sequence of policies $\{\pi^i\}_{i=0}^{N-1}$ such that: \begin{align*} &\frac{1}{N}\sum_{n=0}^{N-1} \left(V^{\widetilde{\pi}^n}_{\mathcal{M}^n} - V^{n}_{\mathcal{M}^n} \right)\\ & \leq \frac{1}{1-\gamma}\left(2W\sqrt{\frac{\log(A)}{N}} + \frac{1}{N}\sum_{n=0}^{N-1} \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^n_{\mathcal{M}^n}(s,a) - \widehat{A}^n_{\mathcal{M}^n}(s,a) \right) \one\{s\in\mathcal{K}^n\} \right)\right), \end{align*}where $\widehat{A}^n_{\mathcal{M}^n}(s,a) = \theta^n\cdot\left( \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^n_s}\phi(s,a') \right)$. \label{lem:npg_construction_one} \end{lemma} \begin{proof} First consider any policy $\pi$ which, at any $s\not\in\mathcal{K}^n$, uniformly picks actions among $\{a\in\mathcal{A}: (s,a)\not\in\mathcal{K}^n\}$. Via the performance difference lemma, we have: \begin{align*} V^{\widetilde\pi^n}_{\mathcal{M}^n} - V^{\pi}_{\mathcal{M}^n} = \frac{1}{1-\gamma} \sum_{(s,a)} \widetilde{d}_{\mathcal{M}^n}(s,a) A_{\mathcal{M}^n}^{\pi}(s,a) \leq \frac{1}{1-\gamma} \sum_{(s,a)} \widetilde{d}_{\mathcal{M}^n}(s,a) A_{\mathcal{M}^n}^{\pi}(s,a) \one\{s \in \mathcal{K}^n\}, \end{align*} where the last inequality comes from the fact that $A^{\pi}_{\mathcal{M}^n}(s,a)\one\{s\not\in\mathcal{K}^n\} \leq 0$. To see this, first note that for any $s\not\in\mathcal{K}^n$, $\widetilde\pi^n$ deterministically picks $a^\dagger$, and $Q^\pi_{\mathcal{M}^n}(s,a^\dagger) = 1 + \gamma V^{\pi}_{\mathcal{M}^n}(s)$, as taking $a^\dagger$ leads the agent back to $s$. Second, since $\pi$ uniformly picks actions among $\{a: (s,a)\not\in\mathcal{K}^n\}$, we have $V^{\pi}_{\mathcal{M}^n} \geq 1/(1-\gamma)$, as the reward bonus $b^n(s,a)$ at $(s,a)\not\in\mathcal{K}^n$ is $1/(1-\gamma)$. Hence, we have \begin{align*} A^{\pi}_{\mathcal{M}^n}(s,a^\dagger) = Q^\pi_{\mathcal{M}^n}(s,a^\dagger) - V^\pi_{\mathcal{M}^n}(s) = 1 - (1-\gamma) V^{\pi}_{\mathcal{M}^n}(s) \leq 0, \quad \forall s\not\in\mathcal{K}^n.
\end{align*} Recall that in \pref{alg:npg}, $\pi^n$ chooses actions uniformly at random among $\{a: (s,a)\not\in\mathcal{K}^n\}$ for $s\not\in\mathcal{K}^n$; thus we have: \begin{align*} (1-\gamma)\left(V^{\widetilde\pi^n}_{\mathcal{M}^n} - V^{n}_{\mathcal{M}^n}\right) \leq \sum_{(s,a)} \widetilde{d}_{\mathcal{M}^n}(s,a) A_{\mathcal{M}^n}^{n}(s,a) \one\{s \in \mathcal{K}^n\}. \end{align*} Recalling the update rule of NPG, we have: \begin{align*} {\pi}^{n+1}(\cdot |s) \propto \pi^n(\cdot | s) \exp\left(\eta \left(\theta^n\cdot \bar\phi^n(s,\cdot)\right)\one\{s\in\mathcal{K}^n\} \right), \forall s, \end{align*} where the centered feature is defined as $\bar\phi^n(s,a) = \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^n(\cdot |s)} \phi(s,a')$. Denote the normalizer $z^n = \sum_{a}\pi^n(a | s) \exp\left(\eta \left(\theta^n\cdot \bar\phi^n(s,a)\right) \right)$ for any $s\in\mathcal{K}^n$. We have that for any $s\in\mathcal{K}^n$: \begin{align*} \ensuremath{\mathrm{KL}}(\widetilde\pi^n_s, \pi^{n+1}_s) - \ensuremath{\mathrm{KL}}(\widetilde\pi^n_s, \pi^n_s) = \ensuremath{\mathbb{E}}_{a\sim \widetilde\pi^n_s } \left[ -\eta \widehat{A}^n_{\mathcal{M}^n}(s,a)+ \log(z^n) \right], \end{align*} where recall that we use $\pi_s$ as a shorthand for the vector of probabilities $\pi(\cdot|s)$ over actions, given the state $s$. Since we only consider $s\in\Kcal^n$, for which $\widetilde\pi_s = \widetilde\pi^n_s$, the above equality simplifies to: \begin{align*} \ensuremath{\mathrm{KL}}(\widetilde\pi_s, \pi^{n+1}_s) - \ensuremath{\mathrm{KL}}(\widetilde\pi_s, \pi^n_s) = \ensuremath{\mathbb{E}}_{a\sim \widetilde\pi_s } \left[ -\eta \widehat{A}^n_{\mathcal{M}^n}(s,a)+ \log(z^n) \right],\quad \forall s\in\Kcal^n. \end{align*} For $\log(z^n)$: using the assumption that $\eta \leq 1/W$, we have $\eta \widehat{A}^n_{\mathcal{M}^n}(s,a) \leq 1$, which allows us to use the inequality $\exp(x) \leq 1 + x + x^2$ for any $x\leq 1$ and leads to the following inequality: \begin{align*} &\log(z^n) = \log\left( \sum_{a} \pi^n(a|s) \exp(\eta\widehat{A}^n_{\mathcal{M}^n}(s,a)) \right) \\ &\leq \log\left( \sum_{a}\pi^n(a|s) \left( 1 + \eta\widehat{A}^n_{\mathcal{M}^n}(s,a) + \eta^2 \left(\widehat{A}^n_{\mathcal{M}^n}(s,a)\right)^2 \right) \right) \leq \log\left( 1 + \eta^2 W^2 \right) \leq \eta^2 W^2, \end{align*} where the second inequality uses $\ensuremath{\mathbb{E}}_{a\sim \pi^n_s}\widehat{A}^n_{\mathcal{M}^n}(s,a) = 0$ (the features are centered under $\pi^n_s$) and $(\widehat{A}^n_{\mathcal{M}^n}(s,a))^2 \leq W^2$. Hence, we have: \begin{align*} \ensuremath{\mathrm{KL}}(\widetilde\pi_s, \pi^{n+1}_s) - \ensuremath{\mathrm{KL}}(\widetilde\pi_s, \pi^n_s) \leq -\eta \ensuremath{\mathbb{E}}_{a\sim \widetilde\pi_s} \widehat{A}^n_{\mathcal{M}^n}(s,a)+ \eta^2 W^2, \quad \forall s\in\Kcal^n. \end{align*} Summing across rounds and telescoping, we get: \begin{align*} \sum_{n=1}^N \ensuremath{\mathbb{E}}_{a\sim \widetilde\pi^n_s}\widehat{A}^n_{\mathcal{M}^n}(s,a)\one\{s\in\mathcal{K}^n\} \leq \frac{1}{\eta}\ensuremath{\mathrm{KL}}(\widetilde\pi_s, \pi^1_s) + \eta N W^2 \leq \frac{\log(A)}{\eta} + \eta N W^2, \end{align*} where we use the fact that for $s\in\Kcal^n$, $\widetilde\pi^n_s = \widetilde\pi_s$. Taking the expectation over $s\sim \widetilde{d}_{\mathcal{M}^n}$, we have: \begin{align*} \sum_{n=1}^N \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}} \left[\widehat{A}^n_{\mathcal{M}^n}(s,a)\one\{s\in\mathcal{K}^n\}\right] \leq \frac{\log(A)}{\eta} + \eta N W^2 \leq 2W \sqrt{\log(A) N }.
\end{align*} Now we apply the performance difference lemma to the left-hand side of the above inequality, which leads to: \begin{align*} &(1-\gamma)\sum_{n=1}^N \left( V^{\widetilde\pi^n}_{\mathcal{M}^n} - V_{\mathcal{M}^n}^{n} \right)\\ & \leq \sum_{n=1}^N \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}} \left[\widehat{A}^n_{\mathcal{M}^n}(s,a) \one\{s\in\mathcal{K}^n\} \right]+ \sum_{n=1}^N \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^n_{\mathcal{M}^n}(s,a) - \widehat{A}^n_{\mathcal{M}^n}(s,a) \right)\one\{s\in\mathcal{K}^n\} \right)\\ & \leq 2W\sqrt{\log(A) N} + \sum_{n=1}^N \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^n_{\mathcal{M}^n}(s,a) - \widehat{A}^n_{\mathcal{M}^n}(s,a) \right)\one\{s\in\mathcal{K}^n\} \right), \end{align*} which concludes the proof. \end{proof} We need the following lemma to relate the probability of a known state being visited by $\widetilde{\pi}^n$ under $\mathcal{M}^n$ to the probability of the same state being visited by $\widetilde{\pi}$ under the original dynamics. Intuitively, as $\widetilde{\pi}^n$ always picks $a^\dagger$ outside $\mathcal{K}^n$, it should have a smaller probability of visiting the states inside $\mathcal{K}^n$ (once $\widetilde{\pi}^n$ escapes, it is absorbed and never returns to $\mathcal{K}^n$). The following lemma formally states this. \begin{lemma} \label{lem:prob_absorb} For any state $s\in\mathcal{K}^n$, we have: \begin{align*} \widetilde{d}_{\mathcal{M}^n}(s,a) \leq d^{\widetilde{\pi}}(s,a), \quad \forall a \in \mathcal{A}. \end{align*} \end{lemma} \begin{proof} We prove this by induction. Recall that $\widetilde{d}_{\mathcal{M}^n}$ is the state-action distribution of $\widetilde\pi^n$ under $\mathcal{M}^n$, and $d^{\widetilde\pi}$ is the state-action distribution of $\widetilde\pi$ under both $\mathcal{M}_{b^n}$ and $\mathcal{M}$, as they share the same dynamics. Starting at $h = 0$, we have: \begin{align*} \widetilde{d}_{\mathcal{M}^n,0}(s_0, a) = d^{\widetilde{\pi}}_{0}(s_0,a), \end{align*} as $s_0$ is fixed, $s_0\in\mathcal{K}^n$, and $\widetilde{\pi}^n(\cdot | s_0) = \widetilde\pi(\cdot | s_0)$. Now assume that at time step $h$, for all $s\in \mathcal{K}^n$: \begin{align*} \widetilde{d}_{\mathcal{M}^n, h}(s,a) \leq d^{\widetilde{\pi}}_{h}(s,a), \quad \forall a \in \mathcal{A}. \end{align*} We proceed to prove that this also holds at $h+1$. By definition, we have that for $s\in\mathcal{K}^n$, \begin{align*} &\widetilde{d}_{\mathcal{M}^n, h+1 }(s) = \sum_{s',a'} \widetilde{d}_{\mathcal{M}^n, h}(s',a') P_{\mathcal{M}^n}(s | s',a') \\ &= \sum_{s',a'} \one\{s'\in\mathcal{K}^n\} \widetilde{d}_{\mathcal{M}^n, h}(s',a') P_{\mathcal{M}^n}(s | s',a') = \sum_{s',a'} \one\{s'\in\mathcal{K}^n\} \widetilde{d}_{\mathcal{M}^n, h}(s',a') P(s | s',a'), \end{align*} since for $s'\not\in\mathcal{K}^n$, $\widetilde\pi^n$ deterministically picks $a^\dagger$ (i.e., $a' = a^\dagger$) and $P_{\mathcal{M}^n}(s | s' ,a^\dagger) = 0$ for $s \neq s'$.
On the other hand, for $d^{\widetilde{\pi}}_{h+1}$, we have that for $s\in\mathcal{K}^n$, \begin{align*} &d^{\widetilde{\pi}}_{h+1}(s) = \sum_{s',a'}d^{\widetilde{\pi}}_{h}(s',a') P(s|s',a') \\ &=\sum_{s',a'}\one\{s'\in\mathcal{K}^n\} d^{\widetilde{\pi}}_{h}(s',a') P(s|s',a') + \sum_{s',a'}\one\{s'\not\in\mathcal{K}^n\}d^{\widetilde{\pi}}_{h}(s',a') P(s|s',a') \\ & \geq \sum_{s',a'}\one\{s'\in\mathcal{K}^n\} d^{\widetilde{\pi}}_{h}(s',a') P(s|s',a') \geq \sum_{s',a'}\one\{s'\in\mathcal{K}^n\} \widetilde{d}_{\mathcal{M}^n,h}(s',a') P(s|s',a') = \widetilde{d}_{\mathcal{M}^n, h+1}(s). \end{align*} Using the fact that $\widetilde\pi^n(\cdot|s) = \widetilde\pi(\cdot | s)$ for $s\in\mathcal{K}^n$, we conclude that the inductive hypothesis holds at $h+1$ as well, and thus for all $h$. Using the definition of the average state-action distribution, we conclude the proof. \end{proof} Combined with \pref{lem:npg_construction_one}, the above lemma implies that if we have a good critic approximation $\widehat{A}^n$ in each episode $n$, then NPG learns well across the $N$ MDPs. The following lemma bounds the advantage prediction error. \begin{lemma}[Variance and Bias Tradeoff] Assume that at episode $n$ we have $\phi(s,a)^{\top}\left(\Sigma_{\mix}^{n}\right)^{-1}\phi(s,a) \leq \beta\in\mathbb{R}^+$ for all $(s,a)\in\mathcal{K}^n$. Denote $\theta^n_{\star}$ as one of the best critic fits, i.e., \begin{align*} \theta^n_\star \in \argmin_{\|\theta\|\leq W} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho_{\mix}^n}\left( Q^n(s,a; r+b^n) - \theta\cdot \phi(s,a) \right)^2. \end{align*} Assume the following two conditions hold for all $n\in \{0,\dots, N-1\}$: \begin{enumerate} \item $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho_{\mix}^n}\left( Q^n(s,a; r+b^n) - \theta^n\cdot \phi(s,a) \right)^2 \leq \min_{\theta:\|\theta\|\leq W} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho_{\mix}^n}\left( Q^n(s,a; r+b^n) - \theta\cdot \phi(s,a) \right)^2 + \epsilon_{stat}$; \item $\ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi}, a\sim U(\mathcal{A})} \left( Q^n(s,a;r+b^n) - \theta^n_{\star}\cdot {\phi}(s,a) \right)^2 \leq \epsilon_{bias}$; \end{enumerate} for some $\epsilon_{bias}, \epsilon_{stat} \in\mathbb{R}^+$. Then, for all $n \in \{0, \dots, N-1\}$: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^n_{\mathcal{M}^n}(s,a) - \widehat{A}^n_{\mathcal{M}^n}(s,a) \right) \one\{s\in\mathcal{K}^n\}\leq 2\sqrt{A\epsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta n \epsilon_{stat} }. \end{align*} \label{lem:variance_bias_n_new} \end{lemma} Note that the second condition in the lemma above is the transfer error assumption (\pref{ass:transfer_bias}), where we transfer one of the best on-policy fits to the fixed state-action distribution $d^{\widetilde\pi}\, U(\mathcal{A})$. The first condition is the usual generalization error from statistical learning; i.e., $\epsilon_{stat}$ scales on the order of $1/\sqrt{M}$, where $M$ is the number of samples used for the linear regression. \begin{proof} We first show that under condition 1 above, $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_{\mix}}\left( \theta^n_{\star}\cdot \phi(s,a) - \theta^n\cdot\phi(s,a) \right)^2$ is bounded by $\epsilon_{stat}$. For notational simplicity, we write $Q^n_{b^n}(s,a) := Q^n(s,a; r+ b^n)$.
We can verify that: \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix} \left( Q^n_{b^n} (s,a) - \theta^n\cdot \phi(s,a) \right)^2 - \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( Q^n_{b^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2\\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( \theta^n_\star\cdot \phi(s,a) - \theta^n\cdot\phi(s,a) \right)^2 + 2\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_{\mix}} \left( Q^n_{b^n}(s,a) - \theta^n_\star\cdot\phi(s,a) \right)\phi(s,a)^{\top}\left( \theta^n_\star - \theta^n \right). \end{align*} Since $\theta^n_\star$ is one of the minimizers of the constrained square loss $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}(Q^n_{b^n}(s,a) - \theta\cdot\phi(s,a))^2$, first-order optimality gives: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( Q^n_{b^n}(s,a) - \theta^n_\star\cdot\phi(s,a) \right) (-\phi(s,a)^{\top})\left( \theta - \theta^n_\star \right)\geq 0, \end{align*} for any $\|\theta\|\leq W$, which implies that: \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( \theta^n_\star\cdot \phi(s,a) - \theta^n \cdot\phi(s,a) \right)^2 \\ &\leq \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix} \left( Q^n_{b^n} (s,a) - \theta^n\cdot \phi(s,a) \right)^2 - \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( Q^n_{b^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2 \leq \epsilon_{stat}. \end{align*} Recall that $\Sigma^n_{\mix} = \sum_{i=1}^n \ensuremath{\mathbb{E}}_{(s,a)\sim d^i}\phi(s,a)\phi(s,a)^{\top} + \lambda \mathbf{I} = n \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\phi(s,a)\phi(s,a)^{\top} + \lambda/n \mathbf{I}\right)$, and denote $\bar{\Sigma}_{\mix}^n = \Sigma_{\mix}^n / n$. We have: \begin{align*} \left(\theta^n_\star - \theta^n \right)^{\top} \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\phi(s,a)\phi(s,a)^{\top} + \lambda/n \mathbf{I} \right) (\theta^n_\star - \theta^n) \leq \epsilon_{stat} + \frac{\lambda}{n} W^2. \end{align*} Hence for any $(s,a)\in\mathcal{K}^n$, we must have: \begin{align} \left\lvert \phi(s,a)^{\top}\left( \theta^n_\star - \theta^n \right)\right\rvert \leq \| \phi(s,a) \|_{(\Sigma_{\mix}^n)^{-1}} \| \theta^n_\star - \theta^n \|_{\Sigma_\mix^n} \leq \sqrt{ \beta n\epsilon_{stat} + \beta \lambda W^2 }. \label{eq:point_wise_est_new} \end{align} Now we bound $\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^n_{\mathcal{M}^n}(s,a) - \widehat{A}^n_{\mathcal{M}^n}(s,a)\right)\one\{s\in\mathcal{K}^n\}$ as follows, writing $\bar\phi^n(s,a) := \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^n_s}\phi(s,a')$ for the centered feature: \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^n_{\mathcal{M}^n}(s,a) - \widehat{A}^n_{\mathcal{M}^n}(s,a)\right)\one\{s\in\mathcal{K}^n\} \\ &= \underbrace{\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^n_{\mathcal{M}^n}(s,a) - \theta^n_\star\cdot\bar\phi^n (s,a) \right)\one\{s\in\mathcal{K}^n\}}_{\text{term A}} \\ & \qquad + \underbrace{\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( \theta^n_\star \cdot\bar\phi^n(s,a) - \theta^n\cdot\bar\phi^n(s,a) \right)\one\{s\in\mathcal{K}^n\}}_{\text{term B}}. \end{align*} We first bound term A above.
\begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^n_{\mathcal{M}^n}(s,a) - \theta^n_\star\cdot\bar\phi^n (s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left(Q^n_{\mathcal{M}^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ &\qquad + \ensuremath{\mathbb{E}}_{s\sim \widetilde{d}_{\mathcal{M}^n},a\sim \pi^n_s}\left(-Q^n_{\mathcal{M}^n}(s,a) + \theta^n_\star\cdot \phi(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & \leq \sqrt{ \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left(Q^n_{\mathcal{M}^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} } \\ & \qquad + \sqrt{\ensuremath{\mathbb{E}}_{s\sim \widetilde{d}_{\mathcal{M}^n},a\sim \pi^n_s}\left(Q^n_{\mathcal{M}^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} }\\ & \leq \sqrt{ \ensuremath{\mathbb{E}}_{(s,a)\sim {d}^{\widetilde\pi}}\left(Q^n_{\mathcal{M}^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} } \\ & \qquad + \sqrt{\ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi},a\sim \pi^n_s}\left(Q^n_{\mathcal{M}^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} }\\ & = \sqrt{ \ensuremath{\mathbb{E}}_{(s,a)\sim {d}^{\widetilde\pi}}\left(Q^n_{b^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} } \\ & \qquad + \sqrt{\ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi},a\sim \pi^n_s}\left(Q^n_{b^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} }\\ & \leq \sqrt{ \ensuremath{\mathbb{E}}_{(s,a)\sim {d}^{\widetilde\pi}}\left(Q^n_{b^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2 } + \sqrt{\ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi},a\sim \pi^n_s}\left(Q^n_{b^n}(s,a) - \theta^n_\star\cdot \phi(s,a) \right)^2 }\\ & \leq 2 \sqrt{A \epsilon_{bias}}, \end{align*} where the first inequality uses the Cauchy--Schwarz inequality, the second inequality uses \pref{lem:prob_absorb_new} for $s\in\mathcal{K}^n$, the second equality uses \pref{remark:relationship_two_mdps}, namely that for any $s\in\mathcal{K}^n$ we have $Q^n_{\mathcal{M}^n}(s,a) = Q^n_{b^n}(s,a)$ as $\pi^n$ never picks $a^\dagger$, and the last inequality uses importance weighting from the action distributions at hand to the uniform distribution $U(\mathcal{A})$ (paying a factor of $A$) together with condition 2. Now we bound term B above. We have: \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( \theta^n_\star \cdot\bar\phi^n(s,a) - \theta^n\cdot\bar\phi^n(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( \theta^n_\star\cdot \phi(s,a) - \theta^n\cdot\phi(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & \qquad - \ensuremath{\mathbb{E}}_{s\sim \widetilde{d}_{\mathcal{M}^n}}\ensuremath{\mathbb{E}}_{a\sim \pi^n} \one\{s\in\mathcal{K}^n\}\left( \theta^n_\star\cdot \phi(s,a) - \theta^n\cdot\phi(s,a) \right) \leq 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta n \epsilon_{stat} }, \end{align*} where we use the point-wise estimation guarantee from inequality \pref{eq:point_wise_est_new}. Combining terms A and B concludes the proof. \end{proof} To analyze the performance of EPOC\xspace, we need to link $\mathcal{M}^n$ and the real MDP $\mathcal{M}$; this comparison is carried out in the lemma following the sketch below.
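For concreteness, the following is a minimal tabular sketch of the augmented-MDP construction underlying this comparison (our own illustration under simplifying assumptions: finite state and action spaces, and $a^\dagger$ exposed at every state even though the analysis only uses it outside $\mathcal{K}^n$; this is not part of EPOC\xspace's implementation):
\begin{verbatim}
# A tabular sketch (ours, not the paper's code) of the augmented MDP M^n:
# unknown pairs receive the bonus 1/(1-gamma), and an extra action a_dagger
# self-loops with reward 1.  P: (S,A,S) transitions; r: (S,A) rewards;
# known: (S,A) boolean mask for the known set K^n.
import numpy as np

def augment_mdp(P, r, known, gamma):
    S, A = r.shape
    bonus = (~known).astype(float) / (1.0 - gamma)  # b^n(s,a)
    P_aug = np.zeros((S, A + 1, S))                 # last action = a_dagger
    r_aug = np.zeros((S, A + 1))
    P_aug[:, :A, :] = P                             # dynamics unchanged otherwise
    r_aug[:, :A] = r + bonus                        # r^n = r + b^n
    P_aug[np.arange(S), A, np.arange(S)] = 1.0      # a_dagger self-loops at s
    r_aug[:, A] = 1.0                               # and pays maximum reward 1
    return P_aug, r_aug, bonus
\end{verbatim}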
Recall that we consider a fixed comparator policy $\widetilde\pi$, and at episode $n$ we denote $\widetilde\pi^n$ as a policy such that $\widetilde\pi^n(\cdot |s) = \widetilde\pi(\cdot |s)$ for $s\in\mathcal{K}^n$ and $\widetilde\pi^n(a^\dagger |s) = 1$ for $s\not\in\mathcal{K}^n$. \begin{lemma}[Policy Performances on $\mathcal{M}^n$ and $\mathcal{M}$] At any episode $n\in \{0,\dots, N-1\}$, we have that for $\widetilde\pi^n$ and $\pi^n$: \begin{align*} & V^{\widetilde\pi^n}_{\mathcal{M}^n} \geq V^{\widetilde\pi}_{\mathcal{M}},\\ & V^{n}_{\mathcal{M}} \geq V^{n}_{\mathcal{M}^n} - \ensuremath{\mathbb{E}}_{(s,a)\sim d^n} \left[b^n(s,a)\right] \\ &\qquad = V^{n}_{\mathcal{M}^n}- \frac{1}{1-\gamma}\left(\sum_{(s,a)\not\in\mathcal{K}^n} d^n(s,a)\right) . \end{align*} \label{lem:perf_absorb_new} \end{lemma} \begin{proof} Note that when running $\widetilde\pi^n$ under $\mathcal{M}^n$, once $\widetilde\pi^n$ visits some $s\not\in\mathcal{K}^n$, it is absorbed at $s$ and loops there, receiving the maximum reward $1$ afterwards, while $\widetilde\pi$ receives immediate reward no more than $1$ and in $\mathcal{M}$ there is no reward bonus; this gives the first inequality. Recall that $\pi^n$ never takes $a^\dagger$, hence $d^n(s,a) = d^{n}_{\mathcal{M}^n}(s,a)$ for all $(s,a)$. Recalling that the reward bonus is defined as $\frac{1}{1-\gamma}\one\{(s,a)\not\in\mathcal{K}^n\}$ and using the definition of $b^n(s,a)$ concludes the proof. \end{proof} \begin{lemma}[Potential Function Argument] Consider the sequence of policies $\{\pi^n\}_{n=0}^N$ generated from \pref{alg:epoc}. We have: \begin{align*} &\sum_{n=0}^N V^{{n}}_{\mathcal{M}} \geq \sum_{n=0}^N V^{{n}}_{\mathcal{M}^n} - \sum_{n=0}^N \ensuremath{\mathbb{E}}_{(s,a)\sim d^n}\left[ b^n(s,a) \right] \\ & \qquad \geq \sum_{n=0}^N V^{n}_{\mathcal{M}^n} - \frac{\mathcal{I}_N(\lambda)}{\beta(1-\gamma)}. \end{align*} \label{lem:potential_argument_n_new} \end{lemma} \begin{proof} Recall that $\rho^n_{\mix} = \frac{1}{n+1}\sum_{i=0}^{n} d^i$. Denote the eigen-decomposition of $\Sigma_{\mix}^n$ as $U\Lambda U^{\top}$, and let $\Sigma^n = \ensuremath{\mathbb{E}}_{(s,a)\sim d^n}\phi\phi^{\top}$. We have: \begin{align*} &\mathrm{tr}\left( \Sigma^{n} \left(\Sigma_{\mix}^n\right)^{-1} \right) = \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n}}\,\mathrm{tr}\left( \phi(s,a)\phi(s,a)^{\top}\left( \Sigma_\mix^n \right)^{-1} \right)\\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n}}\phi(s,a)^{\top} \left(\Sigma_{\mix}^n\right)^{-1} \phi(s,a) \\ & \geq \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n}} \left[ \one\{(s,a)\not\in\mathcal{K}^n\}\phi(s,a)^{\top} \left(\Sigma_\mix^n \right)^{-1}\phi(s,a) \right] \geq \beta \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n}} \one\{(s,a)\not\in\mathcal{K}^n\} \\ & = \beta (1-\gamma) \ensuremath{\mathbb{E}}_{(s,a)\sim d^n} \left[ b^n(s,a) \right]. \end{align*} Together with the second result in \pref{lem:perf_absorb_new}, this implies that \begin{align*} V^{{n}}_{\mathcal{M}^n} - V^{{n}} \leq \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n}}\left[ b^n(s,a) \right] \leq \frac{\mathrm{tr}\left( \Sigma^{n} \left(\Sigma^n_{\mix} \right)^{-1} \right)}{\beta (1-\gamma)} .
\end{align*} Summing over $n$ and invoking~\pref{lem:trace_tele}, we have: \begin{align*} \sum_{n=0}^N \left(V^{{n}}_{\mathcal{M}^n} - V^{{n}}\right) \leq \sum_{n=0}^N \ensuremath{\mathbb{E}}_{(s,a)\sim d^n}\left[ b^n(s,a) \right] \leq \frac{ \log\left(\det(\Sigma^N_\mix)/\det(\lambda I )\right) }{\beta(1-\gamma)} \leq \frac{\mathcal{I}_N(\lambda)}{ \beta(1-\gamma)}, \end{align*} where we use the definition of the maximum information gain $\mathcal{I}_N(\lambda)$. \end{proof} Using the above lemma, we can now transfer the regret computed under the sequence of models $\{\mathcal{M}^n\}$ to regret under $\mathcal{M}$. Recall that $V^{\pi}$ denotes $V^{\pi}(s_0)$ and $V^n$ is short for $V^{\pi^n}$. \begin{lemma}Assume the two conditions in~\pref{lem:variance_bias_n_new} hold. For the sequence of policies $\{\pi^n\}_{n=0}^{N-1}$, we have: \begin{align*} \max_{n\in \{0,\dots,N-1\}} V^{n} & \geq V^{\widetilde\pi} - \frac{1}{1-\gamma}\left(2W\sqrt{\frac{\log(A)}{N}} + 2\sqrt{A\epsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{\beta N \epsilon_{stat}}\right) \\ &\qquad - \frac{1}{N} \sum_{n=0}^N \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n}}\left[b^n(s,a) \right]\\ & \geq V^{\widetilde\pi} - \frac{1}{1-\gamma}\left(2W\sqrt{\frac{\log(A)}{N}} + 2\sqrt{A\epsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{\beta N \epsilon_{stat}} +\frac{2\mathcal{I}_N(\lambda)}{N\beta}\right). \end{align*} \label{lem:regret_rmax_pg_new} \end{lemma} \begin{proof} Combining \pref{lem:npg_construction_one} and \pref{lem:variance_bias_n_new}, we have: \begin{align*} \frac{1}{N} \sum_{n=0}^{N-1} V^n_{\mathcal{M}^n} \geq \frac{1}{N}\sum_{n=0}^{N-1} V^{\widetilde\pi^n}_{\mathcal{M}^n} - \frac{1}{1-\gamma}\left(2W\sqrt{\frac{\log(A)}{N}} + 2\sqrt{A \epsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta N \epsilon_{stat} } \right). \end{align*} Then using~\pref{lem:perf_absorb_new} and \pref{lem:potential_argument_n_new}, we have: \begin{align*} \frac{1}{N} \sum_{n=0}^{N-1} V^{n} \geq V^{\widetilde\pi} - \frac{1}{1-\gamma}\left(2W\sqrt{\frac{\log(A)}{N}} + 2\sqrt{A\epsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta N \epsilon_{stat} } + \frac{\mathcal{I}_N(\lambda)}{N\beta} \right) , \end{align*} which, since the maximum dominates the average, concludes the proof. \end{proof} The following theorem shows that by setting hyperparameters properly, we can guarantee to learn a near-optimal policy. \begin{theorem}Assume the conditions in~\pref{lem:variance_bias_n_new} hold. Fix $\epsilon \in (0,1/(1-\gamma))$. Setting hyperparameters as follows: \begin{align*} &\lambda = 1, \quad \beta = \frac{\epsilon^2(1-\gamma)^2}{4W^2}, \quad N = \frac{4W^2\log(A)\mathcal{I}_N(1)}{ \epsilon^3(1-\gamma)^3} \ln\left( \frac{4W^2\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right),\\ &\epsilon_{stat} = \frac{ \epsilon^3 (1-\gamma)^3 }{\log(A) \mathcal{I}_N(1)} \ln^{-1}\left( \frac{4W^2\mathcal{I}_N(1)}{ \epsilon^3(1-\gamma)^3 } \right), \end{align*} we have: \begin{align*} \max_{n\in [N]} V^n \geq V^{\widetilde\pi} - \frac{2\sqrt{A\epsilon_{bias}}}{1-\gamma} - 4\epsilon. \end{align*} \label{thm:regret_stat_error_new} \end{theorem} \begin{proof} The theorem can be verified by substituting the values of the hyperparameters into~\pref{lem:regret_rmax_pg_new}. \end{proof} The above theorem indicates that we need to control the statistical error $\epsilon_{stat}$ from linear regression to be on the order of $\widetilde{O}\left(\epsilon^{3}(1-\gamma)^3\right)$. Recall that $M$ is the total number of samples used for each linear regression.
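To make the required $M$ concrete, the following back-of-the-envelope calculation (ours; it suppresses the $\ln$ factors attached to $N$ and $\epsilon_{stat}$) equates a $1/\sqrt{M}$-type regression rate with the target accuracy for $\epsilon_{stat}$: \begin{align*} \sqrt{\frac{9W^4\log(N/\delta)}{(1-\gamma)^4 M}} \approx \frac{\epsilon^3(1-\gamma)^3}{\log(A)\,\mathcal{I}_N(1)} \quad\Longrightarrow\quad M \approx \frac{9W^4\log^2(A)\,\mathcal{I}_N(1)^2\log(N/\delta)}{\epsilon^6(1-\gamma)^{10}}, \end{align*} which matches the choice of $M$ in the proof of \pref{thm:detailed_bound_rmax_pg_new} below.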
In particular, if $\epsilon_{stat} = \widetilde{O}\left(1/\sqrt{M}\right)$, then we roughly need $M$ to be on the order of $\widetilde{\Omega}\left( 1/(\epsilon^6 (1-\gamma)^6) \right)$. Another source of samples is the samples used to estimate the covariance matrices $\Sigma^n$. As $\phi$ could be infinite-dimensional, we need matrix concentration without explicit dependence on the dimension of $\phi$. Leveraging the matrix Bernstein inequality with matrix intrinsic dimension, the following lemma shows concentration of $\widehat\Sigma^n$ around $\Sigma^n$, and of $\widehat\Sigma_\mix^n$ around $\Sigma_\mix^n$. \begin{lemma}[Estimating Covariance Matrices] Set $\lambda = 1$. Define $\widehat{d}$ as: \begin{align*} \widehat{d} = \max_{\pi\in \Delta(\Pi)} \mathrm{tr}\left( \Sigma^{\pi} \right)/\| \Sigma^{\pi}\|, \end{align*} i.e., the maximum intrinsic dimension of the covariance matrix from a mixture policy. For $K \geq 32 N^2 \ln\left(\widehat{d}N/\delta\right)$, with probability at least $1-\delta$, for any $n\in [N]$ we have, for all $x$ with $\|x\| \leq 1$, \begin{align*} (1/2)x^{\top}\left( \Sigma_\mix^n \right)^{-1} x\leq x^{\top}\left( \widehat\Sigma_\mix^n \right)^{-1} x \leq 2 x^{\top}\left( \Sigma_\mix^n \right)^{-1}x. \end{align*} \label{lem:concentration_cov_new} \end{lemma} \begin{proof} The proof follows directly from \pref{lem:inverse_covariance}, a concentration result for the inverse covariance matrix. \end{proof} Note that \pref{ass:transfer_bias} is equivalent to the condition stated in \pref{lem:variance_bias_n_new}: by the construction of $\mathcal{M}^n$ and $\mathcal{M}_{b^n}$ and \pref{remark:relationship_two_mdps}, we have $Q^n_{b^n}(s,a) = Q^n_{\mathcal{M}^n}(s,a)$ and $A^n_{b^n}(s,a) = A^n_{\mathcal{M}^n}(s,a)$ for all $s$ with $a\neq a^\dagger$, and the policy cover $\rho^n_{\mix}$ and the policy $\pi^n$ have zero probability of picking $a^\dagger$. Now we are ready to prove \pref{thm:detailed_bound_rmax_pg_new}. \begin{proof}[Proof of \pref{thm:detailed_bound_rmax_pg_new}] Assume the event in~\pref{lem:concentration_cov_new} holds. In this case, we have for all $n\in [N]$, \begin{align*} (1/2) x^{\top} \left( {\Sigma}^n_\mix \right)^{-1} x \leq x^{\top}\left({\widehat\Sigma}^n_\mix\right)^{-1} x \leq 2 x^{\top} \left({\Sigma}^n_\mix\right)^{-1} x, \end{align*} for all $\|x\|\leq 1$, and the total number of samples used for estimating covariance matrices is: \begin{align} \label{eq:source_1_new} &N \times K = 32 N^3 \ln\left( \widehat{d} N /\delta \right) \\ &= \frac{(32\times 64)\log(A)^3\mathcal{I}_N(1)^3 W^6}{ \epsilon^9(1-\gamma)^{9}} \ln^3\left( \frac{4W^2 \log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \ln\left( \frac{ 4\log(A)W^2 \widehat{d}\mathcal{I}_N(1) }{ \epsilon^3(1-\gamma)^3\delta}\ln\left( \frac{4W^2\log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \right)\\ & =: \frac{c_1 \nu_1 \mathcal{I}_N(1)^3 \log(A)^3 W^6}{\epsilon^9(1-\gamma)^9}, \end{align} where $c_1$ is a constant and $\nu_1$ contains log-terms: \begin{align*} \nu_1:= \ln^3\left( \frac{4W^2 \log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \ln\left( \frac{ 4\log(A)W^2 \widehat{d}\mathcal{I}_N(1) }{ \epsilon^3(1-\gamma)^3\delta}\ln\left( \frac{4W^2\log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \right).
\end{align*} Since the known state-action pairs are defined by $\phi(s,a)^{\top} \left( \widehat{\Sigma}_\mix^n \right)^{-1}\phi(s,a) \leq \beta$, we must have that for any $(s,a)\in\mathcal{K}^n$: \begin{align*} \phi(s,a)^{\top} \left( \Sigma^n_{\mix} \right)^{-1} \phi(s,a) \leq 2\beta, \end{align*} and for any $(s,a)\not\in\Kcal^n$: \begin{align*} \phi(s,a)^{\top} \left( \Sigma^n_{\mix} \right)^{-1} \phi(s,a) \geq \frac{1}{2} \beta. \end{align*} This allows us to call \pref{thm:regret_stat_error_new}. From \pref{thm:regret_stat_error_new}, we know that we need to set $M$ large enough such that \begin{align*} \epsilon_{stat} = \frac{ \epsilon^3 (1-\gamma)^3 }{\log(A) \mathcal{I}_N(1)} \ln^{-1}\left( \frac{4W^2\mathcal{I}_N(1)}{ \epsilon^3(1-\gamma)^3 } \right). \end{align*} Using \pref{lem:sgd_dim_free}, we know that with probability at least $1-\delta$, for all $n$, $\epsilon_{stat}$ scales on the order of: \begin{align*} \epsilon_{stat} = \sqrt{\frac{ 9W^4 \log(N /\delta) }{(1-\gamma)^4 M }}, \end{align*} where we have taken a union bound over all episodes $n\in [N]$. Solving for $M$, we have: \begin{align*} M = \frac{9 W^4\mathcal{I}_N(1)^2 \ln^2(A) }{\epsilon^6(1-\gamma)^{10}} \left( \ln\left( \frac{N}{\delta} \right)\ln\left( \frac{ 4W^2\mathcal{I}_N(1) }{ \epsilon^3(1-\gamma)^3} \right)\right). \end{align*} Accounting for every episode $n\in [N]$, the total number of samples needed for NPG is: \begin{align*} N \cdot M & = \frac{4W^2\log(A) \mathcal{I}_N(1)}{ \epsilon^3(1-\gamma)^3} \ln\left( \frac{4W^2\log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \times \frac{9 W^4\mathcal{I}_N(1)^2 \ln^2(A) }{\epsilon^6(1-\gamma)^{10}} \left( \ln\left( \frac{N}{\delta} \right)\ln\left( \frac{ 4W^2\mathcal{I}_N(1) }{ \epsilon^3(1-\gamma)^3} \right)\right)\\ & = \frac{ 36 W^6 \ln^3(A) \mathcal{I}_N(1)^3 }{ \epsilon^9 (1-\gamma)^{13}} \ln\left( \frac{4W^2\log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \left( \ln\left( \frac{N}{\delta} \right)\ln\left( \frac{ 4W^2\mathcal{I}_N(1) }{ \epsilon^3(1-\gamma)^3} \right)\right)\\ & = \frac{c_2 \nu_2 W^6 \mathcal{I}_N(1)^{3} \ln^3(A) }{ \epsilon^{9}(1-\gamma)^{13} }, \end{align*} where $c_2$ is a positive universal constant, and $\nu_2$ only contains log terms: \begin{align*} \nu_2 = \ln\left( \frac{4W^2\log(A)\mathcal{I}_N(1)}{\epsilon^3(1-\gamma)^3} \right) \left( \ln\left( { \frac{ 4W^2 \log(A) \mathcal{I}_N(1)}{ \epsilon^3(1-\gamma)^3\delta } \ln\left(\frac{4W^2\log(A)\mathcal{I}_N(1)}{ \epsilon^3(1-\gamma)^3 }\right) } \right)\ln\left( \frac{ 4W^2\mathcal{I}_N(1) }{ \epsilon^3(1-\gamma)^3} \right)\right). \end{align*} Combining the two sources of samples, the total number of samples is bounded by: \begin{align*} \frac{c_1 \nu_1 \mathcal{I}_N(1)^3 \log(A)^3 W^6}{\epsilon^9(1-\gamma)^9} + \frac{c_2 \nu_2 W^6 \mathcal{I}_N(1)^{3} \log^3(A) }{ \epsilon^{9}(1-\gamma)^{13} }. \end{align*} This concludes the proof. \end{proof} \input{app_linear_mdp} \input{app_state_aggregation} \section{NPG Analysis (\pref{alg:npg})} \label{app:npg_analysis} In this section, we analyze \pref{alg:npg} for a particular fixed episode $n$. To carry out the analysis, we first set up some auxiliary MDPs. \subsection{Set up of Augmented MDPs} \label{app:setup_mdps} Denote \begin{equation} \mathcal{K}^n := \left\{(s,a): \phi(s,a)^{\top} \left(\Sigma^n_{\mix}\right)^{-1}\phi(s,a) \leq \beta \right\}.
\label{eqn:known_set} \end{equation} That is, $\mathcal{K}^n$ contains the state-action pairs that receive no reward bonus; the bonus is supported on the pairs outside $\mathcal{K}^n$. We abuse notation a bit by denoting $s\in\mathcal{K}^n$ if and only if $(s,a)\in\mathcal{K}^n$ for all $a\in\mathcal{A}$. We also add an extra action, denoted $a^{\dagger}$, in $\mathcal{M}^n$: for any $s\not\in\mathcal{K}^n$, we add $a^\dagger$ to the set of available actions one could take at $s$. We set rewards and transitions as follows: \begin{align} r^n(s,a) = r(s,a) + b^n(s,a) + \one\{a = a^\dagger\}; \quad P^n(\cdot | s,a) = P(\cdot | s,a), \forall (s,a) \text{ with } a \neq a^\dagger, \quad P^n(s|s,a^\dagger) = 1, \label{eq:constructed_mdp} \end{align} where $r(s,a^\dagger) = b^n(s,a^\dagger) = 0$ for any $s$. Note that at this point, we have three different kinds of MDPs that we will cross during the analysis: \begin{enumerate} \item the original MDP $\mathcal{M}$---the one that EPOC\xspace is ultimately optimizing; \item the MDP with reward bonus $b^n(s,a)$---the one optimized by NPG in each episode $n$ \emph{in the algorithm}, which we denote as $\mathcal{M}_{b^n} = \{P, r(s,a) + b^n(s,a)\}$ with $P$ and $r$ being the transition and reward from $\mathcal{M}$; \label{item:mdp_2} \item the MDP $\mathcal{M}^n$ constructed in Eq.~\pref{eq:constructed_mdp}, which is \emph{only used in the analysis but not in the algorithm}. \label{item:mdp_3} \end{enumerate} The relationship between $\mathcal{M}_{b^n}$ (item \pref{item:mdp_2}) and $\mathcal{M}^n$ (item \pref{item:mdp_3}) is that NPG (\pref{alg:npg}) runs on $\mathcal{M}_{b^n}$ (NPG is not even aware of the existence of $\mathcal{M}^n$), but we use $\mathcal{M}^n$ to analyze the performance of NPG below. \paragraph{Additional Notations.} We are going to focus on a fixed comparator policy $\widetilde\pi \in \Pi$. We denote $\widetilde\pi^n$ as the policy such that $\widetilde{\pi}(\cdot |s) = \widetilde\pi^n(\cdot |s)$ for $s\in\mathcal{K}^n$, and $\widetilde\pi^n(a^\dagger |s ) = 1$ for $s\not\in\mathcal{K}^n$. This means that the comparator policy $\widetilde\pi^n$ will self-loop in a state $s\not\in\mathcal{K}^n$ and collect maximum rewards. We denote $\widetilde{d}_{\mathcal{M}^n}$ as the state-action distribution of $\widetilde{\pi}^n$ under $\mathcal{M}^n$, and $V^{\pi}_{\mathcal{M}^n}, Q^\pi_{\mathcal{M}^n}$, and $A^\pi_{\mathcal{M}^n}$ as the value, Q, and advantage functions of $\pi$ under $\mathcal{M}^n$. We also write $Q^{\pi}_{b^n}(s,a)$ as short for $Q^{\pi}(s,a ; r + b^n)$, and similarly $A^{\pi}_{b^n}(s,a)$ for $A^{\pi}(s,a; r + b^n)$ and $V^\pi_{b^n}(s)$ for $V^{\pi}(s; r + b^n)$. \begin{remark} \label{remark:relationship_two_mdps} Note that policies used in the algorithm do not pick $a^\dagger$ (indeed, the algorithm is not even aware of $\mathcal{M}^n$). Hence for any policy $\pi$ that we encounter during learning, we have $V^{\pi}_{\mathcal{M}^n}(s) = V^{\pi}_{b^n}(s)$ for all $s$, and $Q^{\pi}_{\mathcal{M}^n}(s,a) = Q^{\pi}_{b^n}(s,a)$ and $A^{\pi}_{\mathcal{M}^n}(s,a) = A^{\pi}_{b^n}(s,a)$ for all $s$ with $a\neq a^\dagger$. This fact is important as our algorithm runs on $\mathcal{M}_{b^n}$ while the performance progress of the algorithm is tracked under $\mathcal{M}^n$. \end{remark} \subsection{Performance of NPG (\pref{alg:npg}) on the Augmented MDP $\mathcal{M}^n$} In this section, we focus on analyzing the performance of NPG (\pref{alg:npg}) on a specific episode $n$; the sketch below records the policy update being analyzed.
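The following is a minimal sketch (ours, under simplifying assumptions: tabular actions, an explicit known-state mask, and strictly positive policy iterates, as is the case under the uniform initialization; this is not EPOC\xspace's implementation) of the exponentiated update $\pi^{t+1}(\cdot|s) \propto \pi^t(\cdot|s)\exp(\eta \widehat{A}^t_{b^n}(s,\cdot))$ on $\mathcal{K}^n$:
\begin{verbatim}
# A sketch (ours) of one NPG / mirror-descent step: multiplicative-weights
# update of a softmax policy on known states, frozen elsewhere.
import numpy as np

def npg_step(pi, A_hat, known_state, eta):
    """pi, A_hat: (S, A) arrays; known_state: (S,) boolean; eta: step size."""
    logits = np.log(pi) + eta * A_hat            # assumes pi > 0 entrywise
    new_pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    new_pi /= new_pi.sum(axis=1, keepdims=True)  # renormalize each row
    # outside K^n the policy is held fixed (uniform over unknown actions)
    return np.where(known_state[:, None], new_pi, pi)
\end{verbatim}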
Specifically, we leverage a mirror descent analysis similar to~\citet{agarwal2019optimality} to bound the regret between the sequence of learned policies $\{\pi^t\}_{t=1}^{T}$ and the comparator $\widetilde{\pi}^n$ on the constructed MDP $\mathcal{M}^n$. Via the performance difference lemma \cite{kakade2003sample}, we immediately have: \begin{align*} V^{\widetilde\pi^n}_{\mathcal{M}^n} - V^{\pi}_{\mathcal{M}^n} = \frac{1}{1-\gamma} \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}} \left[A^{\pi}_{\mathcal{M}^n}(s,a)\right]. \end{align*} For notational simplicity below, given a policy $\pi$ and state $s$, we write $\pi_s$ as short for $\pi(\cdot | s)$. \begin{lemma}[NPG Convergence] Consider any episode $n$. Setting $\eta = \sqrt{\frac{\log(A) }{ W^2 T}} $, assume NPG updates the policy as: \begin{align*} \pi^{t+1}(\cdot | s) \propto \begin{cases} \pi^t(\cdot | s) \exp\left( \eta \widehat{A}^t_{b^n}(s,\cdot) \right), & s\in \Kcal^n, \\ \pi^t(\cdot |s), & \text{else}, \end{cases} \end{align*} with $\pi^0$ initialized as: \begin{align*} \pi^0(\cdot |s) = \begin{cases} \text{Uniform}(\mathcal{A}) & s\in\Kcal^n\\ \text{Uniform}(\{a\in\mathcal{A}: (s,a)\not\in\Kcal^n\}) & \text{else}. \end{cases} \end{align*} Assume that $\sup_{s,a}\left\lvert\widehat{A}^t_{b^n}(s,a)\right\rvert \leq W$ and $\ensuremath{\mathbb{E}}_{a'\sim \pi^t_s} \widehat{A}^t_{b^n}(s,a') = 0$ for all $t$. Then NPG outputs a sequence of policies $\{\pi^t\}_{t=1}^T$ such that on $\mathcal{M}^n$, when comparing to $\widetilde\pi^n$: \begin{align*} &\frac{1}{T}\sum_{t=1}^T \left(V^{\widetilde{\pi}^n}_{\mathcal{M}^n} - V^{t}_{\mathcal{M}^n} \right) = \frac{1}{T} \sum_{t=1}^T \left(V^{\widetilde{\pi}^n}_{\mathcal{M}^n} - V^{t}_{{b^n}} \right) \\ &\quad \leq \frac{1}{1-\gamma}\left(2W\sqrt{\frac{\log(A)}{T}} + \frac{1}{T}\sum_{t=1}^T \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \widehat{A}^t_{{b^n}}(s,a) \right) \one\{s\in\mathcal{K}^n\} \right)\right). \end{align*} \label{lem:npg_construction} \end{lemma} \begin{proof} First consider any policy $\pi$ which uniformly picks actions among $\{a\in\mathcal{A}: (s,a)\not\in\mathcal{K}^n\}$ at any $s\not\in\mathcal{K}^n$. Via the performance difference lemma, we have: \begin{align*} V^{\widetilde\pi^n}_{\mathcal{M}^n} - V^{\pi}_{\mathcal{M}^n} = \frac{1}{1-\gamma} \sum_{(s,a)} \widetilde{d}_{\mathcal{M}^n}(s,a) A_{\mathcal{M}^n}^{\pi}(s,a) \leq \frac{1}{1-\gamma} \sum_{(s,a)} \widetilde{d}_{\mathcal{M}^n}(s,a) A_{\mathcal{M}^n}^{\pi}(s,a) \one\{s \in \mathcal{K}^n\}, \end{align*} where the last inequality comes from the fact that $A^{\pi}_{\mathcal{M}^n}(s,a)\one\{s\not\in\mathcal{K}^n\} \leq 0$. To see this, first note that for any $s\not\in\mathcal{K}^n$, $\widetilde\pi^n$ deterministically picks $a^\dagger$, and $Q^\pi_{\mathcal{M}^n}(s,a^\dagger) = 1 + \gamma V^{\pi}_{\mathcal{M}^n}(s)$, as taking $a^\dagger$ leads the agent back to $s$. Second, since $\pi$ uniformly picks actions among $\{a: (s,a)\not\in\mathcal{K}^n\}$, we have $V^{\pi}_{\mathcal{M}^n}(s) \geq 1/(1-\gamma)$ for $s\not\in\mathcal{K}^n$, as the immediate reward there already includes the bonus $b^n(s,a) = 1/(1-\gamma)$ for $(s,a)\not\in\mathcal{K}^n$. Hence, we have \begin{align*} A^{\pi}_{\mathcal{M}^n}(s,a^\dagger) = Q^\pi_{\mathcal{M}^n}(s,a^\dagger) - V^\pi_{\mathcal{M}^n}(s) = 1 - (1-\gamma) V^{\pi}_{\mathcal{M}^n}(s) \leq 0, \quad \forall s\not\in\mathcal{K}^n.
\end{align*} Recalling \pref{alg:npg}, $\pi^t$ chooses actions uniformly at random among $\{a: (s,a)\not\in\mathcal{K}^n\}$ for $s\not\in\mathcal{K}^n$; thus we have: \begin{align*} (1-\gamma)\left(V^{\widetilde\pi^n}_{\mathcal{M}^n} - V^{t}_{\mathcal{M}^n}\right) \leq \sum_{(s,a)} \widetilde{d}_{\mathcal{M}^n}(s,a) A_{\mathcal{M}^n}^{t}(s,a) \one\{s \in \mathcal{K}^n\} = \sum_{(s,a)} \widetilde{d}_{\mathcal{M}^n}(s,a) A_{b^n}^{t}(s,a) \one\{s \in \mathcal{K}^n\}, \end{align*} where the last equality uses the fact that $A^t_{b^n}(s,a) = A^t_{\mathcal{M}^n}(s,a)$ for $a\neq a^\dagger$ and the fact that for $s\in\Kcal^n$, $\widetilde\pi^n$ never picks $a^\dagger$ (i.e., $\widetilde{d}_{\mathcal{M}^n}(s,a^\dagger) = 0$ for $s\in\Kcal^n$). Recall the update rule of NPG, \begin{align*} {\pi}^{t+1}(\cdot |s) \propto \pi^t(\cdot | s) \exp\left(\eta \left(\widehat{A}^t_{b^n}(s,\cdot)\right)\one\{s\in\mathcal{K}^n\} \right), \forall s, \end{align*} where $\widehat{A}^t_{b^n}$ is the critic built from the centered feature $\bar\phi^t(s,a) = \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^t(\cdot |s)} \phi(s,a')$ (see \pref{app:rmaxpg_sample}). This is equivalent to updating $\pi(\cdot|s)$ for $s\in \Kcal^n$ while holding $\pi(\cdot |s)$ fixed for $s\not\in\Kcal^n$, i.e., \begin{align*} \pi^{t+1}(\cdot | s) \propto \begin{cases} \pi^t(\cdot | s) \exp\left( \eta\widehat{A}^t_{b^n}(s,\cdot) \right), & s\in\Kcal^n, \\ \pi^t(\cdot | s), & \text{else}. \end{cases} \end{align*} Now let us focus on any $s\in\Kcal^n$. Denote the normalizer $z^t = \sum_{a}\pi^t(a | s) \exp\left(\eta \widehat{A}^t_{b^n}(s,a) \right) $. We have that: \begin{align*} \ensuremath{\mathrm{KL}}(\widetilde\pi^n_s, \pi^{t+1}_s) - \ensuremath{\mathrm{KL}}(\widetilde\pi^n_s, \pi^t_s) = \ensuremath{\mathbb{E}}_{a\sim \widetilde\pi^n_s } \left[ -\eta \widehat{A}^t_{b^n}(s,a) + \log(z^t) \right], \end{align*} where we use $\pi_s$ as a shorthand for the vector of probabilities $\pi(\cdot|s)$ over actions, given the state $s$. For $\log(z^t)$, using the fact that $\eta \leq 1/W$ (which holds for our choice of $\eta$ whenever $T \geq \log(A)$), we have $\eta \widehat{A}^t_{b^n}(s,a) \leq 1$, which allows us to use the inequality $\exp(x) \leq 1 + x + x^2$ for any $x\leq 1$ and leads to the following: \begin{align*} &\log(z^t) = \log\left( \sum_{a} \pi^t(a|s) \exp(\eta\widehat{A}^t_{b^n}(s,a)) \right) \\ &\leq \log\left( \sum_{a}\pi^t(a|s) \left( 1 + \eta\widehat{A}^t_{b^n}(s,a) + \eta^2 \left(\widehat{A}^t_{b^n}(s,a) \right)^2 \right) \right) \\ & \leq \log\left( 1 + \eta^2 W^2 \right) \leq \eta^2 W^2, \end{align*} where we use the fact that $\sum_a \pi^t(a|s) \widehat{A}^t_{b^n}(s,a) = 0$ and $\sup_{s,a}|\widehat{A}^t_{b^n}(s,a)| \leq W$. Hence, for $s\in\Kcal^n$ we have: \begin{align*} \ensuremath{\mathrm{KL}}(\widetilde\pi^n_s, \pi^{t+1}_s) - \ensuremath{\mathrm{KL}}(\widetilde\pi^n_s, \pi^t_s) \leq -\eta \ensuremath{\mathbb{E}}_{a\sim \widetilde\pi^n_s} \widehat{A}^t_{b^n}(s,a) + \eta^2 W^2. \end{align*} Summing across rounds and telescoping, we get: \begin{align*} \sum_{t=1}^T \ensuremath{\mathbb{E}}_{a\sim \widetilde\pi^n_s}\widehat{A}^t_{b^n}(s,a) \leq \frac{1}{\eta}\ensuremath{\mathrm{KL}}(\widetilde\pi^n_s, \pi^1_s) + \eta T W^2 \leq \frac{\log(A)}{\eta} + \eta T W^2, \quad \forall s\in \Kcal^n.
\end{align*} Taking the expectation over $s\sim \widetilde{d}_{\mathcal{M}^n}$, we have: \begin{align*} \sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}} \left[\widehat{A}^t_{b^n}(s,a)\one\{s\in\mathcal{K}^n\}\right] \leq \frac{\log(A)}{\eta} + \eta T W^2 \leq 2W \sqrt{\log(A) T }. \end{align*} Hence, for the regret on $\mathcal{M}^n$, we have: \begin{align*} &\sum_{t=1}^T \left( V^{\widetilde\pi^n}_{\mathcal{M}^n} - V_{\mathcal{M}^n}^{t} \right)\\ & \leq \sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}} \left[\widehat{A}^t_{b^n}(s,a) \one\{s\in\mathcal{K}^n\} \right]+ \sum_{t=1}^T \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{b^n}(s,a) - \widehat{A}^t_{b^n}(s,a) \right)\one\{s\in\mathcal{K}^n\} \right)\\ & \leq 2W\sqrt{\log(A) T} + \sum_{t=1}^T \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{b^n}(s,a) - \widehat{A}^t_{b^n}(s,a) \right)\one\{s\in\mathcal{K}^n\} \right). \end{align*} Now using the fact that $\pi^t$ never picks $a^\dagger$, we have $V^{t}_{\mathcal{M}^n} = V^{t}_{{b^n}}$. This concludes the proof. \end{proof} Note that the second term on the RHS of the inequality in the above lemma measures the average estimation error of $\widehat{A}^t_{{b^n}}$. Below, for EPOC\xspace's analysis, we bound this critic prediction error under the comparator's distribution. \section{Relationship between $\mathcal{M}^n$ and $\mathcal{M}$} We need the following lemma to relate the probability of a known state being visited by $\widetilde{\pi}^n$ under $\mathcal{M}^n$ to the probability of the same state being visited by $\widetilde{\pi}$ under $\mathcal{M}_{b^n}$. Intuitively, as $\widetilde{\pi}^n$ always picks $a^\dagger$ outside $\mathcal{K}^n$, it should have smaller probability of visiting the states inside $\mathcal{K}^n$ (once $\widetilde{\pi}^n$ escapes, it is absorbed and never returns to $\mathcal{K}^n$). Also recall that $\mathcal{M}_{b^n}$ and $\mathcal{M}$ share the same underlying transition dynamics, so for any policy we simply have $d^{\pi}_{\mathcal{M}_{b^n}} = d^{\pi}$. The following lemma formally states this. \begin{lemma} \label{lem:prob_absorb} For any state $s\in\mathcal{K}^n$, we have: \begin{align*} \widetilde{d}_{\mathcal{M}^n}(s,a) \leq d^{\widetilde{\pi}}(s,a), \forall a \in \mathcal{A}, \end{align*} where we recall that $\widetilde{d}_{\mathcal{M}^n}$ is the state-action distribution of $\widetilde\pi^n$ under $\mathcal{M}^n$. \end{lemma} \begin{proof}We prove by induction. Recall that $\widetilde{d}_{\mathcal{M}^n}$ is the state-action distribution of $\widetilde\pi^n$ under $\mathcal{M}^n$, and $d^{\widetilde\pi}$ is the state-action distribution of $\widetilde\pi$ under both $\mathcal{M}_{b^n}$ and $\mathcal{M}$, as they share the same dynamics. Starting at $h = 0$, we have: \begin{align*} \widetilde{d}_{\mathcal{M}^n,0}(s_0, a) = d^{\widetilde{\pi}}_{0}(s_0,a), \end{align*} as $s_0$ is fixed and $s_0\in\mathcal{K}^n$, and $\widetilde{\pi}^n(\cdot | s_0) = \widetilde\pi(\cdot | s_0)$. Now assume that at time step $h$, for all $s\in \mathcal{K}^n$ we have: \begin{align*} \widetilde{d}_{\mathcal{M}^n, h}(s,a) \leq d^{\widetilde{\pi}}_{h}(s,a), \forall a \in \mathcal{A}. \end{align*} We proceed to prove that this holds for $h+1$.
By definition, we have that for $s\in\mathcal{K}^n$, \begin{align*} &\widetilde{d}_{\mathcal{M}^n, h+1 }(s) = \sum_{s',a'} \widetilde{d}_{\mathcal{M}^n, h}(s',a') P_{\mathcal{M}^n}(s | s',a') \\ &= \sum_{s',a'} \one\{s'\in\mathcal{K}^n\} \widetilde{d}_{\mathcal{M}^n, h}(s',a') P_{\mathcal{M}^n}(s | s',a') = \sum_{s',a'} \one\{s'\in\mathcal{K}^n\} \widetilde{d}_{\mathcal{M}^n, h}(s',a') P(s | s',a'), \end{align*} since if $s'\not\in\mathcal{K}^n$, $\widetilde\pi^n$ deterministically picks $a^\dagger$ (i.e., $a' = a^\dagger$) and $P_{\mathcal{M}^n}(s | s' ,a^\dagger) = 0$. On the other hand, for $d^{\widetilde{\pi}}_{h+1}(s)$, we have that for $s\in\mathcal{K}^n$, \begin{align*} &d^{\widetilde{\pi}}_{h+1}(s) = \sum_{s',a'}d^{\widetilde{\pi}}_{h}(s',a') P(s|s',a') \\ &=\sum_{s',a'}\one\{s'\in\mathcal{K}^n\} d^{\widetilde{\pi}}_{h}(s',a') P(s|s',a') + \sum_{s',a'}\one\{s'\not\in\mathcal{K}^n\}d^{\widetilde{\pi}}_{h}(s',a') P(s|s',a') \\ & \geq \sum_{s',a'}\one\{s'\in\mathcal{K}^n\} d^{\widetilde{\pi}}_{h}(s',a') P(s|s',a') \\ & \geq \sum_{s',a'}\one\{s'\in\mathcal{K}^n\} \widetilde{d}_{\mathcal{M}^n,h}(s',a') P(s|s',a') = \widetilde{d}_{\mathcal{M}^n, h+1}(s). \end{align*} Multiplying both sides by the action probabilities and using the fact that $\widetilde\pi^n(\cdot|s) = \widetilde\pi(\cdot | s)$ for $s\in\mathcal{K}^n$, we conclude that the inductive hypothesis holds at $h+1$ as well. Thus it holds for all $h$. Using the definition of the average state-action distribution, we conclude the proof. \end{proof} We now establish a standard simulation-lemma-style result to link the performance of policies on $\mathcal{M}^n$ to the performance on the real MDP $\mathcal{M}$, before bounding the resulting error using a linear-bandit potential function argument as sketched above. These arguments allow us to translate the error bounds from Appendix~\ref{app:npg_analysis} from the augmented MDP $\mathcal{M}^n$ to the actual MDP $\mathcal{M}$. \begin{lemma}[Policy Performances on $\mathcal{M}^n$, $\mathcal{M}_{b^n}$, and $\mathcal{M}$] At each episode $n$, denote $\{\pi^t\}_{t=1}^T$ as the sequence of policies generated from NPG in that episode. We have that for $\widetilde\pi^n$ and for $\pi^t$ with any $t\in [T]$: \begin{align*} & V^{\widetilde\pi^n}_{\mathcal{M}^n} \geq V^{\widetilde\pi}_{\mathcal{M}},\\ & V^{t}_{\mathcal{M}} \geq V^{t}_{{b^n}} - \frac{1}{1-\gamma}\left(\sum_{(s,a)\not\in\mathcal{K}^n} d^t(s,a)\right) . \end{align*} \label{lem:perf_absorb} \end{lemma} \begin{proof} Note that when running $\widetilde\pi^n$ under $\mathcal{M}^n$, once $\widetilde\pi^n$ visits some $s\not\in\mathcal{K}^n$, it is absorbed at $s$ and loops there, receiving the maximum reward $1$, while $\widetilde\pi$ receives reward no more than $1$ and in $\mathcal{M}$ there is no reward bonus; this gives the first inequality. Recall that $\pi^t$ never takes $a^\dagger$, hence $d^t(s,a) = d^{t}_{\mathcal{M}_{b^n}}(s,a)$ for all $(s,a)$. Recalling that the reward bonus is defined as $\frac{1}{1-\gamma}\one\{(s,a)\not\in\mathcal{K}^n\}$ and using the definition of $b^n(s,a)$ concludes the proof. \end{proof} The lemma below relates the escaping probability to an elliptical potential function and quantifies the progress made by the algorithm via the maximum information gain. \begin{lemma}[Potential Function Argument] Consider the sequence of policies $\{\pi^n\}_{n=1}^N$ generated from \pref{alg:epoc}.
We have: \begin{align*} \sum_{n=0}^{N-1} V^{\pi^{n+1}} \geq \sum_{n=0}^{N-1} V^{\pi^{n+1}}_{b^n} - \frac{2\mathcal{I}_{N}(\lambda)}{ \beta (1-\gamma) }. \end{align*} \label{lem:potential_argument_n} \end{lemma} \begin{proof} Denote the eigen-decomposition of $\Sigma_{\mix}^n$ as $U\Lambda U^{\top}$, and let $\Sigma^n = \ensuremath{\mathbb{E}}_{(s,a)\sim d^n}\phi\phi^{\top}$. We have: \begin{align*} &\mathrm{tr}\left( \Sigma^{n+1} \left(\Sigma_{\mix}^n\right)^{-1} \right) = \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n+1}}\,\mathrm{tr}\left( \phi(s,a)\phi(s,a)^{\top}\left( \Sigma_\mix^n \right)^{-1} \right)\\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n+1}}\phi(s,a)^{\top} \left(\Sigma_{\mix}^n\right)^{-1} \phi(s,a) \\ & \geq \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n+1}} \left[ \one\{(s,a)\not\in\mathcal{K}^n\}\phi(s,a)^{\top} \left(\Sigma_\mix^n \right)^{-1}\phi(s,a) \right] \geq \beta \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n+1}} \one\{(s,a)\not\in\mathcal{K}^n\}. \end{align*} Together with \pref{lem:perf_absorb}, this implies that \begin{align*} V^{\pi^{n+1}}_{b^n} - V^{\pi^{n+1}} \leq \frac{\mathrm{tr}\left( \Sigma^{n+1} \left(\Sigma^n_{\mix} \right)^{-1} \right)}{\beta (1-\gamma)} . \end{align*} Now invoking~\pref{lem:trace_tele}, we have: \begin{align*} \sum_{n=0}^{N-1} \left(V^{\pi^{n+1}}_{b^n} - V^{\pi^{n+1}}\right) \leq \frac{2\log(\det\left( \Sigma_{\mix}^N \right) / \det(\lambda I))}{\beta(1-\gamma)} \leq \frac{ 2 \mathcal{I}_N(\lambda) }{ \beta(1-\gamma)}, \end{align*} where we use the definition of the information gain $\mathcal{I}_{N}(\lambda)$. \end{proof} \section{Analysis of EPOC\xspace for the Agnostic Setting (\pref{thm:agnostic})} \label{app:rmaxpg_sample} In this section, we analyze the performance of EPOC\xspace using the NPG results derived in the previous section. We begin with the transfer bias assumption (\pref{ass:transfer_bias}), which we used as the condition in the NPG analysis in \pref{lem:variance_bias_n}, and a theorem statement which is the most general sample complexity result for EPOC\xspace and from which all the statements of Section~\ref{sec:analysis} follow. The following theorem states the detailed sample complexity of EPOC\xspace (a detailed version of \pref{thm:agnostic}). \begin{theorem}[Main Result: Sample Complexity of EPOC\xspace] Fix $\delta\in (0,1/2)$ and $\epsilon\in (0, \frac{1}{1-\gamma})$. Set hyperparameters as follows: \begin{align*} &T = \frac{4W^2 \log(A)}{ (1-\gamma)^2 \epsilon^2}, \quad \lambda = 1, \quad \beta = \frac{\epsilon^2(1-\gamma)^2}{4W^2}, \quad N \geq \frac{4W^2 \mathcal{I}_N(1)}{ (1-\gamma)^3 \epsilon^3 },\\ & M = \frac{ 144 W^4 \mathcal{I}_N(1)^2 \ln(NT/\delta )}{\epsilon^6(1-\gamma)^{10}}, \quad K = 32 N^2 \log\left(\frac{N\widehat{d}}{\delta}\right). \end{align*} Under \pref{ass:transfer_bias}, with probability at least $1-2\delta$, we have: \begin{align*} \max_{n\in [N]} V^{\pi^n} \geq V^{\widetilde\pi} - \frac{2\sqrt{A\varepsilon_{bias}}}{1-\gamma} - 4\epsilon, \end{align*} for any comparator $\widetilde\pi\in \Pi_{linear}$, with at most a total number of samples: \begin{align*} \frac{c \nu W^8 \mathcal{I}_N(1)^3 \ln(A)}{\epsilon^{11}(1-\gamma)^{15}}, \end{align*} where $c$ is a universal constant, and $\nu$ contains only log terms: \begin{align*} \nu & = \ln\left( \frac{4\widehat{d} W^2 \mathcal{I}_N(1) }{(1-\gamma)^3 \epsilon^3 \delta} \right) + \ln\left( \frac{16 W^4 \ln(A) \mathcal{I}_N(1)}{ \epsilon^5(1-\gamma)^5 \delta} \right).
\end{align*} \label{thm:detailed_bound_rmax_pg} \end{theorem} \begin{remark}Note that in the above theorem, we require the number of iterations $N$ to satisfy the constraint $N \geq 4W^2 \mathcal{I}_N(1) / ((1-\gamma)^3 \epsilon^3)$. The specific $N$ thus depends on the form of the maximum information gain $\mathcal{I}_N(1)$. For instance, when $\phi(s,a) \in \mathbb{R}^d$ with $\|\phi\|_2 \leq 1$, we have $\mathcal{I}_N(1) \leq d\log(N + 1)$. Hence setting $N \geq \frac{8 W^2 d}{ (1-\gamma)^3\epsilon^3 } \ln\left( \frac{4 W^2 d}{ (1-\gamma)^3\epsilon^3 } \right)$ suffices. Another example is when $\phi$ lives in an RKHS with an RBF kernel. In this case, we have $\mathcal{I}_N(1) = O( \log(N)^{d_{s,a}} )$ (\cite{srinivas2010gaussian}), where $d_{s,a}$ stands for the dimension of the concatenated state-action vector. In this case, we can set $N = O\left( \frac{ W^2 }{(1-\gamma)^3 \epsilon^3} \left(\ln\left( \frac{ W^2 }{(1-\gamma)^3 \epsilon^3} \right) \right)^{d_{s,a}} \right)$.\label{remark:kernel_discussion} \end{remark} In the rest of this section, we prove the theorem. Given the analysis of \pref{app:npg_analysis}, proving the theorem requires the following steps at a high level: \begin{enumerate} \item Bounding the number of outer iterations $N$ needed to obtain a desired accuracy $\epsilon$. Intuitively, this requires showing that the probability with which we can reach an \emph{unknown state} with a positive reward bonus is appropriately small. We carry out this bounding by using arguments from the analysis of linear bandits~\citep{dani2008stochastic}. At a high level, if there is a good probability of reaching unknown states, then NPG finds them based on our previous analysis, as these states carry a high reward. But every time we find such states, the covariance matrix of the resulting policy contains, with large probability, directions not visited by the previous cover (or else the quadratic form defining the unknown states would be small). In a $d$-dimensional linear space, the number of times we can keep finding significantly new directions is roughly $O(d)$ (or, more precisely, is governed by the intrinsic dimension), which allows us to bound the number of required outer episodes. \item Bounding the prediction error of the critic in \pref{lem:npg_construction}. This can be done by a standard regression analysis, and we use a specific result for stochastic gradient descent to fit the critic. \item Errors from empirical covariance matrices instead of their population counterparts have to be accounted for as well; this is done by using standard inequalities on matrix concentration~\citep{tropp2015introduction}. \end{enumerate} \subsection{Proof of \pref{thm:detailed_bound_rmax_pg}} We recall that we perform linear regression from $\phi(s,a)$ to $Q^{\pi}_{b^n}(s,a) - b^n(s,a)$, and set $\widehat{A}^t_{b^n}(s,a)$ as \begin{align*} & \widehat{A}^t_{b^n}(s,a) = \left(b^n(s,a) + \theta^t\cdot \phi(s,a)\right) - \ensuremath{\mathbb{E}}_{a' \sim \pi^t_s}[ b^n(s,a') + \theta^t \cdot \phi(s,a')] \\ & =: \bar{b}^{n,t}(s,a) + \theta^t \cdot \bar\phi^t(s,a), \end{align*} where, for notational simplicity, we denote the centered bonus $\bar{b}^{n,t}(s,a) = b^n(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^t_s} b^n(s,a')$ and the centered feature $\bar\phi^t(s,a) = \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^t_s} \phi(s,a')$.
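To make the critic fit concrete, here is a minimal sketch (ours; the function name, step size, and loop structure are illustrative assumptions rather than EPOC\xspace's implementation) of the constrained regression above, solved by projected stochastic gradient descent in the spirit of the guarantee quoted later from \pref{lem:sgd_dim_free}:
\begin{verbatim}
# A sketch (ours) of the constrained critic fit: regress phi(s,a) onto
# Q^{pi^t}_{b^n}(s,a) - b^n(s,a) subject to ||theta|| <= W via projected SGD.
import numpy as np

def fit_critic(phis, targets, W, lr=0.01, epochs=10, seed=0):
    """phis: (M, d) sampled features; targets: (M,) regression targets."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(phis.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(targets)):
            grad = 2.0 * (theta @ phis[i] - targets[i]) * phis[i]  # squared loss
            theta -= lr * grad
            norm = np.linalg.norm(theta)
            if norm > W:                  # project back onto the W-ball
                theta *= W / norm
    return theta
\end{verbatim}
The fitted $\theta^t$ then yields $\widehat{A}^t_{b^n}(s,a) = \bar{b}^{n,t}(s,a) + \theta^t\cdot\bar\phi^t(s,a)$ by centering with respect to $\pi^t(\cdot|s)$, exactly as in the display above.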
\begin{lemma}[Variance and Bias Tradeoff] \label{lem:variance_bias_n}Assume that at episode $n$ we have $\phi(s,a)^{\top}\left(\Sigma_{\mix}^{n}\right)^{-1}\phi(s,a) \leq \beta$ for $(s,a)\in\mathcal{K}^n$. At iteration $t$ inside episode $n$, let us denote a best on-policy fit as $\theta^t_{\star} \in \argmin_{\|\theta\|\leq W} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho_{\mix}^n}\left( (Q^{t}_{b^n}(s,a) -b^n(s,a)) - \theta\cdot \phi(s,a) \right)^2$. Assume the following condition is true for all $t\in [T]$: \begin{align*} L\left(\theta^t ; \rho^n_{\mix}, Q^{t}_{b^n} - b^n \right) \leq \min_{\theta:\|\theta\|\leq W} L\left(\theta ; \rho^n_{\mix}, Q^{t}_{b^n} - b^n \right) + \varepsilon_{stat}, \end{align*} where $\varepsilon_{stat} \in\mathbb{R}^+$. Then under \pref{ass:transfer_bias} (with $\widetilde\pi$ as the comparator policy here), we have that for all $t\in [T]$: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \widehat{A}^t_{{b^n}}(s,a) \right) \one\{s\in\mathcal{K}^n\}\leq 2\sqrt{A\varepsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta n \varepsilon_{stat} }. \end{align*} \end{lemma} \begin{proof} We first show that under the condition above, $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_{\mix}}\left( \theta^t_{\star}\cdot \phi(s,a) - \theta^t\cdot\phi(s,a) \right)^2$ is bounded by $\varepsilon_{stat}$. \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix} \left( Q^t_{{b^n}} (s,a) - b^n(s,a) - \theta^t\cdot \phi(s,a) \right)^2 - \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( Q^t_{{b^n}}(s,a) - b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right)^2\\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( \theta^t_\star\cdot \phi(s,a) - \theta^t\cdot\phi(s,a) \right)^2 + 2\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_{\mix}} \left( Q^t_{{b^n}}(s,a) -b^n(s,a) - \theta^t_\star\cdot\phi(s,a) \right)\phi(s,a)^{\top}\left( \theta^t_\star - \theta^t \right). \end{align*} Since $\theta^t_\star$ is one of the minimizers of the constrained square loss $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}(Q^t_{{b^n}}(s,a)-b^n(s,a) - \theta\cdot\phi(s,a))^2$, first-order optimality gives: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( Q^t_{{b^n}}(s,a) - b^n(s,a) - \theta^t_\star\cdot\phi(s,a) \right) (-\phi(s,a)^{\top})\left( \theta - \theta^t_\star \right)\geq 0, \end{align*} for any $\|\theta\|\leq W$, which implies that: \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( \theta^t_\star\cdot \phi(s,a) - \theta^t \cdot\phi(s,a) \right)^2 \\ &\leq \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix} \left( Q^t_{{b^n}} (s,a) - b^n(s,a) - \theta^t\cdot \phi(s,a) \right)^2 - \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\left( Q^t_{{b^n}}(s,a) -b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right)^2 \leq \varepsilon_{stat}. \end{align*} Recall that $\Sigma^n_{\mix} = \sum_{i=1}^n \ensuremath{\mathbb{E}}_{(s,a)\sim d^i}\phi(s,a)\phi(s,a)^{\top} + \lambda \mathbf{I} = n \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\phi(s,a)\phi(s,a)^{\top} + \lambda/n \mathbf{I}\right)$, and denote $\bar{\Sigma}_{\mix}^n = \Sigma_{\mix}^n / n$. We have: \begin{align*} \left(\theta^t_\star - \theta^t \right)^{\top} \left( \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_\mix}\phi(s,a)\phi(s,a)^{\top} + \lambda/n \mathbf{I} \right) (\theta^t_\star - \theta^t) \leq \varepsilon_{stat} + \frac{\lambda}{n} W^2.
\end{align*} Hence for any $(s,a)\in\mathcal{K}^n$, we must have the following point-wise estimation error bound: \begin{align} \left\lvert \phi(s,a)^{\top}\left( \theta^t_\star - \theta^t \right)\right\rvert \leq \| \phi(s,a) \|_{(\Sigma_{\mix}^n)^{-1}} \| \theta^t_\star - \theta^t \|_{\Sigma_\mix^n} \leq \sqrt{ \beta n\varepsilon_{stat} + \beta \lambda W^2 }. \label{eq:point_wise_est} \end{align} Now we bound $\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \widehat{A}^t_{{b^n}}(s,a)\right)\one\{s\in\mathcal{K}^n\}$ as follows. \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \widehat{A}^t_{{b^n}}(s,a)\right)\one\{s\in\mathcal{K}^n\} \\ &= \underbrace{\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - ( \bar{b}^{n,t}(s,a) + \theta^t_\star\cdot\bar\phi^t (s,a) ) \right)\one\{s\in\mathcal{K}^n\}}_{\text{term A}} \\ & \qquad + \underbrace{\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( (\bar{b}^{n,t}(s,a) + \theta^t_\star \cdot\bar\phi^t(s,a)) - ( \bar{b}^{n,t}(s,a) + \theta^t\cdot\bar\phi^t(s,a)) \right)\one\{s\in\mathcal{K}^n\}}_{\text{term B}}. \end{align*} We first bound term A above. \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \bar{b}^{n,t}(s,a) - \theta^t_\star\cdot\bar\phi^t (s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left(Q^t_{{b^n}}(s,a) - b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ &\qquad + \ensuremath{\mathbb{E}}_{s\sim \widetilde{d}_{\mathcal{M}^n},a\sim \pi^t_s}\left(-Q^t_{{b^n}}(s,a) + b^n(s,a) + \theta^t_\star\cdot \phi(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & \leq \sqrt{ \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left(Q^t_{{b^n}}(s,a) -b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} } \\ & \qquad + \sqrt{\ensuremath{\mathbb{E}}_{s\sim \widetilde{d}_{\mathcal{M}^n},a\sim \pi^t_s}\left(Q^t_{{b^n}}(s,a) - b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} }\\ & \leq \sqrt{ \ensuremath{\mathbb{E}}_{(s,a)\sim {d}^{\widetilde\pi}}\left(Q^t_{b^n}(s,a) -b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} } \\ & \qquad + \sqrt{\ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi},a\sim \pi^t_s}\left(Q^t_{b^n}(s,a) - b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right)^2\one\{s\in\mathcal{K}^n\} }\\ & \leq \sqrt{ \ensuremath{\mathbb{E}}_{(s,a)\sim {d}^{\widetilde\pi}}\left(Q^t_{b^n}(s,a) -b^n(s,a)- \theta^t_\star\cdot \phi(s,a) \right)^2 } + \sqrt{\ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi},a\sim \pi^t_s}\left(Q^t_{b^n}(s,a) - b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right)^2 }\\ & \leq 2 \sqrt{A \varepsilon_{bias}}, \end{align*} where the first inequality uses the Cauchy--Schwarz inequality, the second inequality uses \pref{lem:prob_absorb} for $s\in\mathcal{K}^n$, and the last inequality uses importance weighting from the action distributions at hand to the uniform distribution $U(\mathcal{A})$ (paying a factor of $A$) together with \pref{ass:transfer_bias}. Now we bound term B above.
We have: \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( \theta^t_\star \cdot\bar\phi^t(s,a) - \theta^t\cdot\bar\phi^t(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( \theta^t_\star\cdot \phi(s,a) - \theta^t\cdot\phi(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & \qquad - \ensuremath{\mathbb{E}}_{s\sim \widetilde{d}_{\mathcal{M}^n}}\ensuremath{\mathbb{E}}_{a\sim \pi^t} \one\{s\in\mathcal{K}^n\}\left( \theta^t_\star\cdot \phi(s,a) - \theta^t\cdot\phi(s,a) \right) \leq 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta n \varepsilon_{stat} }, \end{align*} where we use the point-wise estimation guarantee from inequality \pref{eq:point_wise_est}. Combining terms A and B concludes the proof. \end{proof} Combining the above lemma with \pref{lem:npg_construction}, we see that as long as the on-policy critic achieves small statistical error (i.e., $\varepsilon_{stat}$ is small) and our features $\phi(s,a)$ are sufficient to represent Q-functions in linear form (i.e., $\varepsilon_{bias}$ is small), then we can guarantee that inside episode $n$, NPG succeeds in finding a policy that has low regret with respect to the comparator $\widetilde{\pi}^n$: \begin{align} \max_{t\in[T]} V^{t}_{b^n} \geq V^{\widetilde\pi^n}_{\mathcal{M}^n} - \frac{1}{1-\gamma}\left( 2W\sqrt{\frac{\log(A)}{T}} + 2\sqrt{A\varepsilon_{bias}} + 2\sqrt{\beta \lambda W^2} + 2\sqrt{\beta n \varepsilon_{stat}} \right). \label{eq:npg_perf} \end{align} The term that contains $\varepsilon_{stat}$ comes from the statistical error induced by constrained linear regression. Note that in general, $\varepsilon_{stat}$ decays at the rate $O(1/\sqrt{M})$, with $M$ being the total number of data samples used for linear regression (\pref{line:learn_critic} in \pref{alg:npg}), and $\varepsilon_{stat}$ usually does not depend polynomially on the dimension of $\phi(s,a)$. See \pref{lemma:least_square_dim_free} for an example where the linear regression is solved via stochastic gradient descent. Using \pref{lem:potential_argument_n}, we can now transfer the regret computed under the sequence of models $\{\mathcal{M}_{b^n}\}$ to regret under $\mathcal{M}$. Recall that $V^{\pi}$ denotes $V^{\pi}(s_0)$ and $V^n$ is short for $V^{\pi^n}$. \begin{lemma}Assume the condition in~\pref{lem:variance_bias_n} and \pref{ass:transfer_bias} hold. For the sequence of policies $\{\pi^n\}_{n=1}^N$, we have: \begin{align*} \max_{n\in [N]} V^{n} \geq V^{\widetilde\pi} - \frac{1}{1-\gamma}\left(2W\sqrt{\frac{\log(A)}{T}} + 2 \sqrt{A\varepsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}} +\frac{2\mathcal{I}_{N}(\lambda)}{N\beta}\right). \end{align*} \label{lem:regret_rmax_pg} \end{lemma} \begin{proof} Combining \pref{lem:npg_construction} and \pref{lem:variance_bias_n}, we have: \begin{align*} \frac{1}{N} \sum_{n=0}^{N-1} V^{n+1}_{b^n} \geq \frac{1}{N}\sum_{n=0}^{N-1} V^{\widetilde\pi^n}_{\mathcal{M}^n} - \frac{1}{1-\gamma}\left(2W\sqrt{\frac{\log(A)}{T}} + 2\sqrt{A\varepsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta N \varepsilon_{stat} } \right).
\end{align*} Then using~\pref{lem:perf_absorb} and \pref{lem:potential_argument_n}, we have: \begin{align*} \frac{1}{N} \sum_{n=1}^{N} V^{n} \geq V^{\widetilde\pi} - \frac{1}{1-\gamma} \left( 2W\sqrt{\frac{\log(A)}{T}} + 2\sqrt{A \varepsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta N \varepsilon_{stat} } + \frac{\mathcal{I}_N(\lambda)}{N\beta} \right) , \end{align*} which, since the maximum dominates the average, concludes the proof. \end{proof} The following theorem shows that by setting hyperparameters properly, we can guarantee to learn a near-optimal policy. \begin{theorem}Assume the conditions in~\pref{lem:variance_bias_n} and \pref{ass:transfer_bias} hold. Fix $\epsilon \in (0,1/(4(1-\gamma)))$. Setting hyperparameters as follows: \begin{align*} &T = \frac{4W^2 \log(A)}{ (1-\gamma)^2 \epsilon^2}, \quad \lambda = 1, \quad \beta = \frac{\epsilon^2(1-\gamma)^2}{4W^2}, \\ & N \geq \frac{4W^2 \mathcal{I}_N(1)}{ (1-\gamma)^3 \epsilon^3 }, \quad \varepsilon_{stat} = \frac{ \epsilon^3 (1-\gamma)^3 }{ 4 \mathcal{I}_{N}(1)}, \end{align*} we have: \begin{align*} \max_{n\in [N]} V^n \geq V^{\widetilde\pi} - \frac{2\sqrt{A\varepsilon_{bias}}}{1-\gamma} - 4\epsilon. \end{align*} \label{thm:regret_stat_error} \end{theorem} \begin{proof} The theorem can be verified by substituting the values of the hyperparameters into~\pref{lem:regret_rmax_pg}. \end{proof} The above theorem indicates that we need to control the statistical error $\varepsilon_{stat}$ from linear regression to be on the order of $\widetilde{O}\left(\epsilon^{3}(1-\gamma)^3\right)$. Recall that $M$ is the total number of samples used for each linear regression. If $\varepsilon_{stat} = \widetilde{O}\left(1/\sqrt{M}\right)$, then we roughly need $M$ to be on the order of $\widetilde{\Omega}\left( 1/(\epsilon^6 (1-\gamma)^6) \right)$. Since we perform an on-policy fit at each iteration $t$ inside each episode $n$, we pay a total number of samples on the order of $M\times (TN)$. Another source of samples is the samples used to estimate the covariance matrices $\Sigma^n$. As $\phi$ could be infinite-dimensional, we need matrix concentration without explicit dependence on the dimension of $\phi$. Leveraging the matrix Bernstein inequality with matrix intrinsic dimension, the following lemma shows concentration of $\widehat\Sigma^n$ around $\Sigma^n$, and of $\widehat\Sigma_\mix^n$ around $\Sigma_\mix^n$. \begin{lemma}[Estimating Covariance Matrices] Set $\lambda = 1$. Define $\widehat{d}$ as: \begin{align*} \widehat{d} = \max_{\pi\in\Delta(\Pi)} \mathrm{tr}\left( \Sigma^{\pi} \right)/\| \Sigma^{\pi}\|, \end{align*} i.e., the maximum intrinsic dimension of the covariance matrix from a mixture policy. For $K \geq 32 N^2 \ln\left(\widehat{d}N/\delta\right)$ (a parameter in \pref{alg:epoc}), with probability at least $1-\delta$, for any $n\in [N]$ we have, for all $x$ with $\|x\| \leq 1$, \begin{align*} (1/2)x^{\top}\left( \Sigma_\mix^n \right)^{-1} x\leq x^{\top}\left( \widehat\Sigma_\mix^n \right)^{-1} x \leq 2 x^{\top}\left( \Sigma_\mix^n \right)^{-1}x. \end{align*} \label{lem:concentration_cov} \end{lemma} \begin{proof} The proof follows directly from \pref{lem:inverse_covariance}. \end{proof} We are now ready to prove \pref{thm:detailed_bound_rmax_pg}. \begin{proof}[Proof of \pref{thm:detailed_bound_rmax_pg}] Assume the event in~\pref{lem:concentration_cov} holds.
We are now ready to prove Theorem~\ref{thm:detailed_bound_rmax_pg}. \begin{proof}[Proof of Theorem~\ref{thm:detailed_bound_rmax_pg}] Assume the event in~\pref{lem:concentration_cov} holds. In this case, we have for all $n\in [N]$, \begin{align*} (1/2) x^{\top} \left( {\Sigma}^n_\mix \right)^{-1} x \leq x^{\top}\left({\widehat\Sigma}^n_\mix\right)^{-1} x \leq 2 x^{\top} \left({\Sigma}^n_\mix\right)^{-1} x, \end{align*} for all $\|x\|\leq 1$, and the total number of samples used for estimating covariance matrices is: \begin{align} \label{eq:source_1} &N\times K = N \times \left( 32 N^2 \ln\left( \widehat{d} N /\delta \right)\right) = 32 N^3 \ln\left( \widehat{d} N /\delta \right) \\ &= \frac{(32\times 64)\mathcal{I}_N(1)^3 W^6}{ \epsilon^9(1-\gamma)^{9}} \ln\left( \frac{4\widehat{d} W^2 \mathcal{I}_N(1) }{(1-\gamma)^3 \epsilon^3 \delta} \right) = \frac{c_1 \nu_1 \mathcal{I}_N(1)^3 W^6}{\epsilon^9(1-\gamma)^9}, \end{align} where $c_1$ is a constant and $\nu_1$ contains log terms: $\nu_1:= \ln\left( \frac{4\widehat{d} W^2 \mathcal{I}_N(1) }{(1-\gamma)^3 \epsilon^3 \delta} \right)$. Since known state-action pairs are defined by the condition $\phi(s,a)^{\top} \left( \widehat{\Sigma}_\mix^n \right)^{-1}\phi(s,a) \leq \beta$, we must have that for any $(s,a)\in\mathcal{K}^n$: \begin{align*} \phi(s,a)^{\top} \left( \Sigma^n_{\mix} \right)^{-1} \phi(s,a) \leq 2\beta, \end{align*} and for any $(s,a)\not\in\Kcal^n$: \begin{align*} \phi(s,a)^{\top} \left( \Sigma^n_{\mix} \right)^{-1} \phi(s,a) \geq \frac{1}{2} \beta. \end{align*} This allows us to call \pref{thm:regret_stat_error}. From \pref{thm:regret_stat_error}, we know that we need to set $M$ (the number of samples for linear regression) large enough such that \begin{align*} \varepsilon_{stat} = \frac{ \epsilon^3 (1-\gamma)^3 }{ 4 \mathcal{I}_{N}(1)}. \end{align*} Using \pref{lem:sgd_dim_free} for linear regression, we know that with probability at least $1-\delta$, for any $n,t$, $\varepsilon_{stat}$ scales on the order of: \begin{align*} \varepsilon_{stat} = \sqrt{\frac{ 9W^4 \log(N T/\delta) }{(1-\gamma)^4 M }}, \end{align*} where we have taken a union bound over all episodes $n\in [N]$ and all iterations $t\in [T]$. Solving for $M$, we have: \begin{align*} M = \frac{ 144 W^4 \mathcal{I}_N(1)^2 \ln(NT/\delta )}{\epsilon^6(1-\gamma)^{10}}. \end{align*} Considering every episode $n\in [N]$ and every iteration $t\in [T]$, the total number of samples needed for NPG is: \begin{align*} NT \cdot M & = \frac{ 4 W^2 \mathcal{I}_N(1) }{\epsilon^3 (1-\gamma)^3} \times \frac{4W^2 \log(A)}{(1-\gamma)^2 \epsilon^2} \times \frac{ 144 W^4 \mathcal{I}_N(1)^2 \ln(NT/\delta )}{\epsilon^6(1-\gamma)^{10}}\\ & = \frac{c_2 W^8 \mathcal{I}_N(1)^{3} \ln(A) }{ \epsilon^{11}(1-\gamma)^{15} } \cdot \ln\left( \frac{16 W^4 \ln(A) \mathcal{I}_N(1)}{ \epsilon^5(1-\gamma)^5 \delta} \right) = \frac{c_2 \nu_2 W^8 \mathcal{I}_N(1)^{3} \ln(A) }{ \epsilon^{11}(1-\gamma)^{15} }, \end{align*} where $c_2$ is a positive universal constant, and $\nu_2$ only contains log terms: \begin{align*} \nu_2 = \ln\left( \frac{16 W^4 \ln(A) \mathcal{I}_N(1)}{ \epsilon^5(1-\gamma)^5 \delta} \right). \end{align*} Combining the two sources of samples, we conclude that the total number of samples is bounded by: \begin{align*} \frac{c_2 \nu_2 W^8 \mathcal{I}_N(1)^{3} \ln(A) }{ \epsilon^{11}(1-\gamma)^{15} } + \frac{c_1 \nu_1 \mathcal{I}_N(1)^3 W^6}{ \epsilon^9(1-\gamma)^{9}}. \end{align*} This concludes the proof. \end{proof}
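As a sanity check on the bookkeeping in the proof above, the following small helper tabulates the two sample sources under the hyperparameter choices of \pref{thm:regret_stat_error}, treating the information gain $\mathcal{I}_N(1)$ and the intrinsic dimension $\widehat{d}$ as inputs; it is only a sketch of the arithmetic (with the constants from the proof), not a tuned implementation.
\begin{verbatim}
import math

def sample_counts(eps, gamma, W, A, I, d_hat, delta):
    """Tally the two sample sources in the proof of the main theorem."""
    T = 4 * W**2 * math.log(A) / ((1 - gamma)**2 * eps**2)  # NPG iterations per episode
    N = 4 * W**2 * I / ((1 - gamma)**3 * eps**3)            # number of episodes
    K = 32 * N**2 * math.log(d_hat * N / delta)             # covariance samples per episode
    M = 144 * W**4 * I**2 * math.log(N * T / delta) / (eps**6 * (1 - gamma)**10)
    return {"covariance": N * K, "npg_regression": N * T * M}

print(sample_counts(eps=0.1, gamma=0.9, W=1.0, A=4, I=10.0, d_hat=50, delta=0.05))
\end{verbatim}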
\section{Analysis of EPOC\xspace for Linear MDPs (\pref{thm:linear_mdp})} \label{app:app_to_linear_mdp} For linear MDP $\mathcal{M}$, recall that we assume the following parameters' norms are bounded: \begin{align*} \|v^{\top}\mu \| \leq \xi \in\mathbb{R}^+, \quad \|\theta\| \leq \omega\in\mathbb{R}^+, \quad \forall v, \text{ s.t. } \|v\|_{\infty} \leq 1. \end{align*} With these bounds on the linear MDP's parameters, we can show that for any policy $\pi$, we have $Q^{\pi}(s,a) = w^{\pi}\cdot \phi(s,a)$, with $\|w^{\pi}\|\leq \omega + V_{\max} \xi $, where $V_{\max} = \max_{\pi,s}V^{\pi}(s)$ is the maximum possible expected total value ($V_{\max}$ is at most $r_{\max}/(1-\gamma)$ with $r_{\max}$ being the maximum possible immediate reward). At every episode $n$, recall that NPG is optimizing the MDP $\mathcal{M}_{b^n} = \{P, r(s,a) + b^n(s,a)\}$ with $P, r$ being the true transition and reward of $\mathcal{M}$, which is linear under $\phi(s,a)$. Due to the reward bonus $b^n(s,a)$ in $\mathcal{M}_{b^n}$, $\mathcal{M}_{b^n}$ is not necessarily a linear MDP under $\phi(s,a)$ ($P$ is still linear under $\phi$, but $r(s,a)+b^n(s,a)$ is not linear anymore). Here we leverage the observation that $b^n(s,a)$ is known to us (as we designed it), and that $Q^{\pi}(s,a;r+b^n) - b^n(s,a)$ is linear with respect to $\phi$ for any $(s,a)\in \mathcal{S}\times\mathcal{A}$. The following claim states this observation formally. \begin{claim}[Linear Property of $(Q^{\pi}(s,a;r+b^n) - b^n(s,a))$ under $\phi$] Consider any policy $\pi$ and any reward bonus $b^n(s,a)\in [0,1/(1-\gamma)]$. We have that: \begin{align*} Q^{\pi}(s,a ; r+b^n) - b^n(s,a) = w\cdot \phi(s,a), \forall s,a. \end{align*} Further, we have $\| w\| \leq \omega + \xi / (1-\gamma)^2$. \label{claim:linear_property} \end{claim} \begin{proof} By the definition of the $Q$-function, we have: \begin{align*} Q^{\pi}(s,a ; r + b^n) & = r(s,a) + b^n(s,a) + \gamma \phi(s,a)^{\top} \sum_{s'}\mu(s') V^{\pi}(s'; r+b^n) \\ & = b^n(s,a) + \phi(s,a)\cdot \left( \theta + \gamma \mu^{\top} V^{\pi}(\cdot; r+b^n) \right) := b^n(s,a) + \phi(s,a)\cdot w, \end{align*} where note that $w$ is independent of $(s,a)$. Rearranging terms, we see that $Q^{\pi}(s,a; r+b^n) - b^n(s,a) = w\cdot \phi(s,a)$. Further, using the norm bounds we have for $\theta$ and $\mu$, and the fact that $\|V^{\pi}(\cdot; r+b^n)\|_{\infty} \leq 1/(1-\gamma)^2$, we conclude the proof. \end{proof} The above claim supports our specific choice of critic $\widehat{A}^t_{b^n}$ in the algorithm, where we recall that we perform linear regression from $\phi(s,a)$ to $Q^{\pi}_{b^n}(s,a) - b^n(s,a)$, and set $\widehat{A}^t_{b^n}(s,a)$ as \begin{align*} & \widehat{A}^t_{b^n}(s,a) = \left(b^n(s,a) + \theta^t\cdot \phi(s,a)\right) - \ensuremath{\mathbb{E}}_{a' \sim \pi^t_s}[ b^n(s,a') + \theta^t \cdot \phi(s,a')] \\ & := \bar{b}^{n,t}(s,a) + \theta^t \cdot \bar\phi^t(s,a), \end{align*} where $\bar{b}^{n,t}(s,a) = b^n(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^t_s} b^n(s,a')$, and $\bar\phi^t(s,a) = \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^t_s} \phi(s,a')$; a schematic of this construction is sketched below.
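The following is a minimal sketch of how this modified critic is assembled from the regression output $\theta^t$, assuming a finite action set so that the expectation over $a'\sim \pi^t(\cdot|s)$ is an explicit sum; the function name and argument layout are illustrative assumptions, not the algorithm's pseudocode.
\begin{verbatim}
import numpy as np

def critic_A_hat(theta, pi_s, phi_s, b_s, a):
    """A_hat(s,a) = (b(s,a) + theta.phi(s,a))
                    - E_{a'~pi(.|s)}[b(s,a') + theta.phi(s,a')].

    theta: regression output, shape (d,)
    pi_s:  current policy pi^t(.|s), shape (A,)
    phi_s: features phi(s,a') for all actions, shape (A, d)
    b_s:   bonuses b^n(s,a') for all actions, shape (A,)
    a:     index of the queried action
    """
    q_all = b_s + phi_s @ theta      # b(s,a') + theta . phi(s,a') for every action a'
    return q_all[a] - pi_s @ q_all   # subtract the policy-weighted baseline

# usage with A = 3 actions and d = 4 features
rng = np.random.default_rng(2)
print(critic_A_hat(rng.normal(size=4), np.array([0.2, 0.5, 0.3]),
                   rng.normal(size=(3, 4)), np.array([0.0, 1.0, 0.0]), a=1))
\end{verbatim}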
We now prove \pref{thm:linear_mdp} by showing that $\epsilon_{bias}$ is zero. \begin{lemma} Consider \pref{ass:transfer_bias}. For any episode $n$ and iteration $t$, we have $\epsilon_{bias} = 0$. \end{lemma} \begin{proof} At iteration $t$, denote $\theta^t_\star$ as the linear parameterization of $Q^{\pi^t}_{b^n}(s,a) - b^n(s,a)$, i.e., $\theta^t_\star\cdot\phi(s,a) = Q^{\pi^t}_{b^n}(s,a) - b^n(s,a)$ (see \pref{claim:linear_property} for the existence of $\theta^t_\star$). We know that $\theta^t_\star \in \argmin_{\theta:\|\theta\| \leq W} L( \theta; \rho^n_\mix, Q^t_{b^n} - b^n ) $, as $L(\theta^t_\star; \rho^n_\mix, Q^t_{b^n} - b^n) = 0$. This indicates that $\theta^t_\star$ is one of the best on-policy fits. Now when we transfer $\theta^t_\star$ to a different distribution $d^{\pi^\star}\text{Unif}_{\mathcal{A}}$, we simply have: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\pi^\star}\text{Unif}_{\mathcal{A}}} \left( \theta^t_\star \cdot \phi(s,a) - \left( Q^t_{b^n}(s,a) - b^n(s,a) \right) \right)^2 = 0. \end{align*} This concludes the proof. \end{proof} We can now conclude the proof of \pref{thm:linear_mdp} by invoking \pref{thm:agnostic} with $\epsilon_{bias} = 0$. \qed \iffalse With the above setup, we now state a bias-variance tradeoff lemma for the linear MDP case that is analogous to \pref{lem:variance_bias_n} for the more general setting. \begin{lemma}[Variance and Bias Tradeoff for linear MDP] \label{lem:variance_bias_linear} Set $W = \omega + \xi / (1-\gamma)^2$. Assume that at episode $n$ we have $\phi(s,a)^{\top}\left(\Sigma_{\mix}^{n}\right)^{-1}\phi(s,a) \leq \beta$ for $(s,a)\in\mathcal{K}^n$. At iteration $t$, denote $\theta^t_\star$ as the linear parameterization of $Q^{\pi^t}_{b^n}(s,a) - b^n(s,a)$, i.e., $\theta^t_\star\cdot\phi(s,a) =Q^{\pi^t}_{b^n}(s,a) - b^n(s,a)$ (see \pref{claim:linear_property}). Assume the following condition regarding linear regression is true for all $t\in [T]$: \begin{align*} L^t\left( \theta^t ;\rho^n_\mix, Q^t_{b^n} - b^n \right) \leq \min_{\theta:\|\theta \|\leq W} L^t(\theta; \rho^n_\mix, Q^t_{b^n} - b^n) + \epsilon_{stat}, \end{align*} where recall $L^t(\theta; \nu, f) := \ensuremath{\mathbb{E}}_{(s,a)\sim \nu} \left[ \theta\cdot\phi(s,a) - f(s,a) \right]^2$, and $\epsilon_{stat} \in\mathbb{R}^+$. Then we have that for all $t\in [T]$: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \widehat{A}^t_{{b^n}}(s,a) \right) \one\{s\in\mathcal{K}^n\}\leq 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta n \varepsilon_{stat} }. \end{align*} \end{lemma} \begin{proof} The proof of the above lemma is almost identical to the proof of \pref{lem:variance_bias_n}. First by \pref{claim:linear_property}, we know that $\theta^t_\star \in \argmin_{\theta:\|\theta\| \leq W} L( \theta; \rho^n_\mix, Q^t_{b^n} - b^n ) $, as $L(\theta^t_\star; \rho^n_\mix, Q^t_{b^n} - b^n) = 0$. Now use the first order optimality condition of $\theta^t_\star$, and the statistical learning guarantee condition of $\theta^t$, we arrive: \begin{align*} \ensuremath{\mathbb{E}}_{sa\sim \rho^n_\mix} \left( \phi(s,a)\cdot (\theta^t - \theta^t_\star) \right)^2 \leq \epsilon_{stat}, \end{align*} which gives a point-wise prediction guarantee: \begin{align*} \left\lvert \phi(s,a) \cdot (\theta^t - \theta^t_\star) \right\rvert \leq \sqrt{ \beta n \epsilon_{stat} + \lambda W^2 }. \end{align*} \wen{the derivation below uses new def of critic $\hat{A}^t_{b^n}..$} Now we bound $\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \widehat{A}^t_{{b^n}}(s,a)\right)\one\{s\in\mathcal{K}^n\}$ as follows.
\begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}_{b^n}}}\left( A^t_{{b^n}}(s,a) - \ensuremath{\widehat w}{A}^t_{{b^n}}(s,a)\right)\one\{s\in\mathcal{K}^n\} \\ &= \underbrace{\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - ( \bar{b}^{n,t}(s,a) + \theta^t_\star\cdot\bar\phi^t (s,a) ) \right)\one\{s\in\mathcal{K}^n\}}_{\text{term A}} \\ & \qquad + \underbrace{\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( (\bar{b}^{n,t}(s,a) + \theta^t_\star \cdot\bar\phi^t(s,a)) - ( \bar{b}^{n,t}(s,a) + \theta^t\cdot\bar\phi^t(s,a)) \right)\one\{s\in\mathcal{K}^n\}}_{\text{term B}}. \end{align*} Here note that by definition, $ \bar{b}^{n,t}(s,a) + \theta^t_\star\cdot\bar\phi^t (s,a) = b^n(s,a) + \theta^t_\star\cdot \phi(s,a) - \ensuremath{\mathbb{E}}_{s'\sim \pi^t_s} (b^n(s,a') + \theta^t_\star\cdot \phi(s,a') ) = Q^t_{b^n}(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^t_s} Q^t_{b^n}(s,a') = A^t_{b^n}(s,a)$. This means that \emph{term A} = 0. Now we can use the pointwise estimation error to bound term $B$ above. We immediately notice that in term B, $\bar{b}^n(s,a)$ cancel, and we have: \begin{align*} & \text{term B} = \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( \theta^t_\star \cdot\bar\phi^t(s,a) - \theta^t\cdot\bar\phi^t(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( \theta^t_\star \phi(s,a) - \theta^t\cdot\phi(s,a) \right)\one\{s\in\mathcal{K}^n\} \\ & \qquad - \ensuremath{\mathbb{E}}_{s\sim \widetilde{d}_{\mathcal{M}^n}}\ensuremath{\mathbb{E}}_{a\sim \pi^t} \one\{s\in\mathcal{K}^n\}\left( \theta^t_\star \phi(s,a) - \theta^t\cdot\phi(s,a) \right) \leq 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta n \epsilon_{stat} }, \end{align*} which concludes the proof. \end{proof} \fi \input{app_state_act_aggregation_2} \iffalse \section{Analysis for State-Aggregation (\pref{thm:state_aggregation})} \label{app:state_agg} In this section, we analyze \pref{thm:state_aggregation} for state-aggregation. Different from the Linear MDP case where it is required to change the critic fitting procedure, we do not need to perform any modification of algorithm here. Hence, we are going to leverage the general theorem~\ref{thm:detailed_bound_rmax_pg} and bound the transfer bias $\epsilon_{bias}$ using aggregation errors . First recall the definition of state aggregation $\phi:\mathcal{X} \to\mathcal{Z}$. We abuse the notation a bit, and denote $\phi(s,a) = \one\{\phi(s) = z, a\} \in\mathbb{R}^{|\mathcal{Z}||\mathcal{A}|}$, i.e., the feature vector $\phi$ indicates which $z$ the state $x$ is mapped to. The following claim reasons the approximation of $Q$ values under state aggregation. \begin{claim} \label{claim:state_agg} Denote aggregation error $\epsilon_{z,a}$ as: \begin{align*} \max\left\{ \| P(\cdot | s,a) - P(\cdot | s', a) \|_1, \lvert r(s,a) - r(s',a) \rvert \right\} \leq \epsilon_{z,a}, \forall s,s', \text{ s.t., } \phi(s) = \phi(s') = z. \end{align*} Then, for any policy $\pi$, $s,s', a, z$, such that $\phi(s) = \phi(s') = z$, we have: \begin{align*} \left\lvert Q^{\pi}(s, a) - Q^{\pi}(s', a)\right\rvert \leq \frac{r_{\max} \epsilon_{z,a} }{1-\gamma}, \end{align*} where $r(s,a)\in [0, r_{\max}]$ for $r_{\max}\in \mathbb{R}^+$. 
\end{claim} \begin{proof} Starting from the definition of $Q^{\pi}$, we have: \begin{align*} &\left\lvert Q^{\pi}(s, a) - Q^{\pi}(s', a)\right\rvert = \lvert r(s,a) - r(s' ,a) \rvert + \gamma \lvert \ensuremath{\mathbb{E}}_{x'\sim P_{s,a}} V^{\pi}(s') - \ensuremath{\mathbb{E}}_{x'\sim P_{s',a}} V^{\pi}(s') \rvert \\ & \leq \epsilon_{z,a} + \frac{r_{\max}\gamma}{1-\gamma} \left\| P_{s,a} - P_{s',a} \right\|_1 \leq \frac{r_{\max}\epsilon_{z,a}}{1-\gamma}, \end{align*} where we use the assumption that $\phi(s) = \phi(s') = z$, and the fact that value function $\|V\|_{\infty} \leq r_{\max}/(1-\gamma)$ as $r(s,a)\in [0, r_{\max}]$. \end{proof} Recall the definition $\epsilon_{bias}(z) = \max_{a} \epsilon_{z,a}$. Now we can bound the transfer bias defined in \pref{ass:transfer_bias}. \begin{lemma}Throughout EPOC\xspace, consider any episode $n$ and iteration $t$ inside episode $n$, we have: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star}\one\{s\in\Kcal^n\}\left( A^t(s,a; r+b^n) - A^t_\star(s,a) \right) \leq \frac{2\ensuremath{\mathbb{E}}_{z\sim d} [\epsilon_{bias}(z)] }{ (1-\gamma)^2 }, \end{align*} where $A^t_\star(s,a) = \theta^t_\star\cdot \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^t_s} \theta^t_\star\cdot \phi(s,a')$, and $\theta^t_\star$ is a best on-policy fit: \begin{align*} \theta^t_\star \in \arg\min_{\|\theta\|\leq W}\ensuremath{\mathbb{E}}_{ (s,a)\sim \rho^n_{\mix}} \left(\theta\cdot\phi(s,a) - Q^t(s,a; r+b^n)\right)^2. \end{align*}\label{lem:bias_in_state_agg} \end{lemma} \begin{proof} Recall that notation $Q^t_{b^n}(s,a)$ is in short of $Q^t(s,a; r+b^n)$. First note that as $b^n(s,a) \in [0,1/(1-\gamma)]$, we must have $Q^t_{b^n}(s,a) \in [1, 1/(1-\gamma)^2]$. Second, for any $s,a$ such that $\phi(s) = \phi(s') = z$, we have $\phi(s,a) = \phi(s',a)$ which means that $(s,a)\in \Kcal^n$ if and only if $(s',a)\in\Kcal^n$ as their features are identical. This means that the reward misspecification assumption still holds under model $\mathcal{M}_{b^n}$, i.e., $\lvert r(s,a) + b^n(s,a) - r(s',a) - b^n(s',a) \rvert \leq \epsilon_{z,a}$. Now let us consider $\theta^t_{\star}$. Using the definition of the state aggregation, we have: \begin{align*} &\ensuremath{\mathbb{E}}_{ (s,a)\sim \rho^n_{\mix}} \left(\theta\cdot\phi(s,a) - Q^t_{b^n}(s,a)\right)^2 \\ &= \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_{\mix}} \sum_{z} \one\{\phi(s) = z\} \left( \theta_{z,a} - Q^t_{b^n}(s,a) \right)^2, \end{align*} which means that for $\theta^t_{\star}$, we have: \begin{align*} \sum_{s,a'} \rho^n_{\mix}(s,a') \one\{\phi(s) = z, a' = a \} \left( \theta_{z,a} - Q^t_{b^n}(s,a')\right) = 0, \end{align*} which implies that $\theta^t_{\star, z,a} := \frac{ \sum_{s,a'} \rho^n_{\mix}(s,a')\one\{\phi(s) = z, a' = a\} Q^t_{b^n}(s,a')}{ \sum_{s,a'}\rho^n_{\mix}(s,a')\one\{\phi(s) = z, a'= a\} } $. Hence, for any $s'' $ such that $\phi(s'') = z$, we must have: \begin{align*} &\left\lvert \theta_{\star, z,a} - Q^{t}_{b^n}(s'', a) \right\rvert \\ & \leq \left\vert \frac{ \sum_{s,a'} \rho^n_{\mix}(s,a')\one\{\phi(s) = z, a' = a\} Q^t_{b^n}(s'',a)}{ \sum_{s,a'}\rho^n_{\mix}(s,a')\one\{\phi(s) = z, a'= a\} } - Q^{t}_{b^n}(s'', a) \right\rvert \\ & \qquad + \left\lvert \frac{ \sum_{s,a'} \rho^n_{\mix}(s,a')\one\{\phi(s) = z, a' = a\} (Q^t_{b^n}(s'',a) - Q^t_{b^n}(s,a))}{ \sum_{s,a'}\rho^n_{\mix}(s,a')\one\{\phi(s) = z, a'= a\} } \right\rvert \\ & \leq \frac{\epsilon_{z,a}}{ (1-\gamma)^2 }, \end{align*} where we use Claim~\ref{claim:state_agg}. 
Note $|\theta^t_{\star,z,a}| \leq \frac{1}{(1-\gamma)^2}$ and $\|\theta^t_\star\|_2 \leq \sqrt{\frac{ZA}{(1-\gamma)^4}} := W$ in this case Now for any state-action distribution $d$, we will have: \begin{align} &\ensuremath{\mathbb{E}}_{(s,a)\sim d}\one\{s\in\Kcal^n\} \left(Q^t_{b^n}(s,a) -\theta^t_\star\cdot \phi(s,a) \right) \nonumber \\ & = \sum_{z} \ensuremath{\mathbb{E}}_{(s,a)\sim d} \one\{\phi(s) = z\}\one\{ s\in \Kcal^n \} \left( Q^t_{b^n}(s,a) - \theta^t_{\star,z,a} \right) \nonumber\\ & \leq \sum_{z} \ensuremath{\mathbb{E}}_{(s,a)\sim d} \one\{\phi(s) = z\}\one\{ s\in \Kcal^n \} \left\lvert Q^t_{b^n}(s,a) - \theta^t_{\star,z,a} \right\rvert \nonumber \\ & \leq \sum_{z} \ensuremath{\mathbb{E}}_{(s,a)\sim d} \one\{\phi(s) = z\} \one\{s\in\Kcal^n\} \frac{\epsilon_{z,a}}{(1-\gamma)^2} \leq \frac{\ensuremath{\mathbb{E}}_{(z,a)\sim d} \left[\epsilon_{z,a}\right] }{(1-\gamma)^2} \leq \frac{\ensuremath{\mathbb{E}}_{z\sim d} [\epsilon_{bias}(z)] }{ (1-\gamma)^2 } \label{eq:d_err_state_agg} \end{align} where $\epsilon_{bias}(z) = \max_{a} \epsilon_{z,a}$. Now for advantage functions, we have: \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim d^\star}\one\{s\in\Kcal^n\}\left( A^t_{b^n}(s,a) - A^t_\star(s,a) \right) \\ & = \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star}\one\{s\in\Kcal^n\}\left( Q^t_{b^n}(s,a) -\theta^t_\star\cdot \phi(s,a) \right) + \ensuremath{\mathbb{E}}_{s\sim d^\star, a\sim \pi^t_s} \left( Q^t_{b^n}(s,a) - \theta^t_\star\cdot \phi(s,a) \right). \end{align*} Using inequality~\ref{eq:d_err_state_agg} for both terms, we conclude the proof. \end{proof} The above lemma (\pref{lem:bias_in_state_agg}) essentially proves that $\varepsilon_{bias} = \frac{2\ensuremath{\mathbb{E}}_{z\sim d} [\epsilon_{bias}(z)] }{ (1-\gamma)^2 }$. Now call \pref{thm:detailed_bound_rmax_pg} with $\varepsilon_{bias} = \frac{2\ensuremath{\mathbb{E}}_{z\sim d} [\epsilon_{bias}(z)] }{ (1-\gamma)^2 }$, we conclude the proof of \pref{thm:state_aggregation}. \qed \fi \iffalse \subsection{Application to Reward-free Exploration in Linear MDPs} \label{app:reward_free_explore} We can run our algorithm in a reward free setting, i.e., $r(s,a) = 0$ for all $(s,a)$. In high level, under reward free setting, EPOC\xspace tries to explore the entire state-action space as quickly as possible. For notation simplicity, we focus on linear MDPs in this section (but similar agnostic result can be achieved using the concept of transfer bias). To do reward-free exploration, we need a slight modification: EPOC\xspace introduces a termination condition which indicates when we terminate the algorithm (i.e., when we have explored sufficiently except for those state-actions which are hard to reach under any policy). At the end of episode $n$, we check the following termination criteria: \begin{align} \label{eq:termination_critera} \sum_{(s,a)\not\in\Kcal^n} d^{\pi^{n+1}}(s,a) \leq \theta \in\mathbb{R}^+, \end{align} where $\theta$ is some function of accuracy parameter $\epsilon$ and horizon $(1-\gamma)$ which we will specifiy in the main theorem (\pref{thm:reward_free}) in this section. We terminate if Inequality~\ref{eq:termination_critera} is satisfied and EPOC\xspace outputs the policy cover $\{\pi^0,\dots, \pi^n \}$. We have the following guarantee for reward free exploration. 
\begin{theorem}[Sample Complexity of Reward-free Exploration] Set the parameters as follows: \begin{align*} &\beta = \frac{\epsilon^2(1-\gamma)^2}{4W^2}, \quad M = \frac{ 576 W^4 \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^6(1-\gamma)^{10}},\\ & \theta = (1-\gamma)\epsilon, \quad \lambda = 1, \quad T = \frac{4W^2 \log(A)}{(1-\gamma)^2 \epsilon^2}. \end{align*} With probability at least $1-\delta$, the algorithm terminates within at most $N$ episodes with $N \leq \frac{4\widetilde{d}}{\beta\theta}\log\left(\frac{4\widetilde{d}}{\lambda \beta\theta}\right)$, and upon termination, we identify a set of hard-to-reach state action pairs, i.e., $\overline\Kcal^N := \mathcal{S}\times\mathcal{A} \setminus \Kcal^N$, such that: \begin{align*} \max_{\pi\in\Pi_{linear}} \sum_{(s,a) \in \overline{\Kcal}^N} d^{\pi}(s,a) \leq 4\epsilon, \end{align*} with total number of samples: \begin{align*} \frac{c \nu W^8 \widetilde{d}^3 \ln(A)}{\epsilon^{11}(1-\gamma)^{15}}, \end{align*} where $c $ is a universal constant, and $\nu$ contains only log terms: \begin{align*} \nu & = \ln\left(\frac{W^2\widetilde{d}}{\epsilon^3 (1-\gamma)^3}\right)\ln\left(\frac{\widetilde{d} W^4\ln(A) }{\epsilon^5(1-\gamma)^5\delta} \ln\left( \frac{2\widetilde{d}W^2}{\epsilon^3(1-\gamma)^3} \right)\right) \ln^2\left( \frac{W^2\widetilde{d}}{\epsilon^3(1-\gamma)^3} \right)\\ & \qquad + \ln^3\left( \frac{W^2 \widetilde{d}}{\epsilon^3(1-\gamma)^3} \right) \ln\left( \frac{ \ensuremath{\widehat w}{d}\widetilde{d} }{ \epsilon^3(1-\gamma)^3\delta}\ln\left( \frac{W^2\widetilde{d}}{\epsilon^3(1-\gamma)^3} \right) \right). \end{align*} \label{thm:reward_free} \end{theorem} The benefit of reward-free exploration is that once exploration is done, one can efficient optimize any given reward functions. The next theorem shows that with the policy cover from EPOC\xspace, for any future non-zero reward we could have, we can run the classic NPG with the policy cover's induced state-action distribution as reset/initial distribution to achieve a near optimal policy. \begin{theorem}[Post NPG optimization with the Policy Cover from EPOC\xspace] Conditioned on \pref{thm:reward_free} holding, for the policy cover (denote $\rho^N_{\mix} = \frac{1}{N} \sum_i d^{\pi^i}$), given any non-zero reward $r'(s,a)\in [0,1]$, run NPG (\pref{alg:npg}), with $(\rho^N_{\mix}, b^N(s,a) = 0,\forall(s,a))$ as inputs, with parameters: \begin{align*} T = 4W^2 \log(A)/\epsilon^2, \quad M = \frac{ 576 W^4 \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^6(1-\gamma)^{10}}. \end{align*} Then with probability at least $1-\delta$, \pref{alg:npg} outputs a policy $\hat{\pi}$ such that: \begin{align*} \ensuremath{\mathbb{E}}\left[\sum_{t}\gamma^t r'(s_t,a_t) | \hat{\pi}, s_0\sim \mu_0\right] \geq \max_{\pi\in\Pi_{linear}}\ensuremath{\mathbb{E}}\left[\sum_{t}\gamma^t r'(s_t,a_t) | {\pi}, s_0\sim \mu_0\right] - O\left(\frac{ \epsilon }{(1-\gamma)^2} + \frac{W\epsilon}{1-\gamma}\right), \end{align*} with total number of samples \begin{align*} O\left(\frac{ W^6 \log(A) \widetilde{d}^2 }{\epsilon^8(1-\gamma)^{10}} \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 \right). \end{align*} \label{thm:reward_free_npg_opt} \end{theorem} The above theorem indicates that once a reward-free phase is done, EPOC\xspace identifies a hard-to-reach state-action subspace, and a policy cover which can be used as the reset distribution for NPG for any new given reward functions. 
Note that unlike prior results for NPG~\citep{geist2019theory,agarwal2019optimalityshani2019adaptive}, this is a global optimality guarantee without any distribution mismatch or concetrability coefficients since we have provided the algorithm with a favorable initial distribution from the reward-free exploration phase. \subsubsection{Proof of \pref{thm:reward_free} and \pref{thm:reward_free_npg_opt}} To prove the above main theorem, we first show that the algorithm will terminate with finite number of bounds. \begin{lemma}[Total Number of Episodes] Assume that for $n = 0,\dots N $, we have $\sum_{(s,a)\not\in\Kcal^n}(s,a) \geq \theta$ (i.e., algorithm does not terminate), we must have: \begin{align*} N \leq \frac{4\widetilde{d}}{\beta\theta} \log\left( \frac{4\widetilde{d}}{\lambda \beta\theta} \right). \end{align*} \end{lemma} \begin{proof}Based on the proof of \pref{lem:potential_argument_n}, we have that: \begin{align*} &\sum_{n=1}^N \beta \theta \leq \beta \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n+1}}\one\{(s,a)\not\in\Kcal^n\} \leq \sum_n \tilde{r}\left(\Sigma^{n+1} \left( \Sigma^n_{\mix}\right)^{-1}\right) \\ &\leq 2\log\det\left( \Sigma^N_{\mix} \right) \leq 2\widetilde{d} \log\left(N/\lambda + 1\right). \end{align*} which implies that: \begin{align*} N \leq \frac{2\widetilde{d}}{ \beta\theta}\log\left( N / \lambda + 1\right). \end{align*} This concludes the proof. \end{proof} Now we can prove the main theorem below. \begin{proof}[Proof of \pref{thm:reward_free}] Focus on the last iteration $N$ where the algorithm terminates. Using \pref{lem:perf_absorb}, we have that: \begin{align*} V^{N+1}_{\mathcal{M}^{N}} \leq V^{{N+1}}_{\mathcal{M}} + \frac{1}{1-\gamma} \sum_{(s,a)\not\in\Kcal^N} d^{N+1}(s,a) \leq V^{N+1}_{\mathcal{M}} + \frac{\theta}{1- \gamma} = \frac{\theta}{1-\gamma}, \end{align*} where the second inequality uses the termination criteria and the last equality uses the fact that $V^\pi_{M} = 0$ for any $\pi$ as $r(s,a) = 0$ for all $(s,a)$. Recall the definition of $\mathcal{M}^N$ from Item~\ref{item:mdp_3} and the NPG guarantee, we have for any comparator $\widetilde\pi$: \begin{align*} V^{\widetilde\pi^{N}}_{\mathcal{M}^N} - \frac{1}{1-\gamma}\left( 2W\sqrt{\frac{\log(A)}{T}} + 2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}} \right) \leq V^{N+1}_{\mathcal{M}^N} \leq \frac{\theta}{1-\gamma}. \end{align*} To link the above results to the maximum possible escaping probability, we define another MDP $\widetilde{\mathcal{M}}$ such that $\widetilde{\mathcal{M}}$ and $\mathcal{M}$ has the same transition dynamics, but $\widetilde{\mathcal{M}}$ has rewards $r(s,a) = 1$ for $(s,a)\not\in\Kcal^N$ and $r(s,a) = 0$ otherwise. We also construct the ``absorbing'' MDP $\widetilde{\mathcal{M}}^N$ analogous of $\mathcal{M}^n$ to $\mathcal{M}$. Note that for any policy $\widetilde{\pi}$, we have: \begin{align*} V^{\widetilde\pi}_{\widetilde{\mathcal{M}}} = \sum_{(s,a)\not\in\Kcal^N} d^{\widetilde\pi}(s,a), \quad V^{\widetilde{\pi}^N}_{\widetilde{\mathcal{M}}^N} \geq V^{\widetilde\pi}_{\widetilde{\mathcal{M}}}. \end{align*} Note that $V^{\tilde\pi^N}_{\widetilde{\mathcal{M}}^N} = V^{\widetilde{\pi}^N }_{{\mathcal{M}}^N}$, since $\widetilde\pi^N$ only picks $a^{\dagger}$ at $s\not\in\Kcal^N$ and ${\mathcal{M}}^N$ and $\widetilde{\mathcal{M}}^N$ only differ at $(s,a)\not\in\Kcal^N$ in terms of rewards. 
Combine the above results, we get: \begin{align*} &\sum_{(s,a)\not\in\Kcal^N} d^{\widetilde\pi}(s,a) = V^{\widetilde\pi}_{\widetilde{\mathcal{M}}} \leq V^{\widetilde\pi^N}_{\widetilde{\mathcal{M}}^N} = V^{\widetilde\pi^N}_{\mathcal{M}^N} \\ & \leq \frac{1}{1-\gamma}\left( 2W\sqrt{\frac{\log(A)}{T}} + 2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}} \right)+ \frac{\theta}{1-\gamma}. \end{align*} Take $\widetilde\pi = \arg\max_{\pi\in\Pi_{linear}} \sum_{(s,a)\not\in\Kcal^N} d^{\pi}(s,a)$ and set parameters based on the values proposed in the main theorem, we conclude that: \begin{align*} \max_{\pi\in\Pi_{linear}} \sum_{(s,a)\not\in\Kcal^N} d^\pi(s,a) \leq 4\epsilon. \end{align*} Now we calculate the sample complexity. Following the proof of \pref{thm:detailed_bound_rmax_pg}, we have that the total number of samples one use for constructing $\widehat\Sigma^n_{\mix}$ is: \begin{align*} &N \times \left(N^2 \ln\left( \ensuremath{\widehat w}{d} N /\delta \right)\right) = N^3 \ln\left( \ensuremath{\widehat w}{d} N /\delta \right) = \frac{c_1 \nu_1 \widetilde{d}^3 W^6}{\epsilon^9(1-\gamma)^9}, \end{align*} where $c_1$ is a constant and $\nu_1$ contains log-terms $\nu_1:= \ln^3\left( \frac{8W^2 \widetilde{d}}{\epsilon^3(1-\gamma)^3} \right) \ln\left( \frac{ 2\ensuremath{\widehat w}{d}\widetilde{d} }{ \epsilon^3(1-\gamma)^3\delta}\ln\left( \frac{8W^2\widetilde{d}}{\epsilon^3(1-\gamma)^3} \right) \right)$. The second source of sample complexity comes from on-policy fit. To derive $\varepsilon_{stat} = (1-\gamma)^3 \epsilon^3 / \widetilde{d}$, we need to set $M$ (the number of samples for on-policy fit in each iteration $t$ and episode $n$): \begin{align*} M = \frac{ 576 W^4 \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^6(1-\gamma)^{10}} \end{align*} Considering every episode $n\in [N]$ and every iteration $t\in [T]$, we have the total number of samples needed for NPG is: \begin{align*} NT \cdot M \end{align*} The rest of the caculation involves subistituing $M, N, T$ into the above expression and then combining two sources of samples together, exactly as what we did for the proof of \pref{thm:detailed_bound_rmax_pg} . \end{proof} Now once the learned the policy cover, given any non-zero reward $r(s,a)$, and any policy $\pi$, we can also show that by leveraging the policy cover we have upon termination, we can accurately estimate any policy's advantage on any state-action pair in the known set $\Kcal^N$ using the fact that for any $(s,a)\in\mathcal{K}^n$, we have $\phi(s,a)^{\top}\left(\Sigma^N_{\mix}\right)^{-1}\phi(s,a) \leq \beta$. \begin{lemma}[Policy Cover For On-Policy Critic Fit] Denote the policy cover's induced state-action distribution as $\rho^N_{\mix} = \frac{1}{N} \sum_i d^{\pi^i}$. Given any non-zero reward $r(s,a)\in [0,1]$, and any policy $\pi$, denote $\hat\theta$ as the approximate minimizer of $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^N_{\mix}}\left( \theta\cdot \phi(s,a) - Q^{\pi}(s,a) \right)^2$, i.e., $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^N_{\mix}}\left( \hat\theta\cdot \phi(s,a) - Q^{\pi}(s,a) \right)^2 \leq \varepsilon_{stat}$. We have that for any comparator $\widetilde\pi$: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\widetilde\pi}}\left( A^{\pi}(s,a) - \hat\theta\cdot\overline{\phi}(s,a) \right)\one\{(s,a)\in\Kcal^N\} \leq 2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}}, \end{align*} where $\overline\phi(s,a) = \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi_s}\phi(s,a')$. 
\label{lem:policy_cover_fit} \end{lemma} \begin{proof} Denote $Q^\pi(s,a):= \theta_\star\cdot \phi(s,a)$. For any state-action pair in $\Kcal^N$, following the similar derivation we have in the proof of \pref{lem:variance_bias_n}, we thus have: \begin{align*} \left\lvert \phi(s,a) \cdot \left( \theta_\star - \hat\theta \right) \right\rvert \leq \sqrt{\beta \lambda W^2 } + \sqrt{\beta N \varepsilon_{stat}}, \quad \forall (s,a)\in\Kcal^N. \end{align*} As the above bound is state-action wise, thus for any policy distribution $d^{\widetilde\pi}$, for $A^\pi$, we have that: \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\widetilde\pi}} \left( A^\pi(s,a) - \hat\theta\cdot \overline\phi(s,a) \right)\one\{(s,a)\in \Kcal^N\} \leq2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}}. \end{align* This concludes the proof. \end{proof} \begin{proof}[Proof of \pref{thm:reward_free_npg_opt}] Now for the performance of NPG on a new reward function $r'$ with $\rho_{\mix}^N$ as the reset distribution, from \pref{lem:policy_cover_fit}, we have that for any $\pi^t$ generated during the NPG's run, we have: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}}\left( A^{t}(s,a) - \hat\theta^t \cdot\overline{\phi}^n(s,a) \right)\one\{(s,a)\in\Kcal^N\} \leq 2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}}. \end{align*} With a performance difference lemma application, we have: \begin{align*} &V^{\pi^\star} - V^{\pi^t} = \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} A^{t}(s,a) \\ & = \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} A^{t}(s,a) \one\{(s,a)\in\Kcal^N\} + \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} A^{t}(s,a)\one\{(s,a)\not\in\Kcal^N\}\\ & \leq \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} A^{t}(s,a) \one\{(s,a)\in\Kcal^N\} + \frac{4\epsilon}{(1-\gamma)^2} \\ & \leq \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} \widehat{A}^{t}(s,a) \one\{(s,a)\in\Kcal^N\} + \frac{4\epsilon}{(1-\gamma)^2} + \frac{2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}}}{1-\gamma} \end{align*} where we use $\max_{\pi} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\pi}} \one\{(s,a)\not\in\Kcal^N\} \leq \epsilon$ (recall $\overline{\Kcal}^N := \mathcal{S}\times\mathcal{A} \setminus \Kcal^N$). As NPG is performing the following update rule, \begin{align*} \pi^{t+1}(\cdot |s) \propto \pi^t(\cdot | s) \exp\left(\eta \widehat{A}^t (s,a) \right), \end{align*} with a Mirror Descent analysis, we have: \begin{align*} \sum_{t=1}^{T} \ensuremath{\mathbb{E}}_{a\sim \pi^\star_s} \widehat{A}^t(s,a) \leq 2W \sqrt{\log(A) T }. \end{align*} Add expectation with respect to $d^\star$ on both sides, we get: \begin{align*} \sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a) = \sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a)\one\{(s,a)\in\mathcal{K}^n \} + \sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a)\one\{(s,a)\not\in\mathcal{K}^N\} \end{align*} which implies that: \begin{align*} &\sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a)\one\{(s,a)\in\mathcal{K}^n \} \leq 2W\sqrt{\log(A)T} - \sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a)\one\{(s,a)\not\in\mathcal{K}^N\} \\ & \leq 2W\sqrt{\log(A) T} + T\epsilon \max_{s,a} | \widehat{A}^t(s,a) | \leq 2W\sqrt{\log(A)T} + TW \epsilon. 
\end{align*} This leads to the following result: \begin{align*} \sum_{t} \left(V^{\pi^\star} - V^{\pi^t}\right)/T \leq \frac{1}{1-\gamma}\left( 2W\sqrt{\log(A)/T} + \frac{4\epsilon}{(1-\gamma)} + W\epsilon + {2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}}}\right), \end{align*} where we abuse the notation a bit by denoting $V^{\pi}$ as the expected total reward of $\pi$ under the new reward function $r'$. With $T:= 4W^2\log(A)/\epsilon^2$, and the values of $\beta, \lambda, M, \varepsilon_{stat}$ defined in \pref{thm:reward_free}, we can simplify the above inequality to: \begin{align*} \sum_{t} \left(V^{\pi^\star} - V^{\pi^t}\right)/T \leq O\left( \frac{\epsilon}{(1-\gamma)^2} + \frac{W\epsilon}{1-\gamma}\right). \end{align*} The total number of samples one needs is: \begin{align*} &M \cdot T = \frac{ 576 W^4 \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^6(1-\gamma)^{10}} \frac{4W^2\log(A)}{\epsilon^2} \\ & = \frac{ 2304 W^6 \log(A) \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^8(1-\gamma)^{10}} \end{align*} This concludes the proof. \end{proof} \fi \section{Analysis of EPOC\xspace for the Partially Well-specified Models (Corollary~\pref{cora:agnostic})} \label{app:examples} \begin{proof}[Proof of Corollary \pref{cora:agnostic}] The proof involves showing that the transfer error is $0$. Specifically, we will show the following: for any state-action distribution $\rho$, any policy $\pi$, and any bonus function $b$ with bounded values $b(s,a)$, there exists $\theta_\star$, one of the best on-policy fits, i.e., $\theta_\star\in \arg\min_{\theta:\|\theta\|\leq W} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho}\left( \theta\cdot \phi(s,a) - (Q^{\pi}(s,a) - b(s,a)) \right)^2 $, such that: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a) \sim d^\star} \left(Q^{\pi}(s,a) - b(s,a) - \theta_\star\cdot\phi(s,a)\right)^2 = 0, \end{align*} i.e., the transfer error is zero. Let us denote a minimizer of $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho}\left( \theta\cdot \phi(s,a) -b(s,a) - Q^{\pi}(s,a) \right)^2 $ as $\widetilde{\theta}$. We can modify the first three bits of $\widetilde{\theta}$: we set $\widetilde{\theta}_1 = Q^{\pi}(s_0, L) - b(s_0,L) = 1/2 - b(s_0,L)$, $\widetilde{\theta}_2 = Q^{\pi}(s_0, R) - b(s_0,R)$, and $\widetilde{\theta}_3 = Q^{\pi}(s_1, a) -b(s_1,a) = - b(s_1,a)$ for any $a\in\{L, R\}$. Denote this new vector as $\theta_\star$. Note that due to the construction of $\phi$ (the feature vectors associated with states inside the binary tree are orthogonal to the span of $e_1,e_2,e_3$), $\theta_\star$ will not change any prediction error for states inside the binary tree under $(s_0, R)$, and will only bring down the prediction error for $(s_0, a)$ for $a\in\{L,R\}$ and $(s_1, a)$ for $a\in\{L,R\}$. Hence $\theta_\star$ is also a minimizer of $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho}\left( \theta\cdot \phi(s,a) - (Q^{\pi}(s,a) -b(s,a)) \right)^2 $. Moreover, by construction, $\theta_\star\cdot\phi(s_0, a) = Q^{\pi}(s_0,a) - b(s_0,a)$ and $\theta_\star\cdot \phi(s_1, a) = Q^{\pi}(s_1,a) - b(s_1,a)$ for $a\in\{L, R\}$.
Since $\pi^\star$ only visits $s_0$ and $s_1$, we can conclude that $\ensuremath{\mathbb{E}}_{(s,a) \sim d^\star} \left( Q^{\pi}(s,a) - b(s,a) - \theta_\star\cdot\phi(s,a)\right)^2 = 0$. With $\varepsilon_{bias} = 0$, we can conclude the proof by recalling \pref{thm:detailed_bound_rmax_pg}. \end{proof} \iffalse \subsection{Failures of Bellman backup based Algorithm} We now informally argue that all of the prior and provable algorithms will not succeed on this example; this is due to that they are all based on using Bellman backups (e.g., Q-learning and model based approaches). Note that the feature representation $\phi$ could be arbitrarily pathological inside the binary tree below $(s_0, R)$ (e.g. it could be some neural net or misspecified state aggregation). This reason is due to that any algorithm which satisfies the following two conditions will fail. These two conditions are: \begin{itemize} \item (Bellman Consistency) The algorithm does value based backups, with the property that it does an exact backup if this is possible. Note that due to our construction, all aforementioned prior algorithms will do an exact backup for $Q(s_0,R)$, where they estimate $Q(s_0,R)$ to be their value estimate on the subtree. This is due to that the feature $\phi(s_0,R)$ is orthogonal to all other features, so a $0$ error, Bellman backup is possible, without altering estimation in any other part of the tree. \item (One Sided Error in Agnostic Learning) Suppose the true value of the subtree is less than $1/2-\Delta$, and suppose that there exists a set of features where the algorithm approximates the value of the subtree to be larger than $1/2$. Note that our feature set is arbitrary, and all known algorithms are not guaranteed to return values with one side error. In fact, it is not difficult to provide examples where other algorithms fail in this manner. In fact, $\Delta$ can made $O(1)$ for other algorithms. \end{itemize} With these two properties, the best another algorithm can obtain is $1/2-\Delta$ value. This is due to that at $(s_0,R)$ a perfect backup occurs. Importantly, there are no guarantees of any algorithm with function approximation that are not globally $\ell_\infty$ in nature. Furthermore, for specific algorithms, it is not difficult to find cases where they will fail in our construction due to the arbitrary nature of the subtree. Note that EPOC\xspace will also fail in the subtree. However, by not using Bellman backups, EPOC\xspace will correctly realize that the value of going right is worst than going left. In this sense, our work provides a unique guarantee with respect to model misspecification in the RL setting. \subsection{Agnostic Result for EPOC\xspace}\label{app:agnostic} Note that nowhere in our construction did we rely on the optimal value being $1/2$. In fact, the optimal value could be near $O(1/1-\gamma)$, if there is a policy obtaining high reward in the right subtree. However, EPOC\xspace is guaranteed to obtain a reward of at least $1/2$. This is due to the transfer error being defined in a local sense; it is defined with respect to the policy that takes the left action at $s_0$, and EPOC\xspace can compete against any comparator policy. \subsection{Failure of relying on Concentrability (and Distribution Mismatch) Coefficients} We can easily extend the left most branch to be of length $H$, along with making this a fully balanced tree at $s_0$. With this modification, the concentrability coefficient will be $O(2^H)$. 
Furthermore, it is not obvious how to obtain a policy with good coverage (i.e. how to obtain a policy with a distribution mismatch coefficient which is less than $O(2^H)$, unless an oracle gives us such a distribution); this is due to that the initial state distribution is started on $s_0$. See~\cite{agarwal2019optimality,geist2019theory,Scherrer:API} for further discussion. EPOC\xspace can still succeed with this modification, in the following sense: With a balanced tree, we can place a reward of $1/2$ on the leaf of the left most branch. We will also add $O(H)$ features, where these features well approximate all values on this left branch and all other features are orthogonal to these features, just as in our current construction. Here, again, EPOC\xspace will succeed as we have zero transfer error along the left most path. For the same reasons as above, due to a poor distribution mistmatch coefficient, there is no reason to believe policy gradient methods will succeed. \fi \iffalse \section{Detailed examples with Misspecified Models} \label{app:examples} In this section, we discuss examples where we show that Bellman backup based approaches will fail while EPOC\xspace will succeed. \subsection{A Simple Example on LinearMDPs v.s. Our Misspecified Result} Let us first now provide perhaps the simplest example where $\varepsilon_{bias}=0$ and where the MDP is not a linear MDP; generalizations of this example is considered in \pref{app:examples}. For this simple example, it is not at all evident that algorithms of~\cite{yang2019sample,yang2019reinforcement,jin2019provably} will succeed. Suppose that there are two actions in the MDP; that the optimal policy is $\pi^*(a_0=1|s_0)=1$ and that under this optimal policy, the agent stays at starting state $s_0$ with probability one. Say that $d$-dimensional feature representation is $\phi(s_0,a_0=1)=e_1$ (the first standard basis vector), that $\phi(s_0,a_0=2)=e_2$, and for all other states $s\neq s_0$, that $\phi(s,a)$ is orthogonal to $e_1$ and $e_2$, i.e. $\phi(s,a)$ has zeros on the first two coordinates. This implies $\varepsilon_{bias}=0$, since we always fit the advantages at state $s_0$ perfectly. However, it is not evident a $Q$-learning based approach or a model based approach will succeed; note that when the agent takes action $a_0= 2$ at $s_0$ the agent could move into the rest of the MDP, where there may be approximation error. For our approach, the on-policy nature of EPOC\xspace provably ensures success. \subsection{Example: Deterministic Binary Tree} \label{app:binary_tree} This example expands and generalizes the discussion in the previous section. Specifically, the constructed MDP is a deterministic MDP arranged in a completely balanced binary tree with $2^H$ states (where $H$ is the depth) and 2 actions together with feature vector $\phi$ with $\phi(s,a)\in\mathbb{R}^d$ with $d = \Theta(\text{poly}(H))$. The constructed MDP has the following properties: \begin{itemize} \item it is not a linear MDP (unless the dimension is exponential in the depth $H$, i.e. $d=\Omega(2^H)$); \item algorithms that rely on Bellman backup (e.g., Q learning) can fail; \item the example has large function approximation error, i.e. the worst-case, $\ell_{\infty}$, is large. % \end{itemize} \subsubsection{A Binary Tree MDP Construction } For simplicity, here we consider episodic finite horizon setting~\footnote{By taking $H=1/(1-\gamma)$, the claims formally hold in the discounted setting with only changes in constant factors.}. 
We consider a deterministic MDP with $2^H-1$ many states and two actions $L$ and $R$. The states are organized in a completely balanced binary tree, where at any state, action $L$ always leads to the left subtree and acton $R$ always leads to the right subtree. The root is at level $h = 1$, while the leaf is at level $h = H$. The initial state $s_0$ is the root of the tree. Note that the above MDP \emph{cannot be modeled as a linear MDP}, unless one uses feature vector $\phi \in \mathbb{R}^{2^H}$, i.e., feature vector with dimension scaling exponentially in horizon, which will result least square value iteration based approach \cite{jin2019provably} to have sample complexity scaling exponentially in $H$. Without loss of generality, for the left most leaf $s$, we assign reward $1/4$ to both $(s, L)$ and $(s,R)$. We assign reward zero for any other state-action pair. % Hence, the optimal policy $\pi^\star$ is to take left action $L$ all the way from the root to the bottom and the optimal policy's reward is 1/4. We denote this optimal path (the left most path in the tree) as $(s^\star_1, \dots, s^\star_H)$. Now we design feature presentation $\phi$. Let us consider $\phi(s,a) \in \mathbb{R}^d$ with $d > 2H$ and $d = \Theta(\text{poly}(H))$. We first design feature representation on the states along the optimal path. For $s_h^\star$ with $h\geq 1$, we have $\phi(s^\star_h, L)$ has zeros everywhere but one at the $(2h-1)$-th bit ($\phi_{2h-1}(s^\star_h, L) = 1$), and $\phi(s^\star_h, R)$ has zeros everywhere but one at the $2h$-th bit, i.e., $\phi_{2h}(s^\star_h, R) = 1$. Hence, given a weight vector $\theta\in\mathbb{R}^d$, for state $s^\star_h$ at level $h$, we have: \begin{align*} \theta\cdot \phi(s, L) = \theta_{2h-1}, \quad \theta\cdot \phi(s, R) = \theta_{2h}. \end{align*} Hence, for \emph{any} weight vector $\theta$ such that $\theta_{2h-1} = 1/4$ and $\theta_{2h} = 0$ for $h\in [H]$, we have $\phi(s^\star_h,L)\cdot \theta = Q^\star(s^\star_h, L)$ and $\phi(s^\star_h, R)\cdot \theta = Q^\star(s^\star_h, R)$ for any $s$ along the optimal path. Since for each state action pair $(s^\star_h, a)$ along the optimal path, we assign it a unique bit of the weight vector, i.e., $\theta_{2h-1}$ for $(s^\star_h, L)$ and $\theta_{2h}$ for $(s^\star_h, R)$, for any policy $\pi$, any weight vector such that $\theta_{2h-1} = Q^{\pi}(s,L)$ and $\theta_{2h} = Q^\pi(s, R)$ will achieve zero advantage prediction error under the state-action pairs visited by the optimal policy, i.e., \begin{align} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}}\left[ A^\pi(s,a) - \left( \theta\cdot\phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi_s}\theta\cdot\phi(s,a') \right) \right] = 0, \quad \forall \theta \text{ s.t., } \theta_{2h-1} = Q^\pi(s^\star_h, L), \theta_{2h} = Q^\pi(s^\star_h, R). \label{eq:transfer_error_zero} \end{align} For any other state $s$ that are not on the optimal path, we design its feature $\phi(s,a)$ such that $\phi_i(s,a) = 0$ for all $i \in [2H]$ ($\phi_i$ stands for the i-th entry of the vector $\phi$) and $a\in \{L, R\}$, i.e., the first 2H bits of $\phi(s,a)$ are zeros. Namely, the feature at states that are not on the optimal path is decoupled from the feature of the states on the optimal path. \subsubsection{Optimality of EPOC\xspace} Under this construction, we always have transfer bias equal to zero under $d^\star$. 
Namely, a vector $\theta$ may have large advantage prediction error under $d^\pi$ for some $\pi$, but as long as $\theta$ has the property shown in Eq.~\pref{eq:transfer_error_zero}, it has perfect advantage prediction under $d^\star$ (recall that the first 2H bits of $\theta$ can be adjusted without affecting the prediction on states that are not on the optimal path). \begin{lemma}[Zero Transfer Bias] Consider any policy $\pi$. There is one best on-policy linear predictor: \begin{align*} \theta^\star \in \argmin_{\theta: \theta_i\in [0,1], \forall i\in [d]} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\pi}} \left( \theta\cdot \phi(s,a) - Q^{\pi}(s,a) \right)^2, \end{align*} such that: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}}\left[ A^\pi(s,a) - \left( \theta^\star \cdot\phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi_s}\theta^\star\cdot\phi(s,a') \right) \right] = 0. \end{align*} \end{lemma} \begin{proof} We can split state space into two groups, $\mathcal{S}^\star = \{s^\star_h\}_{h=1}^H$ and the rest of the states $\overline{\mathcal{S}}^\star$. We have: \begin{align*} &\ensuremath{\mathbb{E}}_{(s,a)\sim d^\pi} \left( \theta\cdot\phi(s,a) - Q^\pi(s,a) \right)^2 \\ &= \underbrace{\sum_{s\in\mathcal{S}^\star} \sum_{a\in \{L,R\}} d^\pi(s, a) \left( \theta\cdot\phi(s,a) - Q^\pi(s,a) \right)^2}_{\text{a}} + \underbrace{\sum_{s\in\overline\mathcal{S}^\star}\sum_{a\in\{L,R\}} d^\pi(s, a)\left( \theta\cdot\phi(s,a) - Q^\pi(s,a) \right)^2}_{\text{b}} \end{align*} Consider an arbitrary minimizer $\theta^\star$, i.e., $\theta^\star \in \argmin_{\theta: \theta_i\in [0,1]} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\pi}} \left( \theta\cdot \phi(s,a) - Q^{\pi}(s,a) \right)^2$. We can always modify $\theta^\star$ to $\widetilde{\theta}^\star$ such that $\widetilde\theta^\star_{2h-1} = Q^\pi(s^\star_h, L)$ and $\widetilde\theta^\star_{2h} = Q^\pi(s^\star_h, R)$. Note that this modification makes term $a$ above equal to zero, and does not change the value of the term $b$, as for any state $s\in\overline\mathcal{S}^\star$, $\phi(s,a)$ has zeros in the first 2H bits of its feature with both actions. Hence the modification does not change the value of the prediction. Note that $\widetilde\theta^\star$ still satisfies the convex constraints $\theta: \theta_i\in [0,1] $ for all $i\in [d]$. Hence, $\widetilde\theta^\star$ is the minimizer $\theta^\star \in \argmin_{\theta: \theta_i\in [0,1]} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\pi}} \left( \theta\cdot \phi(s,a) - Q^{\pi}(s,a) \right)^2$. Since $\widetilde\theta^\star$ satisfies the condition in Eq~\ref{eq:transfer_error_zero}, we must have that its prediction error for the advantage functions $A^\pi$ under $d^\star$ being zero. This concludes the proof. \end{proof} The lemma above indicates that EPOC\xspace will succeed to find an $\epsilon$ near-optimal policy with $\varepsilon_{bias} = 0$ and with sample complexity scales polynomially with respect to $d$, \emph{regardless of how features for states in $\overline{\mathcal{S}}^\star$ are set}. Hence we do not need strong assumption such as uniform approximation. Below we show that with some specific design of features for states in $\overline\mathcal{S}^\star$, we can fail a family of algorithms that based on Bellman optimality backup. \subsubsection{Failure of Bellman Backup Consistent Algorithms} We consider the situation where $H \geq 4$. Denote the right most path as $\{\widetilde{s}_1,\dots, \widetilde{s}_H\}$ (note $\widetilde{s}_1 = s^\star_1$ is the root of the tree). 
% We design features for states $\widetilde{s}_h$ for $h \geq 2$. For any $h\in [2, H-2]$, we have $\phi(\widetilde{s}_h, L)$ contains zeros everywhere except that the $(2H + h - 1)$-th entry contains 1; for $\phi(\widetilde{s}_h, R)$ we have zeros everywhere except that the $(2H+h)$-th entry contains $1$. For $\widetilde{s}_{H-1}$, we have that its features at two actions are coupled as follows: \begin{align*} \phi_{3H-1}(\widetilde{s}_{H-1}, L) = 1, \quad \phi_{3H-1}(\widetilde{s}_{H-1}, R) = 1; \end{align*} and all other entries are zeros. This setting means that for any $Q$ estimator associated with linear weight $\theta$, we must have: \begin{align*} Q(\widetilde{s}_{H-1}, L ) = Q(\widetilde{s}_{H-1}. R) = \theta_{3H-1}. \end{align*} As $Q(\widetilde{s}_{H-1}, L ) = Q(\widetilde{s}_{H-1}. R) $ regardless of the predictor weight $\theta$, we assume the the greedy policy here just picks actions among $\{L, R\}$ uniformly random, as from $Q$'s perspective, L and R are equally good. Consider $\widetilde{s}_{H-2}$ (the parent of $\widetilde{s}_{H-1}$). Assume $\widetilde{s}_{H-2}$ only has one action $R$, and $\phi_{3H-1} (\widetilde{s}_{H-2}, R)= -1$. Whenever an agent visits $\widetilde{s}_{H-2}$ it will visit $(\widetilde{s}_{H-1}, L)$ and $(\widetilde{s}_{H-1}. R)$ with equal probability. Note that $Q(\widetilde{s}_{H-2}, R) = -\theta_{3H-1}$. All other $(s,a)$ in the tree have features such that $\phi(s,a)_{3H-1} = 0$. The regression problem in this triangle will be: \begin{align*} \min_{\theta_{3H-1}} (-\theta_{3H-1} - \theta_{3H-1})^2 + \frac{1}{2}\left( \theta_{3H-1} - a \right)^2 + \frac{1}{2} \left( \theta_{3H-1} - b \right)^2, \end{align*} where $r(\widetilde{s}_{H-1}, L) = a$ and $r(\widetilde{s}_{H-1}, R) = b$. We have $1/2$ in the second and third term above is that as we assume that learner always uniformly picking $L$ and $R$ at state $s$ if $Q(s,L) = Q(s,R)$ where $Q$ is the agent's predictor. Solving the above optimization program, we get: \begin{align*} \theta_{3H-1} = \frac{a+b}{18}. \end{align*} Set $a = -18$ and $b = 0$, we get $\theta_{3H-1} = -1$, which implies that $Q(\widetilde{s}_{H-2}, R) = -\theta_{3H-1} = 1$. As $\widetilde{s}_{H-2}$ only has one action $R$, we get that $V(\widetilde{s}_{H-2}) = 1$. Hence, by performing least square, the predictor believes that $\widetilde{s}_{H-2} = 1$ has higher value than the optimal total reward (which is 1/4), i.e., the learner will believe the rightmost path is the optimal path. For any other states $s$ that are not on the optimal path and the right most path, we simply set their features as $\phi_{3H}(s, L) = 1$ and $\phi_{3H+1}(s, R) = 1$. Hence the dimension of $\phi(s,a)$ is equal to $3H+2$, which is polynomial in $H$. We define a \emph{Bellman Backup Consistent} algorithm as an algorithm which at the end, satisfies Bellman optimality at any state as long as it can, including the right most path $\{\widetilde{s}_h, R\}$, which means that: \begin{align*} Q(\widetilde{s}_{h}, R) = \theta_{2H+h} = \max\left\{ Q(\widetilde{s}_{h+1}, L) , \quad Q(\widetilde{s}_{h+1}, R) \right\}, \forall h. \end{align*} Thus, continue dynamic programming, we get to the root with $Q(\widetilde{s}_1, R) = 1$ while $Q(\widetilde{s}_1, L) = 1/4$. 
We have perfect Bellman optimality consistency along the optimal path,% \begin{align*} Q(s^\star_h, L) = \theta_{2h-1} = \max\left\{ Q({s}^\star_{h+1}, L), \quad Q({s}^\star_{h+1}, R) \right\}, \forall h \end{align*} Using induction with the base case that $Q(s^\star_H, L) = Q(s^\star_H, R) = \theta_{2H-1} = \theta_{2H} = 1/4$, we have $Q(s^\star_1, L) = 1/4 $. For any states not on the left most and right most paths, with $\theta_{3H} = \theta_{3H+1} = 0$, we achieve perfect Bellman consistency at these states as well. Namely due to the set up of feature $\phi$, we can achieve perfect Bellman consistency at any states except the right most leaf node $\widetilde{s}_{H-1}$ and its parent $\widetilde{s}_{H-2}$ (but there we find the best predictor using linear regression) Denote the root as $s_0$ (we have $s_0 = s^\star_1 = \widetilde{s}_1$), any Bellman backup consistent algorithm must have that: \begin{align*} Q(s_0, R) = Q(\widetilde{s}_1, R) \geq 1/2 \geq 1/4 = Q(s_0, L), \end{align*} which means that any Bellman backup consistent algorithm will pick action $L$ at root $s_0$ and completely miss the optimal path and hence incur regret $1/4$. \paragraph{Remark} One example of Bellman backup based algorithm is Fitted Q-iteration (FQI) \cite{munos2008finite}. First of all, Fitted Q-iteration will have sample complexity scales exponentially with respect to $H$ as the concentrability coefficient is as large as the number of states which is $2^H$ here. Secondly, even ignoring sample complexity issue and exploration issue (i.e., assume FQI accesses exponentially many samples from a good exploration distribution that covers state space everywhere), FQI will have trouble to converge to the optimal policy here as based on the design of $\phi$, the reward function in the above example does not below to the linear class, i.e., there is no $\theta$ such that $\theta\cdot\phi(s,a) = r(s,a)$ for all $(s,a)$, which results nonzero inherent Bellman error \cite{munos2008finite}. \subsection{Example: No-regret with respect a Suboptimal Policy} Note that EPOC\xspace's guarantee ensures that we can learn a policy that is almost as good as any comparator policy in the policy class, \emph{as long as the transfer bias under the comparator's distribution is small}. In this section, we design an MDP such that we have non-zero transfer bias under the optimal policy's distribution $d^\star$, but we have zero transfer bias under a sub-optimal policy, which ensures that EPOC\xspace at least learns a policy that is as good as that sub-optimal policy. We consider the following simple MDP with 4 states $\{s_0, s_1, s_2, s_3\}$ and two actions $\{L, R\}$. We have $P(s_1 | s_0, L) = 1$, $P(s_2 | s_0, R) = 1$, $P(s_3 | s_1, a) = 1$ for any $a\in\{L, R\}$, and $P(s_3 | s_2, a) = 1$ for any $a\in\{L, R\}$, and $P(s_3 | s_3, a) = 1$ for any $a\in \{L, R\}$. We have reward $r(s_2, L) = r(s_2, R) = 1/4$, $r(s_1, R) = 1/2$, and the rest state-action pairs have reward zero. We design the feature vector $\phi$ as follows. For $(s_0, L)$, we have $\phi_1(s_0, L) = 1$ and zeros everywhere else; for $(s_0, R)$, we have $\phi_2(s_0, R) = 1$ and zeros everywhere else; $\phi_3(s_2, L) = 1$ and zeros everywhere else; $\phi_3(s_2, R) = 1$ and zeros everywhere else; $\phi_4(s_1, R) =1$ and zero everywhere else; $\phi_4(s_1, L) = 1$ and $\phi_6(s_1, L) = 1$; $\phi_5(s_3, L) = 1$ and zeros everywhere else; $\phi_5(s_3, R) = 1$ and zeros everywhere else. 
We consider the following convex set of $\theta$: $\{\theta: \theta_i \in [0,1], i = 1,\dots,5, \ \theta_d = 1\}$, with $d = 6$ in this example. With the above setup, the optimal policy $\pi^\star$ takes $L$ in $s_0$ and then $R$ in $s_1$, and the total reward is $1/2$. A suboptimal policy $\widetilde\pi$ takes $R$ in $s_0$, and the total reward is $1/4$. Due to the design of the features and the constraint set, we can achieve zero transfer bias under the state-action pairs visited by the suboptimal policy, i.e., for any policy $\pi$, there exists a $\theta^\star$ such that
\begin{align*}
\theta^\star \in \argmin_{\theta:\, \theta_i \in [0,1],\ \theta_d = 1} \ensuremath{\mathbb{E}}_{(s,a)\sim d^\pi}\left( \theta\cdot \phi(s,a) - Q^\pi(s,a) \right)^2,
\end{align*}
and
\begin{align*}
\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}} \left( A^\pi(s,a) - \theta^\star\cdot\left( \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi_s}\phi(s,a')\right) \right) = 0.
\end{align*}
To see this, we can set $\theta^\star_1 = Q^{\pi}(s_0, L)$, $\theta^\star_2 = 1/4 = Q^{\pi}(s_0,R)$, $\theta^\star_3 = 1/4 = Q^{\pi}(s_2, L) = Q^{\pi}(s_2,R)$, and $\theta^\star_5 = Q^{\pi}(s_3,L) = Q^{\pi}(s_3, R) = 0$; the remaining coordinate $\theta_4$ does not affect the predictions on the state-action pairs $(s_0, R)$, $(s_2, a)$ for $a\in\{L,R\}$, and $(s_3, a)$ for $a\in\{L,R\}$ visited by the suboptimal policy. Hence, EPOC\xspace will find a policy that is comparable to $\widetilde\pi$ with a polynomial number of samples. Any Bellman backup consistent algorithm, on the other hand, will pick action $L$ at $s_1$:
\begin{align*}
L = \argmax_{a\in\{L, R\}} Q(s_1, a),
\end{align*}
as $Q(s_1, L) = 1+\theta_4 > \theta_4 = Q(s_1, R)$ regardless of how $\theta_4$ is set. Namely, a Bellman backup consistent algorithm will pick action $L$ at state $s_1$; for state $s_0$, a Bellman backup consistent algorithm will have:
\begin{align*}
Q(s_0, L) = \max\{Q(s_1, L), Q(s_1, R)\} = 1 + \theta_4 \geq 1/4 = Q(s_0, R).
\end{align*}
Hence a Bellman backup consistent algorithm will pick action $L$ at $s_0$ and action $L$ at $s_1$; it outputs a policy that leads the agent to $(s_1, L)$, which gets total reward zero, while EPOC\xspace can learn a policy that achieves reward at least $1/4$, though the optimal policy achieves reward $1/2$.
\fi
\fi
\subsection{Application to Linear MDPs (Proof of \pref{thm:linear_mdp})}
\label{app:app_to_linear_mdp}
To prove \pref{thm:linear_mdp}, we just need to show that $\varepsilon_{bias}$ defined in \pref{ass:transfer_bias} is zero under the linear MDP model assumption. For a linear MDP $\mathcal{M}$, we assume the following norm bounds on the parameters:
\begin{align*}
\| v^{\top} \mu \| \leq \xi \in\mathbb{R}^+ \ \text{ for all } v\in\mathbb{R}^{|\mathcal{S}|} \text{ with } \|v\|_{\infty}\leq 1, \quad \|\theta\| \leq \omega\in\mathbb{R}^+.
\end{align*}
With these bounds on the linear MDP's parameters, one can show that for any policy $\pi$, we have $Q^{\pi}(s,a) = w^{\pi}\cdot \phi(s,a)$ with $\|w^{\pi}\|\leq \omega + V_{\max} \xi$ \citep{jin2019provably}, where $V_{\max} = \max_{\pi,s}V^{\pi}(s)$ is the maximum possible expected total value ($V_{\max}$ is at most $r_{\max}/(1-\gamma)$, with $r_{\max}$ being the maximum possible immediate reward). At every episode $n$, recall that NPG is optimizing the MDP $\mathcal{M}_{b^n} = \{P, r(s,a) + b^n(s,a)\}$, with $P, r$ being the true transition and reward of $\mathcal{M}$, which is linear under $\phi(s,a)$. Due to the reward bonus $b^n(s,a)$, $\mathcal{M}_{b^n}$ is not necessarily a linear MDP under $\phi(s,a)$ ($P$ is still linear under $\phi$, but $r(s,a)+b^n(s,a)$ is not linear anymore). However, after augmenting the feature vector with one extra coordinate, i.e., $\widetilde{\phi}(s,a) = [ \phi(s,a)^{\top}, \one\{(s,a)\not\in\mathcal{K}^n\}]^{\top}$, $\mathcal{M}_{b^n}$ becomes a linear MDP under $\widetilde\phi$.
\begin{claim}[Linear Property of $\mathcal{M}_{b^n}$ under $\widetilde\phi$]
The MDP ${\mathcal{M}}_{b^n}$ is a linear MDP under features $\widetilde\phi$.
\label{claim:linear_property}
\end{claim}
\begin{proof}
We design a measure $\widetilde{\mu} = [\mu, \vec{0}]$. We have $P(\cdot |s,a) = \mu \phi(s,a) = \widetilde\mu \widetilde\phi(s,a)$. Hence, the transition is still linear under $\widetilde\phi$. Regarding the reward, we set $\widetilde\theta = [\theta^{\top}, 1/(1-\gamma)]^{\top}$. For $(s,a)\in\mathcal{K}^n$, we simply have $\widetilde\theta \cdot \widetilde\phi(s,a) = \theta\cdot \phi(s,a)$, and for $(s,a)\not\in\mathcal{K}^n$, we have $\widetilde\theta\cdot \widetilde\phi(s,a) = \theta\cdot\phi(s,a) + 1/(1-\gamma) = r(s,a) + b^n(s,a)$. Hence the new reward function is linear under $\widetilde\phi(s,a)$.
\end{proof}
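To make the feature augmentation in the claim concrete, here is a minimal Python sketch (our own illustration; the variable names and numerical values are assumptions, not code from the paper). The transition part is unchanged because the augmented measure $\widetilde\mu = [\mu, \vec{0}]$ places no mass on the extra coordinate.
\begin{verbatim}
import numpy as np

def augmented_feature(phi_sa, in_known_set):
    """Append the indicator 1{(s,a) not in K^n} as one extra coordinate."""
    return np.concatenate([phi_sa, [0.0 if in_known_set else 1.0]])

# The bonus-augmented reward is recovered by the augmented weight
# theta_tilde = [theta, 1/(1-gamma)], matching the proof above.
gamma = 0.9
theta = np.array([0.3, -0.2])
theta_tilde = np.concatenate([theta, [1.0 / (1.0 - gamma)]])

phi_sa = np.array([1.0, 0.5])
for known in (True, False):
    bonus = 0.0 if known else 1.0 / (1.0 - gamma)
    assert np.isclose(theta_tilde @ augmented_feature(phi_sa, known),
                      theta @ phi_sa + bonus)
\end{verbatim}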
With this claim, we can now prove \pref{thm:linear_mdp}.
\begin{proof}[Proof of \pref{thm:linear_mdp}]
Note that $\|\widetilde\mu\| = \|\mu\| \leq \xi$, and $\|\widetilde\theta\| \leq \omega + 1/(1-\gamma)$. Note that in the MDP with transition $P$ and reward $r + b^n$, we have $V_{\max} \leq \max_{s,a}\left(r(s,a) + b^n(s,a)\right) / (1-\gamma) \leq 1/(1-\gamma)^2$. Hence, for any policy $\pi$, we have $Q^{\pi}_{b^n}(s,a) = w^{\pi}\cdot \widetilde\phi(s,a)$ with $\|w^\pi\| \leq \omega + 1/(1-\gamma) + \xi / (1-\gamma)^2$. We assume that for the on-policy fit, we have $W \geq \omega + 1/(1-\gamma) + \xi / (1-\gamma)^2$. Hence we have shown that for any $n\in [N]$, $\mathcal{M}_{b^n}$ is a linear MDP under $\widetilde\phi(s,a)$, and hence at any iteration $t$ inside episode $n$, $Q^t(s,a; r+ b^n)$ is a linear function under the feature $\widetilde\phi(s,a)$. This implies that under $\widetilde\phi(s,a)$, at episode $n$ and iteration $t$, we have:
\begin{align}
\min_{\theta: \|\theta\| \leq W} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho_\mix^n}\left( \theta\cdot \widetilde\phi(s,a) - Q^t_{b^n}(s,a) \right)^2 = 0, \label{eq:linear_regression_linear_mdp}
\end{align}
as there exists $w^t_\star$ with $\|w^t_\star\|\leq W$ such that $Q^t_{b^n}(s,a) = w^t_\star \cdot \widetilde\phi(s,a)$ for all $(s,a)$, since $\mathcal{M}_{b^n}$ is a linear MDP with respect to $\widetilde\phi$. This also means that we can fit the advantage function perfectly with the best on-policy fit $w^t_\star$, and hence we have:
\begin{align*}
\ensuremath{\mathbb{E}}_{(s,a) \sim d^\star} \left( Q^t_{b^n}(s,a) - w^t_\star\cdot \widetilde\phi(s,a) \right)^2 = 0 = \varepsilon_{bias}.
\end{align*}
Namely, we have shown that \pref{ass:transfer_bias} holds with $\varepsilon_{bias} = 0$. This implies that \pref{lem:variance_bias_n_new} directly holds with $\varepsilon_{bias}= 0$, which implies that \pref{lem:regret_rmax_pg_new} holds with $\varepsilon_{bias} = 0$. The rest of the sample complexity calculation is almost identical to the proof of \pref{thm:detailed_bound_rmax_pg_new} with $\varepsilon_{bias}$ set to $0$. This concludes the proof of \pref{thm:linear_mdp}.
\end{proof}
\subsection{Classic NPG for Linear MDPs}
We analyze the performance of classic NPG using the initial state distribution $\mu_0$ as the reset distribution (i.e., \pref{alg:npg} with $\left(\mu_0, b(s,a) = 0, \mathcal{K} = \mathcal{S}\times\mathcal{A}\right)$ as input). We focus on comparing to a fixed policy $\pi^\star\in\Pi_{linear}$, and denote $d^\star$ as the state-action distribution of $\pi^\star$. To ensure NPG returns non-vacuous bounds, we first need the following assumption on $\mu_0$.
\begin{assum}
Denote $\Sigma_0 := \ensuremath{\mathbb{E}}_{s \sim \mu_0, a\sim U(\mathcal{A})}\phi(s,a)\phi(s,a)^{\top}$, $\Sigma_\star:= \ensuremath{\mathbb{E}}_{s \sim d^\star, a\sim U(\mathcal{A})}\phi(s,a)\phi(s,a)^{\top}$, and $\kappa := \mathrm{tr}\left( \Sigma_0^{-1} \Sigma_\star \right)$. We assume $\kappa < \infty$.
\label{asm:init_cond}
\end{assum}
Without this assumption, a direct application of the classic NPG algorithm's guarantee will be vacuous. Throughout this section, we assume the above assumption holds.
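For intuition, the relative condition number $\kappa$ in the assumption can be estimated from feature samples. The sketch below is our own illustration (assuming access to i.i.d. feature samples from both distributions; the small ridge term is added only for numerical stability), not part of the paper's code.
\begin{verbatim}
import numpy as np

def relative_condition_number(phis_mu0, phis_dstar, reg=1e-6):
    """Estimate kappa = tr(Sigma_0^{-1} Sigma_star) from feature samples.

    phis_mu0:   array (m, d) of phi(s,a), s ~ mu_0,   a ~ Unif(A)
    phis_dstar: array (m, d) of phi(s,a), s ~ d_star, a ~ Unif(A)
    """
    d = phis_mu0.shape[1]
    sigma0 = phis_mu0.T @ phis_mu0 / len(phis_mu0) + reg * np.eye(d)
    sigma_star = phis_dstar.T @ phis_dstar / len(phis_dstar)
    return float(np.trace(np.linalg.solve(sigma0, sigma_star)))
\end{verbatim}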
We first analyze the critic fitting performance.
\begin{lemma}
Suppose at episode $n$ we have a policy $\pi^n$, and let $Q^n(s,a) := \theta^n_\star\cdot \phi(s,a)$ denote its Q function, which is exactly linear under the linear MDP assumption. Denote $\theta^n$ as the approximate minimizer of the on-policy fit $\min_{\theta:\|\theta\|\leq W}\ensuremath{\mathbb{E}}_{s\sim \mu_0,a\sim U(\mathcal{A})} \left( \theta\cdot \phi(s,a) - Q^{n}(s,a) \right)^2$. Further assume we have the following property:
\begin{align*}
&\ensuremath{\mathbb{E}}_{s\sim \mu_0, a\sim U(\mathcal{A})} \left( Q^n(s,a) - \theta^n \cdot \phi(s,a) \right)^2 \leq \varepsilon_{stat}.
\end{align*}
Then we have:
\begin{align*}
\max_{n\in [N]} V^{n} \geq V^{\pi^\star} - \frac{1}{1-\gamma}\left( 2W\sqrt{\log(A)/ N } + 2A\sqrt{\varepsilon_{stat} \kappa }\right).
\end{align*}
\label{lemma:NPG_classic}
\end{lemma}
\begin{proof}
From the on-policy fit condition above, we immediately have:
\begin{align*}
\left(\theta^n - \theta^n_\star \right)^{\top} \Sigma_0 \left( \theta^n - \theta^n_\star \right) \leq \varepsilon_{stat}.
\end{align*}
Denote $\widehat{A}^n(s,a) := \theta^n\cdot \left(\phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi^n_s} \phi(s,a')\right)$. Now we can bound the prediction error under $d^\star$ as follows:
\begin{align*}
&\ensuremath{\mathbb{E}}_{(s,a)\sim d^\star}\left( A^n(s,a) - \widehat{A}^n(s,a) \right) \\
& = \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star}\left( (\theta^n_\star - \theta^n)\cdot \phi(s,a) \right) - \ensuremath{\mathbb{E}}_{s\sim d^\star,a\sim \pi^n}\left( (\theta^n_\star - \theta^n)\cdot \phi(s,a) \right)\\
& \leq \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star}\left\lvert (\theta^n_\star - \theta^n)\cdot \phi(s,a) \right\rvert + \ensuremath{\mathbb{E}}_{s\sim d^\star,a\sim \pi^n}\left\lvert (\theta^n_\star - \theta^n)\cdot \phi(s,a) \right\rvert\\
&\leq A \ensuremath{\mathbb{E}}_{s\sim d^\star,a\sim U(\mathcal{A})}\left\lvert (\theta^n_\star - \theta^n)\cdot \phi(s,a) \right\rvert + A\ensuremath{\mathbb{E}}_{s\sim d^\star,a\sim U(\mathcal{A})}\left\lvert (\theta^n_\star - \theta^n)\cdot \phi(s,a) \right\rvert \\
& \leq 2A \sqrt{\varepsilon_{stat}} \sqrt{ \ensuremath{\mathbb{E}}_{s\sim d^\star,a\sim U(\mathcal{A})} \phi(s,a)^{\top} \Sigma_0^{-1} \phi(s,a) } \leq 2A\sqrt{\varepsilon_{stat}\, \mathrm{tr}\left( \Sigma_\star \Sigma_0^{-1} \right)},
\end{align*}
where the second-to-last inequality uses the Cauchy-Schwarz and Jensen's inequalities. With this prediction error under $d^\star$, we can proceed with the usual mirror descent analysis for NPG, just as in the proof of Lemma~\ref{lem:npg_construction}. Specifically, we get:
\begin{align*}
\sum_{n=1}^N \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \left[ \widehat{A}^n(s,a) \right] \leq \frac{1}{\eta} \ensuremath{\mathbb{E}}_{s\sim d^\star}\ensuremath{\mathrm{KL}}(\pi^\star_s, \pi^1_s) + \eta NW^2 \leq \log(A)/\eta + \eta N W^2,
\end{align*}
which leads to:
\begin{align*}
\sum_{n=1}^N \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \left[ \widehat{A}^n(s,a) \right] \leq 2 W \sqrt{\log(A) N},
\end{align*}
with $\eta := \sqrt{\log(A)/(W^2 N)}$. Using the transfer error of $\widehat{A}^n$ and an application of the performance difference lemma, we have:
\begin{align*}
\sum_{n=1}^N (1-\gamma)\left( V^{\pi^\star} - V^n \right) /N \leq 2W\sqrt{\log(A)/ N } + 2A\sqrt{\varepsilon_{stat}\, \mathrm{tr}\left( \Sigma_\star \Sigma_0^{-1} \right)}.
\end{align*}
This concludes the proof.
\end{proof}
\begin{theorem}[Sample Complexity of NPG]
Fix $(\epsilon, \delta)$. Under \pref{asm:init_cond}, with probability at least $1-\delta$, we learn a policy $\hat{\pi}$ such that:
\begin{align*}
V^{\hat\pi} \geq V^\star - 2 \epsilon,
\end{align*}
with total number of samples:
\begin{align*}
\frac{ W^6 A^4 \kappa^2 \nu }{ (1-\gamma)^2 \epsilon^6 },
\end{align*}
where $\nu$ contains only log terms and constants: $\nu:= 32\log(A) \log\left( 4W^2\log(A)/(\delta\epsilon^2) \right)$.
\label{thm:npg_classic_linear}
\end{theorem}
\begin{proof}
To tolerate $O(\epsilon)$ error, we need to set $N$ large enough that $2W\sqrt{\log (A) / N} = \epsilon$. We can verify that $N = \frac{ 4W^2 \log(A) }{\epsilon^2}$ suffices. Using Lemma~\ref{lem:sgd_dim_free} and a union bound over all $N$ episodes, with probability at least $1-\delta$, for any $n\in [N]$ we have:
\begin{align*}
\varepsilon_{stat} \leq \frac{ (2W^2/(1-\gamma)) \sqrt{ \ln(N/\delta)} }{ \sqrt{M} }.
\end{align*}
To make $2A \sqrt{\varepsilon_{stat} \kappa} = \epsilon$, it suffices to set $\varepsilon_{stat} = \frac{\epsilon^2 }{4A^2\kappa}$, which in turn means it suffices to set $M$ as:
\begin{align*}
M = \frac{ 16 W^4 A^4 \kappa^2 \ln(N/\delta)}{ (1-\gamma)^2 \epsilon^4 }.
\end{align*}
The total number of samples is $NM$, which is of the order:
\begin{align*}
NM = \frac{ 64 W^6 A^4 \kappa ^2 \log(A) \log\left( 4W^2\log(A)/(\delta\epsilon^2) \right) }{ (1-\gamma)^2 \epsilon^6},
\end{align*}
which concludes the proof.
\end{proof}
\subsection{Application to Reward-free Exploration in Linear MDPs}
\label{app:reward_free_explore}
We can run our algorithm in a reward-free setting, i.e., $r(s,a) = 0$ for all $(s,a)$. At a high level, in the reward-free setting, EPOC\xspace tries to explore the entire state-action space as quickly as possible. For notational simplicity, we focus on linear MDPs in this section (but a similar agnostic result can be achieved using the concept of transfer bias). To do reward-free exploration, we need a slight modification: EPOC\xspace introduces a termination condition which indicates when we terminate the algorithm (i.e., when we have explored sufficiently, except for those state-actions which are hard to reach under any policy). At the end of episode $n$, we check the following termination criterion:
\begin{align}
\label{eq:termination_critera}
\sum_{(s,a)\not\in\Kcal^n} d^{\pi^{n+1}}(s,a) \leq \theta \in\mathbb{R}^+,
\end{align}
where $\theta$ is some function of the accuracy parameter $\epsilon$ and the horizon $1/(1-\gamma)$ which we specify in the main theorem of this section (\pref{thm:reward_free}). We terminate if Inequality~\ref{eq:termination_critera} is satisfied, and EPOC\xspace outputs the policy cover $\{\pi^0,\dots, \pi^n \}$. We have the following guarantee for reward-free exploration.
\begin{theorem}[Sample Complexity of Reward-free Exploration]
Set the parameters as follows:
\begin{align*}
&\beta = \frac{\epsilon^2(1-\gamma)^2}{4W^2}, \quad M = \frac{ 576 W^4 \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^6(1-\gamma)^{10}},\\
& \theta = (1-\gamma)\epsilon, \quad \lambda = 1, \quad T = \frac{4W^2 \log(A)}{(1-\gamma)^2 \epsilon^2}.
\end{align*}
With probability at least $1-\delta$, the algorithm terminates within at most $N$ episodes with $N \leq \frac{4\widetilde{d}}{\beta\theta}\log\left(\frac{4\widetilde{d}}{\lambda \beta\theta}\right)$, and upon termination, we identify a set of hard-to-reach state-action pairs, i.e., $\overline\Kcal^N := \mathcal{S}\times\mathcal{A} \setminus \Kcal^N$, such that:
\begin{align*}
\max_{\pi\in\Pi_{linear}} \sum_{(s,a) \in \overline{\Kcal}^N} d^{\pi}(s,a) \leq 4\epsilon,
\end{align*}
with total number of samples:
\begin{align*}
\frac{c \nu W^8 \widetilde{d}^3 \ln(A)}{\epsilon^{11}(1-\gamma)^{15}},
\end{align*}
where $c$ is a universal constant, and $\nu$ contains only log terms:
\begin{align*}
\nu & = \ln\left(\frac{W^2\widetilde{d}}{\epsilon^3 (1-\gamma)^3}\right)\ln\left(\frac{\widetilde{d} W^4\ln(A) }{\epsilon^5(1-\gamma)^5\delta} \ln\left( \frac{2\widetilde{d}W^2}{\epsilon^3(1-\gamma)^3} \right)\right) \ln^2\left( \frac{W^2\widetilde{d}}{\epsilon^3(1-\gamma)^3} \right)\\
& \qquad + \ln^3\left( \frac{W^2 \widetilde{d}}{\epsilon^3(1-\gamma)^3} \right) \ln\left( \frac{ \widehat{d}\widetilde{d} }{ \epsilon^3(1-\gamma)^3\delta}\ln\left( \frac{W^2\widetilde{d}}{\epsilon^3(1-\gamma)^3} \right) \right).
\end{align*}
\label{thm:reward_free}
\end{theorem}
The benefit of reward-free exploration is that once exploration is done, one can efficiently optimize any given reward function.
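As an implementation-level illustration of the termination check in Inequality~\ref{eq:termination_critera}, the sketch below (our own pseudocode-style Python; all names are assumptions) estimates the escaping mass of $\pi^{n+1}$ by Monte Carlo and compares it to the threshold $\theta$.
\begin{verbatim}
import numpy as np

def estimate_escape_mass(rollout_sa_pairs, known_set):
    """Empirical estimate of sum_{(s,a) not in K^n} d^{pi^{n+1}}(s,a),
    given (s, a) pairs sampled from the visitation measure of pi^{n+1}."""
    return float(np.mean([(s, a) not in known_set
                          for (s, a) in rollout_sa_pairs]))

def should_terminate(rollout_sa_pairs, known_set, theta_threshold):
    return estimate_escape_mass(rollout_sa_pairs, known_set) <= theta_threshold
\end{verbatim}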
The next theorem shows that with the policy cover from EPOC\xspace, for any non-zero reward we may face in the future, we can run classic NPG with the policy cover's induced state-action distribution as the reset/initial distribution to achieve a near optimal policy.
\begin{theorem}[Post-NPG Optimization with the Policy Cover from EPOC\xspace]
Conditioned on \pref{thm:reward_free} holding, for the policy cover (denote $\rho^N_{\mix} = \frac{1}{N} \sum_i d^{\pi^i}$), given any non-zero reward $r'(s,a)\in [0,1]$, run NPG (\pref{alg:npg}) with $(\rho^N_{\mix}, b^N(s,a) = 0, \forall(s,a))$ as inputs and with parameters:
\begin{align*}
T = 4W^2 \log(A)/\epsilon^2, \quad M = \frac{ 576 W^4 \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^6(1-\gamma)^{10}}.
\end{align*}
Then with probability at least $1-\delta$, \pref{alg:npg} outputs a policy $\hat{\pi}$ such that:
\begin{align*}
\ensuremath{\mathbb{E}}\left[\sum_{t}\gamma^t r'(s_t,a_t) | \hat{\pi}, s_0\sim \mu_0\right] \geq \max_{\pi\in\Pi_{linear}}\ensuremath{\mathbb{E}}\left[\sum_{t}\gamma^t r'(s_t,a_t) | {\pi}, s_0\sim \mu_0\right] - O\left(\frac{ \epsilon }{(1-\gamma)^2} + \frac{W\epsilon}{1-\gamma}\right),
\end{align*}
with total number of samples
\begin{align*}
O\left(\frac{ W^6 \log(A) \widetilde{d}^2 }{\epsilon^8(1-\gamma)^{10}} \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 \right).
\end{align*}
\label{thm:reward_free_npg_opt}
\end{theorem}
The above theorem indicates that once the reward-free phase is done, EPOC\xspace identifies a hard-to-reach state-action subspace, together with a policy cover which can be used as the reset distribution for NPG for any newly given reward function. Note that unlike prior results for NPG~\citep{geist2019theory,agarwal2019optimality,shani2019adaptive}, this is a global optimality guarantee without any distribution mismatch or concentrability coefficients, since we have provided the algorithm with a favorable initial distribution from the reward-free exploration phase.
\subsubsection{Proof of \pref{thm:reward_free} and \pref{thm:reward_free_npg_opt}}
To prove the main theorem, we first show that the algorithm terminates within a finite number of episodes.
\begin{lemma}[Total Number of Episodes]
Assume that for $n = 0,\dots, N$, we have $\sum_{(s,a)\not\in\Kcal^n} d^{\pi^{n+1}}(s,a) \geq \theta$ (i.e., the algorithm does not terminate). Then we must have:
\begin{align*}
N \leq \frac{4\widetilde{d}}{\beta\theta} \log\left( \frac{4\widetilde{d}}{\lambda \beta\theta} \right).
\end{align*}
\end{lemma}
\begin{proof}
Based on the proof of \pref{lem:potential_argument_n}, we have:
\begin{align*}
N\beta \theta \leq \sum_{n=1}^N \beta\, \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n+1}}\one\{(s,a)\not\in\Kcal^n\} \leq \sum_{n=1}^N \mathrm{tr}\left(\Sigma^{n+1} \left( \Sigma^n_{\mix}\right)^{-1}\right) \leq 2\log\det\left( \Sigma^N_{\mix} \right) \leq 2\widetilde{d} \log\left(N/\lambda + 1\right),
\end{align*}
which implies that:
\begin{align*}
N \leq \frac{2\widetilde{d}}{ \beta\theta}\log\left( N / \lambda + 1\right).
\end{align*}
Solving this self-bounding inequality for $N$ yields the claimed bound, which concludes the proof.
\end{proof}
Now we can prove the main theorem.
\begin{proof}[Proof of \pref{thm:reward_free}]
Focus on the last iteration $N$, where the algorithm terminates.
Using \pref{lem:perf_absorb}, we have:
\begin{align*}
V^{N+1}_{\mathcal{M}^{N}} \leq V^{{N+1}}_{\mathcal{M}} + \frac{1}{1-\gamma} \sum_{(s,a)\not\in\Kcal^N} d^{N+1}(s,a) \leq V^{N+1}_{\mathcal{M}} + \frac{\theta}{1- \gamma} = \frac{\theta}{1-\gamma},
\end{align*}
where the second inequality uses the termination criterion and the last equality uses the fact that $V^\pi_{\mathcal{M}} = 0$ for any $\pi$, as $r(s,a) = 0$ for all $(s,a)$. Recalling the definition of $\mathcal{M}^N$ from Item~\ref{item:mdp_3} and applying the NPG guarantee, for any comparator $\widetilde\pi$ we have:
\begin{align*}
V^{\widetilde\pi^{N}}_{\mathcal{M}^N} - \frac{1}{1-\gamma}\left( 2W\sqrt{\frac{\log(A)}{T}} + 2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}} \right) \leq V^{N+1}_{\mathcal{M}^N} \leq \frac{\theta}{1-\gamma}.
\end{align*}
To link the above results to the maximum possible escaping probability, we define another MDP $\widetilde{\mathcal{M}}$ such that $\widetilde{\mathcal{M}}$ and $\mathcal{M}$ have the same transition dynamics, but $\widetilde{\mathcal{M}}$ has rewards $r(s,a) = 1$ for $(s,a)\not\in\Kcal^N$ and $r(s,a) = 0$ otherwise. We also construct the ``absorbing'' MDP $\widetilde{\mathcal{M}}^N$ from $\widetilde{\mathcal{M}}$, analogously to how $\mathcal{M}^N$ is constructed from $\mathcal{M}$. Note that for any policy $\widetilde{\pi}$, we have:
\begin{align*}
V^{\widetilde\pi}_{\widetilde{\mathcal{M}}} = \sum_{(s,a)\not\in\Kcal^N} d^{\widetilde\pi}(s,a), \quad V^{\widetilde{\pi}^N}_{\widetilde{\mathcal{M}}^N} \geq V^{\widetilde\pi}_{\widetilde{\mathcal{M}}}.
\end{align*}
Note that $V^{\widetilde\pi^N}_{\widetilde{\mathcal{M}}^N} = V^{\widetilde{\pi}^N }_{{\mathcal{M}}^N}$, since $\widetilde\pi^N$ only picks $a^{\dagger}$ at $s\not\in\Kcal^N$, and ${\mathcal{M}}^N$ and $\widetilde{\mathcal{M}}^N$ only differ at $(s,a)\not\in\Kcal^N$ in terms of rewards. Combining the above results, we get:
\begin{align*}
&\sum_{(s,a)\not\in\Kcal^N} d^{\widetilde\pi}(s,a) = V^{\widetilde\pi}_{\widetilde{\mathcal{M}}} \leq V^{\widetilde\pi^N}_{\widetilde{\mathcal{M}}^N} = V^{\widetilde\pi^N}_{\mathcal{M}^N} \\
& \leq \frac{1}{1-\gamma}\left( 2W\sqrt{\frac{\log(A)}{T}} + 2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}} \right)+ \frac{\theta}{1-\gamma}.
\end{align*}
Taking $\widetilde\pi = \arg\max_{\pi\in\Pi_{linear}} \sum_{(s,a)\not\in\Kcal^N} d^{\pi}(s,a)$ and setting the parameters to the values proposed in the main theorem, we conclude that:
\begin{align*}
\max_{\pi\in\Pi_{linear}} \sum_{(s,a)\not\in\Kcal^N} d^\pi(s,a) \leq 4\epsilon.
\end{align*}
Now we calculate the sample complexity. Following the proof of \pref{thm:detailed_bound_rmax_pg}, the total number of samples one uses for constructing $\widehat\Sigma^n_{\mix}$ is:
\begin{align*}
&N \times \left(N^2 \ln\left( \widehat{d} N /\delta \right)\right) = N^3 \ln\left( \widehat{d} N /\delta \right) = \frac{c_1 \nu_1 \widetilde{d}^3 W^6}{\epsilon^9(1-\gamma)^9},
\end{align*}
where $c_1$ is a constant and $\nu_1$ contains log terms: $\nu_1:= \ln^3\left( \frac{8W^2 \widetilde{d}}{\epsilon^3(1-\gamma)^3} \right) \ln\left( \frac{ 2\widehat{d}\widetilde{d} }{ \epsilon^3(1-\gamma)^3\delta}\ln\left( \frac{8W^2\widetilde{d}}{\epsilon^3(1-\gamma)^3} \right) \right)$. The second source of sample complexity comes from the on-policy fit.
To derive $\varepsilon_{stat} = (1-\gamma)^3 \epsilon^3 / \widetilde{d}$, we need to set $M$ (the number of samples for the on-policy fit in each iteration $t$ and episode $n$) as:
\begin{align*}
M = \frac{ 576 W^4 \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^6(1-\gamma)^{10}}.
\end{align*}
Considering every episode $n\in [N]$ and every iteration $t\in [T]$, the total number of samples needed for NPG is:
\begin{align*}
NT \cdot M.
\end{align*}
The rest of the calculation involves substituting $M, N, T$ into the above expression and then combining the two sources of samples, exactly as in the proof of \pref{thm:detailed_bound_rmax_pg}.
\end{proof}
Now, once we have learned the policy cover, given any non-zero reward $r(s,a)$ and any policy $\pi$, we can accurately estimate the policy's advantage on any state-action pair in the known set $\Kcal^N$, using the fact that for any $(s,a)\in\mathcal{K}^N$, we have $\phi(s,a)^{\top}\left(\Sigma^N_{\mix}\right)^{-1}\phi(s,a) \leq \beta$.
\begin{lemma}[Policy Cover for On-Policy Critic Fit]
Denote the policy cover's induced state-action distribution as $\rho^N_{\mix} = \frac{1}{N} \sum_i d^{\pi^i}$. Given any non-zero reward $r(s,a)\in [0,1]$ and any policy $\pi$, denote $\hat\theta$ as the approximate minimizer of $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^N_{\mix}}\left( \theta\cdot \phi(s,a) - Q^{\pi}(s,a) \right)^2$, i.e., $\ensuremath{\mathbb{E}}_{(s,a)\sim \rho^N_{\mix}}\left( \hat\theta\cdot \phi(s,a) - Q^{\pi}(s,a) \right)^2 \leq \varepsilon_{stat}$. We have that for any comparator $\widetilde\pi$:
\begin{align*}
\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\widetilde\pi}}\left( A^{\pi}(s,a) - \hat\theta\cdot\overline{\phi}(s,a) \right)\one\{(s,a)\in\Kcal^N\} \leq 2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}},
\end{align*}
where $\overline\phi(s,a) = \phi(s,a) - \ensuremath{\mathbb{E}}_{a'\sim \pi_s}\phi(s,a')$.
\label{lem:policy_cover_fit}
\end{lemma}
\begin{proof}
Denote $Q^\pi(s,a):= \theta_\star\cdot \phi(s,a)$. For any state-action pair in $\Kcal^N$, following a derivation similar to the one in the proof of \pref{lem:variance_bias_n}, we have:
\begin{align*}
\left\lvert \phi(s,a) \cdot \left( \theta_\star - \hat\theta \right) \right\rvert \leq \sqrt{\beta \lambda W^2 } + \sqrt{\beta N \varepsilon_{stat}}, \quad \forall (s,a)\in\Kcal^N.
\end{align*}
As the above bound holds state-action-wise, for any policy's distribution $d^{\widetilde\pi}$ we have, for $A^\pi$:
\begin{align*}
&\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\widetilde\pi}} \left( A^\pi(s,a) - \hat\theta\cdot \overline\phi(s,a) \right)\one\{(s,a)\in \Kcal^N\} \leq 2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}}.
\end{align*}
This concludes the proof.
\end{proof}
\begin{proof}[Proof of \pref{thm:reward_free_npg_opt}]
Consider the performance of NPG on a new reward function $r'$ with $\rho_{\mix}^N$ as the reset distribution. From \pref{lem:policy_cover_fit}, for any $\pi^t$ generated during NPG's run, we have:
\begin{align*}
\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}}\left( A^{t}(s,a) - \hat\theta^t \cdot\overline{\phi}^t(s,a) \right)\one\{(s,a)\in\Kcal^N\} \leq 2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}}.
\end{align*}
Applying the performance difference lemma, we have:
\begin{align*}
&V^{\pi^\star} - V^{\pi^t} = \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} A^{t}(s,a) \\
& = \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} A^{t}(s,a) \one\{(s,a)\in\Kcal^N\} + \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} A^{t}(s,a)\one\{(s,a)\not\in\Kcal^N\}\\
& \leq \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} A^{t}(s,a) \one\{(s,a)\in\Kcal^N\} + \frac{4\epsilon}{(1-\gamma)^2} \\
& \leq \frac{1}{1-\gamma}\ensuremath{\mathbb{E}}_{(s,a)\sim d^{\star}} \widehat{A}^{t}(s,a) \one\{(s,a)\in\Kcal^N\} + \frac{4\epsilon}{(1-\gamma)^2} + \frac{2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}}}{1-\gamma},
\end{align*}
where we use $\max_{\pi} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\pi}} \one\{(s,a)\not\in\Kcal^N\} \leq 4\epsilon$ (recall $\overline{\Kcal}^N := \mathcal{S}\times\mathcal{A} \setminus \Kcal^N$). As NPG performs the update rule
\begin{align*}
\pi^{t+1}(\cdot |s) \propto \pi^t(\cdot | s) \exp\left(\eta \widehat{A}^t (s,a) \right),
\end{align*}
a mirror descent analysis gives:
\begin{align*}
\sum_{t=1}^{T} \ensuremath{\mathbb{E}}_{a\sim \pi^\star_s} \widehat{A}^t(s,a) \leq 2W \sqrt{\log(A) T }.
\end{align*}
Taking the expectation with respect to $d^\star$ on both sides, we get:
\begin{align*}
\sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a) = \sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a)\one\{(s,a)\in\mathcal{K}^N \} + \sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a)\one\{(s,a)\not\in\mathcal{K}^N\},
\end{align*}
which implies that:
\begin{align*}
&\sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a)\one\{(s,a)\in\mathcal{K}^N \} \leq 2W\sqrt{\log(A)T} - \sum_{t=1}^T \ensuremath{\mathbb{E}}_{(s,a)\sim d^\star} \widehat{A}^t(s,a)\one\{(s,a)\not\in\mathcal{K}^N\} \\
& \leq 2W\sqrt{\log(A) T} + 4T\epsilon \max_{s,a} | \widehat{A}^t(s,a) | \leq 2W\sqrt{\log(A)T} + 4TW \epsilon.
\end{align*}
This leads to the following result:
\begin{align*}
\sum_{t} \left(V^{\pi^\star} - V^{\pi^t}\right)/T \leq \frac{1}{1-\gamma}\left( 2W\sqrt{\log(A)/T} + \frac{4\epsilon}{(1-\gamma)} + 4W\epsilon + {2\sqrt{\beta \lambda W^2 } + 2\sqrt{\beta N \varepsilon_{stat}}}\right),
\end{align*}
where we slightly abuse notation by denoting $V^{\pi}$ as the expected total reward of $\pi$ under the new reward function $r'$. With $T:= 4W^2\log(A)/\epsilon^2$, and the values of $\beta, \lambda, M, \varepsilon_{stat}$ defined in \pref{thm:reward_free}, we can simplify the above inequality to:
\begin{align*}
\sum_{t} \left(V^{\pi^\star} - V^{\pi^t}\right)/T \leq O\left( \frac{\epsilon}{(1-\gamma)^2} + \frac{W\epsilon}{1-\gamma}\right).
\end{align*}
The total number of samples one needs is:
\begin{align*}
&M \cdot T = \frac{ 576 W^4 \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^6(1-\gamma)^{10}} \cdot \frac{4W^2\log(A)}{\epsilon^2} \\
& = \frac{ 2304 W^6 \log(A) \widetilde{d}^2 \ln(NT/\delta )\ln( 2\widetilde{d}/(\beta\epsilon(1-\gamma)))^2 }{\epsilon^8(1-\gamma)^{10}}.
\end{align*}
This concludes the proof.
\end{proof}
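The proof above uses the multiplicative-weights form of the NPG update; for concreteness, here is a minimal sketch of one such step at a single state (our own illustration; the max-shift is a standard numerical-stability trick, not part of the analysis).
\begin{verbatim}
import numpy as np

def npg_softmax_update(pi_s, A_hat_s, eta):
    """One NPG / mirror-descent step at state s:
        pi^{t+1}(.|s)  propto  pi^t(.|s) * exp(eta * A_hat^t(s, .)).
    Subtracting the max advantage leaves the normalized result unchanged."""
    w = pi_s * np.exp(eta * (A_hat_s - A_hat_s.max()))
    return w / w.sum()

pi_s = np.ones(3) / 3.0
A_hat_s = np.array([1.0, 0.0, -1.0])
print(npg_softmax_update(pi_s, A_hat_s, eta=0.5))
\end{verbatim}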
\section{Analysis of EPOC\xspace for State-Aggregation (\pref{thm:state_aggregation})}
\label{app:state_agg}
In this section, we analyze \pref{thm:state_aggregation} for state-aggregation. Similarly to the linear MDP analysis, we provide a variance-bias tradeoff lemma analogous to \pref{lem:variance_bias_n}. However, unlike the linear MDP case, here, due to the model misspecification coming from state-aggregation, the transfer error $\epsilon_{bias}$ will not be zero. We will show that the transfer error is bounded by an expected model misspecification averaged over a fixed comparator's state distribution. First recall the definition of state aggregation $\phi:\mathcal{S}\times\mathcal{A} \to\mathcal{Z}$. We abuse notation a bit and denote $\phi(s,a) = \one\{\phi(s,a) = z\} \in\mathbb{R}^{|\mathcal{Z}|}$, i.e., the feature vector $\phi$ indicates which $z$ the state-action pair $(s,a)$ is mapped to. The following claim reasons about the approximation of $Q$ values under state aggregation.
\begin{claim}
\label{claim:state_agg}
Consider any MDP with transition $P$ and reward $r$. Define the aggregation error $\epsilon_{z}$ via:
\begin{align*}
\max\left\{ \| P(\cdot | s,a) - P(\cdot | s', a') \|_1, \lvert r(s,a) - r(s',a') \rvert \right\} \leq \epsilon_{z}, \ \forall (s,a),(s',a') \text{ s.t. } \phi(s,a) = \phi(s',a') = z.
\end{align*}
Then, for any policy $\pi$ and any $(s,a),(s',a'), z$ such that $\phi(s,a) = \phi(s',a') = z$, we have:
\begin{align*}
\left\lvert Q^{\pi}(s, a) - Q^{\pi}(s', a')\right\rvert \leq \frac{r_{\max} \epsilon_{z} }{1-\gamma},
\end{align*}
where $r(s,a)\in [0, r_{\max}]$ for $r_{\max}\in \mathbb{R}^+$.
\end{claim}
\begin{proof}
Starting from the definition of $Q^{\pi}$, we have:
\begin{align*}
&\left\lvert Q^{\pi}(s, a) - Q^{\pi}(s', a')\right\rvert \leq \lvert r(s,a) - r(s' ,a') \rvert + \gamma \lvert \ensuremath{\mathbb{E}}_{s''\sim P_{s,a}} V^{\pi}(s'') - \ensuremath{\mathbb{E}}_{s''\sim P_{s',a'}} V^{\pi}(s'') \rvert \\
& \leq \epsilon_{z} + \frac{r_{\max}\gamma}{1-\gamma} \left\| P_{s,a} - P_{s',a'} \right\|_1 \leq \frac{r_{\max}\epsilon_{z}}{1-\gamma},
\end{align*}
where we use the assumption that $\phi(s,a) = \phi(s',a') = z$, and the fact that $\|V^{\pi}\|_{\infty} \leq r_{\max}/(1-\gamma)$ as $r(s,a)\in [0, r_{\max}]$.
\end{proof}
Now we state the bias-variance tradeoff lemma for state aggregation.
\begin{lemma}[Bias and Variance Tradeoff for State Aggregation]
Set $W:=\sqrt{|\mathcal{Z}|}/(1-\gamma)^2$. Consider any episode $n$. Assume that we have $\phi(s,a)^{\top}\left(\Sigma_{\mix}^{n}\right)^{-1}\phi(s,a) \leq \beta\in\mathbb{R}^+$ for $(s,a)\in\mathcal{K}^n$, and that the following condition holds for all $t\in \{0,\dots, T-1\}$ and some $\epsilon_{stat}\in\mathbb{R}^+$:
\begin{align*}
L^t( \theta^t; \rho^n_\mix, Q^t_{b^n} - b^n ) \leq \min_{\theta:\|\theta\|\leq W} L^t(\theta; \rho^n_{\mix}, Q^t_{b^n} - b^n) + \epsilon_{stat}.
\end{align*}
Then for all $t \in \{0, \dots, T-1\}$ at episode $n$:
\begin{align*}
& \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{b^n}(s,a) - \widehat{A}^t_{b^n}(s,a) \right) \one\{s\in\mathcal{K}^n\} \\
& \quad \leq 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{ \beta n \epsilon_{stat} } + \frac{2 \ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi} } \max_{a} \left[\epsilon_{\phi(s,a)}\right] }{(1-\gamma)^2}.
\end{align*}
\label{lem:variance_bias_state_agg}
\end{lemma}
Note that compared to \pref{lem:variance_bias_n}, the above lemma replaces $\sqrt{A\epsilon_{bias}}$ by the average model misspecification $\frac{2 \ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi} } \max_{a} \left[\epsilon_{\phi(s,a)}\right] }{(1-\gamma)^2}$.
\begin{proof}
We first compute one of the minimizers of $L^t(\theta; \rho^n_\mix, Q^t_{b^n} - b^n)$. Recalling the definition of $L^t(\theta; \rho^n_\mix, Q^t_{b^n} - b^n)$, we have:
\begin{align*}
&\ensuremath{\mathbb{E}}_{ (s,a)\sim \rho^n_{\mix}} \left(\theta\cdot\phi(s,a) - Q^{\pi^t}_{b^n}(s,a) + b^n(s,a) \right)^2 \\
&= \ensuremath{\mathbb{E}}_{(s,a)\sim \rho^n_{\mix}} \sum_{z} \one\{\phi(s,a) = z\} \left( \theta_{z} - Q^{\pi^t}_{b^n}(s,a) + b^n(s,a) \right)^2,
\end{align*}
which means that the minimizer $\theta^t_{\star}$ satisfies, for every $z$:
\begin{align*}
\sum_{s,a} \rho^n_{\mix}(s,a) \one\{\phi(s,a) = z\} \left( \theta_{z} - Q^{\pi^t}_{b^n}(s,a)+ b^n(s,a) \right) = 0,
\end{align*}
which implies that $\theta^t_{\star, z} := \frac{ \sum_{s,a} \rho^n_{\mix}(s,a)\one\{\phi(s,a) = z\} (Q^{\pi^t}_{b^n}(s,a) - b^n(s,a) )}{ \sum_{s,a}\rho^n_{\mix}(s,a)\one\{\phi(s,a) = z\} }$. Note that $|\theta^t_{\star,z}| \leq \frac{1}{(1-\gamma)^2}$, hence $\| \theta^t_\star \|_2 \leq \sqrt{|\mathcal{Z}|}/(1-\gamma)^2 = W$. Hence, for any $s'',a''$ such that $\phi(s'',a'') = z$, we must have:
\begin{align*}
&\left\lvert \theta^t_{\star, z} - (Q^{\pi^t}_{b^n}(s'', a'') - b^n(s'',a'')) \right\rvert \\
& = \left\lvert \frac{ \sum_{s,a} \rho^n_{\mix}(s,a)\one\{\phi(s,a) = z\} (Q^{\pi^t}_{b^n}(s,a)-b^n(s,a))}{ \sum_{s,a}\rho^n_{\mix}(s,a)\one\{\phi(s,a) = z\} } - Q^{\pi^t}_{b^n}(s'',a'') + b^n(s'',a'')\right\rvert\\
& = \left\lvert \frac{\sum_{s,a} \rho^n_{\mix}(s,a)\one\{\phi(s,a) = z\} \left( Q^{\pi^t}_{b^n}(s,a) - Q^{\pi^t}_{b^n}(s'',a'') \right)}{ \sum_{s,a}\rho^n_{\mix}(s,a)\one\{\phi(s,a) = z\} } \right\rvert \leq \frac{\epsilon_{z}}{ (1-\gamma)^2 },
\end{align*}
where we use Claim~\ref{claim:state_agg}, the fact that $r(s,a)+b^n(s,a) \in [0, 1/(1-\gamma)]$, and the fact that $b^n(s,a) = b^n(s'',a'')$ if $\phi(s,a) = \phi(s'',a'')$, as the bonus is defined in terms of the feature $\phi$. With $\theta^t_\star$ and its optimality condition for the loss $L^t(\theta; \rho^n_\mix)$, we can prove the same point-wise estimation guarantee as before, i.e., for any $(s,a)\in\Kcal^n$, we have:
\begin{align*}
\left\lvert \phi(s,a) \cdot ( \theta^t - \theta^t_\star ) \right\rvert \leq \sqrt{ \beta n\epsilon_{stat} + \lambda W^2 }.
\end{align*}
Now we bound $\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \widehat{A}^t_{{b^n}}(s,a)\right)\one\{s\in\mathcal{K}^n\}$ as follows:
\begin{align*}
&\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \widehat{A}^t_{{b^n}}(s,a)\right)\one\{s\in\mathcal{K}^n\} \\
&= \underbrace{\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \bar{b}^{t,n}(s,a) - \theta^t_\star\cdot\bar\phi^t (s,a) \right)\one\{s\in\mathcal{K}^n\}}_{\text{term A}} \\
& \qquad + \underbrace{\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( \theta^t_\star \cdot\bar\phi^t(s,a) - \theta^t\cdot\bar\phi^t(s,a) \right)\one\{s\in\mathcal{K}^n\}}_{\text{term B}}.
\end{align*}
For term B, we can use the point-wise estimation error to bound it as:
\begin{align*}
\text{term B} \leq 2\sqrt{\beta \lambda W^2} + 2\sqrt{\beta n \epsilon_{stat}}.
\end{align*}
For term A, we have:
\begin{align*}
&\ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left( A^t_{{b^n}}(s,a) - \bar{b}^{t,n}(s,a) - \theta^t_\star\cdot\bar\phi^t (s,a) \right)\one\{s\in\mathcal{K}^n\} \\
& \leq \ensuremath{\mathbb{E}}_{(s,a)\sim \widetilde{d}_{\mathcal{M}^n}}\left\lvert Q^t_{{b^n}}(s,a) - b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right\rvert \one\{s\in\mathcal{K}^n\} \\
&\qquad + \ensuremath{\mathbb{E}}_{s\sim \widetilde{d}_{\mathcal{M}^n},a\sim \pi^t_s}\left\lvert -Q^t_{{b^n}}(s,a) + b^n(s,a) + \theta^t_\star\cdot \phi(s,a) \right\rvert \one\{s\in\mathcal{K}^n\} \\
& \leq \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\widetilde\pi}} \left\lvert Q^t_{{b^n}}(s,a) - b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right\rvert + \ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi}, a\sim \pi^t_s} \left\lvert -Q^t_{{b^n}}(s,a) + b^n(s,a) + \theta^t_\star\cdot \phi(s,a) \right\rvert,
\end{align*}
where the last inequality uses \pref{lem:prob_absorb} for $s\in\mathcal{K}^n$ to switch from $\widetilde{d}_{\mathcal{M}^n}$ to $d^{\widetilde\pi}$---the state-action distribution of the comparator $\widetilde\pi$ in the real MDP $\mathcal{M}$. Note that for any distribution $d$ over $\mathcal{S}\times\mathcal{A}$, we have:
\begin{align*}
&\ensuremath{\mathbb{E}}_{(s,a)\sim d} \left\lvert Q^{t}_{b^n}(s,a) -b^n(s,a) - \theta^t_\star\cdot \phi(s,a)\right\rvert \\
& \leq \sum_{z} \ensuremath{\mathbb{E}}_{(s,a)\sim d} \one\{\phi(s,a) = z\} \left\lvert Q^{t}_{b^n}(s,a) - b^n(s,a) - \theta^t_{\star,z} \right\rvert \leq \ensuremath{\mathbb{E}}_{(s,a)\sim d} \frac{\epsilon_{\phi(s,a)}}{(1-\gamma)^2} = \frac{ \ensuremath{\mathbb{E}}_{(s,a) \sim d} \epsilon_{\phi(s,a)} }{(1-\gamma)^2}.
\end{align*}
With this, we have:
\begin{align*}
\text{term A} & \leq \ensuremath{\mathbb{E}}_{(s,a)\sim {d}^{\widetilde\pi}} \left\lvert Q^{\pi^t}_{b^n}(s,a) - b^n(s,a) -\theta^t_\star\cdot \phi(s,a) \right\rvert + \ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi}, a\sim \pi^t_s} \left\lvert - Q^{\pi^t}_{b^n}(s,a) +b^n(s,a) + \theta^t_\star \cdot \phi(s,a) \right\rvert \\
& \leq 2\, \ensuremath{\mathbb{E}}_{s\sim d^{\widetilde\pi} } \max_{a} \left\lvert Q^{\pi^t}_{b^n}(s,a) - b^n(s,a) - \theta^t_\star\cdot \phi(s,a) \right\rvert \leq \frac{2 \ensuremath{\mathbb{E}}_{s \sim d^{\widetilde\pi} } \max_a \left[\epsilon_{\phi(s,a)}\right] }{(1-\gamma)^2}.
\end{align*}
Combining term A and term B concludes the proof.
\end{proof}
The rest of the proof of \pref{thm:state_aggregation} is almost identical to the proof of \pref{thm:detailed_bound_rmax_pg}, with $\sqrt{A \epsilon_{bias}}$ in \pref{thm:detailed_bound_rmax_pg} replaced by $\frac{2 \ensuremath{\mathbb{E}}_{s \sim d^{\widetilde\pi} } \max_a \left[\epsilon_{\phi(s,a)}\right] }{(1-\gamma)^2}$. \qed
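To illustrate the per-cell structure exploited in the proof, the following sketch (our own illustration, not code from the paper) computes the weighted least squares solution under one-hot aggregation features; the minimizer decouples into the $\rho$-weighted average of the targets within each cell $z$, exactly as derived above.
\begin{verbatim}
import numpy as np

def aggregated_fit(cell_ids, q_targets, weights, n_cells):
    """Minimize E_rho (theta_{phi(s,a)} - Q(s,a))^2 with one-hot features.
    Inputs are per-sample cell ids, regression targets, and rho-weights."""
    theta = np.zeros(n_cells)
    for z in range(n_cells):
        mask = cell_ids == z
        if mask.any():
            theta[z] = np.average(q_targets[mask], weights=weights[mask])
    return theta

# Two samples in the same cell: theta*_z is their weighted average.
print(aggregated_fit(np.array([0, 0, 1]),
                     np.array([1.0, 3.0, 5.0]),
                     np.array([0.25, 0.75, 1.0]), n_cells=2))  # [2.5, 5.0]
\end{verbatim}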
\section{Discussion} This work proposes a new policy gradient algorithm for balancing the exploration-exploitation tradeoff in RL, which enjoys provable sample efficiency guarantees in the linear and kernelized settings. Our experiments provide evidence that the algorithm can be combined with neural policy optimization methods and be effective in practice. An interesting direction for future work would be to combine our approach with unsupervised feature learning methods such as autoencoders~\cite{JarrettKRL09, DAE} or noise-contrastive estimation~\cite{pmlr-v9-gutmann10a, CPC} in rich observation settings to learn a good feature representation. \section*{Acknowledgement} The authors would like to thank Andrea Zanette and Ching-An Cheng for carefully reviewing the proofs, and Akshay Krishnamurthy for helpful discussions. \section{Experiments} \label{section:experiments} We provide experiments illustrating EPOC\xspace's performance on problems requiring exploration, and focus on showing the algorithm's flexibility to leverage existing policy gradient algorithms with neural networks (e.g., PPO~\citep{schulman2017proximal}). Specifically, we show that for challenging exploration tasks, our algorithm combined with PPO significantly outperforms both vanilla PPO as well as PPO augmented with the popular RND exploration bonus~\cite{burda2018exploration}. In particular, we aim to demonstrate the following two properties of EPOC\xspace: \begin{enumerate} \item EPOC\xspace can build a policy cover that explores the state space widely; hence EPOC\xspace is able to find near optimal policies even in tasks that have obvious local minima and sparse rewards. \item The policy cover in EPOC\xspace avoids the catastrophic forgetting issue one can experience in policy gradient methods when the policy becomes deterministic too quickly. \end{enumerate} For all experiments, we use policies parameterized by fully-connected or convolutional neural networks. We use a kernel $\phi(s, a)$ to compute the bonus as \mbox{$b(s, a) = \phi(s, a)^\top \hat{\Sigma}^{-1}_{\mix} \phi(s, a)$}, where $\hat{\Sigma}_{\mix}$ is the empirical covariance matrix of the policy cover. In order to prune any redundant policies from the cover, we use a rebalancing scheme to select a policy cover which induces maximal coverage over the state space. This is done by finding weights $\alpha^{(n)}=(\alpha_1^{(n)}, ..., \alpha_n^{(n)})$ on the simplex at each episode which solve the optimization problem $\alpha^{(n)} = \argmax_{\alpha} \log \det \big[ \sum_{i=1}^n \alpha_i \hat{\Sigma}_i \big]$, where $\hat{\Sigma}_i$ is the empirical covariance matrix of $\pi_i$. Details of the implemented algorithm, network architectures and kernels can be found in Appendix \ref{app:exp}.
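For concreteness, a minimal sketch of these two computations (the elliptical bonus and the log-det rebalancing weights) is given below; the helper names and the exponentiated-gradient solver are our own illustrative choices, not the implementation used in the experiments.
\begin{verbatim}
import numpy as np

def bonus(phi, Sigma_mix):
    # elliptical bonus b(s,a) = phi(s,a)^T Sigma_mix^{-1} phi(s,a)
    return float(phi @ np.linalg.solve(Sigma_mix, phi))

def rebalance(Sigmas, steps=500, lr=0.05):
    # maximize log det(sum_i alpha_i Sigma_i) over the simplex via
    # exponentiated gradient; the gradient is grad_i = tr(M^{-1} Sigma_i)
    n = len(Sigmas)
    alpha = np.full(n, 1.0 / n)
    for _ in range(steps):
        M_inv = np.linalg.inv(sum(a * S for a, S in zip(alpha, Sigmas)))
        grad = np.array([np.trace(M_inv @ S) for S in Sigmas])
        alpha = alpha * np.exp(lr * grad)
        alpha = alpha / alpha.sum()   # renormalize back onto the simplex
    return alpha
\end{verbatim}
Policies whose weight $\alpha_i$ ends up negligible can then be dropped from the cover, as described above.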
\subsection{Bidirectional Diabolical Combination Lock} \begin{figure}[t!] \centering \begin{tabular}{cc} \begin{minipage}{0.47\columnwidth} \begin{figure}[H] \centering \includegraphics[width=\columnwidth]{media/2-way-combolock2-crop.pdf} \end{figure} \end{minipage} & \begin{minipage}{0.4\columnwidth} \begin{tabular}{rrrrr} \toprule \multirow{2}*{Algorithm} & \multicolumn{4}{c}{Horizon}\\\cline{2-5} &$2$ & $5$ & $10$ & $15$ \\ \midrule PPO & 1.0 & 0.0 & 0.0 & 0.0 \\ PPO+RND & 0.75 & 0.40 & 0.50 & 0.55 \\ EPOC\xspace & 1.0 & 1.0 & 1.0 & 1.0 \\ \bottomrule \end{tabular} \end{minipage} \end{tabular} \caption{\textbf{Left} panel shows the Bidirectional Diabolical Combination Lock domain (see text for details). \textbf{Right} panel shows the success rate of different algorithms averaged over 20 different seeds.} \label{fig:mixture_visitations} \end{figure} We first provide experiments on an exploration problem designed to be particularly difficult: the Bidirectional Diabolical Combination Lock (a harder version of the problem in \cite{homer}; see Figure \ref{fig:mixture_visitations}). In this problem, the agent starts at an initial state $s_0$ (left-most state), and based on its first action, transitions to one of two combination locks of length $H$. Each combination lock consists of a chain of length $H$, at the end of which are two states with high reward. At each level in the chain, $9$ out of $10$ actions lead the agent to a dead state (black) from which it cannot recover, leading to zero reward. The problem is challenging for exploration for several reasons: (1) \textit{Sparse positive rewards}: Uniform exploration has a $10^{-H}$ chance of reaching a high reward state; (2) \textit{Dense antishaped rewards}: The agent receives a reward of $-1/H$ for transitioning to a good state and $0$ for transitioning to a dead state. A locally optimal policy is to transition to a dead state quickly; (3) \textit{Forgetting}: At the end of one of the locks, the agent receives a maximal reward of $+5$, and at the end of the other lock it receives a reward of $+2$. Since there is no indication which lock has the optimal reward, if the agent does not explore to the end of both locks it will only have a $50\%$ chance of encountering the globally optimal reward. If it makes it to the end of one lock, it must remember to still visit the other one. \begin{figure}[h] \centering \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{media/RND_traces2-crop.pdf} \caption{RND trace during training} \end{subfigure} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{media/big_figure_combolock_horizontal-crop.pdf} \caption{EPOC\xspace final trace} \end{subfigure} \caption{ \textbf{(a)} shows the state visitation frequencies (brighter color depicts higher visitation frequency) when the RND bonus~\cite{burda2018exploration} is applied to a policy gradient method throughout training on the above problem. `Ep' denotes the epoch number, showing the progress during a single training run. Although the agent manages to explore to the end of one chain (chain 2 in this case), its policy quickly becomes deterministic and it ``forgets'' to explore the remaining chain, missing the optimal reward. RND obtains the optimal reward on roughly half of the initial seeds. \textbf{(b)} shows the traces of policies in the policy cover of EPOC\xspace. Together the policy cover provides a near uniform coverage over both chains.} \label{fig:lock} \end{figure} For both the policy network input and the kernel we used a binary vector encoding the current lock, state and time step as one-hot components. We compared to two other methods: a PPO agent, and a PPO agent with an RND exploration bonus, all of which used the same representation as input. Performance for the different methods is shown in Figure \ref{fig:mixture_visitations} (right). The PPO agent succeeds for the shortest problem of horizon $H=2$, but fails for longer horizons due to the antishaped reward leading it to the dead states. The PPO+RND agent succeeds roughly $50\%$ of the time: due to its exploration bonus, it avoids the local minimum and explores to the end of one of the chains.
However, as shown in Figure \ref{fig:lock} (a), the agent's policy quickly becomes deterministic and the agent forgets to go back and explore the other chain after it has reached the reward at the end of the first. EPOC\xspace succeeds over all seeds and horizon lengths. We found that the policy cover provides near uniform coverage over both chains. In Figure~\ref{fig:lock} (b) we show the traces of individual policies in the policy cover, as well as the trace of the policy cover as a whole. \subsection{Reward-free Exploration in Mazes} \begin{figure}[h!] \centering \includegraphics[width=1.\columnwidth]{media/maze_policy_visitations-crop.pdf} \caption{Different policies in the policy cover for the maze environment. All the locations visited by the agent during the policy execution are marked in green.} \label{fig:rpg_maze_policies} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[t]{0.48\columnwidth} \includegraphics[width=\columnwidth]{media/maze3.pdf} \end{subfigure} \begin{subfigure}[t]{0.48\columnwidth} \includegraphics[width=\columnwidth]{media/mountaincar2.pdf} \end{subfigure} \caption{ \label{fig:reward_free_results}Results for the maze (left) \& control (right) tasks. The solid line is the mean and the shaded region is the standard deviation over $5$ seeds.} \end{figure} We next evaluated EPOC\xspace in a reward-free exploration setting using maze environments adapted from \citep{VPN}. At each step, the agent's observation consists of an RGB image of the maze, with the red channel representing the walls and the green channel representing the location of the agent (an example is shown in Figure \ref{fig:rpg_maze_policies}). We compare EPOC\xspace, PPO and PPO+RND in the reward-free setting where the agent receives a constant environment reward of $0$ (note that PPO receives zero gradient; EPOC\xspace and PPO+RND learn from their reward bonuses). Figure \ref{fig:reward_free_results} (left) shows the percentage of locations in the maze visited by each of the agents over the course of 10 million steps. The proportion of states visited by the PPO agent stays relatively constant, while the PPO+RND agent is able to explore to some degree. EPOC\xspace quickly visits a significantly higher proportion of locations than the other two methods. Visualizations of traces from different policies in the policy cover can be seen in Figure \ref{fig:rpg_maze_policies}, where we observe the diverse coverage of the individual policies. \subsection{Continuous Control} \begin{figure}[h!] \centering \includegraphics[width=0.5 \columnwidth]{media/mountaincar_policy_visitations.pdf} \caption{State visitations of different policies in EPOC\xspace's policy cover on MountainCar.} \label{fig:visitations} \end{figure} We further evaluated EPOC\xspace on a continuous control task which requires exploration: continuous-control MountainCar from OpenAI Gym \cite{brockman2016openai}. Note that here actions are continuous in $[-1, 1]$ and each action incurs a small negative reward. Since the agent only receives a large reward $(+100)$ if it reaches the top of the hill, a locally optimal policy is to do nothing and avoid the action cost (e.g., PPO never escapes this local optimum in our experiments). Results for PPO, PPO+RND and EPOC\xspace are shown in Figure \ref{fig:reward_free_results} (right). The PPO agent quickly learns the locally optimal policy of doing nothing. The PPO+RND agent exhibits wide variability across seeds: some seeds solve the task while others do not.
The EPOC\xspace agent consistently discovers a good policy across all seeds. In Figure~\ref{fig:visitations}, we show the traces of policies in the policy cover constructed by EPOC\xspace. \section{Framework} \label{sec:framework} A high-level description of our method is given in Algorithm \ref{alg:rmaxpg_general}. The algorithm operates over epochs and grows a set of policies, called the \textit{policy cover}, whose weighted mixture induces a distribution $\rho_\mathrm{cov}$ over states with increasingly wide coverage. At each epoch $n$, a new policy $\pi^{n+1}$ is optimized using a reward bonus ${b}^n$ which assigns high reward to states with low coverage under $\rho_\mathrm{cov}$. Crucially, the algorithm uses $\rho_\mathrm{cov}$ as a starting distribution to train $\pi^{n+1}$. This is done by randomly sampling a policy from the cover, following it for a certain number of steps, and then switching to the policy being optimized. By initializing the current policy roughly uniformly over the region of the state space explored so far, the policy is assured to receive useful gradients for either exploring new parts of the state space or optimizing the environment reward. Once the new policy is trained, it is added to the policy cover and the next epoch begins. \begin{algorithm}[t] \begin{algorithmic}[1] \STATE \textbf{Require}: MDP $\mathcal{M}$ \STATE Initialize policy $\pi^{1}$ \FOR{episode $n = 1, \dots $} \STATE Design policy weights $\alpha_1,...,\alpha_{n}$ \STATE Define $\rho_{\textrm{cov}}^n = \sum_{i=1}^n \alpha_i d^{\pi_i}$ \label{line:mixture} \STATE Design exploration bonus ${b}^n(s,a)$ for all $(s,a)$ \STATE $\pi^{n+1} = \textrm{PolicyOptimizer}(\rho_{\mathrm{cov}}^n,{b}^n)$ \ENDFOR \end{algorithmic} \caption{Rmax-PG} \label{alg:rmaxpg_general} \end{algorithm} A simple strategy for constructing $\rho_\mathrm{cov}$ from the policy cover is to weight all policies equally, i.e., setting $\alpha_i=1/n$. For linear MDPs, we also consider a rebalancing scheme where the policies are weighted to maximize the log-determinant of their mixture distribution's covariance matrix. This ensures that the weighted policy mixture induces as uniform a coverage as possible over the state space. Policies with small weights can then be removed from the cover; in our experiments, we found that our reweighting scheme allocated high weights to a small number of policies, which were sufficient to obtain good coverage.
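As a concrete illustration of Algorithm \ref{alg:rmaxpg_general}, the following is a minimal, self-contained tabular sketch in Python. The inverse-count bonus and the value-iteration optimizer are illustrative stand-ins for the oracles discussed next, not the instantiation analyzed in our theory.
\begin{verbatim}
import numpy as np

def rmax_pg_tabular(P, r, gamma=0.9, n_episodes=10,
                    n_rollouts=200, horizon=50, seed=0):
    # P: (S, A, S) transition tensor; r: (S, A) rewards in [0, 1].
    rng = np.random.default_rng(seed)
    S, A, _ = P.shape
    cover = [np.full((S, A), 1.0 / A)]      # pi^1: the uniform policy
    counts = np.zeros((S, A))
    for n in range(n_episodes):
        # sample from rho_cov^n: pick a cover policy uniformly, roll out
        for _ in range(n_rollouts):
            pi = cover[rng.integers(len(cover))]   # alpha_i = 1/n weights
            s = 0
            for _ in range(horizon):
                a = rng.choice(A, p=pi[s])
                counts[s, a] += 1
                s = rng.choice(S, p=P[s, a])
        b = 1.0 / np.sqrt(1.0 + counts)     # bonus: high where coverage is low
        # PolicyOptimizer stand-in: value iteration on r + b
        Q = np.zeros((S, A))
        for _ in range(200):
            Q = r + b + gamma * (P @ np.max(Q, axis=1))
        pi_new = np.zeros((S, A))
        pi_new[np.arange(S), Q.argmax(axis=1)] = 1.0
        cover.append(pi_new)                # grow the policy cover
    return cover
\end{verbatim}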
Both the reward bonus and the policy optimization procedures can be instantiated in different ways depending on the setting at hand. The reward bonus should be designed to give high reward to states which are not often visited by the policy cover, and low reward to states with high visitation. In the tabular setting, inverse counts can be used; in the next section, we discuss reward bonuses for the linear and kernelized settings in detail. In the general setting, possible choices from the deep learning literature include pseudocounts \citep{bellemare2016pseudocounts}, log-densities under a neural model \citep{ostrovski17count}, prediction errors of a dynamics model \citep{pathak2017curiosity}, or random network distillation \citep{burda2018exploration}, all of which can be fit using data collected by rollouts from policies in the cover. The policy optimization procedure can be instantiated using any policy gradient method, for example A2C or PPO \citep{mnih2016asynchronous, schulman2017proximal}. \section{Introduction} \label{section:intro} Policy gradient methods are a successful class of Reinforcement Learning (RL) methods, as they are amenable to parametric policy classes, including neural policies~\citep{schulman2015trust, schulman2017proximal}, and they directly optimize the cost function of interest. While these methods have a long history in the RL literature~\citep{williams1992simple, sutton1999policy, konda2000actor, Kakade01}, only recently have their theoretical convergence properties been established: roughly, when the starting distribution has wide coverage over the state space, global convergence is possible~\citep{agarwal2019optimality,geist2019theory,russoGlobal,abbasi2019politex}. In other words, the assumptions in these works imply that the state space is already well explored. Conversely, without such coverage (and, say, with sparse rewards), policy gradients often suffer from the vanishing gradient problem. With regards to exploration, at least in the tabular setting, there is an established body of results which provably explore in order to achieve sample-efficient reinforcement learning, including model-based methods~\citep{kearns2002optimal,brafman2002r,kakade2003sample, jaksch2010optimal,azar2017minimax,dann2015sample}, model-free approaches such as Q-learning~\citep{strehl2006pac,li2009unifying,jin2018q,dong2019provably}, Thompson sampling~\citep{osband2014generalization,agrawal2017optimistic,russo2019worst}, and, more recently, policy optimization approaches~\citep{efroni2020optimistic,cai2019provably}.
In fact, more recently, there are a number of provable reinforcement learning algorithms, balancing exploration and exploitation, for MDPs with linearly parameterized dynamics, including~\cite{jiang2017contextual,yang2019sample,jin2019provably,pmlr-v108-zanette20a,ayoub2020,zhou2020provably,cai2019provably}. The motivation for our work is to develop algorithms and guarantees which are more robust to violations of the underlying modeling assumptions; indeed, the primary practical motivation for policy gradient methods is that the overall methodology is disentangled from modeling (and Markovian) assumptions, since they are an ``end-to-end'' approach, directly optimizing the cost function of interest. Furthermore, in support of these empirical findings, there is a body of theoretical results, both on direct policy optimization approaches~\citep{kakade2002approximately,NIPS2003_2378, Scherrer:API,scherrer2014local} and more recently on policy gradient approaches~\citep{agarwal2019optimality}, which show that such incremental policy improvement approaches are amenable to function approximation and violations of modeling assumptions, under certain coverage assumptions over the state space. This work focuses on how policy gradient methods can be extended to handle exploration, while also retaining their favorable properties with regards to how they handle function approximation and model misspecification. The practical relevance of answering these questions is evident from the growing body of empirical techniques for exploration in policy gradient methods, such as pseudocounts~\citep{bellemare2016pseudocounts}, dynamics model errors~\citep{pathak2017curiosity}, or random network distillation (RND)~\citep{burda2018exploration}. \begin{table*}[t!]
\centering \aboverulesep=0ex \belowrulesep=0ex \ra{2} \begin{tabular}{|>{\centering\arraybackslash}m{9.5cm}|>{\centering\arraybackslash}m{2.8cm}|>{\centering\arraybackslash}m{3cm}|} \toprule Algorithm & Sample Complexity & Misspecified State Aggregation\\ \midrule E$^3$, Rmax, UCBVI \hspace{9cm}\hfill \footnotesize{\citep{kearns2002optimal,brafman2002r,jaksch2010optimal,azar2017minimax}} & $\text{poly}(S,A, H, \frac{1}{\epsilon})$ & $\ell_\infty$\\ \midrule Thompson Sampling \hspace{9cm}\hfill ~\footnotesize{\citep{osband2014generalization,agrawal2017optimistic,russo2019worst}} & $\text{poly}\left(S,A, H, \frac{1}{\epsilon}\right)$ & $\ell_\infty$\\ \midrule Q-learning ($\epsilon$-greedy) & $\Omega(A^{H})$ & $\ell_\infty$\\ \midrule delayed/UCB Q-learning \hspace{9cm}\hfill \footnotesize{\citep{strehl2006pac,li2009unifying,jin2018q,dong2019provably}} & $\text{poly}\left(S,A, H, \frac{1}{\epsilon}\right)$ & $\ell_\infty$ for $Q^\star$\\ \midrule Policy Optimization \hspace{9cm}\hfill \footnotesize{(PG\citep{williams1992simple,sutton1999policy}, NPG \citep{Kakade01, agarwal2019optimality}, MD-MPI \cite{geist2019theory})} & $\Omega(A^{H})$ & ?\\ \midrule Optimistic Policy Optimization in the Empirical Model ~\footnotesize{\cite{cai2019provably,efroni2020optimistic}} & $\text{poly}\left(S, A, H, \frac{1}{\epsilon}\right)$ & $\ell_\infty$\\ \midrule EPOC\xspace (this paper) & $\text{poly}\left(S,A, H, \frac{1}{\epsilon}\right)$ & local $\ell_\infty$ \\ \bottomrule \end{tabular} \caption{Comparison of algorithms in tabular (and state-aggregation) settings. For the last column, state aggregation provides a means to compare tabular approaches when the aggregated MDP may only approximately be an MDP (i.e., when there is model misspecification). We assume the agent starts at a fixed starting state $s_0$ and only has the ability to do rollouts from the state $s_0$. Sample complexity is the number of samples required to learn an $\epsilon$-optimal policy. $Q$-learning and standard policy optimization have an exponential sample complexity in $H := 1/(1-\gamma)$ because they do not actively explore. If the starting state distribution had coverage (as opposed to starting at a single state $s_0$), then stronger guarantees exist for policy optimization methods~\citep{kakade2002approximately,agarwal2019optimality}, both with regards to sample complexity and state aggregation. The optimistic policy optimization approaches of~\citep{cai2019provably,efroni2020optimistic} build an empirical model of the transition dynamics and do optimistic policy updates in this empirical model; as such, they can also be viewed as being model-based, unlike $Q$-learning and EPOC\xspace which do not store and use prior data. EPOC\xspace removes the initial state distribution assumptions~\citep{kakade2002approximately,agarwal2019optimality} from prior policy gradient results by incorporating strategic exploration; this is done via learning an ensemble of policies, the policy cover. EPOC\xspace extends to linear MDPs with linear function approximation as well, and it also works under a weaker error condition when state aggregation is performed as the type of function approximation.
} \label{tbl:tabular} \vspace{-5pt} \end{table*} \subsection{Our Contributions} This work introduces the Exploration for Policy Optimization with learned policy Covers\xspace algorithm (EPOC\xspace), a direct, model-free policy optimization approach which addresses exploration through the use of a learned ensemble of policies, which together provide a policy cover over the state space. The use of a learned policy cover addresses exploration, and also addresses the ``catastrophic forgetting'' problem in policy gradient approaches (which use reward bonuses); meanwhile, the on-policy nature avoids the ``delusional bias'' inherent to Bellman backup-based approaches, where approximation errors due to model misspecification amplify (see~\citep{lu2018non} for discussion). It is a conceptually different approach from the predominant prior (and provable) RL algorithms, which are either model-based --- variants of UCB~\cite{kearns2002optimal,brafman2002r,jaksch2010optimal,azar2017minimax} or based on Thompson sampling~\cite{agrawal2017optimistic,russo2019worst} --- or model-free and value-based, such as Q-learning~\cite{jin2018q,strehl2006pac}. Our work adds policy optimization methods to this list, as a direct alternative: the use of learned covers permits \emph{a model-free approach} by allowing the algorithm to plan in the real world, using the cover for initializing the underlying policy optimizer. We remark that only a handful of prior (provable) exploration algorithms~\cite{jin2018q,strehl2006pac} are model-free in the tabular setting, and these are largely value-based. Table~\ref{tbl:tabular} shows the relative landscape of results for the tabular case. Here, we can compare tabular approaches when the aggregated MDP may only approximately be an MDP. For the latter, we consider the question of \emph{state aggregation}, where states are aggregated into ``meta-states'' by some given state-aggregation function~\citep{li2006towards}. The hope is that the aggregated MDP is also approximately an MDP (with a smaller number of aggregated states). Table~\ref{tbl:tabular} compares the effectiveness of tabular algorithms in this case, where the state-aggregation function introduces model misspecification. Importantly, EPOC\xspace provides a local guarantee, in a more model-agnostic sense, unlike model-based and Bellman backup-based methods. Our main results show that EPOC\xspace is provably sample and computationally efficient for \emph{both tabular and linear MDPs}, where EPOC\xspace finds a near-optimal policy with a sample complexity polynomial in all the relevant parameters of the (linear) MDP. Furthermore, we give theoretical support that the direct approach is particularly favorable with regards to function approximation and model misspecification. Highlights are as follows: \paragraph{RKHS in Linear MDPs:} For the linear MDPs proposed by \cite{jin2019provably}, our results hold when the linear MDP features live in an infinite-dimensional Reproducing Kernel Hilbert Space (RKHS). It is not immediately evident how to extend the prior work on linear MDPs (e.g.,~\citep{jin2019provably}) to this setting (due to concentration issues with data re-use).
The following informal theorem summarizes this contribution. \begin{theorem}[Informal theorem for EPOC\xspace on linear MDPs] With high probability, EPOC\xspace finds an $\epsilon$ near-optimal policy with a number of samples $\widetilde{O}\left(\text{poly}\left(1/(1-\gamma), \mathcal{I}_N, 1/\epsilon, W \right)\right)$, where $W$ is related to the maximum RKHS norm of any policy's Q function and $\mathcal{I}_N$ is the maximum information gain defined with respect to the kernel. Here, $\mathcal{I}_N$ implicitly measures the effective dimensionality of the problem, and $\mathcal{I}_N = \widetilde{O}(d)$ for a linear kernel with $d$-dimensional features. \end{theorem} \paragraph{Bounded transfer error and state aggregation:} When specialized to a state-aggregation setting, we show that EPOC\xspace provides a different approximation guarantee in comparison to prior works. In particular, the aggregation need only be good locally, under the visitations of the comparison policy. This means that the quality of the aggregation need only be good in the regions that a high-value policy tends to visit. More generally, we analyze EPOC\xspace under a notion of a small transfer error in critic fitting~\citep{agarwal2019optimality}---a condition on the error of a best on-policy critic under a comparison policy's state distribution---which generalizes the special case of state aggregation, and show that EPOC\xspace enjoys a favorable sample complexity whenever this transfer error is small. We also instantiate the general result with other concrete examples where EPOC\xspace is effective, and where we argue prior approaches will not be provably accurate. The following is an informal statement for the special case of state aggregation with model misspecification. \begin{theorem}[Informal theorem for state aggregation] With high probability, EPOC\xspace finds an $\epsilon + \epsilon_{misspec}$ near-optimal policy with $\widetilde{O}\left(\text{poly}\left(|\mathcal{Z}| , 1/(1-\gamma), 1/\epsilon\right) \right)$ many samples, where $\mathcal{Z}$ is the set of abstracted states; $\epsilon_{misspec} = O\left(\ensuremath{\mathbb{E}}_{s\sim d^\star}\left[\max_{a} \epsilon_{misspec}(s,a)\right] / (1-\gamma)^3 \right)$, where $d^\star$ is the state visitation distribution of an optimal policy (the distribution of which states an optimal policy tends to visit), and $\epsilon_{misspec}(s,a)$ is a measure of the model-misspecification error at the state-action pair $(s,a)$ (a disagreement measure between dynamics and rewards of state-action pairs aggregated to the same abstract state as $(s,a)$). \end{theorem} \paragraph{Empirical evaluation:} We provide experiments showing the viability of EPOC\xspace in settings where prior bonus-based approaches such as Random Network Distillation~\citep{burda2018exploration} do not recover optimal policies with high probability. Our experiments show that our basic approach complements and leverages existing deep learning approaches, implicitly also verifying the robustness of EPOC\xspace outside the regime where the sample complexity bounds provably hold. \subsection{Robustness to ``Delusional Bias'' with Partially Well-specified Models} \label{sec:example} In this section, we provide an additional example of model misspecification where we show that EPOC\xspace succeeds while Bellman backup-based algorithms do not.
The basic spirit of the example is that if our modeling assumption holds for a sub-part of the MDP, then EPOC\xspace can compete with the best policy that only visits states in this sub-part, under some additional assumptions. In contrast, prior model-based and $Q$-learning based approaches rely heavily on the modeling assumptions being globally correct, and bootstrapping-based methods fail in particular due to their susceptibility to the delusional bias problem~\citep{lu2018non}. We emphasize that this constructed MDP and class of features have the following properties: \begin{itemize} \item It is not a linear MDP; we would need the dimension to be exponential in the depth $H$, i.e., $d=\Omega(2^H)$, in order to even approximate the MDP as a linear MDP. \item We have no reason to believe that value-based methods (that rely on Bellman backups, e.g., Q-learning) or model-based algorithms will provably succeed for this example (or simple variants of it). \item Our example will have large worst-case function approximation error, i.e., the $\ell_{\infty}$ error in approximating $Q^\star$ will be (arbitrarily) large. \item The example can easily be modified so that the concentrability coefficient (and the distribution mismatch coefficient) of the starting distribution (or of a random initial policy) will be $\Omega(2^H)$. \end{itemize} Furthermore, we will see that EPOC\xspace succeeds on this example, provably. We describe the construction below (see \pref{fig:binary_tree} for an example). There are two actions, denoted by $L$ and $R$. At the initial state $s_0$, we have $P(s_1 | s_0, L) = 1$, and $P(s_1 | s_1, a) = 1$ for any $a\in \{L, R\}$. We set the reward of taking the left action at $s_0$ to be $1/2$, i.e., $r(s_0,L)=1/2$. This implies that there exists a policy which is guaranteed to obtain at least reward $1/2$. When taking action $a= R$ at $s_0$, we deterministically transition into a depth-$H$ completely balanced binary tree. We can further constrain the MDP so that the optimal value is $1/2$ (coming from the left-most branch), though, as we will see later, this is not needed. The feature construction of $\phi\in \mathbb{R}^d$ is as follows: for $(s_0, L)$, we have $\phi(s_0, L) = e_1$ and $\phi(s_0, R) = e_2$, and $\phi(s_1, a) = e_3$ for any $a\in\{L,R\}$, where $e_1, e_2, e_3$ are the standard basis vectors. For all other states $s\not\in\{s_0,s_1\}$, we have that $\phi(s,a)$ is constrained to be orthogonal to $e_1$, $e_2$, and $e_3$, but otherwise arbitrary. In other words, $\phi(s,a)$ has the first three coordinates equal to zero for any $s\not\in \{s_0,s_1\}$ but can otherwise be pathological. The intuition behind this construction is that the features $\phi$ are allowed to be arbitrarily complicated for states inside the depth-$H$ binary tree, but are uncoupled from the features on the left path. This implies that neither EPOC\xspace nor any other algorithm has access to a good global function approximator.
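As a sanity check on this construction, the following small numerical sketch (our own illustration; the tree features are drawn at random) verifies that regressing the on-path $Q^{\pi}$ onto $\phi$ under the left-path distribution incurs zero error, foreshadowing the zero-transfer-error property discussed below.
\begin{verbatim}
import numpy as np

# Toy check: features on the left path (e1, e2, e3) are orthogonal to the
# (arbitrary) tree features, so regressing Q^pi on phi under the left-path
# distribution incurs zero error.
rng = np.random.default_rng(0)
d, n_tree = 8, 20
phi = {("s0", "L"): np.eye(d)[0], ("s0", "R"): np.eye(d)[1],
       ("s1", "L"): np.eye(d)[2], ("s1", "R"): np.eye(d)[2]}
for i in range(n_tree):  # pathological tree features: first 3 coords zero
    v = rng.normal(size=d)
    v[:3] = 0.0
    phi[("tree", i)] = v

# Q^pi on the left path for the always-left policy:
# Q(s0, L) = 1/2 (then zero reward forever), Q(s1, .) = 0.
targets = {("s0", "L"): 0.5, ("s1", "L"): 0.0, ("s1", "R"): 0.0}
X = np.stack([phi[k] for k in targets])
y = np.array(list(targets.values()))
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(X @ theta, y))  # True: exact on-policy fit
\end{verbatim}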
Furthermore, as discussed in the following remark, these features do not provide a good approximation of the true dynamics as a linear MDP. \begin{remark}(Linear-MDP approximation failure). As the MDP is deterministic, we would need dimension $d = \Omega(2^H)$ in order to approximate the MDP as a linear MDP (in the sense required in~\cite{jin2019provably}). This is because the rank of the transition matrix is $\Omega(2^H)$. \end{remark} However, the on-policy nature of EPOC\xspace ensures that there always exists a best linear predictor that predicts $Q^{\pi}$ well under the optimal trajectory (the left-most path), due to the fact that the features on $s_0$ and $s_1$ are decoupled from the features of the rest of the states inside the binary tree. This means that the transfer error is always zero, which is formally stated in the following corollary. \begin{corollary}[Corollary of Theorem~\ref{thm:agnostic}] EPOC\xspace is guaranteed to find a policy with value greater than $1/2-\epsilon$ with probability greater than $1-\delta$, using a number of samples that is $O\left(\textrm{poly}(H,d, 1/\epsilon,\log(1/\delta))\right)$. This is due to the transfer error being zero. \label{cora:agnostic} \end{corollary} We provide a proof of the corollary in~\pref{app:examples}. \paragraph{Intuition for the success of EPOC\xspace.} Since the corresponding features of the binary subtree have no guarantees in the worst case, EPOC\xspace may not successfully find the best global policy in general. However, it does succeed in finding a policy competitive with the best policy that remains in the \emph{favorable} sub-part of the MDP satisfying the modeling assumptions (e.g., the left-most trajectory in \pref{fig:binary_tree}). We note that the feature orthogonality is important (at least for a provable guarantee): otherwise, the errors in fitting value functions on the binary subtree could damage our value estimates on the favorable parts as well, though this effect may be less pronounced in practice. \paragraph{Delusional bias and challenges with Bellman backup (and model-based) approaches.} While we do not explicitly construct algorithm-dependent lower bounds in our construction, we now discuss why obtaining guarantees similar to ours with Bellman backup-based (or even model-based) approaches may be challenging with the current approaches in the literature. We are not assuming any guarantees about the quality of the features in the right subtree (beyond the aforementioned orthogonality). Specifically, for Bellman backup-based approaches, the following two observations (similar to those stressed in~\citet{lu2018non}), when taken together, suggest difficulties for algorithms which enforce consistency by assuming the Markov property holds: \begin{itemize} \item (Bellman Consistency) The algorithm does value-based backups, with the property that it does an exact backup whenever this is possible. Note that due to our construction, such algorithms will seek to do an exact backup for $Q(s_0,R)$, where they estimate $Q(s_0,R)$ to be their value estimate on the right subtree. This is because the feature $\phi(s_0,R)$ is orthogonal to all other features, so a zero-error Bellman backup is possible without altering estimation in any other part of the tree. \item (One-Sided Errors) Suppose the true value of the subtree is less than $1/2-\Delta$, and suppose that there exists a set of features for which the algorithm approximates the value of the subtree to be larger than $1/2$. Current algorithms are not guaranteed to return values with one-sided error; with an arbitrary featurization, it is not evident why such a property would hold. \end{itemize} More generally, what is interesting about the state-aggregation featurization is that it permits us to run \emph{any} tabular RL learning algorithm.
Here, it is not evident that \emph{any} other current tabular RL algorithm, including model-based approaches, can achieve guarantees similar to our average-case guarantees, due to their strong reliance on the Markov property. In this sense, our work provides a unique guarantee with respect to model misspecification in the RL setting.
\paragraph{Failure of concentrability-based approaches.} Some of the prior results on policy optimization algorithms, starting from the Conservative Policy Iteration algorithm of \citet{kakade2002approximately} and further developed in a series of subsequent papers~\citep{Scherrer:API,geist2019theory,agarwal2019optimality}, provide the strongest guarantees in settings without exploration, but considering function approximation. As remarked in Section~\ref{sec:linear}, most works in this literature assume that the maximal density ratio between the initial state distribution and the comparator policy is bounded. In the MDP of~\pref{fig:binary_tree}, this quantity seems benign, since the ratio is at most $H$ for the comparator policy that goes along the left path (by acting randomly in the initial state). However, we can easily change the left path into a fully balanced binary tree as well, with $O(H)$ additional features that let us realize the values on the left-most path (where the comparator goes) exactly, while keeping all the other features orthogonal to these. It is unclear how to design an initial distribution with a good concentrability coefficient, but EPOC\xspace still competes with the comparator following the left-most path, since it can realize the value functions on that path exactly, and the remaining parts of the MDP do not interfere with this estimation. \subsection{Key Lemmas} \label{sec:proof_tech} In this section, we highlight some of the key lemmas that we use to analyze EPOC\xspace. The key technical lemma is the following NPG-like convergence result, but with an extra term that accounts for the cumulative bonuses collected by EPOC\xspace. \begin{lemma}[EPOC\xspace Convergence] For an arbitrary comparator policy $\pi^\star$, under \pref{ass:transfer_bias} and the following condition for all $n$: \begin{align*} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho_{\mix}^n}\left( Q^n(s,a; r+b^n) - \theta^n\cdot \phi(s,a) \right)^2 \leq \min_{\theta:\|\theta\|\leq W} \ensuremath{\mathbb{E}}_{(s,a)\sim \rho_{\mix}^n}\left( Q^n(s,a; r+b^n) - \theta\cdot \phi(s,a) \right)^2 + \epsilon_{stat}, \end{align*} where $\epsilon_{stat} \in \mathbb{R}^+$, we have: \begin{align*} V^{\pi^\star} - \max_{n\in [0,\dots, N-1]} V^n & \leq \frac{1}{1-\gamma}\left(\sqrt{\frac{4W^2\log(A)}{N}} + 2\sqrt{A\epsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{\beta N \epsilon_{stat}}\right) \\ &\qquad + \frac{1}{N} \sum_{n=0}^{N-1} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n}}\left[b^n(s,a) \right]. \end{align*} \label{lem:epoch_convergence} \end{lemma} Note that $\epsilon_{stat}$ is the generalization error from constrained linear regression, which typically scales on the order of $O(\text{poly}(W)/\sqrt{M})$, with $M$ being the number of samples used for the linear regression. The threshold $\beta$ is set to be $O(\text{poly}(\epsilon))$. Compared to Q-NPG in \cite{agarwal2019optimality}, we note that EPOC\xspace completely eliminates the need for an initial distribution with a nontrivial condition number (see Remark~\ref{remark:compare_to_Q_npg}). Instead, EPOC\xspace's convergence result contains a new term: the average expected reward bonus collected by the sequence of learned policies.
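To build intuition for why this bonus term vanishes, the following minimal numerical sketch (our own illustration, not part of the analysis) accumulates elliptical bonuses along a stream of feature vectors and compares their running average against the log-det potential, which plays the role of the information gain below.
\begin{verbatim}
import numpy as np

def average_bonus_vs_info_gain(Phi, lam=1.0, beta=1.0):
    # Phi: (N, d) stream of feature vectors with ||phi|| <= 1.
    # Accumulates bonuses phi^T Sigma^{-1} phi / beta and compares
    # their average against the log-det potential (information gain).
    N, d = Phi.shape
    Sigma = lam * np.eye(d)
    total_bonus = 0.0
    for phi in Phi:
        total_bonus += float(phi @ np.linalg.solve(Sigma, phi)) / beta
        Sigma += np.outer(phi, phi)
    info_gain = 0.5 * np.linalg.slogdet(Sigma / lam)[1]
    return total_bonus / N, info_gain

rng = np.random.default_rng(0)
Phi = rng.normal(size=(2000, 5))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)
avg_b, ig = average_bonus_vs_info_gain(Phi)
# avg_b shrinks roughly like O(d log N / N), while ig grows only
# logarithmically in N, matching the lemma below.
\end{verbatim}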
The next lemma shows that the cumulative reward bonus scales on the order of the information gain $\mathcal{I}_N(1)$ (e.g., in the finite-dimensional setting with bounded feature norm, $\mathcal{I}_N(1)$ scales as $O(d\log(N+1))$). \begin{lemma}[Information Gain and Average Reward Bonus] Consider any sequence of policies $\pi^0, \cdots, \pi^{N-1}$ for any $N$. We have: \begin{align*} \frac{1}{N }\sum_{n=0}^{N-1} \ensuremath{\mathbb{E}}_{(s,a)\sim d^{n}} \left[ b^n (s,a) \right] \leq \frac{\mathcal{I}_N(1)}{N \beta (1-\gamma)}. \end{align*} \end{lemma} Hence, for a large $N$ (large enough to offset $\beta(1-\gamma)$, where $\beta = O(\text{poly}(\epsilon))$), the average reward bonus converges to zero, as the maximum information gain grows sublinearly for common kernels \citep{srinivas2010gaussian}. Combining the above two lemmas, we obtain the following convergence result for EPOC\xspace: \begin{align*} &V^{\pi^\star} - \max_{n\in [0,\dots, N-1]} V^n \\ & \leq \frac{1}{1-\gamma}\left(\sqrt{\frac{4W^2\log(A)}{N}} + 2\sqrt{A\epsilon_{bias}} + 2\sqrt{ \beta \lambda W^2 } + 2\sqrt{\beta N \epsilon_{stat}}\right) + \frac{\mathcal{I}_N(1)}{N\beta(1-\gamma)}. \end{align*} The final proof of \pref{thm:agnostic} consists of combining the above two lemmas and setting the parameters ($N, \beta, \lambda, \epsilon_{stat}$) properly, so that the returned policy is $O(\epsilon + \sqrt{A\epsilon_{bias}}/(1-\gamma))$-close to $\pi^\star$ in value. The detailed proof is included in \pref{app:analysis}. \subsection{Related Work} \label{sec:related} We first discuss work on policy gradient methods and incremental policy optimization; we then discuss work on exploration in the context of explicit (or implicit) assumptions on the MDP (which permit sample complexities that do not explicitly depend on the number of states), and then ``on-policy'' exploration methods. Finally, we discuss the recent and concurrent work of \citet{cai2019provably,efroni2020optimistic}, which provides an optimistic policy optimization approach that uses off-policy data. Our line of work seeks to extend the recent line of provably correct policy gradient methods~\cite{agarwal2019optimality,fazel2018global,russoGlobal,caiTRPO,even-dar2009online, DBLP:journals/corr/NeuJG17,Azar:2012:DPP:2503308.2503344,abbasi2019politex} to incorporate exploration. As discussed in the introduction, our focus is that policy gradient methods, and more broadly ``incremental'' methods --- those which make gradual policy changes, such as Conservative Policy Iteration (CPI) \citep{kakade2002approximately,scherrer2014local,Scherrer:API}, Policy Search by Dynamic Programming (PSDP)~\citep{NIPS2003_2378}, and MD-MPI~\cite{geist2019theory} --- have guarantees with function approximation that are stronger than those of the more abrupt approximate dynamic programming methods, which rely on the boundedness of the more stringent concentrability coefficients~\cite{munos2005error, szepesvari2005finite, antos2008learning}; see \citet{Scherrer:API,agarwal2019optimality,geist2019theory,chen2019information,shani2019adaptive} for further discussion. Our main agnostic result shows that EPOC\xspace is more robust than all extant bounds with function approximation, in terms of both concentrability coefficients and distribution mismatch coefficients; as such, our results require substantially weaker assumptions, building on the recent work of~\citet{agarwal2019optimality}, who develop a similar notion of robustness in the policy optimization setting without exploration. Specifically, when specializing to linear MDPs and tabular MDPs, our algorithm is PAC, while algorithms such as CPI and NPG are not PAC without further assumptions on the reset distribution~\citep{agarwal2019optimality}.
We now discuss results with regards to exploration in the context of explicit (or implicit) assumptions on the underlying MDP. To our knowledge, all prior works provide provable algorithms only under either realizability assumptions or well-specified modeling assumptions; the violations tolerated in these settings are, at best, bounded in an $\ell_\infty$, worst-case sense. The most general set of results are those in \cite{jiang2017contextual}, which proposed the concept of Bellman rank to characterize the sample complexity of value-based learning methods and gave an algorithm that has polynomial sample complexity in terms of the Bellman rank, though the proposed algorithm is not computationally efficient. Bellman rank is bounded for a wide range of problems, including MDPs with a small number of hidden states, linear MDPs, LQRs, etc. Later work gave computationally efficient algorithms for certain special cases~\citep{dann2018polynomial,du2019provably,yang2019reinforcement,jin2019provably, homer}. Recently, Witness rank, a generalization of Bellman rank to model-based methods, was proposed by~\cite{sun2019model} and was later extended to model-based reward-free exploration by \cite{henaff2019explicit}. We focus on the linear MDP model studied in~\cite{yang2019reinforcement,jin2019provably}. We note that \citet{yang2019reinforcement} also prove a result for a type of linear MDP, though their model is significantly more restrictive than the model in~\citet{jin2019provably}. Another notable result is due to \citet{wen2013efficient}, who showed that in deterministic systems, if the optimal $Q$-function is within a pre-specified function class with bounded Eluder dimension (for which the class of linear functions is a special case), then the agent can learn the optimal policy using a polynomial number of samples; this result has been generalized by \cite{du2019provably} to deal with stochastic rewards, using further assumptions such as low-variance transitions and a strictly positive optimality gap. With regards to ``on-policy'' exploration methods, to our knowledge, there are relatively few provable results, and these are limited to the tabular case; they are all based on Q-learning with uncertainty bonuses, including the works in~\cite{strehl2006pac,jin2018q}. More generally, there are a host of results in the tabular MDP setting that handle exploration, which are either model-based or which re-use data (the re-use of data is often simply planning in the empirical model), including~\cite{brafman2003r,kearns2002optimal,azar2017minimax,kakade2003sample,jaksch2010optimal,agrawal2017optimistic, lattimore2012pac,lattimore2014near, dann2015sample, szita2010model}. \citet{cai2019provably,efroni2020optimistic} recently study algorithms based on exponential gradient updates for tabular MDPs, utilizing the mirror descent analysis first developed in~\cite{even-dar2009online} along with the idea of optimism in the face of uncertainty. Both approaches use a critic computed from off-policy data and can be viewed as model-based, since the algorithm stores all previous off-policy data and plans in what is effectively the empirically estimated model (with appropriately chosen uncertainty bonuses); in contrast, model-free approaches such as $Q$-learning do not store the empirical model and have a substantially lower memory footprint (see~\cite{jin2018q} for discussion on this latter point).
\citet{cai2019provably} further analyze their algorithm in the linear kernel MDP model~\citep{zhou2020provably}, which is a different model from the linear MDP model of~\citet{jin2019provably}. Notably, neither model is a special case of the other. It is worth observing that the linear kernel MDP model of \cite{zhou2020provably} is characterized by at most $d$ parameters, where $d$ is the feature dimensionality, so that model-based learning is feasible; in contrast, the linear MDP model of~\citet{jin2019provably} requires a number of parameters that is $S\cdot d$, and so it is not describable using a small number of parameters (and yet sample-efficient RL is still possible). See~\citet{jin2019provably} for further discussion. \section{Setting} \label{section:setting} A Markov Decision Process (MDP) $\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, r, \gamma,s_0)$ is specified by a state space $\mathcal{S}$; an action space $\mathcal{A}$; a transition model $P: \mathcal{S} \times \mathcal{A} \rightarrow \Delta(\mathcal{S})$ (where $\Delta(\mathcal{S})$ denotes the set of probability distributions over $\mathcal{S}$); a reward function $r: \mathcal{S}\times \mathcal{A} \to [0,1]$; a discount factor $\gamma \in [0, 1)$; and a starting state $s_0$. We assume $\mathcal{A}$ is discrete and denote $A = \lvert\mathcal{A}\rvert$. Our results generalize to a starting state distribution $\mu_0\in\Delta(\mathcal{S})$, but we use a single starting state $s_0$ to emphasize the need to perform exploration. A policy $\pi: \mathcal{S} \to \Delta(\mathcal{A})$ specifies a decision-making strategy in which the agent chooses actions based on the current state, i.e., $a \sim\pi(\cdot | s)$. The value function $V^\pi(\cdot;r): \mathcal{S} \to \mathbb{R}$ is defined as the expected discounted sum of future rewards, under reward function $r$, starting at state $s$ and executing $\pi$, i.e., \begin{align*} V^\pi(s;r) := \ensuremath{\mathbb{E}} \left[\sum_{t=0}^\infty \gamma^t r(s_t, a_t) | \pi, s_0 = s\right], \end{align*} where the expectation is taken with respect to the randomness of the policy and the environment $\mathcal{M}$. The \emph{state-action} value function $Q^\pi(\cdot,\cdot;r): \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is defined as \begin{align*} Q^\pi(s,a;r) := \ensuremath{\mathbb{E}}\left[\sum_{t=0}^\infty \gamma^t r(s_t, a_t) | \pi, s_0 = s, a_0 = a \right]. \end{align*} We define the discounted state-action distribution $d_{s'}^\pi$ of a policy $\pi$: \begin{center} \mbox{$d_{s'}^\pi(s,a) := (1-\gamma) \sum_{t=0}^\infty \gamma^t {\Pr}^\pi(s_t=s,a_t=a|s_0=s')$}, \end{center} where $\Pr^\pi(s_t=s,a_t=a|s_0=s')$ is the probability that $s_t=s$ and $a_t=a$ after we execute $\pi$ from $t=0$ onwards, starting at state $s'$ in the model $\mathcal{M}$. Similarly, we define $d^{\pi}_{s',a'}(s,a)$ as: \begin{align*} d^{\pi}_{s',a'}(s,a) := (1-\gamma) \sum_{t=0}^{\infty} \gamma^t {\Pr}^{\pi}(s_t = s, a_t = a | s_0=s', a_0 = a'). \end{align*}For any state-action distribution $\nu$, we write $d^{\pi}_{\nu}(s,a):= \sum_{(s',a')\in\mathcal{S}\times\mathcal{A}} \nu(s',a') d^{\pi}_{s',a'}(s,a)$. For ease of presentation, we assume that the agent can reset to $s_0$ at any point in the trajectory.\footnote{This can be replaced with a termination at each step with probability $1-\gamma$.} We denote $d^{\pi}_{\nu}(s) = \sum_{a}d^{\pi}_{\nu}(s,a)$.
The goal of the agent is to find a policy $\pi$ that maximizes the expected value from the starting state $s_0$; i.e., the optimization problem is $\max_\pi V^{\pi}(s_0)$, where the $\max$ is over some policy class. For completeness, we specify a $d^{\pi}_{\nu}$-sampler and an unbiased estimator of $Q^{\pi}(s,a; r)$ in Algorithm~\ref{alg:sampler_est}, both of which are standard in discounted MDPs. The $d^{\pi}_\nu$-sampler samples $(s,a)$ i.i.d.\ from $d^{\pi}_{\nu}$, and the $Q^{\pi}$-estimator returns an unbiased estimate of $Q^{\pi}(s,a;r)$ for a given triple $(s,a,r)$ via a single roll-out from $(s,a)$. \paragraph{Notation.} When clear from context, we write $d^\pi(s,a)$ and $d^\pi(s)$ to denote $d_{s_0}^\pi(s,a)$ and $d^{\pi}_{s_0}(s)$ respectively, where $s_0$ is the starting state in our MDP. For iterative algorithms which obtain policies at each episode, we let $V^{n}$, $Q^{n}$ and $A^{n}$ denote the corresponding quantities associated with episode $n$. For a vector $v$, we denote $\|v\|_2=\sqrt{\sum_i v_i^2}$, $\|v\|_1=\sum_i |v_i|$, and $\|v\|_\infty=\max_i |v_i|$. For a matrix $V$, we define $\|V\|_2 = \sup_{x:\|x\|_2\leq 1}\| V x \|_2$, and $\det(V)$ as the determinant of $V$. We use $\text{Uniform}(\mathcal{A})$ (in short, $\text{Unif}_{\mathcal{A}}$) to denote the uniform distribution over the set $\mathcal{A}$. \begin{algorithm}[!t] \caption{$d^{\pi}_\nu$ sampler and $Q^{\pi}$ estimator} \label{alg:sampler_est} \begin{algorithmic}[1] \Function{$d_{\nu}^\pi$-sampler}{} \State \hspace*{-0.1cm}\textbf{Input}: $\nu\in\Delta(\mathcal{S}\times\mathcal{A})$, $\pi$ \State Sample $s_0,a_0 \sim \nu$ \State Execute $\pi$ from $s_0, a_0$; at any step $t$ with $(s_t,a_t)$, terminate the episode with probability $1-\gamma$ \State \hspace*{-0.1cm}\textbf{Return}: $s_t,a_t$ \EndFunction \Function{$Q^\pi$-estimator}{} \State \hspace*{-0.1cm}\textbf{Input}: current state-action pair $(s,a)$, reward function $r$, policy $\pi$ \State Execute $\pi$ from $(s_0,a_0) = (s, a)$; at step $t$ with $(s_t,a_t)$, terminate with probability $1-\gamma$ \State \hspace*{-0.1cm}\textbf{Return}: $\widehat{Q}^{\pi}(s,a) = \sum_{i=0}^t r(s_i,a_i)$ where $(s_0,a_0) = (s,a)$ \EndFunction \end{algorithmic} \end{algorithm}
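As a complement to Algorithm~\ref{alg:sampler_est}, here is a minimal Python sketch of the two procedures; the environment-interaction callables (\texttt{sample\_nu}, \texttt{step}, \texttt{reward}) are hypothetical placeholders for the MDP interface, and $\pi$ is assumed to be a tabular policy.
\begin{verbatim}
import numpy as np

def d_pi_nu_sampler(sample_nu, step, pi, gamma, rng):
    # Sample (s, a) ~ d^pi_nu: draw (s0, a0) from nu, then at each step
    # continue with probability gamma, else return the current pair.
    s, a = sample_nu(rng)
    while rng.random() < gamma:
        s = step(s, a)                       # s' ~ P(. | s, a)
        a = rng.choice(len(pi[s]), p=pi[s])  # a' ~ pi(. | s')
    return s, a

def q_pi_estimator(s, a, step, reward, pi, gamma, rng):
    # Unbiased estimate of Q^pi(s, a; r): sum rewards along a single
    # rollout whose length is geometric with termination prob 1 - gamma.
    total = reward(s, a)
    while rng.random() < gamma:
        s = step(s, a)
        a = rng.choice(len(pi[s]), p=pi[s])
        total += reward(s, a)
    return total
\end{verbatim}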
\end{align*} \end{definition} The model-misspecification measures the maximum possible disagreement, in terms of transitions and rewards, of two state-action pairs which are mapped to the same abstracted state. We now argue that EPOC\xspace provides a stronger guarantee than prior approaches in the presence of state-aggregation error. The folklore result is that, with the definition $\|\epsilon_{misspec}\|_\infty=\max_{z\in\mathcal{Z}} \epsilon_{misspec}(z)$, algorithms such as UCB and $Q$-learning succeed with an additional additive error of $\|\epsilon_{misspec}\|_\infty/(1-\gamma)^2$, and will have sample complexity guarantees that are polynomial in only $|\mathcal{Z}|$. See~\citep{li2009unifying,dong2019provably} for conditions which involve only $Q^\star$ but which are still \emph{global} in nature. The following theorem shows that EPOC\xspace only requires a more local guarantee: the aggregation needs to be good only under the distribution of abstracted states that the comparator policy tends to visit. \begin{theorem}[Misspecified, State-Aggregation Bound]\label{thm:state_aggregation} Fix $\epsilon, \delta\in (0,1)$. Let $\pi^\star$ be an arbitrary comparator policy. There exists a setting of the parameters such that EPOC\xspace (\pref{alg:epoc}) uses a total number of samples at most $\text{poly}\left( |\mathcal{Z}|, \log(A), \frac{1}{1-\gamma}, \frac{1}{\epsilon}, \ln\left(\frac{1}{\delta}\right) \right)$ and, with probability greater than $1-\delta$, returns a policy $\widehat \pi$ such that \[ V^{\widehat \pi}(s_0) \geq V^{\pi^\star}(s_0) - \epsilon - \frac{2 \ensuremath{\mathbb{E}}_{s \sim d^{\pi^\star} } \max_a \left[\epsilon_{misspec}(\phi(s,a))\right] }{(1-\gamma)^3}. \] \end{theorem} Here, it could be that $ {\ensuremath{\mathbb{E}}_{s\sim d^{\pi^\star}}\max_{a}[\epsilon_{misspec}(\phi(s,a))] } \ll \|\epsilon_{misspec}\|_\infty$, because our error notion is an average-case one under the comparator. We refer readers to \pref{app:state_agg} for a detailed proof of the above theorem, which can also be regarded as a corollary of a more general agnostic theorem (\pref{thm:agnostic}) that we present in the next section. Note that here we pay an additional $1/(1-\gamma)$ factor in the approximation error due to the fact that, after adding the reward bonus, we have $r(s,a)+b^n(s,a) \in [0,1/(1-\gamma)]$.\footnote{We note that instead of using a reward bonus, we could construct absorbing MDPs to keep the rewards in the range $[0,1]$; in that case we would pay $1/(1-\gamma)^2$ in the approximation error instead.} One point worth reflecting on is how few guarantees there are in the more general RL setting (beyond dynamic programming) which address model-misspecification in a manner that goes beyond global $\ell_\infty$ bounds. Our conjecture is that this is not merely an analysis issue but an algorithmic one: incremental algorithms such as EPOC\xspace may be required for strong guarantees under misspecification. We return to this point in \pref{sec:example}, with an example showing why this might be the case. \subsection{Robustness to ``Delusional Bias'' with Partially Well-specified Models} In this section, we provide an additional example of model misspecification where we show that EPOC\xspace succeeds while Bellman-backup-based algorithms do not.
The basic spirit of the example is that if our modeling assumption holds for a sub-part of the MDP, then, under some additional assumptions, EPOC\xspace can compete with the best policy that only visits states in this sub-part. In contrast, prior model-based and $Q$-learning-based approaches heavily rely on the modeling assumptions being globally correct, and bootstrapping-based methods fail in particular due to their susceptibility to the delusional bias problem~\citep{lu2018non}. We emphasize that this constructed MDP and class of features have the following properties: \begin{itemize} \item It is not a linear MDP; we would need the dimension to be exponential in the depth $H$, i.e., $d=\Omega(2^H)$, in order to even approximate the MDP as a linear MDP. \item Value-based or model-based algorithms that rely on Bellman backups (e.g., $Q$-learning) will fail, unless they satisfy an agnostic approximation condition that no existing algorithm is known to have. \item Our example will have large worst-case function approximation error, i.e., the $\ell_{\infty}$ error in approximating $Q^\star$ will be (arbitrarily) large. \item The example can be easily modified so that the concentrability coefficient (and the distribution mismatch coefficient) of the starting distribution (or a random initial policy) will be $\Omega(2^H)$. \end{itemize} Furthermore, we will see that EPOC\xspace provably succeeds on this example. We describe the construction below (see \pref{fig:binary_tree} for an example). There are two actions, denoted by $L$ and $R$. At the initial state $s_0$, we have $P(s_1 | s_0, L) = 1$ and $P(s_1 | s_1, a) = 1$ for any $a\in \{L, R\}$. We set the reward of taking the left action at $s_0$ to be $1/2$, i.e., $r(s_0,L)=1/2$. This implies that there exists a policy which is guaranteed to obtain at least reward $1/2$. When taking action $a= R$ at $s_0$, we deterministically transition into a depth-$H$ completely balanced binary tree. We can further constrain the MDP so that the optimal value is $1/2$ (coming from the leftmost branch), though, as we see later, this is not needed. The feature construction of $\phi\in \mathbb{R}^d$ is as follows: at $s_0$, we have $\phi(s_0, L) = e_1$ and $\phi(s_0, R) = e_2$, and $\phi(s_1, a) = e_3$ for any $a\in\{L,R\}$, where $e_1, e_2, e_3$ are the standard basis vectors. For all other states $s\not\in\{s_0,s_1\}$, we have that $\phi(s,a)$ is constrained to be orthogonal to $e_1$, $e_2$, and $e_3$, but is otherwise arbitrary. In other words, $\phi(s,a)$ has the first three coordinates equal to zero for any $s\not\in \{s_0,s_1\}$ but can otherwise be pathological. The intuition behind this construction is that the features $\phi$ are allowed to be arbitrarily complicated for states inside the depth-$H$ binary tree, but are decoupled from the features on the left path. This implies that both EPOC\xspace and any other algorithm do not have access to a good global function approximator. Furthermore, as discussed in the following remark, these features do not provide a good approximation of the true dynamics as a linear MDP. \begin{remark}[Linear-MDP approximation failure] As the MDP is deterministic, we would need dimension $d = \Omega(2^H)$ in order to approximate the MDP as a linear MDP (in the sense required in~\citet{jin2019provably}). This is because the rank of the transition matrix is $\Omega(2^H)$.
\end{remark} However, the on-policy nature of EPOC\xspace ensures that there always exists a best linear predictor that can predict $Q^{\pi}$ well under the optimal trajectory (the leftmost path), due to the fact that the features on $s_0$ and $s_1$ are decoupled from the features in the rest of the states inside the binary tree. Thus the transfer error is always zero. This is formally stated in the following corollary. \begin{corollary}[Corollary of Theorem~\ref{thm:agnostic}] EPOC\xspace is guaranteed to find a policy with value greater than $1/2-\epsilon$ with probability greater than $1-\delta$, using a number of samples that is $O\left(\textrm{poly}(H,d, 1/\epsilon,\log(1/\delta))\right)$. This is due to the transfer error being zero. \label{cora:agnostic} \end{corollary} We provide a proof of the corollary in~\pref{app:examples}. \paragraph{Intuition for the success of EPOC\xspace.} Since the corresponding features of the binary subtree have no guarantees in the worst case, EPOC\xspace may not successfully find the best global policy in general. However, it does succeed in finding a policy competitive with the best policy that remains in the \emph{favorable} sub-part of the MDP satisfying the modeling assumptions (e.g., the leftmost trajectory in \pref{fig:binary_tree}). We do note that the feature orthogonality is important (at least for a provable guarantee); otherwise, the errors in fitting value functions on the binary subtree can damage our value estimates on the favorable parts as well, though this effect may be milder in practice. \paragraph{Delusional bias and the failure of Bellman-backup-based approaches.} As remarked above, no algorithm can hope to reliably estimate the value functions in the binary subtree accurately with polynomial sample complexity, since we are not assuming any guarantees about the quality of the features in that part of the tree. Furthermore, there is no evidence that prior exploration approaches relying on Bellman backups will not overestimate the values from this subtree, due to the uncontrolled approximation error. Parametric model-based approaches likewise have no guarantees if the model parameterization is not accurate in the subtree. If such an overestimate occurs, then this overestimate will be perfectly backed up to the root at $(s_0, R)$, since we have no loss in expressivity at $(s_0, R)$ (nor is there interference from features in any other state-action pairs). Hence, even if the values on the left path are accurately estimated, the algorithm may incorrectly favor the right action at the root. This example serves to highlight the crucial limitations of Bellman backups when considering actions that lead to regions of model misspecification, much like the issues highlighted in~\citet{lu2018non}. \paragraph{Failure of concentrability-based approaches.} Some of the prior results on policy optimization algorithms, starting from the Conservative Policy Iteration algorithm~\citep{kakade2002approximately} and further developed in a series of subsequent papers~\citep{Scherrer:API,geist2019theory,agarwal2019optimality}, provide the strongest guarantees in settings with function approximation but without explicit exploration. As remarked in Section~\ref{sec:linear}, most works in this literature assume that the maximal density ratio between the comparator policy's state distribution and the initial state distribution is bounded.
In the MDP of~\pref{fig:binary_tree}, this quantity is benign, since the ratio is at most $H$ for the comparator policy that goes on the left path (by acting randomly in the initial state). However, we can easily change the left path into a fully balanced binary tree as well, with $O(H)$ additional features that let us realize the values on the leftmost path (where the comparator goes) exactly, while keeping all the other features orthogonal to these. It is unclear how to design an initial distribution with a good concentrability coefficient, but EPOC\xspace still competes with the comparator following the leftmost path, since it can realize the value functions on that path exactly and the remaining parts of the MDP do not interfere with this estimation. \input{proof_techniques} \subsection{Agnostic Guarantees with Bounded Transfer Error} \label{sec:agnostic_result} We now consider a general MDP in this section, where we do not assume that the linear MDP modeling assumptions hold. As $Q - b^n$ may not be linear with respect to the given features $\phi$, we need to consider model misspecification due to the linear function approximation with features $\phi$. We use the recently introduced concept of transfer error from \citet{agarwal2019optimality} below, together with the shorthand notation \[ Q^t_{b^n}(s,a) := Q^{\pi^t}(s,a; r+b^n). \] We capture model misspecification using the following assumption. \begin{assum}[Bounded Transfer Error] \label{ass:transfer_bias} With respect to a target function $f:\mathcal{S}\times\mathcal{A} \rightarrow \mathbb{R}$, define the critic loss function $L(\theta; d, f)$ with $d\in\Delta(\mathcal{S}\times\mathcal{A})$ as: \begin{align*} L\left(\theta; d, f\right) := \ensuremath{\mathbb{E}}_{(s,a)\sim d}\left( \theta\cdot \phi(s,a) - f(s,a) \right)^2, \end{align*} which is the square loss of using the critic $\theta\cdot\phi$ to predict a given target function $f$ under the distribution $d$. Consider an arbitrary comparator policy $\pi^\star$ (not necessarily an optimal policy) and denote the state-action distribution $d^\star(s,a) := d^{\pi^\star}(s) \cdot \text{Unif}_{\mathcal{A}}(a)$. For every episode $n$ and every iteration $t$ inside episode $n$, define: \begin{align*} \theta^t_\star \in \argmin_{\|\theta\|\leq W} L\left( \theta; \rho^n_{\mix}, Q^t_{b^n} - b^n \right). \end{align*} Then we assume that (when running Algorithm~\ref{alg:epoc}) $\theta^t_\star$ has a bounded prediction error when transferred from $\rho^n_{\mix}$ to $d^\star$; more formally: \begin{align*} L\left( \theta^t_\star; d^\star, Q^t_{b^n} - b^n \right) \leq \epsilon_{bias} \in \mathbb{R}^+. \end{align*} \end{assum} Note that the transfer error $\epsilon_{bias}$ measures the prediction error, at episode $n$ and iteration $t$, of a best on-policy fit $\overline{Q}^t_{b^n}(s,a) := b^n(s,a) + \theta^t_\star\cdot \phi(s,a)$, measured under the fixed distribution $d^\star$ of the fixed comparator (note that $d^\star$ is different from the training distribution $\rho^n_{\mix}$, hence the name \emph{transfer}). This assumption first appears in the recent work of~\citet{agarwal2019optimality} in order to analyze policy optimization methods under linear function approximation. As the examples in the preceding subsections illustrate, this is a milder notion of model misspecification than the $\ell_\infty$-variants more prevalent in the literature, as it is an average-case quantity which can be significantly smaller in favorable cases.
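The quantities in Assumption~\ref{ass:transfer_bias} can be estimated from samples. The following is a minimal numpy sketch of ours, assuming feature matrices and Monte Carlo regression targets have already been collected from $\rho^n_{\mix}$ and from $d^\star$ (a small ridge term and a crude projection stand in for the norm constraint $\|\theta\|\leq W$):
\begin{verbatim}
import numpy as np

def fit_critic(Phi_mix, y_mix, W=100.0, lam=1e-6):
    """Least-squares fit of theta on data from rho^n_mix.
    Phi_mix: (m, d) features phi(s, a); y_mix: (m,) targets
    Q^t_{b^n}(s, a) - b^n(s, a)."""
    d = Phi_mix.shape[1]
    theta = np.linalg.solve(Phi_mix.T @ Phi_mix + lam * np.eye(d),
                            Phi_mix.T @ y_mix)
    norm = np.linalg.norm(theta)
    if norm > W:                 # crude projection onto the W-ball
        theta *= W / norm
    return theta

def transfer_error(theta, Phi_star, y_star):
    """Evaluate L(theta; d_star, .): mean squared prediction error
    under samples drawn from the comparator's distribution d_star."""
    return np.mean((Phi_star @ theta - y_star) ** 2)
\end{verbatim}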
We also refer the reader to~\citet{agarwal2019optimality} for further discussion of this assumption. With the above assumption on the transfer error, the next theorem states an agnostic result for the sample complexity of EPOC\xspace: \begin{theorem}[Agnostic Guarantee of EPOC\xspace] Fix $\epsilon, \delta \in (0,1)$ and consider an arbitrary comparator policy $\pi^\star$ (not necessarily an optimal policy). Assume \pref{ass:transfer_bias} holds. There exists a setting of the parameters ($\beta, \lambda, K, M, \eta, N, T$) such that EPOC\xspace uses a number of samples at most $\text{poly}\left( \frac{1}{1-\gamma},\log(A), \frac{1}{\epsilon}, \mathcal{I}_N(1), W, \ln\left(\frac{1}{\delta}\right) \right)$ and, with probability greater than $1-\delta$, returns a policy $\widehat \pi$ such that: \begin{align*} V^{\widehat\pi}(s_0) \geq V^{\pi^\star}(s_0) - \epsilon - \frac{\sqrt{2A\epsilon_{bias}}}{1-\gamma}. \end{align*} \label{thm:agnostic} \end{theorem} The precise polynomial of the sample complexity, along with the settings of all the hyperparameters --- $\beta$ (threshold for the bonus), $\lambda$, $K$ (samples for estimating the cover's covariance), $M$ (samples for fitting the critic), $\eta$ (learning rate in NPG), $N$ (number of episodes), and $T$ (number of NPG iterations per episode) --- is provided in \pref{thm:detailed_bound_rmax_pg} (\pref{app:rmaxpg_sample}), {where we also discuss two specific examples of $\phi$ --- a finite-dimensional $\phi\in\mathbb{R}^d$ with bounded norm, and an infinite-dimensional $\phi$ in an RKHS with an RBF kernel (Remark \pref{remark:kernel_discussion}).} The above theorem indicates that if the transfer error $\epsilon_{bias}$ is small, then EPOC\xspace finds a near-optimal policy with polynomial sample complexity without any further assumptions on the MDP. Indeed, for well-specified cases such as tabular MDPs and linear MDPs, the regression target $Q^{\pi}(\cdot,\cdot;r+b^n) - b^n$ is always a linear function of the features, so one can easily show that $\epsilon_{bias}=0$ (which we show in Appendix~\ref{app:app_to_linear_mdp}), as one can pick the best on-policy fit $\theta^t_\star$ to be the exact linear representation of $Q^{\pi}(s,a;r+b^n) - b^n(s,a)$. Further, in the state-aggregation example, we can show that $\epsilon_{bias}$ is upper bounded by the expected model-misspecification with respect to the comparator policy's distribution (\pref{app:state_agg}). A few remarks are in order to illustrate how the notion of transfer error compares to prior work. \begin{remark}[Comparison with concentrability assumptions \citep{kakade2002approximately,Scherrer:API,agarwal2019optimality}] In the theory for policy gradient methods without explicit exploration, a standard device to obtain global optimality guarantees for the learned policies is the use of some exploratory distribution $\nu_0$ over initial states and actions in the optimization algorithm. Given such a distribution, a key quantity that has been used in prior analyses is the maximal density ratio to a comparator policy's state distribution~\citep{kakade2002approximately,Scherrer:API}: $\max_{s\in \mathcal{S}} \frac{d^\star(s)}{\nu_0(s)}$, where we use $d^\star(s)$ to refer to the probability of state $s$ under the comparator $\pi^\star$.
It is easily seen that if EPOC\xspace is run with a similar exploratory initial distribution, then the transfer error is always bounded as well: \begin{align*} \epsilon_{bias} \leq \left\|\frac{ d^\star }{ \nu_0}\right\|_{\infty} L\left( \theta^t_\star; \nu_0, Q^t_{b^n} - b^n \right). \end{align*} In this work, we do not assume access to such an exploratory measure (with coverage); our goal is to find a policy with access only to rollouts from $s_0$. This makes the concentrability-style analysis inapplicable in general, as the starting measure $\nu_0$ for the algorithm is potentially the delta measure over the initial state $s_0$, which can induce an arbitrarily large density ratio. In contrast, the transfer error is always bounded, and is zero in well-specified cases such as tabular MDPs and linear MDPs, as we show in Appendix~\ref{app:app_to_linear_mdp}. \end{remark} \begin{remark}[Comparison with the NPG guarantees in \citet{agarwal2019optimality}] The bounded transfer error assumption (\pref{ass:transfer_bias}) stated here is developed in the recent work of~\citet{agarwal2019optimality}. Their work focuses on understanding the global convergence properties of policy gradient methods, including the specific NPG algorithm used here; it does not consider the design of exploration strategies. Consequently, Assumption~\ref{ass:transfer_bias} alone is not sufficient to guarantee convergence in their setting;~\citet{agarwal2019optimality} make an additional assumption on a relative condition number between the covariance matrices of the comparator distribution $d^\star$ and the initial exploratory distribution $\nu_0$: \[ \kappa = \sup_{w \in \mathbb{R}^d} \frac{w^\top \Sigma_{d^\star}w}{w^\top \Sigma_{\nu_0} w}, \quad \mbox{where} \quad \Sigma_\upsilon = \ensuremath{\mathbb{E}}_{s,a\sim \upsilon} [\phi(s,a)\phi(s,a)^\top]. \] Note that we consider a finite $d$-dimensional feature space for this discussion to be consistent with the prior results. Under the assumption that $\kappa < \infty$, \citet{agarwal2019optimality} provide a bound on the iteration complexity of NPG-style updates with an explicit dependence on $\sqrt{\kappa}$. Related (stronger) assumptions on the relative condition numbers for all possible policies or for the initial distribution $\nu_0$ also appear in the recent works of~\citet{abbasi2019politex} and~\citet{abbasi2019exploration}, respectively (the latter work still assumes access to an exploratory initial policy). Our result does not have any such dependence. In contrast, the distribution $\rho^n_{\mix}$ designed by the algorithm serves as the initial distribution at episode $n+1$, and the reward bonus explicitly encourages our algorithm to visit places where the relative condition number with respect to the current distribution $\rho^n_{\mix}$ is large. \label{remark:compare_to_Q_npg} \end{remark}
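As an aside, the relative condition number $\kappa$ above is the largest generalized eigenvalue of the pair $(\Sigma_{d^\star}, \Sigma_{\nu_0})$, so it can be estimated from empirical covariances; a minimal sketch of ours (with a small regularizer guarding against a rank-deficient $\Sigma_{\nu_0}$):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def empirical_covariance(Phi):
    """E[phi phi^T] from a sample matrix Phi of shape (m, d)."""
    return Phi.T @ Phi / Phi.shape[0]

def relative_condition_number(Sigma_star, Sigma_nu0, eps=1e-8):
    """kappa = sup_w (w^T Sigma_star w) / (w^T Sigma_nu0 w), i.e. the
    largest generalized eigenvalue of (Sigma_star, Sigma_nu0)."""
    d = Sigma_nu0.shape[0]
    vals = eigh(Sigma_star, Sigma_nu0 + eps * np.eye(d),
                eigvals_only=True)
    return vals[-1]              # eigenvalues are in ascending order
\end{verbatim}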
\subsection{Intrinsic Dimension} To measure the sample complexity, we define the \emph{intrinsic dimension} of the underlying MDP $\mathcal{M}$. First, denote the covariance matrix of any policy $\pi$ as $\Sigma^{\pi} = \ensuremath{\mathbb{E}}_{(s,a)\sim d^{\pi}}\left[\phi(s,a)\phi(s,a)^{\top}\right]$. We define the intrinsic dimension below: \begin{definition}[Intrinsic Dimension $\widetilde{d}$] We define the intrinsic dimension as: \begin{align*} \widetilde{d}: =\max_{n\in\mathbb{N}^+} \max_{\{\pi^i\}_{i=1}^n } \frac{ \log\det\left(\sum_{i=1}^n \Sigma^{\pi^i} + \mathbf{I}\right) }{\log(n + 1)}. \end{align*} \label{def:int_dim} \end{definition} This quantity is identical to the intrinsic dimension in Gaussian process bandits \citep{srinivas2010gaussian}; one viewpoint of this quantity is as the information gain from a Bayesian perspective \citep{srinivas2010gaussian}. A related quantity occurs, in a more restricted linear MDP model, in \citet{yang2019reinforcement}. Note that when $\phi(s,a)\in \mathbb{R}^d$, we have that $\log\det\left(\sum_{i=1}^n \Sigma^{\pi^i} + \mathbf{I}\right) \leq d \log(n + 1)$ (as $\|\phi(s,a)\|_2\leq 1$), which means that the intrinsic dimension is at most $d$. Note that $\widetilde{d}\ll d$ if the covariance matrices from a sequence of policies concentrate in a low-dimensional subspace.
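For a fixed policy sequence, the ratio inside Definition~\ref{def:int_dim} (the definition itself takes the maximum over $n$ and over sequences) is straightforward to compute from the per-policy covariances; a minimal numpy sketch of ours:
\begin{verbatim}
import numpy as np

def intrinsic_dimension_ratio(Sigmas):
    """log det(sum_i Sigma^{pi_i} + I) / log(n + 1) for a given list
    of d x d per-policy feature covariance matrices."""
    n = len(Sigmas)
    d = Sigmas[0].shape[0]
    A = sum(Sigmas) + np.eye(d)
    _, logdet = np.linalg.slogdet(A)   # numerically stable log det
    return logdet / np.log(n + 1)
\end{verbatim}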
{ "timestamp": "2020-08-14T02:19:06", "yymm": "2007", "arxiv_id": "2007.08459", "language": "en", "url": "https://arxiv.org/abs/2007.08459" }
\section{Introduction: IVOA provenance data model and provenance of CDS HiPS} Datasets used in astronomy are generally the results of a flow of observation and processing steps. Information on this process is generally called the ``provenance'' of the dataset and is stored in various formats and logical organizations. This generally makes the provenance information difficult to compare and use interoperably among different data collections. This is the main reason why the IVOA developed an astronomy-oriented provenance data model in recent years. This formalization not only allows traceability of products but also supports acknowledgment and contact information, quality and reliability assessment, and discovery of datasets by provenance details. At the time of writing, the Provenance data model specification is an IVOA Proposed Recommendation \citep{2019ProvRec}. HiPS \citep{fernique2015} defines a new way of organizing image, cube, and catalogue data in an all-sky and hierarchical way based on the HEALPix tessellation of the sky. HiPS datasets are generated from image data collections or catalogues which have their own history. The ProvHiPS service developed at CDS aims to provide provenance information for HiPS stored at CDS, back to the original raw data when available. \section{HiPS datasets for HST image collections} HiPS are made of hierarchies of tiles containing pixelized information at a given HEALPix order. In the case of image HiPS datasets, each tile is generated from a small subset of the original image collection intersecting with the tile. The HiPS format stores internally the progenitor information for each tile in the HiPS tree. The CDS data center publishes HiPS at various wavelengths for the HST image collections. They have been produced from HST drizzled images in collaboration with CADC astronomer Daniel Durand. HST data collections are stored at, and retrievable from, the CADC HST archive through IVOA DAL services. The drizzled images have their own history: they are produced from sets of calibrated images closely related on the sky by a specific type of co-addition called ``drizzling''. A rich tree of related data is then potentially available. We have browsed the HiPS tile metadata and the FITS headers of the HST images to extract features relevant in terms of the IVOA Provenance data model, in order to trace historical information and map it into the ProvTAP tables. This resulted in a database containing tens of thousands of entities and activities. Dozens of descriptions of the various kinds of entities have also been produced, as well as tens of thousands of configuration parameters for the drizzling and HiPS generation activities. \section{ProvHiPS implementation} In order to make such information available, we implemented a ProvTAP\footnote{https://wiki.ivoa.net/internal/IVOA/ObservationProvenanceDataModel/ProvTAP.pdf} service called ProvHiPS. The ProvTAP specification is currently an IVOA working draft describing how to map the provenance data model onto a TAP service. The heart of it is the ProvTAP TAP\_SCHEMA definition, providing the list of tables and columns required for storing the provenance metadata and mapping, respectively, the classes and attributes of the model. Tables and columns come with datatypes, units, UCDs, and utypes consistent with the model. By default, a ProvTAP service is queryable via the ADQL language defined in the IVOA and provides results in VOTable format.
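As an illustration of how such a service can be queried from a client, the following Python sketch uses the pyvo library against a hypothetical ProvHiPS endpoint; the service URL and the table and column names are placeholders of ours (the actual ProvTAP TAP\_SCHEMA should be consulted for the real names):
\begin{verbatim}
import pyvo

# Hypothetical endpoint; a real ProvTAP URL would go here.
service = pyvo.dal.TAPService("https://example.org/provhips/tap")

# One provenance step: the activity that generated a given entity,
# together with the entities that this activity used.
query = """
SELECT e.entity_id, a.activity_id, a.name, u.entity_id AS input_entity
FROM provenance.entity AS e
JOIN provenance.wasgeneratedby AS g ON g.entity_id = e.entity_id
JOIN provenance.activity AS a ON a.activity_id = g.activity_id
JOIN provenance.used AS u ON u.activity_id = a.activity_id
WHERE e.entity_id = 'hips-tile-example'
"""
result = service.search(query)
print(result.to_table())
\end{verbatim}
Deeper traversals, such as the 13-join query shown below, simply chain further \texttt{wasgeneratedby}/\texttt{used} joins.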
The CDS ProvHiPS service implements a database containing the provenance information sketched out in the workflow scenario presented in Fig.~\ref{fig:workflow}. \articlefigure[width=0.8\textwidth]{P2-6_f1.eps}{fig:workflow}{The historical path from raw HST images to their projected tiles in the HiPS HST representation.} \articlefigure[width=0.8\textwidth]{P2-6_f2.eps}{fig:query}{Deep query from a tile back to the original raw image.} Fig.~\ref{fig:query} shows a 13-join query tracing the provenance of a single HiPS V HST tile around the target NGC104. The query response is presented in Fig.~\ref{fig:qresp} as displayed with the TAPHandle application \citep{2014ASPC..485...15M}. Some drizzled and calibrated images are visualized in Aladin via SAMP messaging as well. \articlefigure[width=0.8\textwidth]{P2-6_f3.eps}{fig:qresp}{Query response as shown in the TapHandle application. Access links to images at each step allow one to understand the sequence of activities and how they transform data.} Fig.~\ref{fig:desc} shows the activity description associated with one of the previous calibration activities. This ActivityDescription provides interoperable typing of the calibration activities as well as a link to the software documentation. \articlefigure[width=0.8\textwidth]{P2-6_f4.eps}{fig:desc}{Metadata for an Activity instance for image calibration and its corresponding ActivityDescription instance.} More sophisticated user scenarios may include retrieving the ``siblings'' of a given dataset entity at various depths, or selecting datasets sharing the same creator agent or generated with similar parameters. The number of joins needed to traverse the provenance graph may then increase tremendously. That is the reason why we experimented with various ways of representing and querying graphs on top of relational databases. Earlier tests with a triplestore architecture have shown promising results \citep{2019ASPC..523...329}. As published in \citet{P2_15_adassxxix} and proposed by M.~Nullmeier\footnote{https://www.asterics2020.eu/dokuwiki/lib/exe/fetch.php?media=open:wp4:nullmeier\_tf5\_prov\_custom\_adql.pdf}, graph-oriented or Common Table Expression (CTE) techniques to navigate through graph connections on top of the RDBMS are new solutions to consider. We plan to add such layers on top of our service to improve user-friendliness. \section{Conclusion} Despite the ``complex query'' issue, the ProvTAP implementation of ProvHiPS demonstrates that it is feasible to map information stored in the FITS headers of homogeneous image collections, or in HiPS metadata, into the IVOA PROV DM profile. The scalability of the database allows coping with very large data collections. Retrieval of multi-step pipelines is easy as long as the appropriate ADQL queries are provided. \acknowledgements We thank the CDS internship program for supporting A. Egner. This work has been partly supported by the ESCAPE project (the European Science Cluster of Astronomy and Particle Physics ESFRI Research Infrastructures), funded by the EU Horizon 2020 research and innovation program under Grant Agreement n.824064, and also by the ASTERICS project under Grant Agreement n.653477.
{ "timestamp": "2020-07-20T02:03:17", "yymm": "2007", "arxiv_id": "2007.08615", "language": "en", "url": "https://arxiv.org/abs/2007.08615" }
\section{Background} \label{sec:background} In this work, we demonstrate how the proposed explanation scheme can help obtain actionable insights in a material science application. Specifically, we are interested in understanding the behavior of a deep learning model that was trained on SEM images of feedstock materials for predicting their respective mechanical properties. Feedstock materials are basic building blocks for producing increasingly sophisticated components, prototypes, or finished products. These materials are often optimized to meet certain performance requirements before they can be appropriately utilized. One persistent challenge in developing and deploying materials in a timely manner is the significant time and resources required to optimize a material to meet the desired specification. In material science applications, we hope to accelerate the material development process by leveraging the modeling capability of deep learning on increasingly complex and heterogeneous experimental data. In particular, by learning the relationship between the salient features of observed data (e.g., SEM images) and the material's characteristics, the model can provide valuable feedback to deepen the scientists' understanding, which in turn will help accelerate the material design and optimization processes. \begin{figure*}[htbp] \centering \includegraphics[width=0.99\linewidth]{SEMScanIllustration.pdf} \caption{ The SEM high-resolution scan (a) of a given lot is divided into smaller image tiles. All the image tiles from the 30 lots are used for training a CNN-based peak-stress prediction network. In (b), examples of tiles from different lots are illustrated. We can see that each image captures key characteristics, e.g., crystal size, of its respective lot. } \label{fig:SEMScan} \end{figure*} In our exemplar case study, the feedstock material of interest is 2,4,6-triamino-1,3,5-trinitrobenzene (TATB), and the property of interest is its compressive strength upon compaction. The compressive strength of compacted TATB can vary significantly with changes in the TATB's crystal characteristics, including average size, size distribution, porosity, and surface textures, to name a few. The experiment involves 30 different synthesis batches (referred to as lots in the context of this work) of material samples, with each batch showing different overall crystal characteristics. Each of the 30 lots is analyzed with a Zeiss Sigma HD VP scanning electron microscope (SEM) using a \SI{30.00}{\micro\metre} aperture, 2.00 keV beam energy, and a ca. 5.1 mm working distance to capture high-resolution images. The software Atlas is used to automate the image collection. As illustrated in Figure~\ref{fig:SEMScan}, for each sample, the entire SEM stub surface is mapped, and the corresponding images are collected with slight overlap to create a stitched mosaic of the full area. The field of view of each mosaic tile is \SI{256.19}{\micro\metre} $\times$ \SI{256.19}{\micro\metre} with a pixel size of \SI{256.19}{\nano\metre} $\times$ \SI{256.19}{\nano\metre} ($1024 \times 1024$ image size). In total, we captured 69,894 sample images from the 30 lots of TATB.
\SL{These images are then down-selected by removing the ones with black margins (i.e., at the edge of the scan) and other inconsistencies to ensure the quality of the training and validation sets, which consist of 59,690 images.} To better characterize the images in each lot, two material scientists provided, by visual inspection, quantitative estimates of several key material attributes, such as crystal size, porosity, size dispersity, and facetness (these concepts are discussed in detail in Section~\ref{sec:method}). The stress and strain mechanical properties are tested for each lot by uniaxially pressing duplicate samples from each TATB powder lot in a cylindrical die at ambient temperature to 0.5 in. diameter by 1 in. height, with a nominal density of 1.800 g/cc. Strain-controlled compression tests were run in duplicate at \SI{23}{\celsius} at a ramp rate of 0.0001 $s^{-1}$ on an MTS Mini-Bionix servohydraulic test system model 858 with a pair of 0.5-inch gauge length extensometers to collect strain data. From the obtained stress-strain curve, only the peak stress values were considered as the outputs of the machine learning models, resulting in an image dataset in which the same properties are assigned to all images (tiles) from the same lot. A deep neural network regressor is then trained to predict the stress/strain value from a given image (tile). Even though the prediction is based on a small patch of the whole SEM scan, the material scientists hypothesize that an individual image should contain salient information that is indicative of the behavior of the entire lot. Provided the prediction is accurate for unseen lots, such a predictive model is a valuable tool for material scientists to quickly screen candidate materials to prioritize for laboratory testing. However, despite the ability to down-select potential candidates for further evaluation, the material scientists still need to produce the sample and carry out the SEM imaging procedure, which is an extremely time-consuming process. Furthermore, even though the prediction model appears to capture the relationship between the material image features and their performance, the material scientists cannot directly obtain or reason about such understanding to guide the next set of experiments to perform in order to quickly obtain the desired materials, i.e., to produce material with specific features (e.g., crystal size, porosity) that can potentially lead to better and desired performances. In this work, we aim to address the challenge of extracting domain insights from the predictive model and providing actionable guidance to the material scientists for the material manufacturing process. \section{Conclusion} We introduced a general technique for inferring actionable insights from a given predictive model by understanding and manipulating the domain attributes. The ability to turn these explainable ``knobs'' allows us to obtain a counterfactual understanding of how the prediction is affected by key domain attributes. To better understand the combined effects of multiple attributes, we introduced an optimization algorithm that allows the model to help reveal which attribute combinations would yield a more desirable output. For domain scientists to adopt the emerging machine learning techniques, domain-specific explanations of how the machine learning models function are essential. Without tangible and actionable information from machine learning models, the overall benefit machine learning will have in scientific domains is limited.
The work presented here demonstrates that it is possible to gain actionable insights from complex machine learning pipelines that can accelerate the materials development processes. It is also important to note that our ability to meaningfully modify and generate hypothetical SEM images based on domain attributes is driven by recent developments in image-editing GANs~\cite{attGAN}. \SL{ Moreover, since we obtain the explanation through controlling the attribute-aware variation in the input data, the proposed technique, compared to many state-of-the-art explanation techniques, is not restricted to a specific model and can be adapted to understand the behavior of other predictive models as well.} As with any newly developed technique, our approach has some shortcomings that need future improvement. One particular challenge originates from the potential distribution shift from the original images to the reconstructed images (when we generate new image tiles using the attributes associated with the corresponding \emph{Lot}). Even though a human viewer often cannot discern any noticeable difference between the original images and the reconstructed ones, these unnoticeable changes can lead to minor shifts from the original predictions. Moreover, due to the inherent limitation of how the regression model is built, we predict the peak stress for a given \emph{Lot} based on a single SEM image tile and average the predictions, which leads to built-in variation among predicted values generated from different image tiles from the same \emph{Lot}. We are currently exploring other approaches to build more robust regression models that capture the overall qualities of the samples from limited data (i.e., data-efficient model design~\cite{mallick2019deep}), a common obstacle in applying machine learning to scientific data. \section{Introduction} Due to the tremendous success of deep learning in commercial applications, there are significant efforts to leverage these tools to solve various scientific challenges. Deep learning automatically discovers a suitable feature representation from the raw data, which allows powerful predictive models to be built from large and complex datasets. Unfortunately, this benefit comes with a major limitation: these complex models are often considered black boxes, and understanding or explaining their inner workings is extremely difficult. Besides the inherent complexity of deep learning models, their application to the scientific domain also has unique challenges compared to the traditional applications in commercial domains. Scientific data often require domain knowledge to be understood and annotated, which often leads to label sparsity. Furthermore, instead of focusing on the predictive performance, in scientific applications we particularly value the insights distilled from the model that can potentially advance our scientific understanding. \begin{figure*}[t] \centering \vspace{-4mm} \includegraphics[width=0.98\linewidth]{pipeline.pdf} \caption{ Overview of the actionable explanation pipeline. We have a deep neural network model (a) for predicting material peak stress from SEM images.
Instead of trying to attribute the decision to the input pixel space (e.g., GradCAM~\cite{selvaraju2017grad}) (b), which cannot produce an understandable and actionable solution, we can rely on a generative model to produce a hypothetical lot that is conditioned on the key attributes of the material, from which we can obtain an explanation that is not only directly understandable by the material scientist but can also easily be translated into actionable guidelines in the material synthesis process (c). } \label{fig:pipeline} \end{figure*} Many existing scientific applications of deep learning focus on building a predictive model for a certain experimental output modality (e.g., building a model for predicting the material peak stress given a scanning electron microscope image~\cite{gallagher2020predicting}). However, despite their effectiveness in predicting the quantity of interest, we do not have a viable way to evaluate their decisions and reason about them with the domain scientists. Even if we believe the model accurately captures the underlying scientific principle, the model opacity makes it extremely hard to extract useful information from the model that can be turned into actionable insights for discovery. One key reason that leads to these challenges is our inability to reason about domain attributes, meaningful to the scientists, within the deep learning pipeline. To motivate the role of domain attributes in understanding the model behavior, we present a real-world application of model understanding in material science. \noindent\textit{\textbf{Motivating Example:}} As illustrated in Figure~\ref{fig:pipeline}(a), let us assume that we have a deep learning model that predicts the peak stress of a material given a scanning electron microscope (SEM) image as an input. The traditional pixel-based explanation approaches~\cite{ZeilerFergus2014, bach2015pixel,selvaraju2017grad} for the convolutional neural network (CNN) produce a heat map (on a per-pixel level) to highlight the region in the image that contributed the most to the prediction. Such an approach may work well for natural images, e.g., highlighting the head of the cat when predicting a cat image. However, this per-pixel explanation is not particularly insightful when trying to explain why a certain material has a higher peak stress by highlighting pixels in the image, as illustrated in Figure~\ref{fig:pipeline}(b). The reason for this lack of insight is that the image pixel space does not correspond to any meaningful or understandable material science concepts. Furthermore, a material scientist may be more interested in understanding the effect of only a subset of all possible attributes that are explicit and actionable (e.g., crystal size). In this work, we aim to address this fundamental explainability challenge by injecting domain attributes in a post-hoc manner into the prediction pipeline, utilizing advances in deep generative modeling~\cite{goodfellow2014generative}. As illustrated in Figure~\ref{fig:pipeline}, we first build a generative model that can produce ``fake'' (or hypothetical) SEM images compliant with user-controlled attributes, e.g., an SEM image of a hypothetical material with a larger or smaller average crystal size than a given reference material.
We then leverage these attributes as the explainable handles to reason more effectively by probing the predictive model's behavior with generated hypothetical materials. This approach allows us to answer questions in the language that the domain scientists understand, i.e., \textit{how do changes in the crystal size (or porosity, etc.) impact the peak stress prediction? or how should material attributes be altered to obtain a material with higher peak stress?} \SL{ Note that, compared to a correlation analysis between material attributes and prediction outputs, the proposed method not only produces a per-instance explanation but also generates the corresponding hypothetical SEM image that reflects the manifestation of the optimal attribute changes needed to reach a certain objective, e.g., higher peak stress. Such images of hypothetical materials can be particularly helpful to the material scientists for gaining an intuitive understanding of what type of material should be targeted during synthesis to attain the desired properties, and for potentially revealing other previously unknown variations that are not captured by the already known attributes.} A crucial component for the success of such an explanation scheme is the ability to generate appropriate images corresponding to given changes in the attribute values. However, training a generative model to generate high-quality images conditioned on given attribute values has proven to be challenging. In this work, we solve this problem by adopting an image-editing model rather than generating images from scratch. Specifically, we take an image and target attributes as inputs, and then perform selective editing of the desired attributes in the given image. The additional meta-information of an input image allows us to train high-quality editing models capable of generating hypothetical images that capture intricate details of the material attributes and are indistinguishable from real SEM images. Incorporating domain attributes in the generative modeling pipeline allows us to better understand the behavior of black-box predictive models through the perspective of generated hypothetical materials. Further, it provides scientists with actionable insights that can potentially lead to new discoveries. The key contributions of our work are listed as follows: \begin{itemize} \item We propose an explainable deep learning approach to provide {\em actionable} scientific insights; \item We demonstrate that the generative model crucial for our approach can capture the association between domain attributes and intricate image features with an extremely small amount of supervised information; \item We showcase the usefulness of the proposed approach in a real-world application of feedstock material synthesis by providing domain scientists with actionable insights to improve the material quality. \end{itemize} \section{Method} \label{sec:method} As illustrated in Figure~\ref{fig:pipeline}, the proposed technique includes three key components: 1) the predictive model, 2) the attribute-guided generative model, and 3) the optimization module that leverages the predictive model and the image generation model to obtain actionable scientific insights. \SL{In this section, we first provide an overview of the predictive model for predicting the compressive strength from SEM images.} Next, we discuss how we can utilize an attribute-conditioned generative adversarial network to generate meaningful image modifications even when we only have extremely sparse labels.
We then introduce two novel modes of explanation relying on the image generation pipeline, one following a forward evaluation process and the other relying on the optimization module using gradient backpropagation, to reason about model behavior through domain attributes. \begin{figure*}[htbp] \centering \vspace{-4mm} \includegraphics[width=0.9\linewidth]{attEditSEM.pdf} \caption{ Illustration of material-attribute-guided SEM image generation. The left column shows the original SEM image. The middle and right columns show the GAN-generated images of hypothetical \emph{Lots} that decrease or increase the corresponding material attributes, respectively. The colored boxes highlight the corresponding regions in the images (different colors mark different regions), in which we can find clear changes that reflect the alterations in the attributes. } \label{fig:attEdit} \end{figure*} \subsection{Predictive Model} \label{sec:predModel} \SL{ As discussed in Section~\ref{sec:background}, a deep neural network regression model is trained to predict the peak stress of a material \emph{Lot} from a given SEM image tile (see details in Figure~\ref{fig:SEMScan}(b)). The regression model is built upon the WideResNet CNN architecture~\cite{zagoruyko2016wide} and trained on all 30 \emph{Lots}. The trained model is able to accurately predict \emph{Lot} peak stress from test images. Please refer to the supplementary material for details regarding the architecture, training, and performance of the model. Despite the effectiveness of the predictive model, it is unclear how we can leverage the model for domain understanding and discovery, as the forward prediction process can only give us answers for existing material \emph{Lots} we have manufactured and scanned, and it fails to provide actionable guidance in the synthesis process for producing material with more desirable attributes. In the next section, we will address such limitations by introducing a way to generate hypothetical \emph{Lots} for exploring the material design space. } \subsection{Attribute-Guided Image Generation} \label{sec:imageSynthesis} The generative adversarial network (GAN)~\cite{goodfellow2014generative} has revolutionized our ability to generate incredibly realistic samples from highly complex distributions~\cite{brock2018large, karras2019style}. In general, a GAN transforms noise vectors ($\mathbf{Z}$ vectors from a high-dimensional latent space) into synthetic samples $\mathbf{I}$ resembling the data in the training set. The GAN is learned in an adversarial manner, in which a discriminator $D(\mathbf{I})$ (differentiating real vs. fake samples) and a generator $G(\mathbf{Z})$ (producing realistic fake samples) are trained together to compete with each other. One limitation of the standard GAN is that the latent space is not immediately understandable, which limits our ability to control the generated content. This problem is partially addressed by the conditional GAN, which is conditioned on labels~\cite{mirza2014conditional}, i.e., it generates different types of images by being provided both a noise vector $\mathbf{Z}$ and a label $L$. Still, these models, like most GANs, are often extremely hard to train and require a large number of samples for even moderately complex data. Our initial attempts to apply a conditional GAN to our SEM image data with the \emph{Lot} indices or other properties as labels were unsuccessful.
This is likely due to an insufficient number of images and labels, as well as the innate complexity of the SEM image data. To mitigate the training challenge and to improve the control over generated content, we turn our focus to another class of GANs that makes selective modifications to existing images rather than generating them from scratch (i.e., transforming a vector into an image). Instead of providing a noise vector to the generator, these image-editing GANs (e.g., attGAN~\cite{attGAN}) take an input image along with the attributes $\mathbf{A}$ that describe the desired changes ($G(\mathbf{I}, \mathbf{A})$). For face images, such a GAN can be trained to alter attributes such as the color of the hair or the presence of eyewear in the original image. Since we provide the generator with an input image that already contains a large amount of information, we can build a model that not only produces higher-quality images but also requires fewer images to train than other classes of GANs. More importantly, for our application, we can train such an attribute-editing GAN in which the material properties are the conditional attributes $\mathbf{A}$ that guide the image generation process. Such a model allows us to generate new images according to the given material attributes, which are immediately understandable to the domain scientists. Next, we discuss the attributes used for modifying SEM images. As discussed in Section~\ref{sec:background}, we have a compressive strength measurement for each \emph{Lot} of the material from laboratory tests. To help understand the material appearance in each \emph{Lot}, the material scientists estimated the following properties -- \emph{size}, \emph{porosity}, \emph{polydispersity}, and \emph{facetness} -- by examining a large number of images per lot and averaging the estimates from multiple experts. The estimated values are normalized (0--1). The meaning of each material property, and the specific features the scientists look for in the images, are the following: \emph{size} -- the average size of the crystals; \emph{porosity} -- how ``holey'' the crystals are, i.e., whether they look like they have a lot of small pin-prick holes on the surface or are solid; \emph{polydispersity} -- how varied the sizes of the crystals are, i.e., how broad the size distribution is; \emph{facetness} -- whether the crystals look rounded/smooth at the edges or have flat faces that meet at different angles to give a faceted structure. As a result, only 30 labels/values per attribute are captured, which can be considered extremely small for any traditional supervised learning task. Compared to other attributes in GAN applications, e.g., face images, for which we have individually defined labels for all images, the supervised \emph{Lot}-level information for the SEM images is extremely sparse. \SL{After obtaining these attributes, we train the attGAN~\cite{attGAN}, which allows us to modify the material properties of a given SEM image (i.e., obtain an image from a hypothetical \emph{Lot} with the target material attributes).} Besides the sparsity of the labeling information, the other challenges originate from the presence of intricate patterns in the images themselves. For example, the porosity of a material is reflected by the presence of small pin-prick holes on the surface of the crystals in the SEM image, which occupy only an extremely small number of pixels. Learning attributes represented by such minuscule features can be very challenging.
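To fix ideas, the following is a minimal PyTorch sketch of an attribute-conditioned editing generator in the spirit of $G(\mathbf{I}, \mathbf{A})$; it is a toy illustration of ours only (the actual attGAN architecture, losses, and training procedure differ and are described in~\cite{attGAN}):
\begin{verbatim}
import torch
import torch.nn as nn

class ToyAttributeEditor(nn.Module):
    """Toy G(I, A): encode the image, inject the attribute vector at
    the bottleneck, and decode an edited image of the same size."""
    def __init__(self, n_attrs=4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        # Attributes are broadcast spatially, then concatenated
        # channel-wise with the encoded features.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64 + n_attrs, 32, 4, stride=2,
                               padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Tanh())

    def forward(self, img, attrs):
        z = self.enc(img)                      # (B, 64, H/4, W/4)
        a = attrs[:, :, None, None].expand(-1, -1, z.shape[2],
                                           z.shape[3])
        return self.dec(torch.cat([z, a], dim=1))

# Usage: nudge a (grayscale) tile toward a larger crystal size.
G = ToyAttributeEditor(n_attrs=4)
tile = torch.randn(1, 1, 256, 256)      # stand-in for an SEM tile
attrs = torch.tensor([[0.8, 0.3, 0.5, 0.4]])  # size, porosity, ...
edited = G(tile, attrs)
\end{verbatim}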
Despite these obstacles, as illustrated in Figure~\ref{fig:attEdit}, by utilizing the attGAN, the attribute-driven generation can accurately capture these intricate material features. Such a success not only indicates the accuracy of the material attributes estimated by the scientists but also demonstrates the coherency among images from the same \emph{Lot}. \SL{To ensure that the image generation model is producing the intended modifications, we examine the quality of the generated SEM images from the following two aspects: 1) the synthesized images should be indistinguishable from the real SEM images, and 2) the generated images should exhibit material features that correspond to the modified attributes. For a comprehensive analysis, we not only looked into widely adopted computational metrics but also investigated human perception through the feedback from material scientists. Both of these evaluations corroborated that the GAN-based SEM image editing process produces satisfactory results, i.e., meaningful images from a hypothetical \emph{Lot}. To confirm the quality of the GAN model from a computational aspect, we closely examined the convergence and the loss behavior of the generator, the discriminator, and the classifier in our model. In particular, the low and stable reconstruction error indicates that the GAN can reproduce realistic-looking SEM images. We also observed that the classifier can accurately predict the attributes from both the original and the hypothetical (GAN-generated) images. This implies that the generator can produce realistic modifications that can be correctly classified by the same classifier that correctly predicted the attributes from the original images. Moreover, we also turned to the material scientists for further evaluation of the quality of the generated images, as their domain knowledge is essential for understanding the intrinsic details and material concepts that may not easily be evaluated by the computational metrics. According to the feedback from three material scientists, they not only had a hard time distinguishing between the images from original and hypothetical \emph{Lots} but also confirmed that the modifications reflect the intended changes as described by the attribute inputs. These observations are also demonstrated in the examples of the attribute-guided modification shown in Figure~\ref{fig:attEdit} (additional samples are also provided in the supplementary material).} We see in the top row that the larger crystal in the original image (left column) is naturally broken into smaller ones in the synthesized image that aims to decrease the overall \emph{size}. Alternatively, we can see that smaller crystals are removed (or suppressed) in the synthesized image to increase the overall crystal \emph{size} (highlighted by brown boxes). In the second row, we can see that small porous structures are added in the rightmost image (increased \emph{porosity}), whereas the corresponding region is smoothed out in the middle image (decreased \emph{porosity}). The \emph{polydispersity} attribute also works well, as we can see the GAN trying to remove smaller crystals in the middle image (decreased \emph{polydispersity}) while increasing them in the case of increased \emph{polydispersity}. \emph{Facetness} is the only attribute that does not seem to be effectively isolated. Even though the GAN appears to reduce/increase \emph{facetness} (see the regions marked by green and yellow squares), it also brings along more drastic changes with respect to \emph{polydispersity} and \emph{size}.
Moreover, there is likely an inherent dependency among these attributes, which we may not be able to eliminate even with additional data and labels. For more examples, please refer to the supplementary materials. \subsection{Actionable Explanation Pipeline} \label{sec:forwardbackward} As illustrated in Figure~\ref{fig:pipeline}, once an image from the hypothetical \emph{Lot} is generated, we can feed it into the predictive model to predict the respective mechanical properties (e.g., peak stress). As discussed in Section~\ref{sec:background}, one of the goals of building the regression model is to better understand the relationship between SEM images and the mechanical properties of the respective \emph{Lots}. The introduction of the attribute-driven image generation process not only exposes the explicitly defined material features but also enables the ability to actively control them to form intervention operations that are essential for reasoning about counterfactual relationships (i.e., alter a material feature and then observe the corresponding changes in the prediction). An added benefit of the image editing GAN is that it often strives to introduce minimal alteration to the image for the required attribute change (e.g., for face images, the attGAN can change the hair color without altering other facial features). Such behavior makes it suitable for reasoning about the effect of the change, as the editing does not intend to change other features or the general structure of the original image. The most straightforward way to ascertain the relationship between the material attributes and the predicted mechanical properties is to do a simple ``forward'' sensitivity analysis by observing how the predicted stress changes as we vary the material attributes in the image generation process. To understand the impact of a particular set of attributes, we can fix all other attributes while varying the values of the attributes of interest. We then feed the generated images to the predictive model and obtain the corresponding predicted peak stress (see Section~\ref{sec:result} for more details, and the sketch below). Such an analysis allows us to estimate the sensitivity (i.e., importance) of each of the material attributes, which enables material scientists to form intuition about the influence of attribute changes on the peak stress of a given \emph{Lot}.
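To make the forward analysis concrete, the following minimal sketch performs such a sweep. The callables \texttt{G} (the trained attGAN editor) and \texttt{R} (the trained regressor), as well as all names and signatures, are illustrative placeholders rather than our exact implementation.
\begin{verbatim}
import numpy as np
import torch

def forward_sweep(G, R, img, attrs, attr_idx, n_steps=11):
    """Vary one attribute over [0, 1] while fixing the others, and
    record the predicted peak stress for each edited image."""
    values = np.linspace(0.0, 1.0, n_steps)
    preds = []
    with torch.no_grad():
        for v in values:
            a = attrs.clone()
            a[attr_idx] = v                  # intervene on one attribute
            edited = G(img, a.unsqueeze(0))  # image from a hypothetical Lot
            preds.append(R(edited).item())   # predicted peak stress (psi)
    return values, np.array(preds)
\end{verbatim}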
However, in several scenarios, we may be interested in answering retrospective questions that can provide precise actionable insights to improve the performance of a certain \emph{Lot}, e.g., what specific changes should be made to the attributes of a given \emph{Lot} to increase its peak stress? To address this challenge, we introduce a ``backward'' explanation scheme, in which an optimization is performed to obtain the necessary changes to the input attributes for obtaining the desired peak stress change. Let us define the generative editing model as ${G}(\mathbf{I};\mathbf{A})$, where $\mathbf{I}$ is the original image and ${\mathbf{A}}=\{a_1,\cdots,a_N\}$ are the material attributes that control the editing. Given an SEM image $\mathbf{I}$ for which the regressor $R$ predicts a certain peak stress, we aim to identify attributes $\mathbf{A'}$ with minimal deviation from $\mathbf{A}$ such that the edited image $\mathbf{I(A')}={G}(\mathbf{I};\mathbf{A'})$ would lead to a higher/lower peak stress prediction $p$. Given an image $\mathbf{I}$ with corresponding attribute vector $\mathbf{A}$ and a target peak stress $p$, we formulate the backward explanation problem as follows: \begin{equation} \label{opt} \begin{aligned} \min_{\mathbf{A}'} \quad & \|\mathbf{A}-\mathbf{A'}\|_q\\ \textrm{s.t.} \quad & p=R(\mathbf{I}(\mathbf{A}'))\\ & \mathbf{I(A')} = G(\mathbf{I};\mathbf{A}'). \\ \end{aligned} \end{equation} The neural network models make the formulation \eqref{opt} non-linear and non-convex, which makes it difficult to solve the problem in its original form. Thus, we formulate a relaxed version of the problem that can be solved efficiently as follows: \begin{equation} \label{opt1} \begin{aligned} \min_{\mathbf{A'} }\quad & \lambda \cdot \|p - R(G(\mathbf{I};\mathbf{A}'))\|_2 + \|\mathbf{A} - \mathbf{A}'\|_1,\\ \end{aligned} \end{equation} where a mean squared error (MSE) loss is used to encourage the predicted peak stress to be closer to the target peak stress $p$. Further, to obtain a sparser (i.e., more understandable) explanation, we set $q=1$ in the regularization term. Since both the regressor $R$ and the generator $G$ are differentiable, we can compute the gradient of the objective function via back-propagation and solve the optimization problem using a gradient descent algorithm (a minimal sketch is given at the end of this subsection). The backward explanation enables us to answer retrospective questions by automatically identifying specific attribute changes that can lead to the target peak stress. Such an analysis not only facilitates a direct way of examining the effects of simultaneous modification of multiple attributes but also produces actionable guidance for the synthesis process for achieving a more desirable material property. Although the model aims to predict the peak stress of a given \emph{Lot}, the prediction itself is made based on a single image tile (each \emph{Lot} contains a large number of image tiles; see details in Section~\ref{sec:background}). As a result, it is imperative to look beyond the behavior of an individual prediction and examine the average behavior of all image tiles of a specific \emph{Lot}. The same applies to the model explanation, in which we can obtain a more comprehensive understanding of the behavior of the \emph{Lot} by averaging or aggregating its explanations (e.g., utilizing a boxplot; see Section~\ref{sec:result} for details).
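A minimal sketch of the backward optimization referenced above is given below; it is only a schematic, in which \texttt{G} and \texttt{R} are placeholders for our differentiable editor and regressor, Adam is used as a convenient variant of gradient descent, and clamping the attributes to $[0,1]$ is an added assumption matching the normalized attribute range.
\begin{verbatim}
import torch

def backward_explain(G, R, img, attrs, target_p,
                     lam=1.0, lr=0.01, n_iters=200):
    """Gradient descent on the relaxed objective
    lam * (p - R(G(I; A')))^2 + ||A - A'||_1."""
    a_prime = attrs.clone().requires_grad_(True)
    opt = torch.optim.Adam([a_prime], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        pred = R(G(img, a_prime.unsqueeze(0)))
        loss = lam * (pred - target_p).pow(2).sum() \
               + (attrs - a_prime).abs().sum()
        loss.backward()                 # gradients flow through R and G
        opt.step()
        a_prime.data.clamp_(0.0, 1.0)   # keep attributes in valid range
    return a_prime.detach()
\end{verbatim}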
\section{Related Works} With the recent advances in deep learning, scientists are increasingly relying on data-driven modeling for solving scientific challenges. Machine learning has been successfully applied to a variety of domains, such as physics~\cite{peterson2017zonal, anirudh2019improved}, biology~\cite{webb2018deep}, material science~\cite{butler2018machine}, and many more~\cite{reichstein2019deep, kurth2018exascale, baldi2001bioinformatics}. Many of these scientific machine learning applications focus either on building accurate surrogate models for the underlying physical phenomenon or on developing sophisticated predictive models from complex simulation/experimental output (e.g., images and time series) to predict various properties of interest. So far, only a few existing works have looked into addressing the explainability challenges in the context of scientific applications~\cite{SimonyanVedaldiZisserman2013, ZeilerFergus2014, YosinskiCluneNguyen2015, bach2015pixel, lapuschkin2019unmasking, kailkhura2019reliable}. In machine learning research, the opaque nature of deep neural networks has prompted many efforts to improve their explanation and interpretation~\cite{xie2020explainable}. Most of these works focus on traditional computer vision or natural language processing applications. One key strategy for interpretation is attributing the prediction importance to the model's input domain, most notably for convolutional neural networks (CNNs). Various approaches~\cite{SimonyanVedaldiZisserman2013, ZeilerFergus2014, YosinskiCluneNguyen2015, bach2015pixel, lapuschkin2019unmasking} have been proposed to highlight the important regions of an input image that contribute the most to a decision. One can also consider the attribution scheme from a model-agnostic perspective~\cite{RibeiroSinghGuestrin2016, KrausePererNg2016, LundbergLee2017}; e.g., LIME~\cite{RibeiroSinghGuestrin2016} explains a prediction by fitting a localized linear model that approximates the classification boundary around the given prediction. However, such an attribution is only useful if the input domain itself is explainable to the user (e.g., natural images), which is not the case in many scientific applications (e.g., complex SEM images). Moreover, the assignment of importance to the input (or pixel) space is limited in the sense that it can only provide passive correlative information. Many important insights can only be obtained from a counterfactual understanding involving interventional operations, i.e., what kind of \emph{changes} to the input are necessary if a specific model output is requested. To address these challenges, counterfactual explanation approaches~\cite{kusner2017counterfactual,narendra2018explaining, Goyal2019, anne2018grounding} have been proposed; e.g., the counterfactual visual explanation work~\cite{Goyal2019} introduces a patch-based image editing and optimization scheme for obtaining interpretable changes in the pixel space that alter a prediction. However, patch-based editing can severely limit the expressiveness of the input modification. In this work, we use a generative adversarial network (GAN) for making meaningful input interventions leveraging the GAN's latent space. Specifically, by leveraging a GAN model that can meaningfully edit the model inputs (e.g., images) via domain-specific attributes, we enable explanations in the context of the application for obtaining actionable domain understanding. In a similar direction, our preliminary work~\cite{liu2019generative} explored GAN-based explanation of a classifier applied to non-scientific applications; in this paper, our focus is instead on regression problems for scientific applications. On the material science front, the utilization of machine learning for predicting material properties has been investigated in multiple previous works~\cite{decost2017characterizing, webel2018new, gallagher2020predicting}. In particular, predicting peak stress from SEM images has been studied by Gallagher et al.~\cite{gallagher2020predicting}, in which both traditional computer vision techniques and deep neural network approaches were explored.
Compared to these state-of-the-art peak stress prediction works, which focus on making accurate predictions, the proposed work focuses on how to extract actionable scientific insights by explaining what features in the images are important to the presumably accurate predictions of these models. Although we focus on a specific materials application in this paper, the proposed actionable explanation generation approach is quite generic and can be applied to a wide range of scientific applications. \section{Results and Discussions} \label{sec:result} Here we illustrate the application of the proposed techniques to help material scientists obtain actionable insights from the regression model and infer the underlying relationship between feedstock materials' characteristics and their compacted mechanical performance. \begin{figure*}[!t] \centering \includegraphics[width=0.99\linewidth]{forwardExplainDemo.pdf} \caption{ Illustration of the forward explanation. In (a), the effect of varying the attribute of a single input image tile is illustrated. Here we explore the images from different hypothetical \emph{Lots} by varying the crystal size. The x-axis shows the normalized material attribute values (0-1), which correspond to the relative strength of the changes; e.g., zero indicates a very small size with respect to the norm in the given \emph{Lot}. The horizontal dotted line illustrates the measured peak stress of the given \emph{Lot}. We can also perform a similar aggregated analysis on all image tiles from a specific \emph{Lot} (in this case \emph{Lot} AS) through a boxplot~\cite{potter2006methods}, as illustrated in (b).} \label{fig:forwardExplainDemo} \end{figure*} The proposed technique allows the integration of material attributes (such as crystal \emph{size}, \emph{porosity}, and \emph{polydispersity}) as tunable knobs in the analysis pipeline. As a result, material scientists can intuitively reason about the impact of varying a given attribute on the predicted peak stress. As illustrated in Figure~\ref{fig:forwardExplainDemo}(a), by altering the \emph{size} attribute when generating the images of hypothetical \emph{Lots}, we can observe changes in the predicted peak stress values. Here, we generated $11$ images with the \emph{size} attribute varying from 0.0 to 1.0 (the full range of the attribute) while fixing all other attributes. Attribute values are shown on the x-axis, whereas the predicted peak stress (in \emph{psi}) is shown on the y-axis. As shown in Figure~\ref{fig:forwardExplainDemo}(a), for a single input SEM image instance, the predicted peak stress decreases as we increase the crystal particle size. The visual effects of the size attribute change can also be observed in the corresponding images (only three images are shown due to space constraints). Since our regression model generates a peak stress prediction for a given \emph{Lot} based on a single image tile (each \emph{Lot} image contains thousands of image tiles), a certain variation exists among the image tiles within each \emph{Lot}. Therefore, for evaluating the model behavior, it is crucial to understand the average behavior of the predictions for all image tiles. In Figure~\ref{fig:forwardExplainDemo}(b), we show the aggregated results from all tiles of \emph{Lot} AS. In the boxplot, each vertical glyph (positioned along the x-axis according to the \emph{size} attribute value) encodes the predictions of all image tiles with the same attribute value. The y-axis shows the predicted peak stress.
Despite the variation among the tiles in the \emph{Lot}, we can observe a similar trend in both (a) and (b). \begin{figure*}[!t] \centering \includegraphics[width=0.99\linewidth]{forwardExplainLot.pdf} \caption{ Forward explanations illustrate the sensitivity of the peak stress with respect to varying attribute values for different \emph{Lots}. } \label{fig:forwardExplainLot} \end{figure*} Similarly, we can evaluate how each of the four attributes impacts the peak stress prediction by applying the same sensitivity analysis to different \emph{Lots}, varying the attribute values one at a time. In Figure~\ref{fig:forwardExplainLot}, we illustrate three \emph{Lots}, with high (M), median (AS), and low (N) peak stress values, respectively. As shown in the plots, the \emph{size} and \emph{facetness} attributes have a pronounced and consistent effect on the prediction output, which shows that {\em having larger particles in general has a detrimental impact on the compressive peak performance of the sample, while having more well-faceted crystals in the sample is beneficial to increasing the peak stress values}. Our analysis also highlights that there is no clear trend for either the \emph{porosity} or the \emph{polydispersity} attribute; their effects diverge depending on the selection of the \emph{Lot}. The divergence in these attributes shows that {\em there is more than one pathway to achieve a particular peak stress value}, since the exemplar \emph{Lots} M, AS, and N all have very different original attributes. The ability to edit and modify attributes from a distinct starting point to either increase or decrease the desired performance provides powerful visualization cues to the subject matter experts while also informing which knobs should be tuned (and their sensitivities) to achieve the desired performance. \begin{figure*}[htbp] \centering \includegraphics[width=0.99\linewidth]{backwardExplainInstance.pdf} \caption{ Backward actionable explanation for a single SEM image. The images from the original and hypothetical \emph{Lot} (through GAN-based image manipulation) are shown on the left, and the corresponding attribute changes that led to an increase or decrease of the predicted peak stress are illustrated in the bar chart on the right. } \label{fig:backwardPerInstance} \end{figure*} As discussed in Section~\ref{sec:forwardbackward}, the sensitivity analysis that utilizes forward evaluation only allows us to discern the impact of varying one attribute (or a limited combination of attributes). To truly understand the interaction among them, we introduced the backward explanation, in which we ask what minimal changes can be made to the attributes so that the model would alter its prediction to a desirable value. In Figure~\ref{fig:backwardPerInstance}, the original image and the modified images for increasing and decreasing the predicted peak stress (based on the changes in the attributes) are shown. In the top row (\emph{Lot} N), we can see in both the SEM images (left) and the attribute bar plot (right) that decreasing crystal \emph{size}, while increasing \emph{porosity}, \emph{polydispersity}, and \emph{facetness}, will lead to a higher peak stress prediction. The same pattern can be observed for \emph{Lot} AT (middle row). Although the bottom row (\emph{Lot} F) shows a slight deviation from the earlier patterns on \emph{porosity}, the small absolute value indicates that the change in \emph{porosity} does not contribute much to the changes in the generated image.
One thing to note is that an increase of the \emph{facetness} attribute in the image generation process seems to also lead to a marked increase in \emph{polydispersity} and a reduction of the average \emph{size} (see Section~\ref{sec:imageSynthesis}), so the effect we observe when altering \emph{facetness} is likely also due to changes in the \emph{size} attribute. \begin{figure*}[htbp] \centering \includegraphics[width=0.99\linewidth]{backwardExplainLot.pdf} \caption{ Backward explanation results for the given \emph{Lots}, obtained by averaging the per-image explanations. } \label{fig:backwardPerLot} \end{figure*} We also estimate the overall behavior of an entire \emph{Lot} by averaging the backward explanations of all image tiles from the given \emph{Lot}. As shown in Figure~\ref{fig:backwardPerLot}, we can utilize a similar attribute change plot to illustrate the averaged behavior by showing the mean values. The plots show that the rule we identified by examining the backward explanations of individual images is consistent with the average behavior of the entire \emph{Lot}. The \emph{polydispersity}, \emph{size}, and \emph{facetness} attributes behave consistently in increasing/decreasing the peak stress. Since both the \emph{polydispersity} and \emph{size} attributes are directly associated with the mean and variance of the crystal size, \emph{the reduction of size appears to be the most effective route to increase the peak stress prediction}. In discussions with subject matter experts (SMEs), we learned that grinding of TATB is a common practice to increase the peak strength of samples that do not meet the desired peak stress requirement. According to the SMEs, the grinding process increases the number of smaller crystals (thereby lowering the overall crystal size) while increasing the polydispersity of the sample. Increasing polydispersity in particle distributions is a common approach to increasing particle packing densities~\cite{sohn1968effect} in a number of different applications. Compared to this conventional wisdom, our method not only explicitly confirms the impact of the crystal sizes but also produces a realistic depiction of the appearance of a material for which the given predictive model would predict a higher peak stress value. Moreover, our method enables a multifaceted analysis, from a single instance of the prediction to the average behavior of the entire \emph{Lot}, and from the sensitivity of a single attribute to the joint influence of multiple attributes for achieving an optimal objective value. Our approach will also be valuable in emerging applications where scientists have not yet formed deep scientific insights, serving as a useful computational tool by providing actionable explanations without extensive experiments. \section*{Acknowledgement} This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This work was reviewed and released under LLNL-JRNL-811201. We would also like to extend our gratitude and appreciation to Brian Gallagher for his valuable feedback and comments on the manuscript. \bibliographystyle{unsrt} \section*{Supplemental Material} \section{SEM Image Dataset} The SEM dataset we used for training and evaluating both the regressor and the generative model consists of 59,690 greyscale images with a resolution of $256 \times 256$ (downsampled from $1000 \times 1000$ in the original images).
These images are from 30 unique \emph{Lots} (batches of material), and compressive strength testing was carried out for each \emph{Lot} to obtain the corresponding peak stress value. \section{Details of the Peak Stress Prediction Model} \textbf{Model architecture:} The regression model architecture for peak stress prediction is based on the Wide ResNet model~\cite{zagoruyko2016wide}, with a total of 28 convolutional layers and a widening factor of 1, followed by an adaptive average pooling layer. Since the Wide ResNet model was originally proposed for classification, we replace the final \emph{softmax} layer with a fully connected regression layer with a \emph{tanh} activation to predict continuous scalar values. Our implementation is based on PyTorch.
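As a rough illustration of this head replacement (not the exact model definition), the following PyTorch sketch wraps a backbone module; \texttt{trunk} stands in for the Wide ResNet convolutional layers and \texttt{n\_features} for its output channel count, both of which are assumptions here, and the peak stress targets are assumed to be scaled to the \emph{tanh} output range.
\begin{verbatim}
import torch.nn as nn

class PeakStressRegressor(nn.Module):
    def __init__(self, trunk, n_features):
        super().__init__()
        self.trunk = trunk                   # WRN-28 convolutional backbone
        self.pool = nn.AdaptiveAvgPool2d(1)  # adaptive average pooling
        # the softmax classification layer is replaced by a fully
        # connected regression layer with a tanh activation
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(n_features, 1),
                                  nn.Tanh())

    def forward(self, x):
        return self.head(self.pool(self.trunk(x)))
\end{verbatim}
Training then follows the setup described next (MSE loss and the Adam optimizer).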
\noindent\textbf{Training setup:} We set aside 10\% of the training data for validation, leaving a total of 53,721 training images and 5,969 validation images. All images are preprocessed by subtracting the mean and dividing by the standard deviation. For data augmentation, we use horizontal flips. We train the regression model with the mean squared error (MSE) loss function and the Adam optimizer~\cite{kingma2014adam} with a learning rate of 0.001 and a minibatch size of 64. We used early stopping to terminate training when the validation performance did not improve, and the whole training procedure stopped after 48 epochs. \noindent\textbf{Prediction performance:} Globally, the regression model achieved a root mean square error (RMSE) of $66.0$ and a mean absolute percentage error (MAPE) of 3.07\% across all \emph{Lots}. For each \emph{Lot}, the peak stress predictions versus the ground-truth peak stress values are shown in Figure~\ref{fig:predictionPerLot}, where the error bars present the standard deviation of the predictions across images in the \emph{Lot}. The root mean square error per \emph{Lot} is plotted in Figure~\ref{fig:rmsePerLot}. \begin{figure*}[htbp] \centering \includegraphics[width=0.8\linewidth]{predictionPerLot.pdf} \caption{ The predicted and ground-truth values of peak stress for different \emph{Lots}. } \label{fig:predictionPerLot} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=0.8\linewidth]{rmsePerLot.pdf} \caption{ RMSE of the predicted peak stress values for different \emph{Lots}. } \label{fig:rmsePerLot} \end{figure*} \section{Training and Evaluation of Image Editing GAN} \noindent\textbf{Training setup:} As discussed in Section~\ref{sec:method}, our results for SEM images are achieved by training the AttGAN~\cite{attGAN} with the material attribute labels provided by the material scientists. Compared to the training setup for the celebrity image dataset, the largest difference with the SEM images is the number of available labels. For the celebrity images, labels are obtained for each individual face image. However, this is not the case for the SEM images, for which we only have a label for each \emph{Lot}, which contains a large number of images. We trained the attGAN using a PyTorch implementation with the same learning rate as for the celebrity dataset. The output image of the GAN has the same resolution as the input, $256 \times 256$. We trained the model for 70 epochs. The training cutoff was determined by examining sample results during training, where additional epochs did not appear to improve the visual fidelity of the generated modifications. \noindent\textbf{Training Evaluation:} The training details for the GAN are shown in Figure~\ref{fig:GAN_training}. The attGAN jointly trains the generator (containing both the encoder and decoder), the discriminator, and the classifier for predicting attributes. Please refer to the original work~\cite{attGAN} for more details about the model architecture. In Figure~\ref{fig:GAN_training}, (a) illustrates the generator's reconstruction error. As this error decreases, the generator is able to produce more realistic SEM images. (b) shows the discriminator's adversarial loss. The loss increases over training iterations, denoting that the generator is producing more realistic images, in turn fooling the discriminator. Panels (c) and (d) show the classification losses for predicting the attribute labels from both the original (part of the overall training loss for the discriminator) and the generated (part of the overall training loss for the generator) SEM images. The decreasing and then stabilizing behavior of the classifier losses indicates that the jointly trained classifier can accurately predict the attributes for both the real and fake (generated) images. This also implies that the generator can produce realistic modifications that can be correctly classified by the same classifier that correctly predicts attributes from the real images. Moreover, the low and stable reconstruction error indicates that the generator can reproduce realistic-looking SEM images. These combined observations provide evidence to support our claim that the GAN is trained well and that the quality of the image editing process is good, as showcased by the classifier performance on generated images. \begin{figure*}[htbp] \centering \includegraphics[width=0.99\linewidth]{GAN_training.pdf} \caption{ The performance curves for the GAN training. (a) illustrates the generator's reconstruction loss. As the error decreases, the generator is able to produce more realistic SEM images. (b) shows the discriminator's adversarial loss, which increases over training iterations as the generator produces more realistic images, in turn fooling the discriminator. (c) and (d) show the classification losses for predicting the attribute labels for both the original (part of the overall training loss for the discriminator) and the generated (part of the overall training loss for the generator) SEM images. } \label{fig:GAN_training} \end{figure*} \section{Additional Examples for the GAN Modified SEM Images} To better illustrate the performance of our model, we provide additional examples of the GAN-based modifications, as shown in Figure~\ref{fig:additionalResults_1} to Figure~\ref{fig:additionalResults_7}. One thing to note is that many of the modifications are reflected in detailed and localized features (e.g., porosity), which may not be obvious at first glance or to readers without a related background. We advise the reader to zoom in on the images to see the details. For porosity, we can observe a rougher texture with small dips on the crystals in the case of increased porosity (rightmost image). For polydispersity, the thing to look for is whether smaller crystals are disappearing/appearing in the gaps between larger crystals as the attribute is decreased/increased. \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\linewidth]{resultImage_label_1_idx_313} \caption{ Additional examples of GAN generated images. Similar to the corresponding figure in the main text, the left column is the original SEM image. The middle column images are synthesized to increase the corresponding attribute, whereas the right column is synthesized to decrease the corresponding attribute.
The four rows correspond to the four attributes, namely, \emph{porosity}, \emph{polydispersity}, \emph{size}, and \emph{facetness}. } \label{fig:additionalResults_1} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\linewidth]{resultImage_label_2_idx_753} \caption{ Additional examples of GAN generated images. Similar to the corresponding figure in the main text, the left column is the original SEM image. The middle column images are synthesized to increase the corresponding attribute, whereas the right column is synthesized to decrease the corresponding attribute. The four rows correspond to the four attributes, namely, \emph{porosity}, \emph{polydispersity}, \emph{size}, and \emph{facetness}. } \label{fig:additionalResults_2} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\linewidth]{resultImage_label_6_idx_133} \caption{ Additional examples of GAN generated images. Similar to the corresponding figure in the main text, the left column is the original SEM image. The middle column images are synthesized to increase the corresponding attribute, whereas the right column is synthesized to decrease the corresponding attribute. The four rows correspond to the four attributes, namely, \emph{porosity}, \emph{polydispersity}, \emph{size}, and \emph{facetness}.} \label{fig:additionalResults_3} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\linewidth]{resultImage_label_9_idx_399} \caption{ Additional examples of GAN generated images. Similar to the corresponding figure in the main text, the left column is the original SEM image. The middle column images are synthesized to increase the corresponding attribute, whereas the right column is synthesized to decrease the corresponding attribute. The four rows correspond to the four attributes, namely, \emph{porosity}, \emph{polydispersity}, \emph{size}, and \emph{facetness}.} \label{fig:additionalResults_4} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\linewidth]{resultImage_label_23_idx_561} \caption{ Additional examples of GAN generated images. Similar to the corresponding figure in the main text, the left column is the original SEM image. The middle column images are synthesized to increase the corresponding attribute, whereas the right column is synthesized to decrease the corresponding attribute. The four rows correspond to the four attributes, namely, \emph{porosity}, \emph{polydispersity}, \emph{size}, and \emph{facetness}.} \label{fig:additionalResults_5} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\linewidth]{resultImage_label_23_idx_717} \caption{ Additional examples of GAN generated images. Similar to the corresponding figure in the main text, the left column is the original SEM image. The middle column images are synthesized to increase the corresponding attribute, whereas the right column is synthesized to decrease the corresponding attribute. The four rows correspond to the four attributes, namely, \emph{porosity}, \emph{polydispersity}, \emph{size}, and \emph{facetness}.} \label{fig:additionalResults_6} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\linewidth]{resultImage_label_34_idx_555} \caption{ Additional examples of GAN generated images. Similar to the corresponding figure in the main text, the left column is the original SEM image.
The middle column images are synthesized to increase the corresponding attribute, whereas the right column is synthesized to decrease the corresponding attribute. The four rows correspond to the four attributes, namely, \emph{porosity}, \emph{polydispersity}, \emph{size}, and \emph{facetness}. } \label{fig:additionalResults_7} \end{figure*}
{ "timestamp": "2020-07-20T02:03:41", "yymm": "2007", "arxiv_id": "2007.08631", "language": "en", "url": "https://arxiv.org/abs/2007.08631" }
\section{Introduction\label{sec:intro}} Machine learning techniques are impacting science at an impressive pace, from robotics~\cite{argall2009survey} to genetics~\cite{libbrecht2015machine}, medicine~\cite{he2019practical}, and physics~\cite{carleo2019machine}. In physics, reservoir computing~\cite{verstraeten2007experimental,schrauwen2007overview}, based on echo-state neural networks~\cite{jaeger2001echo,JaegerReview,jaegerScience}, is gathering much attention for model-free, data-driven predictions of chaotic evolutions~\cite{ottLyapunov,ottPRL,ottAttractor,vlachas2018data,nakai2018machine}. Here, we scrutinize the use of reservoir computing to build effective models for predicting the slow degrees of freedom of multiscale chaotic systems. We also consider hybrid reservoirs, blending data with predictions based on an imperfect model~\cite{ottHybrid} (see also Ref.~\cite{wikner2020combining}). Multiscale chaotic systems represent a challenge to both theory and applications. For instance, turbulence can easily span over 4/6 decades in temporal/spatial scales~\cite{warhaft2002turbulence}, while climate time scales range from hours of atmosphere variability to thousands of years of deep ocean currents~\cite{peixoto1992physics,pedlosky2013geophysical}. These huge ranges of scales stymie direct numerical approaches, making modeling of the fast degrees of freedom mandatory, while the slow ones are usually the most interesting to predict. In principle, the latter are easier to predict: the maximal Lyapunov exponent (of the order of the inverse of the fastest time scale) controls the early dynamics of very small perturbations appertaining to the fast degrees of freedom, which saturate with time, letting the perturbations on the slow degrees of freedom grow at a slower rate controlled by the typically weaker nonlinear instabilities~\cite{lorenz1996predictability,aurell1996growth,cencini2013finite}. However, owing to nonlinearity, the fast degrees of freedom depend on, and in turn impact, the slower ones. Consequently, improperly modeling the former severely hampers the predictability of the latter~\cite{boffetta2000predictability}. We focus here on a simplified setting with only two time scales, i.e. on systems of the form: \begin{equation} \begin{aligned} \dot{\bm X}&=\frac{1}{\tau_s}\bm F_s(\bm X,\bm x)\\ \dot{\bm x}&=\frac{1}{\tau_f}\bm F_f(\bm x,\bm X)\,, \end{aligned} \label{eq:1} \end{equation} where $\bm X$ and $\bm x$ represent the slow and fast degrees of freedom, respectively. The time scale separation between them is controlled by $c\!=\!\tau_s/\tau_f$. The goal is to build an effective model for the slow variables, $\dot{\bm X}\!=\!\!\bm F_{\mathrm{eff}}(\bm X)$, to predict their evolution. When the fast variables are much faster than the slow ones ($c \gg 1$), multiscale techniques~\cite{sanders2007averaging,pavliotis2008multiscale} can be used to build effective models. Aside from this limit, systematic methods for deriving effective models are typically unavailable. In this article, we show that reservoir computers trained on time series of the slow degrees of freedom can be optimized to build (model-free, data-driven) effective models able to predict the slow dynamics. Provided the reservoir dimensionality is high enough, the method works both when the scale separation is large, basically recovering the results of standard multiscale methods such as the adiabatic approximation, and when it is not so large.
Moreover, we show that even an imperfect knowledge of the slow dynamics can be used to improve predictability, also for smaller reservoirs. The material is organized as follows. In Sec.~\ref{sec:implementation} we present the reservoir computing approach for predicting chaotic systems; moreover, we provide the basics of its implementation, also considering the case in which an imperfect model is available (hybrid implementation). In Sec.~\ref{sec:result} we present the main results obtained with a specific multiscale system. Section~\ref{sec:end} is devoted to discussions and perspectives. In Appendix~\ref{app:implementation} we give further details on the implementation, including the choice of hyperparameters. Appendix~\ref{app:model} presents the adiabatic approximation for the multiscale system considered here. In Appendix~\ref{app:hybrid} we discuss and compare different hybrid schemes. \section{Reservoir computing for chaotic systems and its implementation \label{sec:implementation}} Reservoir computing~\cite{verstraeten2007experimental,schrauwen2007overview} is a brain-inspired approach based on a recurrent neural network (RNN), the reservoir (R) -- i.e., an auxiliary high-dimensional nonlinear dynamical system naturally suited to deal with time sequences -- (usually) linearly coupled to a time-dependent lower-dimensional input (I), to produce an output (O). To make O optimized for approximating some desired dynamical observable, the network must be trained. The reservoir computing implementation avoids backpropagation~\cite{werbos1990backpropagation} by training only the output layer, while the R-to-R and I-to-R connections are quenched random variables. Remarkably, the reservoir computing approach allows for fast hardware implementations with a variety of nonlinear systems~\cite{larger2017high,tanaka2019recent}. Choosing the output as a linear projection of functions of the R-state, the optimization can be rapidly achieved via linear regression. The method works provided the R-to-R connections are designed to force the R-state to depend only on the recent past history of the input signal, fading the memory of the initial state. \subsection{Predicting chaotic systems with reservoir computing} When considering a chaotic dynamical system with state $\bm s(t)=(\bm X(t),\bm x(t))$, with reference to Eqs.~(\ref{eq:1}), the input signal $\bm u(t)\in \mathrm{I\!R}^{D_{I}}$ is typically a subset of the state observables, $\bm u(t)\!=\!\bm h(\bm s(t))$. For instance, in the following we consider functions of the slow variables, $\bm X$, only. When the dimensionality, $D_R$, of the reservoir is large enough and the R-to-R connections are suitably chosen, its state, $\bm r(t)\in \mathrm{I\!R}^{D_{R}}$, becomes a representation -- an echo -- of the input state $\bm s(t)$~\cite{jaegerScience,schrauwen2007overview,ottAttractor}, via a mechanism similar to generalized synchronization~\cite{ottAttractor,pikovsky2003synchronization}. In this configuration, dubbed open loop~\cite{rivkind2017local} (Fig.~\ref{fig:1}a), the RNN is driven by the input and, loosely speaking, synchronizes with it. When this is achieved, the output, $\bm v(t)\in \mathrm{I\!R}^{D_{O}}$, can be trained (optimized) to fit a desired function of $\bm s(t)$, for instance, to predict the next outcome of the observable, i.e. $\bm v(t+\Delta t) =\bm u(t+\Delta t)$. After training, we can close the loop by feeding the output back as a new input to R (Fig.~\ref{fig:1}b), thus obtaining an effective model for predicting the time sequence.
For the closed loop mode to constitute an effective (neural) model of the dynamics of interest, we require the network to work for arbitrary initial conditions, i.e. not only right after the training: a property dubbed \textit{reusability} in Ref.~\cite{ottAttractor}. For this purpose, when starting from a random reservoir state, a short synchronization period in open loop is needed before closing the loop. For the method to work, some stability property is required, which cannot, in general, be guaranteed in the closed loop configuration~\cite{rivkind2017local}. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{fig1.png} \caption{(Color online) Sketch of reservoir computing: (a) the components and their connections; (b) the two modes of operation: open loop for synchronizing the reservoir to the input and for training, closed loop for prediction. \label{fig:1}} \vspace{-0.5truecm} \end{figure} \subsection{Implementation\label{sec:RC}} Reservoir neurons can be implemented in different ways~\cite{verstraeten2007experimental}; we use an echo state neural network~\cite{jaegerScience}, mostly following Refs.~\cite{ottLyapunov,ottPRL,ottAttractor}. Here, we assume $D_{R}\!\gg\! D_{I}\!=\!D_{O}$ and the input to be sampled at discrete time intervals $\Delta t$. Neither assumption is restrictive; for instance, in the hybrid implementation below we will use $D_O\neq D_I$, and the extension to continuous time is straightforward~\cite{verstraeten2007experimental}. The reservoir is built via a sparse (low-degree, $d$) random graph represented by a $D_R\times D_R$ connectivity matrix $\mathbb{W}_R$, with the nonzero entries uniformly distributed in $[-1,\;1]$, scaled to have a specific spectral radius $\rho=\max\{|\mu_i|\}$, with $\mu_i$ being the matrix eigenvalues. The requirement $\rho<1$ is sufficient, though not strictly necessary~\cite{jiang2019model}, to ensure the echo state property~\cite{jaeger2001echo,JaegerReview} in open loop, namely the synchronization of $\bm r(t)$ with $\bm s(t)$. We distinguish training and prediction. Training is done in open loop mode using an input trajectory $\bm u(t)$ with $t\in[-T_s,T_t]$, where $T_t$ is the training input sequence length and $T_s$ is the length of the initial transient that lets $\bm r(t)$, randomly initialized at $t=-T_s$, synchronize with the system dynamics. After being scaled to zero mean and unit standard deviation, the input is linearly coupled to the reservoir nodes via a $D_R\times D_{I}$ matrix $\mathbb{W}_{I}$, with the nonzero entries taken as random variables uniformly distributed in $[-\sigma,\;\sigma]$. In open loop mode the network state $\bm r(t)$ is updated as \begin{equation} \bm r(t+\Delta t)=\tanh[\mathbb{W}_R\bm r(t)+\mathbb{W}_{I} \bm u(t)]\,. \label{eq:2} \end{equation} In the above expression the $\tanh$ is applied element-wise and can be replaced with other nonlinearities. The output is computed as $\bm v(t+\Delta t)=\mathbb{W}_O\bm r^\star(t+\Delta t)$ with the $D_R\times D_{O}$ matrix $\mathbb{W}_O$ obtained via linear regression by imposing $\mathbb{W}_O=\arg\min_{\mathbb{W}} \{\sum_{0\leq t\leq T_t}||\bm v(t)-\bm u(t)||^2+\alpha \mathrm{Tr}[\mathbb{W}\mathbb{W}^T] \}$, to ensure that the output is the best predictor of the next input observable. The term proportional to $\alpha$ is a regularization, while $\bm r^\star$ is a function of the reservoir state. Here, we take $r^*_i(t)=r_i(t)$ if $i$ is odd and $r^*_i(t)=r^2_i(t)$ otherwise~\cite{Note1}. Once $\mathbb{W}_O$ is determined, we switch to prediction mode.
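To fix ideas, the following minimal NumPy sketch implements the reservoir construction, the open-loop update, and the ridge regression for $\mathbb{W}_O$; the graph construction, the handling of the synchronization transient, and all names are simplifying assumptions rather than the exact implementation used here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(D_R, d, rho, D_I, sigma):
    """Sparse random R-to-R matrix rescaled to spectral radius rho,
    plus the random I-to-R input matrix."""
    W_R = np.where(rng.random((D_R, D_R)) < d / D_R,
                   rng.uniform(-1.0, 1.0, (D_R, D_R)), 0.0)
    W_R *= rho / np.max(np.abs(np.linalg.eigvals(W_R)))
    W_I = rng.uniform(-sigma, sigma, (D_R, D_I))
    return W_R, W_I

def train_output(W_R, W_I, u, alpha):
    """Open-loop run of the update rule followed by ridge regression;
    u has shape (T, D_I), is assumed standardized, and its first part
    serves as the synchronization transient."""
    D_R, T = W_R.shape[0], u.shape[0]
    r = np.zeros(D_R)
    states = np.empty((T - 1, D_R))
    for t in range(T - 1):
        r = np.tanh(W_R @ r + W_I @ u[t])
        states[t] = r
    r_star = states.copy()
    r_star[:, 1::2] **= 2   # square the even-numbered nodes (1-based)
    # ridge regression: predict u(t + dt) from r*(t + dt)
    W_O = np.linalg.solve(r_star.T @ r_star + alpha * np.eye(D_R),
                          r_star.T @ u[1:])
    return W_O              # prediction: v(t + dt) = r*(t + dt) @ W_O
\end{verbatim}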
Given a short sequence of measurements, in open loop, we can synchronize the reservoir with the dynamics via Eq.~(\ref{eq:2}), and then close the loop by letting $\bm u(t)\leftarrow \bm v(t)=\mathbb{W}_O\bm r^\star(t)$ in Eq.~(\ref{eq:2}). This way Eq.~(\ref{eq:2}) becomes a fully data-driven effective model for the time signal to be predicted. The resulting model, and thus its performance, will implicitly depend both on the hyperparameters ($d$, $\rho$, and $\sigma$) defining the RNN structure and the I-to-R connections, and on the length of the training trajectory ($T_t$). The choices of these hyperparameters are discussed in Appendix~\ref{app:implementation}. \subsection{Hybrid implementation\label{sec:hybrid}} So far we have assumed no prior knowledge of the dynamical system that generated the input. If we have an imperfect model for approximately predicting the next outcome of the observables $\bm u(t)$, we can include such information in a hybrid scheme by slightly changing the input and/or output scheme to exploit this extra knowledge~\cite{ottHybrid,wikner2020combining}. The idea of blending machine learning algorithms with physics-informed models is quite general, and it has been exploited also with methods different from reservoir computing; see, e.g., Refs.~\cite{milano2002neural,wan2018data,weymouth2013physics}. \begin{figure*}[t!] \centering \includegraphics[width=1\textwidth]{fig2-eps-converted-to} \vspace{-0.7truecm} \caption{(Color online) Prediction error growth for a single realization of a network of $D_R\!=\!500$ neurons. (a) Average (over $10^4$ initial conditions) ($\log_{10}$)error $\langle E(t)\rangle$ vs time during synchronization (open loop, gray region) and prediction (closed loop) for $c\!=\!10$ and $\Delta t\!=\!0.1$: the yellow shaded area circumscribes the twin and random twin model predictions (see text); the reservoir computer prediction (solid, black curve) is compared with that of the truncated model (purple, dotted curve), of the model with the fast variables replaced by their average (blue, dash-dotted curve), and of the model (\ref{eq:improved}) (red, dashed curve). The inset shows the same (closed loop only) for $\Delta t\!=\!0.01$. (b) An instance of a prediction experiment, showing the reference (dash-dotted, light blue curves) evolution of the $X$ (top) and $Z$ (bottom) variables of the coupled Lorenz model (\ref{eq:lorenzslow}) together with the prediction obtained via the reservoir (black, solid curve) and the adiabatic model (dashed, red curve). For details on hyperparameters see Appendix~\ref{app:hyper}. \label{fig:2}} \end{figure*} Let $\wp[\bm u(t)]=\hat{\bm u}(t+\Delta t)$ be the estimated next outcome of the observable $\bm u(t)$ according to our imperfect model. The idea is to supply such information in the input by replacing $\bm u(t)$ with the column vector $(\bm u(t),\wp[\bm u(t)])^T$, thus doubling the input dimensionality (and correspondingly the number of columns of the input matrix). For the output we proceed as before. The whole scheme is thus as above, with the only difference that $\mathbb{W}_O$ is now a $D_R\times D_I/2$ matrix. The switch to the prediction mode is then obtained using $(\mathbb{W}_O\bm r^\star (t),\wp[\mathbb{W}_O\bm r^\star (t)])^T$ as input in Eq.~(\ref{eq:2}). It is worth noticing that other hybrid schemes are possible; e.g., in Ref.~\cite{ottHybrid} the output has the form $\bm v(t+\Delta t)=\mathbb{W}_O(\bm r^\star(t),\wp[\bm u(t)])^T$, namely a combination of the prediction based on the network and on the physical model. In Appendix~\ref{app:hybrid} we comment further on our choice, and we compare it with the scheme proposed in Ref.~\cite{ottHybrid}.
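A minimal sketch of one closed-loop step under this input-augmentation scheme follows; \texttt{imperfect\_step} is a placeholder for the one-step integrator of the imperfect model (e.g. the truncated model introduced below), and the input matrix is assumed to have been built with the doubled number of columns.
\begin{verbatim}
import numpy as np

def hybrid_closed_loop_step(W_R, W_I, W_O, r, imperfect_step):
    """One prediction step: the network output is concatenated with
    the imperfect-model estimate of its next value and fed back
    through the reservoir update rule."""
    r_star = r.copy()
    r_star[1::2] **= 2                     # same r -> r* map as above
    v = r_star @ W_O                       # network forecast
    u_hybrid = np.concatenate([v, imperfect_step(v)])
    r_next = np.tanh(W_R @ r + W_I @ u_hybrid)
    return r_next, v
\end{verbatim}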
\section{Results for a two-time-scale system \label{sec:result}} We now consider the model introduced in Ref.~\cite{boffetta1998extension} as a caricature of the interaction between the (fast) atmosphere and the (slow) ocean. It consists of two Lorenz systems coupled as follows: \begin{eqnarray} &&\left\{ \begin{array}{l} \dot{X}=a(Y-X) \\ \dot{Y}=R_sX-ZX-Y-\epsilon_sxy \\ \dot{Z}=XY-bZ \end{array} \right. \label{eq:lorenzslow}\\ & &\left\{ \begin{array}{ll} \dot{x}=ca(y-x)\\ \dot{y}=c(R_fx-zx-y)+\epsilon_fYx\\ \dot{z}=c(xy-bz)\,, \end{array} \right. \label{eq:lorenzfast} \end{eqnarray} where Eqs.~(\ref{eq:lorenzslow}) and (\ref{eq:lorenzfast}) describe the evolution of the slow and fast variables, respectively. We fix the parameters as in Ref.~\cite{boffetta1998extension}: $a=10$, $b\!=\!8/3$, $R_s\!=\!28$, $R_f\!=\!45$, $\epsilon_s\!=\!10^{-2}$ and $\epsilon_f\!=\!10$, while for the time scale separation parameter, $c$, we use $c\!=\!10$ (as in Ref.~\cite{boffetta1998extension}) and $c\!=\!3$. The former corresponds to a scale separation such that the adiabatic approximation already provides good results (see below). Moreover, for $c\!=\!10$, the error growth on the slow variables is characterized by two exponential regimes~\cite{boffetta1998extension}: the first with rate given by the Lyapunov exponent of the full system, $\lambda_f\!\approx\! 11.5$, and the second by $\lambda_s\!\approx\! 0.85$, controlled by the fast and slow instabilities, respectively. This decomposition can be made more rigorous, as shown in Ref.~\cite{ginelli} for a closely related model.
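For concreteness, the following SciPy sketch integrates the coupled system and extracts the slow-variable series that will be used as network input below; the integration window, initial condition, sampling, and tolerances are illustrative choices only.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Right-hand side of the coupled Lorenz systems; s = (X, Y, Z, x, y, z).
def coupled_lorenz(t, s, a=10.0, b=8.0/3.0, Rs=28.0, Rf=45.0,
                   eps_s=1e-2, eps_f=10.0, c=10.0):
    X, Y, Z, x, y, z = s
    return [a * (Y - X),
            Rs * X - Z * X - Y - eps_s * x * y,
            X * Y - b * Z,
            c * a * (y - x),
            c * (Rf * x - z * x - y) + eps_f * Y * x,
            c * (x * y - b * z)]

# Slow-variable series sampled every dt = 0.1 after a transient.
sol = solve_ivp(coupled_lorenz, (0.0, 1000.0), np.ones(6),
                t_eval=np.arange(100.0, 1000.0, 0.1),
                rtol=1e-9, atol=1e-9)
u_train = sol.y[:3].T   # (X, Y, Z) samples fed to the network
\end{verbatim}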
We test the reservoir computing approach by inputting the slow variables, i.e. $\bm u(t)\!=\!(X(t),Y(t),Z(t))$. In open loop, we let the reservoir synchronize with the input; subsequently, we perform the training and optimize $\mathbb{W}_O$ as explained earlier. Then, to test the prediction performance, we consider $10^4$ initial conditions; for each of them, we feed the slow variables to the network in open loop and record, from $t\!=\!-T_s$ to $t\!=\!0$, the one-step ($\log_{10}$)error $E(t)\!=\!\log_{10}\| {\bm v}(t)-{\bm u}(t)\|$, ${\bm v}(t)$ being the one-step network forecast (output). Initially, the average ($\log_{10}$)error $\langle E(t)\rangle$ decreases linearly, as shown in the grey regions of Figs.~\ref{fig:2}a and \ref{fig:3}a, which is a visual proof of the echo state property. Then, it reaches a plateau -- the synchronization error $E_S$ -- which can be interpreted as the average smallest ($\log_{10}$)error on the initial condition and the one-step prediction error. At the end of the open loop, after synchronization, we switch to the prediction (closed loop) configuration and compute the ($\log_{10}$)error growth between the network prediction and the reference trajectory. Moreover, we take the output variables at the end of the open loop and use them as initial conditions for other models (discussed below) which are used as a comparison. First, we consider the perfect model with an error on the initial condition, i.e. Eqs.~(\ref{eq:lorenzslow}) with the slow variables set equal to the network-obtained values at $t=0$, i.e. at the end of the open loop. By construction, the network does not forecast the fast variables, which are thus initialized either using their exact values from the reference trajectory (twin model), which is quite ``unfair'', or using random values (random twin) drawn from the stationary measure on the fast attractor with fixed slow variables. Then we consider increasingly refined effective models for the slow degrees of freedom only: a ``truncated'' model, $\dot {\bm X}\!= \!\bm{F}_T(\bm X)$, obtained from Eqs.~(\ref{eq:lorenzslow}) by setting $\epsilon_s\!=\!0$; a model in which we replace the fast variables in Eqs.~(\ref{eq:lorenzslow}) with their global average; and the adiabatic model, in which the fast variables are averaged with fixed slow variables, which amounts to replacing $\epsilon_s xy$ in the equation for $\dot{Y}$ with (see Appendix~\ref{app:model} for details on the derivation): \begin{equation} \epsilon_s \langle xy\rangle_{\bm X}\!=\!(1.07\!+\!0.26Y/c) \,\Theta(\!1.07\!+\!0.26Y/c)\,, \label{eq:improved} \end{equation} where $\Theta$ denotes the Heaviside step function. In Fig.~\ref{fig:2} we show the results of the comparison between the prediction obtained with the reservoir computing approach and the different models described above for $c=10$, with sampling time $\Delta t=0.1$ (and $\Delta t=0.01$ in the inset of Fig.~\ref{fig:2}a). Figure~\ref{fig:2}a shows that eliminating the fast degrees of freedom (truncated model) or just considering their average effect leads to very poor predictions, while the prediction of the reservoir computer is comparable to that of the adiabatic model (\ref{eq:improved}), as qualitatively shown in Fig.~\ref{fig:2}b (whose top/bottom panels show the evolution of the slow variables $X$ and $Z$ for the reference trajectory and the predictions obtained via the reservoir and the adiabatic model). Remarkably, the reservoir-based model seems to even slightly outperform the twin model. We understand this as follows: by omitting fast components, one does not add fast decorrelating fluctuations to those intrinsic to the reference trajectories, thus reducing effective noise. Notice that the zero error on the fast components of the twin model is rapidly pushed to its saturation value by the error on the slow variables. The sampling time $\Delta t\!=\!0.1$ is likely playing an important role during learning by acting as a low-pass filter. Indeed, the comparison with the twin model slightly deteriorates for $\Delta t\!=\!0.01$ (see Fig.~\ref{fig:2}a inset). Figure~\ref{fig:3} shows the results for $c\!=\!3$. Here, the poor scale separation spoils the effectiveness of the adiabatic model (\ref{eq:improved}), while the prediction obtained via the reservoir computing approach remains effective, as visually exemplified in Fig.~\ref{fig:3}b and quantified in Fig.~\ref{fig:3}a. Notice, however, that the network predictability deteriorates with respect to the previous case and the twin model does better, though the reservoir still outperforms the random twin model. This slight worsening is likely due to the fact that the discarded variables are not fast enough to average themselves out, making the learning task harder. Nevertheless, the network remains predictive. \begin{figure*}[t!] \centering \includegraphics[width=1\textwidth]{fig3-eps-converted-to} \vspace{-0.7truecm} \caption{(Color online) Same as Fig.~\ref{fig:2} for the case $c=3$.
\label{fig:3}} \end{figure*} \subsection{Which effective model has the network built?} We now focus on the case $c\!=\!10$ and $\Delta t\!=\!0.01$, for which we can gain some insights into how the network works by comparing it to the adiabatic model~(\ref{eq:improved}). The sampling time is indeed small enough for time differences to approximate derivatives. In Fig.~\ref{fig:4} we demonstrate that the network in fact generates an effective model akin to the adiabatic one (\ref{eq:improved}). Here we show a surrogate of the residual time derivative of $Y$, meaning that we have removed the truncated-model derivative, as a function of $Y$: \begin{equation} \Delta \widetilde{\dot{Y}}= \frac{Y(t+\Delta t)-Y(t)}{\Delta t}-\frac{Y_T(t+\Delta t)-Y(t)}{\Delta t}\,. \label{eq:dderiv} \end{equation} The expression in Eq.~(\ref{eq:dderiv}) provides a proxy for how the network has modeled the term $-\epsilon_s xy$ in Eqs.~(\ref{eq:lorenzslow}). The underlying idea is as follows. We let the network evolve in closed loop; at time $t$ it takes as input the forecasted slow variables $\bm v(t)\!=\!(\hat{X}(t),\hat{Y}(t),\hat{Z}(t))$ and it outputs the next-step forecast $\bm v(t\!+\!\Delta t)\!=\!(\hat{X}(t\!+\!\Delta t),\hat{Y}(t\!+\!\Delta t),\hat{Z}(t\!+\!\Delta t))$. We then use $\bm v(t)$ as input to the truncated model and evolve it for a time step $\Delta t$ to obtain $(X_T(t+\Delta t),Y_T(t+\Delta t),Z_T(t+\Delta t))$. Equation~(\ref{eq:dderiv}) is then used to infer how the network models the coupling with the fast variables. Evolving $\bm v(t)$ by one time step using Eq.~(\ref{eq:improved}) and then again employing (\ref{eq:dderiv}), we obviously obtain the line $-1.07-0.26Y$ (dashed in Fig.~\ref{fig:4}). The network residual derivatives (black dots in Fig.~\ref{fig:4}) distribute on a narrow stripe around that line. This means that the network, for wide scale separation, performs an average conditioned on the values of the slow variables. For $c=10$, such a conditional average is equivalent to the adiabatic approximation (\ref{eq:improved}), as discussed in Appendix~\ref{app:model}. For comparison, we also show the residual derivatives (\ref{eq:dderiv}) computed with the full model (\ref{eq:lorenzslow}-\ref{eq:lorenzfast}) (gray dots), which display a scattered distribution, best fitted by Eq.~(\ref{eq:improved}). For $c=3$, while the adiabatic approximation is too rough, remarkably the network still performs well, even though it is more difficult to identify the model it has built, which will depend on the whole set of slow variables (see Appendix~\ref{app:model} for a further discussion). \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{fig4-eps-converted-to} \vspace{-0.7truecm} \caption{(Color online) Residual derivatives (\ref{eq:dderiv}) vs $Y$ for $c\!=\!10$ and $\Delta t\!=\!0.01$, computed with the network (black dots), the multiscale model (\ref{eq:improved}) (yellow, dashed line), and the full dynamics (gray dots). For details on hyperparameters see Appendix~\ref{app:hyper}. \label{fig:4}} \end{figure}
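A minimal sketch of this residual-derivative computation for the $Y$ component is given below; the truncated-model integration, the tolerances, and all names are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def truncated_rhs(t, s, a=10.0, b=8.0/3.0, Rs=28.0):
    """Slow equations with eps_s = 0 (the truncated model)."""
    X, Y, Z = s
    return [a * (Y - X), Rs * X - Z * X - Y, X * Y - b * Z]

def residual_dY(v_t, v_next, dt=0.01):
    """Proxy for the coupling term learned by the network: evolve the
    network state v(t) one step with the truncated model and compare
    its Y component with the network's own forecast at t + dt."""
    Y_T = solve_ivp(truncated_rhs, (0.0, dt), v_t,
                    rtol=1e-9, atol=1e-9).y[1, -1]
    return (v_next[1] - Y_T) / dt
\end{verbatim}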
\subsection{Predictability time and hybrid scheme} So far we have focused on the predictability of a quite large network ($D_R\!=\!500$, as compared to the low dimensionality of Eqs.~(\ref{eq:lorenzslow}-\ref{eq:lorenzfast})). How does the network performance depend on the reservoir size $D_R$? In Fig.~\ref{fig:5} we show the $D_R$-dependence of the average (over reservoir realizations and initial conditions) predictability time, $T_p$, defined as the first time at which the error between the network-predicted and reference trajectories reaches the threshold value $\Delta^*=0.4 \langle ||\bm X||^2\rangle^{1/2}$. For $D_R\gtrsim 450$, the predictability time saturates, while for smaller reservoirs it can be about threefold smaller and, in addition, shows large fluctuations, mainly due to unsuccessful predictions, i.e. instances in which the network is unable to properly model the dynamics (see Fig.~\ref{fig:S1}). Remarkably, when implementing the hybrid scheme, even with a poorly performing predictor such as the truncated model, the forecasting ability of the network improves considerably (as also shown in Fig.~\ref{fig:5}). In particular, with the hybrid scheme, saturation is reached earlier (for $D_R\gtrsim 300$) and, for smaller reservoirs, the predictability time of the hybrid scheme is longer. Moreover, the hybrid scheme is less prone to failures even for small $D_R$; hence the fluctuations are smaller (see Fig.~\ref{fig:S1}). Note that the chosen hybrid design ensures that the improvement is only due to the reservoir's capability of building a better effective model, reducing the average synchronization ($\log_{10}$)error $\langle E_S\rangle$ (see the insets of Figs.~\ref{fig:5} and \ref{fig:S1}, and the discussion in Appendix~\ref{app:hybrid}) and thus the error on the initial condition of the slow variables. Indeed, in the inset of Fig.~\ref{fig:5} we also show the slope predicted on the basis of the slow perturbation growth rate, $\lambda_s$~\cite{boffetta1998extension}. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{fig5-eps-converted-to} \vspace{-0.7truecm} \caption{(Color online) Average predictability time $T_p$, normalized with the slow finite-size Lyapunov exponent $\lambda_s$ (left scale) and with the (fast) maximal Lyapunov exponent $\lambda_f$ (right scale), versus reservoir size $D_R$ (hyperparameters for the hybrid implementation are the same as for the reservoir-only approach and are discussed in Appendix~\ref{app:hyper}), for the reservoir-only (purple circles) and hybrid (green squares) schemes, with system parameters $c\!=\!10$ and $\Delta t\!=\!0.1$. Error bars denote the statistical standard deviation over 20 independent network realizations, each sampling $10^3$ initial conditions. Inset: $T_p\lambda_s$ vs the average synchronization ($\log_{10}$)error $\langle E_S\rangle$. The slope of the black line is $-1$, corresponding to the slow perturbation growth rate $\lambda_s$.\label{fig:5}} \end{figure} The above observations boil down to the fact that the difference between the hybrid and reservoir-only approaches disappears with increasing $D_R$, as the same plateau values for both the synchronization error and the predictability time are reached. In other terms, if the reservoir is large enough, adding the extra information from the imperfect model does not improve the model produced by the network. These conclusions can be cast as a positive message by saying that using a physically informed model allows for reducing the size of the reservoir needed to achieve a reasonable predictability, and hence for obtaining an effective model of the dynamics with a smaller network, which is important when considering multiscale systems of high dimensionality.
We remark that in Fig.~\ref{fig:5} the predictability time $T_p$ was made nondimensional either using the growth rate of the slow dynamics $\lambda_s$ (left scale of Fig.~\ref{fig:5}), or using the Lyapunov exponent $\lambda_f$ of the full system (right scale), which is dominated by the fast dynamics. For large networks the predictability time is as large as $5$ (finite-size) Lyapunov times, which corresponds to about $70$ Lyapunov times with respect to the full dynamics. Such a remarkably long predictability with respect to the fastest time scale is typical of multiscale systems, where the maximal Lyapunov exponent does not say much about the predictability of the slow degrees of freedom \cite{lorenz1996predictability,aurell1996growth,cencini2013finite}. Figures~\ref{fig:2}a, \ref{fig:3}a and \ref{fig:5}(inset) (see also the inset of Fig.~\ref{fig:S1}) show that it is hard to reach a synchronization error below $10^{-2}$. Even when this happens it does not improve the predictability, as the error quickly (even faster than the Lyapunov time) rises to values $O(10^{-2})$. Indeed, such an error threshold corresponds to the crossover scale between the fast- and slow-controlled regimes of the perturbation growth (see Fig.~2 in Ref.~\cite{boffetta1998extension}). In other words, pushing the error below this value requires the reservoir to (partially) reconstruct also the fast component of the dynamics. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{fig6-eps-converted-to} \caption{(Color online) Nondimensional predictability time $T_p\lambda_s$ of $20$ network realizations (each averaged over $10^4$ initial conditions), in the reservoir-only (purple circles) and hybrid scheme (green squares), as a function of the reservoir size $D_R$. The solid curves display the average over all realizations, already presented in Fig.~\ref{fig:5}. Notice that, in the reservoir-only scheme, a number of outliers are present for $D_R\lesssim 400$; these correspond to ``failed'' networks that make poor medium-term predictions or even fail to reproduce the climate. Remarkably, such failures are not observed in the hybrid scheme. Inset: $\langle E_S\rangle$, i.e. the average (over network realizations and $10^4$ initial conditions for each realization) ($\log_{10}$) error at the end of the open loop, versus the reservoir size $D_R$ for the reservoir-only (purple curve) and the hybrid (green curve) scheme, respectively. Symbols display the synchronization error in each network realization. Notice that there are no realizations with strong departures from the average as observed in the main panel for the predictability time: this shows that the predictability performance is not always linked to the synchronization error (see text for a further discussion). Data refer to the case $c=10$ and $\Delta t=0.1$. For hyperparameters see Appendix~\ref{app:hyper}. \label{fig:S1}} \end{figure} \subsection{The role of the synchronization error\label{sec:error}} In the previous section, we have used the average predictability time, $T_p$, as a performance metric. If we interpret the synchronization error (at the end of the open loop) as the error on the initial conditions then, since the system is chaotic, we could naively think that reducing such an error always enhances the predictability. Consequently, one could expect the size of this error to be another good performance metric. In the following, we show that this is only partially true.
Obviously, to achieve long-term predictability the smallness of the synchronization error is a necessary condition. Indeed, the (log)error at the end of the open-loop cycle, $E_S$, puts an upper limit to the predictability time as \begin{equation} T_p\lesssim\frac{1}{\lambda_{s}}[\log(\Delta^*)-E_S]\,, \label{eq:tpred} \end{equation} as confirmed by the solid line in the inset of Fig.~\ref{fig:5}. However, it is not otherwise very informative about the overall performance. The reason is that the value of $E_S$, which can also be seen as the average error on one-step predictions, does not provide information on the structural stability of the dynamics. Indeed, for a variety of hyperparameter values, we have observed low $E_S$ resulting in failed predictions: in other words, the model built by the network is not effective in forecasting and in reproducing the climate. In these cases, the network was unable to generate a good effective model, as shown in Fig.~\ref{fig:S1}: this typically happens for relatively small $D_R$ in the reservoir-only implementation. In a less extreme scenario, the error $E_S$ can be deceptively lower than the scale at which the dynamics has been properly reconstructed. This latter case is relevant to the multiscale setting since, as outlined at the end of the previous section, fast-variable reconstruction is necessary to push the initial error below a certain threshold. In some cases, we did observe the synchronization error falling below the typical value $10^{-2}$ but immediately jumping back to it, implying unstable fast-scale reconstruction (for instance, see $c=3$, $\Delta t=0.01$ in Fig.~\ref{fig:3}a). As a consequence of the two above observations, $E_S$ is an unreliable metric for exploring the hyperparameter landscape as well. We also remark that, even if fast scales were modeled with a proper architecture and training time, and $E_S$ could be pushed below the crossover with an actual boost in performance, such improvements would not dramatically increase the predictability time of the slow variables, since they would be suppressed by the global Lyapunov exponent, which is larger, being dominated by the fast degrees of freedom. This situation, as discussed above, is typical of multiscale systems. \section{Conclusions\label{sec:end}} We have shown that reservoir computing is a promising machine learning tool for building effective, data-driven models of multiscale chaotic systems, able to provide, in some cases, predictions as good as those that can be obtained with a perfect model with an error on the initial conditions. Moreover, the simplicity of the system allowed us to gain insights into the inner workings of the reservoir computing approach which, at least for large scale separation, builds an effective model akin to that obtained by asymptotic multiscale techniques. Finally, the reservoir computing approach can be reinforced by blending it with an imperfect predictor, making it perform well also with smaller reservoirs. While we have obtained these results with a relatively simple two-timescale model, given the success of previous applications to spatially extended systems \cite{ottPRL}, we think the method should work also with more complex, high-dimensional multiscale systems. In the latter, it may be necessary to consider multi-reservoir architectures \cite{carmichael2019analysis} in parallel \cite{ottPRL}. Moreover, reservoir computing can be used to directly predict unobserved degrees of freedom \cite{lu2017reservoir}.
Using this scheme and the ideas developed in this work, it would be interesting to explore the possibility of building novel subgrid schemes for turbulent flows \cite{meneveau2000scale,sagaut2006large} (see also Ref.~\cite{wikner2020combining} for a very recent attempt in this direction based on reservoir computing with a hybrid implementation); preliminary tests could be performed on shell models for turbulence, for which purely physics-informed approaches have been only partially successful \cite{biferale2017optimal}. \begin{acknowledgments} We thank L. Biferale for early interest in the project and useful discussions. AV and FB acknowledge MIUR-PRIN2017 ``Coarse-grained description for non-equilibrium systems and transport phenomena'' (CO-NEST). We acknowledge the CINECA award for the availability of high performance computing resources and support from the GPU-AI expert group in CINECA. \end{acknowledgments}
\section{Introduction} In length metrology by optical interferometry, wavefront errors affect the period of the interference signal. The calibration of lasers against frequency standards achieves relative uncertainties smaller than $10^{-10}$, but it is not possible to trace the wavelength back to the frequency via the plane-wave dispersion equation. The relevant corrections have been extensively investigated in the literature \cite{Dorenwendt:1976,Mana:1989,Bergamin:1994,Cordiali:1997,Niebauer:2003,Cavagnero:2006,Robertsson:2007,Fujimoto:2007,Dagostino:2011,Andreas:2011,Andreas:2012,Andreas:2015,Andreas:2016,Mana:2018}. When the interfering wavefronts differ only by the propagation distances through the interferometer arms, the fractional wavelength difference -- which, typically, ranges from parts in $10^{-7}$ to parts in $10^{-9}$ -- is proportional to the square of the beam divergence which, for arbitrary paraxial beams, is proportional to the trace of the second central-moment of the angular power-spectrum \cite{Bergamin:1999,Mana:2017a}. Characterizations of the laser beams leaving a combined x-ray and optical interferometer brought to light wavefront and wavelength ripples having a spatial bandwidth of a few mm$^{-1}$ and amplitudes as large as $\pm 20$ nm \cite{Balsamo:2003} and $\pm 10^{-8}\lambda$ \cite{Sasso:2016}, respectively, which might have a detrimental effect on the accuracy of the measurements. Since the differential wavefront errors -- i.e., a non-uniform phase profile of the interference pattern -- cannot be explained by aberrations of the beam feeding the interferometer, we carried out an analysis of the effect of wavefront aberrations due to the interferometer optics. In section \ref{theory}, we outline the mathematical framework needed to model two-beam interferometry and paraxial propagation and show a one-dimensional analytical calculation of the difference of the fringe period from the plane-wave wavelength. Finally, we report on a two-dimensional Monte Carlo calculation of the fringe period in the presence of wavefront errors caused by the interferometer optics. \section{Mathematical model}\label{theory} \subsection{Phase of the interference signal} The interferometer slides two beams, $u_0({\bi{r}};z+s)\exp(-\rmi kz)$ and $u_1({\bi{r}};z)\exp(-\rmi kz)$, one with respect to the other by a distance $s$ while keeping them coaxial. By leaving out the $\exp(-\rmi kz)$ term of the optical fields -- where $k=2\pi/\lambda$ is the plane-wave wave number and $z$ the propagation distance -- and assuming an infinite detector, the integrated interference signal is \begin{eqnarray} S(s) &= &\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} |u_0({\bi{r}};s)+u_1({\bi{r}};0)|^2 \, \rmd {\bi{r}} \nonumber \\ &= & \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} |\tilde{u}_0({\bi{p}};s)+\tilde{u}_1({\bi{p}};0)|^2 \, \rmd {\bi{p}} , \end{eqnarray} where we reset the origin of the $z$ axis, ${\bi{r}}$ is a position vector in the detector plane (orthogonal to the $z$ axis), $\tilde{u}_0({\bi{p}};s)$ and $\tilde{u}_1({\bi{p}};0)$ are the angular spectra of the interfering beams \cite{Goodman:1996}, and ${\bi{p}}$ is the wavevector of the angular-spectrum basis, $\exp(-\rmi {\bi{p}} \bi{r})$.
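The equality of the direct-space and angular-spectrum forms of $S(s)$ is Parseval's theorem. The following one-dimensional Python sketch (our own illustration; the grid size, window, and ripple are assumed values) verifies it numerically for sampled fields with a unitary Fourier convention.
\begin{verbatim}
import numpy as np

N, L, w0 = 1024, 20.0, 2.0             # grid points, window and waist (mm)
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
u0 = (2/(np.pi*w0**2))**0.25 * np.exp(-x**2/w0**2)
u1 = u0 * np.exp(1j*1e-3*np.sin(2*np.pi*x))   # small wavefront ripple

S_direct = np.sum(np.abs(u0 + u1)**2) * dx
ut0 = np.fft.fft(u0) * dx / np.sqrt(2*np.pi)  # unitary convention
ut1 = np.fft.fft(u1) * dx / np.sqrt(2*np.pi)
dp = 2*np.pi / (N*dx)                         # spectral step
S_spectrum = np.sum(np.abs(ut0 + ut1)**2) * dp
assert np.isclose(S_direct, S_spectrum)
\end{verbatim}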
The phase of the integrated interference pattern in excess (or defect) with respect to $-ks$ is \cite{Bergamin:1999} \begin{equation}\label{phase} \Phi(s) = \arg\big[\Xi(s)\big] , \end{equation} where \begin{equation}\label{Xi} \Xi(s) = \int_{-\infty}^{+\infty} \tilde{u}_1^*({\bi{p}};0) U({\bi{p}};s) \tilde{u}_0({\bi{p}};0)\, \rmd {\bi{p}} \end{equation} is the interference term of the integrated intensity. In (\ref{Xi}), we used $\tilde{u}_0({\bi{p}};s) = U({\bi{p}};s) \tilde{u}_0({\bi{p}};0)$, where \begin{equation} U({\bi{p}};s)= \exp\left(\frac{\rmi p^2 s}{2k}\right) , \end{equation} is the reciprocal-space representation of the paraxial approximation of the free-space propagator and $p^2=|{\bi{p}}|^2$. The fringe period is \begin{equation}\label{error} \lambda_e = \lambda \bigg(1 + \frac{1}{k} \frac{\rmd \Phi}{\rmd s} \bigg|_{s=0} \bigg) , \end{equation} where the sign of the derivative is dictated by the negative sign chosen for the plane-wave propagation. It must be noted that, since $U^*({\bi{p}};z)U({\bi{p}};z+s) = U({\bi{p}};s)$, the interfering beams can be propagated by the same distance $z$ without changing (\ref{Xi}) and, consequently, $\lambda_e$. Therefore, (\ref{error}) depends only on the length difference of the interferometer arms, not on the distance of the detection plane from, for instance, the beam waist. The interferometer recombines the light beams after delivering them through arms of different optical lengths. We consider the case in which the interferometer arms have the same length; an analysis of the fringe phase and period as a function of the arm difference is given in \cite{Cavagnero:2006}. However, we want to allow the interferometer arms to deviate from perfection. Therefore, $\tilde{u}_1({\bi{p}};0)$ and $\tilde{u}_0({\bi{p}};0)$ are intrinsically different, meaning that they cannot be made equal by freely propagating one of the two, and, as implicitly assumed in (\ref{Xi}), the aberrations occur after the beam splitting but before the interferometer mirrors. \subsection{Propagation of the wavefront errors} To give an analytical one-dimensional example, let the complex amplitudes of the direct-space representation of the interfering beams differ by a small wavefront error $\varphi(x)$, that is, \begin{equation}\label{u-series} u_1(x) = u_0(x) \rme^{\rmi \varphi(x)} \approx u_0(x) \big[ 1 + \rmi\varphi(x) - \varphi^2(x)/2 \big] , \end{equation} where we omitted the $z=0$ specification, and let \begin{equation}\label{gauss} u_0(x) = \left(\frac{2}{\pi w_0^2}\right)^{1/4} \rme^{-x^2/w_0^2} \end{equation} be a normalized Gaussian beam. Since we are interested in sliding distances that are small with respect to the Rayleigh length -- that is, $ks\theta^2/2 \ll 1$, where $\theta$ is the beam divergence -- it is convenient to use a finite-difference approximation of the $z$ derivative in the paraxial wave equation and the first-order approximation, \begin{equation} U(x;s) \approx 1 - \frac{\rmi s \partial^2_x}{2 k} , \end{equation} of the direct-space representation of the free-space propagator. Hence, \begin{equation}\label{Xix} \Xi(s) = \int_{-\infty}^{+\infty} u_1^*(x) \left( 1 - \frac{\rmi s \partial^2_x}{2 k} \right) u_0(x)\, \rmd x . \end{equation} It is worth noting that, since the $-\partial^2_x$ operator is self-adjoint, it does not matter which of the interfering beams is slid. Therefore, for the convenience of the $\Xi(s)$'s computation, we choose to propagate $u_0(x)$.
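As a numerical counterpart of (\ref{Xix}) and (\ref{error}), the following Python sketch (our own; the wavelength, grid, and ripple amplitude are assumed values) evaluates $\Xi(\pm s)$ with the first-order propagator and estimates the fractional fringe-period correction by a finite difference of the excess phase.
\begin{verbatim}
import numpy as np

lam = 532e-6                       # assumed wavelength (mm)
k = 2*np.pi/lam
N, L, w0 = 4096, 40.0, 2.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
u0 = (2/(np.pi*w0**2))**0.25 * np.exp(-x**2/w0**2)
phi = 1e-4*np.sin(2*np.pi*x/1.5)   # wavefront error (rad), assumed
u1 = u0*np.exp(1j*phi)

def Xi(s):
    d2u0 = np.gradient(np.gradient(u0, dx), dx)   # second derivative
    return np.sum(np.conj(u1)*(u0 - 1j*s*d2u0/(2*k)))*dx

s = 50*lam
dlam = (np.angle(Xi(s)) - np.angle(Xi(-s)))/(2*k*s)
print(dlam)   # leading term: theta**2/8, with theta = 2/(k*w0)
\end{verbatim}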
By using (\ref{u-series}) and carrying out the integrations in (\ref{Xix}), we obtain \cite{Mathematica} \begin{numparts}\begin{equation}\fl {\rm{Re}}[\Xi(s)] = \sqrt{\pi} - \frac{s}{kw_0^2} \int_{-\infty}^{+\infty} \rme^{-\xi^2}(1-\xi^2)\varphi(\xi)\, \rmd \xi - \frac{1}{2} \int_{-\infty}^{+\infty} \rme^{-\xi^2}\varphi^2(\xi)\, \rmd \xi \end{equation} and \begin{equation}\fl {\rm{Im}}[\Xi(s)] = \frac{\sqrt{\pi} s}{2kw_0^2} - \frac{s}{2kw_0^2} \int_{-\infty}^{+\infty} \rme^{-\xi^2}(1-\xi^2)\varphi^2(\xi)\, \rmd \xi + \int_{-\infty}^{+\infty} \rme^{-\xi^2}\varphi(\xi)\, \rmd \xi , \end{equation}\end{numparts} where $\xi=\sqrt{2}x/w_0$. \subsection{Fractional error}\label{analytical} The plane-wave wavelength is shorter than the fringe period $\lambda_e$ as defined in (\ref{error}); the fractional difference is \cite{Mathematica} \begin{eqnarray}\label{corre-1}\fl\nonumber \frac{\Delta \lambda}{\lambda} \approx &\frac{\theta^2}{8} \bigg[ 1 + \frac{1}{\pi} \int_{-\infty}^{+\infty} \rme^{-\xi^2}(1-2\xi^2)\varphi(\xi)\, \rmd \xi \int_{-\infty}^{+\infty} \rme^{-\xi^2}\varphi(\xi)\, \rmd \xi - \\ \fl &\frac{1}{2\sqrt{\pi}} \int_{-\infty}^{+\infty} \rme^{-\xi^2}(1-2\xi^2)\varphi^2(\xi)\, \rmd \xi \bigg] , \end{eqnarray} where the calculation was carried out up to the second perturbative order, $\Delta\lambda=\lambda_e - \lambda$, $\theta=2/(kw_0)$ is the $u_0$'s divergence, and $\xi=\sqrt{2}x/w_0$. The simplest way to investigate the effect of the wavefront ripple is to consider the phase grating \begin{equation}\label{varphi} \varphi(\xi)=\epsilon\sin(a\xi+\alpha) , \end{equation} where $a=2\pi w_0/(\sqrt{2}\Lambda)$, $\Lambda$ is the grating pitch, and $\epsilon \ll 1$ rad. Hence, by carrying out the integrations in (\ref{corre-1}), we obtain \cite{Mathematica} \begin{equation}\label{e1D} \frac{\Delta \lambda}{\lambda} \approx \frac{\theta^2}{8} \left[ 1 + \frac{a^2\rme^{-a^2}\cos(2\alpha) + (2+a^2)\rme^{-a^2/2}\sin^2(\alpha)}{2} \epsilon^2 \right] . \end{equation} The $\theta^2/8$ term is proportional to the variance of the $u_0(x)$ angular spectrum. It is the one-dimensional equivalent of half the trace of the second central-moment of the angular spectrum \cite{Bergamin:1999,Mana:2017a}, which is the standard ingredient used to calculate the needed correction and takes the diffraction of arbitrary paraxial beams into account. \begin{figure}\centering \includegraphics[width=6.3cm]{Fig-theta.pdf} \includegraphics[width=6.3cm]{Fig-delta.pdf} \caption{Left: propagation directions of the aberrated and the superimposed beams ($u_1$ and $u_0+u_1$, respectively) exiting the interferometer. Right: delta values of the approximate wavelength differences (\ref{a1}-$c$) relative to the average difference (\ref{average}) {\it vs.} the fractional spatial frequency $w_0/\Lambda$ of the wavefront error (\ref{varphi}). The root-mean-square amplitude of the wavefront error is 10 nm.} \label{delta-1D} \end{figure} To quantify the impact of the wavefront error, we compare the fractional difference (\ref{e1D}) to the approximations $\Delta\lambda/\lambda \approx \Tr(\bGamma_i)/2$, where $\bGamma_i$ is the second central-moment of the angular spectrum of: i) the unperturbed beam $u_0$ illuminating the interferometer, ii) the aberrated beam $u_1$ exiting the interferometer, and iii) the superimposed beam $u_0+u_1$ exiting the interferometer.
In the first case we have \begin{numparts}\begin{equation}\label{a1} \Tr(\bGamma_0)/2 = \theta^2/8 , \end{equation} in the second \begin{equation}\label{a2} \Tr(\bGamma_1)/2 = \frac{\theta^2}{8} \left[ 1 + a^2(1+\rme^{-a^2} - 2\rme^{-a^2/2})\epsilon^2 \right] , \end{equation} and, in the third, \begin{equation}\label{a3} \Tr(\bGamma_{01})/2 = \frac{\theta^2}{8} \left[ 1 + \frac{1}{4}a^2(1+2\rme^{-a^2} - 2\rme^{-a^2/2})\epsilon^2 \right] . \end{equation}\end{numparts} In (\ref{a2}-$c$), we used the approximation (\ref{u-series}) and, for the sake of simplicity, set $\alpha=0$. It is worth noting that, as shown in Fig.\ \ref{delta-1D} (left), the propagation directions of the aberrated and superimposed beams, $u_1$ and $u_0+u_1$, deviate from that of the unperturbed beam $u_0$ by $\theta_1=-a\rme^{-a^2/4}\epsilon/k$ and $\theta_{01}=-a\rme^{-a^2/4}\epsilon/(2k)$, respectively. The misalignment occurring when $\Lambda/w_0 \approx 3$ mirrors the beam's perception of a wavefront tilt. The fractional delta values of (\ref{a1}-$c$), \begin{equation}\label{delta} \delta_i = \frac{\Tr(\Gamma_i)/2 - \Delta\lambda/\lambda} {\Delta\lambda/\lambda} , \end{equation} relative to the fractional difference (\ref{e1D}) evaluated with $\alpha=0$, \begin{equation}\label{average} \Delta\lambda/\lambda = \frac{\theta^2}{8} \left( 1 + \frac{a^2\rme^{-a^2} \epsilon^2}{2} \right) , \end{equation} are shown in Fig.\ \ref{delta-1D} (right). In the case of a phase-grating pitch equal to or shorter than the beam diameter, the increased angular spread of the aberrated beam does not affect the fringe period. We do not have an explanation for this. \section{Numerical analysis} The analytical treatment of section \ref{analytical} suggests that the actual fringe period might be different from the estimate based on the second central moment of the angular spectrum. Since the oversimplified analysis can hardly quantify this difference and the associated uncertainty, we resorted to a Monte Carlo estimate. In the simulation, the two interfering beams, \begin{equation}\label{MCu} u_i(x,y) = \big[ 1+A_i(x,y)/2 \big] g(x,y)\rme^{\rmi \varphi_i(x,y)} , \end{equation} were independently generated $10^3$ times. In (\ref{MCu}), $A_i(x,y)$ and $\varphi_i(x,y)$ are the intensity and phase noises and \begin{equation} g(x,y)=\rme^{-(x^2+y^2)/w_0^2} \end{equation} is the Gaussian beam feeding the interferometer, where $w_0=\sqrt{2}$ mm. In the Monte Carlo simulation, we considered collinear beams, and $A_i(x,y)$ and $\varphi_i(x,y)$ were collections of Gaussian, independent, and zero-mean random variables indexed by the observation-plane coordinates. As shown in Figs. \ref{wfe-2D} and \ref{wfe}, they have $\sigma_\varphi=10$ nm and $\sigma_A=0.025$ standard deviations and were filtered so as to have the same correlation length of about 0.5 mm observed experimentally \cite{Balsamo:2003,Sasso:2016}. We did not consider the wavefront curvature and imperfect recombination of the interfering beams, which might be modelled by amplitude and phase perturbations \cite{Cavagnero:2006}. \begin{figure}\centering \includegraphics[width=6.2cm]{Fig-wfe-2D.pdf} \includegraphics[width=6.2cm]{Fig-profile-error-2D.pdf} \caption{Left: simulated wavefront error; the colour scale spans $\pm 30$ nm. Right: residuals from a Gaussian of the simulated intensity profile; the colour scale spans $\pm 2\%$ of the maximum beam intensity.
The standard deviations of the wavefront errors and intensity profile are $\sigma_\varphi=10$ nm and $\sigma_A=0.025$, respectively.} \label{wfe-2D} \end{figure} \begin{figure}\centering \includegraphics[width=8.5cm]{Fig-wfe-1D.pdf} \caption{Orange: $y=0$ section of the differential wavefront error shown in Fig.\ \ref{wfe-2D}. The blue line is the same section of the (aberrated) intensity profile.} \label{wfe} \end{figure} \begin{figure}\centering \includegraphics[width=6.0cm]{Fig-power-spectrum-2D.pdf} \includegraphics[width=6.5cm]{Fig-power-spectrum-radial.pdf} \caption{Left: residuals of the angular power spectrum after subtracting the best-fit spectrum of a Gaussian beam; the colours indicate the normalized density. Right: averaged radial plot of the angular power spectrum (orange dots); the blue dots are the angular spectrum of the Gaussian beam feeding the interferometer. The standard deviations of the wavefront error and intensity profile are $\sigma_\varphi=10$ nm and $\sigma_A=0.025$, respectively. The dashed line indicates the instrumental background of the angular-spectrum measurements \cite{Mana:2017a}.} \label{spectrum} \end{figure} The Monte Carlo simulation proceeded by Fourier transforming $u_0(x,y)$ and $u_1(x,y)$ and by calculating their interference and excess phase according to (\ref{Xi}) and (\ref{phase}). Figure \ref{spectrum} shows the angular spectrum of the aberrated beam. The plateau at $10^{-4}$ mrad$^{-2}$, which extends up to about $1$ mrad, originates from the $A(x,y)$ and $\varphi(x,y)$ noises. According to (\ref{error}), the Monte Carlo values of the fractional difference between the fringe period and $\lambda$, \begin{equation}\label{MC} \frac{\Delta\lambda}{\lambda}\bigg|_{\rm MC} = \frac{\Delta\Phi}{2\pi s/\lambda} , \end{equation} where $\Delta\lambda=\lambda_e - \lambda$, are obtained by propagating the fields back and forward by $s/2=\pm 50\lambda$ and by calculating the phase difference \begin{equation} \Delta\Phi = \arg[\Xi(s/2)]-\arg[\Xi(-s/2)] , \end{equation} which is null when the fringe period is equal to the plane-wave wavelength $\lambda$. \begin{table} \caption{ Comparison of analytical, equation (\ref{corre-1}), and numerical, equation (\ref{MC}), calculations of the fractional difference (expressed in nm/m) between the fringe period and the plane-wave wavelength in some one-dimensional cases.}\label{test} \begin{tabular}{@{}lllllll} \br case &$\sigma_{A_0}$ &$\sigma_{A_1}$ &$\sigma_{\varphi_0}$/nm &$\sigma_{\varphi_1}$/nm &Eq.\ (\ref{corre-1}) &numerics \\ \mr $A_1=A_0=0, \varphi_1=\varphi_0=0$ &$-$ &$-$ &$-$ &$-$ &1.792 &1.792 \\ $A_1=A_0 \ne 0, \varphi_1=\varphi_0 \ne 0$ &0.025 &0.025 &$50$ &$50$ &4.762 &4.762 \\ $A_0\ne 0, A_1=0, \varphi_0\ne 0, \varphi_1 =0$ &0.025 &$-$ &$10$ &$-$ &2.163 &2.152 \\ $A_0=0, A_1\ne 0, \varphi_0 =0, \varphi_1\ne 0$ &$-$ &0.025 &$-$ &$10$ &1.719 &1.724 \\ \br \end{tabular} \end{table} To check the numerical calculations, we considered some one-dimensional cases, where analytical expressions of the difference between the fringe period and the plane-wave wavelength are available, and compared the numerical results against the values predicted by (\ref{corre-1}). The results are summarized in table \ref{test}.
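One possible way to generate the correlated noise fields $A_i(x,y)$ and $\varphi_i(x,y)$ introduced above is sketched below in Python (our own illustration; the paper does not specify the filtering recipe, so the Gaussian spectral kernel, grid spacing, and normalization are assumptions). White Gaussian noise is smoothed in Fourier space and rescaled to the target standard deviation.
\begin{verbatim}
import numpy as np

def correlated_field(N, dx, corr_len, sigma, rng):
    # White noise filtered by a Gaussian kernel in frequency space,
    # giving a correlation length of the order of corr_len (mm).
    white = rng.standard_normal((N, N))
    f = np.fft.fftfreq(N, d=dx)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    kernel = np.exp(-(fx**2 + fy**2) * (np.pi*corr_len)**2)
    field = np.fft.ifft2(np.fft.fft2(white) * kernel).real
    return sigma * field / field.std()    # enforce the target sigma

rng = np.random.default_rng(0)
phi = correlated_field(256, 0.05, 0.5, 10e-6, rng)  # 10 nm, in mm
A = correlated_field(256, 0.05, 0.5, 0.025, rng)
\end{verbatim}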
To quantify the effect of two-dimensional wavefront errors, the numerically calculated fractional differences were compared to the approximations \begin{equation}\label{TrG} \frac{\Delta\lambda}{\lambda}\bigg|_i = \frac{1}{2}\Tr(\bGamma_i) , \end{equation} which hold when the interferometer does not aberrate the interfering beams \cite{Bergamin:1999,Mana:2017a} and were used to correct the interferometric measurements in \cite{Bartl:2017,Fujii:2018}. In (\ref{TrG}), $\bGamma_i$ is the second central-moment of the angular spectrum of: 1) the Gaussian beam feeding the interferometer, $|\tilde{g}({\bi{p}})|^2$, 2) the interfering beams leaving the interferometer, $|\tilde{u}_0({\bi{p}})|^2$ and $|\tilde{u}_1({\bi{p}})|^2$, and 3) the interfering-beam superposition, $|\tilde{u}_0({\bi{p}}) + \tilde{u}_1({\bi{p}})|^2$. The fractional delta values of the approximate differences (\ref{TrG}) relative to the numerical one (\ref{MC}), \begin{equation}\label{chi} \delta_i = \frac{\Tr(\Gamma_i)/2 - \left(\Delta\lambda/\lambda \right)_{\rm MC}}{\left(\Delta\lambda/\lambda\right)_{\rm MC}} , \end{equation} are shown in Fig.\ \ref{histo}. The difference estimated from the angular spectrum of the beam entering the interferometer is equal to the mean of the actual (numerically calculated) values, which are scattered by about 12\%. On the contrary, the differences estimated from the angular spectra of the beams leaving the interferometer, superposed or not, are significantly larger than the true value. It is worth noting that the period values separately calculated from the angular spectra of each of the two beams leaving the interferometer are statistically identical. This agrees with the observation that the Monte Carlo simulation does not distinguish between the interfering beams. \begin{figure}\centering \includegraphics[width=6.2cm]{Fig-histo-0.pdf} \includegraphics[width=6.2cm]{Fig-histo-3.pdf} \includegraphics[width=6.2cm]{Fig-histo-1.pdf} \includegraphics[width=6.2cm]{Fig-histo-2.pdf} \caption{Distributions of the delta values of the approximate wavelength differences (\ref{TrG}) -- obtained from the angular spectra of: the Gaussian beam feeding the interferometer, $|\tilde{g}({\bi{p}})|^2$ (case 1, top left); the interfering-beam superposition, $|\tilde{u}_0({\bi{p}}) + \tilde{u}_1({\bi{p}})|^2$, (case 3, top right); the beams leaving the interferometer, $|\tilde{u}_0({\bi{p}})|^2$ and $|\tilde{u}_1({\bi{p}})|^2$, (case 2, bottom left and right) -- relative to the numerical difference (\ref{MC}). The standard deviations of the wavefront errors and intensity profile are $\sigma_\varphi=10$ nm and $\sigma_A=0.025$. The blue lines are the normal distributions best fitting the histograms.} \label{histo} \end{figure} \section{Conclusions} We observed that the laser beams leaving the combined x-ray and optical interferometer used to measure the lattice parameter of silicon display wavelength and phase imprints having a spatial bandwidth of a few mm$^{-1}$ and local wavefront errors and wavelength variations as large as $\pm 20$ nm and $\pm 10^{-8}\lambda$ \cite{Sasso:2016}. These aberrations are likely due to the interferometer optics. Besides, the observed imprints correspond to a root-mean-square deviation from flatness of each of the optics' surfaces of less than 3 nm for scale lengths from 0.1 mm to 2 mm. Since our measurements, which were corrected on the basis of the angular spectra of the laser beam, aimed at a $10^{-9}$ fractional accuracy, questions arise about the impact of these errors.
The Monte Carlo simulation of the interferometer operation indicates that the corrections depend on whether the angular spectra are measured before or after the interferometer. The correction is faithfully evaluated when the spectrum is measured before the interferometer. Unfortunately, not being aware of the problem, we measured the angular spectra after the interferometer \cite{Massa:2011,Massa:2015,Mana:2017a}. However, we note that the excess correction is due to the angular-spectrum plateau, which increases the second central-moment of the incoming beam. Since, as shown in Fig.\ \ref{spectrum} (right), this plateau is indistinguishable from the instrumental background of the spectrum measurement, it was subtracted from the data and excluded from consideration. Therefore, the plateau and, consequently, the wavefront errors did not bias the corrections made. For the typical aberrations observed in our set-up, the correction uncertainty is 12\%, which is within the 15\% cautiously associated with the corrections \cite{Mana:2017a}. Notwithstanding this reassuring conclusion, our work evidenced unexpected critical issues, which deserve further investigation and on-line determination of the needed correction, e.g., by reconstructing its value from the (measurable) dependence on the detector area. These results are also of value in other experiments, such as those determining the Planck constant and the local acceleration due to gravity, where precision length-measurements by optical interferometry play a critical role. \section*{References} \providecommand{\newblock}{}
\section{Outline of our Contributions} In this section, we give a concise overview of the techniques involved in our algorithms. The details of the proofs are deferred to the following sections. \subsection{Intervals} \label{sec:intervals_main} We now give an overview of our dynamic algorithms for intervals achieving the bounds of Theorem~\ref{thm:intervals_result}. \paragraph{Notation.} In what follows, $S$ denotes the current set of intervals, $n=|S|$ is the number of intervals, and $\alpha (S)$ denotes the size of a maximum independent set of $S$. We will show that our algorithms maintain a dynamic independent set $I$ such that $|I| \geq (1-\epsilon) \alpha(S)$ in time $O_{\varepsilon}(\log n)$, for $0 < \epsilon <1$. Note that while stating the results in the Introduction section, we used $a>1$ to denote the approximation ratio of an algorithm, meaning that $\opt / \alg \leq a$. Showing that $|I| \geq (1-\epsilon) \alpha(S)$ is equivalent to showing a $(1+\epsilon')$-approximation for $\epsilon' = \frac{1}{1-\epsilon} -1$, and Theorem~\ref{thm:intervals_result} follows. \paragraph{Intuition.} We begin with some intuition and high-level ideas. Let us first mention, as observed by Henzinger et al.~\cite{Henz20}, that trying to maintain maximum independent sets exactly is hopeless, even in the case of intervals. Indeed, there are instances where $\Omega(n)$ changes are required, as illustrated in Figure~\ref{fig:maximum_global}. \begin{figure}[ht] \centering \includegraphics[scale=1]{maximum_global.pdf} \caption{Example where a single insertion causes $\Omega(n)$ changes to the maximum independent set. If interval $x$ is not in the set, then the black intervals define a maximum independent set. Once $x$ gets inserted, then $x$ together with the blue intervals form the new maximum independent set.} \label{fig:maximum_global} \end{figure} Since we only aim at maintaining an approximate solution, we can focus on maintaining a $k$-maximal independent set. An independent set is $k$-maximal if, for any $t \leq k$, there is no set of $t$ intervals that can be replaced by a set of $t+1$ intervals. Maintaining a $k$-maximal independent set implies that all changes will involve $O(k)$ intervals. \begin{definition} \label{def:k-maximal} A $k$-maximal independent set $I \subseteq S$ for some integer $k\geq 0$ is a subcollection of disjoint intervals of $S$ such that for every integer $0 \leq t \leq k$, there is no pair $A\in \binom{I}{t}$ and $B\in \binom{S\setminus I}{t+1}$ such that $(I\setminus A)\cup B$ is an independent set of $S$. \end{definition} Note that for $k=0$, this corresponds to the usual notion of inclusionwise maximality. The following lemma states that local optimality provides an approximation guarantee. It is a special case of a much more general result of Chan and Har-Peled~\cite{chan2012approximation}~(Theorem 3.9). \begin{lemma} \label{lem:chan} There exists a constant $c$ such that for any $k$-maximal independent set $I\subseteq S$, $|I|\geq (1-\frac ck)\cdot \alpha (S)$. \end{lemma} Thus, we set as our goal the dynamic maintenance of a $k$-maximal independent set. It turns out, however, that even this is not easy, and there might be cases where $\Omega(n/k)$ changes of $\Theta(k)$ intervals (therefore $\Omega(n)$ overall changes) are needed to maintain a $k$-maximal independent set. This is illustrated in Figure~\ref{fig:maximal_global}.
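To make Definition~\ref{def:k-maximal} concrete, the following brute-force Python sketch (our own illustration; it runs in exponential time and is meant for small examples only, not as the dynamic algorithm developed in this paper) tests whether an independent set $I$ is $k$-maximal.
\begin{verbatim}
from itertools import combinations

def disjoint(u, v):
    # Closed intervals (l, r) are independent iff they do not overlap.
    return u[1] < v[0] or v[1] < u[0]

def is_independent(ivs):
    return all(disjoint(u, v) for u, v in combinations(ivs, 2))

def is_k_maximal(S, I, k):
    # Checks k-maximality: no t-to-(t+1) exchange exists for 0 <= t <= k.
    rest = [iv for iv in S if iv not in I]
    for t in range(k + 1):
        for A in combinations(I, t):
            kept = [iv for iv in I if iv not in A]
            for B in combinations(rest, t + 1):
                if is_independent(kept + list(B)):
                    return False  # the pair (A, B) is an improving exchange
    return True
\end{verbatim}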
\begin{figure}[ht] \centering \includegraphics[scale=1]{maximal_global.pdf} \caption{Example where a $k$-maximal independent set changes completely after a single insertion, for $k=2$. Before the insertion of $x$, the black intervals define a $k=2$-maximal independent set. Once $x$ gets inserted, a 2-to-3 exchange is possible: the set $L^A_1$ of two intervals can be replaced by the set $L^B_1$ of three intervals. Once this exchange is made, other 2-to-3 exchanges are possible: the set $L^A_2$ can be replaced by $L^B_2$. Moreover, the set $R^A_1$ can be replaced by $R^B_1$, which in turn enables the replacement of $R^A_2$ by $R^B_2$. The same changes percolate to the left and right for arbitrarily long instances. Observe that in all exchanges, one green interval is strictly contained in a black interval.} \label{fig:maximal_global} \end{figure} \paragraph{Our Approach.} To overcome those pathological instances, we observe that they occur because, in a $k$-maximal independent set $I$, there might be intervals $y \in S \setminus I$ which are strictly contained in an interval $a \in I$. It turns out that if we eliminate this case, we can indeed maintain a $k$-maximal independent set in logarithmic update time. Thus our goal is to maintain a $k$-maximal independent set $I$ such that no interval of $S \setminus I$ is strictly contained in an interval of $I$. We will call such independent sets \textit{$k$-valid}, as stated in the following definition. \begin{definition} \label{def:valid} An independent set of intervals $I \subseteq S$ is called \textit{$k$-valid} if it satisfies the following two properties: \begin{enumerate} \item \textit{No-containment:} No interval of $S \setminus I$ is completely contained in an interval of $I$. \item \textit{$k$-maximality:} The independent set $I$ is $k$-maximal, according to Definition~\ref{def:k-maximal}. \end{enumerate} \end{definition} Our main technical contribution is maintaining a $k$-valid independent set subject to insertions and deletions in time $O(k^2 \log n)$ (in fact, for insertions our time is even better, $O(k \log n)$). Since by definition all $k$-valid independent sets are $k$-maximal, this combined with Lemma~\ref{lem:chan} implies the result. More precisely, for $\epsilon = c/k$, we get $|I| \geq (1 - \epsilon) \cdot \alpha(S)$ with update time $O( \frac{\log n}{\epsilon})$ for insertions and $O( \frac{\log n}{\epsilon^2})$ for deletions. \paragraph{Our Algorithm.} We now give the basic ideas behind our algorithm. Let $I$ be the current independent set we maintain. Suppose that there exists a pair $(A,B)$ of sizes $t$ and $t+1$, for $t \leq k$, such that $A \subseteq I$ and $B \cap I = \emptyset$ and $\ensuremath{(I \setminus A) \cup B}$ is an independent set. Such a pair is a certificate that $I$ is not a $k$-maximal independent set. We call such a pair an \textit{alternating path}, since (as we show in Section~\ref{sec:intervals_details}, Lemma~\ref{lem:mes}) it induces an alternating path in the intersection graph of $I \cup B$. Our main algorithm is essentially based on searching for alternating paths of size at most $k$. This can be done in time $O(k \log n)$ using our data structures (Section~\ref{sec:intervals_alg}). \paragraph{Insertions.} Suppose a new interval $x$ gets inserted. Our insertion algorithm is the following: \medskip \noindent \textbf{Case 1:} $x$ is strictly contained in $a \in I$ (Figure~\ref{fig:insert_subset}). Then \begin{enumerate} \item Replace $a$ by $x$.
\item Check on the left for an alternating path; if found, do the corresponding exchange. Do the same for the right. \end{enumerate} \noindent \textbf{Case 2:} $x$ is not contained. Then check if there exists an alternating path $(A,B)$ involving $x$. If so, do this exchange. \begin{figure}[t] \centering \includegraphics[scale=1]{insert_subset.pdf} \caption{Case 1 of our insertion algorithm. Inserted interval $x$ is a subset of interval $a \in I$. After replacing $a$ by $x$, at most two alternating paths might be found, one to the left of $x$, namely $(L_A,L_B)$, and one to the right, $(R_A,R_B)$.} \label{fig:insert_subset} \end{figure} The proof of correctness (that is, showing that after this single exchange of the algorithm, we get a $k$-valid independent set) requires a more careful and strict characterization of the alternating paths that we choose. The details are deferred to Section~\ref{sec:intervals_details}. \paragraph{Deletions.} We now describe the deletion algorithm. Suppose interval $x \in I$ gets deleted. We check for alternating paths to the left and to the right of $x$. Let $L$ be the alternating path found on the left and $R$ the one found on the right (if no such path is found, set $L$ or $R$ to $\emptyset$). We then check if they can be merged, that is, if the two corresponding exchanges can be performed simultaneously (see Figure~\ref{fig:del_alternating_both}). \begin{figure}[ht] \centering \includegraphics[scale=1]{del_alternating_both.pdf} \caption{After deletion of interval $x$, alternating paths $L = (L_A,L_B)$ and $R = (R_A,R_B)$ are formed to the left and right of $x$ respectively. If they can be merged, we do both exchanges.} \label{fig:del_alternating_both} \end{figure} \begin{enumerate} \item Both $L$ and $R$ are non-empty and they can be merged (Figure~\ref{fig:del_alternating_both}). We perform both exchanges. \item Both $L$ and $R$ are non-empty but cannot be merged. In this case we perform only one of the two exchanges (details deferred to the following sections). \item Only one of $L$ and $R$ is non-empty: do this exchange. \item Both $L$ and $R$ are empty.
In this case, we check whether there exists an alternating path involving an interval $y$ containing $x$. If yes, then we do the exchange. Otherwise we do nothing. \end{enumerate} Again, proving correctness requires some effort. The important operation is to search for alternating paths starting from a point $x$, which can be done in time $O(k \log n)$. From this, the whole deletion algorithm can be implemented to run in time $O(k^2 \log n)$ in the worst case. \subsection{Squares} \label{sec:outline_squares} Our presentation of how to maintain an approximate maximum dynamic independent set of squares is split into two sections. In Section~\ref{s:quadtree} we show how to do this statically, which is not new, but allows a clean presentation of our main novel ideas. In Section~\ref{sec:squares_dynamic}, we show how to make this dynamic, mostly using standard but cumbersome data-structuring ideas. We define a randomly scaled and shifted infinite quadtree. Associate each square with the smallest enclosing node of the quadtree. Call squares that intersect the center of their quadtree node \emph{centered} and discard all noncentered squares, see Figure~\ref{f:centered}. Nodes of the quadtree associated with squares are called the \textit{marked nodes} of the infinite quadtree, and we call the quadtree $\jmark{Q}$ the union of all the marked nodes and their ancestors. Note that multiple squares may be associated with one quadtree node. \paragraph{High-level overview.} We will show that given a $c$-approximate solution for intervals, we can provide an $O(c)$-approximation for squares. To do so, we proceed in four stages. We first focus on the static case and then discuss the modifications needed to support insertions/deletions. \begin{enumerate} \item \label{item:centered_lose} We show that by losing a factor of $16$ in expectation, we can restrict our attention to centered squares (Lemma~\ref{l:bds}), thus we can indeed discard all non-centered squares. \item \label{item:path_lose} Then we focus on the quadtree $\jmark{Q}$. We partition $\jmark{Q}$ into leaves, internal nodes, and monochild paths, which will be stored in a compressed format. We show that given a linearly approximate solution for monochild paths of $\jmark{Q}$, we can combine these solutions with a square from each leaf to obtain an $O(1)$-approximate solution for $\jmark{Q}$ (Lemma~\ref{l:combine}). Roughly, if on each monochild path our solution has size $\geq (1/d) \cdot \opt - \gamma $ (for some parameter $\gamma$), we get a $(2+d \cdot (\gamma+1))$-approximate solution for $\jmark{Q}$. Thus, it suffices to solve the problem for squares stored in monochild paths. To obtain intuition behind this, observe in Figure~\ref{f:thequad}(c) that each path has a pink region which corresponds to the region of the top quadtree node of the path minus the region of the quadtree node which is the child of the bottom node of the path. We call an independent set of squares of a path that stays entirely within this protected region a \emph{protected} independent subset of the squares of the path. All protected regions of paths and leaves (orange) are disjoint, and thus their independent sets may be combined without risk of overlap. This is what we do: we prove that an $O(1)$-approximate maximum independent set can be obtained with a single square associated with each leaf node and a linearly approximate maximum protected independent set of each path. No squares associated with internal nodes of the quadtree form part of our independent set.
\item \label{item:monochild_lose4} To obtain an approximate independent set in monochild paths, we partition each monochild path into four monotone subpaths, and show (Fact~\ref{f:max4}) that, by losing a factor of 4, it suffices to use only the monotone subpath whose independent set has maximum size. Let us see this a bit more closely. Figure~\ref{f:patha} illustrates such a path of length 30. Each node on the path has by definition only one child. The quadrant of a node is the quadrant where that single child lies. We partition the marked nodes of each path into four groups based on the child's quadrant; we call these monotone subpaths, and each group is colored differently in the figure. We observe that the centers of the nodes on each monotone subpath are monotone. We proceed separately on each and use the one with the largest independent set, losing a factor of four. \item \label{item:monotone_lose2} We show that the independent set problem for centered squares in monotone subpaths reduces to the maximum independent set of intervals by losing roughly a factor of 2. More precisely, given a $c$-approximate solution for intervals, we can get a solution for monotone subpaths of size $\geq \frac{1}{2c} \opt - 1$ (Lemma~\ref{l:indsetsize}). This is achieved as follows. As illustrated in Figure~\ref{f:pathb}, we associate each square on a monotone subpath with an interval spanning from the depth of its node to the depth of the deepest node on the subpath whose center it intersects. For each subpath, we take the squares associated with the nodes of the subpath, and compute an independent set (red and orange intervals in the figure) with respect to the intervals associated with each square. We observe that while the squares in the previous step have pairwise independent intervals, the squares themselves may nevertheless intersect, and may intersect the gray region, which would violate the protected requirement. However, only adjacent squares can intersect, and thus by taking every other square from the independent set with respect to the intervals, this new set of squares is independent with respect to the squares. By beginning this removal process with the deepest interval, the gray region in the figure, which is not part of the protected region, is also avoided. Observe that the red set of squares in the figure is an independent set and avoids the gray region. \end{enumerate} \paragraph{Putting everything together.} Combining all those parts, we get that due to (\ref{item:monotone_lose2}), a $c$-approximate solution for intervals gives a solution for squares on monotone subpaths that is at least half the interval solution minus one. A factor of 4 is lost in the conversion from monotone subpaths to paths due to (\ref{item:monochild_lose4}), thus for monochild paths our solution has size $\frac{1}{8c} \opt - 1$. Consequently, due to (\ref{item:path_lose}), we get $d = 8c$ and $\gamma=1$, and this gives a $(2+16 \cdot c)$-approximation for centered squares. Finally, due to (\ref{item:centered_lose}), a $(2+16c)$-approximation for centered squares implies a $(2+16c)\cdot16=(256c+32)$-approximation for squares. \begin{figure}[t] \begin{minipage}[c]{0.4\textwidth} \includegraphics[width=\textwidth]{square-figure-1.pdf} \end{minipage}\hspace{0.1\textwidth} \begin{minipage}[c]{0.4\textwidth} \caption{Two quadtree nodes are illustrated, labelled at their center point.
The dark blue squares are centered and have $\jmark{\eta}_1$ as their node, the red squares are centered and have $\jmark{\eta}_2$ as their node, and the light blue squares are not centered.} \label{f:centered} \end{minipage} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=.7]{square-figure-2.pdf} \end{center} \caption{The quadtree of the illustrated squares drawn normally (a) and as a tree (b). In (b) each node is categorized as a leaf, an internal node, or part of a monochild path. In (c) the protected regions of each path and leaf are illustrated, and are pairwise disjoint.} \label{f:thequad} \end{figure} \begin{figure} \begin{minipage}[c]{0.4\textwidth} \includegraphics[width=\textwidth]{square-figure-3.pdf} \end{minipage} \hspace{0.15\textwidth} \begin{minipage}[c]{0.4\textwidth} \caption{A single monochild path in a quadtree is illustrated. The figure is not to scale; for example, if drawn to scale, $\jmark{\eta}_1$ would be in the center and the rest of the figure would be in the lower-right corner. The nodes on the path are labelled by depth, and fall into four groups based on which quadrant their child lies in. Crucially, each of these four groups is a monotone path.} \label{f:patha} \end{minipage} \vspace{3pc} \begin{minipage}[c]{0.45\textwidth} \includegraphics[width=\textwidth]{square-figure-4.pdf} \end{minipage}\hspace{0.1\textwidth} \begin{minipage}[c]{0.4\textwidth} \caption{Several squares are illustrated which are associated with the nodes of type $\jmark{\textsc{IV}}$. Each is associated with an interval which spans the depths of the nodes of type $\jmark{\textsc{IV}}$ whose centers it contains. These intervals are drawn vertically on the right. Observe that the union of the orange and red intervals is an independent set of intervals, but the orange and red squares intersect. However, by taking every other interval of this union one obtains the red intervals, which correspond to the red squares and which are disjoint. } \label{f:pathb} \end{minipage} \end{figure} \paragraph{Going Dynamic.} In order to make this basic framework dynamic we need a few additional ingredients, which are the subject of Section~\ref{sec:squares_dynamic}: \begin{itemize} \item Use a link-cut structure \cite{DBLP:journals/jcss/SleatorT83} on top of the quadtree, as it is not balanced; this is needed for searching where to add a new node and for various bulk pointer updates. \item Use our dynamic interval structure within each path. \item Support changes to the shape of the quadtree; these can cause the paths to split and merge, and thus may cause the splitting and merging of the underlying dynamic interval structures, which is why we needed to support these operations (see extensions of intervals, Section~\ref{sec:intervals_extend}). \item For the purposes of efficiency, all squares are stored in a four-dimensional labelled range query structure. This will allow efficient, $O(\log^5 n)$, computation of the local changes needed by the dynamic interval structure. \end{itemize} These differences worsen the approximation ratio in some places. \begin{itemize} \item In step~\ref{item:monochild_lose4} of the description above we said that, for the static structure, we divide the monochild path into four monotone subpaths and, by losing a factor of 4, pick among them the one whose maximum independent set has the largest size.
However, in the dynamic case, this path might change very frequently; as the monotone-subpath maximum independent set is unstable, we do not switch away from the independent set of one subpath until it falls below half of the maximum. This causes the running time bound to be amortized instead of worst-case and increases the bound by a factor of 2. That is, instead of losing a factor of 4 by focusing on monotone paths, we lose 8. \item In step~\ref{item:monotone_lose2} for the static case, we lose a factor of 2 due to picking every other square from the independent intervals. Dynamically we need more flexibility: we will ensure that there are between one and three squares between consecutive chosen ones, and we show how a red-black tree can simply serve this purpose. Thus, given a $c$-approximate dynamic independent set of intervals structure (supporting splits and merges), we get a solution for monotone paths of size at least $\frac{1}{4c} \opt - 3$. \end{itemize} Putting everything together in a similar way as in the static case, we get for monochild paths a solution of size at least $\frac{1}{32c} \opt - 3$, i.e., having $d=32c$ and $\gamma=3$. Therefore, due to step \ref{item:path_lose}, we get a $(2+ d (\gamma+1))$-approximation. By using $c=2$ as the approximation factor for dynamic intervals (an easy upper bound on $1+\epsilon$), we get that our method maintains an approximate set of independent squares that is expected to be within a 4128-factor of the maximum independent set, and supports insertion and deletion in $O(\log^5 n)$ amortized time. While 4128 seems large, it is simply the result of a steady stream of steps, each of which incurs a loss of a factor of usually 2 or 4. We note that we have chosen clarity of presentation over optimizing the constant of approximation; had we made the opposite choice, factors of two could be reduced to $1+\epsilon$. However, this is not the case everywhere, and the constant-factor losses having to do with using centered squares and not using any squares associated with internal nodes are inherent in our approach. There is also nothing in our structure that would prevent implementation. It has many layers of abstraction, but each is simple, and probably the hardest thing to code would be the link-cut trees \cite{DBLP:journals/jcss/SleatorT83}, if one could not find an implementation of this Swiss army knife of operations on unbalanced trees (see \cite{DBLP:journals/jea/TarjanW09} for a discussion of the implementation issues in link-cut trees and related structures). \section{Introduction} We consider the maximum independent set problem on dynamic collections of geometric objects. We wish to maintain, at any given time, an approximately maximum subset of pairwise nonintersecting objects, under the two natural update operations of insertion and deletion of an object. Before providing an outline of our results and the methods that we used, we briefly summarize the background and state of the art related to the independent set problem and dynamic algorithms on geometric inputs. In the maximum independent set (MIS) problem, we are given a graph $G = (V,E)$ and we aim to produce a subset $I \subseteq V$ of maximum cardinality, such that no two vertices in $I$ are adjacent. This is one of the most well-studied algorithmic problems and it is among Karp's 21 classic \textsf{NP}-complete problems \cite{karp1972reducibility}.
Moreover, it is well known to be hard to approximate: no polynomial-time algorithm can achieve an approximation factor $n^{1-\epsilon}$, for any constant $\epsilon>0$, unless $\P = \textsf{NP}$~\cite{Zuck07,Hastad1999}. \paragraph{Geometric Independent Set.} Despite those strong hardness results, for several restricted cases of the MIS problem better results can be obtained. We focus on such cases with geometric structure, called \textit{geometric independent sets}. Here, we are given a set $S$ of geometric objects, and the graph $G$ is their intersection graph, where each vertex corresponds to an object, and two vertices form an edge if and only if the corresponding objects intersect. A fundamental and well-studied problem is the 1-dimensional case where all objects are intervals. This is also known as the \textit{interval scheduling} problem and has several applications in scheduling, resource allocation, etc. This is one of the few cases of the MIS problem which can be solved in polynomial time; it is a standard textbook result (see e.g.~\cite{KleinTardos}) that the greedy algorithm which sweeps the line from left to right and at each step picks the interval with the leftmost right endpoint always produces the optimal solution in time $O(n \log n)$. Independent sets of geometric objects in the plane such as axis-aligned squares or rectangles have been extensively studied due to their various applications in e.g., VLSI design~\cite{HM85}, map labeling~\cite{AKS97} and data mining~\cite{KMP98,BDMR01}. However, even the case of independent set of unit squares is \textsf{NP}-complete~\cite{fowler1981optimal}. On the positive side, several geometric cases admit a polynomial-time approximation scheme (PTAS). One of the first results was due to Hochbaum and Maass, who gave a PTAS for unit $d$-cubes in $R^d$~\cite{HM85} (therefore also for unit squares in 2-d). Later, PTASs were also developed for arbitrary squares and, more generally, hypercubes and fat objects~\cite{chan2003polynomial, erlebach2005polynomial}. More recently, Chan and Har-Peled~\cite{chan2012approximation} showed that for all pseudodisks (which include squares) a PTAS can be achieved using local search. Despite this remarkable progress, even seemingly simple cases, such as axis-parallel rectangles in the plane, are notoriously hard and no PTAS is known. For rectangles, the best known approximation is $O(\log \log n)$ due to the breakthrough result of Chalermsook and Chuzhoy~\cite{Parinya09}. Recently, several QPTASs were designed~\cite{aw-asmwir-13,ce-amir-16}, but still no polynomial-time $o(\log \log n)$-approximation is known. \paragraph{Dynamic Independent Set.} In the dynamic version of the Independent Set problem, nodes of $V$ are inserted and deleted over time. The goal is to achieve (almost) the same approximation ratio as in the offline (static) case while keeping the update time, i.e., the running time required to compute the new solution after an insertion/deletion, as small as possible. Dynamic algorithms have been a very active area of research and several fundamental problems, such as Set-Cover, have been studied in this setting (we discuss some of those results in Section~\ref{sec:related}). \paragraph{Previous Work.} Very recently, Henzinger et al.~\cite{Henz20} studied geometric independent set for intervals, hypercubes and hyperrectangles.
Independent sets of geometric objects in the plane, such as axis-aligned squares or rectangles, have been extensively studied due to their various applications in, e.g., VLSI design~\cite{HM85}, map labeling~\cite{AKS97} and data mining~\cite{KMP98,BDMR01}. However, even the case of independent set of unit squares is \textsf{NP}-complete~\cite{fowler1981optimal}. On the positive side, several geometric cases admit a polynomial time approximation scheme (PTAS). One of the first results was due to Hochbaum and Maass, who gave a PTAS for unit $d$-cubes in $\mathbb{R}^d$~\cite{HM85} (and therefore also for unit squares in the plane). Later, PTASs were also developed for arbitrary squares and, more generally, hypercubes and fat objects~\cite{chan2003polynomial, erlebach2005polynomial}. More recently, Chan and Har-Peled~\cite{chan2012approximation} showed that for all pseudodisks (which include squares) a PTAS can be achieved using local search. Despite this remarkable progress, even seemingly simple cases, such as axis-parallel rectangles in the plane, are notoriously hard, and no PTAS is known. For rectangles, the best known approximation is $O(\log \log n)$ due to the breakthrough result of Chalermsook and Chuzhoy~\cite{Parinya09}. Recently, several QPTASs were designed~\cite{aw-asmwir-13,ce-amir-16}, but still no polynomial-time $o(\log \log n)$-approximation is known. \paragraph{Dynamic Independent Set.} In the dynamic version of the independent set problem, the nodes of $V$ are inserted and deleted over time. The goal is to achieve (almost) the same approximation ratio as in the offline (static) case while keeping the update time, i.e., the running time required to compute the new solution after an insertion/deletion, as small as possible. Dynamic algorithms have been a very active area of research and several fundamental problems, such as Set Cover, have been studied in this setting (we discuss some of those results in Section~\ref{sec:related}). \paragraph{Previous Work.} Very recently, Henzinger et al.~\cite{Henz20} studied geometric independent set for intervals, hypercubes and hyperrectangles. They obtained several results, many of which extend to the substantially more general weighted independent set problem, where objects have weights (we discuss this briefly in Section~\ref{sec:related}). Here we discuss only the results relevant to our context. Based on a lower bound of Marx~\cite{Marx07} for the offline problem, Henzinger et al.~\cite{Henz20} showed that any dynamic $(1+\epsilon)$-approximation for squares requires $\Omega(n^{1/\epsilon})$ update time, ruling out the possibility of sublinear dynamic approximation schemes. As for upper bounds, Henzinger et al.~\cite{Henz20} considered the setting where all objects are located in $[0,N]^d$ and have minimum edge length 1, and therefore also size ratio bounded by $N$. They presented dynamic algorithms with update time $\mathrm{polylog}(n,N)$. We note that in general $N$ might be quite large, e.g., $\exp(n)$, or even unbounded, so those bounds are not sublinear in $n$ in the general case. In another related work, Gavruskin et al.~\cite{gavruskin2015dynamic} considered the interval case under the assumption that no interval is fully contained in another interval, and obtained an optimal solution with $O(\log n)$ amortized update time. Quite surprisingly, no other results are known. In particular, even the problem of efficiently maintaining an independent set of intervals, without any extra assumptions on the input, remained open. \subsection{Our Results} In this work, we present the first dynamic algorithms with polylogarithmic update time for geometric versions of the independent set problem. First, we consider the 1-dimensional case of dynamic independent set of intervals. \begin{theorem} \label{thm:intervals_result} There exist algorithms for maintaining a $(1+\epsilon)$-approximate independent set of intervals under insertions and deletions of intervals, in $O_{\epsilon}(\log n)$ worst-case time per update, where $\epsilon>0$ is any positive constant and $n$ is the total number of intervals. \end{theorem} This is the first algorithm yielding such a guarantee in the comparison model, in which the only operations allowed on the input are comparisons between endpoints of intervals. To achieve this result we use a novel application of local search to dynamic algorithms, based on the paradigm of Chan and Har-Peled~\cite{chan2012approximation} for the static version of the problem. At a very high level (and ignoring some details), our algorithm can be phrased as follows: given our current independent set $I$ and the newly inserted or deleted interval $x$, if there exist $t \leq k$ intervals of $I$ that can be replaced by $t+1$ intervals, perform this exchange (see the brute-force sketch below). We show that using such a simple strategy, the resulting independent set always has size at least a $(1 - \frac{c}{k})$ fraction of the maximum. The main ideas and the description of our algorithms are in Section~\ref{sec:intervals_main}. The detailed analysis and proof of running time are in Section~\ref{sec:intervals_details}.
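To make the invariant concrete, the following brute-force Python sketch implements the exchange rule just described in the static setting. It is exponential in $k$ and serves only to pin down the invariant (it is emphatically not our $O_{\epsilon}(\log n)$ algorithm), and all names are ours.

\begin{verbatim}
from itertools import combinations

def is_independent(intervals):
    # Closed intervals (l, r) are independent iff, after sorting,
    # consecutive intervals are disjoint.
    ivs = sorted(intervals)
    return all(ivs[i][1] < ivs[i + 1][0] for i in range(len(ivs) - 1))

def find_exchange(S, I, k):
    # Look for t <= k intervals of I replaceable by t + 1 from S \ I.
    outside = [x for x in S if x not in set(I)]
    for t in range(k + 1):
        for A in combinations(I, t):
            kept = [x for x in I if x not in set(A)]
            for B in combinations(outside, t + 1):
                if is_independent(kept + list(B)):
                    return list(A), list(B)
    return None

def k_local_search(S, k):
    # Repeat exchanges until none exists; the result is then k-maximal,
    # hence of size at least a (1 - c/k) fraction of the maximum.
    I = []
    while (ex := find_exchange(S, I, k)) is not None:
        A, B = ex
        I = [x for x in I if x not in set(A)] + B
    return I
\end{verbatim}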
Next, we consider the problem of maintaining dynamically an independent set of squares. A natural question to ask is whether we can again apply local search. The problem is not with local search itself: a $(1+\epsilon)$-approximate MIS can be obtained if no local exchanges of a certain size are possible (due to the result of Chan and Har-Peled~\cite{chan2012approximation}); the problem is algorithmically implementing these local exchanges, which comes down to the issue that the two-dimensional analogue of the relevant exchange neighborhood has linear size rather than constant size. Note that the lower bound of Henzinger et al.~\cite{Henz20} also implies that local search on squares cannot be implemented in polylogarithmic time. To circumvent this, we adopt a completely different technique, reducing the case of squares to intervals while losing an $O(1)$ factor in the approximation. We conjecture that one could implement local search to yield a $(1+\epsilon)$-approximation by using some kind of sophisticated range search to find local exchanges, at a cost of $O(n^c)$ for some $c>1$, which is another tradeoff that conforms to the lower bound of~\cite{Henz20}. \begin{theorem} \label{thm:squares_result} There exist algorithms for maintaining an expected $O(1)$-approximate independent set of axis-aligned squares in the plane under insertions and deletions of squares, in $O(\log^5 n)$ amortized time per update, where $n$ is the total number of squares. \end{theorem} To obtain this result, we reduce the case of squares to intervals using a random quadtree, decomposing it carefully into relevant paths. First, we show that in the static case, given a $c$-approximate solution for intervals, we can obtain an $O(c)$-approximate solution for squares (Section~\ref{s:quadtree}). To make this dynamic, more work is needed: we need a dynamic interval data structure supporting extra operations, such as split and merge. For that reason, we extend our structure from Theorem~\ref{thm:intervals_result} to support those additional operations while maintaining the same approximation ratio (Section~\ref{sec:intervals_extend}). Then, we dynamize our random quadtree approach to interact with the extended interval structure and obtain an $O(1)$-approximation for dynamic squares (Section~\ref{sec:squares_dynamic}). We then show in Section~\ref{s:hyper} that our approach naturally extends to axis-aligned hypercubes in $d$ dimensions, providing an $O(4^d)$-approximate independent set in $O(2^d \log ^{2d+1} n)$ time. \subsection{Other Related Work} \label{sec:related} \paragraph{Dynamic Algorithms.} Dynamic graph algorithms have been a continuous subject of investigation for many decades; see~\cite{EGI99}. Over the last few years there has been tremendous progress, and various breakthrough results have been achieved for several fundamental problems. Among others, some of the recently studied problems are set cover~\cite{AAGPS19,BHN19,GKKP17}, geometric set cover and hitting set~\cite{DBLP:conf/compgeom/AgarwalCSXX20}, vertex cover~\cite{DBLP:conf/soda/BhattacharyaK19}, planarity testing~\cite{DBLP:conf/stoc/HolmR20} and graph coloring~\cite{DBLP:journals/algorithmica/BarbaCKLRRV19,DBLP:conf/soda/BhattacharyaCHN18,DBLP:conf/stacs/Henzinger020}. A problem related to MIS is that of dynamically maintaining a \textit{maximal independent set}; this problem has numerous applications, especially in distributed and parallel computing. Since maximality is a local property, the problem is ``easier'' than MIS and allows for better approximation results even in general graphs. Very recently, several remarkable results have been obtained for the dynamic version of the problem~\cite{DBLP:conf/stoc/AssadiOSS18, DBLP:conf/soda/AssadiOSS19, DBLP:conf/focs/BehnezhadDHSS19, DBLP:conf/focs/ChechikZ19}. \paragraph{Weighted Independent Set.} The maximum independent set problem we study here is a special case of the more general weighted independent set (WIS) problem, where each node has a weight and the goal is to produce an independent set of maximum weight. Clearly, MIS is the special case of WIS where all nodes have the same weight. The WIS problem has also been extensively studied. Usually, stronger techniques than for MIS are needed. For instance, the greedy algorithm for intervals does not apply, and obtaining the optimal solution in $O(n \log n)$ time requires a standard use of dynamic programming~\cite{KleinTardos} (see the sketch below). Similarly, for squares, the local-search technique of Chan and Har-Peled~\cite{chan2012approximation} does not provide a PTAS. This is the main reason that our approach here does not extend to the dynamic WIS problem. We note that dynamic WIS was studied in the recent work of Henzinger et al.~\cite{Henz20}. The authors provided dynamic algorithms for intervals, hypercubes and hyperrectangles lying in $[0,N]^d$ with minimum edge length 1, with update time $\mathrm{polylog}(n,N,W)$, where $W$ is the maximum weight of an object.
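For comparison with the unweighted case, here is a short Python sketch of that standard dynamic program, under the assumption that intervals are given as closed triples $(l,r,w)$; all names are ours.

\begin{verbatim}
import bisect

def weighted_interval_mis(intervals):
    # Classic O(n log n) DP: process intervals by right endpoint; dp[i]
    # is the best weight achievable using the first i intervals.
    ivs = sorted(intervals, key=lambda iv: iv[1])
    rights = [iv[1] for iv in ivs]
    dp = [0] * (len(ivs) + 1)
    for i, (l, r, w) in enumerate(ivs, start=1):
        p = bisect.bisect_left(rights, l)   # intervals ending before l
        dp[i] = max(dp[i - 1], dp[p] + w)   # skip i, or take i plus dp[p]
    return dp[-1]

print(weighted_interval_mis([(0, 3, 4), (1, 2, 1), (2, 5, 2), (4, 6, 3)]))
# -> 7, achieved by (0, 3, 4) and (4, 6, 3)
\end{verbatim}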
\section{Dynamic Independent Set of Intervals} \label{sec:intervals_details} As is clear from the discussion in Section~\ref{sec:intervals_main}, in order to maintain a $(1-\epsilon)$-approximation of the maximum independent set, it suffices to maintain an independent set which (i) is $k$-maximal and (ii) satisfies the property that no interval of $S$ is strictly contained in an interval of the independent set. This latter property is referred to as the no-containment property. In this section we describe how to maintain such an independent set of intervals dynamically, subject to insertions and deletions. In Section~\ref{sec:intervals_def}, we start by introducing all definitions and background necessary to formally define and analyse our algorithm. The formal description of our algorithm and data structures, as well as the proof of the running time, is in Section~\ref{sec:intervals_alg}. The proof of correctness of our insertion/deletion algorithms, which is the most technical and complicated part, is in Section~\ref{sec:intervals_cor}. In Section~\ref{sec:intervals_extend} we present some extensions of our results (maintaining a $k$-valid independent set under splits and merges) which will be used in Section~\ref{sec:squares_dynamic} to obtain our dynamic structure for squares. \subsection{Definitions and Background} \label{sec:intervals_def} We now formally define alternating paths (described in Section~\ref{sec:intervals_main}) and introduce some necessary background on them. In particular we will focus on specific alternating paths, called proper, defined below. \subsection*{Alternating Paths} Let $A\subseteq I$ and $B\subseteq S\setminus I$ be sets of sizes $t$ and $t+1$, for some $0\leq t\leq k$, such that $(I\setminus A)\cup B$ is an independent set of $S$. Such a pair is a certificate that the independent set $I$ is not $k$-maximal. We observe that any inclusionwise minimal such pair induces an alternating path: a sequence of intervals belonging alternately to $B$ and $A$, in which consecutive intervals intersect. \begin{lemma}[Alternating paths] \label{lem:mes} Let $(A,B)\in \binom{I}{t}\times \binom{S\setminus I}{t+1}$ be a pair such that $(I\setminus A)\cup B$ is an independent set, and there is no $A'\subset A$ and $B'\subset B$ such that $(A',B')$ also satisfies the property. Then the set $A\cup B$ induces an alternating path of length $2t+1$ in the intersection graph of $I\cup B$. \end{lemma} \begin{proof} In what follows, we identify the intervals of $I$ and $B$ with the corresponding vertices in their intersection graph. If $(A,B)$ is inclusionwise minimal, then its vertices must induce a connected component.
The intersection graph of $I\cup B$ is an interval graph with clique number 2, hence its connected components are caterpillars. First note that in the caterpillar induced by $(A,B)$, every vertex $a\in A$ has degree at most three. Indeed, if $a$ had degree four or more, then it would fully contain two intervals of $B$, yielding a smaller pair $(A',B')$ with $t'=1$. The intervals of $A$ are linearly ordered. Let us consider them in this order. If the first interval $a\in A$ is adjacent to three vertices in $B$, say $b_1,b_2,b_3$, then the interval $b_2$ must be fully contained in $a$, and we can end the alternating path with $a$ and $b_2$ and remove all their successors. This yields a smaller $(A',B')$, a contradiction. If $a$ has degree one, then it must be the case that an interval of $A$ further on the right has degree three, since otherwise $|B|\leq |A|$. Pick the first interval $a'$ of $A$ of degree three, adjacent to $b_1,b_2,b_3$. The interval $b_2$ must be fully contained in $a'$. Hence a smaller $(A',B')$ can be constructed by removing all predecessors of $b_2$ and $a'$, a contradiction. Therefore, $a$ must have degree two, with neighbors $b_1,b_2$. The vertices $b_1$ and $a$ are the first two in the alternating path, and we can iterate the reasoning with the next interval of $A$, if any. \end{proof} We will refer to such pairs $(A,B)\in \binom{I}{t}\times \binom{S\setminus I}{t+1}$ as {\em inducing an alternating path with respect to $I$}. Note that we allow $t=0$, in which case the pair has the form $(\emptyset, \{x\})$ and the alternating path has length 1. \begin{observation} \label{obs:alt_a_no_containment} If $(A,B)$ is an alternating path with respect to an independent set $I$, then no interval of $A$ is strictly contained in an interval of $B$. \end{observation} Note that the converse is not true in general: the leftmost and/or rightmost interval of $B$ might be strictly contained in an interval of $A$. We focus on a particular class of alternating paths which we call \textit{smallest}. \begin{definition} \label{def:smallest-path} An alternating path $(A,B)$ with respect to an independent set $I$ is called smallest if there is no alternating path $(A',B')$ such that $A' \subset A$. \end{definition} We make the following key observation. \begin{lemma} \label{lem:pick} Consider a smallest alternating path induced by $(A,B)$, $A=\{a_1,\ldots ,a_t\}$, $B=\{b_1,\ldots ,b_{t+1}\}$ for some $0\leq t\leq k$, where the intervals in each set are indexed according to their order on the real line. Then every interval $b_i$ for $2\leq i\leq t+1$ can be assumed to be an interval with leftmost right endpoint among all intervals with left endpoint in the range $[r(b_{i-1}), r(a_{i-1})]$. Similarly, $b_1$ can be assumed to be an interval with leftmost right endpoint among all intervals with left endpoint in the range $[r(a'),r(a_1)[$, where $a'$ is the interval on the left of $a_1$ in $I$ if it exists, or in $]-\infty, r(a_1)[$ otherwise. \end{lemma} \begin{proof} If the interval $b_i$ does not have the leftmost right endpoint, then we can replace it with one that has. For $i\leq t$, this new interval must intersect $a_i$ as well, for otherwise the pair $(A,B)$ is not smallest.
\end{proof} Note that the symmetric statement is also true: if $(A,B)$ is a smallest alternating path, then there exists a smallest alternating path $(A,B')$ which satisfies the rightmost left endpoint property, i.e., $b'_i$ is an interval with the rightmost left endpoint among all intervals with right endpoint in the range $[\ell(a_i),\ell(b'_{i+1})]$; the proof is identical to the proof of Lemma~\ref{lem:pick} above, with the terms left and right flipped. Using this observation, we can proceed to the following definition. \begin{definition} \label{def:proper} An alternating path $(A,B)$ is called \textit{proper} if it is a smallest alternating path and it satisfies the leftmost right endpoint property. \end{definition} Clearly, by the discussion above, given a proper alternating path $(A,B)$, there exists also a smallest alternating path $(A,B')$ which satisfies the rightmost left endpoint property. \begin{definition} \label{def:sibling} Let $(A,B)$ be a proper alternating path. The smallest alternating path $(A,B')$ that satisfies the rightmost left endpoint property is called the \textit{sibling} of $(A,B)$. \end{definition} All swaps made by our insertion and deletion algorithms will involve solely proper alternating paths or their siblings. \subsection{Algorithm and Data Structures} \label{sec:intervals_alg} Here we give the details of the algorithm presented informally in Section~\ref{sec:intervals_main}. \subsubsection*{The Interval Query Data Structure} We will use a data structure which supports standard operations like membership queries, insertions and deletions in time $O(\log n)$. Moreover, we need to answer queries of the following type: given $a,b$, find an interval having the leftmost right endpoint among all intervals whose left endpoint lies in the range $[a,b[$. We refer to these queries as {\em leftmost right endpoint} queries. The symmetric queries (among all intervals whose right endpoint lies in $[a,b[$, find the one having the rightmost left endpoint) are referred to as {\em rightmost left endpoint} queries. \begin{lemma}[Interval Query Data Structure (IQDS)] \label{lem:ors} There exists a data structure storing a set of intervals $S$ and supporting: \begin{itemize} \item {\bf Insertions and deletions}: Insert an interval $x$ into $S$ / delete an interval $x$ from $S$. \item {\bf Leftmost right endpoint queries.} $\jop{Report-Leftmost}(a,b)$: Among intervals $y$ with $\ell(y) \in (a,b)$, report the one with the leftmost right endpoint (or return NULL). \item {\bf Rightmost left endpoint queries.} $\jop{Report-Rightmost}(a,b)$: Among all intervals $y$ with $r(y) \in (a,b)$, report the one with the rightmost left endpoint. \item {\bf Endpoint queries.} Given an interval $x$, return its left and right endpoints. \item {\bf Merge}: Given two such data structures containing sets of intervals $S_1$ and $S_2$, and a number $t$ such that $\ell(s)\leq t$ for all $s\in S_1$ and $\ell(s)> t$ for all $s\in S_2$, construct a new data structure containing $S_1\cup S_2$, \item {\bf Split}: Given a number $t$, split the data structure into two, one containing $S_1 :=\{s: \ell (s)\leq t\}$ and one containing $S\setminus S_1$, \end{itemize} in $O(\log n)$ time per operation in the worst case. \end{lemma} \begin{proof} We resort to augmented red-black trees, as described in Cormen et al.~\cite{CLRS09}. The keys are the left endpoints of the intervals, and we maintain additional information at each node: the leftmost right endpoint of an interval in the subtree rooted at the node. This additional information is maintained at a constant overhead cost. Leftmost right endpoint queries are answered by examining the $O(\log n)$ roots of the subtrees corresponding to the searched range. The structure can be duplicated to handle the symmetric rightmost left endpoint queries. \end{proof}
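To fix the interface, the following deliberately naive Python reference implements the IQDS semantics with linear scans; it matches the behavior of the structure of Lemma~\ref{lem:ors} but not its $O(\log n)$ bounds, which require the augmented red-black tree described in the proof. All names are ours.

\begin{verbatim}
class NaiveIQDS:
    # Reference implementation of the IQDS interface; every query is a
    # linear scan, standing in for one O(log n) augmented-tree operation.
    def __init__(self, intervals=()):
        self.items = set(intervals)        # intervals as (l, r) tuples

    def insert(self, iv):
        self.items.add(iv)

    def delete(self, iv):
        self.items.discard(iv)

    def report_leftmost(self, a, b):
        # Among intervals y with l(y) in (a, b): leftmost right endpoint.
        cand = [iv for iv in self.items if a < iv[0] < b]
        return min(cand, key=lambda iv: iv[1]) if cand else None

    def report_rightmost(self, a, b):
        # Among intervals y with r(y) in (a, b): rightmost left endpoint.
        cand = [iv for iv in self.items if a < iv[1] < b]
        return max(cand, key=lambda iv: iv[0]) if cand else None

    def split(self, t):
        # Return a new structure holding {s : l(s) > t}; keep the rest.
        right = NaiveIQDS(iv for iv in self.items if iv[0] > t)
        self.items -= right.items
        return right

    def merge(self, other):
        # Assumes every left endpoint here is <= those in `other`.
        self.items |= other.items
\end{verbatim}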
\paragraph{Remarks.} Before presenting our algorithms using the data structure of Lemma~\ref{lem:ors}, we make some remarks: \begin{enumerate} \item In fact our data structure can be implemented in a comparison-based model where the only operations allowed are comparisons between endpoints of intervals. In particular, leftmost right endpoint queries (and symmetrically rightmost left endpoint queries) are only used with $a$ and $b$ being endpoints of intervals of $S$. Here, we present them as taking arbitrary coordinates as input just for simplicity of exposition. \item For the purposes of this section, it is sufficient to use augmented red-black trees to support those operations in time $O(\log n)$. However, later, when we use the interval data structure as a tool to support independent sets of squares, this will not be enough. The details will be described in Section~\ref{sec:squares_dynamic}. \item The split and merge operations are only needed to make our extension to squares work (see Section~\ref{sec:intervals_extend}). The reader interested only in intervals may ignore them. \end{enumerate} We will maintain two such data structures, one storing the set of all intervals $S$ and one storing the current independent set $I$. \paragraph{Alternating paths in time $O(k \log n)$.} We show that using such a data structure, we can find alternating paths of size at most $k$ in time $O(k \log n)$. In particular, we are going to use the following procedure: \begin{itemize} \item $\jop{Find-Alternating-Path-Right}(I, k, (a,b))$: Find an alternating path, with respect to the independent set $I$, of size at most $k$, whose leftmost interval has its left endpoint in $(a,b)$. This alternating path will satisfy the leftmost right endpoint property. \end{itemize} The other, completely symmetric, procedure $\jop{Find-Alternating-Path-Left}$ does the same thing, only with left and right (and left endpoints and right endpoints) reversed. It therefore suffices to describe only $\jop{Find-Alternating-Path-Right}$. We let $A\gets\emptyset$ and $B\gets \emptyset$, and proceed as follows. Let $\NEXT$ be the leftmost interval of $I$ to the right of $b$ (if it exists). If $\NEXT = \NULL$, let $\ell(\NEXT) = r(\NEXT) = \infty$. \begin{enumerate} \item \label{step:min} Among all intervals in $S\setminus I$ with left endpoint in $[a, b[$, if any, let $y$ be the one such that $r(y)$ is minimum. \item If such a $y$ exists, then: \begin{enumerate} \item If $r(y) < \ell(\NEXT)$ then $B \leftarrow B \cup \lrbrace{y}$. Return $(A,B)$. \item If \begin{itemize} \item $\ell(\NEXT) \leq r(y)<r(\NEXT)$, \item {\em and} $|A|<k$, \end{itemize} then $A \leftarrow A \cup \lrbrace{\NEXT}$, $B \leftarrow B \cup \lrbrace{y}$, and iterate from step~\ref{step:min} with $(a,b)$ replaced by $(r(y),r(\NEXT))$ and $\NEXT$ replaced by the first interval of $I$ that follows on its right. \item Otherwise return fail. \end{enumerate} \item Otherwise return fail. \end{enumerate} By construction, this procedure performs at most $k$ iterations, where in each iteration the only operations required are leftmost right endpoint queries and finding the next interval in the independent set $I$, both of which can be done in time $O(\log n)$. Therefore the overall running time is always $O(k \log n)$.
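The following Python sketch mirrors this procedure on plain lists of $(l,r)$ tuples; each linear scan below stands in for a single $O(\log n)$ IQDS query, and all names are ours.

\begin{verbatim}
INF = float("inf")

def find_alternating_path_right(S, I, k, a, b):
    # Returns (A, B) on success and None on "fail".
    in_I = set(I)
    A, B = [], []
    while True:
        # NEXT: leftmost interval of I to the right of b (or a sentinel).
        nxt = min((iv for iv in I if iv[0] > b), default=(INF, INF))
        # Step 1: among S \ I with left endpoint in [a, b), minimize r(y).
        cand = [y for y in S if y not in in_I and a <= y[0] < b]
        if not cand:
            return None
        y = min(cand, key=lambda iv: iv[1])
        if y[1] < nxt[0]:                 # the path ends before NEXT
            B.append(y)
            return A, B
        if nxt[0] <= y[1] < nxt[1] and len(A) < k:
            A.append(nxt)                 # extend the path past NEXT
            B.append(y)
            a, b = y[1], nxt[1]           # new search window
        else:
            return None
\end{verbatim}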
\paragraph{Some auxiliary operations.} Sometimes we might need to transform an alternating path $(A,B)$ satisfying the leftmost right endpoint property into another path $(A,B')$ (possibly with $B = B'$) which satisfies the rightmost left endpoint property, or vice versa. We show that our data structure supports this in time $O(|A| \cdot \log n)$. Let $I$ be a $k$-maximal independent set and let $A = \lrbrace{a_1,\dotsc,a_t}$ and $B = \lrbrace{b_1,\dotsc,b_{t+1}}$, such that $(A,B)$ is a smallest alternating path satisfying the rightmost left endpoint property. We will show how to transform it into a path $(A,B')$ satisfying the leftmost right endpoint property. The main idea is to start from $b_1$ and, for all $i=1,\dotsc,t+1$, replace $b_i$ with another interval $b'_i$ which intersects both $a_{i-1}$ and $a_i$ and has the leftmost right endpoint. Let $x \in I$ be the interval of $I$ to the left of $a_1$ (if any). We start by finding the interval with the leftmost right endpoint among all intervals with left endpoint in $(r(x), \ell(a_1))$ (set $r(x)$ to $-\infty$ if $x$ does not exist). This interval will be $b'_1$. Note that it might be the case that $b'_1 = b_1$. We continue in the same way for all $i \leq t+1$: once interval $b'_{i-1}$ is fixed, we answer the query $\jop{Report-Leftmost}(r(b'_{i-1}),r(a_{i-1}))$ and the outcome will be the new interval $b'_i$. Overall we answer $t+1$ leftmost right endpoint queries, thus the total running time is $O(t \cdot \log n)$. Note that in the algorithm above, all leftmost right endpoint queries are guaranteed to return an interval and will never return NULL; this is because the interval $b_i$ itself satisfies the requirements, so there exists at least one interval to report. Moreover, there is the possibility that in step $i$ the interval $b'_i$ ends before interval $a_i$ starts. We will make sure that our algorithms use this procedure only in instances where this does not happen (proven in Lemmata~\ref{lem:higher_exchange} and~\ref{lem:j-to-j}).
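A small Python sketch of this transformation, in the same list-based style as before (linear scans in place of IQDS queries, names ours); following the discussion above, the code assumes that the queries never come back empty.

\begin{verbatim}
NEG_INF = float("-inf")

def recompute_B_leftmost(S, I, A, B):
    # A = [a_1..a_t], B = [b_1..b_{t+1}] as (l, r) tuples, left to right.
    # Rebuild B as B' satisfying the leftmost right endpoint property.
    in_I = set(I)

    def report_leftmost(a, b):
        cand = [y for y in S if y not in in_I and a < y[0] < b]
        return min(cand, key=lambda iv: iv[1])   # nonempty by assumption

    # b'_1: window between the interval of I left of a_1 (if any) and a_1.
    left_of_a1 = [iv for iv in I if iv[1] < A[0][0]]
    lo = max(iv[1] for iv in left_of_a1) if left_of_a1 else NEG_INF
    Bp = [report_leftmost(lo, A[0][0])]
    for i in range(1, len(B)):     # b'_{i+1} from r(b'_i) and r(a_i)
        Bp.append(report_leftmost(Bp[-1][1], A[i - 1][1]))
    return Bp
\end{verbatim}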
\subsubsection*{Description of Algorithms} We now describe our algorithms in pseudocode using our data structure and the operations it supports. Whenever we use $L$ or $R$ to denote alternating paths, we implicitly assume that those are defined by sets $(L_A,L_B)$, such that $L_A \subseteq I$ and $L_B \subseteq S \setminus I$ (resp.\ $(R_A,R_B)$). Whenever we say that we perform the exchange defined by alternating path $L$ (resp.\ $R$), we mean that we set $I \leftarrow (I \setminus L_A) \cup L_B$ (resp.\ $I \leftarrow (I \setminus R_A) \cup R_B$). \paragraph{Insertions.} Interval $x$ gets inserted. Let $a_{\ell}$ be the interval of $I$ containing $\ell(x)$ (NULL if no such interval exists) and $a_r$ the one containing $r(x)$. \begin{enumerate} \item If both $a_\ell$ and $a_r$ are NULL, then: \begin{enumerate} \item \label{case:insert-one} If no interval of $I$ lies between $\ell(x)$ and $r(x)$ (that is, $x$ can be added), then $I \leftarrow I \cup \lrbrace{x}$. \end{enumerate} \item If both $a_\ell$ and $a_r$ are defined, then: \begin{enumerate} \item \label{case:strict_contain} If $a_{\ell} = a_r$, hence if $x$ is strictly contained in the interval $a := a_\ell=a_r \in I$, then: \begin{itemize} \item Replace $a$ by $x$: $I \leftarrow (I \setminus \lrbrace{a})\cup \lrbrace{x}$. \item $R \leftarrow \jop{Find-Alternating-Path-Right}(I,k,(r(x),r(a)))$. If $R \neq \emptyset$, do this exchange. \item $L \leftarrow \jop{Find-Alternating-Path-Left}(I,k,(\ell(a),\ell(x)))$. If $L \neq \emptyset$, do this exchange. \end{itemize} \item \label{case:insert_alt_both} If $a_{\ell}$, $a_r$ are two consecutive intervals of $I$, then try to find an alternating path containing $x$: \begin{itemize} \item $R \leftarrow \jop{Find-Alternating-Path-Right}(I,k-2,(r(x),r(a_r)))$. \item If $R \neq \emptyset$, then set $L \leftarrow \jop{Find-Alternating-Path-Left}(I,k-2-|R_A|,(\ell(x),\ell(a_\ell)))$. \\ If both $L= (L_A,L_B)$ and $R = (R_A,R_B)$ are nonempty, then: \begin{itemize} \item Set $A \leftarrow L_A \cup \lrbrace{a_{\ell},a_r} \cup R_A$ and $B \leftarrow L_B \cup \lrbrace{x} \cup R_B$. $(A,B)$ is an alternating path of size at most $k$. Do this exchange. \end{itemize} \end{itemize} \end{enumerate} \item \label{case:insert_alt_one} If only $a_r$ exists (the case where only $a_{\ell}$ exists is symmetric), then try to find an alternating path of size at most $k-1$ to the right: \begin{itemize} \item $R \leftarrow \jop{Find-Alternating-Path-Right}(I,k-1,(r(x),r(a_r)))$. \item If $R= (R_A,R_B)$ is nonempty, then set $A \leftarrow R_A \cup \lrbrace{a_r}$, $B \leftarrow R_B \cup \lrbrace{x}$. \\ $(A,B)$ is an alternating path. Do this exchange. \end{itemize} \end{enumerate} \paragraph{Deletions.} Interval $x$ gets deleted. If $x \notin I$, which can be checked in time $O(\log n)$, then we do nothing. So we focus on the case $x \in I$. Let $a_{\ell}$ be the interval of $I$ to the left of $x$ (if it exists) and $a_r$ the interval of $I$ to the right of $x$ (if it exists). We first delete $x$ and then search for alternating paths to the right and left of $x$: \medskip \noindent $R \leftarrow \jop{Find-Alternating-Path-Right}(I,k,(r(a_{\ell}),\ell(a_r)))$. \medskip \noindent $L \leftarrow \jop{Find-Alternating-Path-Left}(I,k,(r(a_{\ell}),\ell(a_r)))$. \medskip \noindent $L$ has the rightmost left endpoint property. If it is nonempty, we replace $L$ by its sibling which satisfies the leftmost right endpoint property, as explained above. \begin{enumerate} \item If $L$ and $R$ are nonempty, then check whether they can be merged, that is, whether the right endpoint of the rightmost interval of $L_B$, say $r(L)$, is to the left of the left endpoint of the leftmost interval of $R_B$, say $\ell(R)$, i.e., $r(L) < \ell(R)$. \begin{enumerate} \item \label{case:del-both-sides} If yes, then do the exchanges defined by $R$ and $L$. \item \label{case:del-both-one} Otherwise do the exchange defined by either $L$ or $R$ (arbitrarily). \end{enumerate} \item \label{case:del-one-side} If only one of $L$ and $R$ is nonempty, do this exchange. \item If both $L$ and $R$ are empty, then search for an alternating path including an interval $y$ containing $x$: $y \leftarrow \jop{Report-Leftmost}(r(a_{\ell}),\ell(x))$ ($y$ contains $x$). \begin{enumerate} \item \label{case:del-superset-one} If $r(y) < \ell(a_r)$ ($y$ can be added), then $I \leftarrow I \cup \lrbrace{y}$. \item \label{case:del-superset-path} Otherwise, check for alternating paths including intervals strictly containing $x$ (if any): Let $a_1,\dotsc,a_{k}$ be the $k$ intervals of $I$ to the right of $x$, ordered from left to right (note $a_1=a_r$). If some interval does not exist, set it to $\NULL$. Let also $a_0 = x$. For $i=1$ to $k$, use $\jop{Find-Alternating-Path-Left}(I,k,(r(a_{i-1}),\ell(a_i)))$ to search to the left for an alternating path of length at most $k$. Whenever a path $(A,B)$ is found, do this exchange and stop.
\end{enumerate} \end{enumerate} \paragraph{Running time.} It is easy to see that for insertions all operations used require time $O(k \log n)$, and for deletions $O(k^2 \log n)$; this increase in the deletion time comes solely from case~(\ref{case:del-superset-path}), where we need to search at most $k$ times for alternating paths of size at most $k$, each search requiring $O(k \log n)$ time. \subsection{Correctness} \label{sec:intervals_cor} We now prove the correctness of our algorithms. Recall that by Definition~\ref{def:valid} a $k$-valid independent set of intervals is $k$-maximal and satisfies the no-containment property. We show that our algorithms always maintain a $k$-valid independent set of intervals. \paragraph{Some easy observations.} We begin with some easy, yet useful, observations. \begin{observation} \label{obs:insert_one} Let $I$ be a $k$-valid independent set of $S$. If an interval $x$ gets inserted such that $I \cup \lrbrace{x}$ is an independent set, then $I \cup \lrbrace{x}$ is also $k$-valid. \end{observation} \begin{observation} \label{obs:delete_notinset} If an interval $x \notin I$ gets deleted, then $I$ remains $k$-valid. \end{observation} \begin{observation} \label{obs:del_superset} Let $I$ be a $k$-valid independent set. Assume that for an interval $x \in I$ there exists $y \in S$, such that $y$ contains $x$ and $(I \setminus \lrbrace{x}) \cup \lrbrace{y}$ is also an independent set. Then, $(I \setminus \lrbrace{x}) \cup \lrbrace{y}$ is $k$-maximal. \end{observation} \paragraph{Main Technical Lemmas.} It turns out that the most crucial technical part of our approach is the following two lemmas, which are used in both the insertion and the deletion algorithms. The first lemma has to do with $j$-to-$j+1$ exchanges and the second with $j$-to-$j$ exchanges. \begin{lemma} \label{lem:higher_exchange} Let $I$ be a $k$-valid independent set of intervals. Assume there exists a proper alternating path $(A,B)$, such that $|A| = t$, $|B|=t+1$ for $k < t \leq 2k+1$. Then, $(I \setminus A) \cup B$ is also a $k$-valid independent set. \end{lemma} To state the second lemma we need the following definition. \begin{definition} Let $I$ be a $k$-valid independent set. A set $B = \lrbrace{b_1, \dotsc, b_j} \subseteq S \setminus I$ is called a left/right \textit{substitute} of a set $A = \lrbrace{a_1,\dotsc,a_j} \subseteq I$ if the following holds: \begin{enumerate} \item There is no way to extend $A$ and $B$ to create a $t$-to-$(t+1)$ alternating path, for any $t \leq 2k$. \item Left substitute: If interval $a_j$ were not there, then $(A \setminus \lrbrace{a_j},B)$ would be a proper alternating path. Symmetrically for a right substitute: if $a_1$ were not there, then $(A \setminus \lrbrace{a_1},B)$ would be a proper alternating path. \end{enumerate} \end{definition} The second lemma concerns exchanges with the same number of intervals. \begin{lemma} \label{lem:j-to-j} Let $I$ be a $k$-valid independent set. Let $A \subseteq I$ and let $B$ be a (left or right) substitute of $A$. Then, $(I \setminus A) \cup B$ is a $k$-valid independent set. \end{lemma} The proofs of Lemmata~\ref{lem:higher_exchange} and~\ref{lem:j-to-j} are deferred to the end of this subsection. We first show how they can be combined with the observations above to prove the correctness of our dynamic algorithm. \vspace{0.16cm} \noindent \textbf{Correctness of the Insertion Algorithm.} We need to perform a case analysis depending on the change made by our algorithm after each insertion.
However, in all cases our approach is the same: we show that the overall change is equivalent to (i) either a $j$-to-$j+1$ exchange for $j \leq 2k+1$ or a $j$-to-$j$ substitution before the insertion of $x$, plus (ii) adding $x$ to the independent set. The resulting independent set remains valid after step (i) due to Lemma~\ref{lem:higher_exchange} or~\ref{lem:j-to-j} respectively, and after step (ii) due to Observation~\ref{obs:insert_one}. We now begin the case analysis. First, observe that we only need to consider the case where the algorithm performs exchanges. If no exchanges are made, then it is easy to see that $I$ remains $k$-valid: both $k$-maximality and no-containment can only be violated due to $x$, and if this is the case we fall into one of the cases where the algorithm makes changes. Thus we assume that the algorithm makes some change. In case the new interval $x$ does not intersect any interval of $I$ and gets inserted (case~\ref{case:insert-one} of the algorithm), the new independent set is $k$-valid due to Observation~\ref{obs:insert_one}. In the case where the inserted interval $x$ is strictly contained in an interval $a \in I$, which corresponds to case~\ref{case:strict_contain} in the pseudocode of Section~\ref{sec:intervals_alg} (case 1 in the description of Section~\ref{sec:intervals_main}), three subcases might occur: \begin{enumerate} \item Alternating paths were found in both directions: $L = (L_A,L_B)$ and $R = (R_A,R_B)$ (see Figure~\ref{fig:insert_subset}). Let $A = L_A \cup \lrbrace{a} \cup R_A$ and $B = L_B \cup R_B$. Observe that $(A,B)$ is an alternating path of size $j \leq 2k+1$ in the intersection graph of $S$ before the insertion of $x$. Thus, the overall change is equivalent to (i) doing a $j$-to-$j+1$ exchange in the previous graph, for $j \leq 2k+1$, then (ii) adding $x$. Thus, using Lemma~\ref{lem:higher_exchange} and Observation~\ref{obs:insert_one}, we get that the new independent set is $k$-valid. \item An alternating path was found in only one direction: Assume that it is found only to the left, i.e., $L = (L_A,L_B) \neq \emptyset$ and $R = \emptyset$ (see Figure~\ref{fig:insert_1b}). Note that before $x$ was inserted, $L_B$ was a left substitute of $L_A \cup \lrbrace{a}$. Thus the overall change made by our algorithm is equivalent to (i) performing a substitution of $L_A \cup \lrbrace{a}$ by $L_B$ in the previous graph, then (ii) adding $x$ to $I$. $I$ remains $k$-valid after step (i) due to Lemma~\ref{lem:j-to-j} and after step (ii) due to Observation~\ref{obs:insert_one}. \item No alternating path is found, neither to the left nor to the right: $L = R = \emptyset$. Here it is easy to show that the new independent set is $k$-valid; clearly it satisfies the no-containment property. It remains to show $k$-maximality. Assume for contradiction that there exists an alternating path $(A,B)$ of size at most $k$; this alternating path must involve $x$ (otherwise $I$ was not $k$-maximal, which contradicts the induction hypothesis), and since $x$ is a subset of $a$, it must involve $a$; thus it was an alternating path before the insertion of $x$, a contradiction.
\end{enumerate} \begin{figure}[ht] \centering \includegraphics[scale=1]{insert_1b.pdf} \caption{$x$ is contained in $a$, and an alternating path $(L_A,L_B)$ is found.} \label{fig:insert_1b} \end{figure} It remains to show correctness for the cases where an alternating path involving $x$ is found and an exchange is made, that is, cases \ref{case:insert_alt_both} and \ref{case:insert_alt_one} of the insertion algorithm. The two cases are similar. In case \ref{case:insert_alt_both} (an alternating path extends both to the left and to the right of $x$ -- see Figure~\ref{fig:insertion_alternating_both}), let $(L_A,L_B)$ be the alternating path found to the left and $(R_A,R_B)$ the one found to the right of $x$. Note that before the insertion of $x$, $L_B$ was a left substitute of $L_A \cup \lrbrace{a_\ell}$ and $R_B$ was a right substitute of $R_A \cup \lrbrace{a_r}$. Thus the overall change made by the algorithm, removing $L_A \cup \lrbrace{a_\ell} \cup R_A \cup \lrbrace{a_r}$ from $I$ and adding $L_B \cup \lrbrace{x} \cup R_B$, is equivalent to (i) performing two substitutions before the insertion of $x$ and (ii) adding $x$; thus by Lemma~\ref{lem:j-to-j} and Observation~\ref{obs:insert_one} we get that the new independent set is $k$-valid. In case~\ref{case:insert_alt_one}, the analysis is the same, except that $L$ or $R$ is empty and $a_\ell$ or $a_r$, respectively, is NULL; thus the same arguments hold. \begin{figure}[ht] \centering \includegraphics[scale=1]{insertion_alternating_both.pdf} \caption{After the insertion of $x$, an alternating path $(A,B)$ is formed, where $A = L_A \cup \lrbrace{a_{\ell},a_r} \cup R_A$ and $B = L_B \cup \lrbrace{x} \cup R_B$.} \label{fig:insertion_alternating_both} \end{figure} \paragraph{Correctness of the Deletion Algorithm.} Recall that if the deleted interval $x$ is not in the current independent set $I$, then we do not make any change, and by Observation~\ref{obs:delete_notinset} $I$ remains $k$-valid. So we focus on the case $x \in I$. As with insertion, it is easy to show that if the algorithm does not make any change other than deleting $x$, then $I$ remains $k$-valid. We proceed to a case analysis, assuming the algorithm made some change. \begin{enumerate} \item Alternating paths were found in both directions and they can be merged (case~\ref{case:del-both-sides} of the deletion algorithm). In this case we have two alternating paths $L = (L_A,L_B)$ and $R = (R_A,R_B)$ (see Figure~\ref{fig:del_alternating_both}) with $|L_A|, |R_A| \leq k$. Let $A = L_A \cup \lrbrace{x} \cup R_A$ and $B = L_B \cup R_B$. Observe that, before the deletion of $x$, $(A,B)$ was an alternating path of size $|L_A| + |R_A| +1 \leq 2k+1$. The exchange made by our algorithm (deleting $x$, removing $L_A, R_A$ from $I$ and adding $L_B, R_B$ to $I$) is equivalent to performing the exchange $(A,B)$ before the deletion of $x$; then, when $x$ is deleted, $I$ is not affected (by Observation~\ref{obs:delete_notinset}). By Lemma~\ref{lem:higher_exchange} we get that the new independent set is $k$-valid. \item An exchange is performed only to the left (right) of $x$ (cases \ref{case:del-both-one} and \ref{case:del-one-side} of the deletion algorithm). We show the left case; the right one is symmetric. $L = (L_A,L_B)$ is an alternating path to the left of $x$. Note that before the deletion of $x$, $L_B$ was a left substitute of $L_A \cup \lrbrace{x}$. Thus the performed exchange is equivalent to (i) performing a $j$-to-$j$ substitution before the deletion of $x$, then (ii) deleting $x$ from $S$.
In step (i) we remain $k$-valid due to Lemma~\ref{lem:j-to-j} and in step (ii) due to Observation~\ref{obs:delete_notinset}. \item If an interval $y$ containing $x$ gets added to $I$ after the deletion of $x$ (case \ref{case:del-superset-one} of the deletion algorithm), then clearly $I$ satisfies the no-containment property: this is because the interval $y$ we use is the one with the leftmost right endpoint among intervals containing $x$. Moreover, the new independent set $I' = (I \setminus \lrbrace{x}) \cup \lrbrace{y}$ is $k$-maximal due to Observation~\ref{obs:del_superset}. Overall, $I'$ is a $k$-valid independent set. \item In case we find an alternating path $(A,B)$ including an interval $y$ containing $x$ (case \ref{case:del-superset-path} of the deletion algorithm), let $L_A \subseteq A$ be the intervals of $A$ to the left of $x$ and $R_A \subseteq A$ the ones to the right of $x$. Similarly, let $L_B$ and $R_B$ be the intervals of $B$ to the left/right of $y$ (see Figure~\ref{fig:del_superset_alter}). Note that before the deletion of $x$, $L_B$ is a left substitute of $L_A$ and $R_B$ is a right substitute of $R_A$. Thus the exchange made by the deletion algorithm is equivalent to (i) substituting $L_A$ by $L_B$ and $R_A$ by $R_B$ before the deletion of $x$ and (ii) after the deletion of $x$, replacing it by $y$. After the substitutions of step (i) we remain $k$-valid due to Lemma~\ref{lem:j-to-j}, and for step (ii) we use Observation~\ref{obs:del_superset}. \end{enumerate} \begin{figure}[ht] \centering \includegraphics[scale=1]{del-superset-alter.pdf} \caption{Case \ref{case:del-superset-path} of the deletion algorithm: After the deletion of $x$, a new alternating path $(A,B)$ is formed with $A = L_A \cup R_A$ and $B = L_B \cup \lrbrace{y} \cup R_B$.} \label{fig:del_superset_alter} \end{figure} \paragraph{Missing proofs.} In the remainder of this section, we give the full proofs of Lemmata~\ref{lem:higher_exchange} and~\ref{lem:j-to-j}, which were omitted earlier. \medskip \noindent {\bf Lemma~\ref{lem:higher_exchange}} {\em (restated) Let $I$ be a $k$-valid independent set of intervals. Assume there exists a proper alternating path $(A,B)$, such that $|A| = t$, $|B|=t+1$ for $k < t \leq 2k+1$. Then, $(I \setminus A) \cup B$ is also a $k$-valid independent set.} \begin{proof} We first show that the no-containment property holds for $(I \setminus A) \cup B$; we then proceed to $k$-maximality. Both parts are proved by contradiction. Let $A = \lrbrace{a_1,\dotsc,a_{t}}$ and $B = \lrbrace{b_1,\dotsc,b_{t+1}}$. \medskip \noindent \textit{No containment:} Assume for contradiction that there exists an interval $y \in S \setminus ((I \setminus A) \cup B)$ that is strictly contained in an interval $x \in (I \setminus A) \cup B$. Clearly, $x \in B$, since for all intervals of $I \setminus A$ the no-containment property holds (because $I$ is $k$-valid). Moreover, $y \notin A$: since $(A,B)$ is an alternating path, by Observation~\ref{obs:alt_a_no_containment} no interval of $A$ is strictly contained in an interval of $B$. Thus $y \in S \setminus (I \cup B)$. Overall, we have $y \in S \setminus (I \cup B)$ and $x \in B$ such that $y$ is strictly contained in $x$. Let $i$ be the integer such that $b_i=x$; clearly $1 \leq i \leq t+1$. There are four cases to consider, depending on how $y$ intersects $a_{i-1}$ and $a_i$\footnote{Corner cases: If $i=1$ then $a_{i-1} = a_0$ does not exist; similarly, if $i=t+1$, then $a_i = a_{t+1}$ does not exist.
In case an interval $a_{i-1}$ or $a_i$ does not exist, we simply assume that it exists and does not intersect $y$.}, illustrated in Figure~\ref{fig:no-containment}. \begin{enumerate} \item $y$ intersects neither $a_{i-1}$ nor $a_i$. This contradicts the $k$-maximality of $I$, since $I \cup \lrbrace{y}$ would be an independent set. \item $y$ is strictly contained in $a_{i-1}$ or $a_i$. This contradicts the fact that $I$ is a $k$-valid independent set (no-containment violated). \item $y$ intersects only one interval of $A$ but is not strictly contained in it. Assume it intersects $a_{i-1}$; we construct a contradicting alternating path from left to right (the proof for $a_i$ is symmetric, i.e., constructing a contradicting alternating path from right to left). Then, $b_1, a_1, \dotsc,b_{i-1}, a_{i-1}, y$ is an alternating path of size $i-1 \leq t$, contradicting that $(A,B)$ is a smallest alternating path. Corner case: if $i = t+1$, then this contradiction does not hold: the new alternating path has the same size. But in that case, $r(y) < r(x)$, meaning that $x= b_i$ is not the interval with the leftmost right endpoint that could be added to the alternating path, thus $(A,B)$ is not proper. Contradiction. For the case where $y$ intersects $a_i$, the same corner case appears if $i=1$; in the same way, this contradicts that $(A,B)$ satisfies the leftmost right endpoint property. \item $y$ intersects both $a_{i-1}$ and $a_i$. In that case, $y$ could replace $b_i$ in the alternating path; this contradicts the fact that $(A,B)$ is a proper alternating path: here $r(y) < r(b_i)$, yet $b_i$ was included in the alternating path. \end{enumerate} \begin{figure}[ht] \centering \includegraphics[scale=1]{no-containment.pdf} \caption{Obtaining a contradiction in all cases for the no-containment property.} \label{fig:no-containment} \end{figure} Overall, in all cases we obtained a contradiction, implying that $(I \setminus A) \cup B$ satisfies the no-containment property. \medskip \textit{$k$-maximality:} We now show that $(I \setminus A) \cup B$ is a $k$-maximal independent set. Assume for contradiction that there exists a pair $(C,D)$ of size at most $k$ that induces an alternating path with respect to $(I\setminus A)\cup B$. We will show that this contradicts either that $I$ is $k$-maximal or that $(A,B)$ is a proper alternating path. Let $C=\{c_1,\ldots ,c_{t'}\}$ and $D=\{d_1,\ldots ,d_{t'+1}\}$, with $t'\leq k$. First observe that $C\cap B\not=\emptyset$; this can easily be shown by contradiction: if $C \cap B = \emptyset$, then $(C,D)$ is an alternating path with respect to $I$ of size at most $k$, contradicting that $I$ is $k$-maximal\footnote{Note that here we crucially use that $I$ satisfies the no-containment property. For an arbitrary $k$-maximal set without the no-containment property this is not true, and such alternating paths could exist.}. Since $C \cap B \neq \emptyset$, we have that $C$ is non-empty.
Since $|C| \leq k$ and $|B| >k$, $C$ cannot be a strict superset of $B$. Either $C$ is a contiguous subsequence of $B$, or it extends it in one direction (left or right). As a result, one extreme interval of $C$ (either the leftmost or the rightmost) belongs to $B$. We give the proof for the case where the leftmost interval of $C$, namely $c_1$, belongs to $B$. In case $c_1 \notin B$, then $c_{t'} \in B$ and the proof is essentially the same, by considering the mirror images of the intervals and obtaining the contradiction for the sibling alternating path $(A,B')$ that satisfies the rightmost left endpoint property (see Definition~\ref{def:sibling}). From now on we focus on the case where $c_1 \in B$. There exists some $1 \leq i \leq t+1$ such that $b_i = c_1$. Since $(A,B)$ induces an alternating path, there are (at most) two intervals $a_{i-1}$ and $a_i$ intersecting $b_i = c_1$ (in case $i=1$ there is only one interval, $a_i = a_1$, and in case $i=t+1$ there exists only $a_{i-1} = a_t$). We consider the intersection pattern of $a_{i-1},a_i,d_1$. Note that since $(I \setminus A) \cup B$ satisfies the no-containment property, $d_1$ cannot be strictly contained in $c_1 = b_i$. Note that if $i >1$, then $d_1$ must intersect $a_{i-1}$: they both contain the left endpoint of $c_1 = b_i$. Moreover, it must be that $d_1 \neq a_{i-1}$, since $a_{i-1}$ contains the point $r(b_{i-1})$, but $d_1$ does not. In case $i=1$, $a_{i-1} = a_0$ does not exist; for convenience in the proof we assume that $a_{i-1} = a_0$ exists and does not intersect $d_1$. We need to consider two separate cases, depending on the intersection between $d_1$ and $a_i$. \medskip \noindent \textbf{Case 1: $d_1$ does not intersect $a_i$.} We distinguish between two subcases. \begin{enumerate} \item \textit{$d_1$ does not intersect $a_{i-1}$}. Recall this can happen only if $i=1$. In that case, $d_1$ does not intersect any interval of $I$, thus $I \cup \lrbrace{d_1}$ is an independent set, and therefore $I$ is not $k$-maximal, a contradiction. \item \textit{$d_1$ intersects $a_{i-1}$} (see Figure~\ref{fig:higher_exchange_contr}): In that case, $b_1,a_1, \dotsc, b_{i-1},a_{i-1},d_1$ is an alternating path. Equivalently, for $A' = \lrbrace{a_1,\dotsc,a_{i-1}}$ and $B' = \lrbrace{b_1,\dotsc,b_{i-1}, d_1}$, $(A',B')$ is an alternating path. Since $A' \subset A$, we get that the alternating path $(A,B)$ is not proper. Contradiction. \end{enumerate} \medskip \noindent \textbf{Case 2: $d_1$ intersects $a_i$} (see Figure~\ref{fig:higher_exchange_contr}). Here we do not need any subcases. Note that $r(d_1) < r(c_1) = r(b_i)$. Thus, if $\ell(d_1) > \ell(c_1) = \ell(b_i)$, then $d_1$ is a strict subset of $c_1 =b_i$, which contradicts the no-containment property of $(I \setminus A) \cup B$. So it must be the case that $\ell(d_1) < \ell(c_1) = \ell(b_i)$. But then, in the alternating path $(A,B)$, the interval $b_i$ could have been replaced by $d_1$; this contradicts the assumption that $(A,B)$ is a proper alternating path, since it does not satisfy the leftmost right endpoint property. \medskip \begin{figure}[ht] \centering \includegraphics[scale=1]{higher_exchange_contr.pdf} \caption{Showing the contradiction. Top, case 1(ii): If $d_1$ does not intersect $a_i$, then $b_1,a_1, \dotsc, a_{i-1}, d_1$ is an alternating path.
Bottom, case 2: If $d_1$ intersects $a_i$, then it could replace $b_i$ and give an alternating path satisfying the leftmost right endpoint property.} \label{fig:higher_exchange_contr} \end{figure} We crucially note that the proof holds even if $A \cap D \neq \emptyset$: in all cases the only interval of $D$ used to obtain the contradiction was $d_1$; since $d_1 \notin A$, as explained above, the proof holds even if $d_j \in A$ for some $j > 1$.
\end{proof} We conclude with the proof of Lemma~\ref{lem:j-to-j}. \medskip \noindent {\bf Lemma~\ref{lem:j-to-j}} {\em (restated) Let $I$ be a $k$-valid independent set. Let $A \subseteq I$ and let $B$ be a (left or right) substitute of $A$. Then, $(I \setminus A) \cup B$ is a $k$-valid independent set.} \begin{proof} The proof of the no-containment property is the same as in Lemma~\ref{lem:higher_exchange}, by considering four cases and deriving a contradiction in each of them. The only difference is that the corner case with $i=t+1$ in case 3 cannot appear. \textit{$k$-maximality:} Without loss of generality, we only give the proof for the case where $B$ is a left substitute of $A$. Part of the proof carries over from Lemma~\ref{lem:higher_exchange}. Suppose for contradiction that $\ensuremath{(I \setminus A) \cup B}$ is not $k$-maximal and there exists an alternating path $(C,D)$ with $C=\{c_1,\ldots ,c_{t'}\}$ and $D=\{d_1,\ldots ,d_{t'+1}\}$, where $t'\leq k$. We note that, in contrast to Lemma~\ref{lem:higher_exchange}, it is now not obvious that $C \cap B \neq \emptyset$ (see Figure~\ref{fig:C_disj_B}). We first give the proof for the case $C \cap B \neq \emptyset$ and consider the case $C \cap B = \emptyset$ at the end. For the case $C \cap B \neq \emptyset$, we distinguish between two sub-cases, depending on whether $c_1 \in B$ or not. \medskip \noindent \textbf{Case 1: $c_1 \in B$}. Note that in this case, either $C$ is a strict subset of $B$, or $C$ extends $B$ to the right. Let $c_1 = b_i$. The proof is the same as the proof of Lemma~\ref{lem:higher_exchange}, based on the intersections between $d_1$ and $a_{i-1}, a_{i}$, and shows the exact same contradiction in all cases. \noindent \textbf{Case 2: $c_1 \notin B$.} Note that in this case, $C$ is either a strict superset of $B$, or extends $B$ to the left. Observe that $c_1$ lies to the left of the intervals of $B$. Let $b_1=c_i$. Observation: for all $1 \leq t \leq \min \lrbrace{j,t'+1-i}$, interval $d_{i+t}$ intersects $a_{t}$ (they both contain the right endpoint of $b_{t}$). Let $t \leq j-1$ be the smallest index such that $d_{i+t}$ does not intersect $a_{t+1}$; we claim that such a $t$ always exists. Then, $d_1,c_1,\dotsc,d_i,a_1,d_{i+1}, \dotsc, d_{i+t}$ is an alternating path of size at most $k$; equivalently, $C' = \lrbrace{c_1, \dotsc,c_{i-1},a_1, \dotsc, a_{t}}$ and $D' = \lrbrace{d_1,\dotsc,d_{i+t}}$ form an alternating path with respect to $I$, of size $i+t-1 \leq i+t'+1-i-1 = t' \leq k$, contradicting the $k$-maximality of $I$. It remains to show that such a $t$ always exists. To this end, we distinguish between two subcases to conclude the proof: \begin{enumerate} \item Case $A \cap D \neq \emptyset$: Let $j'$ be the smallest index such that $d_{j'} \in A$.
Since for all $t \leq \min \lrbrace{j,t'+1-i}$ each interval $d_{i+t}$ intersects interval $a_t$, we get that $d_{j'}$ intersects $a_{j'-i}$, thus $d_{j'} = a_{j'-i}$. That means that interval $d_{j'-1}$ does not intersect $a_{j'-i}$. Thus for $t = j' - i -1$, we have that interval $d_{i+t}$ does not intersect $a_{t+1}$. \item Case $A \cap D = \emptyset$. In this case we need some further case analysis. \begin{enumerate} \item $t'+1-i \geq j$: Note that in this case, $C$ is a strict superset of $B$. Assume such a $t$ does not exist. Then $((C \setminus B)\cup A ,D)$ is an alternating path of size at most $k$, contradicting that $I$ is $k$-maximal. \item $t'+1-i < j$: Note that in this case, $C$ extends $B$ on its left. Assume such a $t$ does not exist. Then interval $d_{t'} = d_{i+(t'-i)}$ intersects both $a_{t'-i}$ and $a_{t'-i+1}$, therefore $\ell(d_{t'+1}) > r(d_{t'}) = r(d_{i+(t'-i)}) \geq \ell(a_{t'-i+1})$. Also, interval $b_{t'-i+2}$ exists (since $t'-i+2 \leq j$) and has $\ell(b_{t'-i+2}) < r(a_{t'-i+1})$. Also $r(d_{t'+1}) < \ell(b_{t'-i+2})$ (because the alternating path ends at $d_{t'+1}$), therefore $r(d_{t'+1}) < r(a_{t'-i+1})$. Overall we have that $\ell(d_{t'+1}) > \ell(a_{t'-i+1})$ and $r(d_{t'+1}) < r(a_{t'-i+1})$, i.e., $d_{t'+1}$ is strictly contained in $a_{t'-i+1}$, contradicting that $I$ is a $k$-valid independent set. \end{enumerate} \end{enumerate} \begin{figure}[ht] \centering \includegraphics[scale=1]{C_disj_B.pdf} \caption{$B$ is a left substitute of $A$, and it might be that $(C,D)$ is an alternating path. But in that case, $(A \cup C,B \cup D)$ is an alternating path.} \label{fig:C_disj_B} \end{figure} It remains to consider the case $C \cap B = \emptyset$ (Figure~\ref{fig:C_disj_B}) and again obtain a contradiction. First, observe that the only way this can happen is if $C$ consists of consecutive intervals to the right of $A$. But in this case, $(A \cup C, B \cup D)$ is an alternating path of size $j+t' \leq 2k$, which contradicts that $B$ is a substitute of $A$. \end{proof} \subsection{Extensions} \label{sec:intervals_extend} We now extend our data structure to also support, in the same running time $O(k^2 \log n)$, the following operations: \begin{enumerate} \item {\bf Merge:} Given two sets of intervals $S_1$ and $S_2$ such that $\ell(x) \leq t$ for all $x \in S_1$ and $\ell(y) > t$ for all $y \in S_2$, and two $k$-valid independent sets $I_1$ and $I_2$ of $S_1$ and $S_2$ respectively, obtain a $k$-valid independent set $I$ of $S= S_1 \cup S_2$. \item {\bf Split:} Given a set of intervals $S$, a $k$-valid independent set $I$ of $S$, and a value $t$, split $S$ into $S_1$ and $S_2$ such that $\ell(x) \leq t$ for all $x \in S_1$ and $\ell(y) > t$ for all $y \in S_2$, and produce $k$-valid independent sets $I_1, I_2$ of $S_1$ and $S_2$ respectively. \item {\bf Clip($t$):} Assume we store a set $S$ of intervals and a $k$-valid independent set $I$, and let $t$ be a point to the right of the rightmost left endpoint of the intervals of $S$. This operation shrinks every interval $x$ with $r(x) > t$ so that $r(x) = t$. \end{enumerate} Furthermore, we show that some types of changes to the input set $S$ do not affect our solution. \begin{itemize} \item {\bf Extend($y$):} Assume we store a set $S$ of intervals and a $k$-valid independent set $I$. Then, if an interval $y \in S \setminus I$ gets replaced by $y'$ such that $y$ is a strict subset of $y'$, then $I$ remains $k$-valid.
\end{itemize}

The operations on the interval query data structure can be done using red-black trees as explained in Lemma~\ref{lem:ors}. The non-trivial part is to show how to maintain $k$-valid independent sets under splits and merges. For example, when merging, the leftmost interval of $I_2$ might intersect (or even be strictly contained in) the rightmost interval of $I_1$. We show that we can reduce those operations to a constant number of insertions and deletions.

\paragraph{1. Merge:} We now describe our merge algorithm. We start by inserting a fake tiny interval $b$ in $S_1$, with $\ell(b) > t$, such that $b$ is strictly contained in any interval of $S_1$ it intersects and does not intersect the leftmost interval of $S_2$. Our algorithm can be described as follows.
\begin{enumerate}
\item \label{merge:ins_b} Insert $b$ in $S_1$. Update $I_1$ to $I'_1$.
\item \label{merge:merge} $S \leftarrow S_1 \cup S_2$ and $I \leftarrow I'_1 \cup I_2$.
\item \label{merge:del_b} Delete $b$ from $S$. Update $I$ to $I'$.
\end{enumerate}

\paragraph{Running time.} Note that this algorithm runs in time $O(k^2 \log n)$, where $n = |S|$. This is because steps~\ref{merge:ins_b} and~\ref{merge:merge} require $O(k \log n)$ time due to our insertion algorithm and Lemma~\ref{lem:ors} respectively, and step~\ref{merge:del_b} requires time $O(k^2 \log n)$ using our deletion algorithm from Section~\ref{sec:intervals_alg}.

\paragraph{Correctness.} We show that the merge algorithm indeed produces a $k$-valid independent set of $S$. Since our insertion algorithm from Section~\ref{sec:intervals_alg} maintains a $k$-valid independent set, and in particular the no-containment property, we get that after step~\ref{merge:ins_b}, $b \in I'_1$. Therefore, after step~\ref{merge:merge}, $I = I'_1 \cup I_2$ is an independent set of $S$, since $b$ ensures that no overlap exists. We want to show that $I$ is also $k$-valid. Towards proving this, we make one observation.

\begin{observation} \label{obs:bomb} For any interval $x \in S_1 \cup S_2$, neither of the endpoints $\ell(x)$ and $r(x)$ lies inside $b$. \end{observation}

We now proceed to our basic lemma, showing that the new independent set is $k$-valid.

\begin{lemma} \label{lem:merge_valid} The independent set $I = I'_1 \cup I_2$ obtained in step~\ref{merge:merge} is $k$-valid. \end{lemma}

\begin{proof}
$k$-maximality: If there exists an alternating path, it must contain $b$. But due to Observation~\ref{obs:bomb} no endpoint of any interval lies inside $b$, thus there cannot be such an alternating path.

No-containment: No interval of $I'_1$ contains an interval of $S_1$. No interval of $I_2$ contains an interval of $S_2$. By construction, an interval of $I_2$ cannot contain an interval of $S_1$, since the left endpoints are on different sides of $t$. Similarly, an interval of $I'_1 \setminus \lrbrace{b}$ does not contain an interval of $S_2$. Finally, $b$ does not contain any interval of $S_2$ due to Observation~\ref{obs:bomb}.
\end{proof}

Now it is easy to see that the algorithm outputs a $k$-valid independent set: After step~\ref{merge:merge}, $I = I'_1 \cup I_2$ is a $k$-valid independent set of $S_1 \cup S_2 \cup \lrbrace{b}$. Thus after the deletion of $b$, the new independent set $I'$ is a $k$-valid independent set of $S = S_1 \cup S_2$, since our deletion algorithm from Section~\ref{sec:intervals_alg} maintains a $k$-valid independent set.

\paragraph{2. Split:} We now proceed to the split operation. It is almost dual to the merge, and the ideas used are very similar.
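Both operations hinge on the fake tiny interval $b$. The following minimal sketch summarizes them, under the assumption of an IQDS instance $D$ holding a set of intervals together with its $k$-valid independent set; \texttt{D.insert}/\texttt{D.delete} stand for our insertion and deletion algorithms from Section~\ref{sec:intervals_alg}, and \texttt{concat}/\texttt{split\_at} for the red-black tree operations of Lemma~\ref{lem:ors}. All names here are hypothetical and not part of the data structure's interface.
\begin{verbatim}
# Sketch only: merge and split of k-valid independent sets reduce
# to O(1) insertions/deletions of a fake tiny interval b, plus
# red-black tree concatenations/splits.  All names hypothetical.
def merge(D1, D2, t):
    b = fresh_tiny_interval(t)   # l(b) > t; strictly inside any
                                 # interval of D1 it intersects
    D1.insert(b)                 # step 1: b joins the independent set
    D = concat(D1, D2)           # step 2: O(k log n) concatenation
    D.delete(b)                  # step 3: deletion restores k-validity
    return D

def split(D, t):
    b = fresh_tiny_interval(t)   # l(b) > t; r(b) left of every
                                 # left endpoint larger than t
    D.insert(b)                  # step 1
    D1, D2 = split_at(D, t)      # step 2: b is the rightmost
                                 # interval of D1's independent set
    D1.delete(b)                 # step 3
    return D1, D2
\end{verbatim}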
Recall we maintain a set of intervals $S$ and a $k$-valid independent set $I$, and we want to split into $S_1$ and $S_2$ with corresponding independent sets $I_1$ and $I_2$, such that all intervals $x \in S_1$ have $\ell(x) \leq t$ and all $y \in S_2$ have $\ell(y) > t$. We introduce a fake tiny interval $b$ with $\ell(b) > t$, such that $b$ is strictly contained in all intervals $x$ with $\ell(x) \leq t$ and $r(x) > t$, and $r(b)$ is smaller than the leftmost left endpoint larger than $t$. The algorithm is the following:
\begin{enumerate}
\item \label{split:insert} Insert $b$ in $S$. $I'$ is the new independent set.
\item \label{split:split} Split $S$ into $S_1 \cup \lrbrace{b}$ and $S_2$. Split $I'$ into $I'_1$ and $I_2$, where $b$ is the rightmost interval of $I'_1$.
\item \label{split:delete} Delete $b$ from $S_1 \cup \lrbrace{b}$ and update $I'_1$ to $I_1$ using our deletion algorithm.
\end{enumerate}

\paragraph{Running time.} Like merge, this algorithm runs in worst-case $O(k^2 \log n)$ time, due to the bounds from Lemma~\ref{lem:ors} and the running times of our insertion and deletion algorithms.

\paragraph{Correctness.} The proof of correctness is similar to that of the merge operation. After step~\ref{split:insert}, $I'$ is a $k$-valid independent set of $S\cup\lrbrace{b}$. It remains to observe that after step~\ref{split:split}, $I'_1$ and $I_2$ are $k$-valid independent sets of $S_1 \cup \lrbrace{b}$ and $S_2$ respectively. Then, the result for $I_2$ is immediate, and for $I_1$ it follows from the correctness of our deletion algorithm of Section~\ref{sec:intervals_alg}.

\paragraph{3. Clip:} Given a set $S$ of intervals, a $k$-valid independent set $I$, and a point $t$ to the right of the rightmost left endpoint of all intervals of $S$, we shrink all intervals $x$ with $r(x) > t$ so that $r(x) = t$. We show that the change in the independent set $I$ can be supported in time $O(k \log n)$. Let $\tau$ be the interval of $S$ with the rightmost left endpoint. Note that $\ell(\tau) < t$. We begin with an observation, which is essentially a corollary of Lemma~\ref{lem:higher_exchange} from Section~\ref{sec:intervals_cor}.

\begin{observation} \label{obs:valid_leftmost} Let $I$ be a $k$-valid independent set maintained by the IQDS structure. Then, using the IQDS structure, we can modify $I$ to contain the interval $\tau$ of $S$ with the rightmost left endpoint and remain $k$-valid, in time $O(k \log n)$. \end{observation}

\begin{proof}
Let $x$ be the rightmost interval of $I$. If $x=\tau$ then $\tau \in I$ and no change is needed. We thus focus on the case $x \neq \tau$. Set $I' \leftarrow (I \setminus \lrbrace{x}) \cup \lrbrace{\tau}$. This might create an alternating path of size exactly $k$ to the left of $\tau$. Search for such an alternating path $(A,B)$ and, if one exists, perform the swap, i.e., set $I'' \leftarrow (I' \setminus A) \cup B$. Clearly the runtime using the IQDS is $O(k \log n)$.

It is easy to see that the new independent set $I''$ is $k$-valid: If no alternating path is found, then the change is a right 1-to-1 substitution, and thus by Lemma~\ref{lem:j-to-j}, $I''$ is $k$-valid. If an alternating path $(A,B)$ was found, then the overall change corresponds to an alternating path with respect to $I$ of size $k+1$: the alternating path is $(A \cup \lrbrace{x},B \cup \lrbrace{\tau})$. By Lemma~\ref{lem:higher_exchange}, this exchange produces a $k$-valid independent set; thus $I''$ is $k$-valid.
\end{proof}

Using Observation~\ref{obs:valid_leftmost}, we can show that the operation Clip($t$) maintains a $k$-valid independent set.
We first make sure that $\tau \in I$; if not, we modify $I$ using the procedure described above (in time $O(k \log n)$). Then it is easy to see that even after shrinking the intervals, $I$ remains a $k$-valid independent set. The no-containment property holds trivially, since no interval's left endpoint changes. $k$-maximality is also easy: since no interval has its left endpoint inside $\tau$, there cannot be any alternating path involving $\tau$. Alternating paths without $\tau$ cannot exist, since they would have existed before the clip as well, contradicting that $I$ is $k$-valid.

\paragraph{4. Extend:} We conclude by showing that given a $k$-valid independent set of intervals $I$ of $S$, if an interval $y \in S \setminus I$ gets replaced by a strict superset $y'$, then $I$ remains $k$-valid.

\begin{lemma} \label{lem:insert_superset} Let $I$ be a $k$-valid independent set of a set $S$ of intervals. Then, if an interval $y \in S \setminus I$ gets replaced by $y'$ such that $y'$ strictly contains $y$, then $I$ remains $k$-valid. \end{lemma}

\begin{proof}
The no-containment property is clearly preserved: $I$ satisfies it before the replacement, and since $y$ was not strictly contained in any interval $x \in I$, neither is its superset $y'$.

The $k$-maximality property can be proven by contradiction. Suppose that after replacing $y$ by $y'$, there exists an alternating path $(A,B)$ of size $t \leq k$. Let $A = \lrbrace{a_1,\dotsc,a_t}$ and $B = \lrbrace{b_1,\dotsc,b_{t+1}}$. There exists some $1 \leq i \leq t+1$ such that $b_i = y'$. Let us first focus on the case $2 \leq i \leq t$. Interval $b_i$ must intersect both $a_{i-1}$ and $a_i$. Since $b_i = y'$ contains $y$, we examine the intersections between $y$ and $a_{i-1},a_i$.

\noindent \textbf{Case 1: $y$ does not intersect any of $a_{i-1},a_i$.} Then, $y$ could be added to $I$ and produce an independent set, thus $I$ was not maximal with respect to $S$, a contradiction.

\noindent \textbf{Case 2: $y$ intersects only one of $a_{i-1}$ or $a_i$.} We focus on the case where $y$ intersects $a_{i-1}$; the other is symmetric. We have that $b_1,a_1,\dotsc,a_{i-1},y$ is an alternating path of size $i-1 \leq t \leq k$, a contradiction.

\noindent \textbf{Case 3: $y$ intersects both $a_{i-1}$ and $a_i$.} Then, $(A,(B \setminus \lrbrace{b_i}) \cup \lrbrace{y})$ is an alternating path of $S$ with respect to $I$ of size $t \leq k$, a contradiction, since $I$ is $k$-maximal.

It remains to consider the corner case that $i=1$ or $i=t+1$. There, $b_i = y'$ intersects only one interval ($a_1$ or $a_t$ respectively). Thus the only cases appearing are cases~1 and~2 above, and the same arguments yield the contradiction.
\end{proof}

\section{Static Squares: The Quadtree Approach}\label{s:quadtree}
In this section we turn our attention to squares. We present an $O(1)$-approximate solution for the (static) independent set problem where all input objects are squares.
Although such results (or even a PTAS) are already known, we present our approach for the static case while developing the structural observations that will become the invariants when we address dynamization in Section~\ref{sec:squares_dynamic}.

For this section, the input $S$ is a set of $n$ axis-aligned squares in the plane. Let $\Iopt{S}$ denote a maximum independent set of $S$. The maximum size of an independent set of $S$ will be denoted by $\OPT{S}=|\Iopt{S}|$. Given a set $\jmark{S}$, our goal is to compute a set $\jmark{I} \subseteq \jmark{S}$ which is an independent set of squares and where $E[|\jmark{I}|] \geq c\cdot \OPT{\jmark{S}}$ for some absolute constant $c>0$. We will use as a black box a solution to the 1-D problem: given a set of intervals, compute a $c$-approximate maximum independent set of these intervals. We will show that, using such a solution, we can obtain an $O(c)$-approximation for squares.

\paragraphh{Random quadtrees.} We assume all squares in $\jmark{S}$ are inside the unit square $[0,1]^2$. Let $\jmark{\stackrel{\infty}{Q}}$ be the infinite quadtree where the root node is a square centered on a random point in the unit square and with a random side length in $[1,2]$. Everything that follows is implicitly parameterized by this choice of random quadtree $\jmark{\stackrel{\infty}{Q}}$ and $\jmark{S}$. We use $\jmark{\eta}$, possibly subscripted, to denote a node of the quadtree and $\Square{\jmark{\eta}}$ to denote $\jmark{\eta}$'s defining square. We will use $s$, possibly subscripted, to denote a square in $\jmark{S}$.

\paragraphh{Centered squares.} Given a square $s$, let $\Node{s}$ be the smallest quadtree node of $\jmark{\stackrel{\infty}{Q}}$ that completely contains $s$; we say that $s$ and $\Node{s}$ are associated with each other\footnote{We note that computing the coordinates of $\Node{s}$ from $s$ is the only operation on the input that we need beyond performing comparisons. This operation can be implemented with a binary logarithm, a floor, and a binary exponentiation.}. A square $s$ is said to be \emph{centered} if $s$ contains the center point of its associated node's square $\Square{\Node{s}}$. See Figure~\ref{f:centered}. We use $\jmark{\boxbox (\jmark{S})}$ to denote the subset of $\jmark{S}$ consisting of the centered squares.

\paragraph{Outline.} We will present the approach in a similar way as explained at a high level in Section~\ref{sec:outline_squares}.
\begin{enumerate}
\item First, we will show that by losing a factor of $16$, we can restrict our attention to centered squares (Lemma~\ref{l:bds}).
\item In Section~\ref{sec:paths_suffice} we focus on the subtree of $\jmark{\stackrel{\infty}{Q}}$ consisting of the nodes associated with centered squares and their ancestors, denoted by $\jmark{Q}$, and show that given a linearly approximate solution for paths of $\jmark{Q}$, we can get an $O(1)$-approximate solution for $\jmark{Q}$ (Lemma~\ref{l:combine}).
\item In Section~\ref{sec:paths_8c} we show how to decompose each path into four monotone subpaths, and that by losing a factor of $4$, it suffices to solve the problem in each of the monotone subpaths and use only the largest of the four independent sets.
\item Last, we show that an approximate independent set of centered squares in monotone subpaths reduces to the problem of the maximum independent set of intervals (Lemma~\ref{l:indsetsize}).
\end{enumerate}

\subsection{Centered Squares}

We begin by showing that, by losing an $O(1)$ factor, we can focus on centered squares and search for an approximate maximum independent set in $\jmark{\boxbox (\jmark{S})}$ rather than $\jmark{S}$ itself.

\begin{lemma} \label{l:bds} The maximum size of an independent set of the centered squares $\jmark{\boxbox (\jmark{S})}$ is expected to be at least $\frac{1}{16}$ that of $\jmark{S}$: $E[\OPT{\jmark{\boxbox (\jmark{S})}}] \geq \frac{1}{16}\OPT{\jmark{S}}$. \end{lemma}

\begin{proof}
Given a square $s$ of size $k$, let $\ell$ be the size of the smallest quadtree cell that is larger than $k$. Observe that $\ell$ is uniformly distributed from $k$ to $2k$. Let $\jmark{\eta}$ be the node of the quadtree of size $\ell$ that contains the lower-left corner of $s$. We know $s$ is centered if its lower-left corner lies in the lower-left quadrant of $\jmark{\eta}$ and if $s$ lies entirely in $\jmark{\eta}$. The first event, that the lower-left corner of $s$ is in the lower-left quadrant of $\jmark{\eta}$, happens with probability $\frac{1}{4}$. Given this, we need to determine whether the $x$-extent of $s$ lies inside the node; this can be done by checking whether the position where the square begins, relative to the left side of the node, plus its size, is smaller than the width of the node. That is, whether its $x$-coordinate relative to the left of $\jmark{\eta}$, which is uniform in $[0,\frac{\ell}{2}]$, plus its $x$-extent, which is $k$, is less than the width $\ell$ of the node, which is uniform in $[k,2k]$. This happens with probability $\frac{1}{2}$, and independently with probability $\frac{1}{2}$ for the $y$-extent as well. Multiplying, this gives a probability of at least $\frac{1}{16}$ that $s$ is centered in $\jmark{\eta}$.

For any $s \in \jmark{S}$, let $i(s)$ be $1$ if $s \in \jmark{\boxbox (\jmark{S})}$, otherwise $i(s)=0$; from the previous paragraph $E[i(s)]\geq \frac{1}{16}$. Let $I$ be $\Iopt{\jmark{S}} \cap \jmark{\boxbox (\jmark{S})}$, those squares from a maximum independent set that are centered. We know $|I| =\sum_{s\in \Iopt{\jmark{S}}}i(s)$, and thus by linearity of expectation $E[|I|] \geq \frac{\OPT{\jmark{S}}}{16}$. Since $I$ is a subset of $\Iopt{\jmark{S}}$, it is an independent set and thus $\OPT{\jmark{\boxbox (\jmark{S})}}\geq |I|$. Combining these gives the lemma.
\end{proof}

\subsection{From Quadtrees to Paths}
\label{sec:paths_suffice}

We now show that the essential hardness of obtaining an independent set on the quadtree lies in getting independent sets on special types of paths of the tree.

\paragraphh{Marked nodes and the finite quadtree.} The quadtree nodes in $\cup_{s\in \jmark{\boxbox (\jmark{S})}} \Node{s}$ are said to be \emph{marked} and are denoted as $\jmark{M}$. Let $\Squares{\jmark{\eta}}$ be the inverse of the $\Node{s}$ function; that is, given a node $\jmark{\eta}$ of the quadtree, it returns the set of squares $s$ such that $\Node{s}=\jmark{\eta}$; these are the squares associated with the node. We define the quadtree $\jmark{Q}$ to be the subtree of the infinite quadtree $\jmark{\stackrel{\infty}{Q}}$ containing the marked nodes $\jmark{M}$ and their ancestors.

Each node of $\jmark{Q}$ is a leaf, an internal node (a node with more than one child), or a monochild node (a node with one child); with the exception that the root is considered to be an internal node if it has one child. See Figure~\ref{f:thequad}. We use the word \emph{path} to refer to a maximal set of connected monochild nodes in $\jmark{Q}$.
By definition, the nodes immediately above the top node and below the bottom node of a path exist in $\jmark{Q}$ and are not monochild. We use these nodes to denote a path: $\jmark{P}(\jmark{\eta}_{\text{top}},\jmark{\eta}_{\text{bottom}})$ refers to the path strictly between $\jmark{\eta}_{\text{top}}$ and $\jmark{\eta}_{\text{bottom}}$, where $\jmark{\eta}_{\text{top}}$ is an ancestor of $\jmark{\eta}_{\text{bottom}}$, neither is a monochild node, and there are only monochild nodes between $\jmark{\eta}_{\text{top}}$ and $\jmark{\eta}_{\text{bottom}}$. Given a path $\jmark{P}=\jmark{P}(\jmark{\eta}_{\text{top}},\jmark{\eta}_{\text{bottom}})$, let $\Squares{\jmark{P}}$ refer to the squares associated with nodes of the path, $\Squares{\jmark{P}} \coloneqq \cup_{\jmark{\eta} \in \jmark{P}} \Squares{\jmark{\eta}}$, and let $\jmark{P_{\text{top}}},\jmark{P_{\text{bottom}}}$ refer to the nodes $\jmark{\eta}_{\text{top}},\jmark{\eta}_{\text{bottom}}$ that bound $\jmark{P}$.

Let $\jmark{Q_{\text{leaves}}}$ refer to the set of leaves of $\jmark{Q}$, $\jmark{Q_{\text{internal}}}$ refer to the internal nodes of $\jmark{Q}$, and $\jmark{Q_{\text{paths}}}$ refer to the set of monochild paths of $\jmark{Q}$. The nodes in these sets partition the nodes of $\jmark{Q}$. Observe that the size of $\jmark{Q}$, measured in nodes, cannot be bounded as a function of $|\jmark{\boxbox (\jmark{S})}|$, as we do not have any bound on the ratio of the sizes of the squares stored. However, the number of leaves, $|\jmark{Q_{\text{leaves}}}|$, internal nodes, $|\jmark{Q_{\text{internal}}}|$, and paths, $|\jmark{Q_{\text{paths}}}|$, are all linear in $|\jmark{\boxbox (\jmark{S})}|$.

\paragraphh{Protected Independent Sets.} Given a path $\jmark{P}$ we say a set $\Ippath{\jmark{P}} \subseteq \Squares{\jmark{P}}$ is a protected independent set with respect to $\jmark{P}$ if it is an independent set, and if no square in $\Ippath{\jmark{P}}$ intersects the square $\Square{(\jmark{P_{\text{bottom}}})}$. This definition implies that no square in $\Ippath{\jmark{P}}$ can intersect any squares associated with any nodes in $\jmark{Q}$ not on this path; that is, $\Ippath{\jmark{P}}$ is disjoint from all squares in $\jmark{\boxbox (\jmark{S})} \setminus \Squares{\jmark{P}}$. It is this property that makes protected independent sets valuable.
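As a concrete restatement of this definition, a protected independent set could be verified by the following (hypothetical, quadratic-time) checker; the helper \texttt{intersects} is an assumed axis-aligned intersection test, and none of the names below belong to our actual structures.
\begin{verbatim}
# Sketch only: checks whether `candidate` is a protected
# independent set with respect to a path P, given the square
# of P_bottom.  All names are hypothetical.
def is_protected_independent(candidate, bottom_square):
    for i, s1 in enumerate(candidate):
        if intersects(s1, bottom_square):
            return False           # violates "protected"
        for s2 in candidate[i + 1:]:
            if intersects(s1, s2):
                return False       # violates independence
    return True
\end{verbatim}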
We show that to obtain an approximate independent set, it is sufficient to use protected independent sets, along with one square associated with each leaf:

\begin{lemma} \label{l:combine} Let $\jmark{I}$ be the subset of $\jmark{\boxbox (\jmark{S})}$ which is the union of
\begin{itemize}
\item An arbitrary square in $\Squares{\jmark{\eta}}$ for each leaf $\jmark{\eta} \in \jmark{Q_{\text{leaves}}}$
\item For each path $\jmark{P} \in \jmark{Q_{\text{paths}}}$, a protected independent set $\Ippath{\jmark{P}}$. We require that $$\sum_{\mathclap{\jmark{P} \in \jmark{Q_{\text{paths}}}}} | \Ippath{\jmark{P}}| \geq c_1 \sum_{\mathclap{\jmark{P} \in \jmark{Q_{\text{paths}}}}} \OPT{\Squares{\jmark{P}}} - c_2 | \jmark{Q_{\text{paths}}} | ,$$ for some absolute positive constants $c_1> 0$, $c_2 \geq 0$. That is, in aggregate, all of these protected independent sets must be within a linear factor of the maximum protected independent sets on all paths.
\item No squares in $\Squares{\jmark{\eta}}$, for each internal node $\jmark{\eta} \in \jmark{Q_{\text{internal}}}$
\end{itemize}
Observe that $|\jmark{I}|=|\jmark{Q_{\text{leaves}}}| +\sum_{{\jmark{P} \in \jmark{Q_{\text{paths}}}}} |\Ippath{\jmark{P}}|$. The set $\jmark{I}$ is an independent set of squares and $|\jmark{I}|\geq \frac{c_1}{2c_1+c_2+1} \OPT{\jmark{\boxbox (\jmark{S})}}$. \end{lemma}

\begin{proof}
First, we argue that $\jmark{I}$ is an independent set. For any two squares $s_1, s_2 \in \jmark{I}$, we argue they cannot intersect. This has several cases:
\begin{itemize}
\item Both $s_1$ and $s_2$ are associated with the same leaf node $\jmark{\eta}=\Node{s_1}=\Node{s_2} \in \jmark{Q_{\text{leaves}}}$. This cannot happen, as it would mean both $s_1$ and $s_2$ are in $\Squares{\jmark{\eta}}$, but only one element from $\Squares{\jmark{\eta}}$ is in $\jmark{I}$ by the construction.
\item Both $\Node{s_1}$ and $\Node{s_2}$ are nodes, neither of which is the same as or the ancestor of the other. In a quadtree, squares of nodes which are not ancestors or descendants of each other are disjoint, and thus $s_1$ and $s_2$, which are contained in the squares defining these quadtree nodes, $\Square{\Node{s_1}}$ and $\Square{\Node{s_2}}$, are disjoint.
\item Both $\Node{s_1}$ and $\Node{s_2}$ are on the same path $\jmark{P}$, for some $\jmark{P} \in\jmark{Q_{\text{paths}}}$. Then they would only be in $\jmark{I}$ if they were both in $\Ippath{\jmark{P}}$, which is by definition an independent set.
\item The only remaining case is where $\Node{s_1}$ is part of some path $\jmark{P} \in \jmark{Q_{\text{paths}}}$ and $\Node{s_2}$ is a descendant of this path, and thus $s_2$ lies inside the square $\Square{(\jmark{P_{\text{bottom}}})}$. But, since $s_1$ is in $\Ippath{\jmark{P}}$, and $\Ippath{\jmark{P}}$ is a protected independent set, by definition $s_1$ will not intersect $\Square{(\jmark{P_{\text{bottom}}})}$ and thus will not intersect $s_2$.
\end{itemize}

Second, we argue that $\jmark{I}$ is an approximation. For disjoint $S_1,S_2$, $\OPT{S_1 \cup S_2} \leq \OPT{S_1} + \OPT{S_2}$.
Thus we compute the independent sets of the squares associated with each part of the quadtree (leaves, internal nodes, monochild paths) separately:
\begin{align*}
\OPT{\jmark{\boxbox (\jmark{S})}} & \leq \OPT{\jmark{Q_{\text{leaves}}}} + \OPT{\jmark{Q_{\text{internal}}} } + \sum_{\mathclap{\jmark{P} \in \jmark{Q_{\text{paths}}}}} \OPT{\Squares{\jmark{P}}} \\
\intertext{As all leaves are disjoint, are associated with at least one square each, and each square in a leaf intersects the center of the leaf, the optimum always contains exactly one square from each leaf and $\OPT{\jmark{Q_{\text{leaves}}}}= |\jmark{Q_{\text{leaves}}}|$:}
& \leq |\jmark{Q_{\text{leaves}}}| + \OPT{\jmark{Q_{\text{internal}}} } + \sum_{\mathclap{\jmark{P} \in \jmark{Q_{\text{paths}}}}} \OPT{\Squares{\jmark{P}}} \\
\intertext{As the number of internal nodes is at most the number of leaves, and there can be at most one square associated with each internal node in any independent set, $\OPT{\jmark{Q_{\text{internal}}} }\leq |\jmark{Q_{\text{internal}}}|\leq |\jmark{Q_{\text{leaves}}}|$:}
& \leq 2|\jmark{Q_{\text{leaves}}}| + \sum_{\mathclap{\jmark{P} \in \jmark{Q_{\text{paths}}}}} \OPT{\Squares{\jmark{P}}} \\
\intertext{In the statement of the lemma $ \sum_{\jmark{P} \in \jmark{Q_{\text{paths}}}} | \Ippath{\jmark{P}}| \geq c_1 \sum_{\jmark{P} \in \jmark{Q_{\text{paths}}}} \OPT{\Squares{\jmark{P}}} - c_2 | \jmark{Q_{\text{paths}}} |$:}
& \leq 2|\jmark{Q_{\text{leaves}}}| + \frac{1}{c_1} \sum_{{\jmark{P} \in \jmark{Q_{\text{paths}}}}} |\Ippath{\jmark{P}}|+ \frac{c_2}{c_1}|\jmark{Q_{\text{paths}}}| \\
\intertext{As there are no more paths than leaves:}
& \leq \lrp{2+\frac{c_2}{c_1}}|\jmark{Q_{\text{leaves}}}| +\frac{1}{c_1} \sum_{{\jmark{P} \in \jmark{Q_{\text{paths}}}}} |\Ippath{\jmark{P}}| \\
& \leq \lrp{2+\frac{c_2+1}{c_1}}\lrp{ |\jmark{Q_{\text{leaves}}}| +\sum_{\mathclap{\jmark{P} \in \jmark{Q_{\text{paths}}}}} |\Ippath{\jmark{P}}| } \\
\intertext{By the definition of $\jmark{I}$ in the statement of the lemma:}
& \leq \lrp{2+\frac{c_2+1}{c_1}} |\jmark{I}| = \frac{1}{\frac{c_1}{2c_1+c_2+1}} |\jmark{I}|
\end{align*}\qedhere
\end{proof}

\subsection{Paths}
\label{sec:paths_8c}

In the previous subsection, the results depended on obtaining an approximate independent set for the squares associated with each path. In this subsection we will show how this can be done, given a solution to the 1-D problem of computing an approximate maximum independent set of intervals.

\paragraphh{Monotone paths.} In this section we fix a monochild path $\jmark{P} \in \jmark{Q_{\text{paths}}}$. We assume paths are ordered from the highest to the lowest node in the tree. Let $\Depth{\jmark{\eta}}$ represent the depth of a node in $\jmark{Q}$, and we use $\Depth{s}$ as a shorthand for $\Depth{\Node{s}}$. As each node on a path in $\jmark{Q_{\text{paths}}}$ has only one child, we label each node in the path with the quadrant of its child ($\jmark{\textsc{I}}, \jmark{\textsc{II}}, \jmark{\textsc{III}}, \jmark{\textsc{IV}}$). The label of a square $s \in \Squares{\jmark{P}}$ is the label of $\Node{s}$ in $\jmark{P}$. We partition the nodes of $\jmark{P}$ into subpaths $\Pathq{\jmark{\textsc{I}}}, \Pathq{\jmark{\textsc{II}}}, \Pathq{\jmark{\textsc{III}}}$ and $\Pathq{\jmark{\textsc{IV}}}$. See~Figure~\ref{f:patha}. We use $\Pathextend{q}$ to refer to $\Pathq{q}$ with $\jmark{P_{\text{bottom}}}$ appended at the end; we call this an \emph{extended subpath} (we will use $q$ to make statements that apply to an arbitrary quadrant).
All of $\Pathq{q}$ and $\Pathextend{q}$ are referred to as \emph{monotone} (extended) subpaths, as for every pair of nodes $\jmark{\eta}_1,\jmark{\eta}_2$ in a subpath, if $\jmark{\eta}_1$ appears before $\jmark{\eta}_2$ on the subpath, then $\jmark{\eta}_2$ is in quadrant $q$ of $\jmark{\eta}_1$. Our general strategy will be to solve the independent set problem for each of the four quadrants and take the largest, which will only cost a factor of four:

\begin{fact} \label{f:max4} As each $\jmark{P}$ in $\jmark{Q_{\text{paths}}}$ is partitioned into $\Pathq{q}$ for $q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\}$, we know that $$\max_{q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\} } \OPT{\Squares{\Pathq{q}}} \geq \frac{1}{4} \OPT{\Squares{\jmark{P}}}$$ and $$\max_{q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\} } \sum_{\jmark{P} \in \jmark{Q_{\text{paths}}}} \OPT{\Squares{\Pathq{q}}} \geq \frac{1}{4} \sum_{\jmark{P} \in \jmark{Q_{\text{paths}}}} \OPT{\Squares{\jmark{P}}}.$$ \end{fact}

\paragraphh{From monotone paths to intervals.} Given a monotone extended subpath $\Pathq{q}$ and a square $s \in \Squares{\Pathq{q}}$, we will be interested in the nodes $\jmark{\eta}'$ in $\Pathq{q}$ whose centers $\Squarecenter{\jmark{\eta}'}$ are intersected by $s$. As $s$ is centered, we know $s$ intersects $\Squarecenter{\Node{s}}$, and by the definition of $\Node{s}$, we know $s$ will not intersect any $\Squarecenter{\jmark{\eta}'}$ for any $\jmark{\eta}'$ that comes before $\Node{s}$ in $\Pathq{q}$ (and thus is an ancestor of $\Node{s}$ in $\jmark{Q}$). Let $\jmark{\eta}'_{\Pathq{q}}(s)$ be the last node in $\Pathq{q}$ such that $s$ intersects its center. We use $\Depthmax{\Pathq{q}}{s}$ as a shorthand for $\Depth{\jmark{\eta}'_{\Pathq{q}}(s)}$. Trivially $\Depth{s} \leq \Depthmax{\Pathq{q}}{s}$. We denote the interval $[\Depth{s},\Depthmax{\Pathq{q}}{s}]$ as $\Interval{\Pathq{q}}{s}$. See Figure~\ref{f:pathb}. We use $\Pathq{q}[d_1,d_2]$ to refer to the subpath of $\Pathq{q}$ consisting of the nodes of depths between $d_1$ and $d_2$, inclusive.

From the above we know that $s$ only intersects the centers of nodes of $\Pathq{q}$ with depths in the range $\Interval{\Pathq{q}}{s}$; moreover, $s$ has a number of interesting geometric properties, including that it intersects the centers of \emph{all} nodes in that range:

\begin{lemma} \label{l:monotone} Given an extended monotone subpath $\Pathq{q}$:
\begin{itemize}
\item The centers of the nodes of the monotone path $\Pathq{q}$, followed by an arbitrary point in $\Square{(\jmark{P_{\text{bottom}}})}$, are monotone with respect to the $x$ and $y$ axes.
\item A square $s \in \Squares{\Pathq{q}}$ intersects the centers $\Squarecenter{\jmark{\eta}}$ of all nodes $\jmark{\eta}$ in $\Pathq{q}[\Interval{\Pathq{q}}{s}]$, and thus $s$ intersects all squares $s'$ in $\Squares{\Pathq{q}}$ with depths in $\Interval{\Pathq{q}}{s}$.
\item Given squares $s_1,s_2 \in \Squares{\Pathq{q}}$, if the intervals $\Interval{\Pathq{q}}{s_1}$ and $\Interval{\Pathq{q}}{s_2}$ intersect, then $s_1$ and $s_2$ intersect.
\item Given squares $s_1,s_2,s_3 \in \Squares{\Pathq{q}}$, where $\Interval{\Pathq{q}}{s_1}$, $\Interval{\Pathq{q}}{s_2}$, $\Interval{\Pathq{q}}{s_3}$ are disjoint, and $\Interval{\Pathq{q}}{s_1}$ is to the left of $\Interval{\Pathq{q}}{s_2}$, which is to the left of $\Interval{\Pathq{q}}{s_3}$, then $s_1$ and $s_3$ are disjoint.
\item Given squares $s_1,s_2 \in \Squares{\Pathq{q}}$ with $\Depth{s_1}<\Depth{s_2}$, if the intervals $\Interval{\Pathq{q}}{s_1}$ and $\Interval{\Pathq{q}}{s_2}$ are disjoint, then $s_1$ does not intersect $\Square{(\jmark{P_{\text{bottom}}})}$.
\end{itemize}
\end{lemma}

\begin{proof}
The first statement holds by the definition of a monotone (extended) subpath, since all succeeding nodes are in the same quadrant relative to the centers of all preceding nodes. The second statement holds with regard to any squares (in fact, any rectangles) and monotone sequences of points. The third statement is trivial given the second, as if two squares cover the same point, they intersect. The proof of the fourth statement may be found in Figure~\ref{fig:monotone}; it requires not only the monotone nature of the centers, but also that each square is contained in a quadrant which is part of a monotone path. The last point is really the same as the second, using the fact that the first point holds for an arbitrary point in $\Square{(\jmark{P_{\text{bottom}}})}$.
\end{proof}

\begin{figure}
\centering
\includegraphics{quadnest-new.pdf}
\caption{We wish to prove the penultimate point of Lemma~\ref{l:monotone}: given squares $s_1,s_2,s_3 \in \Squares{\Pathq{q}}$ where $\Interval{\Pathq{q}}{s_1}$, $\Interval{\Pathq{q}}{s_2}$, $\Interval{\Pathq{q}}{s_3}$ are disjoint and $\Interval{\Pathq{q}}{s_1} < \Interval{\Pathq{q}}{s_2} < \Interval{\Pathq{q}}{s_3}$. Let $\jmark{\eta}_1 \coloneqq \Node{s_1}$, $\jmark{\eta}_3 \coloneqq \Node{s_3}$, and let $\jmark{\eta}_2$ be a node between $\jmark{\eta}_1$ and $\jmark{\eta}_3$ on $\Pathq{q}$. Observe that the depths of the nodes are increasing: $\Depth{\jmark{\eta}_1}<\Depth{\jmark{\eta}_2}<\Depth{\jmark{\eta}_3}$. Assume w.l.o.g.\ that $q=\jmark{\textsc{IV}}$. Three quadtree cells, $\jmark{\eta}_1, \jmark{\eta}_2, \jmark{\eta}_3$, that are a subsequence of $\Pathq{\jmark{\textsc{IV}}}$, are illustrated with their centers $c_1=\Squarecenter{\jmark{\eta}_1},c_2=\Squarecenter{\jmark{\eta}_2},c_3=\Squarecenter{\jmark{\eta}_3}$. Observe that cell $\jmark{\eta}_2$ is contained in the lower-right quadrant of $\jmark{\eta}_1$, and $\jmark{\eta}_3$ is in the lower-right quadrant of $\jmark{\eta}_2$. Squares $s_1$ and $s_3$ are illustrated. Since $\Interval{\Pathq{q}}{s_1} < \Interval{\Pathq{q}}{s_2}$, we know that $s_1$ does not intersect $c_2$. Additionally, $s_3$ by definition is contained in $\jmark{\eta}_3$. The basic geometric fact we wish to illustrate, from which the penultimate point of Lemma~\ref{l:monotone} follows, is that since $s_1$ does not intersect $c_2$, it cannot intersect the red square $s_3$. This follows as, relative to $c_2$, $c_1$ is to the upper-left and any point in $\jmark{\eta}_3$ is to the lower-right, and any axis-aligned square including points both to the upper-left and to the lower-right of a point must include the point itself. Thus, as the blue square $s_1$ must by definition include $c_1$, if it is to intersect the red square, which lies entirely in $\jmark{\eta}_3$, it must intersect $c_2$.}
\label{fig:monotone}
\end{figure}

We call some $I \subseteq \Squares{\Pathq{q}}$ \emph{interval independent} if the intervals $\{ \Interval{\Pathq{q}}{s} \mid s \in I\}$ are independent.
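Concretely, assuming the centers of the subpath's nodes are available in depth order, the interval of a square can be computed by the following linear scan (a sketch with hypothetical names; the dynamic structure of Section~\ref{sec:squares_dynamic} will instead obtain this interval through the search structure):
\begin{verbatim}
# Sketch only: maps a centered square s on a monotone subpath to
# its depth interval [d(s), d'(s)].  centers is the list of node
# centers in depth order, d_s the index of n(s) in that list, and
# contains(s, p) an assumed point-in-square test.
def interval_of(s, centers, d_s):
    d_hi = d_s                  # s contains its own node's center
    while d_hi + 1 < len(centers) and contains(s, centers[d_hi + 1]):
        d_hi += 1               # by monotonicity of the centers, the
                                # centers inside s form a contiguous run
    return (d_s, d_hi)
\end{verbatim}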
We would like to connect the notion of interval-independent squares with independent squares. However, there exist sets of squares which are interval independent but nevertheless contain intersecting squares; see Figure~\ref{f:pathb} for an example.

Let $\Ipath{\Pathq{q}}$ be an interval-independent subset of $\Squares{\Pathq{q}}$. Let $\Half{\Pathq{q}}{\Ipath{\Pathq{q}}}$ be the subset formed by removing every other element of $\Ipath{\Pathq{q}}$, starting from the last (deepest) one. As $\Ipath{\Pathq{q}}$ is interval independent, the depths of the squares are distinct, and $\Half{\Pathq{q}}{\Ipath{\Pathq{q}}}$ is uniquely defined. Clearly, $|\Half{\Pathq{q}}{\Ipath{\Pathq{q}}}| \geq \frac{1}{2}|\Ipath{\Pathq{q}}|-1$.

\begin{lemma}\label{l:isprotected} Given an interval-independent set $\Ipath{\Pathq{q}} \subseteq \Squares{\Pathq{q}}$, the set $\Half{\Pathq{q}}{\Ipath{\Pathq{q}}}$ is a protected independent set. \end{lemma}

\begin{proof}
First, we argue that this is an independent set. As we took the squares corresponding to every other interval, by the penultimate point of Lemma~\ref{l:monotone}, they are independent. From the last point of Lemma~\ref{l:monotone}, the only square in $\Ipath{\Pathq{q}}$ that could intersect $\Square{(\jmark{P_{\text{bottom}}})}$ is the last one, and by definition this is not included in $\Half{\Pathq{q}}{\Ipath{\Pathq{q}}}$.
\end{proof}

\begin{lemma} \label{l:indsetsize} Given $\Pathq{q}$, let $\Iintopt{\Pathq{q}}$ be a maximum interval-independent subset of $\Squares{\Pathq{q}}$. Let $\Iintapprox{\Pathq{q}}$ be an interval-independent subset of $\Squares{\Pathq{q}}$ where $c|\Iintapprox{\Pathq{q}}|\geq |\Iintopt{\Pathq{q}}|$ for some $c\geq 1$. Then, $|\Half{\Pathq{q}}{\Iintapprox{\Pathq{q}}}| \geq \frac{1}{2c}\OPT{\Pathq{q}}-1$. \end{lemma}

\begin{proof}
\begin{align*}
|\Half{\Pathq{q}}{\Iintapprox{\Pathq{q}}}| & \geq \frac{1}{2} |\Iintapprox{\Pathq{q}}|-1 & \text{From taking every other element} \\
& \geq \frac{1}{2c}|\Iintopt{\Pathq{q}}|-1 & \text{Given} \\
& \geq \frac{1}{2c}\OPT{\Pathq{q}}-1
\end{align*}
The last line follows from the third point of Lemma~\ref{l:monotone}, which implies that every independent subset of $\Squares{\Pathq{q}}$ is interval independent; thus the size of the maximum interval-independent subset of $\Squares{\Pathq{q}}$, $|\Iintopt{\Pathq{q}}|$, is at least the size $\OPT{\Pathq{q}}$ of a maximum independent set of $\Squares{\Pathq{q}}$.
\end{proof}

\subsection{Summary}

\begin{theorem} \label{t:static} Given a $c$-approximation algorithm for maximum independent set of intervals, one obtains an expected $(256c+32)$-approximate randomized algorithm for maximum independent set of squares. \end{theorem}

\begin{proof}
For each $\jmark{P}$ in $\jmark{Q_{\text{paths}}}$ and $q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\}$ we have the intervals $\{\Interval{\Pathq{q}}{s} \mid s \in \Squares{\Pathq{q}}\}$. Give each of these sets of intervals to the assumed solution for an approximate independent set of intervals, and let $\Iintapprox{\Pathq{q}}$ be the squares that give rise to the solution. Now by removing every other element of $\Iintapprox{\Pathq{q}}$, beginning with the last, we obtain $\Half{\Pathq{q}}{\Iintapprox{\Pathq{q}}}$. From Lemma~\ref{l:isprotected} we know that $\Half{\Pathq{q}}{\Iintapprox{\Pathq{q}}}$ is a protected independent set of squares, not just intervals.
As for the size of $\Half{\Pathq{q}}{\Iintapprox{\Pathq{q}}}$:
\begin{align*}
& \max_{q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\} } \sum_{\jmark{P} \in \jmark{Q_{\text{paths}}}} |\Half{\Pathq{q}}{\Iintapprox{\Pathq{q}}}| \\
&\geq \max_{q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\} } \sum_{\jmark{P} \in \jmark{Q_{\text{paths}}}} \lrp{\frac{1}{2c}\OPT{\Pathq{q}}-1 } & \text{Lemma~\ref{l:indsetsize}} \\
&\geq \lrp{ \max_{q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\} } \sum_{\jmark{P} \in \jmark{Q_{\text{paths}}}} \frac{1}{2c}\OPT{\Pathq{q}}}-|\jmark{Q_{\text{paths}}}| \\
& \geq \frac{1}{8c} \sum_{\jmark{P} \in \jmark{Q_{\text{paths}}}} \OPT{\Squares{\jmark{P}}} -|\jmark{Q_{\text{paths}}}| & \text{Fact~\ref{f:max4}}
\end{align*}
Thus, for $$q_{\max} \coloneqq \underset{q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\}}{\operatorname{argmax}} \sum_{\jmark{P} \in \jmark{Q_{\text{paths}}}} |\Half{\Pathq{q}}{\Iintapprox{\Pathq{q}}}|,$$ the sets $\Half{\Pathq{q_{\max}}}{\Iintapprox{\Pathq{q_{\max}}}}$ will satisfy the requirements of Lemma~\ref{l:combine} with $c_1=\frac{1}{8c}$ and $c_2=1$. Thus, the sets $\Half{\Pathq{q_{\max}}}{\Iintapprox{\Pathq{q_{\max}}}}$, along with an arbitrary square associated with each leaf in $\jmark{Q_{\text{leaves}}}$, form, by Lemma~\ref{l:combine}, an independent set of squares of size at least $\frac{\frac{1}{8c}}{2\cdot \frac{1}{8c}+1+1}\OPT{\jmark{\boxbox (\jmark{S})}}=\frac{1}{2+16c} \OPT{\jmark{\boxbox (\jmark{S})}}$; from Lemma~\ref{l:bds} this is expected to be at least $\frac{1}{2+16c} \cdot \frac{1}{16} \OPT{S}= \frac{1}{256c+32} \OPT{S}$.
\end{proof}

\section{Dynamization}
\label{sec:squares_dynamic}

\newcommand{\jti}[1]{\textit{#1}}
\newcommand{\jtc}[1]{\tabto{2.7in}\parbox[t]{2.7in}{\textit{// #1}}}
\newcommand{\Ss}[1]{\jmark{S}^{\jmark{\text{search}}}_{#1}}
\newcommand{\Si}[1]{\jmark{S}^{\jmark{\text{interval}}}_{#1}}
\newcommand{\Sq}[1]{\jmark{S}^{\jmark{\text{qtree}}}_{#1}}

In the dynamic structure, squares are inserted into and deleted from an initially empty set of squares, and any changes to an approximate maximum independent set are reported back. Thus, the user of the dynamic structure, by keeping track of all changes that have been reported, will know the current contents of the independent set. This data structure is a dynamization of the quadtree approach presented in Section~\ref{s:quadtree}. We present the data structure in three parts:
\begin{enumerate}
\item The \textit{quadtree structure} is the main structure; it efficiently stores the quadtree and its decomposition into paths, leaves, and internal nodes, and brings together the independent sets of the various parts as previously described in Lemma~\ref{l:combine}.
\item The \textit{path structure} represents a path, and there will be one such structure maintained by the quadtree structure for each path in $\jmark{Q_{\text{paths}}}$. This structure translates each square stored into an interval in the same manner as the static structure, and uses the dynamic interval structure of Section~\ref{sec:intervals_details} for each of the four monotone paths to dynamize maintaining an approximately optimal set of intervals. This requires swapping out the IQDS of Lemma~\ref{lem:ors} for one that is compatible with the intervals generated from our quadtree approach; the details of this are presented in Section~\ref{sec:squaresiqds}.
\item The \textit{search structure} is the secret sauce of the efficiency of our entire method, and allows the dynamic interval structure to query the intervals of squares which are stored in this structure implicitly.
\end{enumerate}

We proceed in a bottom-up fashion here, beginning with the search structure and culminating with the quadtree structure, with the details of the IQDS at the end.

\subsection{Search Structure}

\paragraphh{Overview.} This structure is the only part of the more complex structure which does not directly correspond to something defined in Section~\ref{s:quadtree}. Rather, it plays a supporting role and is vital for speed reasons. There will be one instance of the search structure, which is globally accessible.

Logically, the search structure stores a collection of squares which can be modified via $\jop{Insert}$ and $\jop{Delete}$ operations. Each square in the collection has a mark, which will be one of the quadrant labels $\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}$ or $\jop{None}$. Given a square $s$, the quadtree structure will maintain that $s$'s mark will be $\jop{None}$ if $\Node{s}$ is currently an internal node or a leaf in $\jmark{Q}$, and will be the quadrant of the child if $\Node{s}$ is monochild. There is one query operation, and one operation to change the marked state of some of the squares, which will be used by the quadtree structure when a change in the quadtree necessitates a change in the mark of some squares.

We use $s.t, s.b, s.l, s.r$ to refer to the coordinates of the four sides of a square $s$, and $d$ to refer to one of the directions in $\{t,b,l,r\}$.
\paragraphh{Operations:} Formally, the operations supported are:
\begin{itemize}
\item $\jop{Init}$: Makes a new empty structure.
\item $\jop{Insert}(s,m)$: Inserts square $s$ with mark $m$ into the structure.
\item $\jop{Delete}(s)$: Deletes a square $s$ from the structure.
\item $\jop{RangeMark}(s_1,s_2,m)$: Gives mark $m$ to all squares $s$ stored such that $s_1.d \leq s.d \leq s_2.d$ for all $d\in \{t,b,l,r\}$.
\item $s=\jop{RangeSearch}(s_1,s_2,m)$: Finds and returns a square with mark $m$ stored such that $s_1.d \leq s.d \leq s_2.d$ for all $d\in \{t,b,l,r\}$, or reports that there is no such square.
\end{itemize}

\paragraphh{Invariants:} The quadtree structure will ensure that the following holds at the end of every operation: All squares in $\jmark{\boxbox (\jmark{S})}$ must be stored in this search structure, and thus the insertion and deletion operations must be called when there are changes to $\jmark{\boxbox (\jmark{S})}$. For each square $s$, its mark is the quadrant of the child if $\Node{s}$ is monochild (if $\Node{s}$ is on some $\jmark{P} \in \jmark{Q_{\text{paths}}}$) and $\jop{None}$ otherwise (if $\Node{s}$ is a leaf or an internal node). As the structure of the quadtree changes, the quadtree structure will need to call $\jop{RangeMark}$ (possibly with mark $\jop{None}$) to ensure that this invariant is kept up to date. For example, marking all squares in $\Squares{\jmark{\eta}}$ with mark $m$ can be done with one call to $\jop{RangeMark}(s_1,s_2,m)$, where $s_1$ is the lower-left quadrant of $\Square{\jmark{\eta}}$ and $s_2$ is the upper-right quadrant of $\Square{\jmark{\eta}}$.

\paragraphh{Implementation.} Each square is stored as a 4-D point in a standard range tree structure~\cite{DBLP:journals/ipl/Bentley79}. The range tree is augmented with a possible note with a mark on each node (in this paragraph a node is used to refer to a node of a range tree, not of the quadtree), indicating that all points in its subtree should have that mark. For any point, its mark is determined by its highest ancestor with such a note. Any time a non-leaf node with a note is touched, its note is removed and pushed down to its children, overwriting any note on its children. Additionally, all nodes have an indication of whether all squares in their subtree have the same mark, according to the information in the subtree (ignoring any notes in the ancestors). With such standard augmentation, standard range query operations can be executed, such operations may be limited to a particular mark, and all points in a range can have their mark changed.

\paragraphh{Runtime.} All operations take time $O(\log^4 n)$ using the standard range tree analysis. We do not bother with fractional cascading~\cite{DBLP:journals/algorithmica/ChazelleG86,DBLP:journals/algorithmica/ChazelleG86a} as, while it might shave a logarithmic factor, it would complicate the simple description of the implementation of the marks.
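A minimal sketch of the mark bookkeeping, shown in one dimension for brevity and with hypothetical field names (the actual structure applies this idea on a 4-D range tree):
\begin{verbatim}
# Sketch only: lazy mark notes on (one level of) a range tree;
# all names are hypothetical.  A point's effective mark is the
# note of its highest ancestor; push_down restores this locally.
def push_down(node):
    if node.note is not None and node.left is not None:
        node.left.note = node.note   # overwrite children's notes
        node.right.note = node.note
        node.note = None

def range_mark(node, lo, hi, m):
    if node is None or node.hi < lo or hi < node.lo:
        return                       # subtree disjoint from [lo, hi]
    if lo <= node.lo and node.hi <= hi:
        node.note = m                # subtree covered: a single note
        return
    push_down(node)                  # note moves toward the leaves
    range_mark(node.left, lo, hi, m)
    range_mark(node.right, lo, hi, m)
\end{verbatim}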
\subsection{Path Structure}

The path structure represents a path $\jmark{P} \in \jmark{Q_{\text{paths}}}$ and maintains an approximate protected independent set of the squares associated with nodes of the path. As such, there will be one instance of this structure for every path. Though the details are numerous, the idea behind the path structure is simple: it stores the squares on a path, supports modifications of the path such as split and merge, and translates the squares into intervals which it passes on to our structure for dynamic intervals.

The path structure is called by the quadtree structure every time a square is inserted or deleted that is associated with a node on the path. It is also called by the quadtree structure whenever a split, merge, extend, or contract operation needs to be performed because the paths change due to a structural change in the quadtree. In this way the quadtree structure ensures that there is always one path structure for each path in $\jmark{Q_{\text{paths}}}$. We require that the quadtree structure updates the search structure before the path structures, and that the path structures have access to the search structure. The ADT of the path structure is as follows:
\begin{itemize}
\item $\jop{Init}(\jmark{\eta}_{\text{top}},\jmark{\eta}_{\text{bottom}})$: Creates a new path structure for the path from $\jmark{\eta}_{\text{top}}$ to $\jmark{\eta}_{\text{bottom}}$.
\item $\Delta I = \jop{Insert}(s)$: This is called by the quadtree structure whenever a square $s$ is inserted that is associated with a node on this path. Changes to the independent set are reported.
\item $\Delta I = \jop{Delete}(s)$: This is called by the quadtree structure whenever a square $s$ is deleted that is associated with a node on this path. Changes to the independent set are reported.
\item $(\Delta I,\jmark{P}_\text{new}) = \jop{Split}(\jmark{\eta})$: Given a node $\jmark{\eta}$ on this path $\jmark{P}$, splits this structure into two. This structure will represent the path from the former $\jmark{P_{\text{top}}}$ to $\jmark{\eta}$, and the new path will represent the path from $\jmark{\eta}$ to the former $\jmark{P_{\text{bottom}}}$. The node $\jmark{\eta}$ itself will no longer belong to a path. Changes to the protected independent sets are reported, that is, the difference between the independent set before this operation and the union of the two path structures' protected independent sets at the completion of this operation.
\item $\Delta I = \jop{Merge}(\jmark{P}')$: Given another instance of a path structure $\jmark{P}'$ which is adjacent below this one, that is, where $\jmark{P_{\text{bottom}}}=\jmark{P_{\text{top}}}'$, the two paths and the node between them are combined into this structure, and $\jmark{P}'$ becomes invalid. The changes between the union of the protected independent sets before the operation and the single one after are reported.
\item $\Delta I = \jop{Extend/Contract}(\jmark{\eta})$: Sets $\jmark{P_{\text{bottom}}}$ to $\jmark{\eta}$. There must be no marked nodes between the old and the new $\jmark{P_{\text{bottom}}}$. Extend refers to $\jmark{\eta}$ being below the current $\jmark{P_{\text{bottom}}}$, thus making the path larger, and contract refers to $\jmark{\eta}$ being on the current path, thus making it smaller. Changes to the independent set are reported.
\end{itemize}

Note that in an \jop{Extend}, \jop{Contract}, \jop{Split}, or \jop{Merge}, there is one possibly marked node that is added to or removed from the path(s) and that was not on a path before. However, the squares associated with this node are not passed in as a parameter, as there could be arbitrarily many; instead, the implementation of these operations uses the search structure to access them. This works because at most one of the squares associated with this node can be added to or removed from the independent set, and searching for this one square can be done with the search structure.
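In code form, this ADT could be captured by an interface such as the following sketch (hypothetical names; each mutating call returns the reported changes $\Delta I$):
\begin{verbatim}
# Sketch only: the path-structure ADT as an interface
# (hypothetical names; bodies omitted).
class PathStructure:
    def __init__(self, eta_top, eta_bottom): ...  # Init
    def insert(self, s): ...     # returns Delta I
    def delete(self, s): ...     # returns Delta I
    def split(self, eta): ...    # returns (Delta I, new PathStructure)
    def merge(self, other): ...  # returns Delta I; `other` invalidated
    def extend(self, eta): ...   # returns Delta I
    def contract(self, eta): ... # returns Delta I
\end{verbatim}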
\paragraphh{Implementation overview.} The implementation follows the same logic as the static case. To review: in the static case the path $\jmark{P}$ was partitioned into four monotone subpaths $\Pathq{q}$, for $q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}} \}$. In each of these subpaths, squares $s$ associated with nodes on the monotone subpath $\Pathq{q}$ were associated with intervals $\Interval{\Pathq{q}}{s}\coloneqq [\Depth{s},\Depthmax{\Pathq{q}}{s}]$ corresponding to the depths of the centers of the nodes on the subpath that they spanned. An approximately maximum set of squares whose intervals were independent, $\Iintapprox{\Pathq{q}}$, was then obtained. This set is not necessarily a protected independent set of squares, but we showed it could easily be transformed into one by removing every other square, starting from the deepest, to yield $\jmark{I}(\Pathq{q})$. Finally, the independent set chosen, $\jmark{I}(\jmark{P})$, was the largest of the four $\jmark{I}(\Pathq{q})$.

In order to make this dynamic, several minor changes are needed. The first is that to maintain the four approximate independent sets of intervals, $\Iintapprox{\Pathq{q}}$, the dynamic interval structure of Section~\ref{sec:intervals_details} is used. The second is that the condition that the chosen independent set be the largest of the four is too strict, as we need to report changes in the independent set, and if the largest oscillates frequently between two large sets, reporting these changes will become unacceptably expensive. So, instead, we require that the chosen independent set be within a factor of two of the largest $\Iintapprox{\Pathq{q}}$, which allows an easy amortization of the cost of switching sets, at the expense of losing yet another factor of two in our constant. The third change is that taking every other interval from $\Iintapprox{\Pathq{q}}$ to yield $\jmark{I}(\Pathq{q})$ is too strict to be maintained dynamically. We thus adopt a more relaxed approach where there are between one and three unchosen intervals between the chosen intervals, thus losing another factor of two in the approximation. The fourth change is that the paths themselves are not static, and may be split, merged, extended or contracted as the quadtree shape changes. This is easy to support given that the dynamic interval structure supports splits and merges.

Specifically, we store the four interval-independent subsets of $\Squares{\Pathq{q}}$, $\Iintapprox{\Pathq{q}}$, in four red-black trees, sorted by depth. We call the nodes of non-maximal black depth \emph{chosen}. It is a basic fact of red-black trees that there will be between one and three non-chosen nodes between chosen nodes, and that the minimum and maximum nodes will not be chosen. The squares in the chosen nodes are thus, by the same logic as Lemma~\ref{l:isprotected}, a protected independent set. It is also useful to note that in every standard red-black tree operation (insert, delete, split, merge) only a constant number of nodes can have their chosen/non-chosen status changed.

One of the four red-black trees is marked as \emph{active}, and it is the chosen squares of this tree which form the protected independent set $\jmark{I}(\jmark{P})$ seen by the user of this structure. We will maintain that the active red-black tree is the largest of the four, or at least within a factor of two of the largest.
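The restoration of this invariant, which is described in the next paragraph, might be sketched as follows (hypothetical names throughout; \texttt{chosen} stands for the squares in the chosen nodes of a tree):
\begin{verbatim}
# Sketch only: keep the active tree within a factor of two of the
# largest of the four trees; run at the end of every operation.
def restore_active_invariant(trees, state, delta):
    largest = max(range(4), key=lambda q: trees[q].size)
    if 2 * trees[state.active].size < trees[largest].size:
        for s in chosen(trees[state.active]):
            delta.append((s, "Delete"))  # old protected set leaves
        state.active = largest
        for s in chosen(trees[largest]):
            delta.append((s, "Insert"))  # new protected set enters
    return delta
\end{verbatim}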
Just before finishing the execution of all operations, this invariant is checked, and if it no longer holds, the active red-black tree is changed to the largest of the four, all the chosen squares in the previously active red-black tree are reported as removed, and all the chosen squares in the newly active red-black tree are reported as added to the independent set. This will result in periodic large changes in the independent set, but these changes are easily shown to be constant amortized in the same manner as the classic array resizing problem. As for the implementation of the operations, this comes down to maintaining the four approximate interval-independent subsets in red-black trees, and four dynamic interval structures. Lemma~\ref{lem:ors} listed the queries, the IQDS operations, that the dynamic interval structure needs to be able to perform on intervals, and we will show in Section~\ref{sec:squaresiqds} how to implement them. The $\jop{Extend}$ and $\jop{Merge}$ operations require creating a new interval structure to be merged into the existing one(s). For the $\jop{Extend}$ operation, this corresponds to the squares of a newly added node. For the $\jop{Merge}$ operation, this corresponds to the squares of the node defining the top of the one path and the bottom of the other, which is thus part of neither. Fortunately, in both cases, an arbitrary square from the node can be chosen for the independent set, and this is maximal. During these operations care must be taken when splitting or merging the dynamic interval structures associated with the paths, as the intervals themselves may change. In Section~\ref{s:splitmergeintervals} we show the details of how these splits and merges are executed. We now summarize how each operation is executed. \noindent To execute $\Delta I = \jop{Insert}(s)$: \newcommand{\Jf}{ \item If the red-black tree of the active quadrant no longer has size within a factor of two of the largest of the four red-black trees, append to $\Delta I$ $(s,\jop{Delete})$ for all squares $s$ in chosen nodes in the previously active red-black tree, change the active quadrant to that of the red-black tree with largest size, and append to $\Delta I$ $(s,\jop{Insert})$ for all squares $s$ in chosen nodes in the new active red-black tree. \item Return $\Delta I$.} \newcommand{\Jd}{This returns changes to the independent sets of intervals $\Iintapprox{\Pathq{q}}$, and the red-black tree storing $\Iintapprox{\Pathq{q}}$ is updated accordingly. Append to $\Delta I$ insertions and deletions of squares to reflect any changes to the squares stored in the chosen nodes of the red-black tree.} \newcommand{\Jdd}{This returns changes to the independent sets of intervals $\Iintapprox{\Pathq{q}}$, and the red-black trees storing $\Iintapprox{\Pathq{q}}$ are updated accordingly. Append to $\Delta I_q$ insertions and deletions of squares to reflect any change in the squares stored in the chosen nodes of the red-black trees.} \begin{itemize}[noitemsep] \item Set $\Delta I = []$. \item Let $q$ be the quadrant of the child of $\Node{s}$, which we know is monochild. \item Let $\Interval{\Pathq{q}}{s}\coloneqq [\Depth{s},\Depthmax{\Pathq{q}}{s}]$. This is an IQDS operation and the implementation is discussed in Section~\ref{sec:squaresiqds}.
\item Insert $\Interval{\Pathq{q}}{s}$ into the quadrant-$q$ dynamic interval structure. \Jd \item If quadrant $q$ is not the active one set $\Delta I = []$, as changes in non-active independent sets are recorded but not returned. \Jf \end{itemize} \noindent To execute $\Delta I = \jop{Delete}(s)$: \begin{itemize}[noitemsep] \item Set $\Delta I = []$. \item Let $q$ be the quadrant of the child of $\Node{s}$, which we know is monochild. \item Let $\Interval{\Pathq{q}}{s}\coloneqq [\Depth{s},\Depthmax{\Pathq{q}}{s}]$. This is an IQDS operation and the implementation is discussed in Section~\ref{sec:squaresiqds}. \item Delete $\Interval{\Pathq{q}}{s}$ from the quadrant-$q$ dynamic interval structure. \Jd \item If quadrant $q$ is not the active one set $\Delta I = []$, as changes in non-active independent sets are recorded but not returned. \Jf \end{itemize} \noindent The steps to execute $\Delta I = \jop{Extend}(\jmark{\eta})$: \begin{itemize}[noitemsep] \item Set $\Delta I_q = []$, for $q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\}$. \item Let $q$ be the quadrant of $\jmark{P_{\text{bottom}}}$ that $\jmark{\eta}$ lies in. \item Let $s$ be the first square in the linked list storing $\Squares{\jmark{\eta}}$. \item Let $S'$ be a new interval data structure containing the interval of $s$ with respect to the new path from $\jmark{P_{\text{top}}}$ to $\jmark{\eta}$ and quadrant $q$. This is an IQDS query. \item Add $s$ to the red-black tree storing $\Iintapprox{\Pathq{q}}$. Append to $\Delta I_q$ insertions and deletions of squares to reflect any change in the squares stored in the chosen nodes of the red-black tree. \item Use the method in Section~\ref{s:splitmergeintervals} to merge the new data structure $S'$ with the existing interval data structure. \Jd \item Set $\Delta I = \Delta I_q$, where $q$ is the active quadrant. \Jf \end{itemize} \noindent The steps to execute $\Delta I = \jop{Contract}(\jmark{\eta})$: \begin{itemize}[noitemsep] \item Set $\Delta I_q = []$, for $q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\}$. \item Let $q$ be the quadrant of $\jmark{\eta}$ that $\jmark{P_{\text{bottom}}}$ lies in. \item If there is a square $s$ in the quadrant-$q$ red-black tree storing $\Iintapprox{\Pathq{q}}$ associated with node $\jmark{\eta}$, remove it. Append to $\Delta I_q$ insertions and deletions of squares to reflect any change in the chosen squares in the red-black tree, and the removal of $s$. \item Use the method in Section~\ref{s:splitmergeintervals} to split the existing interval data structure at the depth of $\jmark{\eta}$. Discard the deeper (right) structure. \Jd \item Set $\Delta I = \Delta I_q$, where $q$ is the active quadrant. \Jf \end{itemize} \noindent To execute $\Delta I = \jop{Merge}(\jmark{P}')$: \begin{itemize}[noitemsep] \item Set $\Delta I_q = []$, for $q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\}$. \item Let $q$ be the quadrant of $\jmark{P_{\text{bottom}}}$ that $\jmark{P}'$ lies in. \item Let $s$ be a square in $\Squares{\jmark{P_{\text{bottom}}}}$, if it is not empty. \item Let $S'$ be a new interval data structure containing the interval of $s$ with respect to the new path from $\jmark{P_{\text{top}}}$ to $\jmark{P}'_{\text{bottom}}$ and quadrant $q$. \item Let $S''$ be the interval data structure of $\jmark{P}'$. \item Add $s$ to the red-black tree storing $\Iintapprox{\Pathq{q}}$. Merge the four red-black trees of this path with those of $\jmark{P}'$.
Append to $\Delta I_q$ insertions and deletions of squares of chosen nodes in the quadrant-$q$ red-black tree to reflect any change in the chosen squares in the red-black trees. \item Use the method in Section~\ref{s:splitmergeintervals} to merge the quadrant-$q$ dynamic interval structure with the new data structure $S'$, and then merge all four interval data structures with those of $S''$. \Jdd \item Set $\Delta I = \Delta I_q$, where $q$ is the active quadrant. \Jf \end{itemize} \noindent To execute $(\Delta I,\jmark{P}_\text{new}) = \jop{Split}(\jmark{\eta})$: \begin{itemize}[noitemsep] \item Set $\Delta I_q = []$, for $q \in \{\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}\}$. \item Use the method in Section~\ref{s:splitmergeintervals} to split the intervals of each of the four dynamic interval structures into three groups: those with depths less than, equal to, and greater than that of $\jmark{\eta}$. Discard the middle group, and place the deeper group into a newly created path structure $\jmark{P}_\text{new}$. This returns changes to the independent sets of intervals $\Iintapprox{\Pathq{q}}$, and the red-black trees storing $\Iintapprox{\Pathq{q}}$ are updated accordingly. Append to $\Delta I_q$ insertions and deletions of squares to reflect any change in the squares in the red-black trees. \item Delete from the red-black trees the single square (if any) associated with $\jmark{\eta}$. Split the four red-black trees of this path based on $\jmark{\eta}$ and move the deeper parts to $\jmark{P}_\text{new}$. \Jdd \item Set $\Delta I = \Delta I_q$, where $q$ is the active quadrant. \Jf \end{itemize} \begin{lemma} \label{l:dynpath} For each path structure: \begin{itemize} \item For each $q$ the path structure maintains a protected subset $\Iintapprox{\Pathq{q}}$ of $\Squares{\Pathq{q}}$ where $\Iintapprox{\Pathq{q}} \geq \frac{1}{8} \OPT{\Squares{\Pathq{q}}} - 3$. \item The path structure maintains a protected subset $\jmark{I}(\jmark{P})$ of $\Squares{\jmark{P}}$ where $\jmark{I}(\jmark{P}) \geq \frac{1}{64} \OPT{\Squares{\jmark{P}}} - 3$. \item The runtime of each operation is at most $O(\log^5 n)$ amortized. \end{itemize} \end{lemma} \begin{proof} With the exception of the change of the active red-black tree, the implementation only requires a constant number of operations and calls to the operations of the interval structure. In turn, the interval structure only makes a constant number of calls to the IQDS operations listed in Lemma~\ref{lem:ors}. These are executed with calls to the search structure in time $O(\log^5 n)$ each, as described in Section~\ref{sec:squaresiqds}. The dynamic interval structures only report a constant number of changes of the approximate maximum set of intervals per operation. Thus, by standard amortization arguments, the large cost incurred by switching the active red-black tree is only constant amortized per operation (use as a potential function a constant times the difference between the size of the largest red-black tree and that of the active one). Adjusting for the use of the red-black tree instead of taking every other node gives a set of size at least $\frac{1}{4c} \OPT{\Squares{\Pathq{q}}} - 3$, assuming a $c$-approximate independent set of intervals is returned by the dynamic interval structure, as it will take at least every fourth square, missing up to three at the end. As for the approximation factor for dynamic intervals, using Lemma~\ref{l:indsetsize} with $c=2$ (an easy upper bound on $1+\epsilon$) gives the first point.
From the first point to the second, we convert from one quadrant to the best of the four, which by Fact~\ref{f:max4} loses a factor of four; but since we only take a set which is within a factor of two of the best of the four, we lose a factor of eight in total when moving from the first point to the second. \end{proof} \subsection{Quadtree Structure} The quadtree structure is the main data structure. It maintains an approximate maximum independent set of squares under the insertion and deletion operations. In these operations it returns any changes to the independent set. In order to do this, the quadtree structure contains and maintains one path structure for each path in $\jmark{Q_{\text{paths}}}$. The independent set it logically maintains follows the static case and is the union of the approximately maximal protected independent sets from these path structures and a single square associated with each leaf in $\jmark{Q_{\text{leaves}}}$. As such, the quadtree structure needs to make sure that there is one path structure for each path $\jmark{P}$ in $\jmark{Q_{\text{paths}}}$, and that the path structures are informed via their insert and delete methods of any changes in the sets $\Squares{\Pathq{q}}$. Additionally, as squares are inserted and deleted, the shape of the quadtree $\jmark{Q}$ and thus the set $\jmark{Q_{\text{paths}}}$ may change, so the quadtree structure needs to call the various operations on the path structures to reflect any such changes in the composition of $\jmark{Q_{\text{paths}}}$. Also, as the structure of the quadtree changes, $\jmark{Q_{\text{leaves}}}$ may change, and thus any changes to the part of the independent set composed of one element of $\Squares{\jmark{\eta}}$ for each $\jmark{\eta}\in \jmark{Q_{\text{leaves}}}$ need to be reported. Finally, the quadtree structure needs to maintain the invariants of the searching structure. It must be updated upon the insertion and deletion of any squares in $\jmark{\boxbox (\jmark{S})}$, which is easy enough. However, each square in the search structure is marked based on whether $\jmark{\eta}(s)$ is a monochild node, and if so, which quadrant its child is in. As the shape of the quadtree changes, this could result in the marked status of many squares changing, and the range-marking method of the search structure needs to be called. \paragraphh{Operations: } \begin{itemize} \item $\Delta I = \jop{Insert}(s)$: Inserts square $s$; any changes to the independent set are reported. \item $\Delta I = \jop{Delete}(s)$: Deletes square $s$; any changes to the independent set are reported. \end{itemize} \paragraphh{Implementation overview:} The implementation of the quadtree structure is the most complex part of the data structuring required, but at the same time the most standard. This is because in order to maintain the path structures and the leaves, one of course needs to know the shape of the quadtree. As the size of $\jmark{Q}$, measured in nodes, is unbounded, it is stored in the standard compressed way where unmarked monochild nodes are contracted; such a tree has size at most linear in the number of marked nodes. This compressed quadtree is explicitly stored. In each node, the squares associated with that node are stored in a linked list. We need to be able to identify, given a newly inserted square, where its node is: is it in the existing quadtree, and if not, where should it be added?
As the quadtree is not necessarily balanced, and we are seeking polylogarithmic time, additional data structuring on top of the quadtree is needed. Secondly, for each node in the quadtree we wish to logically maintain a pointer to the path structure representing the path it is on, or is the bottom of. However, if these pointers are maintained in the obvious way by having a single pointer from each node, a single structural change in the quadtree could cause a linear number of these pointers to change (see 3(c) in Figure~\ref{fig:insertRect}). Fortunately, the solution to both of these algorithmic issues is the link-cut tree of Sleator and Tarjan \cite{DBLP:journals/jcss/SleatorT83}. We maintain a link-cut tree structure on top of the compressed quadtree. This allows the searching of the quadtree, and the obtaining and changing of the path pointers, to be effectuated in logarithmic time. \begin{figure} {\centering \includegraphics[scale=1]{insertCasesRect-new.pdf}} \caption{Insertion into the quadtree. The new node is $\jmark{\eta}_1$, the LCA of the new node and the existing tree is $\jmark{\eta}_2$. Three cases are illustrated, depending on whether $\jmark{\eta}_2$ is a leaf, internal node, or monochild node. All cases begin with the new square being added to the searching structure with mark $\jop{None}$. Each path structure is illustrated, representing a maximal sequence of monochild nodes. Each path structure has pointers to the two nodes $\jmark{P_{\text{top}}}$ and $\jmark{P_{\text{bottom}}}$ in between which the nodes on the path lie. All nodes have a pointer to the path that they lie on or are $\jmark{P_{\text{bottom}}}$ of. } \label{fig:insertRect} \end{figure} \paragraphh{Insertion.} We can now describe in detail the insertion process of a new square $s$. See Figure~\ref{fig:insertRect} for an overview. First the coordinates of $\jmark{\eta}(s)$ are computed, which can be done with arithmetic and a discrete logarithm. Then it is checked whether $s$ is centered, that is, whether it contains the center of $\jmark{\eta}(s)$; if it does not, the insertion procedure returns having done nothing. Then, $\jmark{\eta}(s)$ is either in the compressed quadtree or not. If not, this could be because it needs to be added as a new leaf, or because it is in $\jmark{Q}$ as a monochild unmarked node and thus has been compressed. In any case the search is done using the link-cut tree. Link-cut trees support a so-called oracle search in $O(\log n)$ time \cite{DBLP:journals/algorithmica/AronovBDGILS18}. That is, given some query and a node in the tree, if one can compute in constant time whether the node is the answer to the query, and if not, which connected component of the tree minus the node the query lies in, then the search runs in time $O(\log n)$. For searching for a node in a quadtree this oracle is a simple geometric computation: given a node, the query node lies in a child if it lies entirely in one of the node's quadrants, with the quadrant number indicating the child; otherwise it lies in the part of the tree attached to the node's parent. To summarize, $\jmark{\eta}(s)$ is searched for in the quadtree and the outcome is one of the following: \begin{enumerate}[noitemsep,nolistsep] \item The node $\jmark{\eta}(s)$ is already present in the compressed quadtree. \item The node $\jmark{\eta}(s)$ is a compressed node, that is, it is in $\jmark{Q}$ but not in the compressed quadtree, as it is monochild and currently unmarked.
Let $\jmark{\eta}_1$ (above) and $\jmark{\eta}_2$ (below) denote the nodes in the compressed quadtree bookending the insertion point. \item The node $\jmark{\eta}(s)$ is not in the quadtree $\jmark{Q}$. It should be attached to $\jmark{\eta}_3$, which is \ldots \begin{enumerate}[noitemsep,nolistsep] \item a leaf. \item a node of $\jmark{Q_{\text{internal}}}$. \item a marked monochild node between $\jmark{\eta}_4$ (above) and $\jmark{\eta}_5$ (below). \item an unmarked monochild node (a compressed node). Thus $\jmark{\eta}_3$ is in $\jmark{Q}$ but not the compressed quadtree, where it is between $\jmark{\eta}_4$ (above) and $\jmark{\eta}_5$ (below). \end{enumerate} \end{enumerate} Once the case has been established the changes need to be carried out in three phases: changes to the searching structure, structural changes to the paths and quadtree, and finally changes to the independent set caused by changes of the leaves. The order of the first two is crucial, as the path operations require that changes to the searching structure have already been effectuated. For the searching structure, the new square needs to be added by calling $\jop{Insert}(s,m)$ with the appropriate mark. If $\jmark{\eta}(s)$ is a new node, $\jmark{\eta}(s)$ will be a leaf, and it should be added with $m=\jop{None}$; otherwise it should be added with the type of the node it is being added to. Additionally, squares whose node changes type need to have their mark changed in the search structure. This can happen to the squares in $\Squares{\jmark{\eta}_3}$ in cases 3(a) and 3(c); in the first, $\jmark{\eta}_3$ changes from being a leaf to a monochild node, and thus the mark on the squares in $\Squares{\jmark{\eta}_3}$ must change from $\jop{None}$ to one of $\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}$, depending on how $\jmark{\eta}(s)$ is attached to $\jmark{\eta}_3$. In 3(c), $\jmark{\eta}_3$ changes from being a monochild node to an internal node, thus the mark on the squares in $\Squares{\jmark{\eta}_3}$ must change to $\jop{None}$. In both cases a single call to $\jop{ChangeMarkOnSquaresOfNode}(\jmark{\eta}_3,m)$ suffices. Now, structural changes need to be performed. In case 2, the node $\jmark{\eta}(s)$ needs to be added between $\jmark{\eta}_1$ and $\jmark{\eta}_2$ with a path pointer identical to that of the node below, $\jmark{\eta}_2$. In case 3(d), the node $\jmark{\eta}_3$ needs to be added between $\jmark{\eta}_4$ and $\jmark{\eta}_5$, with a path pointer identical to that of the node below, $\jmark{\eta}_5$. Then in case 3, $\jmark{\eta}(s)$ is created as a leaf attached to $\jmark{\eta}_3$. In case 3(a), the path which had $\jmark{\eta}_3$ as its bottom node needs to be extended to include $\jmark{\eta}(s)$; this is done by calling $\jop{Extend}(\jmark{\eta}(s))$ on the path of $\jmark{\eta}_3$ and having the path pointer of $\jmark{\eta}(s)$ point to this path. In cases 3(b-d) the new leaf $\jmark{\eta}(s)$ has its path pointer pointing to a new empty path structure created with $\jop{Init}(\jmark{\eta}_3,\Node{s})$, which represents the empty path between the leaf $\jmark{\eta}(s)$ and the now-internal node $\jmark{\eta}_3$. In cases 3(c-d), the addition of $\jmark{\eta}(s)$ causes $\jmark{\eta}_3$ to switch from being a monochild node on a path to an internal node. This requires that the path be split. Let $\jmark{\eta}_6$ be the bottom node of the path of $\jmark{\eta}_3$.
We call $\jop{Split}(\jmark{\eta}_3)$ on $\jmark{\eta}_3$'s path and receive a new path structure $\jmark{P}_\text{new}$. We set the path pointers on all nodes from $\jmark{\eta}_5$ to $\jmark{\eta}_6$ to $\jmark{P}_\text{new}$. This is where the link-cut tree's management of the path pointers is crucial: there could be many nodes from $\jmark{\eta}_5$ to $\jmark{\eta}_6$, but the link-cut tree can change them all logically in logarithmic time. With the structural changes complete, the square $s$ is added to the end of the linked list of $\jmark{\eta}(s)$. If $\jmark{\eta}(s)$ is a monochild node, which may hold in case (1) and is definitely true in case (2), then we are required to call $\jop{Insert}(s)$ on $\jmark{\eta}(s)$'s path. In all of the path operations, any changes to the independent sets reported by the path structure must be saved and returned. The third stage is to report any changes to the independent set as a result of the leaves of the quadtree changing, as one of the two components of the independent set is the first square in the linked list of each leaf node. In cases 3(a-d), $\jmark{\eta}(s)$ is a new leaf containing only $s$, so $s$ is reported as being added to the independent set. In case 3(a), $\jmark{\eta}_3$ was a leaf but is a leaf no longer, so the first square in $\jmark{\eta}_3$ is reported as being deleted (note that the $\jop{Extend}$ operation may result in this square being re-added to this set; this is intentional and is okay). \paragraphh{Deletions.} Deletions are handled in a largely symmetric fashion, which we now describe. As with insertions, the square $s$ is tested to see if it is centered, and if it is not, nothing is done. Otherwise $\Node{s}$ is located in the quadtree. \begin{enumerate}[noitemsep,nolistsep] \item The linked list containing $\Squares{\Node{s}}$ contains $s$ and other squares. \item The linked list containing $\Squares{\Node{s}}$ only contains $s$. \begin{enumerate}[noitemsep,nolistsep] \item $\Node{s}$ is a leaf with parent $\jmark{\eta}_1$ \begin{enumerate}[noitemsep,nolistsep] \item Node $\jmark{\eta}_1$ is a monochild node. \item Node $\jmark{\eta}_1$ is an internal node with two children and $\Squares{\jmark{\eta}_2}$ is empty, where $\jmark{\eta}_2$ denotes the other child. \item Node $\jmark{\eta}_1$ is an internal node with two children and $\Squares{\jmark{\eta}_2}$ is nonempty. \item Node $\jmark{\eta}_1$ is an internal node with more than two children.
\end{enumerate} \item $\Node{s}$ is a monochild node on a path. \item $\Node{s}$ is an internal node. \end{enumerate} \end{enumerate} Once the case has been established the changes need to be carried out in three phases: changes to the searching structure, structural changes to the paths and quadtree, and finally changes to the independent set caused by changes of the leaves. The order of the first two is crucial, as the path operations require that changes to the searching structure have already been effectuated. For the searching structure, the square $s$ needs to be removed by calling $\jop{Delete}(s)$. Additionally, squares whose node changes type need to have their mark changed in the search structure. This will happen in case 2(a)(i), where $\jmark{\eta}_1$ will become a leaf and thus all squares in $\Squares{\jmark{\eta}_1}$ need to have their mark changed to $\jop{None}$. It will also happen in case 2(a)(iii), where $\jmark{\eta}_1$ will change from being an internal node to a monochild node and thus the squares in $\Squares{\jmark{\eta}_1}$ need to have their mark changed to one of $\jmark{\textsc{I}},\jmark{\textsc{II}},\jmark{\textsc{III}},\jmark{\textsc{IV}}$, depending on which quadrant the other child of $\jmark{\eta}_1$ is in. Now, structural changes need to be performed. First $s$ is removed from the linked list of $\Node{s}$; if $\Node{s}$ is a monochild node on a path, $\jop{Delete}(s)$ is also called on that path. In case (1) and case 2(a)(iv) nothing further changes, and the paths remain untouched. In cases 2(b) and 2(c) a node loses its last square and thus becomes unmarked; in 2(b) the node becomes compressed, and the compressed quadtree is updated to reflect this, but the paths do not change. In 2(a)(i), as $\jmark{\eta}_1$ is now a leaf, $\jop{Contract}(\jmark{\eta}_1)$ is called on its path. The most complicated cases are 2(a)(ii-iii), where the deletion causes $\jmark{\eta}_1$ to stop being an internal node and it is merged via $\jop{Merge}$ with the paths above and below. The third stage is to report any changes to the independent set as a result of the leaves of the quadtree changing, as one of the two components of the independent set is the first square in the linked list of each leaf node. This only occurs in 2(a)(i), where $\jmark{\eta}_1$ becomes a leaf and so the first square of $\Squares{\jmark{\eta}_1}$ should be reported as being added to the independent set. \paragraphh{Main result.} \begin{theorem} The quadtree structure maintains a dynamic set of squares under insertion and deletion in $O(\log^5 n)$ amortized time per operation, and reports changes to an independent subset of these squares whose size is, in expectation, within a $4128=O(1)$ factor of the maximum independent set. \end{theorem} \begin{proof} This structure only performs a constant number of operations, each of which is an operation on a link-cut tree, a path structure, or a search structure. These all have $O(\log^5 n)$ amortized cost. In Lemma~\ref{l:dynpath} we showed that we can maintain, for each path, an approximate protected independent set that is within a constant factor of optimal, with constants $c_1=\frac{1}{64}$ and $c_2=3$. By Lemma~\ref{l:combine} the approximation factor for centered squares is obtained as a function of these constants (which were better in the static case); specifically, Lemma~\ref{l:combine} gives a $\frac{2c_1+c_2+1}{c_1}=258$-approximation for centered squares. Finally, as in Theorem~\ref{t:static}, Lemma~\ref{l:bds} is applied to convert the approximation factor on centered squares to an expected one on all squares, at a cost of a factor of 16.
Thus our overall approximation factor is at most 4128 in expectation. \end{proof} \subsection{The Interval Query Data Structure (IQDS) for Squares} \label{sec:squaresiqds} In this section, we prove the following: \begin{lemma} \label{l:squaresiqds} We can support the following operations in the time indicated: \begin{itemize} \item $\jop{Get-Interval}(\jmark{P},q,s)$: Given a path $\jmark{P} \in \jmark{Q_{\text{paths}}}$, a quadrant $q$, and a square $s \in \Squares{\Pathq{q}}$, report the interval $\Interval{\Pathq{q}}{s}$ in time $O(\log n)$. \item $\jop{Report-Leftmost}(\jmark{P},q,\ell_1,\ell_2)$: Given a path $\jmark{P}$ and quadrant $q$, among all squares $s \in \Squares{\Pathq{q}}$ with the left endpoint of their interval $\Interval{\Pathq{q}}{s}$ in $(\ell_1,\ell_2)$, report the one with minimal right endpoint in time $O(\log^5 n)$. \item $\jop{Report-Rightmost}(\jmark{P},q,r_1,r_2)$: Given a path $\jmark{P}$ and quadrant $q$, among all squares $s \in \Squares{\Pathq{q}}$ with the right endpoint of their interval $\Interval{\Pathq{q}}{s}$ in $(r_1,r_2)$, report the one with maximal left endpoint in time $O(\log^5 n)$. \end{itemize} \end{lemma} The above shows that the queries of the Interval Query Data Structure (IQDS) can be answered for each dynamic interval structure, of which there is one associated with each monotone path $\Pathq{q}$. The path and quadrant serve as the identifier of which interval data structure the query is to be executed on. This is key to the efficiency of our method. We are able to store all of the squares in one data structure, the search structure, so that the intervals of squares can be computed on the fly given the top, bottom, and quadrant number of the monotone path they lie on. In this way, seemingly impossible changes, such as when a structural change to the quadtree causes two paths to merge and many intervals associated with squares to grow, are easily handled: the search structure already has enough information to answer interval queries for any monotone path that is consistent with the stored squares, without any changes. It also means that split, merge, insert, and delete need not be directly implemented in the IQDS as used for dynamic squares; they can return having done nothing. This is because in implementing the IQDS here we have access to the search structure, which is maintained by the quadtree structure to contain all of the squares and marks. Thus the search structure, together with the top and bottom of the path being queried and the quadrant, is all that is needed to answer a query, and this information is available in the monotone path that owns each instance of the dynamic interval structure that calls the IQDS. We now go through a series of technical lemmas that prove the above Lemma~\ref{l:squaresiqds}, beginning with a lemma that gives a mapping between squares on a given monotone path whose intervals lie in a certain range and a region of 4-dimensional space. The reader is encouraged to see Figure~\ref{fig:twoells}, which provides motivation for the first technical lemma. \begin{figure} \begin{minipage}[c]{3in} \includegraphics[width=\textwidth, clip=true, trim=0 0 0 1pc ]{geomofintervals-new.pdf} \end{minipage} \hspace{1pc} \begin{minipage}[c]{3in} \caption{The black points represent the centers of the squares of nodes of some monotone path $\Pathq{\jmark{\textsc{I}}}$, which are monotone and increasing in depth as you go to the upper right.
The basic fact illustrated is that, given a square $s$, the following two statements are equivalent: (1) the first of the centers that $s$ intersects belongs to a node of depth $\ell$ between $\ell_1$ and $\ell_2$, and the last of the centers that $s$ intersects belongs to a node of depth $r$ between $r_1$ and $r_2$; (2) the lower-left corner of $s$ is in the region labelled $R_\ell$ and the upper-right corner is in the region labelled $R_r$.} \label{fig:twoells} \end{minipage} \end{figure} \begin{lemma} \label{l:geomofintervals} Given a path $\jmark{P} \in \jmark{Q_{\text{paths}}}$, a quadrant $q$ assumed without loss of generality to be $\jmark{\textsc{I}}$, and values $\ell_1 \leq \ell_2 \leq r_1 \leq r_2$ where there are nodes on $\Pathq{q}$ with depths $\ell_1,\ell_2,r_1$ and $r_2$: \begin{itemize}[noitemsep,nolistsep] \item Let $\Pathq{\jmark{\textsc{I}}}[d]$ be the node on the monotone path of depth $d$, if it exists. \item Let $\ell_0$ be the depth of the next shallowest node on $\Pathq{\jmark{\textsc{I}}}$ before $\Pathq{\jmark{\textsc{I}}}[\ell_1]$, which will be $-\infty$ if $\Pathq{\jmark{\textsc{I}}}[\ell_1]$ is already the shallowest node of type $\jmark{\textsc{I}}$ in $\jmark{P}$. \item Let $r_3$ be the depth of the next deepest node on $\Pathq{\jmark{\textsc{I}}}$ after $\Pathq{\jmark{\textsc{I}}}[r_2]$, which will be $\infty$ if $\Pathq{\jmark{\textsc{I}}}[r_2]$ is already the deepest node of type $\jmark{\textsc{I}}$ in $\jmark{P}$. \item Let $\jmark{\eta}_{\text{top}} \coloneqq \jmark{P}[\Depth{\jmark{P_{\text{top}}}}+1]$ be the top node on the path $\jmark{P}$, thus a child of the internal node $\jmark{P_{\text{top}}}$ that serves to define $\jmark{P}.$ \item Let $R_{\ell}$ be the region that is in the lower-left quarter-plane relative to $\Squarecenter{\jmark{P}[\ell_2]}$ but not in the lower-left quarter-plane relative to $\Squarecenter{\jmark{P}[\ell_0]}$ (the second condition is omitted if $\ell_0 = - \infty$). \item Let $R_r$ be the region that is in the upper-right quarter-plane relative to $\Squarecenter{\jmark{P}[r_1]}$ but not in the upper-right quarter-plane relative to $\Squarecenter{\jmark{P}[r_3]}$ (the second condition is omitted if $r_3=\infty$). \end{itemize} \noindent Then: \begin{itemize}[noitemsep,nolistsep] \item Any square $s \in \Squares{\Pathq{\jmark{\textsc{I}}}}$ with $\Interval{\Pathq{\jmark{\textsc{I}}}}{s} =[\ell,r]$, where $\ell \in [\ell_1,\ell_2]$ and $r \in [r_1,r_2]$, has its lower-left endpoint in $R_{\ell} \cap \Square{(\jmark{\eta}_{\text{top}})}$ and its upper-right endpoint in $R_{r} \cap \Square{(\jmark{\eta}_{\text{top}})}$. \item For any square $s$, if \begin{itemize}[noitemsep,nolistsep] \item $\Node{s}$ is a monochild node of type $\jmark{\textsc{I}}$ and \item $s$'s lower-left endpoint is in $R_{\ell} \cap \Square{(\jmark{\eta}_{\text{top}})}$ and \item $s$'s upper-right endpoint is in $R_{r} \cap \Square{(\jmark{\eta}_{\text{top}})}$ \end{itemize} \noindent then: \begin{itemize}[noitemsep,nolistsep] \item $\Node{s}\in \Pathq{\jmark{\textsc{I}}}$ and \item $\Interval{\Pathq{\jmark{\textsc{I}}}}{s} = [\ell,r]$, where $\ell\in[\ell_1,\ell_2]$ and $r\in[r_1,r_2]$. \end{itemize} \end{itemize} \end{lemma} \begin{proof} We prove the first claim first and assume we have a square $s \in \Squares{\Pathq{\jmark{\textsc{I}}}}$ with $\Interval{\Pathq{\jmark{\textsc{I}}}}{s}=[\ell,r]$ where $\ell \in [\ell_1,\ell_2]$ and $r \in [r_1,r_2]$. \begin{itemize}[noitemsep,nolistsep] \item The square $s$ is contained in $\Node{s}$ by definition.
The region $\Square{\Node{s}}$ is contained in $\Square{(\jmark{\eta}_{\text{top}})}$, as $\jmark{\eta}_{\text{top}}$ is an ancestor of (or equal to) $\Node{s}$ in the quadtree. Thus, the square $s$ is in $\Square{(\jmark{\eta}_\text{top})}$. \item The lower-left endpoint of $s$ is in the lower-left quadrant of $\jmark{P}[\ell]=\Node{s}$. The centers of any nodes on the path $\jmark{P}$ deeper than $\jmark{P}[\ell]$ will be contained in the upper-right quadrant of $\jmark{P}[\ell]=\Node{s}$. Thus the lower-left endpoint of $s$ is in the lower-left quarter-plane relative to $\Squarecenter{\jmark{P}[\ell_2]}$, as $\ell_2 \geq \ell$. \item If $\ell_0$ is defined, the lower-left endpoint of $s$ cannot be in the lower-left quarter-plane relative to $\Squarecenter{\jmark{P}[\ell_0]}$. This is because then $s$ would include $\Squarecenter{\jmark{P}[\ell_0]}$ (as we know $s$ includes $\Squarecenter{\jmark{P}[\ell_2]}$, which is to the upper-right of it), and this would mean that $s$ includes the center of a node higher in the quadtree than $\Node{s}$, a contradiction with $\ell = \Depth{s}$. \item The upper-right endpoint of $s$ is in the upper-right quadrant of $\jmark{P}[r]$. Any node on the path $\Pathq{\jmark{\textsc{I}}}$ with depth at most $r$ will contain $\Square{\jmark{P}[r]}$ in its upper-right quadrant. Thus the upper-right endpoint of $s$ is in the upper-right quarter-plane relative to $\Squarecenter{\jmark{P}[r_1]}$, as $r_1 \leq r$. \item If $r_3$ is defined, the upper-right endpoint of $s$ cannot be in the upper-right quarter-plane relative to $\Squarecenter{\jmark{P}[r_3]}$. This is because then $s$ would include $\Squarecenter{\jmark{P}[r_3]}$ (as we know $s$ includes $\Squarecenter{\jmark{P}[r_2]}$, which is to the lower-left of it), and this would mean that $s$ includes the center of a node on $\Pathq{\jmark{\textsc{I}}}$ deeper than $r$, a contradiction. \end{itemize} To prove the second claim we assume we have a square $s$ where $\Node{s}$ is a monochild node of type $\jmark{\textsc{I}}$, $s$'s lower-left endpoint is in $R_{\ell} \cap \Square{(\jmark{\eta}_{\text{top}})}$ and $s$'s upper-right endpoint is in $R_{r} \cap \Square{(\jmark{\eta}_{\text{top}})}$. As $s$ lies entirely in $\Square{(\jmark{\eta}_{\text{top}})}$, $\Node{s}$ is either on $\jmark{P}$ or is a descendant. But as the square $s$ includes $\Squarecenter{\jmark{P} [r_1]}$ ($R_r$ is entirely to the upper-right of it and $R_{\ell}$ is entirely to the lower-left), we know that $\Node{s}$ is $\jmark{P}[r_1]$ or an ancestor. Thus $\Node{s}$ is on $\jmark{P}$, and as it is of type $\jmark{\textsc{I}}$, on $\Pathq{\jmark{\textsc{I}}}$. From the geometry of the regions $R_\ell$ and $R_r$ we know that $\Interval{\Pathq{\jmark{\textsc{I}}}}{s} = [\ell,r]$, where $\ell\in[\ell_1,\ell_2]$ and $r\in[r_1,r_2]$. If $\ell< \ell_1$, then $\ell_0$ must be defined and the lower-left corner of the square would be to the lower-left of $\Squarecenter{\jmark{P}[\ell_0]}$ and thus not in $R_\ell$. If $\ell>\ell_2$, then the lower-left corner of $s$ would not be to the lower-left of $\Squarecenter{\jmark{P}[\ell_2]}$ and thus would not be in $R_\ell$. If $r<r_1$, then the upper-right corner of $s$ would not be to the upper-right of $\Squarecenter{\jmark{P}[r_1]}$ and thus not be in $R_r$. If $r>r_2$, then $r_3$ must be defined and the upper-right corner of $s$ would be to the upper-right of $\Squarecenter{\jmark{P}[r_3]}$ and thus not be in $R_r$. \end{proof} With this technical lemma in hand, we now show how, with a small amount of manipulation, the claims of the main lemma of this section, Lemma~\ref{l:squaresiqds}, can be proven.
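Before turning to the algorithmic lemmas, the following is a minimal Python sketch of the corner tests of Lemma~\ref{l:geomofintervals} for quadrant $\jmark{\textsc{I}}$; all names are ours and purely illustrative, centers and corners are $(x,y)$ pairs, and \texttt{c\_l0}/\texttt{c\_r3} are \texttt{None} when $\ell_0=-\infty$/$r_3=\infty$.

\begin{verbatim}
# Sketch of the corner tests of the lemma, for quadrant I (the path
# descends to the upper right). Points are (x, y) pairs.

def lower_left_of(p, c):
    return p[0] <= c[0] and p[1] <= c[1]

def upper_right_of(p, c):
    return p[0] >= c[0] and p[1] >= c[1]

def corners_in_regions(ll, ur, c_l2, c_l0, c_r1, c_r3):
    """True iff the lower-left corner ll lies in R_l and the
    upper-right corner ur lies in R_r."""
    in_Rl = (lower_left_of(ll, c_l2)
             and (c_l0 is None or not lower_left_of(ll, c_l0)))
    in_Rr = (upper_right_of(ur, c_r1)
             and (c_r3 is None or not upper_right_of(ur, c_r3)))
    return in_Rl and in_Rr
\end{verbatim}

Since each region is a rectangle or an L-shape, these pointwise tests translate into a constant number of orthogonal range queries over the pairs of corners, which is exactly how the next lemma uses the search structure.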
\begin{lemma} \label{l:algogeom} \label{l:geomsearch} Given a path $\jmark{P} \in \jmark{Q_{\text{paths}}}$, a quadrant $q$ assumed without loss of generality to be $\jmark{\textsc{I}}$, and values $\ell_1 \leq \ell_2 \leq r_1 \leq r_2$ where there are nodes on $\Pathq{q}$ with the four depths, one can determine in $O(\log^4 n)$ time whether there is some square $s$ with $\Node{s} \in \Pathq{q}$ and with $\Interval{\Pathq{\jmark{\textsc{I}}}}{s} = [\ell,r]$ where $\ell \in [\ell_1,\ell_2]$ and $r \in [r_1,r_2]$, and if so, report the square. \end{lemma} \begin{proof} From Lemma~\ref{l:geomofintervals}, we know that any such square must have one corner in the region $R_\ell$ and the other in $R_r$, as defined in that lemma. Both of these regions are easily computed, and are of constant complexity, being either rectangles or L-shapes. Thus, after decomposing each L-shape into two rectangles, four calls to $\jop{RangeSearch}$ in the search structure suffice to answer the query. \end{proof} \begin{lemma} Given a path $\jmark{P} \in \jmark{Q_{\text{paths}}}$, a quadrant $q$, and values $\ell_1 \leq \ell_2$, one can determine in $O(\log^5 n)$ time whether there is some square $s$ with $\Node{s} \in \Pathq{q}$ and with $\Interval{\Pathq{q}}{s} = [\ell,r]$ where $\ell \in [\ell_1,\ell_2]$, and among all such squares report the one with minimal $r$. \end{lemma} \begin{lemma} Given a path $\jmark{P} \in \jmark{Q_{\text{paths}}}$, a quadrant $q$, and values $r_1 \leq r_2$, one can determine in $O(\log^5 n)$ time whether there is some square $s$ with $\Node{s} \in \Pathq{q}$ and with $\Interval{\Pathq{q}}{s} = [\ell,r]$ where $r \in [r_1,r_2]$, and among all such squares report the one with maximal $\ell$. \end{lemma} \begin{proof} (of both lemmas) Use Lemma~\ref{l:algogeom} and binary search to find the minimal $r$/maximal $\ell$. \end{proof} \begin{lemma} Given a path $\jmark{P} \in \jmark{Q_{\text{paths}}}$, a quadrant $q$, and a square $s \in \Squares{\Pathq{q}}$, the values of $\Depth{s}$ and $\Depthmax{\Pathq{q}}{s}$ can be computed in $O(\log n)$ time. \end{lemma} \begin{proof} From $s$ we can compute $\Node{s}$, and from the size of $\Square{\Node{s}}$ the negation of the depth can be obtained by a discrete binary logarithm. For $\Depthmax{\Pathq{q}}{s}$, we need to find the depth of the deepest node in the part of $\Pathq{q}$ starting at $\Node{s}$ that $s$ intersects the center of. This is just a binary search along the path restricted to the nodes of quadrant $q$, which can be done with the aid of the link-cut tree in $O(\log n)$ time. \end{proof} \subsection{Splitting and Merging of Interval Structures} \label{s:splitmergeintervals} This subsection describes in detail what the path structures do when they need to split or merge interval structures. First we describe why this is not straightforward and provide some justification for the overall complexity of our structure. The curious reader who has made it this far may wonder why we have gone through all the effort to make sure that the dynamic interval structure only interfaces with the intervals via the operations of Lemma~\ref{lem:ors}, and that it does not store any intervals not in the current independent set. Can't it just store in a BST all intervals that are currently in its (full, not just independent) set? Alas, the answer is no. There are two reasons why our more complex approach is needed.
The first is that when a node transitions from internal to monochild, and thus its squares would join a path, we would need to create a structure representing a set of possibly large size. If we had to pass all of these intervals into the dynamic interval structure, we would lose our runtime guarantees. We get around this using the fact that these squares/intervals are already in the search structure: by changing the mark of all of them (in polylogarithmic time) they become visible via the search structure to the interval structure. \begin{figure}[ht] \centering \includegraphics{Ind_Rect_figs.pdf} \caption{Intervals change. The interval of a square $s$ on a path $\Pathq{\jmark{\textsc{I}}}$ spans the range of depths from the depth of the highest node on $\Pathq{\jmark{\textsc{I}}}$ whose center it intersects to the depth of the deepest node on $\Pathq{\jmark{\textsc{I}}}$ whose center it intersects. So, for example, in the figure, square $s_0$ has node $\jmark{\eta}(s_0)=\jmark{\eta}_3$, which is on the path from $\jmark{\eta}_1$ to $\jmark{\eta}_8$. As it intersects the centers of the nodes from $\jmark{\eta}_3$ to $\jmark{\eta}_8$, its interval is $\Interval{\Pathq{\jmark{\textsc{I}}}}{s}=[\Depth{\jmark{\eta}_3},\Depth{\jmark{\eta}_8}]$. But what happens if the square $s_{\text{del}}$ is deleted, which causes the leaf $\jmark{\eta}'_{10}$ to be removed? The node $\jmark{\eta}_9$ would then be monochild instead of being an internal node, and the path $P$, the node $\jmark{\eta}_9$, and the path $P'$ will be merged into one path, illustrated on the right. But on this path there are now nodes beyond the former last one, $\jmark{\eta}_8$, which square $s_0$ intersects. As a result $\Interval{\Pathq{\jmark{\textsc{I}}}}{s}$ is now $[\Depth{\jmark{\eta}_3},\Depth{\jmark{\eta}_{18}}]$. Note that this expansion of an interval only happens in the limited case where an interval already included the last node on a path. Observe that if one were to view this process in reverse, starting with the after picture and inserting $s_{\text{del}}$, the effect is to take the two intervals which span the node which becomes internal and breaks the path into two, $s_2$ and $s_0$, and clip them to the depth of the bottom node of the new top path. } \label{fig:depthschange} \end{figure} The second subtlety is that given a square $s$, its interval $\Interval{\Pathq{q}}{s}\coloneqq [\Depth{s},\Depthmax{\Pathq{q}}{s}]$ could change! Recall that $\Depthmax{\Pathq{q}}{s}$ is the depth of the deepest node on $\Pathq{q}$ that $s$ intersects the center of. But what happens if $s$ intersects the deepest node on $\Pathq{q}$, and then, because of a merge, $\jmark{P}$ becomes longer? The depth $\Depthmax{\Pathq{q}}{s}$ could increase. See Figure~\ref{fig:depthschange} for a worked-out example of how this can happen. Our notation reflects this: $\Interval{\Pathq{q}}{s}$ includes the monotone path $\Pathq{q}$ precisely because the interval is a function of the monotone path and can change if the monotone path that it lies on changes. So, we must not store the intervals $[\Depth{s},\Depthmax{\Pathq{q}}{s}]$ explicitly, as in a single path merge an unbounded number of intervals could change, and what they change to will depend on the path that is merged on the bottom. This seems hopeless, until one realizes that this uncertainty as to the right endpoint of an interval is only among those intervals that intersect the last node in the path, thus only those whose right endpoint is to the right of the rightmost left endpoint.
This is the magic of the search structure: given a square $s$ on a node of type $q$ on path $\jmark{P}$, and the $\jmark{P_{\text{top}}}$ and $\jmark{P_{\text{bottom}}}$ of the path, the depth of the deepest node of type $q$ on the path whose center $s$ intersects can be computed; this is very much a function of the path. Given all of this, suppose we have two path structures $P_1$ and $P_2$ that are to be merged, where $P_1$ has a dynamic interval structure $S_1$ and $P_2$ has dynamic interval structure $S_2$. Suppose $P_1$ has nodes with depths from $d_1$ to $d'_1$ and $P_2$ has depths from $d_2$ to $d'_2$, with $d_1\leq d'_1 < d_2 \leq d'_2$. We first make one note: when storing an interval representing depths $[a,b]$, we actually store $[a-\frac{1}{3},b+\frac{1}{3}]$ in the dynamic interval structure. This makes no difference to anything said so far, as the intervals we store and the queries we make have integer depths, but it will allow us in the next paragraph to insert an interval which is sure to be disjoint from, or contained in, all others. Thus, before merging, we know that $S_1$ stores intervals with coordinates in $[d_1-\frac{1}{3},d'_1+\frac{1}{3}]$, $S_2$ stores intervals in the range $[d_2-\frac{1}{3},d'_2+\frac{1}{3}]$, and as the depths are integer with $d'_1<d_2$, these ranges are disjoint. After the merge, some intervals in $S_1$ which had $d'_1+\frac{1}{3}$ as their right endpoint may now have right endpoints in $[d_2,d'_2+\frac{1}{3}]$. As shown in Lemma~\ref{lem:insert_superset}, the dynamic interval structure can support intervals growing, so long as they are not part of the independent set. So, before merging the structures, we insert the interval $[d'_1-\frac{1}{6},d'_1+\frac{2}{6}]$ into $S_1$. This has the effect that any interval that might be elongated contains the newly inserted interval and, due to the $k$-valid property, is not part of the independent set. It is at this point that we view the intervals as being elongated. Then the merge operation is carried out on the structure. Then $[d'_1-\frac{1}{6},d'_1+\frac{2}{6}]$ is removed from the resultant structure. For splitting, a similar phenomenon occurs: if we are to split a path into two paths with depths in the ranges $[d_1,d'_1]$ and $[d_2,d'_2]$, during the split any intervals which spanned these two ranges will be stored in the first dynamic interval structure and will have their right endpoints clipped to $d'_1$. In Observation~\ref{obs:valid_leftmost} we have shown that this can be done. \subsection{Hypercubes and Beyond} \label{s:hyper} We note that while we have presented everything for squares, everything holds for hypercubes in higher dimensions. Two $2^d$ factors are lost in the approximation, for a total loss of $2^{2d}$. The first comes from the chance that a hypercube is centered. Quadtrees naturally extend to higher dimensions, as do the notions of a monotone subpath. In dimension $d$, there will be $2^d$ different monotone subpaths, so taking the best of them (or an approximation of the best) loses a factor of $2^d$ rather than the 4 of Fact~\ref{f:max4}. As for the runtime, the searching structure will be $2d$-dimensional instead of $4$-dimensional, thus queries in this structure will cost $O(\log^{2d} n)$ instead of $O(\log^4 n)$. As we binary search in that structure, this brings the final runtime up to $O(\log^{2d+1} n)$ instead of $O(\log^5 n)$. Alas, our methods do not extend obviously to general rectangles or circles.
Our initial trick of throwing away all squares that are not centered relative to the quadtree works well for any fat object, but will fail for general rectangles. Lemma~\ref{l:monotone}, as illustrated in Figure~\ref{l:monotone}, relies very much on the fact that if an object contains two points in opposite quadrants, it must contain the origin; this fact holds only for axis-aligned rectangles. This causes our arguments to break down for circles, and for other objects of possible interest, including non-axis-aligned squares.
{ "timestamp": "2020-07-20T02:03:58", "yymm": "2007", "arxiv_id": "2007.08643", "language": "en", "url": "https://arxiv.org/abs/2007.08643" }
\section{Introduction} Within the last decade, adversarial learning has become a core research area for studying robustness in machine learning. Adversarial attacks have expanded well beyond the original setting of imperceptible noise to more general notions of robustness, and can broadly be described as capturing sets of perturbations that humans are naturally invariant to. These invariants, such as the requirement that facial recognition be robust to adversarial glasses \citep{sharif2019general} or that traffic sign classification be robust to adversarial graffiti \citep{eykholt2018robust}, form the motivation behind many real-world adversarial attacks. However, human invariants can also include notions which are not inherently adversarial; for example, image classifiers should be robust to common image corruptions \citep{hendrycks2019benchmarking} as well as changes in weather patterns \citep{michaelis2019benchmarking}. On the other hand, although there has been much success in defending against small adversarial perturbations, most successful and principled methods for learning robust models are limited to human invariants that can be characterized using mathematically defined perturbation sets, for example perturbations bounded in $\ell_p$ norm. After all, established guidelines for evaluating adversarial robustness \citep{carlini2019evaluating} have emphasized the importance of the perturbation set (or the threat model) as a necessary component for performing proper, scientific evaluations of adversarial defense proposals. However, this requirement makes it difficult to learn models which are robust to human invariants beyond these mathematical sets, where real-world attacks and general notions of robustness can often be virtually impossible to write down as a formal set of equations. This incompatibility between existing methods for learning robust models and real-world, human invariants raises a fundamental question for the field of robust machine learning: \begin{center} \emph{How can we learn models that are robust to perturbations without a predefined perturbation set?} \end{center} In the absence of a mathematical definition, in this work we present a general framework for learning perturbation sets from perturbed data. More concretely, given pairs of examples where one is a perturbed version of the other, we propose learning generative models that can ``perturb'' an example by varying a fixed region of the underlying latent space. The resulting perturbation sets are well-defined and can naturally be used in robust training and evaluation tasks. The approach is widely applicable to a range of robustness settings, as we make no assumptions on the type of perturbation being learned: the only requirement is to collect pairs of perturbed examples. Given the susceptibility of deep learning to adversarial examples, such a perturbation set will undoubtedly come under intense scrutiny, especially if it is to be used as a threat model for adversarial attacks. In this paper, we begin our theoretical contributions with a broad discussion of perturbation sets and formulate deterministic and probabilistic properties that a learned perturbation set should have in order to be a meaningful proxy for the true underlying perturbation set. The \emph{necessary subset} property ensures that the set captures real perturbations, properly motivating its usage as an adversarial threat model.
The \emph{sufficient likelihood} property ensures that real perturbations have high probability, which motivates sampling from a perturbation set as a form of data augmentation. We then prove the main theoretical result, that a learned perturbation set defined by the decoder and prior of a conditional variational autoencoder (CVAE) \citep{sohn2015learning} implies both of these properties, providing a theoretically grounded framework for learning perturbation sets. The resulting CVAE perturbation sets are well motivated, can leverage standard architectures, and are computationally efficient with little tuning required. We highlight the versatility of our approach using CVAEs with an array of experiments, where we vary the complexity and scale of the datasets, perturbations, and downstream tasks. We first demonstrate how the approach can learn basic $\ell_\infty$ and rotation-translation-skew (RTS) perturbations \citep{jaderberg2015spatial} in the MNIST setting. Since these sets can be mathematically defined, our goal is simply to measure exactly how well the learned perturbation set captures the target perturbation set on baseline tasks where the ground truth is known. We next look at a more difficult setting which can not be mathematically defined, and learn a perturbation set for common image corruptions on CIFAR10 \citep{hendrycks2019benchmarking}. The resulting perturbation set can interpolate between common corruptions, produce diverse samples, and be used in adversarial training and randomized smoothing frameworks. The adversarially trained models have improved generalization performance to both in- and out-of-distribution corruptions and better robustness to adversarial corruptions. In our final setting, we learn a perturbation set that captures \emph{real-world} variations in lighting using a multi-illumination dataset of scenes captured ``in the wild'' \citep{murmann2019dataset}. The perturbation set generates meaningful lighting samples and interpolations while generalizing to unseen scenes, and can be used to learn image segmentation models that are empirically and certifiably robust to lighting changes. All code and configuration files for reproducing the experiments as well as pretrained model weights for both the learned perturbation sets as well as the downstream robust classifiers are at \url{https://github.com/locuslab/perturbation_learning}. \section{Background and related work} \paragraph{Perturbation sets for adversarial threat models} Adversarial examples were initially defined as imperceptible examples with small $\ell_1$, $\ell_2$ and $\ell_\infty$ norm \citep{biggio2013evasion, szegedy2013intriguing, goodfellow2014explaining}, forming the earliest known, well-defined perturbation sets that were eventually generalized to the union of multiple $\ell_p$ perturbations \citep{tramer2019adversarial, maini2019adversarial, croce2019provable, stutz2019confidence}. Alternative perturbation sets to the $\ell_p$ setting that remain well-defined incorporate more structure and semantic meaning, such as rotations and translations \citep{engstrom2017rotation}, Wasserstein balls \citep{wong2019wasserstein}, functional perturbations \citep{laidlaw2019functional}, distributional shifts \citep{sinha2017certifying, sagawa2019distributionally}, word embeddings \citep{miyato2016adversarial}, and word substitutions \citep{alzantot2018generating, jia2019certified}. 
Other work has studied perturbation sets that are not necessarily mathematically formulated but well-defined from a human perspective, such as spatial transformations \citep{xiao2018spatially}. Real-world adversarial attacks tend to try to remain either inconspicuous to the viewer or meddle with features that humans would naturally ignore, such as textures on 3D printed objects \citep{athalye2017synthesizing}, graffiti on traffic signs \citep{eykholt2018robust}, shapes of objects to avoid LiDAR detection \citep{cao2019adversarial}, irrelevant background noise for audio \citep{li2019adversarialmusic}, or barely noticeable films on cameras \citep{li2019adversarial}. Although not necessarily adversarial, \citet{hendrycks2019benchmarking} propose the set of common image corruptions as a measure of robustness to informal shifts in distribution. \paragraph{Generative modeling and adversarial robustness} Relevant to our work is research that combines aspects of generative modeling with adversarial examples. While our work aims to learn \emph{real-world} perturbation sets from data, most work in this space differs in that it either aims to generate synthetic adversarial $\ell_p$ perturbations \citep{xiao2018generating}, runs user studies to define the perturbation set \citep{sharif2019general}, or simply does not restrict the adversary at all \citep{song2018constructing, bhattad2020unrestricted}. \citet{gowal2019achieving} trained a StyleGAN to disentangle real-world perturbations when no perturbation information is known in advance. However, the resulting perturbation set relies on a stochastic approximation, and it is not immediately obvious what this set will ultimately capture. Most similar is the concurrent work of \citet{robey2020model}, which uses a GAN architecture from image-to-image translation to model simple perturbations between datasets. In contrast to both of these works, our setting requires the collection of paired data to directly learn \emph{how} to perturb from perturbed pairs, without needing to disentangle any features or translate datasets, allowing us to learn more targeted and complex perturbation sets. Furthermore, we formulate desirable properties of perturbation sets for downstream robustness tasks, and formally prove that a conditional variational autoencoder approach satisfies these properties. This results in a principled framework for learning perturbation sets that is quite distinct from these GAN-based approaches in both setting and motivation. \paragraph{Adversarial defenses and data augmentation} Successful approaches for learning adversarially robust networks include methods which are both empirically robust via adversarial training \citep{goodfellow2014explaining,kurakin2016adversarial, madry2017towards} and also certifiably robust via provable bounds \citep{wong2017provable, wong2018scaling, raghunathan2018certified, gowal2018effectiveness, zhang2019towards} and randomized smoothing \citep{cohen2019certified, yang2020randomized}. Critically, these defenses require mathematically defined perturbation sets, which has prevented these approaches from learning robustness to more general, real-world perturbations. We directly build upon these approaches by learning perturbation sets that can be naturally and directly incorporated into robust training, greatly expanding the scope of adversarial defenses to new contexts.
Our work also relates to using non-adversarial perturbations via data augmentation to reduce generalization error \citep{zhang2017mixup, devries2017improved, cubuk2019autoaugment}, which can occasionally also improve robustness to unrelated image corruptions \citep{geirhos2018imagenet, hendrycks2019augmix, rusak2020increasing}. Our work differs in that rather than aggregating or proposing generic data augmentations, our perturbation sets can provide data augmentation that is targeted for a particular robustness setting. \section{Perturbation sets learned from data} For an example $x \in \mathbb R^m$, a perturbation set $\mathcal S(x)\subseteq \mathbb R^m$ is defined informally as the set of examples which are considered to be equivalent to $x$, and hence can be viewed as ``perturbations'' of $x$. This set is often used when finding an adversarial example, which is typically cast as an optimization problem that maximizes the loss of a model over the perturbation set. For example, for a classifier $h$, loss function $\ell$, and label $y$, an adversarial attack tries to solve the following: \begin{equation} \maximize_{x' \in \mathcal S(x)} \ell(h(x'), y). \end{equation} A common choice for $\mathcal S(x)$ is an $\ell_p$ ball around the unperturbed example, defined as $\mathcal S(x) = \{ x + \delta : \|\delta\|_p \leq \epsilon\}$ for some $\ell_p$ norm and radius $\epsilon$. This type of perturbation captures unstructured random noise, and is typically taken with respect to $\ell_p$ norms for $p \in \{0, 1, 2, \infty\}$, though more general distance metrics can also be used. Although defining the perturbation set is critical for developing adversarial defenses, in some scenarios the \emph{true} perturbation set may be difficult to describe mathematically. In these settings, it may still be possible to collect observations of (non-adversarial) perturbations, e.g., pairs of examples $(x,\tilde x)$ where $\tilde x$ is the \emph{perturbed data}, i.e., a perturbed version of $x$, from which we can learn an approximation of the true perturbation set. While there are numerous possible approaches one can take to learn $\mathcal S(x)$ from examples $(x, \tilde x)$, in this work we take a generative modeling perspective, where examples are perturbed via an underlying latent space. Specifically, let $g: \mathbb R^k \times \mathbb R^m \rightarrow \mathbb R^m$ be a generator that takes a $k$-dimensional latent vector and an input, and outputs a perturbed version of the input. Then, we can define a \emph{learned} perturbation set as follows: \begin{equation} \label{eq:perturbation_set} \mathcal S(x) = \{ g(z,x) : \|z\| \leq \epsilon\}. \end{equation} In other words, we have taken a well-defined norm-bounded ball in the latent space and mapped it to a set of perturbations with a generator $g$, which perturbs $x$ into $\tilde x$ via a latent code $z$. Alternatively, we can define a perturbation set from a \emph{probabilistic modeling} perspective, and use a distribution over the latent space to parameterize a distribution over examples. Then $\mathcal S(x)$ is a random variable defined by a probability distribution $p_\epsilon(z)$ over the latent space as follows: \begin{equation} \label{eq:probabilistic_perturbation_set} \mathcal S(x) \sim p_{\theta}\;\; \textrm{such that}\; \theta = g(z,x), \quad z \sim p_\epsilon \end{equation} where $p_\epsilon$ has support $\{z : \|z\| \leq \epsilon\}$ and $p_{\theta}$ is a distribution parameterized by $\theta=g(z,x)$.
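To make Equations \eqref{eq:perturbation_set} and \eqref{eq:probabilistic_perturbation_set} concrete, the following minimal Python sketch draws elements of a learned perturbation set by sampling from the latent $\epsilon$-ball and decoding through a generator. The additive toy generator and all names here are illustrative assumptions for exposition, not the paper's implementation; the sketch also samples uniformly from the ball, whereas the probabilistic set above would instead use a (truncated) prior $p_\epsilon$.
\begin{verbatim}
import numpy as np

def sample_latent_ball(k, eps, rng):
    """Sample uniformly from the k-dimensional l2 ball of radius eps."""
    u = rng.normal(size=k)
    u /= np.linalg.norm(u)                # uniform random direction
    r = eps * rng.uniform() ** (1.0 / k)  # radius with the correct density
    return r * u

def perturbation_set_sample(g, x, k, eps, rng):
    """Draw one element of S(x) = { g(z, x) : ||z|| <= eps }."""
    z = sample_latent_ball(k, eps, rng)
    return g(z, x)

# Toy usage with a hypothetical additive generator g(z, x) = x + W z.
rng = np.random.default_rng(0)
m, k, eps = 8, 3, 0.5
W = rng.normal(size=(m, k))
g = lambda z, x: x + W @ z
x = rng.normal(size=m)
x_tilde = perturbation_set_sample(g, x, k, eps, rng)
\end{verbatim}
In practice, the role of $g$ is played by the decoder network of the CVAE described in the next section.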
\subsection{General measures of quality for perturbation sets} \label{sec:properties} A perturbation set defined by a generative model that is learned from data lacks the mathematical rigor of previous sets, so care must be taken to properly evaluate how well the model captures real perturbations. In this section we formally define two properties relating a perturbation set to data, which capture natural qualities of a perturbation set that are useful for adversarial robustness and data augmentation. We note that all quantities discussed in this paper can be calculated on both the training and testing sets, which allows us to concretely measure how well the perturbation set generalizes to unseen datapoints. For this section, let $d:\mathbb R^m \times\mathbb R^m\rightarrow \mathbb R$ be a distance metric (e.g., mean squared error) and let $x, \tilde x \in \mathbb{R}^m$ be a perturbed pair, where $\tilde x$ is a perturbed version of $x$. To be a reasonable threat model for adversarial examples, one natural expectation is that a perturbation set should at least contain close approximations of the perturbed data. In other words, the set of perturbed data should be (approximately) a \emph{necessary subset} of the perturbation set. This notion of containment can be described more formally as follows: \begin{definition} A perturbation set $\mathcal S(x)$ satisfies the \emph{necessary subset property} with approximation error at most $\delta$ for a perturbed pair $(x, \tilde x)$ if there exists an $x'\in \mathcal S(x)$ such that $d( x', \tilde x) \leq \delta$. \end{definition} For a perturbation set defined by the generative model from Equation \eqref{eq:perturbation_set}, this amounts to finding a latent vector $z$ which best approximates the perturbed example $\tilde x$ by solving the following problem: \begin{equation} \label{eq:necessary_subset} \min_{\|z\|\leq \epsilon} d(g(z,x),\tilde x). \end{equation} This approximation error can be upper bounded with point estimates or estimated more accurately with projected gradient descent. Note that mathematically-defined perturbation sets such as $\ell_p$ balls around clean datapoints contain all possible observations and naturally have zero approximation error. Our second desirable property is specific to the probabilistic view from Equation \eqref{eq:probabilistic_perturbation_set}, where we would expect perturbed data to have a high probability of occurring under a probabilistic perturbation set. In other words, a perturbation set should assign \emph{sufficient likelihood} to perturbed data, described more formally in the following definition: \begin{definition} A probabilistic perturbation set $\mathcal S(x)$ satisfies the \emph{sufficient likelihood property} with likelihood at least $\delta$ for a perturbed pair $(x,\tilde x)$ if $\mathbb E_{p_\epsilon(z)}[p_\theta(\tilde x)] \geq \delta$ where $\theta = g(z,x)$. \end{definition} A model that assigns high likelihood to perturbed observations is likely to generate meaningful samples, which can then be used as a form of data augmentation in settings that care more about average-case than worst-case robustness. To measure this property, the likelihood can be approximated with a standard Monte Carlo estimate by sampling from the prior $p_\epsilon$. \section{Variational autoencoders for learning perturbation sets} In this section, we focus on one possible approach using conditional variational autoencoders (CVAEs) to learn the perturbation set \citep{sohn2015learning}.
We shift notation here to be consistent with the CVAE literature and consider a standard CVAE trained to generate $x\in \mathbb R^m$ from a latent space $z\in \mathbb R^k$ conditioned on some auxiliary variable $y$, which is traditionally taken to be a label. In our setting, the auxiliary variable $y$ is instead another datapoint such that $x$ is a perturbed version of $y$, but the theory we present is agnostic to the choice of auxiliary variable. Let the posterior distribution $q(z|x,y)$, prior distribution $p(z|y)$, and likelihood function $p(x|z,y)$ be the following multivariate normal distributions with diagonal variance: \begin{equation} q(z|x,y) \sim \mathcal N(\mu(x,y), \sigma^2(x,y)), \quad p(z|y) \sim \mathcal N(\mu(y), \sigma^2(y)), \quad p(x|z,y) \sim \mathcal N(g(z,y), I) \end{equation} where $\mu(x,y)$, $\sigma^2(x,y)$, $\mu(y)$, $\sigma^2(y)$, and $g(z,y)$ are arbitrary functions representing the respective encoder, prior, and decoder networks. CVAEs are trained by maximizing a likelihood lower bound \begin{equation} \log p(x|y) \geq \mathbb E_{q(z|x,y)}[\log p(x|z,y)] - KL(q(z|x,y) \| p(z|y)) \end{equation} also known as the SGVB estimator, where $KL(\cdot\|\cdot)$ is the KL divergence. The CVAE framework lends itself to a natural perturbation set by simply restricting the latent space to an $\ell_2$ ball that is scaled and shifted by the prior network. For convenience, we will define the perturbation set in the latent space \emph{before} the reparameterization trick, so the latent perturbation set for all examples is a standard $\ell_2$ ball $\{ u : \|u\|_2 \leq \epsilon\}$ where $z = u\cdot \sigma(y) + \mu(y)$. Similarly, a probabilistic perturbation set can be defined by simply truncating the prior distribution at radius $\epsilon$ (also before the reparameterization trick). \subsection{Theoretical motivation of using CVAEs to learn perturbation sets} Our theoretical results prove that optimizing the CVAE objective naturally results in both the necessary subset and sufficient likelihood properties outlined in Section \ref{sec:properties}, which motivates why the CVAE is a reasonable framework for learning perturbation sets. Note that these results are not immediately obvious, since the likelihood of the CVAE objective is taken over the full posterior while the perturbation set is defined over a constrained latent subspace determined by the prior. The proofs rely heavily on the multivariate normal parameterizations and require several supporting results relating the posterior and prior distributions. We give a concise, informal presentation of the main theoretical results in this section, deferring the full details, proofs, and supporting results to Appendix \ref{app:proofs}. Our results are based on the minimal assumption that the CVAE objective has been trained to some threshold as described in Assumption \ref{ass:objective}. \begin{assumption} \label{ass:objective} The CVAE objective has been trained to some thresholds $R,K_i$ as follows: $$\mathbb E_{q(z|x,y)}[\log p(x|z,y)] \geq R, \quad KL(q(z|x,y) \| p(z|y)) \leq \frac{1}{2}\sum_{i=1}^k K_i$$ where each $K_i$ bounds the KL-divergence of the $i$th dimension. \end{assumption} Our first theorem, Theorem \ref{thm:cvae_short}, states that the approximation error of a perturbed example is bounded by the components of the CVAE objective.
The implication here is that with enough representational capacity to optimize the objective, one can satisfy the necessary subset property by training a CVAE, effectively capturing perturbed data at low approximation error in the resulting perturbation set. \begin{theorem} \label{thm:cvae_short} Let $r$ be the Mahalanobis distance which captures $1-\alpha$ of the probability mass for a $k$-dimensional standard multivariate normal for some $0 < \alpha < 1$. Then, there exists a $z$ such that $\left\|\frac{z - \mu(y)}{\sigma(y)}\right\|_2 \leq \epsilon$ and $\|g(z,y) - x\|^2_2 \leq \delta$ for $$\epsilon = Br + \sqrt{\sum_i K_i}, \quad \delta = -\frac{1}{1-\alpha}\left(2R + m\log(2\pi)\right)$$ where $B$ is a constant dependent on $K_i$. Moreover, as $R\rightarrow -\frac{1}{2}m\log(2\pi)$ and $K_i \rightarrow 0$ (the theoretical limits of these bounds\footnote{In practice, VAE architectures generally have a non-trivial approximation gap in the posterior, which may make these theoretical limits unattainable.}), then $\epsilon \rightarrow r$ and $\delta \rightarrow 0$. \end{theorem} Our second theorem, Theorem \ref{thm:cvaesuperset_short}, states that the expected approximation error over the truncated prior can also be bounded by components of the CVAE objective. Since the generator $g$ parameterizes a multivariate normal with identity covariance, an upper bound on the expected reconstruction error implies a lower bound on the likelihood. This implies that one can also satisfy the sufficient likelihood property by training a CVAE, effectively learning a probabilistic perturbation set that assigns high likelihood to perturbed data. \begin{theorem} \label{thm:cvaesuperset_short} Let $r$ be the Mahalanobis distance which captures $1-\alpha$ of the probability mass for a $k$-dimensional standard multivariate normal for some $0 < \alpha < 1$. Then, the \emph{truncated expected approximation error} can be bounded with $$\mathbb E_{p_r(u)}\left[\|g(u\cdot\sigma(y) + \mu(y),y) - x\|_2^2\right] \leq - \frac{1}{1-\alpha}(2R + m \log(2\pi))H$$ where $p_r(u)$ is a multivariate normal that has been truncated to radius $r$ and $H$ is a constant that depends exponentially on $K_i$ and $r$. \end{theorem} The main takeaway from these two theorems is that optimizing the CVAE objective naturally results in a learned perturbation set which satisfies the necessary subset and sufficient likelihood properties. The learned perturbation set is consequently useful for adversarial robustness since the necessary subset property implies that the perturbation set does not ``miss'' perturbed data. It is also useful for data augmentation since the sufficient likelihood property ensures that perturbed data occurs with high probability. We leave further discussion of these two theorems to Appendix \ref{app:proof_discussion}. \begin{table}[t] \caption{Condensed evaluation of CVAE perturbation sets trained to produce rotation, translation, and skew transformations on MNIST (MNIST-RTS), CIFAR10 common corruptions (CIFAR10-C), and multi-illumination perturbations (MI). The approximation error measures the necessary subset property, and the expected approximation error measures the sufficient likelihood property. } \label{table:all_evaluate} \centering \begin{tabular}{lrrrr} \toprule Setting & Approx. error & Expected approx. error & CVAE Recon.
error & KL \\ \midrule MNIST-RTS & $0.11$ & $0.54$ & $0.04$ & $22.2$ \\ CIFAR10-C & $0.005$ & $0.029$ & $0.001$ & $69.3$ \\ MI & $0.006$ & $0.049$ & $0.004$ & $65.8$ \\ \bottomrule \end{tabular} \end{table} \section{Experiments} Finally, we present a variety of experiments to showcase the generality and effectiveness of our perturbation sets learned with a CVAE. Our experiments for each dataset can be broken into two distinct types: the generative modeling problem of learning and evaluating a perturbation set, and the robust optimization problem of learning a classifier that is adversarially robust to the learned perturbation set. We note that our approach is broadly applicable, has no specific requirements for the encoder, decoder, and prior networks, and avoids the unstable training dynamics found in GANs. Furthermore, we do not have the blurriness typically associated with VAEs since we are modeling perturbations and not the underlying image. All code, configuration files, and pretrained model weights for reproducing our experiments are at \url{https://github.com/locuslab/perturbation_learning}. In all settings, we first train perturbation sets and evaluate them with a number of metrics averaged over the test set. We present a condensed version of these results in Table \ref{table:all_evaluate}, which establishes a quantitative baseline for learning real-world perturbation sets in three benchmark settings that future work can improve upon. Specifically, the approximation error measures the necessary subset property, the expected approximation error measures the sufficient likelihood property, and the reconstruction error and KL divergence are standard CVAE metrics. The full evaluation is described in Appendix \ref{app:perturbation}, and a complete tabulation of our results on evaluating perturbation sets can be found in Tables \ref{table:mnist_evaluate}, \ref{table:cifar10c_evaluate}, and \ref{table:mi_evaluate} in the appendix for each setting. We then leverage our learned perturbation sets in new downstream robustness tasks by simply using standard, well-vetted techniques for $\ell_2$ robustness directly on the latent space, namely adversarial training with an $\ell_2$ PGD adversary \citep{madry2017towards}, a certified defense with $\ell_2$ randomized smoothing \citep{cohen2019certified}, and an additional data-augmentation baseline via sampling from the CVAE truncated prior. Pseudo-code for these approaches can be found in Appendix \ref{app:pseudocode}. We defer the experiments on MNIST to Appendix \ref{app:mnist}, and spend the remainder of this section highlighting the main empirical results for the CIFAR10 and Multi-Illumination settings. Additional details and supplementary experiments can be found in Appendix \ref{app:cifar10c} for CIFAR10 common corruptions, and Appendix \ref{app:mi} for the multi-illumination dataset. \subsection{CIFAR10 common corruptions} In this section, we first learn a perturbation set which captures common image corruptions for CIFAR10 \citep{hendrycks2019benchmarking}.\footnote{We note that this is not the original intended use of the dataset, which was proposed as a general measure for evaluating robustness.
Instead, we use the dataset for the different purpose of learning perturbation sets.} We focus on the highest severity level of the blur, weather, and digital categories, resulting in 12 different corruptions that are more ``natural'' and unlike random $\ell_p$ noise (a complete description of the setting is in Appendix \ref{app:cifar10c}). We find that the resulting perturbation set accurately captures common corruptions with a mean approximation error of 0.005 as seen in Table \ref{table:all_evaluate}. A more in-depth quantitative evaluation as well as architecture and training details are in Appendix \ref{app:cifar10c_perturbation_set}. \begin{figure}[t] \centering \includegraphics[scale=0.95]{figures/cifar10_c_rectangle_long_cropflip_compact.pdf} \caption{Visualization of a learned perturbation set trained on CIFAR10 common corruptions. (top row) Interpolations from fog, through defocus blur, to pixelate corruptions. (middle row) Random corruption samples for three examples. (bottom row) Adversarial corruptions that misclassify an adversarially trained classifier at $\epsilon=10.2$.} \label{fig:cifar10c_compact} \vspace{-0.1in} \end{figure} \begin{table}[t] \caption{Adversarial robustness to CIFAR10 common corruptions with a CVAE perturbation set.} \label{table:cifar10c_robustness} \centering \begin{tabular}{lrrrrrr} \toprule & \multicolumn{3}{c}{Test set accuracy $(\%)$} & \multicolumn{3}{c}{Test set robust accuracy $(\%)$} \\ \cmidrule(r){2-4} \cmidrule(r){5-7} Method & Clean & Perturbed & OOD & $\epsilon=2.7$ & $\epsilon=3.9$ & $\epsilon=10.2$\\ \midrule CIFAR10-C data augmentation & $90.6$ & $87.7$ & $85.0$ & $42.4$ & $37.2$ & $17.8$ \\ CVAE data augmentation & $94.5$ & $\mathbf{90.5}$ & $89.6$ & $68.6$ & $63.3$ & $43.4$\\ CVAE adversarial training & $94.6$ & $90.3$ & $\mathbf{89.9}$ & $\mathbf{72.1}$ & $\mathbf{66.1}$ & $\mathbf{55.6}$\\ \midrule Standard training & $\mathbf{95.2}$ & $67.0$ & $68.1$ & $20.1$ & $17.8$ & $10.1$\\ AugMix \citep{hendrycks2019augmix} & $92.0$ & $68.8$ & $82.9$ & $39.5$ & $34.4$ & $16.8$\\ $\ell_2$ robust \citep{robustness} & $90.8$ & $74.4$ & $82.8$ & $58.4$ & $48.1$ & $20.6$\\ $\ell_\infty$ robust \citep{carmon2019unlabeled} & $89.7$ & $71.2$ & $80.3$ & $60.2$ & $50.7$ & $23.6$ \\ \bottomrule \end{tabular} \end{table} We qualitatively evaluate our perturbation set in Figure \ref{fig:cifar10c_compact}, which depicts interpolations, random samples, and adversarial examples from the perturbation set. For additional analysis of the perturbation set, we refer the reader to Appendix \ref{app:cifar10_pairing} for a study on using different pairing strategies during training and Appendix \ref{app:cifar10_latent} which finds semantic latent structure and visualizes additional examples. \paragraph{Robustness to corruptions} We next employ the perturbation set in adversarial training and randomized smoothing to learn models which are robust against worst-case CIFAR10 common corruptions. We report results at three radius thresholds $\{2.7, 3.9, 10.2\}$, which correspond to the $25$th, $50$th, and $75$th percentiles of latent encodings as described in Appendix \ref{app:cifar10_latent}. We compare to two data augmentation baselines of training on perturbed data or on samples drawn from the learned perturbation set, and also evaluate performance on three extra out-of-distribution corruptions (one from each of the weather, blur, and digital categories, denoted OOD) that are not present during training.
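As a rough illustration of what adversarial training with the learned set involves, the following sketch runs an $\ell_2$ PGD adversary directly in the latent space, decoding each candidate latent through the generator before computing the loss. The function names, signature, and step rule are our own assumptions for a single example; the authors' actual pseudo-code is given in Appendix \ref{app:pseudocode}.
\begin{verbatim}
import torch

def latent_pgd(g, mu, sigma, x, y, model, loss_fn, eps, steps=20, lr=0.1):
    """Sketch of an l2 PGD adversary over {u : ||u||_2 <= eps}; the
    perturbed input is g(u * sigma + mu, x), as in the CVAE latent set."""
    u = torch.zeros_like(mu, requires_grad=True)
    for _ in range(steps):
        x_adv = g(u * sigma + mu, x)      # decode latent into a perturbation
        loss = loss_fn(model(x_adv), y)   # loss to maximize
        grad, = torch.autograd.grad(loss, u)
        with torch.no_grad():
            u += lr * grad / (grad.norm() + 1e-12)  # normalized ascent step
            if u.norm() > eps:                      # project onto the eps-ball
                u *= eps / u.norm()
    return g(u.detach() * sigma + mu, x)
\end{verbatim}
Adversarial training then minimizes the model's loss on these decoded worst-case examples instead of on the clean inputs.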
We highlight some empirical results in Table \ref{table:cifar10c_robustness}, where we first find that training with the CVAE perturbation set can improve generalization. Specifically, using the CVAE perturbation set during training achieves $3$--$5\%$ improved accuracy over training directly on the common corruptions (data augmentation) across all non-adversarial metrics. These gains motivate learning perturbation sets beyond the setting of worst-case robustness as a way to improve standard generalization. Additionally, the CVAE perturbation set improves worst-case performance, with the adversarially trained model being the most robust at $66.1\%$ robust accuracy for $\epsilon=3.9$, whereas pure data augmentation on the corruptions achieves only $37.2\%$ at the same radius (and $17.8\%$ at $\epsilon=10.2$). Finally, we include a comparison to models trained with standard training, AugMix data augmentation, $\ell_\infty$ adversarial training, and $\ell_2$ adversarial training, none of which performs as well as our CVAE approach. We note that this is not too surprising, since these approaches have different goals and data assumptions. Nonetheless, we include these results for the curious reader, with additional details and discussion in Appendix \ref{app:baselines}. For certifiably robust models with randomized smoothing, we defer the results and discussion to Appendix \ref{app:cifar10c_certified}. \begin{figure}[t] \centering \includegraphics[scale=1]{figures/mi_unet_cropflip_adversarial_onerow.pdf} \caption{Pairs of MI scenes (left) and their adversarial lighting perturbations (right).} \label{fig:mi_adversarial_onerow} \vspace{-0.1in} \end{figure} \begin{table}[t] \caption{Learning image segmentation models that are robust to real-world changes in lighting with a CVAE perturbation set. } \label{table:mi_robustness} \centering \begin{tabular}{lrrrrrr} \toprule & \multicolumn{1}{c}{Test set accuracy (\%)} & \multicolumn{3}{c}{Test set robust accuracy (\%)}\\ \cmidrule(r){2-2} \cmidrule(r){3-5} Method & Perturbed & $\epsilon=7.35$ & $\epsilon=8.81$ & $\epsilon=17$\\ \midrule Fixed lighting angle & $37.2$ & $26.3$ & $24.2$ & $14.9$\\ MI data augmentation & $45.2$ & $38.0$ & $36.5$ & $27.1$\\ CVAE data augmentation & $41.5$ & $35.5$ & $33.9$ & $24.7$\\ CVAE adversarial training & $41.7$ & $39.4$ & $38.8$ & $35.4$\\ \bottomrule \end{tabular} \end{table} \subsection{Multi-illumination} Our last set of experiments looks at learning a perturbation set over multiple lighting conditions using the Multi-Illumination (MI) dataset \citep{murmann2019dataset}. The dataset consists of a thousand scenes captured in the wild under 25 different lighting variations, and our goal is to learn a perturbation set which captures real-world lighting conditions. Since the process of learning lighting variations is largely similar to the image corruptions setting, we defer most of the discussion on learning and evaluating the CVAE-based lighting perturbation set to Appendix \ref{app:mi_perturbation_set}. Note that our perturbation set accurately captures real-world changes in lighting with a low approximation error of 0.006 as seen in Table \ref{table:all_evaluate}. We qualitatively evaluate our perturbation set by depicting adversarial examples in Figure \ref{fig:mi_adversarial_onerow}, with more visualizations (including samples and interpolations) in Appendix \ref{app:mi_visualizations}.
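Both robustness tables include a CVAE data augmentation baseline that trains on random samples from the learned probabilistic set. As a sketch of how such samples might be drawn (the naive rejection-sampling strategy and the prior interface are our assumptions, not the authors' implementation, whose pseudo-code is in their appendix):
\begin{verbatim}
import torch

def sample_truncated_prior(mu, sigma, eps):
    """Sample z = u * sigma + mu, with u a standard normal truncated to
    the l2 ball of radius eps (naive rejection sampling)."""
    while True:
        u = torch.randn_like(mu)
        if u.norm() <= eps:
            return u * sigma + mu

def augment_batch(g, prior, xs, eps):
    """Replace each clean example with a random perturbation drawn from
    the learned probabilistic perturbation set."""
    out = []
    for x in xs:
        mu, sigma = prior(x)  # prior network conditioned on the example
        z = sample_truncated_prior(mu, sigma, eps)
        out.append(g(z, x))   # decode into a perturbed example
    return torch.stack(out)
\end{verbatim}
Rejection sampling is workable here because the radii used correspond to central percentiles of the latent encodings, so the acceptance rate of the rejection step stays reasonable.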
\paragraph{Robustness to lighting perturbations} We devote the remainder of this section to studying the task of generating material segmentation maps which are robust to lighting perturbations, using our CVAE perturbation set. We highlight that adversarial training improves robustness to worst-case lighting perturbations over directly training on the perturbed examples, increasing robust accuracy from $27.1\%$ to $35.4\%$ at the maximum radius $\epsilon=17$. Additional results on certifiably robust models with randomized smoothing can be found in Appendix \ref{app:mi_robustness}. \section{Conclusion} In this paper, we presented a general framework for learning perturbation sets from data when the perturbation cannot be mathematically defined. We outlined deterministic and probabilistic properties that measure how well a perturbation set fits perturbed data, and formally proved that a perturbation set based upon the CVAE framework satisfies these properties. This work establishes a principled baseline for learning perturbation sets with quantitative metrics that future work can improve upon, e.g., by using different generative modeling frameworks. The resulting perturbation sets open up new downstream robustness tasks such as adversarial and certifiable robustness to common image corruptions and lighting perturbations, while also potentially improving non-adversarial robust performance on natural perturbations. Our work opens a pathway for practitioners to learn machine learning models that are robust to targeted, real-world perturbations that can be collected as data.
{ "timestamp": "2020-10-09T02:16:12", "yymm": "2007", "arxiv_id": "2007.08450", "language": "en", "url": "https://arxiv.org/abs/2007.08450" }
\section{Introduction} Our work is motivated by the quest for high-level autonomy in service robots that can adapt to our everyday life. The key challenge comes from the fact that we cannot pre-program the robot, since the working environment of a service robot is unpredictable \cite{paulius2019survey}, and the tasks for the robot to accomplish could be new in the sense that the robot has never been trained on them before. Instead of waiting through hours of trial and error, we expect the robot to learn new skills instantly and achieve high-level tasks even when the tasks are only partially or vaguely specified, e.g., ``John is coming for dinner, set up the dinner table.'' Setting up a table for dinner may be new for the robot if it has never done the task before. The solution we propose is to learn related new skills through observing demo videos and to apply those skills to solve a vague task assignment. In the solution, we assume that the robot can go to the internet and fetch demo videos for the task at hand (much as we learn new skills by searching Google or watching YouTube videos). We also assume that the robot can reliably detect the objects in the demo videos and match them to objects in its surrounding environment. Our key idea is to formally specify the learned skills by employing a graph-based spatial temporal logic (GSTL), which was proposed recently for knowledge representation in autonomous robots \cite{liu2020graph}. GSTL enables us to formally represent both spatial and temporal knowledge that is essential for autonomous robots. It is also shown in \cite{liu2020graph} that the satisfiability problem in GSTL is decidable and can be solved efficiently by SAT. In this paper, we further ask (a) how to automatically mine GSTL specifications from demo videos and generate a domain theory, and (b) how to achieve automated task planning based on the newly learned domain theory. Specifically, for the first question, we propose a new specification mining algorithm that can learn a set of parametric GSTL formulas describing spatial and temporal relations from the video. By parametric GSTL formulas, we mean GSTL formulas whose temporal and spatial variables are yet to be decided. Specification mining for spatial logic or temporal logic has been studied separately in the literature \cite{kong2017temporal,nenzi2018robust,bombara2016decision,jin2015mining,bartocci2016formal}. However, we cannot simply combine the existing specification mining techniques for GSTL, as GSTL has a broader expressiveness (e.g., parthood, connectivity, and a metric extension) compared to existing spatial temporal logics \cite{liu2020graph}. Existing techniques face difficulties when modeling actions involving coupled spatial and temporal information with a metric extension, e.g., a hand holds a plate with a cup on top of it for one minute. To handle this difficulty, our basic idea is to generate simple spatial GSTL terms and construct more complicated spatial terms and temporal formulas from the simple spatial terms inductively. Specifically, we construct spatial terms by mining both parthood and connectivity of the spatial elements, and temporal formulas by considering preconditions and consequences of a given spatial term. The obtained parametric GSTL formulas can represent skills demonstrated by the video and form a domain theory to facilitate automated task planning.
For the second question, we propose an interacting proposer and verifier to achieve automated task planning based on the newly learned domain theory in parametric GSTL. Many approaches to automated task planning have been proposed in AI and robotics, which can be roughly divided into several groups, including graph searching \cite{weld1999recent}, model checking \cite{li2012planning}, dynamic programming \cite{zhou2018mobile}, and Markov decision processes (MDPs) \cite{zheng2019vector}. Despite the success of existing approaches, there is a large gap between planning and execution: an ordered task plan alone cannot guarantee its own feasibility. The interaction between the proposer and the verifier aims to fill this gap. The proposer generates ordered actions, and the verifier ensures the plan is feasible and executable by the robot. In the proposer, we use the available actions in the domain theory as basic building blocks and generate ordered actions for the verifier. The verifier checks temporal and spatial constraints posed by the domain theory and the sensors, solving an SMT satisfiability problem for the temporal constraints and a SAT satisfiability problem for the spatial constraints. The contributions of this paper are mainly twofold. First, we propose a new specification mining algorithm for GSTL, where a set of parametric GSTL formulas is learned from demo videos. The proposed specification mining algorithm can mine both spatial and temporal information from the video with limited data. The parametric GSTL formulas form the domain theory, which is used in the planning. Our work differs from existing work, in which the domain theory is assumed to be given. Second, we design an automatic task planning framework containing an interactive proposer and a verifier for autonomous robots. The proposer generates ordered actions based on the domain theory and the task assignment. The verifier checks the feasibility of the proposed task plan and generates the time instants needed for an executable action plan. The overall framework is able to independently solve a vague task assignment, producing detailed, executable action plans with limited human input. The rest of the paper is organized as follows. In Section \ref{related work}, we introduce related work on autonomous robots. In Section \ref{problem formulation and assumption}, we give a motivating scenario and formally state the problem. In Section \ref{Section: GSTL definitio}, we briefly introduce the graph-based spatial temporal logic, GSTL. In Section \ref{section:specification mining}, we introduce the specification mining algorithm based on demo videos. An automatic task planning framework is given in Section \ref{section:task planning}. We evaluate the proposed algorithms in Section \ref{section:evaluation} with a table setting example. Section \ref{Section: conclusion} concludes the paper.
\section{Related work}\label{related work}
A high level of autonomy for mobile robots is a very ambitious goal that needs support from many areas, such as navigation $\&$ mapping, perception, knowledge representation $\&$ reasoning, task planning, and learning. In this section, we briefly introduce the work most relevant to ours in knowledge representation, specification mining, and automated task planning. \subsection{Knowledge representation and reasoning} One of the most promising fields in knowledge representation and reasoning is the logic-based approach, where knowledge is modeled by predefined elementary (logic and non-logic) symbols \cite{wachter2018integrating}, and automated planning is performed through primitive operations manipulating the symbols \cite{hertzberg2008ai}. Classic logics such as propositional logic \cite{post1921introduction}, first-order logic \cite{mccarthy1960programs}, and description logic \cite{baader2003description} are well developed and can be used to represent knowledge with great expressive power in different domains. However, this is achieved at the expense of tractability. The satisfiability problem of classic logic is often undecidable, which further limits its application in autonomous robots. Furthermore, in general, classic logic fails to capture the temporal and spatial characteristics of knowledge. For example, it is difficult to capture a requirement such as a robot hand being required to hold a cup for at least five minutes. As spatial and temporal information are often particularly important for autonomous robots, spatial logic and temporal logic have been studied both separately \cite{cohn2001qualitative,raman2015reactive} and combined \cite{kontchakov2007spatial,haghighi2016robotic,bartocci2017monitoring,liu2020graph}. By integrating spatial and temporal operators with classic logic operators, spatial temporal logic shows great potential in specifying a wide range of task assignments for autonomous robots with automated reasoning ability. However, two significant concerns limit the applications of spatial temporal logic in autonomous robots. First, the knowledge needed for task planning is often given by human experts in advance. This dependence on human experts is caused by the lack of a specification mining algorithm bridging the real world and the symbolic knowledge representation. Second, there are few results in spatial temporal logic in which the task plan is automatically generated while remaining executable and explainable to robots.
Many existing spatial temporal logics are undecidable due to their combination of spatial operators and temporal operators \cite{kontchakov2007spatial}, and the resulting task plan may not be feasible for robots to complete. In summary, the lack of specification mining and executable task planning limits the applications of spatial temporal logic to autonomous robots. \subsection{Learning and specification mining} Learning is essential for autonomous robots, since deployment in the real world, with its considerable uncertainty, means any knowledge the robot has is unlikely to be sufficient. By focusing on specific scenarios, robots can increase their knowledge through learning, which has been demonstrated in applications such as assembly robots \cite{suomalainen2017geometric} and service robots \cite{decker2017service}. As the applications in autonomous robots are often task-oriented, the goal of learning is often to find a set of control policies for given tasks. Such policies can be learned through approaches such as learning from demonstration \cite{argall2009survey} and reinforcement learning \cite{tsurumine2019deep,sutton2018reinforcement}. In a logic-based approach, learning is often addressed by specification mining, where a set of logic formulas is learned from data or examples. Most of the recent research has focused on the estimation of parameters associated with a given logic structure \cite{asarin2011parametric,bartocci2013robustness,jin2015mining,yang2012querying}. However, the selected formula may not reflect achievable behaviors or may exclude fundamental behaviors of the system. Furthermore, by giving the formula structure \emph{a priori}, the mining procedure cannot derive new knowledge from the data. A few approaches, such as directed acyclic graphs \cite{kong2017temporal} and decision trees \cite{bombara2016decision}, have been explored for temporal logic in which the structure is not entirely fixed. Despite the success of specification mining, the majority of specification mining algorithms developed to date generate a purely reactive policy that maps directly from a state to an action without considering temporal relations among actions \cite{argall2009survey}. They have difficulty addressing complicated temporal and spatial requirements, such as accomplishing a particular task infinitely often or holding a cup for at least five minutes. One possible solution is encoding temporal and spatial information in the policy derivation process. With the ability to learn and reason, robots can independently solve a task assignment through automated task planning. We review related work on automated task planning below. \subsection{Automated task planning} Automated task planning determines the sequence of actions to achieve the task assignment. Existing planning approaches such as GRAPHPLAN \cite{weld1999recent}, STRANDS \cite{hawes2017strands}, CoBot \cite{veloso2012cobots}, and Tangy \cite{tran2017robots} are able to generate ordered actions for a given task assignment from users. Different systems vary in whether they consider preconditions and effects of robots' actions on time and resources. In GRAPHPLAN, both preconditions and effects of actions are modeled during the task planning. In STRANDS and CoBot, task planning is generated based on models (e.g., an MDP model for a working environment) learned from previous executions.
Even though existing work can generate ordered actions, there is no guarantee that the ordered actions are feasible and executable for robots when spatial and temporal constraints are considered. A verification process is needed for the ordered actions \cite{kunze2018artificial}. As robots are more adaptable to structured environments, the research trend is toward integrating learning and task planning in robots for less structured environments. The performance of robots is evaluated over variations in task assignment and available resources \cite{kunze2018artificial}, so that the plans are feasible and executable for robots. \section{Problem formulation}\label{problem formulation and assumption} We introduce the motivating scenario and a formal problem statement in this section. In this paper, we aim to develop an autonomous robot with the ability to learn available actions from examples and generate executable actions to fulfill a given task assignment. The motivating scenario is given in Fig. \ref{exp:task planning example}, where the initial table setup is given in the left figure, and the goal is to set up the dining table as shown in the right figure. We formally state the problem as follows. \begin{problem} Given a set of demo videos $\mathcal{G}$ and an initial table setup $s_1,s_2,...$ as shown on the left of Fig. \ref{exp:task planning example}, we aim to accomplish a target table setup $s^*_1,s^*_2,...$ shown on the right of Fig. \ref{exp:task planning example} through an executable task plan $\psi$ expressed as GSTL formulas. Here, $s_i$ and $s^*_i$ are GSTL spatial terms representing table setups. The problem is solved by solving the following two sub-problems. \begin{enumerate} \item Generate a domain theory $\Sigma=\{a_1,a_2,...\}$ in GSTL via specification mining based on the videos $\mathcal{G}$. \item Generate the task plan $\psi$ based on the initial setup $s_i$, the target setup $s^*_i$, and the domain theory $\Sigma$. \end{enumerate} \label{problem statement} \end{problem} \begin{figure} \centering \includegraphics[scale=0.54]{task_planning_example.pdf} \caption{An example of specification mining and automatic task planning for the dining table setting. The left figure is the initial table setup. The right figure is the target table setup.} \label{exp:task planning example} \end{figure} To solve Problem \ref{problem statement}, we adopt the following assumptions, with justification. First, we assume that we know the objects and concepts we are interested in and all the parthood relations for the objects. For example, we know ``hand is part of body part'' and ``cup is a type of tool.'' Second, we assume reliable object detection with an accurate position tracking algorithm is available, since there are many mature object detection algorithms \cite{zhao2019object} and stereo cameras like ZED can provide accurate 3D positions for objects \cite{chaudhary2018learning}. \section{Graph-based spatial temporal logic}\label{Section: GSTL definitio} In this section, we briefly introduce the graph-based spatial temporal logic (GSTL). First, we introduce the temporal and spatial representations for GSTL. \subsection{Temporal and spatial representations} There are multiple ways to represent time, e.g., continuous time, discrete time, and intervals. Since time intervals are of primary interest in autonomous robots, in this paper we use discrete-time intervals. We employ Allen interval algebra (IA) \cite{allen1983maintaining} to model the temporal relations between two intervals.
Allen interval algebra defines the following 13 temporal relationships between two intervals, namely before ($b$), meet ($m$), overlap ($o$), start ($s$), finish ($f$), during ($d$), equal ($e$), and their inverses ($^{-1}$), except for equal, which is its own inverse. As for the spatial representations, we use regions as the basic spatial elements instead of points. Within the qualitative spatial representation community, there is a strong tendency to take regions of space as the primitive spatial entity \cite{cohn2001qualitative}. In practice, a reasonable constraint to impose would be that regions are all rational polygons. To relate regions, we model parthood and connectivity, where parthood describes the relational quality of being a part, and connectivity describes whether two spatial objects are connected. GSTL further includes directional information in connectivity. This is done by extending Allen interval algebra into 3D, which is more suitable for autonomous robots. The relations between two spatial regions are defined as $\mathcal{R}=\{(A,B,C):A,B,C\in \mathcal{R}_{IA}\}$, yielding $13\times 13\times 13$ basic relations. An example is given in Fig. \ref{Relations in CA} to illustrate the spatial relations. For the spatial objects $X$ and $Y$ on the left, where $X$ is in front of, to the left of, and below $Y$, we have $X\{(b,b,o)\}Y$. For the spatial objects $X$ and $Y$ on the right, where $Y$ is completely on top of $X$, we have $X\{(e,e,m)\}Y$. \begin{figure}[H] \centering \includegraphics[scale=0.3]{CA.pdf} \caption{Representing directional relations between objects $X$ and $Y$ in 3D interval algebra} \label{Relations in CA} \end{figure} We apply a graph with a hierarchical structure to represent the spatial model. Denote $\Omega=\cup_{i=1}^{n}\Omega_i$ as the union of the sets of all possible spatial objects, where $\Omega_i$ represents a certain set of spatial objects or concepts. \begin{definition}[Graph-based Spatial Model] The graph-based spatial model with a hierarchical structure $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is constructed by the following rules. 1) The node set $\mathcal{V}=\{V_1,...,V_n\}$ consists of a group of node sets, where each node set $V_k$ represents a finite subset of spatial objects from $\Omega_i$. Denote the number of nodes for node set $V_k$ as $n_k$. At each layer, $V_k=\{v_{k,1},...,v_{k,n_k}\}$ contains nodes which represent $n_k$ spatial objects in $\Omega_i$. 2) The edge set $\mathcal{E}$ is used to model the relationship between nodes, such as whether two nodes are adjacent or whether one node is included within another node. $e_{i,j}\in \mathcal{E}$ if and only if $v_i$ and $v_j$ are connected. 3) $v_{k,i}$ is a \emph{parent} of $v_{k+1,j}$, $\forall k\in[1,...,n-1]$, if and only if object $v_{k+1,j}$ belongs to object $v_{k,i}$. $v_{k+1,j}$ is called a \emph{child} of $v_{k,i}$ if $v_{k,i}$ is its parent. Furthermore, if $v_i$ and $v_j$ are a parent-child pair, then $e_{i,j}\in\mathcal{E}$. $v_i$ is a \emph{neighbor} of $v_j$ and $e_{i,j}\in\mathcal{E}$ if and only if there exists $k$ such that $v_i\in V_k$, $v_j\in V_k$, and the minimal distance between $v_i$ and $v_j$ is less than a given threshold $\epsilon$. \end{definition} An example is given in Fig. \ref{exp:Graph with a hierarchy structure} to illustrate the proposed spatial model. In Fig. \ref{exp:Graph with a hierarchy structure}, $V_1=\{kitchen\}$, $V_2=\{body~part$, $tool$, $material\}$ and $V_3=\{head$, $hand$, $cup$, $bowl$, $table$, $milk$, $butter\}$.
The parent-child relationships are drawn in solid lines, and the neighbor relationships are drawn in dashed lines. Each layer represents the space with different spatial concepts or objects by taking categorical values from $\Omega_i$, and connections are built between layers. The hierarchical graph can express facts such as ``head is part of a body part'' and ``cup holds milk.'' \begin{figure} \centering \includegraphics[scale=0.5]{Hierachical-graph.pdf} \caption{The hierarchical graph with three basic spatial operators: parent, child, and neighbor, where the parent-child relations are drawn in solid lines and the neighbor relations are drawn in dashed lines.} \label{exp:Graph with a hierarchy structure} \end{figure} Based on the temporal and spatial models above, the spatial temporal signals we are interested in are defined as follows. \begin{definition}[Spatial Temporal Signal] A spatial temporal signal $x(v,t)$ is defined as a scalar function for node $v$ at time $t$ \begin{equation} x(v,t): V\times T \rightarrow D, \end{equation} where $D$ is the signal domain. \end{definition} \subsection{Graph-based spatial temporal logic} With the temporal model and spatial model in mind, we now give the formal syntax and semantics of GSTL. \begin{definition}[GSTL Syntax]\label{definition: GSTL syntax} The syntax of a GSTL formula is defined recursively as \begin{equation} \begin{aligned} &\tau := \mu ~|~ \neg\tau ~|~ \tau_1\wedge\tau_2 ~|~ \tau_1\vee\tau_2 ~|~ \mathbf{P}_A\tau ~|~ \mathbf{C}_A \tau ~|~ \mathbf{N}_A^{\left \langle *, *, * \right \rangle} \tau,\\ &\varphi := \tau ~|~ \neg \varphi ~|~ \varphi_1\wedge\varphi_2 ~|~ \varphi_1\vee\varphi_2 ~|~ \Box_{[\alpha,\beta]} \varphi~| ~ \varphi_1\sqcup_{[\alpha,\beta]}^{*}\varphi_2. \end{aligned} \label{complexity proof 1} \end{equation} where $\tau$ is a spatial term and $\varphi$ is a GSTL formula; $\mu$ is an atomic predicate (AP); negation $\neg$, conjunction $\wedge$, and disjunction $\vee$ are the standard Boolean operators; $\Box_{[\alpha,\beta]}$ is the ``always'' operator and $\sqcup_{[\alpha,\beta]}^{*}$ is the ``until'' temporal operator with an Allen interval algebra extension, where $[\alpha,\beta]$ is a real positive closed interval and $*\in\{b, o, d, \equiv, m, s, f\}$ is one of the seven temporal relationships defined in the Allen interval algebra. Spatial operators are ``parent'' $\mathbf{P}_A$, ``child'' $\mathbf{C}_A$, and ``neighbor'' $\mathbf{N}_A^{\left \langle *, *, * \right \rangle}$, where $A$ denotes the set of nodes which they operate on. As with the until operator, $*\in\{b, o, d, \equiv, m, s, f\}$. \end{definition} The parent operator $\mathbf{P}_A$ describes the behavior of the parent of the current node. The child operator $\mathbf{C}_A$ describes the behavior of children of the current node in the set $A$. The neighbor operator $\mathbf{N}_A^{\left \langle *, *, * \right \rangle}$ describes the behavior of neighbors of the current node in the set $A$. We first define an interpretation function before the semantics definition of GSTL. The interpretation function $\iota(\mu,x(v,t)): AP\times D \rightarrow \mathbb{R}$ interprets the spatial temporal signal as a number based on the given atomic predicate $\mu$. The qualitative semantics of a GSTL formula is given as follows. \begin{definition}[GSTL Qualitative Semantics] The satisfiability of a GSTL formula $\varphi$ with respect to a spatial temporal signal $x(v,t)$ at time $t$ and node $v$ is defined inductively as follows.
\begin{enumerate} \item $x(v,t)\models \mu$, if and only if $\iota(\mu,x(v,t))>0$; \item $x(v,t)\models \neg\varphi$, if and only if $\neg(x(v,t)\models \varphi)$; \item $x(v,t)\models \varphi\land\psi$, if and only if $x(v,t)\models \varphi$ and $x(v,t)\models \psi$; \item $x(v,t)\models \varphi\lor\psi$, if and only if $x(v,t)\models \varphi$ or $x(v,t)\models \psi$; \item $x(v,t)\models \Box_{[\alpha,\beta]}\varphi$, if and only if $\forall t'\in[t+\alpha,t+\beta]$, $x(v,t')\models \varphi$; \item $x(v,t)\models \Diamond_{[\alpha,\beta]}\varphi$, if and only if $\exists t'\in[t+\alpha,t+\beta]$, $x(v,t')\models \varphi$. \end{enumerate} The until operator with the interval algebra extension is defined as follows (a discrete-time sketch of the overlap case is given after this definition). \begin{enumerate} \item $({\bf x},t_k)\models \varphi\sqcup_{[\alpha,\beta]}^b\psi$, if and only if $({\bf x},t_k)\models\Box_{[\alpha,\beta]}\neg(\varphi\vee\psi)$ and $\exists t_1<\alpha,~\exists t_2>\beta$ such that $({\bf x},t_k)\models\Box_{[t_1,\alpha]}(\varphi\wedge\neg\psi)\wedge\Box_{[\beta,t_2]}(\neg\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup_{[\alpha,\beta]}^o\psi$, if and only if $({\bf x},t_k)\models\Box_{[\alpha,\beta]}(\varphi\wedge\psi)$ and $\exists t_1<\alpha,~\exists t_2>\beta$ such that $({\bf x},t_k)\models\Box_{[t_1,\alpha]}(\varphi\wedge\neg\psi)\wedge\Box_{[\beta,t_2]}(\neg\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup_{[\alpha,\beta]}^d\psi$, if and only if $({\bf x},t_k)\models\Box_{[\alpha,\beta]}(\varphi\wedge\psi)$ and $\exists t_1<\alpha,~\exists t_2>\beta$ such that $({\bf x},t_k)\models\Box_{[t_1,\alpha]}(\neg\varphi\wedge\psi)\wedge\Box_{[\beta,t_2]}(\neg\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup_{[\alpha,\beta]}^\equiv\psi$, if and only if $({\bf x},t_k)\models\Box_{[\alpha,\beta]}(\varphi\wedge\psi)$ and $\exists t_1<\alpha,~\exists t_2>\beta$ such that $({\bf x},t_k)\models\Box_{[t_1,\alpha]}(\neg\varphi\wedge\neg\psi)\wedge\Box_{[\beta,t_2]}(\neg\varphi\wedge\neg\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup^m\psi$, if and only if $\exists t_1<t<t_2$ such that $({\bf x},t_k)\models\Box_{[t_1,t]}(\varphi\wedge\neg\psi)\wedge\Box_{[t,t_2]}(\neg\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup_{[\alpha,\beta]}^s\psi$, if and only if $\exists t_1<\alpha<\beta<t_2$ such that $({\bf x},t_k)\models\Box_{[t_1,\alpha]}(\neg\varphi\wedge\neg\psi)\wedge\Box_{[\alpha,\beta]}(\varphi\wedge\psi)\wedge\Box_{[\beta,t_2]}(\neg\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup_{[\alpha,\beta]}^f\psi$, if and only if $\exists t_1<\alpha<\beta<t_2$ such that $({\bf x},t_k)\models\Box_{[t_1,\alpha]}(\neg\varphi\wedge\psi)\wedge\Box_{[\alpha,\beta]}(\varphi\wedge\psi)\wedge\Box_{[\beta,t_2]}(\neg\varphi\wedge\neg\psi)$. \end{enumerate} The spatial operators are defined as follows.
\begin{enumerate} \item $x(v,t)\models \mathbf{P}_A\tau$, if and only if $\forall v_p\in A,~x(v_p,t)\models \tau$, where $v_p$ is a parent of $v$; \item $x(v,t)\models \mathbf{C}_{A}\tau$, if and only if $\forall v_c\in A,~x(v_c,t)\models \tau$, where $v_c$ is a child of $v$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle b, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$, where $v_n$ is a neighbor of $v$ and $v_n[x^+]<v[x^-]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle o, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$, where $v_n$ is a neighbor of $v$ and $v_n[x^-]<v[x^-]<v_n[x^+]<v[x^+]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle d, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$, where $v_n$ is a neighbor of $v$ and $v_n[x^-]<v[x^-]<v[x^+]<v_n[x^+]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle \equiv, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$, where $v_n$ is a neighbor of $v$ and $v_n[x^-]=v[x^-],~v[x^+]=v_n[x^+]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle m, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$, where $v_n$ is a neighbor of $v$ and $v_n[x^+]=v[x^-]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle s, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$, where $v_n$ is a neighbor of $v$ and $v_n[x^-]=v[x^-]$, $v_n[x^+]>v[x^+]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle f, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$, where $v_n$ is a neighbor of $v$ and $v_n[x^+]=v[x^+]$, $v_n[x^-]<v[x^-]$. \end{enumerate} \end{definition} Here $v[x^-]$ and $v[x^+]$ denote the lower and upper limits of node $v$ in the x-direction. The definitions of the neighbor operator in the y- and z-directions are analogous and omitted for simplicity. Notice that the inverse relations in IA can easily be defined by swapping the two GSTL formulas involved, e.g., $\varphi\sqcup_{[\alpha,\beta]}^{o^{-1}}\psi\Leftrightarrow\psi\sqcup_{[\alpha,\beta]}^o\varphi$. Based on the IA relations, we can define six spatial directions (namely left, right, front, back, top, down) for the ``neighbor" operator. For instance, $\mathbf{N}_A^{left}=\mathbf{N}_A^{\left \langle *,+,+\right \rangle}$, where $*\in\{b,m\}$ and $+\in\{d,\equiv,o\}$.
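To make the interval-algebra until concrete, the following Python sketch checks the overlap case $\varphi\sqcup_{[\alpha,\beta]}^o\psi$ on sampled Boolean traces. This is an illustrative discrete-time reading of the semantics above, with the switchover endpoints treated exclusively (so $\varphi\wedge\neg\psi$ is required only strictly before $\alpha$ and $\neg\varphi\wedge\psi$ only strictly after $\beta$); it is a sketch under these assumptions, not the paper's monitoring implementation.
\begin{verbatim}
def always(trace, a, b):
    """Box_{[a,b]}: the trace holds at every sampled step in [a, b]."""
    return all(trace[t] for t in range(a, b + 1))

def until_overlap(phi, psi, a, b):
    """phi U^o_{[a,b]} psi: both hold on [a,b]; phi alone holds on some
    [t1, a-1]; psi alone holds on some [b+1, t2]."""
    both = always([p and q for p, q in zip(phi, psi)], a, b)
    phi_only = [p and not q for p, q in zip(phi, psi)]
    psi_only = [q and not p for p, q in zip(phi, psi)]
    before = any(always(phi_only, t1, a - 1) for t1 in range(a))
    after = any(always(psi_only, b + 1, t2) for t2 in range(b + 1, len(phi)))
    return both and before and after

phi = [1, 1, 1, 1, 1, 0, 0, 0]   # phi holds on frames [0, 4]
psi = [0, 0, 0, 1, 1, 1, 1, 1]   # psi holds on frames [3, 7]
print(until_overlap(phi, psi, 3, 4))   # True: overlap exactly on [3, 4]
\end{verbatim}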
We further define six additional spatial operators $\mathbf{P}_{\exists}\tau$, $\mathbf{P}_{\forall}\tau$, $\mathbf{C}_{\exists}\tau$, $\mathbf{C}_{\forall}\tau$, $\mathbf{N}_{\exists}^{\left \langle *, *, * \right \rangle}\tau$ and $\mathbf{N}_{\forall}^{\left \langle *, *, * \right \rangle}\tau$ based on the definitions above. \begin{align*} &\mathbf{P}_{\exists}\tau=\vee_{i=1}^{n_p}\mathbf{P}_{A_i}\tau, ~\mathbf{P}_{\forall}\tau=\wedge_{i=1}^{n_p}\mathbf{P}_{A_i}\tau,~A_i=\{v_{p,i}\},\\ &\mathbf{C}_{\exists}\tau=\vee_{i=1}^{n_c}\mathbf{C}_{A_i}\tau, ~\mathbf{C}_{\forall}\tau=\wedge_{i=1}^{n_c}\mathbf{C}_{A_i}\tau,~A_i=\{v_{c,i}\},\\ &\mathbf{N}_{\exists}^{\left \langle *, *, * \right \rangle}\tau=\vee_{i=1}^{n_n}\mathbf{N}_{A_i}^{\left \langle *, *, * \right \rangle}\tau, ~\mathbf{N}_{\forall}^{\left \langle *, *, * \right \rangle}\tau=\wedge_{i=1}^{n_n}\mathbf{N}_{A_i}^{\left \langle *, *, * \right \rangle}\tau,~A_i=\{v_{n,i}\}, \end{align*} where $v_{p,i}$, $v_{c,i}$, $v_{n,i}$ are the parents, children, and neighbors of $v$, respectively, and $n_p$, $n_c$, $n_n$ are the numbers of parents, children, and neighbors of $v$, respectively. As we can see, the syntax definition of GSTL implies the following assumption, which is reasonable for applications such as autonomous robots. \begin{assumption}[Domain closure] The only objects in the domain are those representable using the existing symbols, which do not change over time. \end{assumption} The restriction that no temporal operators are allowed in a spatial term is also reasonable for robotics, since predicates are usually used to represent objects such as cups and bowls, and we do not expect cups to change into bowls over time. Thus, no temporal operator is needed inside a spatial term. \section{Specification mining based on video}\label{section:specification mining} One of the key steps in employing spatial temporal logics for autonomous robots is specification mining. Specifically, it is crucial for autonomous robots to learn new information in the form of GSTL formulas directly from the environment via sensors (e.g., video) without human input. In this section, we introduce an algorithm that mines a set of parametric GSTL formulas from a demo video.
We first pre-process each video and store it as a sequence of graphs $\mathcal{G}^i=(G_{1}^i,...,G_{T}^i)$, where $G_{t}^i=(V_{t}^i,W_{t}^i)$ represents frame $t$ of video $i$. $V_{t}^i=(v_{t,1}^i,...,v_{t,k}^i)$ stores the objects in frame $t$, where $v_{t,k}^i$ is an object such as ``cup" or ``hand". $w_{t,i,j}^i\in W_t^i$ stores the 3D directional information (e.g., left, right, front, back, top, down) between objects $v_{t,i}^i$ and $v_{t,j}^i$ at frame $t$; it can be obtained easily from the 3D information returned by the stereo camera. The specification mining procedure is introduced as follows, with an example to illustrate the algorithm. The basic idea of the proposed specification mining is to first build spatial terms inductively and then construct more complicated temporal formulas by assembling the spatial terms from the previous steps. Specifically, for each frame $G_t^i$ of video $i$, we generate a spatial term $\nu$ for each object detected in $G_t^i$. For autonomous robots, both connectivity and parthood spatial relations are crucial for making decisions; thus, we need to mine both from each frame. For connectivity, we generate spatial terms $\nu_1\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}\nu_2$ if the distance between the objects represented by the spatial terms $\nu_1$ and $\nu_2$ is less than a given threshold $d$. The direction variable $*$ is replaced with the proper 3D spatial relations between the objects represented by $\nu_1$ and $\nu_2$: if the relative position of the two objects matches any of the six directions defined in the neighbor operator semantics, we replace ${\left \langle *, *, * \right \rangle}$ with the corresponding directional relations. For parthood, we generate spatial terms $\mathbf{P}_\exists\nu$ for each object in $G_t^i$. As mentioned in Section \ref{problem formulation and assumption}, we assume the parthood relations for the objects of interest are known. Next, we build more complicated spatial terms by combining spatial terms from the previous steps. Assuming the hierarchical graph defined in GSTL has three layers, we obtain the following spatial terms for frame $G_t^i$, which include both connectivity and parthood spatial information. \begin{equation} \begin{aligned} &\mathbf{C}_\exists^2(\nu_1\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}\nu_2),~ \mathbf{C}_\exists^2(\mathbf{P}_\exists\nu),\\ &\mathbf{C}_\exists^2(\nu_1\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}\mathbf{P}_\exists\nu_2),~ \mathbf{C}_\exists^2(\mathbf{P}_\exists\nu_1\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}\mathbf{P}_\exists\nu_2).
\end{aligned} \label{spatial terms templates} \end{equation} For the example in Fig.~\ref{hsvfilter}, we can generate the following GSTL terms for $G_t^i$, where $r_1$ states that the cup is behind the plate, $r_2$ states that the fork is to the right of the plate, $r_3$ states that the hand is grabbing the cup, $r_4$ states that there are tools in the current frame, $r_5$ states that a hand is operating a tool, and $r_6$ states that body parts and tools are connected in the current frame. Some GSTL terms are omitted for simplicity. \begin{align*} &r_1=\mathbf{C}_\exists^2(cup\wedge\mathbf{N}_\exists^{behind} plate),\\ &r_2=\mathbf{C}_\exists^2(fork\wedge\mathbf{N}_\exists^{right} plate),\\ &r_3=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{\left \langle b, b, b^{-1} \right \rangle} cup),\\ &r_4=\mathbf{C}_\exists^2(\mathbf{P}_\exists tool),~ r_5=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{\left \langle b, b, b^{-1} \right \rangle}\mathbf{P}_\exists tool),\\ &r_6=\mathbf{C}_\exists^2(\mathbf{P}_\exists body ~part\wedge\mathbf{N}_\exists^{\left \langle b, b, b^{-1} \right \rangle}\mathbf{P}_\exists tool). \end{align*} To generate temporal formulas, we first merge consecutive frames $G_t^i$ that satisfy exactly the same set of GSTL terms, keeping only the first and the last frame. For example, assuming that for video $i$ all frames from $G_0^i$ to $G_{35}^i$ satisfy $\tau_1$ and $\tau_2$, we merge them by keeping only $G_0^i$, $G_{35}^i$, and the GSTL terms they satisfy. Then we generate an ``Always" formula for each GSTL term kept in the previous step by finding the maximal time interval over which it holds (a sketch of this merging step is given below). For example, $\varphi_1=\Box_{[0,90]}r_1$, $\varphi_2=\Box_{[100,190]}r_2$, and $\psi=\Box_{[45,120]}r_3$. In theory, the ``Always" GSTL formulas generated by the previous step include all the information from the video. However, they do not show any temporal relations between formulas. Thus, we use ``Until" operators to mine more temporal information based on a template with a temporal structure. As our goal is to build a domain theory focusing on available actions, we are interested in what the hand can do to tools. Specifically, we want to generate motion primitives as GSTL formulas that include the action itself together with its preconditions and effects. Thus, we generate the following GSTL formula \begin{equation} \begin{aligned} &a=(\Box_{[t_1,t_2]}\tau_1)\sqcup_{[\alpha_1,\beta_1]}^o(\Box_{[\alpha,\beta]}\tau_2)\sqcup_{[\alpha_2,\beta_2]}^o(\Box_{[t_3,t_4]}\tau_3),\\ &\tau_1=\mathbf{C}^2_{\exists}(\nu_1\wedge\mathbf{N}_{\exists}^{\left \langle *, *, * \right \rangle}\nu_2),~\nu_1\models\mathbf{P}_\exists tool,~\nu_2\models\mathbf{P}_\exists tool,\\ &\tau_2=\mathbf{C}^2_{\exists}(hand\wedge\mathbf{N}_{\exists}^{\left \langle *, *, * \right \rangle}\nu_1),~\nu_1\models\mathbf{P}_\exists tool,\\ &\tau_3=\mathbf{C}^2_{\exists}(\nu_1\wedge\mathbf{N}_{\exists}^{\left \langle *, *, * \right \rangle}\nu_3),~\nu_3\models\mathbf{P}_\exists tool. \end{aligned} \label{templates} \end{equation} As we can see from \eqref{templates}, $\tau_1$, $\tau_2$, and $\tau_3$ share the same object $\nu_1$ because the hand is operating that object. We check whether any ``Always" GSTL formulas satisfy \eqref{templates}; if so, we generate a GSTL formula by replacing each $\tau_i$ with the corresponding ``Always" GSTL formula.
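Returning to the frame-merging step referenced above, the following Python sketch extracts, for every mined GSTL term, the maximal frame intervals on which it holds, which is exactly what the ``Always" formulas record. The function name and the input encoding (one set of satisfied term names per frame) are our assumptions for illustration.
\begin{verbatim}
# A sketch of the "Always"-formula mining step, assuming each frame has
# already been mapped to the set of GSTL terms it satisfies.
def mine_always(frame_terms):
    """frame_terms: list over frames t of sets of term names satisfied at t.
    Returns {term: [(start, end), ...]} with maximal intervals."""
    intervals = {}
    active = {}  # term -> start frame of the current run
    for t, terms in enumerate(frame_terms):
        for term in terms:
            active.setdefault(term, t)
        for term in list(active):
            if term not in terms:            # run just ended at frame t-1
                intervals.setdefault(term, []).append((active.pop(term), t - 1))
    for term, start in active.items():       # runs still open at the end
        intervals.setdefault(term, []).append((start, len(frame_terms) - 1))
    return intervals

# E.g. r1 holds on frames 0..2 and 5, r3 on frames 1..3:
print(mine_always([{"r1"}, {"r1", "r3"}, {"r1", "r3"}, {"r3"}, set(), {"r1"}]))
# {'r1': [(0, 2), (5, 5)], 'r3': [(1, 3)]}
\end{verbatim}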
Denote the ``Always" formulas generated from the previous step as $\varphi_i$ and $\varphi_j$ ( formulas without hands) and $\psi_i$ (formulas with hands). If $\varphi_i$ and $\varphi_j$ have common objects and $\psi_i$ operates the object, then we check if they have the temporal relationship defined in \eqref{templates} if and only if their time intervals satisfy the \emph{overlap} relations defined in Allen's interval algebra. For example, $\varphi_1$, $\varphi_2$, and $\psi$ satisfy $a_1=\varphi_1\sqcup_{[45,90]}^{o}\psi\sqcup_{[100,120]}^{o}\varphi_2$. In the end, we replace the time stamps in the formulas with temporal variables as specific time instances do not apply to other applications. The specification mining algorithm is summarized in Algorithm \ref{specification mining algorithm}. The proposed specification mining based on video algorithm terminates in finite time for finite length video since the number of GSTL terms and formulas one can get is finite. \begin{algorithm}[ht] \SetAlgoLined \LinesNumbered \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{A set of video and parametric GSTL formulas templates $a$ in \eqref{templates}} \Output{Parametric GSTL formulas} \BlankLine For each video $i$, pre-process the video and stored each video as a sequence of graph $\mathcal{G}^i=(G_1^i,...,G_T^i)$\; \For{for each $G_t^i$}{ \For{for object $v_{t,k}^i$ in $G_t^i$ }{ Generate $\mathbf{C}_\exists^2(\mathbf{P}_\exists v_{t,k}^i)$ using the parthood information of $v_{t,k}^i$\; \If{the distance between $v_{t,k}^i$ and $v_{t,l}^i$ in $G_t^i$ is smaller than a given distance $d$}{ \If{$v_{t,k}^i$ and $v_{t,l}^i$ satisfy any formulas in \eqref{spatial terms templates}}{ Replace the $\nu_1$ and $\nu_2$ with $v_{t,k}^i$ and $v_{t,l}^i$\; Replace the directional variable $*$ with the corresponding 3D IA directions\; } } } } Merge consecutive $G_t^i$ with the exact same set of GSTL terms by only keeping the first and the last frame and generate ``Always" formula based on the frame and terms\; \If{``Always" GSTL formulas satisfy GSTL formula $a$ in \eqref{templates}}{ Replace $\tau_i$ with the ``Always" GSTL formulas\; Output the parametric GSTL formula\; } Output the parametric GSTL formula. \caption{Specification mining based on demo videos} \label{specification mining algorithm} \end{algorithm} \section{Automatic task planner based on domain theory}\label{section:task planning} From the specification mining through video, robots are able to learn a set of available actions to alter the environment along with its preconditions and effects. In this section, we focus on developing a task planner for autonomous robots to generate a detailed task plan from a vague task assignment using the available actions learned from the previous section. We first introduce the domain theory, which stores necessary information to accomplish the task for autonomous robots. Then we introduce the automatic task planner composed of the proposer and the verifier. In the end, an overall framework for autonomous robots that combines the task planner and domain theory is given. \subsection{Domain theory} On the one hand, the proposed GSTL formulas can be used to represent knowledge for autonomous robots. On the other hand, robots need a set of knowledge or common sense to solve a new task assignment. Thus, we define a domain theory in GSTL for autonomous robots which stores available actions for robots to solve a new task. In the domain theory, the temporal parameters are not fixed. 
The domain theory is defined as a set of parametric GSTL formulas as follows. \begin{definition} A domain theory $\Sigma$ is a set of parametric GSTL formulas that satisfies the following consistency condition. \begin{itemize} \item Consistent: $\forall \varphi_i,\varphi_j\in \Sigma$, there exists a set of parameters such that $\varphi_i\wedge\varphi_j$ is true. \end{itemize} \end{definition} For example, the set $\Sigma$ consisting of the following parametric GSTL formulas is a domain theory. \begin{equation} \begin{aligned} &\Box_{[t_1,t_2]}\mathbf{C}_\exists (tools \wedge \mathbf{C}_\exists (cup\vee plate \vee fork \vee spoon)) \\ &\varphi_1= \Box_{[t_1,t_2]}\mathbf{C}_\exists^2 (hand \wedge \mathbf{N}_\exists^{\left \langle *, *, * \right \rangle} cup),\\ &\varphi_2= \Box_{[t_1,t_2]}\mathbf{C}_\exists^2 (cup \wedge \mathbf{N}_\exists^{\left \langle d, d, m \right \rangle} table),\\ &\varphi_2 \sqcup_{[\alpha_1,\beta_1]}^o\varphi_1 \sqcup_{[\alpha_2,\beta_2]}^o \varphi_2. \end{aligned} \end{equation} It states common sense, such as that tools include cups, plates, forks, and spoons, as well as action primitives, such as that a hand grabs a cup from the table and puts it back after use. \begin{remark}[Modular reasoning] Our domain theory inherits the hierarchical structure of the hierarchical graph in the GSTL spatial model. This is an important feature and can be used to reduce the complexity of deduction systems significantly. Domain theories for real-world applications often exhibit a modular structure, in the sense that the domain theory contains multiple sets of facts with relatively few connections to one another \cite{lifschitz2008knowledge}. For example, a domain theory for a kitchen and a bathroom includes two relatively self-contained sets of facts with only a few connections, such as taps and switches. A deduction system that takes advantage of this modularity is more efficient, since it reduces the search space and returns fewer irrelevant results. Existing work on exploiting the structure of a domain theory for automated reasoning can be found in \cite{amir2005partition}. \end{remark} In this paper, we generate the domain theory through specification mining from demo videos. Using the algorithm from the previous section, we obtain the domain theory in \eqref{knowledge base} below; the mining process is presented in detail in the evaluation section. The domain theory in \eqref{knowledge base} is used in automated task planning. Notice that the domain theory is not limited to any specific initial state: it can be applied to any table setup involving a cup, plate, spoon, and fork.
\begin{equation} \begin{aligned} &s_1=\mathbf{C}^2_{\exists}(cup\wedge\mathbf{N}_{\exists}^{back}plate),~ s_2=\mathbf{C}^2_{\exists}(fork\wedge\mathbf{N}_{\exists}^{left}cup),~\\ &s_3=\mathbf{C}^2_{\exists}(spoon\wedge\mathbf{N}_{\exists}^{left}fork), s_4=\mathbf{C}^2_{\exists}(fork\wedge\mathbf{N}_{\exists}^{left}empty),\\ &s^*_1=\mathbf{C}^2_{\exists}(cup\wedge\mathbf{N}_{\exists}^{top}plate),~ s^*_2=\mathbf{C}^2_{\exists}(fork\wedge\mathbf{N}_{\exists}^{left}plate),~\\ &s^*_3=\mathbf{C}^2_{\exists}(spoon\wedge\mathbf{N}_{\exists}^{right}plate),\\ &a_1^{'}=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}cup),~ a_2^{'}=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}fork),\\ &a_3^{'}=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}spoon),~ a_4^{'}=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}plate),\\ &a_1=(\Box_{[t_1,t_2]}s_1)\sqcup_{[\alpha_1,\beta_1]}^o (\Box_{[t_3,t_4]} a_1^{'}) \sqcup_{[\alpha_2,\beta_2]}^o(\Box_{[t_5,t_6]}s^*_1),~\\ &a_2=(\Box_{[t_1,t_2]}s_2)\sqcup_{[\alpha_1,\beta_1]}^o (\Box_{[t_3,t_4]} a_2^{'}) \sqcup_{[\alpha_2,\beta_2]}^o(\Box_{[t_5,t_6]}s^*_2),~\\ &a_3=(\Box_{[t_1,t_2]}s_3)\sqcup_{[\alpha_1,\beta_1]}^o (\Box_{[t_3,t_4]} a_3^{'})\sqcup_{[\alpha_2,\beta_2]}^o(\Box_{[t_5,t_6]}s^*_3),~\\ &a_4=(\Box_{[t_1,t_2]}s_4)\sqcup_{[\alpha_1,\beta_1]}^o (\Box_{[t_3,t_4]} a_4^{'})\sqcup_{[\alpha_2,\beta_2]}^o(\Box_{[t_5,t_6]}s^*_2). \end{aligned} \label{knowledge base} \end{equation} \subsection{Control synthesis} The task planner takes environment information from sensors and available actions from the domain theory and turns a vague task assignment into a detailed task plan. For example, we give a robot the task assignment of setting up a dining table. A camera provides the current dining table setup, and the domain theory stores the actions the robot can take. The goal of the task planner is to generate the sequence of actions the robot needs to take so that it sets up the dining table as required. Specifically, we implement the task planner as two interacting components, namely a proposer and a verifier. The proposer first proposes a plan based on the domain theory and its situational awareness. The verifier then checks the feasibility of the proposed plan against the domain theory. If the plan is not feasible, the verifier asks the proposer for another plan; if the plan is feasible, the verifier outputs the plan to the robot for execution. The task planner may be invoked again if the situation changes during execution. \subsubsection{Proposer} For the proposer, we cast the planning problem as path planning on a graph $M=(\mathcal{S}, \mathcal{A},T)$, as shown in Fig. \ref{exp:task planning}, where a node $s_i\in \mathcal{S}$ represents a GSTL term over objects that holds true in the current situation. The initial term $s_0$ corresponds to a point or a set of points in $\mathcal{S}$, while the target terms $s^*_1$, $s^*_2$, and $s^*_3$ in $\mathcal{S}$ correspond to the accomplishment of the task. The actions $a$ available to the robot are given in $\mathcal{A}$ as GSTL formulas from the domain theory. The transition function $T: \mathcal{S}\times \mathcal{A} \rightarrow \mathcal{S}$ maps one spatial term to another and is triggered by an action $a\in \mathcal{A}$ that the robot takes. An example is given in Fig.
\ref{exp:task planning example}, where the goal is to set up the dinner table as shown on the right. The initial states $s_1,s_2,s_3$, target states $s^*_1,s^*_2,s^*_3$, and available actions $a_1,a_2,a_3,a_4$ are given in the domain theory (\ref{knowledge base}). The goal of the proposer is to find an ordered set of actions that transforms the initial spatial terms into the target terms. It is worth pointing out that the graph in Fig. \ref{exp:task planning} is not given \emph{a priori} to the robot: the robot needs to expand the graph and generate an action sequence by utilizing the information in the domain theory. Similar to task planning in GRAPHPLAN \cite{weld1999recent}, the proposer generates a potential solution in two steps, namely forward graph expansion and backward solution extraction; a sketch of one expansion step is given after Algorithm \ref{algorithm:proposer:expansion}. In the graph expansion, we expand the graph forward in time until the current spatial-term level includes all target terms and none of them are mutually exclusive. To expand the graph, we start with the initial terms and apply the available actions to them; the resulting terms, given by the transition function $T$, become the new current terms. We define a mutual exclusion relation (mutex) for actions and terms and label all mutex relations among them. Two actions are mutex if they satisfy one of the following conditions: 1) the effect of one action is the negation of the effect of the other; 2) the effect of one action is the negation of the other action's precondition; 3) their preconditions are mutex. Furthermore, we say two terms are mutex if all pairs of their supporting actions are mutex. If the current term level includes all target terms and there is no mutex among them, we move to the solution extraction phase, as a solution may exist in the current transition system. The algorithm is summarized in Algorithm \ref{algorithm:proposer:expansion}, where $\mathcal{A}_{s_i^k}$ is the set of supporting actions of $s_{i}^{k}$. \begin{algorithm}[ht] \SetAlgoLined \LinesNumbered \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{Observed terms $s_1,s_2,...,s_n$, available actions $a_1,a_2,...,a_l$, target terms $s^*_1,s^*_2,...,s^*_m$;} \Output{A graph with all target terms included;} \BlankLine Initialization: $s^0=\{s_1^0,s_2^0,...,s_n^0\}$ and $s^*=\{s^*_1,s^*_2,...,s^*_m\}$\; \While{$\exists s^*_i\not\in s^k$ or $\exists s^*_i ~\text{and}~ s^*_j \in s^k$ which are mutex}{ For all $s_i^k$ at the current level $k$, add $s_j^{k+1}$ to the next level $k+1$ if $s_j^{k+1}\in \{s_i^k\times a_i\}$\; \If{the effect of $a_i$ is the negation of the effect of $a_j$, or the effect of $a_i$ is the negation of the precondition of $a_j$, or the preconditions of $a_i$ and $a_j$ are mutex}{ Add a mutex link between $a_i$ and $a_j$\; } \If{$\forall a_i\in\mathcal{A}_{s_{i}^{k+1}}$ and $\forall a_j\in\mathcal{A}_{s_{j}^{k+1}}$, $a_i$ and $a_j$ are mutex}{Add a mutex link between $s_i^{k+1}$ and $s_j^{k+1}$\;} } \caption{Task planning for the graph expansion phase of the proposer} \label{algorithm:proposer:expansion} \end{algorithm}
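As referenced above, the following Python sketch illustrates one forward-expansion step. Actions are modeled STRIPS-style as (name, preconditions, effects, negated effects); this modeling and all names are our assumptions for illustration, not the paper's implementation. No-op actions carry terms over to the next level, and two terms become mutex only when all pairs of their supporting actions are mutex.
\begin{verbatim}
from itertools import combinations

def action_mutex(a, b):
    """Conditions 1)-2) above: conflicting effects, or one action's effect
    negates the other's precondition. Condition 3) (mutex preconditions)
    would additionally consult the previous level's term mutex; it is
    omitted in this sketch."""
    _, pre_a, eff_a, neg_a = a
    _, pre_b, eff_b, neg_b = b
    return bool(eff_a & neg_b or eff_b & neg_a or
                neg_a & pre_b or neg_b & pre_a)

def expand(terms, actions):
    """One expansion level: returns (next_terms, supporters, term_mutex)."""
    noop = lambda t: ("noop_" + t, {t}, {t}, set())
    acts = [noop(t) for t in terms] + \
           [a for a in actions if a[1] <= terms]      # applicable actions
    next_terms = set().union(*(a[2] for a in acts))
    supporters = {t: [a for a in acts if t in a[2]] for t in next_terms}
    amutex = {(a[0], b[0]) for a, b in combinations(acts, 2)
              if action_mutex(a, b)}
    # two terms are mutex iff *all* pairs of their supporters are mutex
    tmutex = {(s, t) for s, t in combinations(sorted(next_terms), 2)
              if all((a[0], b[0]) in amutex or (b[0], a[0]) in amutex
                     for a in supporters[s] for b in supporters[t])}
    return next_terms, supporters, tmutex

a1 = ("a1", {"s1"}, {"s1*", "s4"}, {"s2"})   # a1 deletes a2's precondition
a2 = ("a2", {"s2"}, {"s2*"}, {"s3"})
terms, sup, mutex = expand({"s1", "s2", "s3"}, [a1, a2])
print(sorted(terms))          # ['s1', 's1*', 's2', 's2*', 's3', 's4']
print(("s1*", "s2*") in mutex)  # True: their only supporters a1, a2 clash
\end{verbatim}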
In the solution extraction phase, we extract a solution backward, starting from the current term level. For each target term ${s^*_i}^k$ at the current term level $k$, we denote the set of its supporting actions by $\mathcal{A}_{{s^*_i}^{k}}$. We choose one action from each $\mathcal{A}_{{s^*_i}^{k}}$ such that no mutex relations hold among the chosen actions, formulate a candidate solution set $\mathcal{A}^k$ at this step, and denote the precondition terms of the selected actions by $\mathcal{S}_{pre}^{k}$. Then we check whether the precondition terms have mutex relations. If so, we terminate the search on $\mathcal{A}^k$ and choose another set of actions as a candidate solution, until all possible combinations have been enumerated. If no mutex relations are detected in $\mathcal{S}_{pre}^{k}$, we repeat the above backtracking step until a mutex is found or $\mathcal{S}_{pre}^{k}$ includes all initial terms. The solution extraction algorithm is summarized in Algorithm \ref{algorithm:proposer:extraction}. \begin{algorithm}[ht] \SetAlgoLined \LinesNumbered \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{The transition graph from the expansion phase;} \Output{A sequence of actions $a_1,a_2,...,a_k$;} \BlankLine $\forall {{s^*_i}}^k\in s^*$ at the current term level $k$, denote the set of its supporting actions as $\mathcal{A}_{{{s^*_i}}^{k}}$\; Pick one candidate solution set $\mathcal{A}^k$ by choosing one action from each set $\mathcal{A}_{{{s^*_i}}^{k}}$ such that no mutex relations hold among the chosen actions\; Denote the precondition terms of the selected actions as $\mathcal{S}_{pre}^k$\; \eIf{$\mathcal{S}_{pre}^k$ has no mutex terms}{ \If{$\mathcal{S}_{pre}^k$ includes all initial terms}{Output the ordered actions\;} $\forall s_i^k\in \mathcal{S}_{pre}^k$, denote the set of its supporting actions as $\mathcal{A}_{{s_i}^{k-1}}$\; Repeat the extraction process for $\mathcal{A}_{{s_i}^{k-1}}$ by going to line 2\; } {Discard the solution $\mathcal{A}^k$ and pick a new solution from $\mathcal{A}_{{{s^*_i}}^k}$ by going to line 2, until all combinations have been enumerated\; } \If{no feasible solution has been found}{Go to Algorithm \ref{algorithm:proposer:expansion} and expand the graph.} \caption{Task planning for the solution extraction phase of the proposer} \label{algorithm:proposer:extraction} \end{algorithm} \subsubsection{Verifier} Next, we introduce the implementation of the verifier and its interaction with the proposer. The verifier checks whether the plan generated by the proposer is executable given the constraints in the domain theory and the information from the sensors. The plan is executable if the verifier can find a set of parameters for the ordered actions given by the proposer that satisfies all constraints posed by the domain theory and the sensors. If the plan is not executable, the verifier asks the proposer for another plan; if it is executable, the verifier outputs the effective plan to the robot. As the task plans from the proposer are parametric GSTL formulas, the verifier needs two steps to verify their feasibility. First, the verifier reformulates the parametric GSTL formulas in the form $\wedge_i(\vee_j\pi_{i,j})$, where each $\pi_{i,j}$ is either a spatial term or a spatial term under an ``Always" operator, and finds feasible temporal parameters for every term in $\wedge_i(\vee_j\pi_{i,j})$ using a satisfiability modulo theories (SMT) solver. Then, the verifier checks whether there is a feasible solution for the spatial terms by formulating them in CNF and solving the result with a SAT solver. We explain the two steps in detail below. We first use SMT to find feasible temporal parameters for the parametric GSTL formulas from the proposer. SMT extends SAT by replacing the Boolean variables with predicates over a set of non-Boolean variables. A predicate is a Boolean-valued function $f(x)\in \mathbb{B}$ of non-Boolean variables $x\in\mathbb{R}$, and it can be interpreted with respect to different theories.
For example, a predicate in SMT can be a linear inequality, which holds true if and only if the inequality is satisfied. An example with linear inequality predicates is given below. \begin{align*} (\beta-\alpha<5) \wedge ((\alpha + \beta < 10) \vee (\alpha - \beta > 20)) \wedge (\alpha +\beta >\gamma) \end{align*} Here, $\alpha,\beta,\gamma$ are non-Boolean variables, and we use linear inequalities to represent the predicates for simplicity. To obtain the form $\wedge_i(\vee_j f_{i,j})$ required by the SMT solver, we reformulate the parametric GSTL formulas as follows, \begin{equation} \begin{aligned} &\varphi := \wedge(\vee\pi),\\ & \pi := \tau ~|~ \neg\tau ~|~ \Box_{[t_1,t_2]}\tau ~|~ \Box_{[t_1,t_2]}\neg\tau,\\ &\tau := \mu ~|~ \neg\tau ~|~ \tau_1\wedge\tau_2 ~|~ \tau_1\vee\tau_2 ~|~ \mathbf{P}_A\tau ~|~ \mathbf{C}_A \tau ~|~ \mathbf{N}_A^{\left \langle *, *, * \right \rangle} \tau. \end{aligned} \label{smt form} \end{equation} One can write any GSTL formula as $\wedge(\vee \pi)$ because any GSTL formula can be put in CNF, as shown in \cite{liu2020graph}. We define the lower and upper temporal bounds of each spatial term $\tau$ of $\pi$ in (\ref{smt form}) as two real-valued temporal variables (say $\alpha$ and $\beta$). If $\tau$ only holds at the current time instant, then $\alpha=\beta$. Then $\pi$ in (\ref{smt form}) can be represented with the following linear inequality predicates, \begin{equation} \begin{aligned} &\pi=\tau \Leftrightarrow \alpha\leq t\leq \beta,\\ &\pi=\neg\tau \Leftrightarrow (t< \alpha)\vee(t>\beta),\\ &\pi=\Box_{[t_1,t_2]}\tau\Leftrightarrow (\alpha\leq t_1) \wedge (t_2\leq \beta),\\ &\pi=\Box_{[t_1,t_2]}\neg\tau\Leftrightarrow (t_2< \alpha)\vee (t_1> \beta), \end{aligned} \label{smt pi} \end{equation} where $t$ is the current time. According to the first two lines of (\ref{smt form}) and \eqref{smt pi}, we can formulate any parametric GSTL formula in the following SMT form, \begin{equation} \begin{aligned} &\varphi=\wedge_i(\vee_j f_{i,j}), \end{aligned} \label{smt} \end{equation} where the $f_{i,j}$ are the linear inequality predicates shown in \eqref{smt pi}. We use existing SMT solvers to find a feasible solution to the problem above. If the parametric GSTL formulas are feasible, the solver returns a time interval $[\alpha,\beta]$ for each spatial term $\tau$ in (\ref{smt form}); the spatial term $\tau$ must then hold true on $[\alpha,\beta]$, which is checked via SAT.
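As an illustration of this temporal-feasibility step, the following minimal pySMT sketch encodes one constraint of the form \eqref{smt pi} plus a deadline and reads back a satisfying interval. The variable names and numeric values are our assumptions, and the solver backend is whichever one pySMT has installed (e.g., MathSAT).
\begin{verbatim}
# A minimal pySMT sketch of the temporal-feasibility check: each spatial
# term tau gets bounds [alpha, beta], and Box_{[t1,t2]} tau becomes
# (alpha <= t1) and (t2 <= beta). Values are illustrative.
from pysmt.shortcuts import Symbol, And, LE, Real, get_model
from pysmt.typing import REAL

alpha = Symbol("alpha_s3", REAL)   # lower bound of the interval where s3 holds
beta = Symbol("beta_s3", REAL)     # upper bound

# pi = Box_{[1,7]} s3  =>  alpha <= 1 and 7 <= beta,
# plus a deadline: s3 must start no later than t = 40.
constraints = And(LE(alpha, Real(1)),
                  LE(Real(7), beta),
                  LE(alpha, Real(40)))

model = get_model(constraints)
if model:                           # feasible: read back the interval
    print(model.get_value(alpha), model.get_value(beta))
else:
    print("infeasible: ask the proposer for another plan")
\end{verbatim}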
The rest of the verifier is implemented through SAT. Assume we have a set of spatial terms $\Gamma=\{\tau_1,\tau_2,...,\tau_n\}$ whose lower and upper temporal bounds $\alpha_i$ and $\beta_i$ are given by the SMT solver. We aim to check whether all spatial terms can hold true on their corresponding time intervals. We have shown in \cite{liu2020graph} that any spatial term $\tau$ can be written in CNF $\wedge_i(\vee_j\mu_{i,j})$ by following the Boolean encoding procedure. We thus obtain a set of logic constraints over the spatial terms $\mu_{i,j}$ in CNF, whose truth values are to be assigned by the SAT solver. This is done by checking the satisfaction of the following formulas. \begin{equation} \begin{aligned} &\varphi=\wedge_{k=1}^{n}\tau_k=\wedge_{k=1}^{n}(\wedge_{t=\alpha_k}^{\beta_k}\tau_{k,t}),\\ &\tau_{k,t}=\wedge_i(\vee_j\mu_{i,j}^{k,t}). \end{aligned} \label{sat} \end{equation} The solver gives two possible outcomes. First, the solver finds a feasible solution and $\varphi$ holds true: the plans generated by the proposer successfully solve the new task assignment, and the verifier outputs the effective plans to the robot together with the temporal parameters generated by the SMT solver. Second, the solver cannot find a feasible solution where $\varphi$ holds true, which means the task assignment cannot be accomplished and there are conflicts in the plans generated by the proposer: the verifier informs the proposer that the plan is infeasible, and the proposer takes this information as additional constraints and replans the transition system. The algorithm is summarized in Algorithm \ref{algorithm:verifier}. \begin{algorithm}[ht] \SetAlgoLined \LinesNumbered \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{Ordered actions from the proposer $a_1,a_2,...$, task assignment $\psi$, observed events, and the domain theory} \Output{An executable sequence of actions or a counter-example} \BlankLine Rewrite the ordered action plans as $\phi_i=a_1\sqcup_{[c_1,c_2]}^ba_2\sqcup_{[c_3,c_4]}^b\cdots$ and denote $\Sigma=\{\phi_i,\psi\}$\; \While{there are $\Diamond_{[\alpha,\beta]}$ and $\sqcup_{[\alpha,\beta]}^*$ operators in formulas of $\Sigma$}{ for a GSTL formula $\varphi=\Diamond_{[\alpha,\beta]}\phi$, we have $\varphi=\neg\Box_{[\alpha,\beta]}\neg\phi$\; for a GSTL formula $\varphi_1\sqcup_{[\alpha,\beta]}^o\varphi_2$, we have $\Box_{[\alpha,\beta]}(\varphi_1\wedge\varphi_2)\wedge \Box_{[\alpha-1,\alpha-1]}\neg\varphi_2\wedge\Box_{[\beta+1,\beta+1]}\neg\varphi_1$ (the other IA relations can be translated in a similar way)\; } Reform $\Sigma$ as (\ref{smt form})\; Replace $\pi$ in (\ref{smt form}) with linear inequality predicates based on (\ref{smt pi})\; Solve the corresponding SMT problem in (\ref{smt})\; Reform each $\tau$ at each time in the CNF form (\ref{sat})\; Solve the SAT problem in (\ref{sat}) by assigning truth values $u:\tau\rightarrow\{\top,\bot\}\in\mathbf{U}$ to each $\mu_{i,j}^*$\; \eIf{a feasible solution has been found}{ Output the executable ordered action plan\; }{ Inform the proposer that the plan is infeasible\; } \caption{Verifier} \label{algorithm:verifier} \end{algorithm} \subsection{Overall framework} The overall framework is summarized in Fig. \ref{framework}. Given a parametric domain theory $a_1,a_2,\cdots$ obtained from specification mining on video, current spatial terms $\wedge_i s_i$, and a task assignment $\psi$, we aim to generate a detailed sequence of task plans such that the task assignment $\psi$ is accomplished. In the framework, the proposer takes the current terms as the initial nodes and the task assignment $\psi$ as the target nodes. The available actions, along with the preconditions and effects of taking them, are obtained from the domain theory in (\ref{knowledge base}) and used to expand the graph in the proposer. Ordered actions are generated by the backward solution extraction and passed to the verifier. The verifier takes the ordered actions from the proposer and verifies them against the constraints posed by the domain theory, the current spatial terms, and the task assignment: it first checks the feasibility of the temporal parameters in the parametric GSTL formulas via SMT, and then checks whether there is a feasible solution for the spatial terms under the logic constraints via the SAT solver. If the actions are not executable, the verifier informs the proposer that the current plan is infeasible; if they are executable, they are published for the robot to execute.
\begin{figure} \centering \includegraphics[scale=0.15]{framework4.pdf} \caption{An overall framework for the automatic task planning with the proposer and the verifier} \label{framework} \end{figure} \section{Evaluation}\label{section:evaluation} In this section, we evaluate the effectiveness of the proposed specification mining algorithm and automated task planning through a dining-table setting example. We first generate a domain theory containing the information necessary to solve the task, which is achieved by specification mining from video. Then, we perform automated task planning by implementing the proposer and the verifier introduced in the previous section. \subsection{Specification mining} \subsubsection{Data preprocessing} We record several videos of table setting for the specification mining algorithm. In order to obtain both color and depth images, we chose the ZED stereo camera. The camera uses two lenses a set distance apart to capture both a right and a left color image for each frame. Using those images and the distance between the lenses, the camera's software calculates depth measurements for each pixel of the frame. We first perform object detection on the obtained video. There are numerous object detection algorithms; as object detection is not the focus of this paper, we choose color-based filtering due to its robust performance. The goal of object detection is to isolate each object of interest individually and find a mask that can then be applied to the depth images to isolate each object's depth data. The first step in creating masks is color filtering. Each object in our test setup has a distinct color that allows isolation with a color filter. The hue, saturation, value (HSV) color scheme was used for the color filters, where hue is the base color or pigment, saturation is the amount of pigment, and value is the darkness. We further apply kernel filters and median filters to improve the performance of the color-based object detection by removing high-frequency noise. The usefulness of the HSV color scheme for color filtering is exemplified in Fig. \ref{hsvfilter}. \begin{figure} \centering \includegraphics[scale=0.1]{comp_color.png} \caption{Color-based object detection using the HSV color scheme} \label{hsvfilter} \end{figure} Once the object masks are found, they can be used to isolate each object in the corresponding depth image for each frame. With an object isolated in the depth image, other parameters such as its average depth are calculated. These data are vital for specification mining because information such as relative location and contact is very important for learning how the objects interact throughout a target process. Fig. \ref{depth information} shows the depth value for each object. \begin{figure} \centering \includegraphics[scale=0.1]{comp_depth.png} \caption{Depth values of the detected objects} \label{depth information} \end{figure} We consider six directional relations, namely front, back, left, right, top, and down, between any two objects that are closer than a certain distance. For each frame, only objects with similar depth values are eligible for the left, right, top, and down relations; objects with similar horizontal positions in Fig. \ref{hsvfilter} but different depth values are eligible for the front and back relations. We store the relative position of each pair of objects in a table, where each row records the time, the object names, and their relative directional relations. This table is used in the specification mining algorithm.
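The following Python sketch shows one way such a relation table can be filled from the bounding boxes and average depths computed above; the thresholds and the dictionary encoding are our assumptions, and image $y$ is taken to grow downward.
\begin{verbatim}
def relation(a, b, d_depth=0.05):
    """a, b: dicts with 'box' = (x-, y-, x+, y+) in pixels and 'depth' in
    meters. Returns A's relation to B among the six directions, or None."""
    ax0, ay0, ax1, ay1 = a["box"]
    bx0, by0, bx1, by1 = b["box"]
    if abs(a["depth"] - b["depth"]) < d_depth:   # comparable depth plane
        if ax1 <= bx0: return "left"
        if bx1 <= ax0: return "right"
        if ay1 <= by0: return "top"              # smaller y = higher in image
        if by1 <= ay0: return "down"
    else:                                        # similar horizontal extent
        if ax0 < bx1 and bx0 < ax1:
            return "front" if a["depth"] < b["depth"] else "back"
    return None

cup = {"box": (40, 60, 80, 120), "depth": 1.20}
plate = {"box": (90, 80, 200, 160), "depth": 1.21}
print(relation(cup, plate))   # 'left'
\end{verbatim}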
\subsubsection{Specification mining based on video} We record six demo videos in which a person sets up the dining table using two different approaches. We first show detailed results for the first video and then overall results for all six videos. Following Algorithm \ref{specification mining algorithm}, we first generate spatial terms for each frame. We obtain the spatial terms $s_1$, $s_2$, $s_3$, $s^*_1$, $s^*_2$, $s^*_3$, and the following spatial terms from the first video \begin{align*} &a_1^{'}=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}cup)\\ &a_3^{'}=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}spoon)\\ &a_4^{'}=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}plate),\\ &s_4=\mathbf{C}_\exists^2(fork\wedge\mathbf{N}_\exists^{left}empty). \end{align*} Some spatial terms are omitted for simplicity. After merging consecutive frames with the same spatial terms and mining ``Always" formulas, we obtain the following ``Always" formulas for the first video \begin{align*} &\Box_{[1,117]}s_1,~ \Box_{[1,75]}s_2,~ \Box_{[1,494]}s_3,~ \Box_{[118,339]}s_4\\ &\Box_{[75,183]}a_1^{'},~ \Box_{[126,669]}s^*_1,~ \Box_{[274,386]}a_4^{'},\\ &\Box_{[340,669]}s^*_2,~ \Box_{[458,584]}a_3^{'},~ \Box_{[535,669]}s^*_3. \end{align*} Then, we learn the temporal relations among these ``Always" formulas and obtain the following results for the video. \begin{align*} &\Box_{[1,117]}s_1\sqcup_{[75,117]}^{o} \Box_{[75,183]}a_1^{'} \sqcup_{[126,183]}^o \Box_{[126,669]} s^*_1,\\ &\Box_{[118,339]}s_4\sqcup_{[274,339]}^{o} \Box_{[274,386]}a_4^{'} \sqcup_{[340,386]}^o \Box_{[340,669]} s^*_2,\\ &\Box_{[1,494]}s_3\sqcup_{[458,494]}^{o} \Box_{[458,584]}a_3^{'} \sqcup_{[535,584]}^o \Box_{[535,669]} s^*_3. \end{align*} After applying the same algorithm to multiple videos and replacing the time instances with temporal variables, we obtain the following results. \begin{align*} &a_1=(\Box_{[t_1,t_2]}s_1)\sqcup_{[\alpha_1,\beta_1]}^o (\Box_{[t_3,t_4]} a_1^{'}) \sqcup_{[\alpha_2,\beta_2]}^o(\Box_{[t_5,t_6]}s^*_1),~\\ &a_2=(\Box_{[t_1,t_2]}s_2)\sqcup_{[\alpha_1,\beta_1]}^o (\Box_{[t_3,t_4]} a_2^{'}) \sqcup_{[\alpha_2,\beta_2]}^o(\Box_{[t_5,t_6]}s^*_2),~\\ &a_3=(\Box_{[t_1,t_2]}s_3)\sqcup_{[\alpha_1,\beta_1]}^o (\Box_{[t_3,t_4]} a_3^{'})\sqcup_{[\alpha_2,\beta_2]}^o(\Box_{[t_5,t_6]}s^*_3),~\\ &a_4=(\Box_{[t_1,t_2]}s_4)\sqcup_{[\alpha_1,\beta_1]}^o (\Box_{[t_3,t_4]} a_4^{'})\sqcup_{[\alpha_2,\beta_2]}^o(\Box_{[t_5,t_6]}s^*_2), \end{align*} where $a_2^{'}=\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}fork)$ is a spatial term learned from the other videos. \subsection{Automated task planning} \subsubsection{Proposer} We use the example in Fig. \ref{exp:task planning example} to evaluate the proposer. Fig. \ref{exp:task planning} illustrates the graph expansion and solution extraction algorithms. Initially, we have three terms $s_1,s_2,\text{and}~s_3$ and three available actions $a_1,a_2,\text{and}~a_3$. We expand the graph by generating the terms $s^*_1,s_4,\text{and}~\neg s_2$ from applying $a_1$ to $s_1$, the terms $s^*_2$ and $\neg s_3$ from applying $a_2$ to $s_2$, and the term $s^*_3$ from applying $a_3$ to $s_3$. $a_1$ and $a_2$ are mutex since $a_1$ generates the negation of the precondition of $a_2$; $a_2$ and $a_3$ are mutex for the same reason. Consequently, $s^*_1$ and $s^*_2$ are mutex since their supporting actions $a_1$ and $a_2$ are mutex.
We label all mutex relations with red dashed lines in Fig. \ref{exp:task planning}. Even though the current term level includes all target terms, we need to expand the graph further because $s^*_1$ and $s^*_2$ are mutex. Notice that we carry terms over to the next level if no action is applied to them. We expand the second term level following the same procedure and label all mutex relations with red dashed lines. In the third term level, $s^*_1$ and $s^*_2$ are no longer mutex, since both can be supported by ``no action" and these supporting actions are not mutex. Since the third term level includes all target terms and there is no mutex among them, we move to the backward solution extraction. From the forward graph expansion phase, we obtain a graph with three levels of terms and two levels of actions. From Fig. \ref{exp:task planning}, we can see that the available actions for $s^*_1$ at level 3 are $\mathcal{A}_{{s^*_1}^3}=\{a_1,\varnothing\}$, the available actions for $s^*_2$ are $\mathcal{A}_{{s^*_2}^3}=\{a_2,\varnothing,a_4\}$, and the available actions for $s^*_3$ are $\mathcal{A}_{{s^*_3}^3}=\{a_3,\varnothing\}$, where $a_1$ and $a_2$ are mutex, $a_2$ and $a_3$ are mutex, $a_2$ and $a_4$ are mutex, and $a_3$ and $a_4$ are mutex. Thus, all possible solutions for action level 2 are $\{\{a_1,\varnothing,a_3\}$, $\{a_1,\varnothing,\varnothing\}$, $\{a_1,a_4,\varnothing\}$, $\{\varnothing,a_2,\varnothing\}$, $\{\varnothing,\varnothing,a_3\}$, $\{\varnothing,\varnothing,\varnothing\}$, $\{\varnothing,a_4,\varnothing\}\}$. Let us take $\mathcal{A}^k=\{a_1,\varnothing,a_3\}$ as an example. Its precondition set at level 2 is $\{s_1,s^*_2,s_3\}$. Since $s_3$ and $s^*_2$ are mutex, $\mathcal{A}^k=\{a_1,\varnothing,a_3\}$ is not a feasible solution. In fact, there is no feasible solution in the current transition system. When no feasible solution remains after the solution extraction phase has enumerated all possible candidates, we go back to the graph expansion phase and grow the graph further. In the example of Fig. \ref{exp:task planning}, after growing another level of actions and terms, the solution extraction phase finds two possible ordered action sequences: $a_3,a_2,a_1$ and $a_1,a_4,a_3$. They are highlighted with green and purple lines in Fig. \ref{exp:task planning}, respectively. These results are then tested by the verifier to make sure the plan is executable. \begin{figure*} \centering \includegraphics[scale=0.25]{task_planning.pdf} \caption{Automatic task planning based on forward graph expansion and backward solution extraction.} \label{exp:task planning} \end{figure*} \subsubsection{Verifier} Let us continue the dining-table setup example. We write one of the ordered action sequences given by the proposer as the GSTL formula $\phi=a'_3\sqcup_{[e_1,e_2]}^ba'_2\sqcup_{[e_3,e_4]}^ba'_1$. We assume the domain theory requires that the robot needs 5 seconds to move the spoon, fork, or cup. The task assignment is to set up the table within 40 seconds, which can be represented as the GSTL formula $\psi=\Diamond_{[0,40]}(s^*_1\wedge s^*_2\wedge s^*_3)$. Based on the domain theory obtained from the specification mining algorithm, we have the following parametric GSTL formulas. \begin{align*} &a_i=(\Box_{[t_{i,0},t_{i,0}+c_i+5]}s_i)\sqcup_{[t_{i,0}+c_i,t_{i,0}+c_i+5]}^o(\Box_{[t_{i,0}+c_i,t_{i,0}+d_i+5]} a_i^{'})\\ &\sqcup_{[t_{i,0}+d_i,t_{i,0}+d_i+5]}^o(\Box_{[t_{i,0}+d_i,t_{i,0}+d_i+5+\epsilon]}s^*_i),~\forall i\in [1,2,3],\\ &\phi=a'_3\sqcup_{[e_1,e_2]}^ba'_2\sqcup_{[e_3,e_4]}^ba'_1,\\ &\psi=\Diamond_{[0,40]}(s^*_1\wedge s^*_2\wedge s^*_3).
\end{align*} The job of the verifier is to find values of $c_i$, $d_i$, $e_i$, and $t_{i,0}$ for each action such that $\psi$ is satisfied and no temporal constraint is violated. Using the proposed algorithm, we first reformulate the GSTL formulas in the form $\wedge_i(\vee_j f_{i,j})$, where $f_{i,j}$ is a linear inequality predicate over the temporal parameters. We obtain the following SMT encoding for the parametric GSTL formulas above \begin{equation} \begin{aligned} &\forall i \in [1,2,3],\\ &t_{s_i}^1\leq t_{i,0},~ t_{i,0}+c_i+5\leq t_{s_i}^2,\\ &t_{a_{i}^{'}}^1\leq t_{i,0}+c_i,~ t_{i,0}+c_i+5\leq t_{a_{i}^{'}}^2,\\ &t_{a_{i}^{'}}^1\leq t_{i,0}+d_i,~ t_{i,0}+d_i+5\leq t_{a_{i}^{'}}^2,\\ &t_{s^*_i}^1\leq t_{i,0}+d_i,~ t_{i,0}+d_i+5+\epsilon\leq t_{s^*_i}^2,\\ &t_{a'_3}^2\leq e_1,~ e_2\leq t_{a'_2}^1,~ t_{a'_2}^2\leq e_3,~ e_4\leq t_{a'_1}^1,\\ &t_{s^*_i}^1\leq 40, \end{aligned} \end{equation} where $t_{\tau}^1$ and $t_{\tau}^2$ are the lower and upper bounds of spatial term $\tau$. All constraints are connected by conjunction. We employ the SMT solver MathSAT \cite{mathsat5}, accessed from Python through the pySMT API \cite{pysmt2015}. The SMT solver returns the following results. \begin{equation} \begin{aligned} &\phi=a'_3\sqcup_{[13,14]}^ba'_2\sqcup_{[25,26]}^ba'_1\\ &a_3=(\Box_{[1,7]}s_3)\sqcup_{[2,7]}^o(\Box_{[2,13]}a'_3)\sqcup_{[8,13]}^o\Box_{[8,14]}s^*_3,\\ &a_2=(\Box_{[13,19]}s_2)\sqcup_{[14,19]}^o(\Box_{[14,25]}a'_2)\sqcup_{[20,25]}^o\Box_{[20,26]}s^*_2,\\ &a_1=\Box_{[25,31]}s_1\sqcup_{[26,31]}^o(\Box_{[26,37]}a'_1)\sqcup_{[32,37]}^o\Box_{[32,38]}s^*_1. \end{aligned} \label{smt result} \end{equation} As we can see from \eqref{smt result}, the temporal parameters have been determined by the SMT solver. To verify the spatial terms in \eqref{smt result}, we use the Boolean encoding in \cite{liu2020graph} and obtain the following CNF form, taking $s_3$ as an example. \begin{equation} \begin{aligned} &s_3=\mathbf{C}_\exists^2(spoon\wedge\mathbf{N}_\exists^{left}fork) =\bigvee_{j=1}^{n_j}\left(\varphi_j\wedge\phi_j\right)\\ &=\bigwedge\begin{pmatrix} \varphi_1\vee\phi_1 & \varphi_1\vee\phi_2 &... &\varphi_1\vee\phi_{n_j}\\ \varphi_2\vee\phi_1 & \varphi_2\vee\phi_2 &... &\varphi_2\vee\phi_{n_j} \\ \vdots&\vdots & ...&\vdots\\ \varphi_{n_j}\vee\phi_1 & \varphi_{n_j}\vee\phi_2 &... &\varphi_{n_j}\vee\phi_{n_j} \end{pmatrix}\\ & \varphi_j=\bigvee_{i=1}^n\mathbf{C}_{A_j}\mathbf{C}_{A_i}spoon,\\ &\phi_j=\bigvee_{i=1}^{n_i}\bigvee_{k=1}^{n_k}\mathbf{C}_{A_j}\mathbf{C}_{A_i}\mathbf{N}_{A_k}^{left}fork, \end{aligned} \label{CNF2} \end{equation} where the truth values of $\varphi_j$ and $\phi_j$ are to be assigned by the SAT solver. We apply the same procedure to the rest of the spatial terms in \eqref{smt result} at the corresponding times and use a SAT solver to check the feasibility of the resulting set of logic constraints. We use PicoSAT \cite{biere2008picosat} as the SAT solver, where each spatial term at each time is modeled as a Boolean variable. The solver returns a feasible solution, meaning the task plan generated by the proposer is feasible.
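For the spatial side, a toy check through the PicoSAT Python bindings (pycosat) mirroring \eqref{sat} might look as follows; the variables and clauses are illustrative stand-ins, not the paper's actual encoding.
\begin{verbatim}
# A toy pycosat check: each spatial term at a given time is one Boolean
# variable, and the CNF couples them (illustrative clauses only).
import pycosat

# 1: s3 holds at t, 2: spoon detected at t, 3: fork detected at t
cnf = [[-1, 2],      # s3 at t requires the spoon
       [-1, 3],      # s3 at t requires the fork
       [1],          # SMT fixed the interval, so s3 must hold at t
       [2], [3]]     # both objects are observed in the frame
print(pycosat.solve(cnf))   # [1, 2, 3] -> a feasible assignment
\end{verbatim}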
Now let us assume the domain theory requires that robots need 10 seconds to move a plate, described by the following GSTL formula.
\begin{align*} &a_4=\Box_{[t_{4,0},t_{4,0}+c_4+10]}s_4\sqcup_{[t_{4,0}+c_4,t_{4,0}+c_4+10]}^o(\Box_{[t_{4,0}+c_4,t_{4,0}+d_4+10]}a'_4)\\ &\sqcup_{[t_{4,0}+d_4,t_{4,0}+d_4+10]}^o\Box_{[t_{4,0}+d_4,t_{4,0}+d_4+10+\epsilon]}s^*_2.
\end{align*}
We use the same algorithm for the verifier to check the feasibility of the ordered action sequences from the proposer. The verifier cannot find a feasible solution for the other ordered action sequence $\phi=a'_1\sqcup_{[e_1,e_2]}^ba'_4\sqcup_{[e_3,e_4]}^ba'_3$, because the SMT solver cannot find parameter values for which the task assignment is accomplished within 40 seconds.
\section{Conclusion}\label{Section: conclusion}
We study specification mining from demo videos and automated task planning for autonomous robots using GSTL. We use GSTL formulas to represent spatial and temporal information for autonomous robots; the domain theory in GSTL is generated by learning from demo videos and is then used in automatic task planning. An automatic task planning framework with interacting proposer and verifier modules is proposed. The proposer generates ordered actions with unknown temporal parameters by running the graph expansion phase and the solution extraction phase iteratively. The verifier checks whether a proposed plan is feasible and outputs executable task plans, using an SMT solver for temporal feasibility and a SAT solver for spatial feasibility.
\section{Introduction}
\noindent For autonomous robots to carry out vague task assignments on their own, they must store what they know about the world in a structured form and reason over it rigorously; knowledge representation provides exactly this foundation. A great deal of work has been done on knowledge representation, such as expert systems for solving specific tasks in the 1970s \cite{hayes1983building} and frame languages for rule-based reasoning in the 1980s \cite{kifer1995logical}. Researchers realized that any intelligent process needs to be able to store knowledge in some form and to reason on it with rules or logic.
Currently, one of the most active models in knowledge representation is the knowledge graph (or semantic web), such as the Google knowledge graph \cite{googleknowledgegraph} and ConceptNet \cite{liu2004conceptnet}. In a knowledge graph, concepts are modeled as nodes and their relations are modeled as labeled edges. Knowledge graphs have been applied successfully in areas such as recommendation systems and search engines. However, the information in a knowledge graph can be inaccurate, since contributors can be unreliable. Knowledge graphs also face difficulties when describing time- and space-sensitive information, which is particularly important to robots. To avoid such difficulties, we follow formal methods and logic-based approaches in our work. In formal methods and logic-based approaches, symbolic knowledge representation and reasoning are performed by means of primitive operations manipulating predefined elementary symbols \cite{hertzberg2008ai}. As one of the most investigated symbolic logics, first-order logic \cite{mccarthy1960programs} is a powerful representation and reasoning tool with a well-understood theory, and it can be used to model a wide range of applications. However, first-order logic is in general undecidable, meaning a sound and complete deduction algorithm cannot even guarantee termination, let alone real-time automatic reasoning. By limiting the expressiveness of first-order logic, some language subsets of first-order logic are decidable and have been used in applications including software development and verification of software and hardware. One such subset is propositional logic \cite{post1921introduction}. For many practical cases, the instances of the propositional variables are finite, which results in a decidable inference process. However, as all combinations of propositional variables need to be considered, the growth is multiexponential in the domain sizes of the propositional variables. Another language subset of first-order logic is description logic \cite{baader2003description}. In description logic, the domain representation is given in two parts, namely terminological knowledge and assertional knowledge, and inferences run on the given terminological and assertional knowledge. The inference algorithms run efficiently in most practical cases even though they are theoretically intractable. In general, classic logics fail to capture the temporal and spatial characteristics of knowledge, and the inference algorithms are often undecidable. For example, it is difficult to capture information such as the requirement that a robot hand hold a cup for at least five minutes. Classic logics have sufficient expressive power for sequential planning; however, they may not be sufficient for modeling temporal and spatial relations, such as the effects of action durations and spatial information from sensors. As spatial and temporal information are often particularly important for robots, spatial logic and temporal logic have been studied both separately \cite{baier2008principles,cohn2001qualitative,raman2015reactive} and combined \cite{kontchakov2007spatial,haghighi2016robotic,bartocci2017monitoring}. Examples of temporal logics include linear temporal logic \cite{baier2008principles} and signal temporal logic \cite{raman2015reactive}. Examples of spatial logics include the region connection calculus (RCC) \cite{cohn1997qualitative} and $S4_u$ \cite{aiello2007handbook}.
Both temporal and spatial logics can be extended with metric extensions, such as interval algebra and rectangle algebra, at the expense of computational complexity. Furthermore, there has been extensive work on combining temporal logics and spatial logics. Without any restriction on combining temporal and spatial predicates/operators (such as LTL and RCC), the obtained spatial temporal logic has maximal expressiveness but an undecidable inference problem \cite{kontchakov2007spatial}. By integrating spatial and temporal operators with classic logic operators, spatial temporal logic shows great potential in specifying a wide range of task assignments for cognitive robots with automated reasoning ability. Thus, in this paper, we are interested in employing spatial temporal logics for knowledge representation in cognitive robots and developing automated reasoning based on them. One of the major challenges comes from the complexity of the inference algorithm. Most spatial temporal logics enjoy great expressiveness due to their semantic definitions and the way they combine spatial and temporal operators \cite{kontchakov2007spatial}. However, their inference algorithms are often undecidable, making it impossible for cognitive robots to make decisions in real time, and human inputs are often needed to facilitate the deduction process. A balance between expressiveness and tractability is needed for spatial temporal logic; the lack of tractability limits its application to cognitive robots. Motivated by the challenges faced in existing work, we propose a new graph-based spatial temporal logic (GSTL) for cognitive robots with a sound and complete inference system. The proposed GSTL is able to specify a wide range of spatial and temporal specifications for cognitive robots while maintaining a tractable inference algorithm, because the interaction between temporal operators and spatial operators in GSTL is rather limited. Furthermore, we use the proposed GSTL and automated reasoning system to design a cognitive robot with the ability to independently generate a detailed and executable action plan when a vague task assignment is given. The contributions of this paper are mainly twofold. First, we propose a new spatial temporal logic with a better balance between expressiveness and tractability. The satisfiability of the proposed GSTL is decidable, and the inference system, implemented via constraint programming, is sound and complete. Second, we design an automatic task planning framework for cognitive robots that can independently turn a vague task assignment into detailed action plans. The knowledge needed to solve a task is stored in a knowledge base as a set of parameterized GSTL formulas. A task planner with interacting proposer and verifier modules is proposed to generate an executable action plan based on inference rules, the knowledge base, and sensing inputs. The rest of the paper is organized as follows. In Section \ref{Section: GSTL definitio}, we propose the graph-based spatial temporal logic, GSTL. The deduction system is introduced in Section \ref{Section: deduction system}, where soundness and completeness are discussed. An automatic task planning framework is given in Section \ref{section:task planning}. Section \ref{Section: conclusion} concludes the paper.
\section{Graph-based spatial temporal logic}\label{Section: GSTL definitio}
This section introduces the proposed graph-based spatial temporal logic (GSTL). We begin with a brief review of temporal representation and logics, then present the spatial model, and finally give the syntax and semantics of GSTL.
\subsection{Temporal representation and logics}
\subsubsection{Temporal logics}
As the proposed spatial temporal logic is built by extending signal temporal logic with extra spatial operators, we first introduce the definition of signal temporal logic \cite{raman2015reactive}.
\begin{definition}[STL Syntax]\rm STL formulas are defined recursively as: $$ \varphi::={\rm True}|\pi^\mu|\neg\pi^{\mu}|\varphi\land\psi|\varphi\lor\psi|\Box_{[a,b]} \psi | \varphi\sqcup_{[a,b]} \psi, $$ \end{definition}
where $\pi^\mu$ is an atomic predicate $\mathbb{R}^n\to\{0,1\}$ whose truth value is determined by the sign of a function $\mu:\mathbb{R}^n\to\mathbb{R}$, i.e., $\pi^\mu$ is true if and only if $\mu({\bf x})>0$, and $\psi$ is an STL formula. The ``eventually" operator $\Diamond$ can also be defined by setting $\Diamond_{[a,b]} \varphi={\rm True}\sqcup_{[a,b]} \varphi$. The semantics of STL with respect to a discrete-time signal $\bf x$ are introduced as follows, where $({\bf x},t_k)\models \varphi$ denotes for which signal values and at what time index the formula $\varphi$ holds true.
\begin{definition}[STL Semantics]\rm The validity of an STL formula $\varphi$ with respect to an infinite run ${\bf x}= x_0x_1x_2\ldots$ at time $t_k$ is defined inductively as follows. \begin{enumerate} \item $({\bf x},t_k)\models \mu$, if and only if $\mu(x_k)>0$; \item $({\bf x},t_k)\models \neg\mu$, if and only if $\neg(({\bf x},t_k)\models \mu)$; \item $({\bf x},t_k)\models \varphi\land\psi$, if and only if $({\bf x},t_k)\models \varphi$ and $({\bf x},t_k)\models \psi$; \item $({\bf x},t_k)\models \varphi\lor\psi$, if and only if $({\bf x},t_k)\models \varphi$ or $({\bf x},t_k)\models \psi$; \item $({\bf x},t_k)\models \Box_{[a,b]}\varphi$, if and only if $\forall t_{k'}\in[t_k+a,t_k+b]$, $({\bf x},t_{k'})\models \varphi$; \item $({\bf x},t_k)\models \varphi\sqcup_{[a,b]}\psi$, if and only if $\exists t_{k'}\in[t_k+a,t_k+b]$ such that $({\bf x},t_{k'})\models \psi$ and $\forall t_{k''}\in[t_k,t_{k'}]$, $({\bf x},t_{k''})\models \varphi$; \item $({\bf x},t_k)\models \Diamond_{[a,b]}\varphi$, if and only if $\exists t_{k'}\in[t_k+a,t_k+b]$, $({\bf x},t_{k'})\models \varphi$. \end{enumerate} \end{definition}
A run ${\bf x}$ satisfies $\varphi$, denoted by $\bf x\models\varphi$, if $({\bf x},t_0)\models\varphi$. Intuitively, $\bf x\models \Box_{[a,b]}\varphi$ if $\varphi$ holds at every time step between $a$ and $b$; ${\bf x}\models \varphi\sqcup_{[a,b]}\psi$ if $\varphi$ holds at every time step before $\psi$ holds and $\psi$ holds at some time step between $a$ and $b$; and ${\bf x}\models \Diamond_{[a,b]}\varphi$ if $\varphi$ holds at some time step between $a$ and $b$. An STL formula $\varphi$ is {\it bounded-time} if it contains no unbounded operators. The bound of $\varphi$ can be interpreted as the horizon of future predicted signals $\bf x$ that is needed to calculate the satisfaction of $\varphi$. A minimal Python sketch of this Boolean semantics for bounded formulas is given below.
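The following short sketch, given for illustration only, evaluates the Boolean semantics of the bounded operators above on a discrete-time signal; the predicate encoding and function names are assumptions made for exposition.
\begin{verbatim}
# Illustrative evaluator for bounded STL Boolean semantics on a
# discrete-time signal x = [x_0, x_1, ...]; pi^mu holds at step k
# iff mu(x[k]) > 0. Names are expository assumptions.

def holds_pred(mu, x, k):
    return mu(x[k]) > 0

def holds_always(phi, x, k, a, b):
    # (x, t_k) |= Box_{[a,b]} phi: phi holds at every step in [k+a, k+b]
    return all(phi(x, kp) for kp in range(k + a, k + b + 1))

def holds_until(phi, psi, x, k, a, b):
    # (x, t_k) |= phi U_{[a,b]} psi: psi holds at some k' in [k+a, k+b]
    # and phi holds at every step in [k, k']
    return any(psi(x, kp) and all(phi(x, kpp)
                                  for kpp in range(k, kp + 1))
               for kp in range(k + a, k + b + 1))

# Example: "the signal stays positive until it exceeds 2".
x = [0.5, 0.8, 1.0, 2.5, 3.0]
pos = lambda s, k: holds_pred(lambda v: v, s, k)
big = lambda s, k: holds_pred(lambda v: v - 2, s, k)
print(holds_until(pos, big, x, 0, 0, 4))  # True
\end{verbatim}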
\subsubsection{Temporal representation}
We first define a flow of time as a set of time points with a partial ordering.
\begin{definition}[Flow of time] A flow of time is a nonempty set of time points with a connected partial ordering $(T,<)$, where $T$ is the set of time points and $<$ is an irreflexive partial order. $(T,<)$ is said to be connected if and only if for all $x,y\in T$ with $x<y$ there is a finite sequence $x=x_0<x_1<\cdots<x_n=y$ with $x_i\in T$ for all $i\in[1,...,n]$. \end{definition}
There are multiple ways to represent time, e.g., continuous time, discrete time, and intervals. In this paper, we use discrete time intervals. In order to increase the expressiveness of the proposed spatial temporal logic, we also use Allen interval algebra to extend the until operator of STL.
\begin{definition}[Allen interval algebra \cite{allen1984towards}] Allen interval algebra defines the following 13 temporal relationships between two intervals, namely before ($b$), meet ($m$), overlap ($o$), start ($s$), finish ($f$), during ($d$), equal ($e$), and their inverses ($^{-1}$) except for equal. The 13 temporal relationships are illustrated in Fig. \ref{Allen interval algebra}. \begin{figure}[H] \centering \includegraphics[scale=0.28]{IA.pdf} \caption{The 13 basic qualitative relations in interval algebra} \label{Allen interval algebra} \end{figure} \end{definition}
\subsection{Spatial model}
After reviewing temporal representation and logics, we next describe the basic spatial characteristics considered in our spatial model, namely ontology, mereotopology, and metric spatial representation.
\subsubsection{Spatial ontology}
We use regions as the basic spatial elements instead of points. Within the qualitative spatial representation community, there is a strong tendency to take regions of space as the primitive spatial entity \cite{cohn2001qualitative}. In practice, a reasonable constraint to impose is that regions are all rational polygons.
\subsubsection{Mereotopology}
As for the relations between regions, we consider mereotopology, meaning we consider both mereology (parthood) and topology (connectivity) in our spatial model. Parthood describes the relational quality of being a part; for example, \emph{a wheel is part of a car} and \emph{a cup is one of the objects in a cabinet}. Connectivity describes whether two spatial objects are connected; for example, \emph{a hand grabs a cup}. By considering mereotopology, the proposed GSTL has more expressive power than the existing spatial temporal logics STREL and SpaTeL. We apply a graph with a hierarchical structure to represent the spatial model. Denote $\Omega=\cup_{i=1}^{n}\Omega_i$ as the union of the sets of all possible spatial objects, where $\Omega_i$ represents a certain set of spatial objects or concepts.
\begin{definition}[Graph-based Spatial Model] The graph-based spatial model with a hierarchical structure $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is constructed by the following rules. \begin{itemize} \item The node set $\mathcal{V}=\{V_1,...,V_n\}$ consists of a group of node sets, where each node set $V_k$ represents a finite subset of spatial objects from $\Omega_i$. Denote the number of nodes in node set $V_k$ by $n_k$. We assume the resolution does not decrease as the layer index goes up by letting $n_1\leq n_2 \leq ... \leq n_n$. At each layer, $V_k=\{v_{k,1},...,v_{k,n_k}\}$ contains nodes which represent $n_k$ spatial objects in $\Omega_i$. \item The edge set $\mathcal{E}$ models the relationships between nodes, such as whether two nodes are adjacent or whether one node is included within another node; $e_{i,j}\in \mathcal{E}$ if and only if $v_i$ and $v_j$ are connected. \item $v_{k,i}$ is a \emph{parent} of $v_{k+1,j}$, $\forall k\in[1,...,n-1]$, if and only if $v_{k,i} \wedge v_{k+1,j}= v_{k+1,j}$. $v_{k+1,j}$ is called a \emph{child} of $v_{k,i}$ if $v_{k,i}$ is its parent. $v_0$ is the only node that does not have a parent, and nodes in $V_n$ do not have children. Furthermore, if $v_i$ and $v_j$ are a parent-child pair, then $e_{i,j}\in\mathcal{E}$. $v_i$ is a \emph{neighbor} of $v_j$ and $e_{i,j}\in\mathcal{E}$ if and only if there exists $k$ such that $v_i\in V_k$, $v_j\in V_k$, and the minimal distance between $v_i$ and $v_j$ is less than a given threshold $\epsilon$. \end{itemize} \end{definition}
\begin{example}\rm An example illustrating the proposed spatial model is given in Fig. \ref{exp:Graph with a hierarchy structure}. In Fig. \ref{exp:Graph with a hierarchy structure}, $V_1=\{kitchen\}$, $V_2=\{body~part,~ tool,~ material\}$, and $V_3=\{head,~ hand,~ cup,~ bowl,~ table,~ milk,~ butter\}$. The parent-child relationships are drawn with solid lines and the neighbor relationships with dashed lines. Each layer represents the space with different spatial concepts or objects taking categorical values from $\Omega_i$, and connections are built between layers. The hierarchical graph is able to express facts such as ``head is part of body part" and ``cup holds milk".
\end{example}
\begin{figure}[H] \centering \includegraphics[scale=0.7]{Hierachical-graph.pdf} \caption{The hierarchical graph with three basic spatial operators: parent, child, and neighbor, where the parent-child relations are drawn with solid lines and the neighbor relations with dashed lines.} \label{exp:Graph with a hierarchy structure} \end{figure}
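To make the spatial model concrete, the following Python sketch builds the kitchen graph of Fig. \ref{exp:Graph with a hierarchy structure} with parent-child and neighbor edges; the dictionary-based data structure and method names are illustrative assumptions, not the paper's implementation.
\begin{verbatim}
# Minimal hierarchical spatial graph for the kitchen example; the
# dict-based representation is an illustrative assumption.
class SpatialGraph:
    def __init__(self):
        self.parent = {}        # child node -> parent node
        self.neighbors = set()  # unordered pairs of same-layer nodes

    def add_child(self, parent, child):
        self.parent[child] = parent

    def add_neighbor(self, u, v):
        self.neighbors.add(frozenset((u, v)))

    def children(self, node):
        return [c for c, p in self.parent.items() if p == node]

    def is_neighbor(self, u, v):
        return frozenset((u, v)) in self.neighbors

g = SpatialGraph()
# Layer 1 -> 2: kitchen contains body parts, tools, and materials.
for mid in ["body part", "tool", "material"]:
    g.add_child("kitchen", mid)
# Layer 2 -> 3.
for c in ["head", "hand"]:
    g.add_child("body part", c)
for c in ["cup", "bowl", "table"]:
    g.add_child("tool", c)
for c in ["milk", "butter"]:
    g.add_child("material", c)
g.add_neighbor("cup", "milk")  # "cup holds milk"

print(g.children("tool"))            # ['cup', 'bowl', 'table']
print(g.is_neighbor("milk", "cup"))  # True
\end{verbatim}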
\subsubsection{Metric spatial representation}
To further increase the expressiveness of the proposed GSTL for cognitive robots, we include directional information in our spatial model. This is done by extending rectangle algebra to 3D, which is more suitable for cognitive robots. We first briefly introduce rectangle algebra.
\begin{definition}[Rectangle algebra \cite{smith1992algebraic}] In rectangle algebra, spatial objects are considered as rectangles whose sides are parallel to the axes of some orthogonal basis in a 2D Euclidean space. The $13\times 13$ basic relations between two spatial objects are defined by extending interval algebra to 2D: \begin{align*} \mathcal{R}_{RA}=\{(A,B):A,B\in \mathcal{R}_{IA}\}, \end{align*} where $\mathcal{R}_{IA}$ is the set containing the 13 interval algebra relations. \end{definition}
Rectangle algebra extends interval algebra to 2D and can be used to express directional information such as left, right, up, down, and their combinations. However, rectangle algebra is defined in 2D only, while cognitive robots are often deployed in 3D environments. Thus, in this paper we extend it to 3D:
\begin{align*} \mathcal{R}_{CA}=\{(A,B,C):A,B,C\in \mathcal{R}_{IA}\}, \end{align*}
where $13\times 13\times 13$ basic relations are defined for the cubic algebra (CA). An example illustrating the cubic algebra is given in Fig. \ref{Relations in CA}. For the spatial objects $X$ and $Y$ on the left, where $X$ is in front of, to the left of, and below $Y$, we have $X\{(b,b,o)\}Y$. For the spatial objects $X$ and $Y$ on the right, where $Y$ is completely on top of $X$, we have $X\{(e,e,m)\}Y$.
\begin{figure}[H] \centering \includegraphics[scale=0.3]{CA.pdf} \caption{Representing directional relations between objects $X$ and $Y$ in CA} \label{Relations in CA} \end{figure}
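As a small illustration, the sketch below computes the IA relation between two 1D intervals and assembles the CA triple for two axis-aligned boxes; the string encoding of relations and the box layout are expository assumptions.
\begin{verbatim}
def ia_relation(a, b):
    """Allen relation of interval a w.r.t. b; an illustrative
    encoding of the 13 relations as short strings."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "b"                     # before
    if a2 == b1: return "m"                     # meet
    if a1 == b1 and a2 == b2: return "e"        # equal
    if a1 == b1: return "s" if a2 < b2 else "s-1"
    if a2 == b2: return "f" if a1 > b1 else "f-1"
    if b1 < a1 and a2 < b2: return "d"          # during
    if a1 < b1 and b2 < a2: return "d-1"
    if a1 < b1 < a2 < b2: return "o"            # overlap
    if b1 < a1 < b2 < a2: return "o-1"
    if b2 < a1: return "b-1"
    if b2 == a1: return "m-1"
    raise ValueError("unhandled configuration")

def ca_relation(box_x, box_y):
    """CA relation: one IA relation per axis of two 3D boxes, each
    given as ((x1,x2), (y1,y2), (z1,z2))."""
    return tuple(ia_relation(ax, ay) for ax, ay in zip(box_x, box_y))

# X in front of, left of, and below Y, as in the left of the CA figure:
X = ((0, 1), (0, 1), (0, 2))
Y = ((2, 3), (2, 3), (1, 3))
print(ca_relation(X, Y))  # ('b', 'b', 'o')
\end{verbatim}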
\subsubsection{Spatial temporal signals}
The spatial temporal signals we are interested in are defined as follows.
\begin{definition}[Spatial Temporal Signal] A spatial temporal signal $x(v,t)$ is defined as a scalar function for node $v$ at time $t$, \begin{equation} x(v,t): V\times T \rightarrow D, \end{equation} where $D$ is the signal domain. Depending on the application, $D$ can be a Boolean domain, a real-valued domain, a positive real-valued domain, etc. \end{definition}
Based on spatial temporal signals, we further define a spatial temporal structure as follows.
\begin{definition}[Spatial temporal structure] A spatial temporal structure is a triple $N=(T,<,h)$, where $(T,<)$ is a flow of time and $h:D\rightarrow T'$ is the valuation function, with $T'$ the set of all subsets of $T$. \end{definition}
\subsection{Graph-based spatial temporal logic}
With the temporal and spatial models in mind, we now give the formal syntax and semantics of GSTL. GSTL is defined on a hierarchical graph by combining STL with three spatial operators, where the until operator and the neighbor operator are enriched by interval algebra (IA) and cubic algebra (CA), respectively.
\begin{definition}[GSTL Syntax]\label{definition: GSTL syntax} The syntax of a GSTL formula is defined recursively as \begin{equation} \begin{aligned} &\tau := \mu ~|~ \neg\tau ~|~ \tau_1\wedge\tau_2 ~|~ \tau_1\vee\tau_2 ~|~ \mathbf{P}_A\tau ~|~ \mathbf{C}_A \tau ~|~ \mathbf{N}_A^{\left \langle *, *, * \right \rangle} \tau,\\ &\varphi := \tau ~|~ \neg \varphi ~|~ \varphi_1\wedge\varphi_2 ~|~ \varphi_1\vee\varphi_2 ~|~ \Box_{[a,b]} \varphi~| ~ \varphi_1\sqcup_{[a,b]}^{*}\varphi_2, \end{aligned} \label{complexity proof 1} \end{equation} where $\tau$ is a spatial term and $\varphi$ is a GSTL formula; $\mu$ is an atomic predicate (AP); negation $\neg$, conjunction $\wedge$, and disjunction $\vee$ are the standard Boolean operators as in STL; $\Box_{[a,b]}$ is the ``always" operator and $\sqcup_{[a,b]}^{*}$ is the ``until" operator with an Allen interval algebra extension, where $[a,b]$ is a real positive closed interval and $*\in\{b, o, d, \equiv, m, s, f\}$ is one of the seven temporal relationships defined in the Allen interval algebra. The spatial operators are ``parent" $\mathbf{P}_A$, ``child" $\mathbf{C}_A$, and ``neighbor" $\mathbf{N}_A^*$, where $A$ denotes the set of nodes on which they operate. As with the until operator, $*\in\{b, o, d, \equiv, m, s, f\}$. \end{definition}
\begin{remark} We only consider a subclass of CA and IA relations, namely convex IA relations. Convex CA relations are composed exclusively of convex IA relations, which are defined in \cite{ligozat1996new}. For example, $\{p, m, o\}$ is a convex IA relation while $\{p, o\}$ is not. It has been shown that spatial reasoning on convex relations can be solved in polynomial time using constraint programming, while reasoning with the full CA/IA expressiveness is NP-complete. \end{remark}
The parent operator $\mathbf{P}_A$ describes the behavior of the parent of the current node. The child operator $\mathbf{C}_A$ describes the behavior of the children of the current node in the set $A$. The neighbor operator $\mathbf{N}_A^*$ describes the behavior of the neighbors of the current node in the set $A$. Before giving the semantics of GSTL, we first define an interpretation function. The interpretation function $\iota(\mu,x(v,t)): AP\times D \rightarrow \mathbb{R}$ interprets the spatial temporal signal as a number based on the given atomic predicate $\mu$. The qualitative semantics of GSTL formulas is given as follows.
\begin{definition}[GSTL Qualitative Semantics] The validity of a GSTL formula $\varphi$ with respect to a spatial temporal signal $x(v,t)$ at time $t$ and node $v$ is defined inductively as follows.
\begin{enumerate} \item $x(v,t)\models \mu$, if and only if $\iota(\mu,x(v,t))>0$; \item $x(v,t)\models \neg\varphi$, if and only if $\neg(x(v,t)\models \varphi)$; \item $x(v,t)\models \varphi\land\psi$, if and only if $x(v,t)\models \varphi$ and $x(v,t)\models \psi$; \item $x(v,t)\models \varphi\lor\psi$, if and only if $x(v,t)\models \varphi$ or $x(v,t)\models \psi$; \item $x(v,t)\models \Box_{[a,b]}\varphi$, if and only if $\forall t'\in[t+a,t+b]$, $x(v,t')\models \varphi$; \item $x(v,t)\models \Diamond_{[a,b]}\varphi$, if and only if $\exists t'\in[t+a,t+b]$, $x(v,t')\models \varphi$. \end{enumerate} The until operator with the interval algebra extension is defined as follows. \begin{enumerate} \item $({\bf x},t_k)\models \varphi\sqcup_{[a,b]}^b\psi$, if and only if $({\bf x},t_k)\models\Box_{[a,b]}\neg(\varphi\vee\psi)$ and $\exists t_1<a,~\exists t_2>b$ such that $({\bf x},t_k)\models\Box_{[t_1,a]}(\varphi\wedge\neg\psi)\wedge\Box_{[b,t_2]}(\neg\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup_{[a,b]}^o\psi$, if and only if $({\bf x},t_k)\models\Box_{[a,b]}(\varphi\wedge\psi)$ and $\exists t_1<a,~\exists t_2>b$ such that $({\bf x},t_k)\models\Box_{[t_1,a]}(\varphi\wedge\neg\psi)\wedge\Box_{[b,t_2]}(\neg\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup_{[a,b]}^d\psi$, if and only if $({\bf x},t_k)\models\Box_{[a,b]}(\varphi\wedge\psi)$ and $\exists t_1<a,~\exists t_2>b$ such that $({\bf x},t_k)\models\Box_{[t_1,a]}(\neg\varphi\wedge\psi)\wedge\Box_{[b,t_2]}(\neg\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup_{[a,b]}^\equiv\psi$, if and only if $({\bf x},t_k)\models\Box_{[a,b]}(\varphi\wedge\psi)$ and $\exists t_1<a,~\exists t_2>b$ such that $({\bf x},t_k)\models\Box_{[t_1,a]}(\neg\varphi\wedge\neg\psi)\wedge\Box_{[b,t_2]}(\neg\varphi\wedge\neg\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup^m\psi$, if and only if $\exists t_1<t<t_2$ such that $({\bf x},t_k)\models\Box_{[t_1,t]}(\varphi\wedge\neg\psi)\wedge\Box_{[t,t_2]}(\neg\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup^s\psi$, if and only if $\exists t_1<t<t_2$ such that $({\bf x},t_k)\models\Box_{[t_1,t]}(\neg\varphi\wedge\neg\psi)\wedge\Box_{[t,t_2]}(\varphi\wedge\psi)$; \item $({\bf x},t_k)\models \varphi\sqcup^f\psi$, if and only if $\exists t_1<t<t_2$ such that $({\bf x},t_k)\models\Box_{[t_1,t]}(\varphi\wedge\psi)\wedge\Box_{[t,t_2]}(\neg\varphi\wedge\neg\psi)$. \end{enumerate} The spatial operators are defined as follows.
\begin{enumerate} \item $x(v,t)\models \mathbf{P}_A\tau$, if and only if $\forall v_p\in A,~x(v_p,t)\models \tau$ where $v_p$ is the parent of $v$; \item $x(v,t)\models \mathbf{C}_{A}\tau$, if and only if $\forall v_c\in A,~x(v_c,t)\models \tau$ where $v_c$ is a child of $v$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle b, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$ where $v_n$ is a neighbor of $v$ and $v_n[x^+]<v[x^-]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle o, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$ where $v_n$ is a neighbor of $v$ and $v_n[x^-]<v[x^-]<v_n[x^+]<v[x^+]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle d, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$ where $v_n$ is a neighbor of $v$ and $v_n[x^-]<v[x^-]<v[x^+]<v_n[x^+]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle \equiv, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$ where $v_n$ is a neighbor of $v$ and $v_n[x^-]=v[x^-],~v[x^+]=v_n[x^+]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle m, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$ where $v_n$ is a neighbor of $v$ and $v_n[x^+]=v[x^-]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle s, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$ where $v_n$ is a neighbor of $v$ and $v_n[x^-]=v[x^-]$; \item $x(v,t)\models \mathbf{N}_{A}^{\left \langle f, *, * \right \rangle}\tau$, if and only if $\forall v_n \in A,~x(v_n,t)\models \tau$ where $v_n$ is a neighbor of $v$ and $v_n[x^+]=v[x^+]$. \end{enumerate} \end{definition}
where $v[x^-]$ and $v[x^+]$ denote the lower and upper limits of node $v$ in the x-direction. Definitions for the neighbor operator in the y- and z-directions are omitted for simplicity. Notice that the inverse relations in IA and CA can easily be defined by changing the order of the two GSTL formulas involved, e.g., $\varphi\sqcup_{[a,b]}^{o^{-1}}\psi\Leftrightarrow\psi\sqcup_{[a,b]}^o\varphi$. As usual, $\varphi_1\vee\varphi_2$, $\varphi_1\rightarrow\varphi_2$, and $\varphi_1\leftrightarrow\varphi_2$ abbreviate $\neg(\neg\varphi_1\wedge\neg\varphi_2)$, $\neg\varphi_1\vee\varphi_2$, and $(\varphi_1\rightarrow\varphi_2)\wedge(\varphi_2\rightarrow\varphi_1)$, respectively. We further define six derived spatial operators $\mathbf{P}_{\exists}\tau$, $\mathbf{P}_{\forall}\tau$, $\mathbf{C}_{\exists}\tau$, $\mathbf{C}_{\forall}\tau$, $\mathbf{N}_{\exists}^{\left \langle *, *, * \right \rangle}\tau$, and $\mathbf{N}_{\forall}^{\left \langle *, *, * \right \rangle}\tau$ based on the definitions above:
\begin{align*} &\mathbf{P}_{\exists}\tau=\vee_{i=1}^{n_p}\mathbf{P}_{A_i}\tau, ~\mathbf{P}_{\forall}\tau=\wedge_{i=1}^{n_p}\mathbf{P}_{A_i}\tau,~A_i=\{v_{p,i}\},\\ &\mathbf{C}_{\exists}\tau=\vee_{i=1}^{n_c}\mathbf{C}_{A_i}\tau, ~\mathbf{C}_{\forall}\tau=\wedge_{i=1}^{n_c}\mathbf{C}_{A_i}\tau,~A_i=\{v_{c,i}\},\\ &\mathbf{N}_{\exists}^{\left \langle *, *, * \right \rangle}\tau=\vee_{i=1}^{n_n}\mathbf{N}_{A_i}^{\left \langle *, *, * \right \rangle}\tau, ~\mathbf{N}_{\forall}^{\left \langle *, *, * \right \rangle}\tau=\wedge_{i=1}^{n_n}\mathbf{N}_{A_i}^{\left \langle *, *, * \right \rangle}\tau,~A_i=\{v_{n,i}\}, \end{align*}
where $v_{p,i}$, $v_{c,i}$, and $v_{n,i}$ are the parents, children, and neighbors of $v$, respectively, and $n_p$, $n_c$, $n_n$ are the numbers of parents, children, and neighbors of $v$, respectively. Note that the existential versions are disjunctions over singleton sets $A_i$ and the universal versions are conjunctions, consistent with the $\forall$-semantics of $\mathbf{P}_A$, $\mathbf{C}_A$, and $\mathbf{N}_A$ above.
The until operator with the Allen interval algebra extension is illustrated in Fig. \ref{Enrich until operator with IA}. In Fig. \ref{Relations in CA}, we can use the neighbor operator defined above to represent the spatial relations between $X$ and $Y$ as $X\models\mathbf{N}_\exists^{\left \langle b, b, o \right \rangle}Y$ and $X\models\mathbf{N}_\exists^{\left \langle e, e, m \right \rangle}Y$. \begin{figure}[H] \centering \includegraphics[scale=0.3]{until_with_IA.pdf} \caption{Enriching the until operator with IA} \label{Enrich until operator with IA} \end{figure} We investigate the expressiveness and tractability of the proposed spatial temporal logic and compare it with several classic temporal and spatial logics, namely $S4_u$, $RCC-8$, STL, and $S4_u\times LTL$. First, we compare the expressiveness with the existing spatial logics $S4_u$ and $RCC-8$. $RCC-8$ \cite{smith1992algebraic} was introduced in geographical information systems as a decidable subset of the region connection calculus ($RCC$). $RCC-8$ studies region variables and defines eight binary relations among the variables. If we assume regions are rectangular in 2D or cubic in 3D, all eight relations can be expressed by the neighbor operator in GSTL. For example, the equal relation $EQ(a,b)$ in $RCC-8$ can be represented as $a\models\mathbf{N}_\exists^{\left \langle e, e, e \right \rangle}b$. The parent/child relations (e.g., $fork\models\mathbf{P}_\exists tools$) and directional information (e.g., left and right) in GSTL cannot be expressed by $RCC-8$. $S4_u$ is a well-known propositional modal logic which has strictly larger expressive power than $RCC-8$ \cite{kontchakov2007spatial}. All four atoms (subset, negation, conjunction, and disjunction) in $S4_u$ formulas can be expressed by GSTL. However, the interior and closure operators in spatial terms of $S4_u$ cannot be expressed in GSTL. Similar to $RCC-8$, directional information in GSTL cannot be expressed by $S4_u$. Next, we compare GSTL with the popular temporal logic STL \cite{raman2015reactive}. As mentioned before, GSTL is defined by extending STL and enriching the until operator with interval algebra. Compared to STL, GSTL is able to express more relations between two temporal intervals, such as overlap and during, which cannot be expressed by STL. Finally, we compare GSTL with existing spatial temporal logics, including $S4_u\times LTL$, SpaTeL, and STREL. $S4_u\times LTL$ is defined by combining two modal logics, where no restrictions are placed on the interaction of the spatial and temporal predicates. Due to this freedom in combining spatial and temporal operators, $S4_u\times LTL$ enjoys powerful expressiveness, where the spatial terms can change over time; it can express facts such as ``an egg will eventually turn into a chicken". However, even though $S4_u$ and LTL are decidable, the satisfiability problem for $S4_u\times LTL$ is not. Compared to $S4_u\times LTL$, GSTL has less expressive power in the sense that spatial terms cannot change over time; however, the satisfiability problem for GSTL is decidable, as discussed below. One of the fundamental problems for any logic is the satisfiability of a finite set of formulas. Specifically, given a set of GSTL formulas, we are particularly interested in deciding whether they are satisfiable, i.e., consistent, by finding a spatial temporal model which satisfies all spatial temporal constraints specified by the GSTL formulas.
The satisfiability problem is important, as the deduction problem discussed in the following section can often be transformed into a satisfiability problem. The most important algorithmic property of the satisfiability problem is its computational complexity. We show that the satisfiability problem for GSTL is decidable. First, we adopt the following assumption, which is reasonable for applications such as cognitive robots.
\begin{assumption}[Domain closure] The only objects in the domain are those representable using the existing symbols, which do not change over time. \end{assumption}
\begin{theorem} The satisfiability problem for the proposed GSTL is decidable, and the finite sets of satisfiable constraints are recursively enumerable. \label{theorem:complexity} \end{theorem}
\begin{proof} From Definition \ref{definition: GSTL syntax}, we can see that the interaction between spatial operators and temporal operators is rather limited: temporal operators are not allowed in any spatial term. Thus, for every GSTL formula $\varphi$ we can construct a new formula $\varphi^*$ by replacing every occurrence of a spatial sub-formula $\tau$ ($\mathbf{P}_A\tau, ~ \mathbf{C}_A \tau, ~ \mathbf{N}_A^{\left \langle *, *, * \right \rangle} \tau$) in $\varphi$, as shown in \eqref{complexity proof 1}, with a new propositional variable $\mu_\tau$. We then obtain a formula without spatial operators, as shown below; it is a bounded STL formula with the interval algebra extension. \begin{equation} \varphi = \mu_\tau ~|~ \neg \varphi ~|~ \varphi_1\wedge\varphi_2 ~|~ \varphi_1\vee\varphi_2 ~|~ \Box_{[a,b]} \varphi~| ~ \varphi_1\sqcup_{[a,b]}^{*}\varphi_2. \label{GSTL*} \end{equation} Now the problem reduces to the satisfiability of a bounded STL formula with the interval algebra extension. We formulate it as a constraint satisfaction problem by applying a Boolean encoding recursively according to the definition of GSTL \cite{liu2017distributed}. Denote by $z^{\varphi}$ the binary variable for the corresponding GSTL formula $\varphi$. 1) For a GSTL formula $\varphi=\mu_r$, we define a binary variable $z^{\varphi}$ which equals 1 if and only if $\varphi=\mu_r=\top$. 2) For a GSTL formula with negation, $\varphi=\neg\phi$, we have the constraint $z^{\varphi}=1-z^{\phi}$. 3) For $\varphi=\varphi_1\wedge\varphi_2$, we have the Boolean constraints $z^{\varphi}\leq z^{\varphi_1}$, $z^{\varphi}\leq z^{\varphi_2}$, and $z^{\varphi}\geq z^{\varphi_1}+z^{\varphi_2}-1$. 4) For $\varphi=\varphi_1\vee\varphi_2$, we have the Boolean constraints $z^{\varphi}\geq z^{\varphi_1}$, $z^{\varphi}\geq z^{\varphi_2}$, and $z^{\varphi}\leq z^{\varphi_1}+z^{\varphi_2}$. 5) For a GSTL formula with the ``always" operator, $\varphi=\Box_{[a,b]}\phi$, we have $z^\varphi=\wedge_{i=a}^bz^{\phi_i}$, where $z^{\phi_i}$ represents $\phi$ at time $i\in[a,b]$. 6) For a GSTL formula with the until operator extended with the interval algebra, we give the encoding procedure for $\sqcup_{[a,b]}^o$; the rest can be encoded as Boolean constraints using the same procedure. $\varphi_1\sqcup_{[a,b]}^{o}\varphi_2$ can be encoded as $z_i^{\varphi_1}=z_i^{\varphi_2}=z_{a-1}^{\varphi_1}=z_{b+1}^{\varphi_2}=1,~i\in [a,b]$, and $z_{a-1}^{\varphi_2}=z_{b+1}^{\varphi_1}=0$, where the binary variables $z_i^{\varphi_1}$ and $z_i^{\varphi_2}$ equal 1 if and only if formulas $\varphi_1$ and $\varphi_2$ hold true at time $i$, respectively. From the above encoding procedure, we can see that the Boolean encoding results in a constraint satisfaction problem over a finite domain. Since all variables are Boolean, the problem is equivalent to the Boolean satisfiability problem (SAT). The complexity of SAT is NP-complete according to the Cook–Levin theorem \cite{cook1971complexity}; in particular, the problem is decidable. Existing heuristic SAT algorithms are able to solve formulas consisting of millions of symbols, which is sufficient for many practical SAT problems. \end{proof}
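To illustrate the Boolean encoding in the proof, the sketch below builds the 0/1 constraints for a small formula and checks satisfiability by brute force; the naming scheme and the brute-force check are expository assumptions (in practice the constraints would be handed to a SAT solver such as PicoSAT \cite{biere2008picosat}).
\begin{verbatim}
from itertools import product

# Boolean encoding of Box_{[0,2]} p together with p U^o_{[1,2]} q,
# following items 3), 5), and 6) of the proof; z[("p", i)] is the 0/1
# variable for p at time i. The brute-force search stands in for a
# SAT solver; names are illustrative.
TIMES = range(0, 4)
VARS = [(name, i) for name in ("p", "q") for i in TIMES]

def constraints(z):
    cs = []
    cs.append(all(z[("p", i)] for i in range(0, 3)))   # Box_{[0,2]} p
    cs.append(all(z[("p", i)] and z[("q", i)]          # overlap core
                  for i in (1, 2)))
    cs.append(z[("p", 0)] == 1 and z[("q", 0)] == 0)   # p alone before
    cs.append(z[("q", 3)] == 1 and z[("p", 3)] == 0)   # q alone after
    return all(cs)

def satisfiable():
    for bits in product((0, 1), repeat=len(VARS)):
        z = dict(zip(VARS, bits))
        if constraints(z):
            return z
    return None

model = satisfiable()
print("satisfiable:", model is not None)
\end{verbatim}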
From the above encoding procedure, we can see that the Boolean encoding results in a constraint satisfaction problem over a finite domain. Since all variables are Boolean, the problem is equivalent to the Boolean satisfiability problem (SAT). The complexity of SAT is NP-complete by the Cook--Levin theorem \cite{cook1971complexity}; in particular, the problem is decidable. Existing heuristic SAT algorithms are able to solve formulas consisting of millions of symbols, which is sufficient for many practical SAT problems.
\end{proof}
\begin{remark}
A formula is in conjunctive normal form (CNF) if it is a conjunction of one or more clauses, where a clause is a disjunction of literals. It is worth pointing out that, by the above proof, any GSTL formula can be reformulated in CNF. This property will be exploited in the automatic task planner section.
\end{remark}
\begin{remark}
The restriction that no temporal operators are allowed in spatial terms is reasonable for robotics, since predicates normally represent objects such as cups and bowls, and we do not expect a cup to change into a bowl over time. Thus no temporal operator is needed in the spatial term, and this restriction is adopted throughout.
\end{remark}
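To illustrate the encoding in the proof of Theorem \ref{theorem:complexity}, the following Python sketch, which is a minimal illustration of ours (the tuple representation of formulas and the horizon are hypothetical), evaluates the Boolean constraints of cases 1)--6) over a finite horizon and decides satisfiability by brute force; a practical implementation would instead pass the constraints to an off-the-shelf SAT solver.
\begin{verbatim}
from itertools import product

H = 6                    # finite time horizon 0..H-1
ATOMS = ["mu"]           # propositional variables mu_tau from the proof

def holds(phi, sigma, t):
    """Evaluate an encoded GSTL* formula phi at time t; sigma maps
    (atom, time) -> {0,1}, mirroring the binary variables z."""
    op = phi[0]
    if op == "atom":                       # case 1)
        return sigma[(phi[1], t)]
    if op == "not":                        # case 2)
        return 1 - holds(phi[1], sigma, t)
    if op == "and":                        # case 3)
        return holds(phi[1], sigma, t) & holds(phi[2], sigma, t)
    if op == "or":                         # case 4)
        return holds(phi[1], sigma, t) | holds(phi[2], sigma, t)
    if op == "always":                     # case 5): Box_{[a,b]}
        _, a, b, psi = phi
        return int(all(holds(psi, sigma, i) for i in range(t+a, t+b+1)))
    if op == "until_o":                    # case 6): overlap until
        _, a, b, p1, p2 = phi
        return int(all(holds(p1, sigma, i) and holds(p2, sigma, i)
                       for i in range(t+a, t+b+1))
                   and holds(p1, sigma, t+a-1) and not holds(p2, sigma, t+a-1)
                   and holds(p2, sigma, t+b+1) and not holds(p1, sigma, t+b+1))
    raise ValueError(op)

def satisfiable(phi, t0=1):
    """Brute-force check over all assignments on the finite domain."""
    keys = [(m, t) for m in ATOMS for t in range(H)]
    return any(holds(phi, dict(zip(keys, bits)), t0)
               for bits in product((0, 1), repeat=len(keys)))

mu = ("atom", "mu")
print(satisfiable(("always", 0, 2, mu)))        # True
print(satisfiable(("and", mu, ("not", mu))))    # False
\end{verbatim}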
\section{Deduction system}\label{Section: deduction system}
In the previous section, we introduced GSTL and showed that it is suitable for knowledge representation. This section develops automated reasoning for GSTL. In particular, we adopt a Hilbert style axiomatization, in which the proof system is composed of a set of axioms and several inference rules. The axioms are generated through a predefined set of axiom schemas, defined as follows.
\begin{definition}[Axiom schemas]
An axiom schema is an axiom template which represents infinitely many specific instances of axioms, obtained by replacing its variables with syntactically valid formulas. The variables ranging over formulas are called schematic variables.
\end{definition}
For example, for a set of atomic propositions in a signature $L=\{a,b,c,...\}$, we can obtain the axiom $(\Box_{[0,\infty)}\neg a \wedge b)\rightarrow (\neg a \wedge b)$ from the axiom schema $\Box_{[0,\infty)}\varphi\rightarrow\varphi$. We denote a set of axiom schemas by $Z$. The axioms we can obtain depend on both $Z$ and the signature $L$.
\subsection{Axiomatization}
We define the axiomatization with the set of axiom schemas and inference rules given below. The axiom schemas fall into three groups: propositional logic (P), temporal logic (T), and spatial logic (S). P1 to P10 are axiom schemas from propositional logic, T1 to T5 are axiom schemas for temporal logic, and S1 to S6 are axiom schemas for spatial logic.
\begin{equation}
\begin{aligned}
&P1~\neg\neg\varphi \Rightarrow \varphi,\\
&P2~\varphi_1,\varphi_2 \Leftrightarrow \varphi_1\wedge\varphi_2,\\
&P3~\varphi_i \Rightarrow \varphi_1\vee\varphi_2,\\
&P4~\varphi_1\rightarrow(\varphi_2\rightarrow\varphi_1),\\
&P5~(\phi \rightarrow(\psi \rightarrow \xi)) \rightarrow((\phi \rightarrow \psi) \rightarrow(\phi \rightarrow \xi)),\\
&P6~(\neg \phi \rightarrow \neg \psi) \rightarrow(\psi \rightarrow \phi),\\
&P7~\varphi_1\vee\varphi_2, \varphi_1\rightarrow\varphi_3, \varphi_2\rightarrow\varphi_3 \Rightarrow \varphi_3,\\
&P8~\varphi_1\rightarrow\varphi_2, \varphi_2\rightarrow\varphi_1 \Leftrightarrow \varphi_1\leftrightarrow\varphi_2,\\
&P9~\neg(\varphi_1\wedge\varphi_2)\Leftrightarrow (\neg\varphi_1)\vee(\neg\varphi_2),\\
&P10~\neg(\varphi_1\vee\varphi_2)\Leftrightarrow (\neg\varphi_1)\wedge(\neg\varphi_2),\\
&T1~\Box_{[a,b]}(\varphi_1\rightarrow\varphi_2)\Rightarrow \Box_{[a,b]}\varphi_1\rightarrow\Box_{[a,b]}\varphi_2,\\
&T2~\Box_{[a,b]}(\varphi_1\wedge\varphi_2)\Leftrightarrow\Box_{[a,b]}\varphi_1 \wedge \Box_{[a,b]}\varphi_2,\\
&T3~\Diamond_{[a,b]}(\varphi_1\wedge\varphi_2)\Rightarrow\Diamond_{[a,b]}\varphi_1 \wedge \Diamond_{[a,b]}\varphi_2\Rightarrow \Diamond_{[a,b]}(\varphi_1\vee\varphi_2),\\
&T4~(\varphi_1\wedge\varphi_2)\sqcup_{[a,b]}^*\varphi_3 \Leftrightarrow \varphi_1\sqcup_{[a,b]}^*\varphi_3 \wedge \varphi_2\sqcup_{[a,b]}^*\varphi_3,\\
&T5~\varphi_1\sqcup_{[a,b]}^*(\varphi_2\wedge\varphi_3) \Leftrightarrow \varphi_1\sqcup_{[a,b]}^*\varphi_2 \wedge \varphi_1\sqcup_{[a,b]}^*\varphi_3,\\
&S1~\mathbf{P}_A (\varphi_1\wedge\varphi_2) \Leftrightarrow \mathbf{P}_A \varphi_1 \wedge \mathbf{P}_A \varphi_2,\\
&S2~\mathbf{P}_A (\varphi_1\vee\varphi_2) \Leftrightarrow \mathbf{P}_A \varphi_1 \vee \mathbf{P}_A \varphi_2,\\
&S3~\mathbf{C}_A (\varphi_1\wedge\varphi_2) \Leftrightarrow \mathbf{C}_A \varphi_1 \wedge \mathbf{C}_A \varphi_2,\\
&S4~\mathbf{C}_A (\varphi_1\vee\varphi_2) \Leftrightarrow \mathbf{C}_A \varphi_1 \vee \mathbf{C}_A \varphi_2,\\
&S5~\mathbf{N}_A^{\left \langle *, *, * \right \rangle} (\varphi_1\wedge\varphi_2) \Leftrightarrow \mathbf{N}_A^{\left \langle *, *, * \right \rangle} \varphi_1 \wedge \mathbf{N}_A^{\left \langle *, *, * \right \rangle} \varphi_2,\\
&S6~\mathbf{N}_A^{\left \langle *, *, * \right \rangle} (\varphi_1\vee\varphi_2) \Leftrightarrow \mathbf{N}_A^{\left \langle *, *, * \right \rangle} \varphi_1 \vee \mathbf{N}_A^{\left \langle *, *, * \right \rangle} \varphi_2.
\end{aligned}
\label{axiom schemas}
\end{equation}
The inference rules are:
\begin{equation}
\begin{aligned}
&\textbf{Modus ponens}~ \frac{\varphi_1,\varphi_1\rightarrow\varphi_2}{\varphi_2},\\
&\textbf{IRR (irreflexivity)}~ \frac{\mu\wedge \Box_{[a,b]}\neg \mu\rightarrow\varphi}{\varphi}, \text{for all formulas $\varphi$ and atoms $\mu$ not appearing in $\varphi$}.
\end{aligned}
\label{inference rules}
\end{equation}
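As a concrete illustration of how axiom schemas generate axioms and how proofs are checked, the following Python sketch, which is ours and uses a hypothetical tuple encoding of formulas, recognizes substitution instances of schema P4 and validates a candidate proof line by line against the premises, the axioms, and Modus ponens.
\begin{verbatim}
# Sketch: formulas as nested tuples, e.g. ("->", "a", ("->", "b", "a")).

def is_p4_instance(f):
    """Schema P4: phi1 -> (phi2 -> phi1). Any formula of this shape is an
    axiom; a full checker would test every schema P1-S6 in the same way."""
    return (isinstance(f, tuple) and len(f) == 3 and f[0] == "->"
            and isinstance(f[2], tuple) and len(f[2]) == 3
            and f[2][0] == "->" and f[2][2] == f[1])

def check_proof(lines, premises):
    """Each line must be a premise, an axiom, or follow from two earlier
    lines by Modus ponens (h and h -> f yield f)."""
    for k, f in enumerate(lines):
        ok = f in premises or is_p4_instance(f)
        ok = ok or any(("->", h, f) in lines[:k] for h in lines[:k])
        if not ok:
            return False
    return True

a, b = "a", "b"
proof = [a,                          # premise
         ("->", a, ("->", b, a)),    # instance of P4
         ("->", b, a)]               # Modus ponens on the two lines above
assert check_proof(proof, premises={a})
\end{verbatim}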
\subsection{Properties of the deduction system}
Next, we show that the deduction system with the axiomatization defined above is sound and complete. First, we define a proof for GSTL inferences.
\begin{definition}[Proof]
A proof in GSTL is a finite sequence of GSTL formulas $\varphi_1,\varphi_2,...,\varphi_n$ in which each formula is an axiom, or there exist $i,j<k$ such that $\varphi_k$ is the conclusion derived from $\varphi_i$ and $\varphi_j$ using the inference rules of the axiomatization. The GSTL formula $\varphi_n$ is the conclusion of the proof, and $n$ is the length of the proof.
\end{definition}
Now we discuss the soundness of the proposed deduction system. Informally, soundness means that every formula derivable from a set of formulas $\Sigma$ holds in all worlds that are possible given $\Sigma$. Formally, the inference system is sound if, whenever a well-formed formula $\varphi$ is derivable from a set $\Sigma$ of well-formed formulas, every truth assignment that satisfies all the formulas in $\Sigma$ also satisfies $\varphi$.
\begin{theorem}[Soundness]\label{theorem:soundness}
The above axiomatization is sound for the given spatial and temporal model.
\end{theorem}
\begin{proof}
This follows from the fact that all axiom schemas from propositional, temporal, and spatial logic are valid and all rules preserve validity. To prove soundness, we establish the following statement: given a set of GSTL formulas $\Sigma$, any formula $\phi$ which can be inferred in the deduction system above is correct, meaning that every truth assignment satisfying $\Sigma$ also satisfies $\phi$. The proof is by induction. First, if $\phi\in \Sigma$, then $\phi$ is trivially correct. Second, if $\phi$ is an instance of one of the axiom schemas, then $\phi$ is also correct, since all axiom schemas defined in \eqref{axiom schemas} preserve semantic implication by the semantic definition of GSTL. It remains to prove that the inference rules are sound. Assume that any $\phi_i$ provable from $\Sigma$ in $n$ steps of a proof is implied by $\Sigma$. For each possible application of an inference rule at step $n+1$, leading to a new formula $\phi_j$, we show that $\phi_j$ is implied by $\Sigma$.

If Modus ponens is applied, then $\phi_i=\varphi_1\wedge(\varphi_1\rightarrow\varphi_2)$ and $\phi_j=\varphi_2$. Let $N=(T,<,h)$ be a structure where $(T,<)$ is a flow of time with a connected partial ordering and $h:L\rightarrow T$ is the valuation function. Assuming $N\models\phi_i$, we need to prove that $N\models\phi_j$. According to the semantic definition of GSTL, $N\models\phi_i\Leftrightarrow N\models\varphi_1 ~\text{and}~ N\models\varphi_1\rightarrow\varphi_2$. Since $\varphi_1\rightarrow\varphi_2\Leftrightarrow\neg\varphi_1\vee\varphi_2$, we obtain $N\models\varphi_2$, which proves that Modus ponens is sound.

If IRR is applied, then $\phi_i=(\mu\wedge\Box_{[a,b]}\neg\mu)\rightarrow\varphi$ and $\phi_j=\varphi$. Assuming $N\models\phi_i$, we need to prove that $N\models\phi_j$. We define a new structure $N'=(T,<,h')$ where $h'(\mu)=\{t\}$ and $h'(\xi)=h(\xi)$ for all $\xi\neq\mu$. Since $t\in h'(\mu)$, $h'(\xi)=h(\xi)$ for all $\xi\neq\mu$, and the flow of time $(T,<)$ is unchanged, we have $N'\models\phi_i$ by the semantic definition of GSTL. Thus the following holds:
\begin{equation}
\begin{aligned}
N'&\models(\mu\wedge\Box_{[a,b]}\neg\mu)\rightarrow\varphi \\
&\models \neg(\mu\wedge\Box_{[a,b]}\neg\mu)\vee\varphi\\
&\models \neg\mu \vee \Diamond_{[a,b]}\mu \vee \varphi.
\end{aligned}
\end{equation}
Since $N'\models\Diamond_{[a,b]}\mu$ would require some $t'\in [t+a,t+b]$ with $N'\models\mu(t')$, while no such $t'$ lies in $h'(\mu)$, we have $N'\not\models\Diamond_{[a,b]}\mu$. It is also clear that $N'\models\mu$. Thus $N'\models\varphi$ must hold, which proves that the inference rule IRR is also sound. This completes the proof.
\end{proof}
Before we discuss completeness, we give several definitions that will be used later.
\begin{definition}
A theory $\Delta$ proves $\varphi$, denoted $\Delta\vdash\varphi$, if for some finite $\Delta_0\subseteq\Delta$, $\vdash(\wedge\Delta_0)\rightarrow\varphi$. Denote by $Z$ a set of schemas. A theory $\Delta$ is said to be $Z$-consistent, or simply consistent if $Z$ is understood, if and only if $\Delta\not\vdash\bot$. Let $L$ be a signature containing all atomic propositions. A theory $\Delta$ is a complete $L$-theory if for every formula $\varphi$, either $\varphi\in\Delta$ or $\neg\varphi\in\Delta$. $\Delta$ is an IRR-theory if for some atom $\mu$, $\mu\wedge\Box_{[a,b]}\neg\mu\in\Delta$, and whenever $\Diamond_{[a,b]}(\varphi_1\wedge(\varphi_2\wedge ... \Diamond_{[a,b]}\varphi_m)...)\in\Delta$, also $\Diamond_{[a,b]}(\varphi_1\wedge(\varphi_2\wedge ... \Diamond_{[a,b]}(\varphi_m\wedge\mu\wedge\Box_{[a,b]}\neg\mu))...)\in\Delta$.
\end{definition}
We first prove the following lemma.
\begin{lemma}
Let $\Sigma$ be a $Z$-consistent $L$-theory, and let $L\subseteq L^*$ be an extension of $L$ by a countably infinite set of atoms. Then there is a $Z$-consistent complete IRR $L^*$-theory $\Delta$ containing $\Sigma$.
\label{lemma1}
\end{lemma}
\begin{proof}
We prove the lemma by inductively building an increasing chain of $Z$-consistent $L^*$-theories $\Delta_i$, $i<\omega$; the union $\cup_i\Delta_i$ is then the desired $Z$-consistent complete IRR $L^*$-theory. Let $\mu$ be an atom of $L^*\setminus L$; then $\mu$ does not appear in $\Sigma$. According to the corollary in \cite{gabbay1990axiomatization}, $\Sigma\cup\{\mu\wedge\Box_{[a,b]}\neg\mu\}$ is $Z$-consistent. Set $\Delta_0=\Sigma\cup\{\mu\wedge\Box_{[a,b]}\neg\mu\}$. Enumerate all $L^*$-formulas as $\varphi_0,\varphi_1,....$ Assume $\Delta_i$ has been constructed and is $Z$-consistent. We extend $\Delta_i$ by the following steps.
\begin{enumerate}
\item If $\Delta_i\cup\{\varphi_i\}$ is not $Z$-consistent, then $\Delta_{i+1}=\Delta_i\cup\{\neg\varphi_i\}$. $\Delta_{i+1}$ is $Z$-consistent since $\Delta_i$ is $Z$-consistent by our assumption.
\item If $\Delta_i\cup\{\varphi_i\}$ is $Z$-consistent and $\varphi_i$ is not of the form $\Diamond_{[a,b]}(\phi_1\wedge(\phi_2\wedge ... \Diamond_{[a,b]}\phi_m)...)$, then $\Delta_{i+1}=\Delta_i\cup\{\varphi_i\}$.
\item If $\Delta_i\cup\{\varphi_i\}$ is $Z$-consistent and $\varphi_i$ is of the form $\Diamond_{[a,b]}(\phi_1\wedge(\phi_2\wedge ... \Diamond_{[a,b]}\phi_m)...)$, then $\Delta_{i+1}=\Delta_i\cup\{\varphi_i,\Diamond_{[a,b]}(\phi_1\wedge(\phi_2\wedge ... \Diamond_{[a,b]}(\phi_m\wedge\mu\wedge\Box_{[a,b]}\neg\mu))...)\}$. The same corollary in \cite{gabbay1990axiomatization} guarantees that this is $Z$-consistent.
\end{enumerate}
By the definition of an IRR-theory, $\Delta=\cup_{i<\omega}\Delta_i$ is a $Z$-consistent complete IRR theory.
\end{proof}
Another lemma is needed to prove completeness.
\begin{lemma}
Define an $L$-structure $N=(T,\sqsubset,v)$, where $\sqsubset$ is a binary relation \cite{gabbay1990axiomatization} on the set of all complete $Z$-consistent IRR $L$-theories defined by
\begin{equation}
\Delta_1\sqsubset\Delta_2 \text{~if and only if for all $L$-formulas, if~} \Box_{[a,b]}\varphi\in\Delta_1, \text{then~} \varphi\in\Delta_2,
\end{equation}
and the valuation function $v:L\rightarrow T$ is defined by
\begin{equation}
v(\mu)=\{\Delta\in T:\mu\in\Delta\}, \forall \mu \in L.
\end{equation}
For all formulas $\varphi$ and all $\Delta\in T$, we have
\begin{equation}
N\models\varphi(\Delta) \Leftrightarrow \varphi\in\Delta.
\end{equation}
\label{lemma2}
\end{lemma}
\begin{proof}
The proof is by induction on the structure of $\varphi$. If $\varphi=\mu$, the lemma holds by the definition of $v(\mu)$. If $\varphi=\neg\mu$ or $\varphi=\mu_1\wedge\mu_2$, the lemma holds by the completeness of $\Delta$. Now assume that $\varphi$ satisfies $N\models\varphi(\Delta) \Leftrightarrow \varphi\in\Delta$. We first prove that the lemma holds for $\Box_{[a,b]}\varphi$. 1) If $N\models\Box_{[a,b]}\varphi(\Delta_1)$, then there is $\Delta_2\in T$ with $\Delta_1\sqsubset\Delta_2$ and $N\models\varphi(\Delta_2)$. Since $\varphi$ satisfies the inductive hypothesis, $\varphi\in\Delta_2$. If $\neg\Box_{[a,b]}\varphi\in\Delta_1$, then by the definition of $\sqsubset$ we would have $\neg\varphi\in\Delta_2$, contradicting the above. Thus $\Box_{[a,b]}\varphi\in\Delta_1$. 2) Now assume that $\Box_{[a,b]}\varphi\in\Delta_1$. Then there is $\Delta_2\in S$ with $\Delta_1\sqsubset\Delta_2$ and $\varphi\in\Delta_2$, where $S$ is the set of all complete $Z$-consistent IRR $L$-theories. Since $\Delta_2\in T$ and $N\models\varphi(\Delta_2)$ by the inductive hypothesis, we get $N\models\Box_{[a,b]}\varphi(\Delta_1)$ by the semantic definition of $\Box_{[a,b]}\varphi$. As for the until operator with the IA extension, we prove the case $\varphi_1\sqcup_{[a,b]}^o\varphi_2$ as an example and leave the rest to the reader; they can all be proved by the same procedure. By the semantic definition of $\sqcup_{[a,b]}^o$, $\varphi_1\sqcup_{[a,b]}^o\varphi_2=\Box_{[a,b]}(\varphi_1\wedge\varphi_2)\wedge \Box_{[t_1,a]}(\varphi_1\wedge\neg\varphi_2)\wedge \Box_{[b,t_2]}(\neg\varphi_1\wedge\varphi_2)$ for some $t_1<a<b<t_2$. As we have already shown the lemma for $\varphi_1\wedge\varphi_2$ and $\Box_{[a,b]}\varphi$, it holds for $\Box_{[a,b]}(\varphi_1\wedge\varphi_2)\wedge \Box_{[t_1,a]}(\varphi_1\wedge\neg\varphi_2)\wedge \Box_{[b,t_2]}(\neg\varphi_1\wedge\varphi_2)$ as well.
\end{proof}
\begin{definition}
A spatial temporal structure $N=(T,<,h)$ is special with respect to a signature $L$ if $N$ is an IRR structure and, for all $t,u\in N$: $t=u$ if and only if for all $L$-formulas $\varphi$, $N\models\varphi(t)\Leftrightarrow N\models\varphi(u)$; and $u\in[t+a,t+b]$ if and only if for all $L$-formulas $\varphi$, $N\models\Box_{[a,b]}\varphi(t)$ implies $N\models\varphi(u)$.
\end{definition}
Now we can state the following theorem.
\begin{theorem}
Let $L$ be a countably infinite signature and $\Sigma$ a $Z$-consistent $L$-theory. Then there are a countable $L^*$ with $L\subseteq L^*$ and a special $L^*$-structure $N$ such that every $L^*$-axiom of $Z$ is valid in $N$, and $N$ is a model of $\Sigma$.
\label{theorem1}
\end{theorem}
\begin{proof}
The proof is based on Lemma \ref{lemma1} and Lemma \ref{lemma2}. Assume $\Sigma$ is $Z$-consistent. Take $L^*$ to be $L$ augmented with countably infinitely many new atoms. Let $S$ be the set of all $Z$-consistent complete IRR $L^*$-theories. By Lemma \ref{lemma1}, there is $\Delta\in S$ containing $\Sigma$. We define an $L$-structure $N=(T,\sqsubset,v)$ as in Lemma \ref{lemma2}, where $T$ is the $\approx$-class of $\Delta$. Then, by Lemma \ref{lemma2}, $N$ is IRR, and since all instances of $Z$-schemas are in every $\Delta\in S$, they are all valid in $N$.
Since any two theories in $T$ are distinct (for example, $(\varphi_1\wedge\varphi_2)\wedge\varphi_3$ and $\varphi_1\wedge(\varphi_2\wedge\varphi_3)$ are distinct) and the ordering on $T$ is $\sqsubset$, $N$ is special by the previous definition.
\end{proof}
An inference system for a logic is complete with respect to its semantics if it produces a proof for each provable statement: if the semantics of a set of GSTL formulas $\Sigma$ implies $\varphi$, then $\varphi$ can be proved from $\Sigma$. We formally define completeness as follows.
\begin{definition}
Let $K$ be a class of flows of time. The inference system composed of the axiom schemas $Z$ and the inference rules is said to be complete for $K$ if, for all theories $\Sigma$, $\Sigma$ is $Z$-consistent if and only if there is an IRR-structure $N$ with flow of time in $K$ such that $N$ is a model of $\Sigma$.
\end{definition}
The inference system satisfies the following property.
\begin{theorem}[Completeness]
The inference system for GSTL is complete for the class $K$ of all flows of time with an IRR-structure $N$ such that $N$ is a model of $\Sigma$.
\end{theorem}
\begin{proof}
By the soundness Theorem \ref{theorem:soundness}, if $\Sigma$ has a model then it is consistent. By Theorem \ref{theorem1}, if $\Sigma$ is consistent then it has an IRR model. Thus the deduction system is complete.
\end{proof}
\begin{remark}
The compactness theorem, which states that a theory has a model if each of its finite subsets does, holds for first-order logic and for temporal logic with flow of time $\mathbb{Q}$, but it fails for temporal logic over $\mathbb{R}$, $\mathbb{N}$, and $\mathbb{Z}$ \cite{gabbay1990axiomatization}. Since strong completeness implies compactness, we only discuss the weak completeness theorem in the cases of $\mathbb{N}$ and $\mathbb{Z}$, where, given appropriate schemas, any consistent formula has a model with the appropriate flow of time \cite{gabbay1990axiomatization}. Strong completeness is equivalent to frame completeness together with compactness in universal modal logic. Temporal logic over the flow of real time, by contrast, enjoys weak completeness, being finitely complete and expressively complete, but it does not have compactness.
\end{remark}
\subsection{Implementation of the deduction system}
The deduction system is implemented through constraint programming, a common way of implementing logic deduction systems. As most deduction problems can be transformed into consistency checking problems, we demonstrate how to solve a consistency checking problem by constraint programming based on the deduction system defined above. Assume we have $n$ GSTL formulas $\Sigma=\{\phi_1,\phi_2,...,\phi_n\}$ whose truth values are known, and we aim to check whether $\Sigma$ is consistent. We define a set of binary variables $z^{\phi_i}$ which equal 1 if and only if the corresponding GSTL formulas are true. Based on the deduction system, we obtain a set of binary relations between pairs of GSTL formulas in $\Sigma$; denote such a relation by $R_{i,j}(\phi_i,\phi_j)$, where $\phi_i$ and $\phi_j$ are GSTL formulas. The relation $R_{i,j}$ states the logic constraint between $\phi_i$ and $\phi_j$, which can be encoded as a constraint on the corresponding binary variables, such as $z^{\phi_i}=z^{\phi_j}$ for $\phi_i \Leftrightarrow\phi_j$ or $z^{\phi_i}\leq z^{\phi_j}$ for $\phi_i\rightarrow\phi_j$. Thus we Boolean-encode all GSTL formulas in $\Sigma$ and model all relations $R_{i,j}$ as a set of constraints on the binary variables $z^{\phi_i}$, so that the problem becomes a constraint program. The constraint programming solver then decides whether a solution exists: if at least one assignment of the binary variables satisfies all constraints, then $\Sigma$ is consistent; if no feasible solution can be found, then there are conflicts among the formulas in $\Sigma$.
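As a small illustration of ours (the formula names and relations are hypothetical), the following Python sketch encodes three GSTL formulas and their relations $R_{i,j}$ as constraints on binary variables and checks consistency by enumerating assignments; a real implementation would use a constraint programming or SAT solver instead.
\begin{verbatim}
from itertools import product

FORMULAS = ["phi1", "phi2", "phi3"]

# Relations R_{i,j} encoded on the binary variables, as in the text:
#   phi_i -> phi_j   becomes  z_i <= z_j
#   phi_i <-> phi_j  becomes  z_i == z_j
CONSTRAINTS = [
    lambda z: z["phi1"] <= z["phi2"],   # R_{1,2}: phi1 -> phi2
    lambda z: z["phi2"] == z["phi3"],   # R_{2,3}: phi2 <-> phi3
    lambda z: z["phi1"] == 1,           # phi1 is known to be true
]

def consistent():
    """Sigma is consistent iff some 0/1 assignment satisfies every R_{i,j}."""
    for bits in product((0, 1), repeat=len(FORMULAS)):
        z = dict(zip(FORMULAS, bits))
        if all(c(z) for c in CONSTRAINTS):
            return True
    return False

print(consistent())   # True, witnessed by z = {phi1: 1, phi2: 1, phi3: 1}
\end{verbatim}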
\section{Automatic task planner based on knowledge base}\label{section:task planning}
In this section, we develop a task planner that lets cognitive robots generate a detailed task plan from a vague task assignment. We first introduce the knowledge base, which stores common sense for cognitive robots. We then introduce the automatic task planner, composed of a proposer and a verifier. Finally, we give an overall framework for cognitive robots that combines the task planner and the knowledge base.
\subsection{Knowledge base}
The proposed GSTL formulas can be used to represent knowledge for cognitive robots. Conversely, robots need a body of knowledge, or common sense, to solve a new task assignment. We therefore define a knowledge base for cognitive robots which stores the information and common sense robots need to solve a new task. In the knowledge base, the temporal parameters $a$ and $b$ in \eqref{GSTL*} are not fixed; the knowledge base is a set of parametric GSTL formulas, defined as follows.
\begin{definition}
A knowledge base $\Sigma$ is a set of parametric GSTL formulas which satisfies the following conditions.
\begin{itemize}
\item Consistency: $\forall \varphi_i,\varphi_j\in \Sigma$, there exists a set of parameters such that $\varphi_i\wedge\varphi_j$ is true.
\item Minimality: $\forall \varphi_i\in\Sigma$, there does not exist a set of parameters such that $\Sigma\setminus \{\varphi_i\}\Rightarrow\varphi_i$ is true.
\end{itemize}
\end{definition}
For example, the set $\Sigma$ consisting of the following GSTL formulas is a knowledge base:
\begin{equation}
\begin{aligned}
&\Box_{[t_1,t_2]}\mathbf{C}_\exists (utensils \wedge \mathbf{C}_\exists (cup\vee plate \vee fork \vee spoon)) \\
&\Box_{[t_1,t_2]}\mathbf{C}_\exists (cupboard \wedge \mathbf{C}_\exists (flour \vee baking powder \vee salt \vee sugar)) \\
&\Box_{[t_1,t_2]}\mathbf{C}_\exists^2 (door \wedge \mathbf{N}_\exists^{\left \langle d, d, m \right \rangle} handle),\\
&\Box_{[t_1,t_2]}\mathbf{C}_\exists^2 ( \mathbf{P}_\exists utensils \wedge \mathbf{N}_\exists^{\left \langle *, *, * \right \rangle} (table \vee hand \vee plate)),\\
&\varphi_1= \Box_{[t_1,t_2]}\mathbf{C}_\exists^2 (hand \wedge \mathbf{N}_\exists^{\left \langle *, *, * \right \rangle} cup),\\
&\varphi_2= \Box_{[t_1,t_2]}\mathbf{C}_\exists^2 (cup \wedge \mathbf{N}_\exists^{\left \langle d, d, m \right \rangle} table),\\
&\varphi_2 \sqcup_{[a,b]}^o\varphi_1 \sqcup_{[c,d]}^o \varphi_2.
\end{aligned}
\end{equation}
It states common sense, such as the fact that utensils include cups, plates, forks, and spoons, and action primitives, such as a hand grabbing a cup from a table and putting it back after use.
The following set is not a knowledge base, as it violates the minimality condition:
\begin{equation}
\begin{aligned}
&\Box_{[t_1,t_2]}\mathbf{C}_\exists (cup \wedge \mathbf{N}_\exists^{\left \langle *, *, * \right \rangle} hand) \\
&\Box_{[t_1,t_2]}\mathbf{C}_\exists (cup \wedge \mathbf{N}_\exists^{\left \langle *, *, * \right \rangle} (hand\vee table)).
\end{aligned}
\end{equation}
\begin{remark}[Modular reasoning]
Our knowledge base inherits the hierarchical structure of the hierarchical graph in the GSTL spatial model. This is an important feature and can be used to reduce the complexity of the deduction system significantly. Knowledge bases for real-world applications often exhibit a modular structure, in the sense that the knowledge base contains multiple sets of facts with relatively little connection to one another \cite{lifschitz2008knowledge}. For example, a knowledge base for a kitchen and a bathroom will include two sets of relatively self-contained facts with a few connections, such as taps and switches. A deduction system which takes advantage of this modularity is more efficient, since it reduces the search space and provides fewer irrelevant results. Existing work on exploiting the structure of a knowledge base for automated reasoning can be found in \cite{amir2005partition}.
\end{remark}
\subsection{Control synthesis}
The task planner takes environment information from sensors and common sense from the knowledge base, and turns a vague task assignment into a detailed task plan. For example, we may ask a robot to set up a dining table: a camera provides the current table setup, and the knowledge base stores the actions the robot can take. The goal of the task planner is to generate the sequence of actions the robot must take so that the dining table is set up as required. Specifically, we implement the task planner as two interacting components, namely a proposer and a verifier. The proposer first proposes a plan based on the knowledge base and its situation awareness. The verifier then checks the feasibility of the proposed plan against the knowledge base. If the plan is not feasible, the verifier generates a counter-example and passes it to the proposer for re-planning. For example, if the initial plan generated by the proposer includes ``hand touches hot water" while the knowledge base specifies the constraint ``hand cannot touch hot materials", the verifier will find the conflict and inform the proposer that ``hand cannot touch hot water" must be respected in the re-planning; the proposer may then come up with a new plan in which the hand uses a cup to hold the hot water. If the new plan turns out to be feasible, the verifier outputs it to the robot for execution. The task planner may be invoked again if the situation changes during execution.
\subsubsection{Proposer}
Although the counterexample-guided design is not new, implementing the proposer and verifier is highly nontrivial. For the proposer, we cast the planning as a path planning problem on a transition system $M=(\mathcal{S}, \mathcal{A},T)$, as shown in Fig. \ref{exp:task planning}, where the state space $\mathcal{S}$ is discrete and each state $s\in \mathcal{S}$ represents a GSTL atomic proposition that holds true in the current situation.
The initial state $s_0$ corresponds to a point or a set of points in $\mathcal{S}$, while the target state $s^*$ in $\mathcal{S}$ corresponds to the accomplishment of the task. The actions $a$ available to the robot are given in $\mathcal{A}$. The transition function $T: \mathcal{S}\times \mathcal{A} \rightarrow \mathcal{S}$, mapping one state to another, is triggered by an action $a\in \mathcal{A}$ that the robot can take. An example is given in Fig. \ref{exp:task planning example}, where the goal is to set up the dinner table as shown in the right figure. The initial states $s_1,s_2,s_3$, target states $t_1,t_2,t_3$, and available actions $a_1,a_2,a_3,a_4$ are given as follows.
\begin{align*}
&s_1=\mathbf{C}^2_{\exists}(cup\wedge\mathbf{N}_{\exists}^{back}plate),~ s_2=\mathbf{C}^2_{\exists}(fork\wedge\mathbf{N}_{\exists}^{left}knife),~ s_3=\mathbf{C}^2_{\exists}(knife\wedge\mathbf{N}_{\exists}^{left}fork),\\
&t_1=\mathbf{C}^2_{\exists}(cup\wedge\mathbf{N}_{\exists}^{top}plate),~ t_2=\mathbf{C}^2_{\exists}(fork\wedge\mathbf{N}_{\exists}^{left}plate),~ t_3=\mathbf{C}^2_{\exists}(knife\wedge\mathbf{N}_{\exists}^{right}plate).\\
&a_1=(\Box_{[t_1,t_2]}s_1)\sqcup_{[a_1,b_2]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}cup)\sqcup_{[a_2,b_2]}(\Box_{[t_3,t_4]}t_1),~\\
&a_2=(\Box_{[t_1,t_2]}s_2)\sqcup_{[a_1,b_2]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}fork)\sqcup_{[a_2,b_2]}(\Box_{[t_3,t_4]}t_2),~\\
&a_3=(\Box_{[t_1,t_2]}s_3)\sqcup_{[a_1,b_2]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}knife)\sqcup_{[a_2,b_2]}(\Box_{[t_3,t_4]}t_3),~\\
&a_4=(\Box_{[t_1,t_2]}s_4)\sqcup_{[a_1,b_2]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{{\left \langle *, *, * \right \rangle}}plate)\sqcup_{[a_2,b_2]}(\Box_{[t_3,t_4]}t_2),~ s_4=\mathbf{C}^2_{\exists}(fork\wedge\mathbf{N}_{\exists}^{left}empty).
\end{align*}
\begin{figure}
\centering
\includegraphics[scale=0.6]{task_planning_example.pdf}
\caption{An example of automatic task planning for dining table setting. The left figure is the initial table setup. The right figure is the target table setup.}
\label{exp:task planning example}
\end{figure}
The goal of the proposer is to find an ordered set of actions which transforms any state satisfying the initial states into a state satisfying the target states. It is worth pointing out that this transition system is not given to the robot a priori; the robot needs to generate the transition system and the sequence of actions by utilizing information in the knowledge base. Similar to task planning in GRAPHPLAN \cite{weld1999recent}, the proposer generates a potential solution in two steps, namely forward transition system expansion and backward solution extraction. In the transition system expansion, we expand the transition system forward in time until the current state level includes all target states and none of them are mutually exclusive. To expand the transition system, we start with the initial states and apply the available actions to the states; the resulting states given by the transition function $T$ become the new current states. We then label all mutually exclusive (mutex) relations among actions and states. Two actions are mutex if they satisfy one of the following conditions: 1) the effect of one action is the negation of the effect of the other; 2) the effect of one action is the negation of the other action's precondition; 3) their preconditions are mutex.
We say two states are mutex if all of their supporting actions are pairwise mutex. For the current state level, if all target states are included and there are no mutexes among them, we move to the solution extraction phase, as a solution may exist in the current transition system. The procedure is summarized in Algorithm \ref{algorithm:proposer}, and a compact sketch of the expansion phase is given after the algorithm. We use the example of Fig. \ref{exp:task planning example} and Fig. \ref{exp:task planning} to illustrate the expansion. Initially, we have three states $s_1,s_2,\text{and}~s_3$ and three available actions $a_1,a_2,\text{and}~a_3$. We expand the transition system by generating states $t_1,s_4,\text{and}~\neg s_2$ from applying $a_1$ to $s_1$, states $t_2~\text{and}~\neg s_3$ from applying $a_2$ to $s_2$, and state $t_3$ from applying $a_3$ to $s_3$. $a_1$ and $a_2$ are mutex since $a_1$ generates the negation of the precondition of $a_2$; $a_2$ and $a_3$ are mutex for the same reason. Consequently, $t_1$ and $t_2$ are mutex since their supporting actions $a_1$ and $a_2$ are mutex. All mutex relations are labeled with red dashed lines in Fig. \ref{exp:task planning}. Even though the current state level includes all target states, we need to expand the transition system further, as $t_1$ and $t_2$ are mutex. Note that states are carried over to the next level when no action is applied to them. We expand the second state level by the same procedure and label all mutexes with red dashed lines. In the third state level, $t_1$ and $t_2$ are no longer mutex, since both have ``no action" as a supporting action and these are not mutex. Since the third state level includes all target states with no mutexes among them, we move to the backward solution extraction.

In the solution extraction phase, we extract a solution backward, starting from the current state level. For each target state $t_i^k$ at the current state level $k$, denote the set of its supporting actions by $\mathcal{A}_{t_i^{k}}$. We choose one action from each $\mathcal{A}_{t_i^{k}}$, with no mutex relations allowed among the chosen actions, to form a candidate solution set $\mathcal{A}^k$ at this step, and denote the precondition states of the selected actions by $\mathcal{S}_{pre}^{k}$. We then check whether the precondition states have mutex relations. If so, we terminate the search on this $\mathcal{A}^k$ and choose another candidate set of actions, until all possible combinations have been enumerated. If no mutex relations are detected in $\mathcal{S}_{pre}^{k}$, we repeat the above backtracking step until a mutex is found or $\mathcal{S}_{pre}^{k}$ includes all initial states. The solution extraction procedure is also summarized in Algorithm \ref{algorithm:proposer}.

Again, we use Fig. \ref{exp:task planning example} to illustrate the backward solution extraction. From the forward expansion phase, we obtain a transition system with three state levels and two action levels. From Fig. \ref{exp:task planning}, the available actions for $t_1$ at state level 3 are $\mathcal{A}_{t_1^3}=\{a_1,\varnothing\}$, those for $t_2$ are $\mathcal{A}_{t_2^3}=\{a_2,\varnothing,a_4\}$, and those for $t_3$ are $\mathcal{A}_{t_3^3}=\{a_3,\varnothing\}$, where $a_1$ and $a_2$, $a_2$ and $a_3$, $a_2$ and $a_4$, and $a_3$ and $a_4$ are mutex.
Thus, all possible candidate solutions at action level 2 are $\mathcal{A}^k=\{\{a_1,\varnothing,a_3\},\{a_1,\varnothing,\varnothing\},\{a_1,a_4,\varnothing\},\{\varnothing,a_2,\varnothing\},\{\varnothing,\varnothing,a_3\},\{\varnothing,\varnothing,\varnothing\},\{\varnothing,a_4,\varnothing\}\}$. Take $\mathcal{A}^k=\{a_1,\varnothing,a_3\}$ as an example: its precondition set at state level 2 is $\{s_1,t_2,s_3\}$. Since $s_3$ and $t_2$ are mutex, $\mathcal{A}^k=\{a_1,\varnothing,a_3\}$ is not a feasible solution. In fact, there is no feasible solution in the current transition system. When the solution extraction phase enumerates all candidates without finding a feasible solution, we return to the expansion phase and grow the transition system further. In the example of Fig. \ref{exp:task planning}, after growing another level of actions and states, the solution extraction phase finds two possible ordered action sequences, $a_3,a_2,a_1$ and $a_1,a_4,a_3$, highlighted with green and purple lines in Fig. \ref{exp:task planning}, respectively. The results are then tested by the verifier to make sure the plan is executable.
\begin{figure}
\centering
\includegraphics[scale=0.26]{task_planning.pdf}
\caption{Automatic task planning based on graph growing and solution extraction.}
\label{exp:task planning}
\end{figure}
\begin{algorithm}[H]
\SetAlgoLined
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Observed events $s_1,s_2,...,s_n$, available actions $a_1,a_2,...,a_l$, target states $t_1,t_2,...,t_m$;}
\Output{A sequence of actions $a_1,a_2,...,a_k$;}
\BlankLine
Initialization: $s^0=\{s_1^0,s_2^0,...,s_n^0\}$ and $s^*=\{t_1,t_2,...,t_m\}$\;
\While{$\exists t_i\not\in s^k$ or $\exists t_i ~\text{and}~ t_j \in s^k$ which are mutex}{
For all $s_i^k$ at the current state level $k$, add $s_j^{k+1}$ into the next state level $k+1$ if $s_j^{k+1}\in \{s_i^k\times a_i\}$\;
\If{the effect of $a_i$ is the negation of the effect or the precondition of $a_j$, or the preconditions of $a_i$ and $a_j$ are mutex}{
Add a mutex link between $a_i$ and $a_j$\;
}
}
\If{$\forall a_i\in\mathcal{A}_{s_{i}^{k+1}}$ and $\forall a_j\in\mathcal{A}_{s_{j}^{k+1}}$, $a_i$ and $a_j$ are mutex}{Add a mutex link between $s_i^{k+1}$ and $s_j^{k+1}$\;}
$\forall t_i^k\in s^*$ at the current state level $k$, denote the set of its supporting actions as $\mathcal{A}_{t_i^{k}}$\;
Enumerate all possible combinations from all $\mathcal{A}_{t_i^{k}}$ by choosing one action from each set such that no mutex relations are allowed, and denote the candidate solution set as $\mathcal{A}^k$\;
\While{$\mathcal{A}^{k}$ is not empty}{
Pick one solution from $\mathcal{A}^k$\;
Denote the precondition states of the selected actions as $\mathcal{S}_{pre}^k$\;
\eIf{$\mathcal{S}_{pre}^k$ has no mutex states}{
\If{$\mathcal{S}_{pre}^k$ includes all initial states}{Output the ordered actions\;}
$\forall t_i^k\in \mathcal{S}_{pre}^k$, denote the set of its supporting actions as $\mathcal{A}_{t_i^{k-1}}$\;
Enumerate all possible combinations from all $\mathcal{A}_{t_i^{k-1}}$ by choosing one action from each set such that no mutex relations are allowed, and denote the candidate solution set as $\mathcal{A}^{k-1}$\;
Repeat the while loop for $\mathcal{A}^{k-1}$\;
}
{Discard the solution from $\mathcal{A}^k$ and pick a new solution from $\mathcal{A}^k$\;
}
}
\If{no feasible solution has been found by the second while loop}{Go to the first while loop and expand the transition system.}
\caption{Task planning for the proposer}
\label{algorithm:proposer}
\end{algorithm}
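The following Python sketch, which is ours (the action names, preconditions, and effects are those of the running example and are otherwise hypothetical), implements one forward expansion step of the proposer with the three mutex conditions; full state-mutex propagation and the backward extraction of Algorithm \ref{algorithm:proposer} are omitted for brevity.
\begin{verbatim}
# Sketch of one GRAPHPLAN-style expansion step with mutex detection.
ACTIONS = {
    # name: (preconditions, effects); "-x" denotes the negation of x
    "a1": ({"s1"}, {"t1", "s4", "-s2"}),
    "a2": ({"s2"}, {"t2", "-s3"}),
    "a3": ({"s3"}, {"t3"}),
}

def negate(literals):
    return {l[1:] if l.startswith("-") else "-" + l for l in literals}

def actions_mutex(a, b, state_mutex):
    pre_a, eff_a = ACTIONS[a]
    pre_b, eff_b = ACTIONS[b]
    if eff_a & negate(eff_b):                               # condition 1)
        return True
    if (eff_a & negate(pre_b)) or (eff_b & negate(pre_a)):  # condition 2)
        return True
    return any(frozenset((p, q)) in state_mutex             # condition 3)
               for p in pre_a for q in pre_b)

def expand(states, state_mutex):
    """Apply all applicable actions; return them, the next state level,
    and the action-mutex links of this level."""
    applicable = [a for a, (pre, _) in ACTIONS.items() if pre <= states]
    nxt = set(states)
    for a in applicable:
        nxt |= ACTIONS[a][1]
    mutex = {frozenset((a, b)) for a in applicable for b in applicable
             if a != b and actions_mutex(a, b, state_mutex)}
    return applicable, nxt, mutex

acts, level1, mutex = expand({"s1", "s2", "s3"}, set())
# As in the text: a1/a2 and a2/a3 are mutex by condition 2).
print(sorted(map(tuple, map(sorted, mutex))))
\end{verbatim}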
\subsubsection{Verifier}
Next, we introduce the implementation of the verifier and its interaction with the proposer. The verifier checks whether the plan generated by the proposer is executable given the constraints in the knowledge base and the information from sensors. The plan is executable if the verifier can find a set of parameters for the ordered actions given by the proposer while satisfying all constraints posed by the knowledge base and the sensors. If the plan is not executable, the verifier generates a counter-example and passes it to the proposer for re-planning. If the plan is executable, the verifier outputs the effective plan to the cognitive robot and adds it to the knowledge base.

The verifier is implemented through constraint programming. Assume we have a set of parametric GSTL formulas $\Sigma=\{\phi_1,\phi_2,...,\phi_n,\psi\}$, where the $\phi_i$ are conditions that need to be satisfied according to the knowledge base and the proposer, and $\psi$ is the new task assignment. The truth values of the formulas in $\Sigma$ are undecided. We aim to check whether the verifier can find a set of parameters such that $\Sigma$ holds true with respect to the constraints posed by the knowledge base and the proposer. As shown in the proof of Theorem \ref{theorem:complexity}, any GSTL formula can be reformulated in conjunctive normal form. Following the Boolean encoding procedure of that proof, we first eliminate all temporal operators by reformulating the parametric GSTL formulas in $\Sigma$ in the CNF form $\wedge_{j}^{p_1}(\vee_{i}^{p_2}\mu_{j,i}^*)$, where $\mu_{j,i}^*$ is a spatial term and $p_1$ and $p_2$ are temporal parameters of the GSTL formulas. Then, for each spatial term $\mu_{j,i}^*$, we reformulate it in the CNF form $\wedge(\vee \mu)$, where $\mu$ involves only spatial operators, using De Morgan's law P9 and S1 to S6 of the axiomatization \eqref{axiom schemas}. We thus obtain a set of logic constraints for the spatial terms $\mu_{j,i}^*$ in the CNF form $\wedge(\vee \mu)$, where the truth value of $\mu$ can be grounded through sensors or logic inference. The verifier can therefore determine the truth values of the spatial terms $\mu_{j,i}^*$ by calling a SAT solver, and further determine the truth values of the formulas in $\Sigma$ by solving another SAT problem for $\wedge_{j}^{p_1}(\vee_{i}^{p_2}\mu_{j,i}^*)$. The procedure is summarized in Algorithm \ref{algorithm:verfier}.

Let us continue the dining-table setup example. We write one of the ordered action sequences given by the proposer as the GSTL formula $\phi=a_3\sqcup_{[c_1,c_2]}^ba_2\sqcup_{[c_3,c_4]}^ba_1$. We assume the knowledge base requires that the robot needs 5 seconds to move the knife, fork, and cup, expressed by the following parametric GSTL formulas.
\begin{align*}
&a_3=\Box_{[t_0,t_0+a+5]}s_3\sqcup_{[t_0+a,t_0+a+5]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}knife)\sqcup_{[t_0+b,t_0+b+5]}^o\Box_{[t_0+b,t_0+b+5+\epsilon]}t_3,\\
&a_2=\Box_{[t_0,t_0+a+5]}s_2\sqcup_{[t_0+a,t_0+a+5]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}fork)\sqcup_{[t_0+b,t_0+b+5]}^o\Box_{[t_0+b,t_0+b+5+\epsilon]}t_2,\\
&a_1=\Box_{[t_0,t_0+a+5]}s_1\sqcup_{[t_0+a,t_0+a+5]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}cup)\sqcup_{[t_0+b,t_0+b+5]}^o\Box_{[t_0+b,t_0+b+5+\epsilon]}t_1.
\end{align*} The task assignment is to set up the table in 40 seconds, which can be represented as a GSTL formula $\psi=\Diamond_{[0,40]}(t_1\wedge t_2\wedge t_3)$. The job of the verifier is to find a set of values for $c_1,c_2,c_3,c_4$ and $t_0,a,b$ for each action such that $\psi$ is satisfied. Using the proposed algorithm, we first reformulate the GSTL formulas in CNF form. For example, $a_3$ can be reformulated as \begin{equation} \begin{aligned} &\left(\bigwedge^{t_0+a}_{i=t_0}s_3\right) \wedge \left(\bigwedge^{t_0+a+5}_{i=t_0+a}\left(s_3\wedge hand\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}knife\right)\right) \wedge \left(\bigwedge^{t_0+b+5}_{i=t_0+b}\left(t_3\wedge hand\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}knife\right)\right) \wedge \left(\bigwedge^{t_0+b+5+\epsilon}_{i=t_0+b+5}t_3\right). \end{aligned} \label{CNF1} \end{equation} As we can see from the above CNF form \eqref{CNF1}, temporal operators are replaced with conjunction operators with temporal parameters. To verify each clause in \eqref{CNF1}, we use the inference rules P9 and S1-S6 and obtain the following CNF form for $s_3$. \begin{equation} \begin{aligned} &s_3=\mathbf{C}_\exists^2(knife\wedge\mathbf{N}_\exists^{left}fork) =\bigvee_{j=1}^{n_j}\left(\varphi_j\wedge\phi_j\right) =\bigwedge\begin{pmatrix} \varphi_1\vee\phi_1 & \varphi_1\vee\phi_2 &... &\varphi_1\vee\phi_{n_j}\\ \varphi_2\vee\phi_1 & \varphi_2\vee\phi_2 &... &\varphi_2\vee\phi_{n_j} \\ \vdots&\vdots & ...&\vdots\\ \varphi_{n_j}\vee\phi_1 & \varphi_{n_j}\vee\phi_2 &... &\varphi_{n_j}\vee\phi_{n_j} \end{pmatrix} \end{aligned} \label{CNF2} \end{equation} where $\varphi_j=\bigvee_{i=1}^n\mathbf{C}_{A_j}\mathbf{C}_{A_i}knife$ and $\phi_j=\bigvee_{i=1}^{n_i}\bigvee_{k=1}^{n_k}\mathbf{C}_{A_j}\mathbf{C}_{A_i}\mathbf{N}_{A_k}^{left}fork$ whose truth values can be checked easily. One of the feasible solutions given by the solver could be \begin{align*} &\phi=a_3\sqcup_{[14,15]}^ba_2\sqcup_{[30,31]}^ba_1\\ &a_3=\Box_{[0,6]}s_3\sqcup_{[1,6]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}knife)\sqcup_{[7,12]}^o\Box_{[7,13]}t_3,\\ &a_2=\Box_{[16,22]}s_2\sqcup_{[17,22]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}fork)\sqcup_{[23,28]}^o\Box_{[23,29]}t_2,\\ &a_1=\Box_{[32,38]}s_1\sqcup_{[33,38]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}cup)\sqcup_{[39,44]}^o\Box_{[39,45]}t_1. \end{align*} If we assume the knowledge base requires that robots need 10 seconds to move a plate, with the GSTL formula $a_4=\Box_{[t_0,t_0+a+10]}s_4\sqcup_{[t_0+a,t_0+a+10]}^o\mathbf{C}_\exists^2(hand\wedge\mathbf{N}_\exists^{\left \langle *, *, * \right \rangle}plate)\sqcup_{[t_0+b,t_0+b+10]}^o\Box_{[t_0+b,t_0+b+10+\epsilon]}t_2$, then the verifier cannot find a feasible solution for the other ordered action sequence $\phi=a_1\sqcup_{[c_1,c_2]}^ba_4\sqcup_{[c_3,c_4]}^ba_3$ generated by the proposer such that the task assignment is accomplished within 40 seconds. The solver gives two possible outcomes. First, there exists a feasible solution and $\psi$ holds true. The plans generated by the proposer successfully solve the new task assignment. In this case, the verifier will output the effective plans to robots. Second, there is no feasible solution where $\psi$ holds true, which means the new task assignment cannot be accomplished and there are conflicts in the plans generated by the proposer.
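To make the verifier's parameter search concrete, the following Python sketch (our own simplification, not the constraint-programming implementation described above; the 5-second durations and the 40-second horizon are taken from the running example) brute-forces the start times of an ordered action sequence and either returns a satisfying assignment or reports infeasibility.
\begin{verbatim}
from itertools import product

HORIZON = 40                            # deadline from psi = Diamond_[0,40](t1 ^ t2 ^ t3)
DURATION = {"a3": 5, "a2": 5, "a1": 5}  # move times from the knowledge base

def feasible(order, horizon=HORIZON):
    # Brute-force start times so that each action finishes before the next
    # one starts and the last effect is achieved before the deadline
    # (a toy stand-in for the constraint-programming call).
    for starts in product(range(horizon + 1), repeat=len(order)):
        ends = [s + DURATION[name] for s, name in zip(starts, order)]
        in_order = all(ends[i] <= starts[i + 1] for i in range(len(order) - 1))
        if in_order and ends[-1] <= horizon:
            return dict(zip(order, starts))
    return None                         # infeasible: triggers a counter-example

print(feasible(["a3", "a2", "a1"]))     # -> {'a3': 0, 'a2': 5, 'a1': 10}
\end{verbatim}
When \texttt{feasible} returns \texttt{None}, the candidate ordering itself serves as the counter-example passed back to the proposer.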
As a modern constraint programming solver is able to locate the formulas which lead to the infeasibility, the verifier will inform the proposer which steps are not executable. The proposer will take the information as additional constraints and replan the transition system. For example, if the initial plans generated by the proposer include ``hand touch hot water" while the knowledge base specifies the constraint ``hand cannot touch hot materials", then the verifier will find the conflict and inform the proposer that ``hand cannot touch hot water" needs to be considered in the replanning. The proposer may come up with a new plan where the hand will use a cup to hold hot water. The verifier algorithm is summarized in Algorithm \ref{algorithm:verfier}. \begin{algorithm}[H] \SetAlgoLined \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{Ordered actions from the proposer $a_1,a_2,...$, task assignment $\psi$, observed events, knowledge base, and inference rules} \Output{An executable sequence of actions or a counter-example} \BlankLine Initialization: rewrite the ordered action plans as $\phi_i=a_1\sqcup_{[c_1,c_2]}^ba_2\sqcup_{[c_3,c_4]}^ba_3$ and denote $\Sigma=\{\phi_i,\psi\}$\; \While{there are $\Box_{[a,b]}$, $\Diamond_{[a,b]}$ and $\sqcup_{[a,b]}^*$ operators in formulas of $\Sigma$}{ for a GSTL formula $\varphi=\Box_{[a,b]}\phi$, we have $z^\varphi=\wedge_{i=a}^bz^{\phi_i}$ where $z^{\phi_i}$ represents $\phi$ at time $i\in[a,b]$\; for a GSTL formula with $\sqcup_{[a,b]}^o$, $\varphi_1\sqcup_{[a,b]}^{o}\varphi_2$ can be encoded as $z_i^{\varphi_1}=z_i^{\varphi_2}=z_{a-1}^{\varphi_1}=z_{b+1}^{\varphi_2}=1,i\in [a,b]$ and $z_{a-1}^{\varphi_2}=z_{b+1}^{\varphi_1}=0$ (other IA relations can be encoded in a similar way)\; } Rewrite all spatial terms $\mu_{j,i}^*$ in CNF form using inference rules P9 and S1-S6\; Solve the SAT problem for the logic constraints in CNF from the previous step\; Based on the truth assignment of the spatial terms $\mu_{j,i}^*$, find a set of parameters for the constraint programming problem of the GSTL formulas in $\Sigma$\; \eIf{a feasible solution has been found}{ output the executable ordered action plan\; }{ inform the proposer that the plan is infeasible\; } \caption{Verifier} \label{algorithm:verfier} \end{algorithm} \subsection{Overall framework} The overall framework is discussed in this section. Given a parametric knowledge base $\Sigma$, a current state, and a task assignment $\psi$, we aim to generate a detailed sequence of task plans such that the task assignment $\psi$ can be accomplished. We propose the framework summarized in Fig. \ref{framework} to solve the automatic task planning problem. In the framework, the proposer takes the current state as the initial states and the task assignment as the target states. Available actions, along with the preconditions and effects of taking those actions, are obtained from the knowledge base and used in expanding the transition system in the proposer. Ordered actions are generated by the backward solution extraction and passed to the verifier. The verifier takes the ordered actions from the proposer and verifies them based on the constraints posed by the knowledge base and the inference rules. If the actions are not executable, then it will inform the proposer that the current planning is infeasible. If the ordered actions are executable, they will be published for robots to implement.
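The interaction between the two components can be summarized by the following minimal Python sketch, where \texttt{propose} and \texttt{verify} are toy stand-ins for Algorithms \ref{algorithm:proposer} and \ref{algorithm:verfier}; only the counter-example feedback pattern is meant to be illustrated.
\begin{verbatim}
def plan(propose, verify, max_rounds=10):
    constraints = set()
    for _ in range(max_rounds):
        candidates = propose(constraints)       # proposer: ordered action sequences
        if not candidates:
            break                               # nothing left to try
        for candidate in candidates:
            ok, counterexample = verify(candidate)  # verifier: constraint program
            if ok:
                return candidate                # publish the executable plan
            constraints.add(counterexample)     # replan with the counter-example
    return None

# Toy instantiation: two candidate orderings, one violating a safety constraint.
plans = [("a1", "a4", "a3"), ("a3", "a2", "a1")]
propose = lambda cons: [p for p in plans if p not in cons]
verify = lambda p: (p != ("a1", "a4", "a3"), p)
print(plan(propose, verify))                    # -> ('a3', 'a2', 'a1')
\end{verbatim}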
\begin{figure}[H] \centering \includegraphics[scale=0.2]{framework2.pdf} \caption{An overall framework for the automatic task planning with the proposer and the verifier.} \label{framework} \end{figure}
\section{Conclusion}\label{Section: conclusion} Motivated by the development of cognitive robots and the limitations of existing spatial temporal logics, we propose a new graph-based spatial temporal logic with a new ``until" temporal operator and three spatial operators with an interval algebra extension. The satisfiability problem of the proposed GSTL is decidable. A Hilbert-style inference system is given with a set of axiom schemas and two inference rules. We prove that the corresponding deduction system is sound and complete and can be implemented through constraint programming. An automatic task planning framework is proposed with an interacting proposer and verifier. The proposer generates a sequence of ordered actions with unknown parameters. The verifier checks whether the plan is feasible and outputs executable task plans.
{ "timestamp": "2020-07-17T02:21:23", "yymm": "2007", "arxiv_id": "2007.08451", "language": "en", "url": "https://arxiv.org/abs/2007.08451" }
\section*{\large Appendices} \section{Proofs} \label{sec:app:proofs} Here we restate the Theorems and Propositions, as well as other mathematical claims appearing in the main text, and give their proofs. \subsection*{Theorem~\ref{prop:GAverageUnitaries}} \GAverageUnitaries* \begin{proof} Let $S$ be the operator over $\mathcal H \otimes \mathcal H'$ that swaps $\mathcal H$ with its replica $\mathcal H'$. Then for any operators $X,Y$ acting over $\mathcal H$ it holds that \begin{align} \label{eq:app:swap_trick} \Tr \left( X Y \right) = \Tr \left[ S (X \otimes Y) \right] , \end{align} as can be easily verified by expressing both sides in a basis. Notice that in our case, where $\mathcal H$ carries a bipartition, one can further decompose $S = S_{A A'} S_{B B'}$. Using the above identity the OTOC averaging in Eq.~\eqref{eq:defnition_G} can be written as \begin{align*} G(t) &= 1 - \frac{1}{d} \Real \int dV dW \Tr \left(S \, V_A^\dagger(t) W_B^\dagger \otimes V_A(t) W_B \right) \\ & = 1 - \frac{1}{d} \Real \int dV dW \Tr \left(S U_t^{\dagger \otimes 2} (V_A^\dagger \otimes V_A) U_t^{\otimes 2} (W_B^\dagger \otimes W_B) \right) \\ & = 1 - \frac{1}{d} \Real \Tr \left[ S U_t^{\dagger \otimes 2} \left( \int dV V_A^\dagger \otimes V_A \right) U_t^{\otimes 2} \left( \int dW W_B^\dagger \otimes W_B \right) \right] . \end{align*} Now the two independent averages can be easily performed since for unitary operators over $\mathcal H \cong \mathbb C^d$ the corresponding Haar integrals evaluate to \begin{align} \label{eq:app:u_avg_vectorized} \int dU U \otimes U^\dagger = \frac{S}{d} \end{align} where $S$ is again the swap operator over the doubled space. A quick way to prove the well-known identity~\eqref{eq:app:u_avg_vectorized} is by using Eq.~\eqref{eq:app:swap_trick} to write \begin{align*} U X U^\dagger = \tr_{\mathcal H'} \left[ (U \otimes U^\dagger)( X \otimes I) S \right] \end{align*} and then using the fact that \begin{align} \int dU U X U^\dagger = \frac{\Tr(X)}{d} \, I , \end{align} which follows directly from the left/right invariance of the Haar measure~\cite{watrous2018theory}. Using Eq.~\eqref{eq:app:u_avg_vectorized} twice, we get \begin{align*} G(t) &= 1 - \frac{1}{d} \Real \Tr \left( S U_t^{\dagger \otimes 2} \frac{S_{AA'}}{d_A} U_t^{\otimes 2} \frac{S_{BB'}}{d_B} \right) \\ & = 1 - \frac{1}{d^2} \Tr \left( S_{AA'} U_t^{\otimes 2} S_{AA'} U_t^{\dagger \otimes 2} \right) . \end{align*} Since $\left[ S , X^{\otimes 2} \right] = 0$ for all operators $X$, the analogous expression for $BB'$ holds, i.e., \begin{align} G(t) = 1 - \frac{1}{d^2} \Tr \left( S_{BB'} U_t^{\otimes 2} S_{BB'} U_t^{\dagger \otimes 2} \right). \end{align} \end{proof} Notice that the symmetry of the Haar measure forces the bipartite OTOC to be time reversal invariant, i.e., $G(t) = G(-t)$. Finally, we also note that there is a straightforward generalization of \autoref{prop:GAverageUnitaries} to any finite temperature thermal state. Following steps similar to those above, one gets for the thermal version of the bipartite OTOC \begin{align} G(t) = 1 - \frac{1}{d} \Real \Tr \left( (\rho_\beta \otimes I_{A'B'}) U_t^{\dagger\, \otimes 2} S_{AA'} U_t^{ \otimes 2} S_{AA'}\right). \end{align}
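As an independent sanity check of the closed form just derived, the following short Python sketch (our own illustration, for $d_A = d_B = 2$) compares Eq.~\eqref{eq:G_main} with the definition of Eq.~\eqref{eq:defnition_G}, where the Haar averages over $V$ and $W$ are replaced by the single-qubit Pauli ensemble; as discussed in Appendix~\ref{sec:app:measures}, a $1$-design suffices, so the two numbers agree up to floating-point error.
\begin{verbatim}
import numpy as np

dA = dB = 2
d = dA * dB

# S_{AA'} on H_A x H_B x H_A' x H_B': maps |a,b,a',b'> to |a',b,a,b'>
S_AA = np.zeros((d * d, d * d))
for a in range(dA):
    for b in range(dB):
        for ap in range(dA):
            for bp in range(dB):
                src = ((a * dB + b) * dA + ap) * dB + bp
                dst = ((ap * dB + b) * dA + a) * dB + bp
                S_AA[dst, src] = 1.0

rng = np.random.default_rng(0)
z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
U, _ = np.linalg.qr(z)                   # a generic unitary, standing in for U_t

# Closed form: G = 1 - Tr(S_{AA'} U^{(2)} S_{AA'} U^{(2) dagger}) / d^2
U2 = np.kron(U, U)
G_closed = 1 - np.real(np.trace(S_AA @ U2 @ S_AA @ U2.conj().T)) / d**2

# Definition, with each Haar average replaced by the Pauli 1-design
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
G_avg = 0.0
for V in paulis:
    for W in paulis:
        VAt = U.conj().T @ np.kron(V, np.eye(dB)) @ U    # V_A(t)
        WB = np.kron(np.eye(dA), W)
        G_avg += 1 - np.real(np.trace(VAt.conj().T @ WB.conj().T @ VAt @ WB)) / d
G_avg /= 16

print(G_closed, G_avg)                   # the two values coincide
\end{verbatim}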
\subsection*{Theorem~\ref{prop:entangling_power}} \EntanglingPower* Before giving the proof, let us first recall the definitions of operator entanglement~\cite{zanardi2001entanglement} and entangling power~\cite{zanardi2000entangling}. The main idea behind operator entanglement is to first express the unitary evolution $U$ (over the bipartite Hilbert space $\mathcal H_{AB}$) as a state in the doubled space $\mathcal H_{AB}\otimes \mathcal H_{A'B'}$ via \begin{align} \ket{U} = U \otimes I_{A'B'} \ket{\phi^+} \end{align} for the maximally entangled state $\ket{\phi^+} = \frac{1}{\sqrt{d}} \sum_{i=1}^d \ket{i}_{AB}\ket{i}_{A'B'}$ and then evaluate the linear entropy of the state $\sigma_U = \tr_{BB'} \left( \ket{U} \! \bra{U} \right)$, i.e., \begin{align} \label{eq:app_operator_entanglement} E_{\mathrm{op}} (U) \coloneqq S_{\mathrm{lin}} (\sigma_U) = 1 - \Tr (\sigma_U^2) . \end{align} The entangling power~\cite{zanardi2000entangling} of a quantum evolution $U$ over a bipartite quantum system $\mathcal H = \mathcal H_A \otimes \mathcal H_B$ is defined as the average entanglement that the evolution generates when acting on random separable pure states. More specifically, \begin{align} e_{\mathrm{P}} (U) \coloneqq \int dV dW E \left[ U \left( \ket{\psi_V}_A \ket{\psi_W}_B \right) \right], \end{align} where $\ket{\psi_V}_A = V \ket{\psi_0}_A$ corresponds to Haar random pure states over $A$ ($\ket{\psi_0}_A$ is an irrelevant reference state), and similarly for $B$, while $E(\ket{\psi_{AB}}) \coloneqq S_{\mathrm{lin}}\left( \tr_{ B}\ket{\psi_{AB}}\!\bra{\psi_{AB}} \right)$ is the entanglement of the resulting state, as measured by the linear entropy. \begin{proof} \textbf{(i)}~The key observation here is that the bipartite OTOC $G_{U_t}$, in the form of Eq.~\eqref{eq:G_main}, coincides with the operator entanglement $E_{\mathrm{op}}(U_t)$ as defined in Ref.~\cite{zanardi2001entanglement} (see Eq.~(6) therein). Evaluating the expression~\eqref{eq:app_operator_entanglement}, as in the proof of \autoref{prop:GAverageUnitaries}, one obtains exactly Eq.~\eqref{eq:G_main}, hence $E_{\mathrm{op}} (U_t) = G_{U_t}$. \textbf{(ii)} For the symmetric case $d_A = d_B$, the result follows by combining the first part of the current Theorem and Eq.~(12) of Ref.~\cite{zanardi2001entanglement}. Finally, we note that by direct substitution, one has $G_{S_{AB}} = 1 - 1/d$. \end{proof} \subsection*{Proposition~\ref{prop:GDoubleConcentration}} \GDoubleConcentration* The proof relies on measure concentration and, in particular, Levy's lemma, which we shall recall shortly (see, e.g.,~\cite{anderson2010introduction}). Below we are also going to use various operator (Schatten) $k$-norms~\cite{bhatia2013matrix}; the latter are defined as $\left\| X \right\|_k \coloneqq \left( \sum_i s^k_i \right)^{1/k}$ where $\{ s_i \}_i$ are the singular values of $X$. The case $\left\| X \right\|_\infty \coloneqq \max_i \left\{ s_i \right\}_i$ corresponds to the usual operator norm. For $k \ge l$, one always has $\| X \|_k \le \| X \|_l$. We also remind the reader that a function $f: U(d) \to \mathbb R$ is said to be Lipschitz continuous with constant $K$ if it satisfies \begin{align} \left| f(V) - f(W) \right| \le K \left\| V - W \right\|_2 \end{align} for all $V,W \in U(d)$. For brevity, in this section we denote the Haar averages as $\braket{(\cdot)}_U$ and also occasionally drop the explicit time dependence. \begin{theorem*}[Levy's lemma] Let $U \in U(d)$ be distributed according to the Haar measure and $f: U(d) \to \mathbb R$ be a Lipschitz continuous function. Then for any $\epsilon > 0$ \begin{align} \Prob\{ \left| f(U) - \braket{f(U)}_U \right| \ge \epsilon \} \le \exp \left( - \frac{d \epsilon^2}{4 K^2} \right) , \end{align} where $K$ is a Lipschitz constant.
\end{theorem*} During the course of the proof of \autoref{prop:GDoubleConcentration}, the following two continuity results will come in handy. \begin{lemma} \label{lemma:app:lipschitz} \begin{enumerate}[(i)] \item The function $f_{W}(V) : U(d_A) \to \mathbb R$ with $f_{W}(V) \coloneqq C_{V_A,W_B}(t) $ is Lipschitz continuous with constant $K_f = 2$ for all $t \in \mathbb R$ and $W \in U(d_B)$. \item The function $g(W) : U(d_B) \to \mathbb R$ with $g(W) \coloneqq \braket{C_{V_A,W_B}(t)}_{V} $ is Lipschitz continuous with constant $K_g = 2/d_A$ for all $t \in \mathbb R$. \end{enumerate} \end{lemma} \begin{proof}[Proof of lemma] \textbf{(i)} Let $X,Y \in U(d_A)$. We need to show that \begin{align*} \left| f_W(X) - f_W (Y) \right| \le K_f \left\| X - Y \right\|_2. \end{align*} Following the proof of \autoref{prop:GAverageUnitaries}, we can express \begin{align*} f_W(V) = 1 - \frac{1}{d} \Real \tr \left[ S U_t^{\dagger \otimes 2} (V_A^\dagger \otimes V_A) U_t^{\otimes 2} (W_B^\dagger \otimes W_B) \right] \end{align*} therefore \begin{align*} \left| f_W(X) - f_W (Y) \right| & \le \frac{1}{d} \left| \tr \left[ U_t^{ \otimes 2} (W_B^\dagger \otimes W_B) S U_t^{\dagger \otimes 2} (X_A^\dagger \otimes X_A - Y_A^\dagger \otimes Y_A) \right] \right| \\ & \le \frac{1}{d} \big\| X_A^\dagger \otimes X_A - Y_A^\dagger \otimes Y_A \big\|_1 , \end{align*} where in the last step we used the inequality $\left| \Tr \left( A B \right) \right| \le \left\| A \right\|_1 \left\| B \right\|_\infty$ and the fact that $\big\| U_t^{ \otimes 2} (W_B^\dagger \otimes W_B) S U_t^{\dagger \otimes 2} \big\|_\infty= 1$ since the operator within the norm is unitary. In order to express the last norm as a function of the difference $X_A - Y_A$, we first add and subtract $Y^\dagger_A \otimes X_A$ and then use the triangle inequality. This results in \begin{align*} \frac{1}{d} \big\| X_A^\dagger \otimes X_A - Y_A^\dagger \otimes Y_A \big\|_1 & \le \frac{1}{d} \left( \big\| (X_A^\dagger -Y_A^\dagger) \otimes X_A \big\|_1 + \big\| Y_A^\dagger \otimes (X_A - Y_A) \big\|_1 \right) \\ & \le \frac{1}{d} \left( \big\| X_A^\dagger - Y_A^\dagger \big\|_\infty \big\| I \otimes X_A \big\|_1 + \big\| X_A - Y_A \big\|_\infty \big\| Y_A^\dagger \otimes I \big\|_1 \right) \end{align*} where for the last step we utilized the inequality $\left\| A B \right\|_1 \le \left\| A \right\|_\infty \left\| B \right\|_1$. Now notice that $\big\| I \otimes X_A \big\|_1 = d$ since $X_A$ is unitary, and similarly for $\big\| Y_A^\dagger \otimes I \big\|_1$. Therefore we can bound \begin{align*} \left| f_W(X) - f_W (Y) \right| \le \big\| X_A - Y_A \big\|_\infty + \big\| X^\dagger_A - Y^\dagger_A \big\|_\infty \le 2 \big\| X_A - Y_A \big\|_\infty = 2 \big\| X - Y \big\|_\infty \le 2 \big\| X - Y \big\|_2 , \end{align*} from which clearly one can take $K_f = 2$. \\ \\ \textbf{(ii)} First notice that the Haar average over $V_A = V \otimes I_B$ can be performed, as was done in the proof of \autoref{prop:GAverageUnitaries}. The result is \begin{align*} g(W) &= 1 - \frac{1}{d} \Real \Tr \left[ S U_t^{\dagger \otimes 2} \frac{S_{AA'}}{d_A} U_t^{\otimes 2} W_B^\dagger \otimes W_B \right] \\ & = 1 - \frac{1}{d} \Real \Tr \left[ U_t^{\dagger \otimes 2} \frac{S_{BB'}}{d_A} U_t^{\otimes 2} W_B^\dagger \otimes W_B \right] .
\end{align*} Considering the relevant difference, we can bound \begin{align*} \left| g(X) - g (Y) \right| & \le \frac{1}{d_A} \frac{1}{d} \left| \Tr \left[ U_t^{\dagger \otimes 2} S_{BB'} U_t^{\otimes 2} (X_B^\dagger \otimes X_B - Y_B^\dagger \otimes Y_B ) \right] \right| \\ & \le \frac{1}{d_A} \frac{1}{d} \big\| X_B^\dagger \otimes X_B - Y_B^\dagger \otimes Y_B \big\|_1 . \end{align*} Now one can follow the exact same steps as in part (i); the result is identical except for the extra factor $1/d_A$ that carries through, which originates from the averaging. This results in \begin{align*} \left| g(X) - g (Y) \right| \le \frac{2}{d_A} \big\| X - Y \big\|_2 \end{align*} from which one can take $K_g = 2/d_A$. \end{proof} Everything is now in place to give the proof of \autoref{prop:GDoubleConcentration}. \begin{proof} Let $\epsilon > 0$. We want to show that, for $V \in U(d_A)$ and $W \in U(d_B)$ distributed independently according to the Haar measure, it holds that \begin{align*} \Prob \left( \gamma \ge \epsilon \right) \le 2 \exp\left( -\frac{\epsilon^2 d_{\max}}{64} \right) \end{align*} where $\gamma \coloneqq \left| C_{V_A,W_B} - G \right|$ and by definition $G = \braket{C_{V_A,W_B}}_{V,W}$. Let us consider any pair $V_A, W_B$ that satisfies $\epsilon \le \gamma$. Then, from the triangle inequality also \begin{align*} \epsilon \le \alpha + \beta, \end{align*} where we set $\alpha \coloneqq \left| C_{V_A,W_B} - \braket{C_{V_A,W_B}}_{V} \right| $ and $\beta \coloneqq \left| \braket{C_{V_A,W_B}}_{V} - G \right|$. Hence we have for the corresponding probabilities \begin{align*} \Prob \left\{ \gamma \ge \epsilon \right\} \le \Prob \left\{ \alpha + \beta \ge \epsilon \right\} . \end{align*} However, if $\alpha + \beta \ge \epsilon$ then necessarily $\alpha \ge \epsilon/2$ or $\beta \ge \epsilon/2$, therefore we also have \begin{align*} \Prob \left\{ \alpha + \beta \ge \epsilon \right\} \le \Prob \left( \left\{ \alpha \ge \epsilon / 2 \right\} \cup \left\{ \beta \ge \epsilon / 2 \right\} \right). \end{align*} Using the standard union bound over the last expression results in \begin{align} \label{eq:app:ineq_abc} \Prob \left\{ \gamma \ge \epsilon \right\} \le \Prob \left\{ \alpha \ge \epsilon / 2 \right\} + \Prob \left\{ \beta \ge \epsilon / 2 \right\}. \end{align} The two probabilities in Eq.~\eqref{eq:app:ineq_abc} can be bounded using Levy's lemma. For that, let us first define the auxiliary functions $f_{W}(V)$ and $g(W)$ as in \autoref{lemma:app:lipschitz}. Combining the Lipschitz continuity result from there with Levy's lemma, one gets the measure concentration bounds \begin{subequations} \label{eq:app:levy_bound_fg} \begin{align} \Prob_V \{ \left| C_{V_A,W_B} - \braket{C_{V_A,W_B}}_{V} \right| \ge \epsilon/2 \} &\le \exp \left( - \frac{d_A \epsilon^2}{64} \right) \quad \forall W \label{eq:app:levy_bound_f_only} \\ \Prob \{ \left| \braket{C_{V_A,W_B}}_{V} - G \right| \ge \epsilon/2 \} &\le \exp \left( - \frac{d_A^2 d_B \epsilon^2}{64} \right) . \end{align} \end{subequations} We are almost done; it suffices to notice that the bound~\eqref{eq:app:levy_bound_f_only} is uniform in $W$, hence it is also applicable to $\Prob \left\{ \alpha \ge \epsilon / 2 \right\} $. Therefore we arrive at \begin{align} \Prob \{ \left| C_{V_A,W_B}(t) - G(t) \right| \ge \epsilon \} \le \exp \left( - \frac{d_A \epsilon^2}{64} \right) + \exp \left( - \frac{d_A^2 d_B \epsilon^2}{64} \right) \le 2 \exp \left( - \frac{d_A \epsilon^2}{64} \right) .
\end{align} Notice that the resulting bound is independent of the dynamics, as long as the latter is unitary. Finally, one can obtain the analogous bound for $A \leftrightarrow B$ by inverting the roles of $V$ and $W$ in the proof. Therefore we obtain Eq.~\eqref{eq:double_concentration}. \end{proof} \subsection*{Proposition~\ref{prop:R_matrix}} \Rmatrix* Here we give a straightforward proof assuming the NRC holds exactly. For a more detailed discussion, see also the section containing the proof of \autoref{prop:NRC_upper_bound}. \begin{proof} Our starting point is Eq.~\eqref{eq:G_main}, which we need to time average. Since the Hamiltonian is by assumption nondegenerate, we can spectrally decompose $H = \sum_{k=1}^d E_k P_k$, where $P_k \coloneqq \ket{\phi_k} \! \bra{\phi_k}$. We then have \begin{align*} \overline{G(t)}^\mathrm{NRC} = 1 - \frac{1}{d^2} \sum_{klmn} \overline{ \exp \big[ i (E_k + E_l -E_m - E_n) t \big] } \tr \left[ S_{AA'} (P_k \otimes P_l) \, S_{AA'} \, (P_m \otimes P_n) \right] . \end{align*} Time averaging the exponential results in \begin{align*} \overline{ \exp \big[ i (E_k + E_l -E_m - E_n) t \big] } = \delta_{E_k + E_l -E_m - E_n,0} \stackrel{\mathclap{\scriptsize \mbox{NRC}}}{=\joinrel=} \delta_{k,m} \delta_{l,n} + \delta_{k,n} \delta_{l,m} - \delta_{k,l} \delta_{l,m} \delta_{m,n} \end{align*} where in the last step we used the fact that the energy gaps are nondegenerate. Thus \begin{align*} \overline{G(t)}^\mathrm{NRC} & =1 - \frac{1}{d^2} \Big( \sum_{kl} \tr \left[ S_{AA'} (P_k \otimes P_l) \, S_{AA'} \, (P_k \otimes P_l) \right] + \sum_{kl} \tr \left[ S_{AA'} (P_k \otimes P_l) \, S_{AA'} \, (P_l \otimes P_k) \right] \\ & \hspace{0.55 \columnwidth} - \sum_{k} \tr \left[ S_{AA'} (P_k \otimes P_k) \, S_{AA'} \, (P_k \otimes P_k) \right] \Big) \\ & = 1 - \frac{1}{d^2} \Big( \sum_{kl} \big| \tr \left[(P_k \otimes P_l) \, S_{AA'} \right] \big|^2 + \sum_{kl} \big| \tr \left[(P_k \otimes P_l) \, S_{BB'} \right] \big|^2 - \sum_{k} \big| \tr \left[(P_k \otimes P_k) \, S_{AA'} \right] \big|^2 \Big) , \end{align*} where for the second term we used that $P_l \otimes P_k = S (P_k \otimes P_l) S$ and $S = S_{AA'} S_{BB'}$. Now, notice that the partial traces can be formally performed, giving \begin{align*} \tr _{AA'BB'} \left[(P_k \otimes P_l) S_{AA'} \right] = \tr_{AA'} \left[ \tr_{BB'} (P_k \otimes P_l) S_{AA'} \right] = \tr_{AA'} \left[ (\rho_k^{(A)} \otimes \rho_l^{(A')} ) S_{AA'} \right] = \tr\left( \rho_k^{(A)} \rho_l^{(A)} \right) = R_{kl}^{(A)}, \end{align*} and similarly \begin{align*} \tr _{AA'BB'} \left[(P_k \otimes P_l) \, S_{BB'} \right] &= R_{kl}^{(B)} \\ \tr _{AA'BB'} \left[(P_k \otimes P_k) \, S_{AA'} \right] &=\tr _{AA'BB'} \left[(P_k \otimes P_k) \, S_{BB'} \right] = R_{kk}^{(A)} = R_{kk}^{(B)} \end{align*} where in the last line we used the fact that the spectra of $\rho_k^{(A)}$ and $\rho_k^{(B)}$ are equal, up to (irrelevant for the trace) zeroes. The result follows by expressing the matrix 2-norm as $\left\| X \right\|_2^2 = \sum_{ij} \left| X_{ij} \right|^2$. \end{proof} \subsection*{Proposition~\ref{prop:bound_NRC}} Before proceeding with the proof, let us briefly comment on the need for including the parameter $\alpha$, which corresponds to the fraction of the highly entangled eigenstates of the Hamiltonian. For certain Hamiltonian models (e.g., the class of gapped, local Hamiltonians over one-dimensional lattice systems), it is well known that the ground state follows an area law for the entanglement entropy~\cite{eisert2010colloquium}.
Thus, for larger system sizes, $\epsilon$ cannot be chosen to be small for the ground state (and also possibly for the low lying excited states), even for the symmetric $d_A = d_B$ bipartition. Nevertheless, in the bulk of the spectrum, typical eigenstates are expected to obey instead a volume law, which is compatible with an $\epsilon$ that can be chosen to be suitably small. Therefore, we expect that, for certain physically relevant models, a large fraction $\alpha$ can be assumed to satisfy this condition. \boundNRC* \begin{proof} To simplify the notation, we assume $d_A \le d_B$. Let us also define $I = \{ k: E_{\max} - E(\ket{\phi_k}) \le \epsilon \}$, i.e., $I$ is the index set of those Hamiltonian eigenstates that deviate at most by $\epsilon$ from $E_{\max}$, while we use $\bar {I}$ to label the rest of the eigenstates. By assumption, $\left| I \right| \ge \alpha d$. First of all, notice that one can express the difference $E_{\max} - E(\ket{\psi_{AB}})$ as the distance \begin{align*} E_{\max} - E(\ket{\psi_{AB}}) = \Tr ( \rho_B^2 ) - 1/d_{B } = \big\| \rho_B - I/d_B \big\|_2^2 \ge \big\| \rho_A - I/d_A \big\|_2^2 = \Tr ( \rho_A^2 ) - 1/d_A \,. \end{align*} Setting for brevity $\Delta_k^{(\chi)} \coloneqq \rho_k^{(\chi)} - I/d_\chi$ ($\chi = A,B$), we have for all $k \in I$ that $E_{\max} - E(\ket{\phi_{k}}) = \big\| \Delta_k^{(B)} \big\|_2^2 \le \epsilon$ and hence also $\big\| \Delta_k^{(A)} \big\|_2^2 = \big\| \rho_k^{(A)} - I/d_A \big\|_2^2 \le \epsilon $. It will be convenient for later to express \begin{align} \label{eq:app:deltas_aux} \big| \braket{\rho_k^{(\chi)},\rho_l^{(\chi)}} \big|^2 = \big| \braket{I / d_\chi + \Delta_k^{(\chi)}, I / d_\chi + \Delta_l^{(\chi)}} \big|^2 = \big| \frac{1}{d_\chi} + \braket{\Delta_k^{(\chi)} , \Delta_l^{(\chi)}} \big|^2 = \frac{1}{d_\chi^2} + \frac{2}{d_\chi} \braket{\Delta_k^{(\chi)} , \Delta_l^{(\chi)}} + \braket{\Delta_k^{(\chi)} , \Delta_l^{(\chi)}}^2. \end{align} Moreover, by the Cauchy-Schwarz inequality, \begin{subequations} \label{eq:app_estimates_Delta} \begin{align} \big| \braket{\Delta_k^{(\chi)} , \Delta_l^{(\chi)}} \big| \le \big\| \Delta_k^{(\chi)} \big\|_2 \big\| \Delta_l^{(\chi)} \big\|_2 \end{align} while \begin{align} \big\| \Delta_k^{(\chi)} \big\|_2 ^ 2 \le \begin{cases} \epsilon & \mbox{if } k \in I , \\ 1 - \frac{1}{d_\chi} & \mbox{otherwise.} \end{cases} \end{align} \end{subequations} Let us start from Eq.~\eqref{eq:G_ave_R_matrix}. Using the fact that $\big\| R^{(A)}_D \big\|_2^2 = \big\| R^{(B)}_D \big\|_2^2$ and recalling $\overline{G_{\mathrm{ME}}(t)}^\mathrm{NRC} = (1 - 1/d)^2$, we get by the triangle inequality \begin{align} \label{eq:app_three_terms} \big| \overline{G_{\mathrm{ME}}(t)}^\mathrm{NRC} - \overline{G(t)}^\mathrm{NRC} \Big| & \le \Big| \frac{1}{d^2} \big\| R^{(A)} \big\|_2^2 - \frac{1}{d} \Big| + \Big| \frac{1}{d^2} \big\| R^{(B)} \big\|_2^2 - \frac{1}{d} \Big| + \frac{1}{d^2} \big| \big\| R_D^{(A)} \big\|_2^2 - 1 \big| . \end{align} To bound the first term we write \begin{align*} \Big| \frac{1}{d^2} \big\| R^{(A)} \big\|_2^2 - \frac{1}{d} \Big| = \Big| \frac{1}{d^2} \sum_{kl} \big| \braket{\rho_k^{(A)},\rho_l^{(A)}} \big|^2 - \frac{1}{d} \Big| \le \frac{1}{d_A^2} - \frac{1}{d} + \frac{1}{d^2} \sum_{kl} \left( \frac{2}{d_A} \big| \braket{\Delta_k^{(A)} , \Delta_l^{(A)}} \big| + \braket{\Delta_k^{(A)} , \Delta_l^{(A)}}^2 \right) \end{align*} where we used Eq.~\eqref{eq:app:deltas_aux}.
Splitting both of the sums as $\sum_{k} = \sum_{k \in I} + \sum_{k \notin I}$ and using Eqs.~\eqref{eq:app_estimates_Delta} we have \begin{align*} \frac{1}{d^2} \sum_{kl} \big| \braket{\Delta_k^{(A)} , \Delta_l^{(A)}} \big| & \le \epsilon \alpha^2 + 2 \alpha(1-\alpha) \sqrt{\epsilon \left( 1 - \frac{1}{d_A} \right)} + (1 - \alpha)^2 \left(1 - \frac{1}{d_A} \right) \end{align*} and \begin{align*} \frac{1}{d^2} \sum_{kl} \braket{\Delta_k^{(A)} , \Delta_l^{(A)}} ^2 & \le \epsilon^2 \alpha^2 + 2 \alpha(1-\alpha) \epsilon \left( 1 - \frac{1}{d_A} \right) + (1 - \alpha)^2 \left(1 - \frac{1}{d_A} \right)^2 . \end{align*} Putting them together, and relaxing some inequalities for clarity, we obtain for the first term of Eq.~\eqref{eq:app_three_terms} \begin{align*} \Big| \frac{1}{d^2} \big\| R^{(A)} \big\|_2^2 - \frac{1}{d} \Big| \le \frac{1}{d_A^2} - \frac{1}{d} + \alpha \epsilon \left(\frac{2}{d_A} + \epsilon \right) + (1-\alpha)^2 (1 + \frac{2}{d_A}) + 2 (1-\alpha) (\epsilon + \sqrt{\epsilon}). \end{align*} Analogously for the second term of Eq.~\eqref{eq:app_three_terms}, \begin{align*} \Big| \frac{1}{d^2} \big\| R^{(B)} \big\|_2^2 - \frac{1}{d} \Big| \le \frac{1}{d} - \frac{1}{d_B^2} + \alpha \epsilon \left(\frac{2}{d_A} + \epsilon \right) + (1-\alpha)^2 (1 + \frac{2}{d_A}) + 2 (1-\alpha) (\epsilon + \sqrt{\epsilon}) . \end{align*} For the third one, we have \begin{align*} \big\| R_D^{(A)} \big\|_2^2 = \sum_k \big| \braket{\rho_k^{(A)} , \rho_k^{(A)}} \big|^2 = \frac{d_B}{d_A} + \frac{2}{d_A} \sum_k \braket{\Delta_k^{(A)} , \Delta_k^{(A)}} + \sum_k \braket{\Delta_k^{(A)} , \Delta_k^{(A)}}^2 . \end{align*} Using similar manipulations as above, and under the convention $d_A \le d_B$, \begin{align*} \frac{1}{d^2} \big| \big\| R_D^{(A)} \big\|_2^2 - 1 \big| \le \frac{1}{d^2} \left( \frac{d_B}{d_A} - 1 \right) + \frac{1}{d} \left[ \alpha \left( \frac{2 \epsilon}{d_A} + \epsilon^2 \right) + (1-\alpha) \left( \frac{2}{d_A} +1 \right) \right] \end{align*} Putting the inequalities together, we have \begin{multline} \big| \overline{G_{\mathrm{ME}}(t)}^\mathrm{NRC} - \overline{G(t)}^\mathrm{NRC} \big| \le \\ \frac{\lambda - 1 }{d^2} + \frac{\lambda^2 - 1}{d_B^2} + \alpha \left[ 2 \epsilon \left( \frac{2}{d_A} + \frac{1}{d_A^2 d_B} \right) + \epsilon^2 \left( 2 + \frac{1}{d} \right) \right] + (1 - \alpha ) \left[ 2 (1-\alpha) \left( 1+ \frac{2}{d_A} \right) + \frac{2}{d} + 4\left( \epsilon + \sqrt{\epsilon} \right) \right] \end{multline} which can be relaxed to give the final result by using $\dfrac{\lambda^2 - 1}{d_B^2} \ge \dfrac{\lambda - 1 }{d^2}$. \end{proof} \subsection*{Theorem~\ref{prop:NRC_upper_bound}} \NRCUpperBound* Before giving the proof of the Theorem, we first briefly discuss some general facts regarding infinite time averages, their connection with the NRC and the NRC\textsuperscript{+}, and how they give rise to the corresponding estimates. Let us consider unitary quantum dynamics $\mathcal U_t (\cdot) = U_t (\cdot) U^\dagger _t $ generated by a Hamiltonian $H = \sum_k \tilde E_k \Pi_k$, where $\Pi_k$ denotes the projector onto the $k\textsuperscript{th}$ eigenspace. As a warm-up, let us calculate the time average of the superoperator $\mathcal U_t$. The latter can be easily performed by noticing that $ \overline{\exp\big[-i (\tilde E_k - \tilde E_l) t \big]} = \delta_{kl}$. 
This results in \begin{align} \mathcal P_{ H} \coloneqq \overline{\mathcal U_t} = \sum_k \Pi_k (\cdot) \Pi_k \end{align} which is the (Hilbert-Schmidt orthogonal) projector onto the commutant of the algebra generated by $\{ \Pi_k \}_k$, i.e., the projector whose range is the space of operators commuting with $H$. The object of interest for us is, in fact, $\overline{\mathcal U_t^{\otimes 2}}$ since \begin{align} \overline{G(t)} = 1- \frac{1}{d^2} \braket{S_{AA'} , \overline{\mathcal U_t^{\otimes 2}} (S_{AA'})}. \end{align} Reasoning as above, it follows that the resulting superoperator is again a projector, whose range is the space of operators over the replicated Hilbert space $\mathcal H^{\otimes 2}$ that commute with $H^{(2)} \coloneqq H \otimes I + I \otimes H$. The projector can be explicitly expressed as \begin{align} \mathcal P_{ H^{(2)}} \coloneqq \, \overline{\mathcal U_t^{\otimes 2}} \, = \sum_{klmn } \delta_{\tilde E_k + \tilde E_l , \tilde E_m + \tilde E_n} \Pi_k \otimes \Pi_l (\cdot) \Pi_m \otimes \Pi_n \end{align} To evaluate the above sum, let us for a moment examine what happens when the energy gaps $\{\tilde E_k - \tilde E_l \}_{kl}$ are nondegenerate, i.e., \begin{align} \mathrm{NRC}^+: \qquad \tilde E_k + \tilde E_l = \tilde E_m + \tilde E_n \Longleftrightarrow ( k = m \ \wedge \, l = n ) \, \vee \, ( k = n \ \wedge \, l = m ) . \end{align} We will refer to this condition over the spectrum as $\mathrm{NRC}^+$, since it constitutes a relaxed version of the $\mathrm{NRC}$. Without any assumption over the spectrum, one can always separate two contributions \begin{align} \mathcal P_{H^{(2)}} = \mathcal P_{\mathrm{NRC}^+} + \mathcal P_{\overline{\mathrm{NRC}^+}} \end{align} where \begin{align} \mathcal P_{\mathrm{NRC}^+} \coloneqq \sum_{kl } \Pi_k \otimes \Pi_l (\cdot) \Pi_k \otimes \Pi_l + \sum_{k l} \Pi_k \otimes \Pi_l (\cdot) \Pi_l \otimes \Pi_k - \sum_k \Pi_k \otimes \Pi_k (\cdot) \Pi_k \otimes \Pi_k \end{align} and $\mathcal P_{\overline{\mathrm{NRC}^+}} $ is any possibly remaining piece, which vanishes if and only if the Hamiltonian does indeed satisfy $\mathrm{NRC}^+$. Disregarding $\mathcal P_{\overline{\mathrm{NRC}^+}} $, one gets the estimate \begin{align} \overline{G(t)}^{\mathrm{NRC}^+} &\coloneqq 1 - \frac{1}{d^2} \tr\left[ S_{AA'} \mathcal P_{\mathrm{NRC}^+} \left( S_{AA'} \right) \right]\\ & = 1 - \frac{1}{d^2} \Big( \sum_{kl} \tr \left[ S_{AA'} (\Pi_k \otimes \Pi_l) \, S_{AA'} \, (\Pi_k \otimes \Pi_l) \right] + \sum_{kl} \tr \left[ S_{AA'} (\Pi_k \otimes \Pi_l) \, S_{AA'} \, (\Pi_l \otimes \Pi_k) \right] \nonumber \\ & \hspace{0.45 \columnwidth} - \sum_{k} \tr \left[ S_{AA'} (\Pi_k \otimes \Pi_k) \, S_{AA'} \, (\Pi_k \otimes \Pi_k) \right] \Big) , \label{eq:app:NRCp_expanded} \end{align} where the second equation follows from the proof of \autoref{prop:R_matrix}. Clearly, if all the projectors $\{\Pi_k \}$ are rank-1, then Eq.~\eqref{eq:app:NRCp_expanded} collapses to the corresponding one for the NRC, Eq.~\eqref{eq:G_ave_R_matrix}. Notice that one can evaluate $\overline{G(t)}^{\mathrm{NRC}^+}$ regardless of whether the Hamiltonian spectrum actually satisfies NRC\textsuperscript{+}, and obtain the NRC\textsuperscript{+} estimate mentioned in the main text. Evidently, one can also express the NRC time average, Eq.~\eqref{eq:G_ave_R_matrix}, in terms of the corresponding projector \begin{align} \overline{G(t)}^{\mathrm{NRC}} = 1 - \frac{1}{d^2} \tr\left[ S_{AA'} \mathcal P_{\mathrm{NRC}} \left( S_{AA'} \right) \right].
\end{align} If the Hamiltonian does not satisfy the NRC, performing a (possibly nonunique) decomposition $H = \sum_k E_k \ket{\phi_k} \!\bra{\phi_k}$ and evaluating Eq.~\eqref{eq:G_ave_R_matrix} gives rise to the corresponding NRC estimate. Finally, for the case of Haar random unitaries, one has the corresponding projector $\overline {\mathcal U^{\otimes 2} }^{\mathrm{Haar}} \coloneqq \mathcal P_{\mathrm{Haar}}$ whose range is given by the algebra generated by $\{ I, S \}$~\cite{goodman2009symmetry}. We evaluate its explicit expression in the next section. We are now ready to give the proof of \autoref{prop:NRC_upper_bound}. \begin{proof} The key observation here is that, by construction, the range of each projector satisfies \begin{align} \Ran \left( \mathcal P_{H^{(2)}} \right) \supseteq \Ran \left( \mathcal P_{\mathrm{NRC}^+} \right) \supseteq \Ran \left( \mathcal P_{\mathrm{NRC}} \right) \supseteq \Ran \left( \mathcal P_{\mathrm{Haar}} \right) . \end{align} Since all of the above are Hilbert-Schmidt orthogonal projectors, it also follows that \begin{align} \label{eq:app:superprojectors_inequality} \mathcal P_{H^{(2)}} \ge \mathcal P_{\mathrm{NRC}^+} \ge \mathcal P_{\mathrm{NRC}} \ge \mathcal P_{\mathrm{Haar}} \,. \end{align} As a result, \begin{align} \braket{S_{AA'}, \mathcal P_{H^{(2)}}(S_{AA'})} \ge \braket{S_{AA'}, \mathcal P_{\mathrm{NRC}^+} (S_{AA'})} \ge \braket{S_{AA'}, \mathcal P_{\mathrm{NRC}}(S_{AA'})} \ge \braket{S_{AA'}, \mathcal P_{\mathrm{Haar}} (S_{AA'})}, \end{align} from which Eq.~\eqref{eq:comparison_time_avgs_ineq} follows immediately. \end{proof} \subsection*{Proof of Eq.~\eqref{eq:Haar_average}} The Haar average \begin{align*} \overline{G}^{\mathrm{Haar}} = \frac{(d_A^2-1)(d_B^2 - 1)}{d^2 - 1} \end{align*} can be derived using the fact that $\overline{\mathcal U^{\otimes 2}}^{\mathrm{Haar}}$ is the CPTP orthogonal projector onto the algebra generated by $\{ I, S \}$~\cite{goodman2009symmetry}, i.e., \begin{align} \mathcal P_{\mathrm{Haar}} (X) \coloneqq \overline{\mathcal U^{\otimes 2}}^{\mathrm{Haar}} (X) = \frac{1}{2} \sum_{\alpha = \pm 1} \frac{I + \alpha S}{d(d+\alpha)} \braket{I + \alpha S,X} , \end{align} where $S$ swaps $\mathcal H$ and its duplicate $\mathcal H '$, as usual. Plugging the above into Eq.~\eqref{eq:G_main}, one gets \begin{align*} \overline{G}^{\mathrm{Haar}} = 1 - \frac{1}{2d^2} \sum_{\alpha = \pm 1} \frac{\left| \braket{I + \alpha S, S_{AA'}} \right|^2}{d(d+\alpha)} \end{align*} which, after some simple algebra, simplifies to the announced result. \subsection*{Theorem~\ref{prop:entropy_production}} \EntropyProduction* \begin{proof} Let us do the $\chi = A$ case. The result relies on the observation that one can express $S_{AA'}$ in Eq.~\eqref{eq:G_reduced_dynamics} through the Haar average~\cite{goodman2009symmetry} \begin{align} \int dU \left( \ket{\psi_U}\!\bra{\psi_U} \right)^{\otimes 2} = \frac{1}{d_A(d_A+1)} \left( I_{AA'} + S_{AA'} \right). \end{align} Performing the substitution results in \begin{align*} G(t) &= 1 + \frac{1}{d_A^2} \tr \left( S_{AA'} \right) - \frac{d_A + 1}{d_A} \int dU \, \tr \left( S_{AA'} \big[ \Uplambda^{\!(A) }_t (\ket{\psi_U}\! \bra{\psi_U}) \big]^{\otimes 2} \right) \\ & = \frac{d_A+1}{d_A} \left( 1 - \int dU \, \tr \left[ \big( \Uplambda^{\!(A) }_t (\ket{\psi_U}\!\bra{\psi_U}) \big)^2 \right] \right) \\ & = \frac{d_A+1}{d_A} \int dU \, S_{\mathrm{lin}} \left[ \Uplambda^{\!(A)}_t (\ket{\psi_U} \!
\bra{\psi_U}) \right] \end{align*} where we used the fact that $ \Uplambda^{\!(A) }_t (I) = I$ and the identity of Eq.~\eqref{eq:app:swap_trick}. The $\chi = B$ case follows similarly. \end{proof} \subsection*{Proof of Eq.~\eqref{eq:concentration_linear_entropy}} We need to prove that \begin{align} \Prob \left\{ \Big| S_{\mathrm{lin}} \big[ \Uplambda^{\!(\chi) }_t \big( \ket{\psi}\! \bra{\psi} \big) \big] - \frac{d_\chi}{d_\chi + 1} G(t) \Big| \ge \epsilon \right\} \le \exp \left( - \frac{d_\chi \epsilon^2}{64} \right) \end{align} where $\ket{\psi}$ is a Haar random pure state. We will make use of the concentration of measure machinery, briefly presented before the proof of \autoref{prop:GDoubleConcentration}. The result follows by the use of Levy's lemma and \autoref{prop:entropy_production}, if one shows that the function $f: U(d_\chi) \to \mathbb R$ with $f(V) \coloneqq S_{\mathrm{lin}} \big[ \Uplambda^{\!(\chi)}_t (\ket{\psi_V} \! \bra{\psi_V}) \big]$ is Lipschitz continuous with $K = 4$. As before, we denote $\ket{\psi_V} \coloneqq V \ket{\psi_0}$ for some (irrelevant) reference state $\ket{\psi_0}$. Indeed, let us show the Lipschitz continuity. We have \begin{align*} \big| f(V) - f(W) \big| &= \big| \big\| \Uplambda^{\!(\chi)}_t (\ket{\psi_V} \! \bra{\psi_V}) \big\|_2^2 - \big\| \Uplambda^{\!(\chi)}_t (\ket{\psi_W} \! \bra{\psi_W}) \big\|_2^2 \big| \\ & = \Big( \big\| \Uplambda^{\!(\chi)}_t (\ket{\psi_V} \! \bra{\psi_V}) \big\|_2 + \big\| \Uplambda^{\!(\chi)}_t (\ket{\psi_W} \! \bra{\psi_W}) \big\|_2 \Big) \, \Big| \big\| \Uplambda^{\!(\chi)}_t (\ket{\psi_V} \! \bra{\psi_V}) \big\|_2 - \big\| \Uplambda^{\!(\chi)}_t (\ket{\psi_W} \! \bra{\psi_W}) \big\|_2 \Big| \\ & \le 2 \big\| \Uplambda^{\!(\chi)}_t (\ket{\psi_V} \! \bra{\psi_V}) - \Uplambda^{\!(\chi)}_t (\ket{\psi_W} \! \bra{\psi_W}) \big\|_1 \\ & \le 2 \Big\| \mathcal U _t \Big( \ket{\psi_V} \! \bra{\psi_V} \otimes \frac{I_{d_{\overline \chi}}}{d_{\overline \chi}} \Big) - \mathcal U _t \Big( \ket{\psi_W} \! \bra{\psi_W} \otimes \frac{I_{d_{\overline \chi}}}{d_{\overline \chi}} \Big) \Big\|_1 \\ & \le 2 \big\| \big(\ket{\psi_V} \! \bra{\psi_V} - \ket{\psi_W} \! \bra{\psi_W} \big) \otimes \frac{I_{d_{\overline \chi}}}{d_{\overline \chi}} \big\|_1 = 2 \big\| \ket{\psi_V} \! \bra{\psi_V} - \ket{\psi_W} \! \bra{\psi_W} \big\|_1 \,, \end{align*} where in the second to last line we used the monotonicity of the 1-norm under the partial trace and in the last line that it is unitarily invariant. Utilizing the inequality $\big\| X \big\|_1 \le \sqrt{\Rank(X)} \left\| X \right\|_2 $, we have \begin{align*} \big| f(V) - f(W) \big| &\le 2 \sqrt{2} \big\| \ket{\psi_V} \! \bra{\psi_V} - \ket{\psi_W} \! \bra{\psi_W} \big\|_2 = 4 \sqrt{1 - | \! \braket{\psi_V | \psi_W} \! |^2 } \\ & \le 4 \sqrt{2 ( 1 - | \! \braket{\psi_V | \psi_W} \! | )} \le 4 \sqrt{2 ( 1 - \Real \braket{\psi_V | \psi_W} )} \\ & \le 4\| \ket{\psi_V} - \ket{\psi_W} \| \le 4 \| V - W \|_\infty \\ &\le 4 \| V - W \|_2 \end{align*} hence one can take $K = 4$. \subsection*{Proposition~\ref{prop:Choi}} \Choi* \begin{proof} Let us first express the Choi states explicitly as \begin{align*} \rho_{\Uplambda^{\!(\chi) }_t} &= \big( \Uplambda^{\!(\chi) }_t \otimes \mathcal I \big) \ket{\phi^+}\! \bra{\phi^+} = \frac{1}{d_\chi} \sum_{ij} \Uplambda^{\!(\chi) }_t \big( \ket{i}\! \bra{j} \big) \otimes \ket{i} \!\bra{j} \\ \rho_{\mathcal T^{(\chi)}} & = \big( \mathcal T^{(\chi)} \otimes \mathcal I \big) \ket{\phi^+}\! \bra{\phi^+} = \left( \frac{I_\chi}{d_\chi} \right) ^{\!\otimes 2} . 
\end{align*} Writing $S_{\chi\chi'} = \sum_{i,j=1}^{d_\chi} \ket{i}\!\bra{j} \otimes \ket{j}\!\bra{i}$ one also has from Eq.~\eqref{eq:G_reduced_dynamics} \begin{align*} G(t) = 1 - \frac{1}{d_\chi^2} \sum_{ij} \big\| \Uplambda^{\!(\chi) }_t \big( \ket{i} \! \bra{j} \big) \big\|_2^2 \,. \end{align*} Thus, expanding the Choi state distance, \begin{align*} \big\| \rho_{\Uplambda^{\!(\chi) }_t} - \rho_{\mathcal T^{(\chi)}} \big\|_2^2 & = \braket{ \rho_{\Uplambda^{\!(\chi) }_t} - \rho_{\mathcal T^{(\chi)}}, \rho_{\Uplambda^{\!(\chi) }_t} - \rho_{\mathcal T^{(\chi)}} } = \braket{ \rho_{\Uplambda^{\!(\chi) }_t} , \rho_{\Uplambda^{\!(\chi) }_t} } - 2 \braket{ \rho_{\Uplambda^{\!(\chi) }_t} , \rho_{\mathcal T^{(\chi)}} } + \braket{ \rho_{\mathcal T^{(\chi)}}, \rho_{\mathcal T^{(\chi)}} } \\ & = \big\| \rho_{\Uplambda^{\!(\chi) }_t} \big\|_2^2 - \frac{1}{d_\chi^2} = \frac{1}{d_\chi^2} \sum_{ij} \big\| \Uplambda^{\!(\chi) }_t \big( \ket{i} \! \bra{j} \big) \big\|_2^2 - \frac{1}{d_\chi^2} \\ & = 1 - G(t) - \frac{1}{d_\chi^2} \end{align*} which is what we wanted. \end{proof} \subsection*{Proof of $ \big\| \Uplambda^{\!(\chi)}_t - \mathcal T^{(\chi)} \big\|_{\lozenge} \le d_{\chi}^{3/2} \sqrt{ G_{\max}^{(\chi)} - G(t)}$ and an application to information spreading} We first remind the reader that the diamond norm can be defined as $ \left\| \mathcal X \right\|_{\lozenge} \coloneqq \left\| \mathcal X \otimes \mathcal I_d \right\|_{1,1} $ where $\mathcal I_d$ denotes the identity quantum channel over $\mathcal H \cong \mathbb C^{d}$ and $\left\| \mathcal X \right\|_{1,1} \coloneqq \sup_{\left\| A \right\|_1 = 1} \left\| \mathcal X(A) \right\|_1$. One of the reasons for this definition is the property that $\left\| \mathcal X \otimes \mathcal Y \right\|_{\lozenge} = \left\| \mathcal X \right\|_{\lozenge} \left\| \mathcal Y \right\|_{\lozenge}$, which in general fails for the $\left\| \left( \cdot \right) \right\|_{1,1}$ norm (see, e.g.,~\cite{kitaev2002classical}). Let us now prove that \begin{align} \sqrt{ G_{\max}^{(\chi)} - G(t) } \le \big\| \Uplambda^{\!(\chi)}_t - \mathcal T^{(\chi)} \big\|_{\lozenge} \le d_{\chi}^{3/2} \sqrt{ G_{\max}^{(\chi)} - G(t)} \,. \label{eq:ineq_diamond} \end{align} \begin{proof} The result follows easily by utilizing the inequalities \begin{align} \big\|\rho_{\mathcal E_1} - \rho_{\mathcal E_2} \big\|_1 \le \big\| \mathcal E_1 - \mathcal E_2 \big\|_\lozenge \le d \big\|\rho_{\mathcal E_1} - \rho_{\mathcal E_2} \big\|_1 \end{align} that hold for any pair of CPTP maps; they were reported by John Watrous in~\cite{watrous11stackexchange}. The claim then follows by use of the inequality $\big\| X \big\|_1 \le \sqrt{d} \big\| X \big\|_2$ and \autoref{prop:Choi}. \end{proof} As an additional application of Eq.~\eqref{eq:ineq_diamond}, we can utilize it to bound from above the fraction of time during which $\big\| \Uplambda^{\!(\chi)}_t - \mathcal T^{(\chi)} \big\|_{\lozenge} \ge \epsilon $ holds true. This can be done by combining Eq.~\eqref{eq:ineq_diamond} with our earlier time averages.
The result \begin{align} \label{eq:Markov} \Prob \big\{ t \; \big| \; \big\| \Uplambda^{\!(\chi)}_t - \mathcal T^{(\chi)} \big\|_{\lozenge} \ge \epsilon \big\} \le \frac{2 d_{\chi}^{3/2}}{\epsilon d_{\overline \chi} } \kappa \,, \end{align} where $\kappa \coloneqq \sqrt{1 + \dfrac{d_{\overline \chi}^2}{2} \big( \overline{G}^{\mathrm{Haar}} - \overline{G(t)} \big)}$, demonstrates in yet another way that if $ d_{\overline \chi} \gg d_{\chi} $ and $\kappa = O(1)$ (i.e., the equilibration is sufficiently close to the Haar estimate), then the reduced evolution is necessarily close to the maximally mixing one for a large fraction of time. \begin{proof} Our starting point will be inequality~\eqref{eq:ineq_diamond}, $\big\| \Uplambda^{\!(\chi)}_t - \mathcal T^{(\chi)} \big\|_{\lozenge} \le d_{\chi}^{3/2} \sqrt{ G_{\max}^{(\chi)} - G(t)}\,$. By taking the time average of both sides, and then using the concavity of the square root, we obtain \begin{align*} \overline{\big\| \Uplambda^{\!(\chi)}_t - \mathcal T^{(\chi)} \big\|_{\lozenge}} \le d_{\chi}^{3/2} \sqrt{ G_{\max}^{(\chi)} - \overline {G(t)}} \le d_{\chi}^{3/2} \sqrt{ \big( G_{\max}^{(\chi)} - \overline G ^{\mathrm{Haar}} \big) + \big( \overline G ^{\mathrm{Haar}} - \overline {G(t)} \big) } \le 2 \frac{d_\chi^{3/2}}{d_{\overline \chi}} \kappa \,, \end{align*} where we bounded the difference \begin{align*} G_{\max}^{(\chi)} - \overline {G(t)}^{\mathrm{Haar}} = \frac{(d_\chi^2 - 1)^2}{d_\chi^2 (d^2-1)} \le \frac{2}{d_{\overline \chi}^2} \,. \end{align*} Finally, Eq.~\eqref{eq:Markov} follows by the use of Markov's inequality. \end{proof} \section{Haar measure, unitary $k$-designs and the bipartite OTOC} \label{sec:app:measures} Here we discuss in more detail how the Haar measure in the definition of the bipartite OTOC, Eq.~\eqref{eq:defnition_G}, can be replaced by other possible averaging choices, in such a way that Eq.~\eqref{eq:G_main} (and everything that stems from it) remains valid. Let us first recall the definition of a (unitary) $k$-design~\cite{divincenzo2002quantum,renes2004symmetric, scott2006tight,gross2007evenly,roberts2017chaos}. Consider an ensemble of unitary operators $\Lambda = \{ ( p_i, U_i ) \}_i$ and define the family of CPTP maps \begin{align} \mathcal E ^{(k)} _{\Lambda} &\coloneqq \sum_i p_i U_i^{\otimes k} (\cdot) U_i^{\dagger \otimes k} \label{eq:app:ensemble_channel} \\ \mathcal E ^{(k)} _{\mathrm{Haar}} & \coloneqq \int dU \, U^{\otimes k} (\cdot) U^{\dagger \otimes k} \label{eq:app:Haar_channel} \end{align} for $ k \in \mathbb N $. The ensemble $\Lambda $ forms a $k$-design if $\mathcal E ^{(k)} _{\Lambda} = \mathcal E ^{(k)} _{\mathrm{Haar}}$. In words, a $k$-design emulates Haar averaging up to (at least) the $k\textsuperscript{th}$ moment. Now, let us investigate the freedom in the possible probability measures of $V_A$ and $W_B$ in Eq.~\eqref{eq:defnition_G} such that Eq.~\eqref{eq:G_main} holds true without modification. It is easy to see, by the proof of \autoref{prop:GAverageUnitaries}, that we are in fact looking for a unitary ensemble $\Lambda$ retaining the validity of Eq.~\eqref{eq:app:u_avg_vectorized}. In turn, the latter is just a vectorized form of the $1$-design condition $\mathcal E ^{(1)} _{\Lambda} = \mathcal E ^{(1)} _{\mathrm{Haar}}$. One can therefore substitute the Haar measure over $U(d_A)$ and $U(d_B)$ with $1$-designs over the corresponding spaces; the full Haar randomness is not probed by the OTOC~\cite{roberts2017chaos}.
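As a concrete instance of such a replacement (a standard identity, recorded here for completeness), for a single qubit the swap operator expands in the Pauli basis as $S = \frac{1}{2}\sum_{k=0}^{3}\sigma_k \otimes \sigma_k$ with $\sigma_0 = I$, so the ensemble $\{ (1/4, \sigma_k) \}_{k=0}^{3}$ obeys
\begin{align*}
\sum_{k=0}^{3} \frac{1}{4}\, \sigma_k \otimes \sigma_k^\dagger = \frac{1}{4} \sum_{k=0}^{3} \sigma_k \otimes \sigma_k = \frac{S}{2} \,,
\end{align*}
which is exactly the vectorized $1$-design condition of Eq.~\eqref{eq:app:u_avg_vectorized} for $d = 2$.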
Moreover, $1$-designs factorize, i.e., if $\Lambda_1 = \{ ( p^{(1)}_i, U^{(1)}_i ) \}_i$ and $\Lambda_2 = \{ ( p^{(2)}_j, U^{(2)}_j ) \}_j$ are 1-designs over $\mathcal H_A$ and $\mathcal H_B$ respectively, then $\Lambda_1 \otimes \Lambda_2 \coloneqq \{ ( p^{(1)}_i p^{(2)}_j, U^{(1)}_i \otimes U^{(2)}_j ) \}_{ij}$ is a 1-design over $\mathcal H = \mathcal H_A \otimes \mathcal H_B$. This follows just by the 1-design condition in the form of Eq.~\eqref{eq:app:u_avg_vectorized} and the fact that the swap operator over the duplicated space $\mathcal H \otimes \mathcal H'$ factorizes $S_{AB;A'B'} = S_{AA'}S_{BB'}$. This last fact has an important implication for the physically relevant case of many-body systems. Consider the case where $\mathcal H_\chi = \bigotimes_i \mathcal H_\chi^{(i)}$ for $\chi = A,B$, i.e., when $A$ and $B$ are made up of (not necessarily identical) individual subsystems. Then the OTOC of Eq.~\eqref{eq:defnition_G} remains unchanged if the averages $\int dV_A$ and $\int dW_B$ are replaced by the unitary ensemble $\bigotimes_i \Lambda_\chi^{(i)}$, where each $\Lambda_\chi^{(i)}$ is a $1$-design on $\mathcal H_\chi^{(i)}$. In other words, it is always enough to average over unitary operators that factorize completely. For instance, in the case of a spin-$1/2$ many-body system $\mathcal H_\chi^{(i)} \cong \mathbb C^2$ such an example is given by the Pauli $1$-design $\Lambda^{(i)}_{\chi,\mathrm{Pauli}} \coloneqq \{ 1/4 , \sigma_k \}_{k=0}^3$~\cite{webb2015clifford}. \section{Estimating the bipartite OTOC via linear entropy measurements of random pure states} \label{sec:app:linear_entropy} Here we present a basic protocol, stemming directly from \autoref{prop:entropy_production}, for the estimation of the bipartite OTOC via repeated measurements of a single expectation value. \begin{figure}[h] \centering \includegraphics[width=.5\textwidth]{circuit.pdf} \caption{Protocol for the estimation of the purity $1 - S_{\mathrm{lin}} \left[ \Uplambda^{(A)}_t (\ket{\psi}\!\bra{\psi}) \right]$ according to Eq.~\eqref{eq:entropy_production}. The resulting purity also constitutes an estimate of the bipartite OTOC, up to a simple proportionality factor. The final measurement of the swap operator can be realized, for instance, by measuring $A$ and $A'$ in any preferred product basis $\{ \ket{i} \otimes \ket{j} \}_{i,j = 1}^{d_A}$, without the need for coherences.} \label{fig:circuit} \end{figure} As pointed out in the main text, the linear entropy of a state can be expressed as an expectation value, $1 - S_{\mathrm{lin}} (\rho) = \tr \left( S \rho^{\otimes 2} \right)$, at the expense of requiring two uncorrelated copies of the state $\rho$. Combining \autoref{prop:entropy_production} with the above observation, one can realize a simple protocol for estimating the bipartite OTOC via measuring the expectation value of the swap operator over pairs of randomly generated states $\ket{\psi} \in \mathcal H_A$. We schematically draw the protocol in \autoref{fig:circuit}. Averaging the resulting expectation value over Haar random pure states $\ket{\psi}$ converges to the exact value of the bipartite OTOC. In light of Eq.~\eqref{eq:concentration_linear_entropy}, the expected number of samples needed for this convergence to a given accuracy drops quickly as $d_A$ increases. Clearly, the corresponding protocol with the roles of $A$ and $B$ interchanged is formally equivalent.
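For illustration, the following self-contained Python sketch (our own toy simulation, with $d_A = d_B = 2$ and a generic random unitary standing in for $U_t$) mimics the averaging step of the protocol: it prepares Haar random pure states on $A$, applies the reduced dynamics $\Uplambda^{\!(A)}_t$, records the linear entropy, rescales by $(d_A+1)/d_A$, and compares the Monte-Carlo mean with the exact value obtained from Eq.~\eqref{eq:G_main}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dA = dB = 2
d = dA * dB

def random_unitary(n):
    # QR of a complex Ginibre matrix with a phase fix gives a Haar unitary
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

U = random_unitary(d)                    # stand-in for the dynamics U_t

def channel_A(rho_A):
    # Reduced dynamics: adjoin the maximally mixed state on B, evolve, trace out B
    rho = U @ np.kron(rho_A, np.eye(dB) / dB) @ U.conj().T
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# Monte-Carlo estimate of G(t) from linear entropies of random pure states
n_samples = 5000
est = 0.0
for _ in range(n_samples):
    psi = random_unitary(dA)[:, 0]       # Haar random pure state on A
    rho_out = channel_A(np.outer(psi, psi.conj()))
    est += 1 - np.real(np.trace(rho_out @ rho_out))
est *= (dA + 1) / dA / n_samples

# Exact value from the closed form, for comparison
S_AA = np.zeros((d * d, d * d))
for a in range(dA):
    for b in range(dB):
        for ap in range(dA):
            for bp in range(dB):
                S_AA[((ap*dB + b)*dA + a)*dB + bp,
                     ((a*dB + b)*dA + ap)*dB + bp] = 1.0
U2 = np.kron(U, U)
exact = 1 - np.real(np.trace(S_AA @ U2 @ S_AA @ U2.conj().T)) / d**2

print(est, exact)                        # agree up to Monte-Carlo error
\end{verbatim}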
Along conceptually similar lines, there have been a number of proposals for probing the linear entropy of a state in an experimentally accessible way. For example, in a recent experiment~\cite{islam2015measuring} quantum purity (which is directly related to the second-order R\'enyi entanglement entropy) was measured by interfering two uncorrelated but identical copies of a many-body quantum state; similar ideas have also been considered previously~\cite{daley2012measuring,ekert2002direct,PhysRevLett.93.110501,bovino2005direct}. Notably, this scheme requires neither full quantum state tomography nor entanglement witnesses to estimate the entanglement of a quantum state. Furthermore, there have been recent proposals for protocols based on measurements over random local bases that can probe entanglement given just a single copy of the quantum state and, in this sense, go beyond traditional quantum state tomography. The main idea consists of directly expressing the linear entropy~\cite{brydges2019probing,elben2019statistical}, as well as other functions of the state~\cite{huang2020predicting}, as an ensemble average of measurements over random bases. Related ideas have also been adapted to probe OTOCs~\cite{vermersch2019probing,joshi2020quantum} and mixed-state entanglement~\cite{elben2020mixed}. \end{document}
{ "timestamp": "2021-01-21T02:24:21", "yymm": "2007", "arxiv_id": "2007.08570", "language": "en", "url": "https://arxiv.org/abs/2007.08570" }
\section{Introduction} Neural networks provide state-of-the-art results in a variety of machine learning tasks; however, several aspects of neural networks complicate their use in practice, including overconfidence~\cite{deepens}, vulnerability to adversarial attacks~\cite{adv_attacks}, and overfitting~\cite{overfitting}. One of the ways to compensate for these drawbacks is to use deep ensembles, i.\,e. ensembles of neural networks trained from different random initializations~\cite{deepens}. In addition to improving the task-specific metric, e.\,g.\,accuracy, deep ensembles are known to improve the quality of uncertainty estimation compared to a single network. There is yet no consensus on how to measure the quality of uncertainty estimation. \citet{pitfalls} consider a wide range of possible metrics and show that the calibrated negative log-likelihood (CNLL) is the most reliable one because it avoids the majority of pitfalls revealed in the same work. Increasing the size $n$ of the deep ensemble, i.\,e.\,the number of networks in the ensemble, is known to improve the performance~\cite{pitfalls}. The same effect holds for increasing the size $s$ of a neural network, i.\,e. the number of its parameters. Recent works~\citep{double_descent2, double_descent} show that even in an extremely overparameterized regime, increasing $s$ leads to higher quality. These works also mention a curious effect of non-monotonicity of the test error w.\,r.\,t.\,the network size, called double descent behaviour. In figure~\ref{fig:motivation}, left, we may observe the saturation and stabilization of quality with the growth of both the ensemble size $n$ and the network size $s$. The goal of this work is to study the asymptotic properties of the CNLL of deep ensembles as a function of $n$ and $s$. We investigate under which conditions and w.\,r.\,t.\,which dimensions the CNLL follows a power law for deep ensembles in practice. In addition to the horizontal and vertical cuts of the diagram shown in figure~\ref{fig:motivation}, left, we also study its diagonal direction, which corresponds to the increase of the total parameter count. \begin{figure} \begin{center} \centerline{ \begin{tabular}{cc} \includegraphics[width=0.31\textwidth]{figures_final/motivational_carpet.pdf}& \includegraphics[width=0.65\textwidth]{figures_final/pl_vgg64_cifar100.pdf} \end{tabular}} \caption{Non-calibrated NLL and CNLL of VGG on CIFAR-100. Left: the $(n, s)$-plane for the CNLL. Middle and right: non-calibrated $\mathrm{NLL}_n$ and $\mathrm{CNLL}_n$ can be closely approximated with a power law (VGG of the commonly used size as an example).} \label{fig:motivation} \end{center} \end{figure} The power-law behaviour of deep ensembles has previously been touched upon in the literature. \citet{scaling_description} consider simple shallow architectures and reason about the power-law behaviour of the test error of a deep ensemble as a function of $n$ when $n \rightarrow \infty$, and of a single network as a function of $s$ when $s \rightarrow \infty$. \citet{kaplan2020scaling, rosenfeld2019constructive} investigate the behaviour of \textit{single} networks of modern architectures and empirically show that their NLL and test error follow power laws w.\,r.\,t.\,the network size $s$. In this work, we perform a broad empirical study of power laws in deep ensembles, relying on the practical setting with properly regularized, commonly used deep neural network architectures.
Our main contributions are as follows: \begin{enumerate} \item for the practically important scenario with NLL calibration, we derive the conditions under which the CNLL of an ensemble follows a power law as a function of $n$ when $n \rightarrow \infty$; \item we empirically show that, in practice, the following dependencies can be closely approximated with a power law on the \emph{whole} considered range of their arguments: (a) CNLL of an ensemble as a function of the ensemble size $n \geqslant 1$; (b) CNLL of a single network as a function of the network size $s$; (c) CNLL of an ensemble as a function of the total parameter count; \item based on the discovered power laws, we make several practically important conclusions regarding the use of deep ensembles in practice, e.\,g.\,using a large single network may be less beneficial than using a so-called memory split --- an ensemble of several medium-size networks of the same total parameter count; \item we show that, using the discovered power laws for $n \geqslant 1$ and having a small number of trained networks, we can predict the CNLL of large ensembles and the optimal memory split for a given memory budget. \end{enumerate} {\bf Definitions and notation.$\quad$} In this work, we treat a power law as a family of functions $\mathrm{PL}_m = c + b m^a$, $m=1, 2, 3,\dots$; $a<0$, $b \in \mathbb{R}$, $c \in \mathbb{R}$ are the parameters of the power law. Parameter $c = \lim_{m\rightarrow \infty} (c + b m^a) = \lim_{m\rightarrow \infty} \mathrm{PL}_m \overset{\mathrm{def}}{=} \mathrm{PL}_{\infty}$ reflects the asymptote of the power law. Parameter $b=\mathrm{PL}_1-c=\mathrm{PL}_1 - \mathrm{PL}_\infty$ reflects the difference between the starting point of the power law and its asymptote. Parameter $a$ reflects the speed of approaching the asymptote. In the rest of the work, $\mathrm{(C)NLL}_m$ denotes (C)NLL as a function of $m$. \section{Theoretical view} \label{sec:our_theory} The primary goal of this work is to perform an empirical study of the conditions under which the NLL and CNLL of deep ensembles follow a power law. Before diving into a discussion of our empirical findings, we first provide a theoretical motivation for anticipating power laws in deep ensembles, and discuss the applicability of this theoretical reasoning to the practically important scenario with calibration. We begin with a theoretical analysis of the non-calibrated NLL of a deep ensemble as a function of ensemble size $n$. Assume that an ensemble consists of $n$ models that return independent identically distributed probabilities $p_{\mathrm{obj}, i}^* \in [0, 1], i=1,\dots,n$ of the correct class for a single object from the dataset $\mathcal{D}$. Hereinafter, operator $*$ denotes retrieving the prediction for the correct class. We introduce the \emph{model-average} NLL of an ensemble of size $n$ for the \emph{given object}: \begin{equation} \label{eq:nll_ens_single} \mathrm{NLL}_{n}^{\mathrm{obj}} = -\mathbb{E} \log \left( \frac{1}{n} \sum_{i=1}^n p_{\mathrm{obj}, i}^* \right). \end{equation} The expectation in~\eqref{eq:nll_ens_single} is taken over all possible models that may constitute the ensemble (e.\,g. random initializations). The following proposition describes the asymptotic power-law behavior of $\mathrm{NLL}_{n}^{\mathrm{obj}}$ as a function of the ensemble size.
\begin{prop} \label{prop:pl_ens} Consider an ensemble of $n$ models, each producing independent and identically distributed probabilities of the correct class for a given object: $p_{\mathrm{obj}, i}^* \in \left[\epsilon_{\mathrm{obj}}, 1\right]$, $\epsilon_{\mathrm{obj}} > 0$, $i=1,\dots,n$. Let $\mu_{\mathrm{obj}} = \mathbb{E} p_{\mathrm{obj}, i}^*$ and $\sigma_{\mathrm{obj}}^2 = \mathbb{D} p_{\mathrm{obj}, i}^*$ be, respectively, the mean and variance of the distribution of probabilities. Then the model-average NLL of the ensemble for a single object can be decomposed as follows: \begin{equation} \label{eq:nll_ens_single_pl} \mathrm{NLL}_{n}^{\mathrm{obj}} = \mathrm{NLL}_{\infty}^{\mathrm{obj}} + \frac 1 n \frac{\sigma_\mathrm{obj}^2}{2 \mu_\mathrm{obj}^2} + \mathcal{O}\left(\frac{1}{n^2}\right), \end{equation} where $\mathrm{NLL}_{\infty}^{\mathrm{obj}} = -\log \left( \mu_\mathrm{obj} \right)$ is the ``infinite'' ensemble NLL for the given object. \end{prop} The proof is based on the Taylor expansions for the moments of functions of random variables; we provide it in Appendix~\ref{app:theory_prop_proof}. The assumption about the lower limit of model predictions, $\epsilon_\mathrm{obj} > 0$, is necessary for the accurate derivation of the asymptotics in~\eqref{eq:nll_ens_single_pl}. We argue, however, that this condition is fulfilled in practice, as real softmax outputs of neural networks are always positive and separated from zero. The model-average NLL of an ensemble of size $n$ on the whole dataset, $\mathrm{NLL}_n$, can be obtained via summing $\mathrm{NLL}_{n}^{\mathrm{obj}}$ over objects, which implies that $\mathrm{NLL}_n$ also behaves as $c + bn^{-1}$, where $c, b>0$ are constants w.\,r.\,t.\,$n$, as $n \rightarrow \infty$. However, for finite $n$, the dependency of $\mathrm{NLL}_n$ on $n$ may be more complex. \citet{pitfalls} emphasize that the comparison of the NLLs of different models with suboptimal softmax temperature may lead to an arbitrary ranking of the models, so the comparison should only be performed after \emph{calibration}, i.\,e. with optimally selected temperature $\tau$. The model-average CNLL of an ensemble of size $n$, measured on the whole dataset $\mathcal{D}$, is defined as follows: \begin{equation} \mathrm{CNLL}_n = \mathbb{E} \min_{\tau>0} \biggl\{ -\sum_{\mathrm{obj} \in \mathcal{D}} \log \bar{p}^*_{\mathrm{obj}, n}(\tau) \biggr\}, \label{eq:true_cnll} \end{equation} where the expectation is also taken over models, and $\bar{p}_{\mathrm{obj}, n}(\tau) \in [0, 1]^K $ is the distribution over $K$ classes output by the ensemble of $n$ networks with softmax temperature $\tau$. \citet{pitfalls} obtain this distribution by averaging predictions $p_{\mathrm{obj}, i} \in [0, 1]^K$ of the member networks $i=1,\dots, n$ for a given object and applying the temperature $\tau>0$ on top of the ensemble: $\bar{p}_{\mathrm{obj}, n}(\tau) = \mathrm{softmax} \bigl\{ \bigl( \log (\frac 1 n \sum_{i=1}^n p_{\mathrm{obj}, i} )\bigr) \bigl/ \tau \bigr\} $. This is a native way of calibrating, in the sense that we plug the ensemble into a standard procedure of calibrating an arbitrary model. We refer to the described calibration procedure as applying temperature \emph{after} averaging. In our work, we also consider another way of calibrating, namely applying temperature \emph{before} averaging: $\bar{p}_{\mathrm{obj}, n}(\tau) =\frac 1 n \sum_{i=1}^n \mathrm{softmax} \{ \log (p_{\mathrm{obj}, i}) / \tau \}$.
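For clarity, a minimal NumPy sketch of the two calibration variants for a single object (the function names are ours; in practice $\tau$ would be selected on a validation set):
\begin{verbatim}
import numpy as np

def calibrate_after(probs, tau):
    # temperature AFTER averaging: softmax(log(mean_i p_i) / tau)
    # probs: (n_models, n_classes) predictions for one object
    logits = np.log(probs.mean(axis=0)) / tau
    e = np.exp(logits - logits.max())
    return e / e.sum()

def calibrate_before(probs, tau):
    # temperature BEFORE averaging: mean_i softmax(log(p_i) / tau)
    logits = np.log(probs) / tau
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return (e / e.sum(axis=1, keepdims=True)).mean(axis=0)
\end{verbatim}
Note that at $\tau = 1$ both variants reduce to plain probability averaging.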
The two calibration procedures perform similarly in practice; in most cases, the second one performs slightly better (see Appendix~\ref{app:cnll_1}). The following series of derivations helps to connect the non-calibrated and calibrated NLLs. If we fix some $\tau>0$ and apply it \emph{before} averaging, $\bar p_{\mathrm{obj}, n} (\tau)$ fits the form of the ensemble on the right-hand side of equation~\eqref{eq:nll_ens_single}, and according to Proposition~\ref{prop:pl_ens}, we obtain that the model-average NLL of an $n$-size ensemble with fixed temperature $\tau$, $\mathrm{NLL}_n(\tau)$, follows a power law w.\,r.\,t.\,$n$ as $n \rightarrow \infty$. Applying $\tau$ \emph{after} averaging complicates the derivation, but the same result is generally still valid, see Appendix~\ref{app:theory_nll_temp_after}. However, the parameter $b$ of the power law may become negative for certain values of $\tau$. In contrast, when we apply $\tau$ before averaging, $b$ always remains positive, see eq.~\eqref{eq:nll_ens_single_pl}. Minimizing $\mathrm{NLL}_n(\tau)$ w.\,r.\,t.\,$\tau$ results in a lower envelope of the (asymptotic) power laws: \begin{equation} \label{eq:le_nll} \mathrm{LE\mbox{-}NLL}_n = \min_{\tau>0} \mathrm{NLL}_n(\tau), \quad \mathrm{NLL}_n(\tau) \overset{n \rightarrow \infty}{\sim} \mathrm{PL}_n. \end{equation} The lower envelope of power laws also follows an (asymptotic) power law. Consider for simplicity a finite set of temperatures $\{\tau_1, \dots, \tau_T\}$, which is the conventional practical case. As each of $\mathrm{NLL}_n(\tau_t), t = 1,\dots,T$ converges to its asymptote $c(\tau_t)$, there exists an optimal temperature $\tau_{t^*}$ corresponding to the lowest $c(\tau_{t^*})$. The above implies that starting from some $n$, $\mathrm{LE\mbox{-}NLL}_n$ coincides with $\mathrm{NLL}_n(\tau_{t^*})$ and hence follows its power law. We refer to Appendix~\ref{app:theory_lower_env_cont_temp} for further discussion of continuous temperature. Substituting the definition of $\mathrm{NLL}_n(\tau)$ into~\eqref{eq:le_nll} results in: \begin{equation} \mathrm{LE\mbox{-}NLL}_n = \min_{\tau>0} \mathbb{E} \biggl\{ - \sum_{\mathrm{obj} \in \mathcal{D}} \log \bar{p}^*_{\mathrm{obj}, n}(\tau) \biggr \}, \end{equation} from which we obtain that the only difference between $\mathrm{LE\mbox{-}NLL}_n$ and $\mathrm{CNLL}_n$ is the order of the minimum operation and the expectation. Although this results in a calibration procedure different from the commonly used one, we show in Appendix~\ref{app:two_types_nll} that the difference between the values of $\mathrm{LE\mbox{-}NLL}_n$ and $\mathrm{CNLL}_n$ is negligible in practice. Conceptually, applying the expectation inside the minimum is also a reasonable setting: in this case, when choosing the optimal $\tau$, we use a more reliable estimate of the NLL of the $n$-size ensemble with temperature $\tau$. This setting is not generally considered in practice, since it requires training several ensembles and, as a result, is more computationally expensive. In the experiments we follow the definition of CNLL~\eqref{eq:true_cnll} to consider the most practical scenario. To sum up, in this section we derived an asymptotic power law for LE-NLL, which may be treated as another definition of CNLL and which closely approximates the commonly used CNLL in practice.
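Before moving to experiments, the decomposition of Proposition~\ref{prop:pl_ens} can be illustrated with a quick Monte Carlo simulation. The sketch below uses our own toy model for the correct-class probabilities --- a shifted and scaled Beta distribution, which satisfies the assumption $\epsilon_{\mathrm{obj}} > 0$ --- and compares the empirical $\mathrm{NLL}_n^{\mathrm{obj}}$ with the first two terms of eq.~\eqref{eq:nll_ens_single_pl}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# toy model: i.i.d. probabilities in [0.2, 0.8], so eps_obj = 0.2 > 0
mu, var = 0.5, 0.36 / 20   # mean / variance of 0.2 + 0.6 * Beta(2, 2)

for n in (1, 2, 4, 8, 16, 32):
    p = 0.2 + 0.6 * rng.beta(2.0, 2.0, size=(200_000, n))
    nll_n = -np.log(p.mean(axis=1)).mean()        # Monte Carlo NLL_n^obj
    approx = -np.log(mu) + var / (2 * mu**2 * n)  # Proposition 1
    print(n, round(nll_n, 4), round(approx, 4))
\end{verbatim}
In this toy model the two values agree up to the $\mathcal{O}(1/n^2)$ correction already for moderate $n$.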
\section{Experimental setup} \label{exp_setup} We conduct our experiments with convolutional neural networks, WideResNet~\cite{wrn} and VGG16~\cite{vgg}, on the CIFAR-10~\cite{CIFAR10} and CIFAR-100~\cite{CIFAR100} datasets. We consider a wide range of network sizes $s$ by varying the width factor $w$: for VGG\,/\,WideResNet, we use convolutional layers with $[w, 2w, 4w, 8w]$\,/\,$[w, 2w, 4w]$ filters, and fully-connected layers with $8w$\,/\,$4w$ neurons. For VGG\,/\,WideResNet, we consider $2 \leqslant w \leqslant 181$ / $5 \leqslant w \leqslant 453$; $w=64$\,/\,$160$ corresponds to a standard, commonly used configuration with $s_{\mathrm{standard}}$ = 15.3M / 36.8M parameters. These sizes are later referred to as the standard budgets. For each network size, we tune hyperparameters (weight decay and dropout) using grid search. We train all networks for 200 epochs with SGD with an annealing learning rate schedule and a batch size of 128. We aim to follow the practical scenario in the experiments, so we use the definition of CNLL in~\eqref{eq:true_cnll}, not LE-NLL~\eqref{eq:le_nll}. Following~\cite{pitfalls}, we use the ``test-time cross-validation'' to compute the CNLL. We apply the temperature before averaging; the motivation for this is given in section~\ref{sec:ensemble_size}. More details are given in Appendix~\ref{app:experimental_setup}. For each network size $s$, we train at least $\ell = \max\{N, 8 s_{\mathrm{standard}}/s\}$ networks, $N=64$\,/\,$12$ for VGG\,/\,WideResNet. For each $(n, s)$ pair, given the pool of $\ell$ trained networks of size $s$, we construct $\lfloor \frac \ell n \rfloor$ ensembles of $n$ distinct networks. The NLLs of these ensembles have some variance, so in all experiments, we average NLL over $\lfloor \frac \ell n \rfloor$ runs. We use these values to approximate NLL with a power law along the different directions of the $(n, s)$-plane. For this, we only consider points that were averaged over at least three runs. {\bf Approximating sequences with power laws.$\quad$} Given an arbitrary sequence $\{\hat y_m\},\,m=1,\dots, M$, we approximate it with a power law $\mathrm{PL}_m = c + b m^{a}$. In the rest of the work, we use the hat notation $\hat y_m$ to denote the observed data, while the value without the hat, $y_m$, denotes $y$ as a function of $m$. To fit the parameters $a,\,b,\,c$, we solve the following optimization problem using BFGS: \begin{equation} \label{fit_loss} (a, b, c) = \underset{a, b, c}{\mathrm{argmin}} \frac 1 M \sum_{m=1}^M \bigl (\log_2 (\hat y_m - c) - \log_2 (b m^{a}) \bigr)^2. \end{equation} We use the logarithmic scale to pay more attention to the small differences between values $\hat y_m$ for large $m$. For a fixed $c$, optimizing the given loss is equivalent to fitting a linear regression model with one factor $\log_2 m$ in the space $\log_2 m$ --- $\log_2(y_m-c)$ (see fig.~\ref{fig:motivation}, right, for an example). \section{NLL as a function of ensemble size} \label{sec:ensemble_size} In this section, we would like to answer the question of whether the NLL as a function of ensemble size can be described by a power law in practice. We consider both the calibrated NLL and the NLL with a fixed temperature. To answer the stated question, we fit the parameters $a,\,b,\,c$ of the power law on the points $\widehat {\mathrm{NLL}}_n(\tau)$ or $\widehat {\mathrm{CNLL}}_n$, $n=1, 2, 3, \dots$, using the method described in section~\ref{exp_setup}, and analyze the resulting parameters and the quality of the approximation.
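Concretely, the fitting of eq.~\eqref{fit_loss} can be sketched as follows. This is a minimal implementation of our own; the initialization and the barrier keeping $\hat y_m - c$ positive are implementation choices, and the sketch assumes a decreasing sequence (so that $b > 0$ and $c < \min_m \hat y_m$):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_power_law(y):
    # fit y_m ~ c + b * m^a by minimizing the log-space loss (fit_loss)
    y = np.asarray(y, dtype=float)
    m = np.arange(1, len(y) + 1)

    def loss(params):
        a, log2_b, c = params
        if c >= y.min():        # keep y_m - c > 0
            return 1e12
        r = np.log2(y - c) - (log2_b + a * np.log2(m))
        return np.mean(r ** 2)

    x0 = (-1.0, np.log2(max(y[0] - y[-1], 1e-6)), y[-1] - 0.1)
    res = minimize(loss, x0, method="BFGS")
    a, log2_b, c = res.x
    return a, 2.0 ** log2_b, c

# usage: a, b, c = fit_power_law(cnll_values)
\end{verbatim}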
As we show in Appendix~\ref{app:cnll_2}, when the temperature is applied \textit{after} averaging, $\mathrm{NLL}_n(\tau)$ is, in some cases, an increasing function of $n$. As for CNLL, we found settings in which $\mathrm{CNLL}_n$ with the temperature applied \textit{after} averaging is not a convex function for small $n$ and, as a result, cannot be described by a power law. In the rest of the work, we apply the temperature \textit{before} averaging, since in this case both $\mathrm{NLL}_n(\tau)$ and $\mathrm{CNLL}_n$ can be closely approximated with power laws in all considered cases. {\bf NLL with fixed temperature.$\quad$} For all considered dataset--architecture pairs, and for all temperatures, $\widehat {\mathrm{NLL}}_n(\tau)$ with fixed $\tau$ can be closely approximated with a power law. Figure~\ref{fig:motivation}, middle and right, shows an example approximation for VGG of the commonly used size with the temperature equal to one. Figure~\ref{fig:ens_fixed_t} shows the dynamics of the parameters $a,\,b,\,c$ of the power laws approximating the NLL with a fixed temperature, for ensembles of different network sizes and for different temperatures, for VGG on the CIFAR-100 dataset. The rightmost plot reports the quality of the approximation measured with RMSE in the \emph{log}-space. We note that even the highest RMSE in the \emph{log}-space corresponds to a low RMSE in the \emph{linear} space (the RMSE in the \emph{linear} space is less than 0.006 for all lines in figure~\ref{fig:ens_fixed_t}). In theory, starting from large enough $n$, $\mathrm{NLL}_n(\tau)$ follows a power law with parameter $a$ equal to $-1$; for small $n$, more than one term in eq.~\eqref{eq:nll_ens_single_pl} is significant, resulting in a more complex dependency of $\mathrm{NLL}_n(\tau)$ on $n$. In practice, we observe the power-law behaviour on the whole considered range of $\mathrm{NLL}_n(\tau)$, $n \geqslant 1$, but with $a$ slightly larger than $-1$. This result is consistent across all considered dataset--architecture pairs, see Appendix~\ref{app:nll_fixed_temp_setup2}. As the temperature grows, the general trend is that $a$ approaches $-1$ more closely. This behaviour breaks down for ensembles of small networks (blue lines). The reason is that the number of trained small networks is large, and the NLL of large ensembles with high temperature is noisy in log-scale, so the approximation of the NLL with a power law is slightly worse than in other settings, as confirmed in the rightmost plot of fig.~\ref{fig:ens_fixed_t}. Nevertheless, these approximations are still very close to the data; we present the corresponding plots in Appendix~\ref{A:large_sized_t}. Parameter $b=\mathrm{NLL}_1(\tau)-\mathrm{NLL}_\infty(\tau)$ reflects the potential gain from ensembling networks of the given size at the given temperature. For a particular network size, the gain is higher for low temperatures, since networks with low temperatures are overconfident in their predictions, and ensembling reduces this overconfidence. With high temperatures, the predictions of both a single network and an ensemble get closer to the uniform distribution over classes, and $b$ approaches zero. Parameter $c$ approximates the quality of the ``infinite''-size ensemble, $\mathrm{NLL}_\infty(\tau)$. For each network size, there is an optimal temperature, which may be either higher or lower than one, depending on the dataset--architecture combination (see Appendix~\ref{app:nll_fixed_temp_setup2} for more examples).
This shows that even large ensembles need calibration. Moreover, the optimal temperature increases as the network size grows. Therefore, not only are large single networks more confident in their predictions than small single networks, but the same holds even for large ensembles; a higher optimal temperature reduces this overconfidence. We notice that for a given network size, the optimal temperature converges as $n \rightarrow \infty$; we show the corresponding plots in Appendix~\ref{app:temp}. \begin{figure} \centering \includegraphics[width=\textwidth]{figures_final/pls_vs_temp_vgg_100_loss.pdf} \caption{Parameters of power laws and the quality of the approximation for $\mathrm{NLL}_n(\tau)$ with a fixed temperature $\tau$ for VGG on CIFAR-100.} \label{fig:ens_fixed_t} \end{figure} {\bf NLL with calibration.$\quad$} When the temperature is applied before averaging, $\widehat{\mathrm{CNLL}}_n$ can be closely approximated with a power law for all considered dataset--architecture pairs, see Appendix~\ref{cnll:setup2}. Figure~\ref{fig:cnll_ens} shows how the resulting parameters of the power law change when the network size increases, for different settings. The rightmost plot reports the quality of the approximation. In figure~\ref{fig:cnll_ens}, we observe that for WideResNet, parameter $b$ decreases as $s$ becomes large, and $c$ starts growing for large $s$. For VGG, this effect also occurs in a mild form but is barely visible in the plot. This suggests that large networks gain less from ensembling, and therefore ensembles of larger networks are less effective than ensembles of smaller networks. We suppose the described effect is a consequence of under-regularization (large networks need more careful hyperparameter tuning and regularization), since we also observed the described effect in a stronger form for networks with all regularization turned off, see Appendix~\ref{app:noreg_b_c}. However, the described effect might also be a consequence of the decreased diversity of wider networks~\cite{neal2018modern}, and needs further investigation. \begin{figure} \centering \includegraphics[width=\textwidth]{figures_final/pls_cnll_vgg_wr_100.pdf} \caption{Parameters of power laws and the quality of the approximation for $\mathrm{CNLL}_n$ for different network sizes $s$. VGG and WideResNet on CIFAR-100.} \label{fig:cnll_ens} \end{figure} \section{NLL as a function of network size} \label{sec:network_size} In this section, we analyze the behaviour of the NLL of an ensemble of a fixed size $n$ as a function of the member network size $s$. We consider both the non-calibrated and calibrated NLL, and analyze the cases $n=1$ and $n>1$ separately. \citet{scaling_description} reason about a power law of accuracy for $n=1$ when $s \rightarrow \infty$, considering shallow fully-connected networks on the MNIST dataset. We would like to check whether $\widehat{\mathrm{(C)NLL}}_s$ can be approximated with a power law on the whole reasonable range of $s$ in practice. {\bf Single network.$\quad$} Figure~\ref{fig:nll_network_size}, left, shows the NLL with $\tau = 1$ and the CNLL of a single VGG on the CIFAR-100 dataset as a function of the network size. We observe the double descent behaviour~\cite{double_descent, double_descent2} of the non-calibrated NLL, which could not be approximated with a power law on the considered range of $s$.
The calibration removes the double-descent behaviour and allows a close power-law approximation, as confirmed in the middle plot of figure~\ref{fig:nll_network_size}. Interestingly, parameter $a$ is close to $-0.5$, which coincides with the results of~\citet{scaling_description} derived for the test error. The results for other dataset--architecture pairs are given in Appendix~\ref{app:network_size}. \citet{double_descent} observe the double descent behaviour of \emph{accuracy} as a function of the network size for highly overfitted networks, i.e., when training networks without regularization, with label noise, and for many more epochs than is usually needed in practice. In our practical setting, accuracy and CNLL are monotonic functions of the network size, while for the non-calibrated NLL, the double descent behaviour is observed. \citet{pitfalls} point out that accuracy and CNLL usually correlate, so we hypothesize that the double descent \emph{may be} observed for CNLL in the same scenarios in which it is observed for accuracy, while the non-calibrated NLL exhibits the double descent at earlier epochs in these scenarios. To sum up, our results support the conclusions of~\cite{pitfalls} that the comparison of the NLL of models of different \emph{sizes} should only be performed with an optimal temperature. {\bf Ensemble.$\quad$} As can be seen from figure~\ref{fig:nll_network_size}, right, for ensemble sizes $n > 1$, CNLL starts increasing at some network size $s$. This agrees with the behaviour of parameter $c$ of the power law for CNLL shown in figure~\ref{fig:cnll_ens}, which was discussed in section~\ref{sec:ensemble_size}. Because of this behaviour, we do not perform experiments on approximating $\mathrm{CNLL}_s$ for $n>1$ with a power law. \begin{figure} \centering \includegraphics[width=\textwidth]{figures_final/pl_vgg_100_ms_ind_and_ens.pdf} \caption{Non-calibrated $\mathrm{NLL}_s$ and $\mathrm{CNLL}_s$ for VGG on CIFAR-100. Left and middle: for a single network, $\mathrm{NLL}_s$ exhibits double descent, while $\mathrm{CNLL}_s$ can be closely approximated with a power law. Right: $\mathrm{NLL}_s$ and $\mathrm{CNLL}_s$ of an ensemble of several networks may be non-monotonic functions.} \label{fig:nll_network_size} \end{figure} \section{NLL as a function of the total parameter count} \label{sec:budgets} In the previous sections, we analyzed the vertical and horizontal cuts of the $(n, s)$-plane shown in figure~\ref{fig:motivation}, left. In this section, we analyze the diagonal cuts of this space. One diagonal direction corresponds to a fixed total parameter count, later referred to as a memory budget, and the orthogonal direction reflects an increasing budget. We first investigate CNLL as a function of the memory budget. In figure~\ref{fig:budgets}, left, we plot the sequences $\widehat{\mathrm{CNLL}}_n$ for different network sizes $s$, aligning plots by the total parameter count. CNLL as a function of the memory budget is then introduced as the lower envelope of the described plots. As in the previous sections, we approximate this function with a power law and observe that the approximation is tight; the corresponding visualization is given in figure~\ref{fig:budgets}, middle. The same result for other dataset--architecture pairs is shown in Appendix~\ref{app:budgets}. Another practically important effect is that the lower envelope may be reached at $n>1$.
In other words, for a fixed memory budget, a single network may perform worse than an ensemble of several medium-size networks of the same total parameter count, called a memory split in the subsequent discussion. We refer to the described effect itself as the Memory Split Advantage (MSA) effect. We further illustrate the MSA effect in figure~\ref{fig:budgets}, right, where each line corresponds to a particular memory budget, the x-axis denotes the number of networks in the memory split, and the lowest CNLL, shown on the y-axis, is achieved at $n > 1$ for all lines. We consistently observe the MSA effect for different settings and metrics, i.\,e. CNLL and accuracy, for a wide range of budgets, see Appendix~\ref{app:budgets}. We note that the MSA effect holds even for budgets smaller than the standard budget. We also show in Appendix~\ref{app:budgets} that using a memory split with a relatively small number of networks is only moderately slower than using a single wide network, in both the training and testing stages. We describe the memory split advantage effect in more detail in~\cite{chirkova2020deep}. \begin{figure} \begin{center} \centerline{ \begin{tabular}{c@{}c} \includegraphics[height=26mm]{figures_final/pl_vgg_cifar100_budget.pdf}& \includegraphics[height=26mm]{figures_final/vgg_cifar100_budgets_cnll.pdf} \end{tabular}} \caption{Left and middle: $\mathrm{CNLL}_B$ for VGG on CIFAR-100 can be closely approximated with a power law. $\mathrm{CNLL}_B$ is a lower envelope of $\mathrm{CNLL}_n$ for different network sizes $s$. Right: Memory Split Advantage effect, VGG on CIFAR-100. For different memory budgets $B$, the optimal CNLL is achieved at $n>1$.} \label{fig:budgets} \end{center} \end{figure} \section{Prediction based on power laws} \label{sec:prediction} One of the advantages of a power law is that, given a few starting points $y_1, \dots, y_m$ satisfying the power law, one can predict the values $y_i$ for any $i \gg m$. In this section, we check whether the power laws discovered in section~\ref{sec:ensemble_size} are stable enough to allow accurate predictions. We use the CNLL of the ensembles of sizes $1$--$4$ as starting points, and predict the CNLL of larger ensembles. We first conduct the experiment using the values of the starting points obtained by averaging over a large number of runs. In this case, the CNLL of large ensembles may be predicted with high precision, see Appendix~\ref{app:prediction}. Second, we conduct the experiment in the practical setting, where the values of the starting points are obtained using only 6 trained networks (using 6 networks allows a more stable estimation of the CNLL of ensembles of sizes $1$--$3$). The two left plots of figure~\ref{fig:predictions_ensembles} report the error of the prediction for different ensemble sizes and network sizes of VGG and WideResNet on the CIFAR-100 dataset. The plots for other settings are given in Appendix~\ref{app:prediction}. The experiment was repeated 10 times for VGG and 5 times for WideResNet with independent sets of networks, and we report the average error. The error is $1$--$2$ orders of magnitude smaller than the value of CNLL; based on this, we conclude that the discovered power laws allow quite accurate predictions. In section~\ref{sec:budgets}, we introduced memory splitting, a simple yet effective method of improving the quality of the network for a given memory budget $B$.
Using the obtained predictions for CNLL, we can now predict the optimal memory split (OMS) for a fixed $B$ by selecting the optimum on a specific diagonal of the predicted $(n, s)$-plane, see Appendix~\ref{app:prediction} for more details. We show the results for the practical setting with 6 given networks in figure~\ref{fig:predictions_ensembles}, right. The plots depict the number of networks $n$ in the true and predicted OMS; the network size can be uniquely determined by $B$ and $n$. In most cases, the discovered power laws predict either the exact or the neighboring split. If we predict the neighboring split, the difference in CNLL between the true and predicted splits is negligible, i.\,e.\,of the same order as the errors presented in figure~\ref{fig:predictions_ensembles}, left. To sum up, we observe that the discovered power laws not only \textit{interpolate} $\widehat{\mathrm{CNLL}}_n$ on the \textit{whole} considered range of $n$, but are also able to \textit{extrapolate} this sequence, i.\,e.\,a power law fitted on a \emph{short} segment of $n$ approximates the \emph{full} range well, providing an argument for using power laws specifically, rather than other functions. \begin{figure} \begin{center} \centerline{ \begin{tabular}{@{}c@{}c} \multicolumn{2}{c}{\footnotesize RMSE between true and predicted CNLL} \\ \includegraphics[width=0.27\textwidth]{figures_final/predicted_carpet_vgg100.pdf}& \includegraphics[width=0.27\textwidth]{figures_final/predicted_carpet_wr100.pdf} \end{tabular} \begin{tabular}{@{}c@{}c} \multicolumn{2}{c}{\footnotesize Optimal memory splits: predicted vs true}\\ \includegraphics[width=0.22\textwidth]{figures_final/vgg_cifar100_predcited_memory_splits.pdf}& \includegraphics[width=0.22\textwidth]{figures_final/wr_cifar100_predcited_memory_splits.pdf} \end{tabular}} \caption{Predictions based on $\mathrm{CNLL}_n$ power laws for VGG and WideResNet on CIFAR-100. Predictions are made for large $n$ based on $n = 1..4$. Left pair: RMSE between true and predicted CNLL. Right pair: predicted optimal memory splits vs true ones. Mean $\pm$ standard deviation is shown for predictions.} \label{fig:predictions_ensembles} \end{center} \end{figure} \section{Related Work} {\bf Deep ensembles and overparameterization.$\quad$} The two main approaches to improving deep neural network accuracy are ensembling and increasing the network size. While a number of works report the quantitative influence of the above-mentioned techniques on model quality~\cite{fort2019deep, ju2018relative, deepens, double_descent, neyshabur2018towards, novak2018sensitivity}, few investigate the qualitative side of the effect. Some recent works~\cite{d2020double, scaling_description, geiger2019jamming} consider a simplified or narrowed setup to tackle it. For instance,~\citet{scaling_description} similarly discover power laws in the test error w.\,r.\,t.\,model and ensemble size for simple binary classification with hinge loss, and give a heuristic argument supporting their findings. We provide an extensive theoretical and empirical justification of similar claims for the calibrated NLL using modern architectures and datasets.
Other lines of work on neural network ensembles and overparameterized models include, but are not limited to, the Bayesian perspective~\cite{he2020bayesian, wilson2020case, wilson2020bayesian}, ensemble diversity improvement techniques~\cite{kim2018attention, lee2016stochastic, sinha2020dibs, zaidi2020neural}, the neural tangent kernel (NTK) view on overparameterized neural networks~\cite{arora2019exact, jacot2018neural, lee2019wide}, etc. {\bf Power laws for predictions.$\quad$} A few recent works also empirically discover power laws with respect to data and model size and use them to extrapolate the performance of small models/datasets to larger scales~\cite{kaplan2020scaling, rosenfeld2019constructive}. Their findings even allow estimating the optimal compute budget allocation given limited resources. However, these studies do not account for the ensembling of models or the calibration of NLL. {\bf MSA effect.$\quad$} Concurrently with our work, \citet{google_msa} investigate a similar effect for budgets measured in FLOPs. Earlier, an MSA-like effect was also noted in~\cite{coupled_ensembles,li2019ensemblenet}. However, the mentioned works did not consider the proper regularization of networks of different sizes and did not propose a method for predicting the OMS, while both aspects are important in practice. \section{Conclusion} In this work, we investigated the power-law behaviour of the CNLL of deep ensembles as a function of ensemble size $n$ and network size $s$, and observed the following power laws. First, with a minor modification of the calibration procedure, CNLL as a function of $n$ follows a power law on a wide finite range of $n$, starting from $n=1$, but with the power parameter slightly higher than the one derived theoretically. Second, the CNLL of a single network follows a power law as a function of the network size $s$ on the whole reasonable range of network sizes, with the power parameter close to the one previously derived for the test error. Third, the CNLL also follows a power law as a function of the total parameter count (memory budget). The discovered power laws allow predicting the quality of large ensembles based on the quality of smaller ensembles consisting of networks with the same architecture. The practically important finding is that, for a given memory budget, the number of networks in the optimal memory split is usually much greater than one and can be predicted using the discovered power laws. Our source code is available at \url{https://github.com/nadiinchi/power_laws_deep_ensembles}. \section*{Broader Impact} In this work, we provide an empirical and theoretical study of existing models (namely, deep ensembles); we propose neither new technologies nor architectures, thus we are not aware of any specific ethical or future societal impact. We would, however, like to point out a few benefits gained from our findings, such as the optimization of resource consumption when training neural networks and a contribution to the overall understanding of neural models. To the best of our knowledge, no negative consequences follow from our research. \begin{ack} We would like to thank Dmitry Molchanov, Arsenii Ashukha, and Kirill Struminsky for the valuable feedback. The theoretical results presented in section~\ref{sec:our_theory} were supported by Samsung Research, Samsung Electronics.
The empirical results presented in sections~\ref{sec:ensemble_size},~\ref{sec:network_size},~\ref{sec:budgets},~\ref{sec:prediction} were supported by the Russian Science Foundation grant \textnumero 19-71-30020. This research was supported in part through the computational resources of HPC facilities at NRU HSE. Additional revenues of the authors for the last three years: Stipend by Lomonosov Moscow State University, Travel support by ICML, NeurIPS, Google, NTNU, DESY, UCM. \end{ack} \medskip \small \bibliographystyle{apalike}
{ "timestamp": "2021-06-29T02:37:05", "yymm": "2007", "arxiv_id": "2007.08483", "language": "en", "url": "https://arxiv.org/abs/2007.08483" }
\section{Introduction} With growing interest in autonomous vehicles, 3D object detection has received considerable attention. Due to its superior capability of modeling 3D objects, point cloud is the most popular type of data source. Most existing 3D detectors are point-based~\cite{qi2018frustum,wang2019frustum,Lan_2019_CVPR,shi2019pointrcnn,yang2019std} or voxel-based~\cite{lang2019PointPillars,zhou2018voxelnet,yan2018second,ye2020sarpnet,hu2019you}. Point-based approaches generate features from raw point cloud data directly. Despite achieving promising performance, these methods suffer from high computational complexity, which discourages their application in real-time scenarios. Voxel-based approaches~\cite{lang2019PointPillars,zhou2018voxelnet,yan2018second,ye2020sarpnet,hu2019you} first convert the point cloud into voxels and then employ deep convolutional neural networks (DCNN) to conduct object detection. Taking advantage of advanced DCNN architectures, voxel-based approaches achieve state-of-the-art performance with low computational cost. Our work follows the setting of voxel-based methods for their favorable balance of efficiency and effectiveness. Although much progress has been made in improving the performance of voxel-based detectors, an important characteristic of point cloud is not well explored: input data points are usually not uniformly distributed over the space. The density of point cloud can be affected by different factors, e.g., the distance of objects from the LiDAR sensor and object self-occlusion. As illustrated in Fig.~\ref{fig:intro}, the density of point cloud over objects highly depends on the relative locations of different parts. It is also intuitive that the amount of information is highly related to the point density. However, existing voxel-based detectors extract features from uniformly divided sub-regions, regardless of the actual distribution of the points. We believe that this leads to a loss of useful information and ultimately results in sub-optimal detection performance. To fully exploit the non-uniform distribution of point cloud, we propose a novel 3D object detection framework to adaptively model the rich features of 3D objects according to the information density of points. As illustrated in Fig.~\ref{fig:arch}, our framework contains two stages. Coarse detection results are obtained in the first stage via a voxel-based region proposal network. In the second stage, we introduce InfoFocus to model and extract the informative features from regions of interest (RoIs) formed by the coarse predictions, according to the distribution of point cloud, and the predictions are improved with the help of the refined features. InfoFocus is the core structure of our framework, containing three sequentially connected modules: the Point-of-Interest (PoI) Pooling, the Visibility Attentive Module, and the Adaptive Point-wise Attention. \textbf{PoI Pooling.} Unlike 2D objects, which contain densely distributed information over the whole RoI, most points of 3D objects lie on their surfaces. Therefore, we hypothesize that the most informative features concentrate on the edges of the RoI. Motivated by this intuition, we propose PoI Pooling, which samples features densely on the edges and sparsely in the middle of the RoI to accommodate the non-uniform information distribution of point cloud.
\textbf{Visibility Attentive Module.} Heavy self-occlusion is present due to the nature of LiDAR data: no points exist on the back side of an object relative to the sensor. To mitigate this issue, our proposed Visibility Attentive Module applies hard attention to emphasize the visible parts of objects and eliminate the features from invisible points. \textbf{Adaptive Point-wise Attention.} PoIs may contain different amounts of information, even when they are all visible. We introduce Adaptive Point-wise Attention to re-weight the features to improve the modeling of 3D objects. We conduct extensive experiments on the largest public 3D object detection benchmark, i.e., nuScenes~\cite{caesar2019nuscenes}. Experimental results show that our approach significantly outperforms the baselines, achieving $39.5\%$ mAP at $31$ FPS. Results of comprehensive ablation studies demonstrate the effectiveness of our InfoFocus and show that each sub-module makes a considerable contribution to our framework. \section{Related Work} \noindent\textbf{Point-based Detectors}. Inspired by the powerful feature learning capability of PointNet \cite{qi2017pointnet,qi2017pointnet++} and the advanced modeling structure of 2D object detectors \cite{Girshick_2014_CVPR,Girshick_2015_ICCV,ren2015faster}, Frustum PointNets~\cite{qi2018frustum} extrude 2D object proposals into frustums to generate 3D bounding boxes from raw point cloud. Lan et al.~\cite{Lan_2019_CVPR} add a decomposition-aggregation module modeling local geometry to extract the global feature descriptor of point cloud. Limited by the initial 2D box proposals, those methods yield low performance when objects are occluded. In contrast, PointRCNN \cite{shi2019pointrcnn} generates 3D proposals directly from point cloud instead of 2D images. The recent STD \cite{yang2019std} refines the detection boxes in a coarse-to-fine manner. However, all those methods are computationally expensive due to the large number of data points to be processed. \noindent\textbf{Multi-view 3D Detectors}. MV3D~\cite{chen2017multi} is proposed to fuse multi-view feature maps for the generation of 3D box proposals. Following \cite{chen2017multi}, Ku et al. \cite{ku2018joint} explore high-resolution feature maps to compensate for the information loss for small objects. These methods address the feature alignment between modalities at a coarse level and are typically slow. Liang et al.~\cite{Liang_2018_ECCV} design a continuous fusion layer to deal with the continuous state of LiDAR and the discrete state of images. Later, \cite{liang2019multi,vora2019pointpainting} leverage different strategies to jointly fuse related tasks to improve the feature representation. \noindent\textbf{Voxel-based Detectors}. Recently, there is a trend of using regular 3D voxel grids to represent point cloud such that the input data can be easily processed by 3D/2D convolution networks. Among those, VoxelNet~\cite{zhou2018voxelnet} is the pioneering work performing voxelization on raw 3D point cloud. To improve its efficiency, SECOND~\cite{yan2018second} adopts sparse convolution and speeds up the detection process without compromising detection accuracy. PointPillars~\cite{lang2019PointPillars} dynamically converts the 3D point cloud into a 2D pseudo image, making it more suitable for the application of existing 2D object detection techniques. In~\cite{ye2020sarpnet}, Ye et al. design a new voxel generator to reduce the information loss along the vertical direction.
Building upon voxel-based detectors, our model captures richer information about objects by refining their feature representations at a second stage, guided by the point cloud density, and ultimately improves the detection results. There are several recent studies~\cite{chen2019fast,liu2019point} focusing on fusing voxel-based features with PointNet-based features in order to extract more fine-grained 3D features. InfoFocus is complementary to these techniques and can be further applied on top of them. WYSIWYG~\cite{hu2019you} is the method most related to our approach, since both drive the model to encode visibility information. However, instead of using a separate branch to generate a hidden invisibility representation, our method directly aggregates the valuable point-wise features from the existing backbone network to refine the proposals in an end-to-end manner. \section{Proposed Approach} The proposed framework is illustrated in Fig.~\ref{fig:arch}; it consists of a deep feature extractor followed by a two-stage architecture. The deep feature extractor, containing a Pillar Feature Network and a DCNN, converts the input point cloud to representative feature maps. Specifically, the Pillar Feature Network divides the whole space into equal pillars and generates the so-called pseudo images~\cite{lang2019PointPillars}. The pseudo images are then processed by the DCNN to obtain the feature maps which are shared by the two stages, i.e., the Region Proposal Network (RPN) and InfoFocus. The RPN generates the initial coarse bounding box proposals, which are refined by InfoFocus with dynamic information modeling. Note that our Deep Feature Extractor and RPN follow the setting of~\cite{lang2019PointPillars}. \subsection{Deep Feature Extractor} The Deep Feature Extractor is composed of two parts: 1) voxelization using the Pillar Feature Network, which converts the orderless point cloud into a sparse pseudo image via a simplified PointNet-like architecture, and 2) feature extraction using the DCNN to learn informative feature maps. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{images/architecture.pdf} \\ \caption{The proposed 3D object detection framework. It consists of three parts: Deep Feature Extractor (DFE), Region Proposal Network, and InfoFocus. InfoFocus contains three modules: PoI Pooling, Visibility Attentive Module, and Adaptive Point-wise Attention Module} \label{fig:arch} \end{figure} \noindent\textbf{Pillar Feature Network}. The Pillar Feature Network operates on the raw point cloud and learns point-wise features for each pillar. After voxelizing the raw point cloud into evenly spaced pillars, we randomly sample $N$ points from each non-empty pillar and then obtain a dense tensor of size $ D \times P \times N $, where $D$ indicates the information dimension of each point, $P$ denotes the number of non-empty pillars, and $N$ denotes the number of points in each pillar. The Pillar Feature Network utilizes a PointNet-like block to learn a multi-dimensional feature vector for each pillar. The pillar-wise features are encoded into a 2D pseudo image with the shape of $ W \times L \times C $, where $W$ and $L$ indicate the width and length of the pseudo image, and $C$ is the number of channels of the feature map. \noindent\textbf{Deep Convolution Neural Network (DCNN)}. The DCNN learns feature maps from the generated 2D pseudo image. The DCNN uses conv-deconv layers to extract features of different levels, and concatenates them to obtain the final features from different strides.
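A minimal PyTorch sketch of such a PointNet-like pillar encoder is given below. This is our own simplification, not the full model: a shared per-point linear layer with BatchNorm and ReLU, followed by a max over the $N$ points of each pillar (the dimensions $D=9$ and $C=64$ follow the defaults of~\cite{lang2019PointPillars} and are assumptions here):
\begin{verbatim}
import torch
import torch.nn as nn

class PillarFeatureNet(nn.Module):
    """Simplified pillar encoder: per-point Linear + BN + ReLU,
    then max-pooling over the N points of each pillar."""
    def __init__(self, in_dim=9, out_dim=64):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)
        self.bn = nn.BatchNorm1d(out_dim)

    def forward(self, x):              # x: (P, N, D) decorated points
        x = self.linear(x)             # (P, N, C) point-wise features
        x = self.bn(x.transpose(1, 2)).transpose(1, 2)
        x = torch.relu(x)
        return x.max(dim=1).values     # (P, C): one feature per pillar
\end{verbatim}
The resulting $(P, C)$ pillar features would then be scattered back to their pillar locations to form the $W \times L \times C$ pseudo image consumed by the DCNN.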
\subsection{Region Proposal Network (RPN)} The RPN takes the feature maps provided by the DCNN as inputs and produces high-quality 3D object proposals. Similar to proposal generation in 2D object detection, anchor boxes are predefined at each position, and proposals are generated by learning the offsets between the anchors and the ground truths. To handle different scales of objects, a dual-head strategy is adopted. Specifically, the small-scale head takes features from the first conv-deconv phase of the DCNN, while the large-scale head takes the features from its concatenation phase. \subsection{InfoFocus} InfoFocus serves as the second stage of our framework; it takes the candidate proposals from the RPN and extracts features of objects in a hierarchical manner from the feature maps produced by the DCNN. Specifically, given each 3D object proposal, InfoFocus dynamically focuses on the informative parts of the feature maps by gradually emphasizing the representative PoIs in the following three steps: 1) the edge points are selected from the whole proposal region by PoI Pooling; 2) the Visibility Attentive Module emphasizes the informative points according to their visibility relative to the LiDAR sensor; and 3) in the Adaptive Point-wise Attention Module, the features of the visible points are further weighted adaptively. The re-weighted features of the visible points are then fused to form the final representation of the proposal, on top of which two fully-connected layers are utilized to predict the refined box. \noindent\textbf{PoI Pooling}. When representing a 3D proposal, the most intuitive approach is to adopt the strategy commonly used in two-stage 2D object detectors, i.e., RoI Pooling (see Fig.~\ref{fig:POIPooling}, left). However, unlike 2D images, which have densely distributed information over the region proposals, 3D point cloud mostly resides on the object surface, which results in non-uniform information over the regions (most information is located on the edges of proposals). The proposed PoI Pooling is illustrated in Fig.~\ref{fig:POIPooling} (right). Instead of sampling points uniformly over a region of the feature maps, we focus on sampling points at the informative parts, including the four corners, the center point, and key-points on the edges. Note that we consider the center position as an additional useful signal, since it is likely to capture semantic-level information. We first project the 3D proposal to the bird's-eye-view coordinate system. Let $p_{0}, p_1, p_2$ and $p_3$ represent the positions of the top-left, top-right, bottom-right, and bottom-left corners of a proposal on the pseudo image, respectively, and let $p_c$ denote the center point. Along each edge, $n$ more key-points are uniformly sampled. For example, for the top edge between $p_0$ and $p_1$, the position of a sampled key-point is $kp_j= p_{0}\frac{j}{n+1} + p_{1}\frac{n+1-j}{n+1}$, where $j$ is an integer and $ 1 \leq j \leq n$. In this way, $(5 + 4n)$ PoIs are obtained. A high-dimensional feature is extracted for each PoI according to its relative position on the feature map, and we then obtain a feature set $F_{poi}=\{f^{poi}_1, f^{poi}_2,...,f^{poi}_{N_{poi}}\}$, where $N_{poi}=5 + 4n$ is the number of selected PoIs within the considered region. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{images/POIPooling.pdf} \caption{RoI Pooling \emph{vs.} PoI Pooling. The grid represents the feature map, and the dots denote sampled points of interest.
RoI Pooling samples the whole box, while PoI Pooling focuses on key-points on the edges of interest} \label{fig:POIPooling} \end{figure} \noindent\textbf{Visibility Attentive Module}. Severe self-occlusion typically occurs in point cloud but is ignored by most existing methods. The Visibility Attentive Module (see Fig.~\ref{fig:visibility}, left) is proposed to mitigate this issue by focusing on the information provided by the visible parts of objects. We argue that visible regions contain more useful information than occluded ones. Formally, we propose to re-weight the features of PoIs according to their visibility by exploiting the geometric relationship between the proposals and the LiDAR sensor in bird’s eye view. As shown in Eq.~\ref{eq: vam}, $F_{vis}$ denotes the updated feature set, where $v^{poi}_i$ indicates the visibility score of the $i$th PoI. Different weighting strategies can be used; we adopt a hard attention strategy in this work for its simplicity: we assign $v^{poi}_i=1$ if the $i$th PoI is visible and $v^{poi}_i=0$ otherwise. In other words, we only take PoIs on the visible edges to represent the proposal. \begin{equation} \label{eq: vam} F_{vis} = \{ f^{poi}_1 * v^{poi}_1, f^{poi}_2 * v^{poi}_2,..., f^{poi}_{N_{poi}} * v^{poi}_{N_{poi}}\} \end{equation} For efficiency, a simple yet effective method is used to estimate the visibility of points in the bird's eye view. To determine the sides of a proposal facing the sensor, we first compute the distance of each corner to the LiDAR sensor and determine the one that is closest to the sensor. Then, we consider the two edges passing through this closest corner as the visible edges and the other two as the occluded ones. \noindent\textbf{Adaptive Point-wise Attention Module}. PoI Pooling and the Visibility Attentive Module are motivated by the non-uniform density of point cloud. However, two points may offer different amounts of information even if both are visible to the sensor. The Adaptive Point-wise Attention Module provides the flexibility for the visible PoIs to contribute unequally to the prediction. Suppose $F_{vis} = \{ f^{vis}_{1}, f^{vis}_{2}, ..., f^{vis}_{N_{vis}} \}$ denotes the feature set of visible PoIs. The Adaptive Point-wise Attention Module learns an attention weight $v^{vis}_i$ for each $f^{vis}_i$ adaptively for the next-step feature aggregation. Specifically, a shared fully connected (FC) layer with sigmoid as the activation function is used to learn the attention weights, formally expressed as $v^{vis}_i = \mathrm{Sigmoid}(\textbf{W}f^{vis}_i+b)$. We use $F_{att}=\{ f^{att}_{1}, f^{att}_{2}, ..., f^{att}_{N_{vis}} \}$ to represent the re-weighted feature set of visible PoIs, obtained from $F_{vis}$ and the attention weights, where $f^{att}_i=f^{vis}_i*v^{vis}_i$. The final representation of each proposal aggregates the features of its visible PoIs. Let $e_0, e_1, e_2$ and $e_3$ denote the top, right, bottom, and left edges of a proposal, respectively. We first compute $f^e_i$ by applying max pooling to all the visible points on $e_i$. Then, the final representation is obtained as $f^e_0||f^e_1||f^e_2||f^e_3||f^p_c$, where $f^p_c$ indicates the feature of the center point and $||$ indicates concatenation. \subsection{Loss Function} Given the output PoI feature representation from the aforementioned three modules, topped by fully-connected layers, the head network consists of three branches predicting the box class, localization, and direction.
\subsection{Loss Function} Given the PoI feature representation output by the three modules above and topped by fully-connected layers, the head network consists of three branches predicting the box class, localization, and direction. The ground truth and anchor boxes are parameterized as $ (x, y, z, w, l, h, \theta) $, where $(x, y, z)$ is the center of the box, $(w, l, h)$ is the dimension of the box, and $\theta$ is the heading around the z-axis in the LiDAR coordinate system. The box regression targets are computed as the residuals between the ground truth and the anchors: \begin{equation} \begin{split} \triangle x = \frac{x^{gt} - x^a}{d^a}, \triangle y &= \frac{y^{gt} - y^a}{d^a}, \triangle z = \frac{z^{gt} - z^a}{h^a}, \\ \triangle w = \log (\frac{w^{gt}}{{w^{a}}}), \triangle l &= \log (\frac{l^{gt}}{{l^{a}}}), \triangle h = \log (\frac{h^{gt}}{{h^{a}}}), \\ \triangle \theta &= \theta^{gt} - \theta^{a} \end{split} \end{equation} where $x^{gt}$ and $x^a$ refer to the ground truth and the anchor box respectively, and $ d^a = \sqrt{(w^a)^2 + (l^a)^2} $. To deal with the severe class imbalance in the dataset, we adopt the focal loss \cite{lin2017focal} for the classification loss. The smooth L1 loss \cite{Girshick_2014_CVPR} is used for the regression loss. In addition, to compensate for the direction information that the angle regression alone does not resolve, we adopt a softmax classification loss on the orientation prediction. Similar to the vanilla PointPillars network \cite{lang2019PointPillars}, we define a threefold multi-task loss for each stage, \begin{equation} L_{stage\_i} = \frac{1}{N_{pos}} (\beta_{cls} L_{cls\_i} + \beta_{reg} L_{reg\_i} + \beta_{dir} L_{dir\_i} ), \end{equation} where $i$ denotes either the RPN or the InfoFocus stage, $N_{pos}$ refers to the number of positive anchors, and $ \beta_{cls}, \beta_{reg}, \beta_{dir} $ are chosen to balance the classification, regression, and direction losses. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{images/attention.pdf} \\ \caption{Left: illustration of the Visibility Attentive Module. We compute hard attention for each sampled point depending on whether it is visible to the sensor. We also show the visibility map on the bottom left. Points on the blue line are visible while points on the orange line are invisible. Right: the architecture of the Adaptive Point-wise Attention Module. The point-wise attention is generated using a fully connected (FC) layer followed by a sigmoid function. The input to the FC layer is the feature of each point} \label{fig:visibility} \end{figure}
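To make the target encoding and the per-stage loss combination concrete, the following schematic sketch implements the residuals and losses above; the focal and smooth-L1 forms follow \cite{lin2017focal} and \cite{Girshick_2014_CVPR}, while the helper names and the binary form of the focal loss are illustrative simplifications.
\begin{verbatim}
import numpy as np

def encode_box(gt, anchor):
    """gt, anchor: (7,) arrays (x, y, z, w, l, h, theta).
    Returns the seven regression residuals defined above."""
    x, y, z, w, l, h, t = gt
    xa, ya, za, wa, la, ha, ta = anchor
    da = np.sqrt(wa ** 2 + la ** 2)          # anchor BEV diagonal
    return np.array([(x - xa) / da, (y - ya) / da, (z - za) / ha,
                     np.log(w / wa), np.log(l / la), np.log(h / ha),
                     t - ta])

def smooth_l1(residual):
    a = np.abs(residual)
    return np.where(a < 1.0, 0.5 * a ** 2, a - 0.5).sum()

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss; p: predicted foreground probability, y: 0/1."""
    pt = p if y == 1 else 1.0 - p
    at = alpha if y == 1 else 1.0 - alpha
    return -at * (1.0 - pt) ** gamma * np.log(pt)

def stage_loss(L_cls, L_reg, L_dir, n_pos, betas=(1.0, 2.0, 0.2)):
    """Per-stage multi-task loss with the balancing weights used later."""
    b_cls, b_reg, b_dir = betas
    return (b_cls * L_cls + b_reg * L_reg + b_dir * L_dir) / n_pos
\end{verbatim}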
\subsection{Comparisons with Existing Approaches} \noindent\textbf{Point-based Approaches}. Our framework uses PointNet to extract features from equally divided sub-grids and employs a DCNN to generate 2D feature maps, whereas point-based techniques \cite{qi2018frustum,wang2019frustum,Lan_2019_CVPR,shi2019pointrcnn} use PointNet alone as their backbone. Both our approach and point-based approaches apply a two-stage architecture to infer objects, and both sample features according to the distribution of the point cloud. Compared to PointNet, however, InfoFocus is more computationally efficient without performance degradation. \noindent\textbf{Fusion-based Approaches}. Fusion-based detectors \cite{chen2019fast,liu2019point} make use of both RGB images and point cloud data for 3D object detection. InfoFocus is much faster than fusion-based approaches, since they contain two backbones to process the multi-view sources and are heavily engineered. At the same time, InfoFocus achieves competitive results compared to fusion-based approaches. \noindent\textbf{Traditional Voxel-based Approaches}. Our method shares a similar backbone with existing voxel-based architectures \cite{lang2019PointPillars,zhou2018voxelnet,yan2018second,ye2020sarpnet}. However, previous voxel-based detectors pay little attention to the distribution of LiDAR data, namely that most of a 3D point cloud lies on the surfaces of objects. Our proposed PoI Pooling, Visibility Attentive Module, and Adaptive Point-wise Attention model the non-uniform point cloud through dynamic information focus. First, PoI Pooling reduces sampling from the interior of objects, where few points lie. Next, the Visibility Attentive Module eliminates noise from the backs of objects, where points are occluded. Last, the Adaptive Point-wise Attention learns how much to focus on each sampled point. Jointly, these modules contribute significantly to the superior performance of InfoFocus. \section{Experiments} Our method is evaluated mainly on the nuScenes dataset~\cite{caesar2019nuscenes}, regarded as one of the most challenging 3D object detection benchmarks. We first present our implementation details and then compare with existing approaches both quantitatively and qualitatively. Next, extensive ablation studies are conducted to demonstrate the effectiveness of each designed module. Last, we analyze the inference time and the speed-accuracy trade-off provided by our method. \subsection{Dataset and Evaluation Metric} NuScenes~\cite{caesar2019nuscenes} is one of the largest datasets for autonomous driving. It contains 1000 scenes of 20$s$ duration each, with 23 annotated object classes split into 28,130 training and 6,019 validation samples. We use the LiDAR point cloud as the only input to our method, and all experiments follow the standard protocol on the training and validation sets. Officially, nuScenes evaluates detection accuracy across classes using the average precision (AP) metric, computed from the 2D center distance between the ground truth and the detection box on the ground plane. In detail, the AP score is the normalized area under the precision-recall curve, considering only the region where precision and recall exceed 10\%. The final mean AP (mAP) is the average over the set of ten classes and over the matching thresholds $\mathds{D} = \{0.5, 1, 2, 4 \} $ meters. \subsection{Implementation Details} We integrate InfoFocus into a state-of-the-art real-time 3D object detector~\cite{lang2019PointPillars} to improve detection performance without largely compromising speed. Closely following the codebase\footnote{https://github.com/traveller59/second.pytorch.} recommended by the authors of PointPillars~\cite{lang2019PointPillars}, we use PyTorch to implement the InfoFocus modules and integrate them into the vanilla PointPillars network. More details are given in the supplementary material. \noindent\textbf{RPN}. For each object class, the RPN anchor size is set to the average size of all objects of that class in the nuScenes training set. The matching thresholds follow the custom configuration of the suggested codebase. 1,000 proposals are obtained from the RPN, on which NMS with a threshold of 0.5 is applied to remove overlapping proposals during both training and inference. The final top-ranked 300 proposals are kept for the InfoFocus stage to simultaneously predict the category, location, and direction of objects during both training and inference.
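As a rough illustration of this proposal-selection step, the sketch below scores axis-aligned BEV boxes for brevity; the actual pipeline applies NMS to rotated boxes.
\begin{verbatim}
import numpy as np

def iou_bev(a, b):
    """Axis-aligned BEV IoU between boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def select_proposals(boxes, scores, pre_nms=1000, post_nms=300, thr=0.5):
    """Keep the top pre_nms proposals by score, greedily suppress any
    box overlapping a kept one with IoU > thr, and return the indices
    of at most post_nms survivors."""
    order = np.argsort(-scores)[:pre_nms]
    keep = []
    for i in order:
        if all(iou_bev(boxes[i], boxes[j]) <= thr for j in keep):
            keep.append(i)
        if len(keep) == post_nms:
            break
    return keep
\end{verbatim}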
\noindent\textbf{InfoFocus}. The second stage is our proposed InfoFocus. The three novel modules process object-centric features sequentially based on the initial bounding box proposals from the RPN. The number of sampled key-points per edge, $n$, is set to 2; thus the total number of PoIs, $N_{poi}$, is 13, comprising one center, 4 corners, and 2 key-points on each edge. Similar to RoIAlign~\cite{he2017mask}, bilinear interpolation is used to compute the deep feature of each point from its four neighboring regular locations. As mentioned before, we apply a max-pool layer to summarize the features of the points along each edge, resulting in 5 features per proposal: those of the top, right, bottom, and left edges, and that of the center. When concatenating these features, we always treat the edge closest to the sensor as the top edge. A fully connected layer with a single node is used to generate the point-wise attention weight for each point. The feature of each proposal is transformed by two consecutive FC layers with 512 nodes each and passed to three sibling linear layers: a box-regression branch, a box-classification branch, and a box-direction branch. For the regression target assignment, anchors with Intersection over Union (IoU) greater than 0.6 with the ground truth are considered positive, and those below 0.55 are assigned negative labels. \noindent\textbf{Training Parameters}. Experiments are conducted on a single NVIDIA 1080Ti GPU. The weight decay is set to 0.01. We adopt the Adam optimizer \cite{kingma2014adam} and use the one-cycle scheduler proposed in \cite{smith2018disciplined}. We train our model for a total of 20 epochs by default, which takes about 40 hours from scratch. For the first 8 epochs, the learning rate progressively increases from \num{3e-4} to \num{3e-3} while the momentum decreases from 0.95 to 0.85; in the remaining 12 epochs, the learning rate decreases from \num{3e-3} to \num{3e-6} while the momentum increases from 0.85 to 0.95. The focal loss \cite{lin2017focal} with $ \alpha = 0.25 $ and $ \gamma = 2.0 $ is adopted for the classification loss. The balancing weights for the classification, box regression, and direction losses, \( \beta_{cls}, \beta_{reg}, \beta_{dir} \), are set to 1, 2, and 0.2 for both stages.
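For reference, the schedule just described can be summarized by the following sketch, which linearly interpolates between the stated endpoints; the codebase's one-cycle implementation may anneal with a different curve.
\begin{verbatim}
def one_cycle(epoch, total=20, warmup=8,
              lr=(3e-4, 3e-3, 3e-6), mom=(0.95, 0.85)):
    """Returns (learning_rate, momentum) at a given (fractional) epoch."""
    lo, hi, end = lr
    m_hi, m_lo = mom
    if epoch < warmup:                       # LR ramps up, momentum down
        t = epoch / warmup
        return lo + t * (hi - lo), m_hi + t * (m_lo - m_hi)
    t = (epoch - warmup) / (total - warmup)  # LR anneals down, momentum up
    return hi + t * (end - hi), m_lo + t * (m_hi - m_lo)

# e.g. one_cycle(0) == (3e-4, 0.95); one_cycle(8) == (3e-3, 0.85)
\end{verbatim}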
\subsection{Main Results} First, we compare our framework with state-of-the-art methods on the nuScenes validation set, including the vanilla PointPillars~\cite{lang2019PointPillars} as our baseline and the recently published WYSIWYG~\cite{hu2019you}. As can be seen from Table~\ref{table:1}, the baseline reaches an mAP of 29.5\% with a single stage, and InfoFocus improves it by a substantial 6.9\%, demonstrating the effectiveness of InfoFocus. We also visualize the detection results of our framework on 2D and 3D BEV images in Fig.~\ref{fig:visualization_2D_3D}. As the qualitative comparison in Fig.~\ref{fig:comparison} shows, InfoFocus removes a significant number of the false positives of the vanilla PointPillars and obtains better results. \begin{table}[h!] \centering \caption{Object detection results (\%) on the nuScenes validation set} \begin{tabular}{l c c c c c c c c c c c} \hline Method & car & peds. & barri. & traff. & truck & bus & trail. & const. & motor. & bicyc. & mAP \\ [0.5ex] \hline PointPillars \cite{lang2019PointPillars} & 70.5 & 59.9 & 33.2 & 29.6 & 25.0 & 34.3 & 16.7 & 4.5 & 20.0 & 1.6 & 29.5 \\ WYSIWYG \cite{hu2019you} & 80.0 & 66.9 & 34.5 & 27.9 & 35.8 & 54.1 & 28.5 & 7.5 & 18.5 & 0 & 35.4 \\ \hline Ours & 77.6 & 61.7 & 43.4 & 33.4 & 35.4 & 50.5 & 25.6 & 8.3 & 25.2 & 2.5 & \bf 36.4 \\ \hline \end{tabular} \label{table:1} \end{table} In addition, we submitted the test-set detection results to the nuScenes test server. The results show that our method achieves state-of-the-art performance at an inference speed of 31 FPS, improving the baseline performance by 7\% mAP. Note that all methods listed in Table~\ref{table:2} are LiDAR-based except MonoDIS \cite{simonelli2019disentangling} and CenterNet \cite{zhou2019objects}, which are camera-based. Without bells and whistles, our approach works better than WYSIWYG~\cite{hu2019you}. Considering that our model contains more parameters than the vanilla PointPillars, we also empirically doubled the number of training epochs; with all other settings the same, our method improves by 2\% mAP on the nuScenes test set, as shown in Table~\ref{table:2} (Ours 2$\times$). In total, our method outperforms WYSIWYG \cite{hu2019you} by 4.5\% mAP on the nuScenes test set. In the rest of the paper, the default number of training epochs is adopted. To the best of our knowledge, our framework is superior to all published real-time methods with respect to mAP. \begin{table}[h!] \centering \caption{Object detection results (\%) on the nuScenes test set. Note that MonoDIS and CenterNet are camera-based methods, and the rest are LiDAR-based. Ours 2$\times$ indicates 2$\times$ training time with all other settings the same as Ours} \begin{tabular}{l c c c c c c c c c c c} \hline Method & car & peds. & barri. & traff. & truck & bus & trail. & const. & motor. & bicyc. & mAP \\ [0.5ex] \hline MonoDIS \cite{simonelli2019disentangling} & 47.8 & 37.0 & 51.1 & 48.7 & 22.0 & 18.8 & 17.6 & 7.4 & 29.0 & 24.5 & 30.4 \\ PointPillars \cite{lang2019PointPillars} & 68.4 & 59.7 & 38.9 & 30.8 & 23.0 & 28.2 & 23.4 & 4.1 & 27.4 & 1.1 & 30.5\\ SARPNET \cite{ye2020sarpnet} & 59.9 & 69.4 & 38.3 & 44.6 & 18.7 & 19.4 & 18.0 & 11.6 & 29.8 & 14.2 & 32.4 \\ CenterNet \cite{zhou2019objects} & 53.6 & 37.5 & 53.3 & 58.3 & 27.0 & 24.8 & 25.1 & 8.6 & 29.1 & 20.7 & 33.8 \\ WYSIWYG \cite{hu2019you} & 79.1 & 65.0 & 34.7 & 28.8 & 30.4 & 46.6 & 40.1 & 7.1 & 18.2 & 0.1 & 35.0 \\ \hline Ours & 77.2 & 61.5 & 45.3 & 40.4 & 31.5 & 44.1 & 35.9 & 9.8 & 25.1 & 4.0 & \bf 37.5 \\ Ours 2$\times$ & 77.9 & 63.4 & 47.8 & 46.5 & 31.4 & 44.8 & 37.3 & 10.7 & 29.0 & 6.1 & \bf 39.5 \\ \hline \end{tabular} \label{table:2} \end{table} \subsection{Ablation Studies} To understand the contribution of each major component to the success of InfoFocus, Table~\ref{table:3} summarizes the performance of our framework when a given module is disabled, covering PoI Pooling, the Visibility Attention Module, and the Adaptive Attention Module. \begin{table}[] \centering \caption{Ablation studies on the nuScenes validation set. "Vis. Att." and "Adp. Att." refer to the Visibility Attention Module and the Adaptive Attention Module, respectively} \label{table:3} \medskip \begin{tabular}{c@{\ \ \ \ \ \ } c@{\ \ \ \ \ \ \ } c@{\ \ \ \ \ \ \ }@{\ \ \ } c} \hline PoIPool & Vis. Att. & Adp. Att.
& mAP \\ \hline & & & 29.5 \\ \checkmark & & & 32.5 \\ \checkmark & \checkmark & & 34.8 \\ \checkmark & & \checkmark & 34.8 \\ \checkmark & \checkmark & \checkmark & 36.4 \\ \hline \end{tabular} \end{table} \noindent\textbf{PoI Pooling}. To investigate the effect of PoI Pooling, we simply add PoI Pooling on top of the vanilla PointPillars. This alone brings a 3.0\% mAP improvement. However, when we vary the number of pooled key-points per edge, our framework with four key-points ($n=4$) per edge degrades slightly, by 0.8\% mAP, relative to two key-points ($n=2$). A possible reason is that a higher number of samples along each edge brings in more noise, which harms the detection performance. \noindent\textbf{Visibility Attention}. We further add the Visibility Attention module to filter out invisible edges before PoI Pooling. Table~\ref{table:3} shows that using only the features from the two visible edges improves the result by 2.3\% mAP compared to \emph{baseline+PoIPool}. Generally, the visible parts of objects correspond to the sides closer to the LiDAR sensor, and thus they capture richer information. By applying visibility attention, our method focuses more on this representative information, which results in better performance. \noindent\textbf{Adaptive Point-wise Attention}. Without the Adaptive Point-wise Attention module, the framework implicitly assigns the same weight to each PoI feature. As we can see in Table~\ref{table:3}, adding this module improves the result of \emph{baseline+PoIPool} by 2.3\% mAP and that of \emph{baseline+PoIPool+Vis.Att.} by 1.6\%. These results suggest that the Adaptive Point-wise Attention module helps emphasize useful points, which leads to better performance. \begin{table}[] \centering \caption{Inference time of 3D object detectors. Note that the inference time for the baseline is that of the network reproduced by ourselves} \label{table:4} \medskip \begin{tabular}{l@{\ \ \ \ \ } c@{\ \ \ \ \ } c@{\ \ \ \ \ } c@{\ \ \ \ \ }} \hline Method & Input Format & mAP & Inference Time (ms) \\[0.5ex] \hline Baseline \cite{lang2019PointPillars} & LiDAR & 30.5 & 26.9 \\ MonoDIS \cite{simonelli2019disentangling} & RGB & 30.4 & 29.0 \\ SARPNET \cite{ye2020sarpnet} & LiDAR & 32.4 & 70.0 \\ \hline Ours & LiDAR & 37.5 & 32.9 \\ \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{images/baseline_vs_infofocus.pdf} \\ \caption{We visualize the detection results on nuScenes with 2D and 3D BEV images. On the top, we show the 2D images with the 3D bounding boxes annotated, while the BEV of the LiDAR with ground truth (red) and detection (blue) boxes is shown on the bottom. Note that the line in each box denotes the direction of the object} \label{fig:visualization_2D_3D} \end{figure} \noindent\textbf{Rotated RoIAlign Comparison}. A widely used way to extract region-wise features in two-stage architectures is RoIAlign~\cite{he2017mask}, so it is natural to compare against this strategy in the setting of 3D object detection. We implement a rotated RoIAlign (RRoI) operation \cite{huang2018improving} to accommodate rotated bounding boxes, since in our case they are often not axis-aligned. We conduct experiments exploring two different pooling sizes, $4 \times 4$ (pooled length and pooled width) and $8 \times 4$, with 4 sampled points in each bin.
One reason for using $8 \times 4$ is that most objects, such as cars and buses, are longer than they are wide. With all other implementation details the same as in InfoFocus, Table~\ref{table:5} presents the detection results utilizing the rotated RoI with different pooling sizes. Compared with the vanilla PointPillars \cite{lang2019PointPillars}, adding the RoIAlign layer with size $4 \times 4$ increases the mAP performance by 4.4\%. However, InfoFocus still outperforms RoIAlign by 2.5\% thanks to its better information modeling scheme. \begin{table}[h!] \centering \caption{Comparison with rotated RoIAlign feature extraction results (\%) on the nuScenes validation set} \begin{tabular}{c c c c c c c c c c c c} \hline Method & car & peds. & barri. & traff. & truck & bus & trail. & const. & motor. & bicyc. & mAP \\ [0.5ex] \hline RRoI $4\times4$ & 76.9 & 60.1 & 37.6 & 29.5 & 32.4 & 50.6 & 22.4 & 5.0 & 20.8 & \bf 3.8 & 33.9\\ RRoI $8\times4$ & 77.0 & 59.5 & 36.7 & 29.2 & 33.2 & \bf 51.5 & 25.4 & 4.5 & 24.0 & 1.8 & 34.3\\ \hline Ours & \bf 77.6 & \bf 61.7 & \bf 43.4 & \bf 33.4 & \bf 35.4 & 50.5 & \bf 25.6 & \bf 8.3 & \bf 25.2 & 2.5 & \bf 36.4 \\ \hline \end{tabular} \label{table:5} \end{table} \subsection{Real-time Inference Analysis} As indicated in Table~\ref{table:4}, our framework takes about 32.9 ms to perform detection on a point cloud example from nuScenes, compared with 26.9 ms for the vanilla PointPillars, when both are evaluated on a single NVIDIA 1080Ti GPU. In detail, the pillar feature extraction takes 12.6 ms, the DCNN costs 1.1 ms, the RPN takes 7.3 ms to generate proposals, and the InfoFocus stage takes 11.9 ms. Specifically, proposal generation for the InfoFocus stage, including NMS, takes 5.1 ms, the PoI feature extraction takes 3.1 ms, and the second-stage head with its three branches takes 0.7 ms. We also note that WYSIWYG \cite{hu2019you} reports an overhead of $24.4 \pm 3.5$ ms on average for computing visibility over a 32-beam LiDAR point cloud, so InfoFocus is faster than WYSIWYG \cite{hu2019you} given that we share a similar backbone network. The framework with rotated RoIAlign has an inference time of 32.2 ms. Further, compared with other point-based methods \cite{shi2019pointrcnn,yang2019std}, InfoFocus is considerably faster and conceptually simpler. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{images/baseline_vs_infofocus_2.pdf} \\ \caption{We visualize the BEV detection results for the same point cloud sample on nuScenes with the vanilla PointPillars (left) and InfoFocus (right)} \label{fig:comparison} \end{figure} \section{Conclusions} The non-uniform distribution of point clouds causes varying amounts of information at different locations. We argue that this imbalanced distribution of information degrades previous 3D voxel-based detectors when modeling 3D objects. To address this issue, we propose a 3D object detection framework with InfoFocus that conducts information modeling dynamically. InfoFocus contains three effective modules: PoI Pooling, the Visibility Attentive Module, and the Adaptive Point-wise Attention. As demonstrated by comprehensive experiments, our framework achieves state-of-the-art performance among all real-time detectors on the challenging nuScenes dataset. \noindent\textbf{Acknowledgement}. This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via DOI/IBC contract numbers D17PC00345 and D17PC00287. The U.S.
Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The authors would like to thank Zuxuan Wu and Xingyi Zhou for proofreading the manuscript. \clearpage \bibliographystyle{splncs04}
{ "timestamp": "2020-07-20T02:01:06", "yymm": "2007", "arxiv_id": "2007.08556", "language": "en", "url": "https://arxiv.org/abs/2007.08556" }
\section{Introduction}\label{sec:intro} A general goal in geometric measure theory that has not yet been fully achieved is to understand in a systematic way how generic measures on a metric space interact with a prescribed family of sets in the space, e.g.~rectifiable curves, smooth submanifolds, etc. For instance, we could ask: Is a measure positive on some set in the family? Do there exist countably many sets in the family whose union captures all of the mass of the measure? For Hausdorff measures and measures with \emph{a priori} bounds on the asymptotic densities of the measure, much progress has been made. A description of this work as it stood at the end of the last century can be found in \cite{Mattila}. Newer developments in the theory of rectifiability of absolutely continuous measures include \cite{AT15, Tolsa-n, TT-rect, ENV, Ghinassi, Goering-rect, Dabrowski-n,Dabrowski-s}. An alternative regularity condition that is usually \emph{a priori} weaker than upper and lower density bounds is asymptotic control on how much the measure grows when the radius of a ball is doubled. Recent investigations on the rectifiability of doubling measures include \cite{ADT1,ADT2,ATT-alphas,Naples-TST}. For Radon measures, which in general do not possess good bounds on density or doubling, the situation is far less understood, and examples show that classical geometric measure theory methods are not strong enough to detect when measures charge Lipschitz images of Euclidean subspaces or graphs of Lipschitz functions \cite{MM1988}, \cite{CKRS}, \cite{GKS}, \cite{MO-curves}, \cite{Tolsa-betas}. On the positive side, we now possess a complete description of the interaction of an arbitrary Radon measure in Euclidean space with rectifiable curves \cite{BS3}. This advance required a thorough understanding of the geometry of subsets of rectifiable curves in Hilbert spaces \cite{Jones-TST,Ok-TST,Schul-Hilbert} and further blending geometric measure theory with techniques from modern harmonic analysis. For a longer overview of these and other related developments on generalized rectifiability of measures, including fractional and higher-order rectifiability, see the survey \cite{ident}. In this paper, we obtain a complete description of the interaction of Radon measures in $\mathbb{R}^n$ with graphs of Lipschitz functions over $m$-dimensional subspaces for all $1\leq m\leq n-1$. Moreover, the characterization of Lipschitz graph rectifiability that we identify depends only on the value of the measure on dyadic cubes below a fixed generation. The key insight is that to construct Lipschitz graphs that charge a measure, one must be able to equitably distribute mass which appears in bad cones. For a detailed statement, see Theorem \ref{t:main} and the definition of cone defect. The connection between Lipschitz graphs and the geometry of measures has been studied for over ninety years, appearing in foundational work on the structure of Hausdorff measures in the plane \cite{Bes28,Bes38,MR44} and higher-dimensional Euclidean space \cite{Fed47}. Radon measures on smooth Lipschitz graphs supply a model for generalized surfaces in connection with Plateau's problem \cite{Almgren-survey,Plateau-again}.
Beyond the domain of geometric measure theory, understanding rectifiability of measures with respect to Lipschitz graphs is crucial in the study of boundedness of singular integral operators \cite{CMM-Lipschitz,DS91,DS93,David-Semmes-conjecture} and absolute continuity of harmonic measure on rough domains \cite{David-Jerison,Badger-nullsets,7author,AAM-advances}. \subsection*{Cones and Lipschitz Graph Rectifiability} Throughout the paper, we fix integer dimensions $1\leq m\leq n-1$, where $n$ denotes the dimension of ambient space and $m$ denotes the dimension of a Lipschitz graph. A \emph{bad cone} $X=X(V,\alpha)$ is a set of the form $$X=\{x\in\mathbb{R}^n:\mathop\mathrm{dist}\nolimits(x,V)>\alpha\mathop\mathrm{dist}\nolimits(x,V^\perp)\},$$ where $V\in G(n,m)$ is any $m$-dimensional subspace of $\mathbb{R}^n$, $V^\perp\in G(n,n-m)$ denotes its orthogonal complement, and $\alpha\in(0,\infty)$. In other words, $X$ is the set of points which are relatively closer to $V^\perp$ than $V$. We exclude the degenerate case $\alpha=0$, which corresponds to $X=\mathbb{R}^n\setminus V$. For every $x\in \mathbb{R}^n$ and bad cone $X$, we let $X_x=x+X=\{x+y:y\in X\}$ denote the translate of $X$ with center $x$. The importance of this family of cones is that they yield a perfect test to determine when a set is contained in a Lipschitz graph. \begin{lemma}[Geometric Lemma] \label{geometric-lemma} Let $V\in G(n,m)$, $\alpha\in(0,\infty)$, $X=X(V,\alpha)$, and $E\subseteq\mathbb{R}^n$ be nonempty. There exists a Lipschitz function $f:V\rightarrow V^\perp$ with Lipschitz constant at most $\alpha$ such that $E$ is contained in $$\mathop\mathsf{Graph}\nolimits(f)=\{(x,f(x)):x\in V\}\subseteq V\times V^\perp=\mathbb{R}^n$$ if and only if $E\cap X_x=\emptyset$ for all $x\in E$. The conclusion also holds when $\alpha=0$.\end{lemma} \begin{proof} Unwind the definitions or see e.g.~the proof in \cite[Lemma 4.7]{DeLellis}.\end{proof} We adopt the convention that a \emph{Radon measure} $\mu$ on $\mathbb{R}^n$ is a Borel regular outer measure that is finite on compact sets. At one extreme, given a nonempty family $\mathcal{F}$ of Borel sets, we say that $\mu$ is \emph{carried by $\mathcal{F}$} if there exist $F_1,F_2,\dots\in\mathcal{F}$ such that $\mu(\mathbb{R}^n\setminus \bigcup_1^\infty F_i)=0$. At the other extreme, we say that $\mu$ is \emph{singular to $\mathcal{F}$} provided $\mu(F)=0$ for every $F\in\mathcal{F}$. For any Radon measure $\mu$ and Borel set $E\subseteq\mathbb{R}^n$, the \emph{restriction} $\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} } E$ defined by the rule $\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} } E(A)=\mu(E\cap A)$ for all sets $A\subseteq\mathbb{R}^n$ is again a Radon measure. The \emph{support} $\mathop\mathrm{spt}\nolimits\mu$ of a Radon measure is the smallest closed set such that $\mu(\mathbb{R}^n\setminus\mathop\mathrm{spt}\nolimits\mu)=0$. A Radon measure $\mu$ on $\mathbb{R}^n$ is \emph{Lipschitz graph rectifiable} of dimension $m$ if $\mu$ is carried by \emph{$m$-dimensional Lipschitz graphs}, i.e.~graphs of Lipschitz functions $f:V\rightarrow V^\perp$ over subspaces $V\in G(n,m)$. 
For example, let $\Gamma_1,\Gamma_2,\dots$ be a sequence of Lipschitz graphs in $\mathbb{R}^n$ with uniformly bounded Lipschitz constants, let $\mu_i=\mathcal{H}^m\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} } \Gamma_i$ denote the restriction of the $m$-dimensional Hausdorff measure $\mathcal{H}^m$ to $\Gamma_i$, and let $c_1,c_2,\dots\in(0,\infty)$ be a sequence of weights with $\sum_1^\infty c_i<\infty$. Then $\mu=\sum_1^\infty c_i\mu_i$ is an $m$-dimensional Lipschitz graph rectifiable Radon measure on $\mathbb{R}^n$ with support equal to $\overline{\bigcup_1^\infty\Gamma_i}$. In particular, there exist Lipschitz graph rectifiable measures with $\mathop\mathrm{spt}\nolimits\mu=\mathbb{R}^n$. The following classical criterion for Lipschitz graph rectifiability is due to Federer \cite[Theorem 4.7]{Fed47}. The theorem only applies to Radon measures satisfying the upper density bounds $0<\limsup_{r\downarrow 0} r^{-m}\mu(B(x,r))<\infty$ $\mu$-a.e. Such measures are strongly $m$-dimensional in the sense that $\mu$ is carried by sets of finite Hausdorff measure $\mathcal{H}^m$ and singular to sets of zero $\mathcal{H}^m$ measure; e.g.~this can be shown using \cite[Theorem 6.9]{Mattila}. \begin{theorem}[Federer] \label{federer-theorem} Let $\mu$ be a Radon measure on $\mathbb{R}^n$. If for $\mu$-a.e.~$x\in\mathbb{R}^n$, there exists a bad cone $X=X(V_x,\alpha_x)$ such that \begin{equation}\label{federer-condition}\limsup_{r\downarrow 0} \frac{\mu(X_x\cap B(x,r))}{r^m} < \frac{\alpha_x^m}{2\cdot 100^m} \limsup_{r\downarrow 0} \frac{\mu(B(x,r))}{r^m}<\infty,\end{equation} then $\mu$ is carried by $m$-dimensional Lipschitz graphs. \end{theorem} For general locally finite measures, there is no uniform comparison between the measure of a ball and the radius of the ball. Thus, it is natural to try to replace the normalizing factor $r^m$ in Federer's condition \eqref{federer-condition} with $\mu(B(x,r))$. In this direction, Naples obtained a characterization of Lipschitz graph rectifiability for measures in Hilbert space with pointwise bounded asymptotic doubling. See \cite[Theorem D]{Naples-TST}. \begin{theorem}[Naples] \label{t:naples} Let $\mu$ be a Radon measure on $\mathbb{R}^n$ or a locally finite Borel regular outer measure on the Hilbert space $\ell_2$. Assume that $\mu$ is pointwise doubling, i.e.~$$\limsup_{r\downarrow 0} \frac{\mu(B(x,2r))}{\mu(B(x,r))}<\infty\quad\text{at $\mu$-a.e.~$x$}.$$ Then $\mu$ is carried by $m$-dimensional Lipschitz graphs if and only if for $\mu$-a.e.~$x$ there exists a bad cone $X=X(V_x,\alpha_x)$ such that \begin{equation}\label{eq:cone-point} \lim_{r\downarrow 0} \frac{\mu(X_x\cap B(x,r))}{\mu(B(x,r))}=0.\end{equation} \end{theorem} The restriction to pointwise doubling measures in Theorem \ref{t:naples} is crucial. For general Radon measures in Euclidean space, which may be non-doubling, it is impossible to characterize Lipschitz graph rectifiability using condition \eqref{eq:cone-point} by an example of Cs\"ornyei, K\"{a}enm\"{a}ki, Rajala, and Suomala. See \cite[Example 5.5]{CKRS}. 
\begin{theorem}[Cs\"ornyei \emph{et al.}] There exists a non-zero Radon measure $\mu$ on $\mathbb{R}^2$ and $V\in G(2,1)$ such that for all $\alpha>0$, condition \eqref{eq:cone-point} holds for $\mu$ and $X=X(V,\alpha)$ at $\mu$-a.e.~$x\in\mathbb{R}^2$, and $\mu$ is singular to 1-dimensional Lipschitz graphs.\end{theorem} In a recent preprint \cite{Dabrowski-cones}, Dabrowski independently announced a characterization of $m$-dimensional Lipschitz graph rectifiable measures, which are absolutely continuous with respect to $\mathcal{H}^m$. The characterization is in terms of a Dini condition on conical densities of the form $r^{-m}\mu(X\cap B(x,r))$ and requires certain \emph{a priori} bounds on the lower and upper $m$-dimensional density on $\mu$; for related examples, see \cite{Dabrowski-examples}. In \cite{DNOI}, Del Nin and Obinna Idu supply an extension of Theorem \ref{federer-theorem} to $C^{1,\alpha}$ graphs. \subsection*{Conical Defect} Our goal is to promote the characterization of subsets of Lipschitz graphs given by the geometric lemma to a characterization of Radon measures carried by Lipschitz graphs. Motivated by the characterization of Radon measures carried by rectifiable curves \cite{BS3}, we follow \cite[Remark 2.10]{ident} and design an anisotropic version of the geometric lemma for measures. For any nonempty set $Q\subseteq\mathbb{R}^n$, let $X_Q=\bigcup_{x\in Q}X_x$ denote the union of bad cones centered on $Q$. In particular, suppose that $Q$ is a (half-open) dyadic cube, i.e.~a set of the form$$Q=\left[\frac{j_1}{2^k},\frac{j_1+1}{2^k}\right)\times\cdots\times \left[\frac{j_n}{2^k},\frac{j_n+1}{2^k}\right),\quad k,j_1,\dots,j_n\in\mathbb{Z}.$$ We denote the side length $2^{-k}$ of $Q$ by $\mathop\mathrm{side}\nolimits Q$. Let $x_Q$ denote the geometric center of $Q$. For each $X=X(V,\alpha)$ with $\alpha\in(0,\infty)$, let $r_{Q,X}>0$ be sufficiently large such that if $R$ is a dyadic cube of the same generation as $Q$ and $R$ intersects the \emph{conical annulus} $A_{Q,X}$, $$A_{Q,X}=X_Q\cap B(x_Q,r_{Q,X})\setminus U(x_Q,r_{Q,X}/3),$$ then $R\cap B(x_Q,r_{Q,X}/4)=\emptyset$ and $\mathop\mathrm{gap}\nolimits(R,X_x(V,\alpha/2)^c)\geq \mathop\mathrm{diam}\nolimits Q$ for all $x\in Q$. Here $B(x,r)$ and $U(x,r)$ denote the closed and open balls centered at $x$ with radius $r$, respectively, $\mathop\mathrm{diam}\nolimits Q$ denotes the diameter of $Q$, $S^c=\mathbb{R}^n\setminus S$ for every set $S\subseteq\mathbb{R}^n$, and $\mathop\mathrm{gap}\nolimits(S,T)=\inf\{|s-t|:s\in S,\,t\in T\}$ for all nonempty sets $S,T\subseteq\mathbb{R}^n$. (In harmonic analysis, $\mathop\mathrm{gap}\nolimits(S,T)$ is often denoted by $\mathop\mathrm{dist}\nolimits(S,T)$, but because the gap between sets fails the triangle inequality, we believe it should not be called a distance; our terminology comes from variational analysis, see e.g.~\cite{Beer}.) By Lemma \ref{bounded-geometry} below, we may choose $$r_{Q,X}=81\sqrt{n}\max(\alpha,1/\alpha)\mathop\mathrm{side}\nolimits Q.$$ Define the \emph{discretized conical annulus} $\Delta^*_{Q,X}$ to be the set of all such $R$, i.e.~$R\in\Delta^*_{Q,X}$ if and only if $R$ is a dyadic cube, $\mathop\mathrm{side}\nolimits R=\mathop\mathrm{side}\nolimits Q$, and $R $ has nonempty intersection with $A_{Q,X}$. 
\begin{figure} \begin{center}\includegraphics[width=.8\textwidth]{slanted.png}\end{center} \caption{A conical annulus $A_{Q,X(V,\alpha)}$ (on the left) and its discretization $\Delta^*_{Q,X(V,\alpha)}$ (on the right), where $V=\mathop\mathrm{span}\nolimits \vec v$, $\vec v=(\cos(150^\circ),\sin(150^\circ))$, and $\alpha=1$.} \end{figure} Of course, the discretized conical annulus covers the conical annulus, i.e.~$A_{Q,X}\subseteq\bigcup\Delta^*_{Q,X}$. In addition, define the \emph{dual discretized conical annulus} $\nabla^*_{R,X}$ by setting $$Q\in\nabla^*_{R,X}\quad\text{if and only if}\quad R\in \Delta^*_{Q,X}.$$ (In fact, it can be shown that $\nabla^*_{R,X}$ and $\Delta^*_{R,X}$ are the same set of dyadic cubes\footnote{We thank an anonymous referee for this observation.}, but $\nabla^*_{R,X}$ and $\Delta^*_{R,X}$ play different logical roles in the proofs below, so we use separate notation.) Note that each discretized region $\Delta^*_{Q,X}$ and $\nabla^*_{R,X}$ is a finite family of cubes with cardinality controlled by $n$ and $\alpha$. For every Radon measure $\mu$ on $\mathbb{R}^n$, dyadic cube $Q$ with $\mu(Q)>0$, and bad cone $X=X(V,\alpha)$ with $\alpha\in(0,\infty)$, we define the \emph{conical defect} to be the quantity $$\mathop\mathsf{Defect}\nolimits(\mu,Q,X)=\sum_{R\in\Delta^*_{Q,X}}\frac{\mu(R)}{\mu(Q)}\in[0,\infty).$$ We also set $\mathop\mathsf{Defect}\nolimits(\mu,Q,X)=0$ if $\mu(Q)=0$. The conical defect is a weighted measurement of the mass of $\mu$ in the annular region $\bigcup \Delta^*_{Q,X}\supseteq A_{Q,X}$. It is anisotropic insofar as the normalization of each term $\mu(R)$ that appears in the defect depends on the value of the measure in cubes $Q\in\nabla^*_{R,X}$ emanating in different directions from the cube $R$. \subsection*{Conical Dini Functions} For every Radon measure $\mu$ on $\mathbb{R}^n$ and bad cone $X=X(V,\alpha)$ with $\alpha\in(0,\infty)$, we define the \emph{conical Dini function} $G_{\mu,X}:\mathbb{R}^n\rightarrow[0,\infty]$, $$G_{\mu,X}(x)=\sum_{\mathop\mathrm{side}\nolimits Q\leq 1} \mathop\mathsf{Defect}\nolimits(\mu,Q,X)\, \chi_Q(x)\quad\text{for all }x\in\mathbb{R}^n,$$ where the sum ranges over all dyadic cubes $Q$ of side length at most 1 that contain $x$. The definition of $G_{\mu,X}(x)$ is similar in spirit to the definition of the density-normalized square functions in \cite{BS1,BS2,BS3}. The magnitude of the conical Dini function determines the interaction of $\mu$ with $m$-dimensional Lipschitz graphs. There are several possible ways to formulate this. Perhaps the most important is the following. \begin{theorem}[Main Theorem] \label{t:main} Let $1\leq m\leq n-1$ be integers. Every Radon measure $\mu$ on $\mathbb{R}^n$ decomposes uniquely as $\mu=\mu_G+\mu_G^\perp$, where $\mu_G$ is a Radon measure carried by $m$-dimensional Lipschitz graphs and $\mu_G^\perp$ is a Radon measure singular to $m$-dimensional Lipschitz graphs. The component measures are identified by \begin{align*}\mu_G&=\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\{x\in\mathbb{R}^n:G_{\mu,X}(x)<\infty \text{ for some bad cone }X\},\\ \mu_G^\perp &=\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\{x\in\mathbb{R}^n:G_{\mu,X}(x)=\infty \text{ for every bad cone }X\}.\end{align*} That is, there exists a sequence $\Gamma_1,\Gamma_2,\dots$ of $m$-dimensional Lipschitz graphs such that $\mu_G(\mathbb{R}^n\setminus\bigcup_1^\infty\Gamma_i)=0$ and $\mu_G^\perp(\Gamma)=0$ for every $m$-dimensional Lipschitz graph $\Gamma$. 
\end{theorem} The main theorem implies that to determine whether or not a measure charges some Lipschitz graph or is carried by Lipschitz graphs it is enough to \emph{evaluate the measure on only countably many sets}, e.g.~on dyadic cubes of side length at most 1. \subsection*{Consequences} The first two corollaries are immediate applications of the main theorem. For variations on Corollary \ref{carried-by-graphs} and \ref{charge-a-graph}, which account for the direction $V$ and Lipschitz constant $\alpha$ of the underlying Lipschitz graphs, see \S\S\,2 and 3. \begin{corollary}\label{carried-by-graphs} A Radon measure $\mu$ on $\mathbb{R}^n$ is carried by $m$-dimensional Lipschitz graphs if and only if $G_{\mu,X}(x)<\infty$ for some bad cone $X=X(V_x,\alpha_x)$ for $\mu$-a.e.~$x\in\mathbb{R}^n$.\end{corollary} \begin{corollary}\label{charge-a-graph}A Radon measure $\mu$ on $\mathbb{R}^n$ charges some $m$-dimensional Lipschitz graph, i.e.~$\mu(\Gamma)>0$ for some Lipschitz graph $\Gamma$, if and only if there exists $E\subseteq\mathbb{R}^n$ with $\mu(E)>0$ such that $G_{\mu,X}(x)<\infty$ for some bad cone $X=X(V_x,\alpha_x)$ for each $x\in E$.\end{corollary} A basic geometric measure-theoretic fact is that every $m$-dimensional Lipschitz graph in $\mathbb{R}^n$ has locally finite $m$-dimensional packing measure. Further, typical points of sets of finite $s$-dimensional packing measure have positive lower $s$-dimensional density. Hence we discover the following relationship between the conical Dini functions for $\mu$ and the lower $m$-dimensional density for $\mu$. For details, see e.g.~\cite[Lemma 2.7, 2.8]{BS1}. \begin{corollary}Let $\mu$ be a Radon measure on $\mathbb{R}^n$. \begin{enumerate} \item At $\mu$-a.e.~$x\in\mathbb{R}^n$ such that $G_{\mu,X}(x)<\infty$ for some bad cone $X$, we have $$\underline{D}^m(\mu,x)=\liminf_{r\downarrow 0}\frac{\mu(B(x,r))}{r^m}>0.$$ \item At $\mu$-a.e.~$x\in\mathbb{R}^n$ such that $\underline{D}^m(\mu,x)=0$, we have $G_{\mu,X}(x)=\infty$ for every bad cone $X$.\end{enumerate}\end{corollary} It is a difficult, open problem to obtain similar theorems for Radon measures and Lipschitz images of $\mathbb{R}^m$ in $\mathbb{R}^n$ when $2\leq m\leq n-1$. (However, the case when $\mu$ vanishes on sets of zero $m$-dimensional Hausdorff measure is well understood, see e.g.~\cite{Mattila}.) The obstacle is back at the beginning of the paper: there is no known substitute for Lemma \ref{geometric-lemma} that characterizes subsets of Lipschitz images. Even the case $m=2$, $n=3$ is wide open. For related work on subsets of alternative classes of higher-dimensional curves and surfaces, see \cite{AS-TST, BNV, ENV-Banach, Hyde-TST} and the references therein. \subsection*{Organization} We establish sufficient conditions for Lipschitz graph rectifiability in \S\ref{sec:sufficient}, followed by necessary conditions in \S\ref{sec:necessary}. Using these results, we prove Theorem \ref{t:main} in \S \ref{sec:main}. Finally, we discuss variations on the main theorem in \S\ref{sec:variation}. \section{Sufficient Conditions}\label{sec:sufficient} We adopt the following standard notation. We write $C=C(p,q,\dots)$ to denote that $0<C<\infty$ is a constant depending on at most the parameters $p,q,\dots$. The value of $C$ may change from line to line. The notation $a\lesssim_{p,q,...}\! b$ is short hand for $a\leq C(p,q,\dots)\,b$. \begin{lemma}\label{bounded-geometry} Let $X=X(V,\alpha)$ be a bad cone over $V\in G(n,m)$ with opening $\alpha\in(0,\infty)$. 
In the definition of the conical defect, we may choose the radius $$r_{Q,X}=81\sqrt{n}\max(\alpha,1/\alpha)\mathop\mathrm{side}\nolimits Q.$$ For every dyadic cube $Q$ and for every $x\in Q$, $$X_x\cap B(x,s_{Q,X})\setminus U\left(x,\tfrac{1}{2}s_{Q,X}\right)\subseteq A_{Q,X},\quad \text{where }s_{Q,X}=r_{Q,X}-\sqrt{n}\mathop\mathrm{side}\nolimits Q.$$ There exists a constant $C_1=C_1(n,\alpha)$ such that for every dyadic cube $Q$ and $R\in\Delta^*_{Q,X}$, the Hausdorff distance between $Q$ and $R$ is at most $C_1\mathop\mathrm{side}\nolimits Q$. Moreover, there exists a constant $C_2=C_2(n,\alpha)<\infty$ such that $\Delta^*_{Q,X}$ and $\nabla^*_{R,X}$ have cardinality at most $C_2$ for every dyadic cube $Q$ and $R$ in $\mathbb{R}^n$. \end{lemma} \begin{proof} For ease of computation, pick coordinates on $\mathbb{R}^n$ so that $Q=[-\frac{1}{2},\frac{1}{2})\times \cdots\times [-\frac{1}{2},\frac{1}{2})$ is a ``dyadic cube'' of side length 1 with center at the origin. We return to the conventional definition of dyadic cubes at the conclusion of the proof. Suppose that $R$ is another dyadic cube of side length 1 such that $R\cap X_Q\setminus U(0,r/3)\neq\emptyset$. We first want to determine how large $r$ must be to ensure that $R\cap U(0,r/4)=\emptyset$. Choose any point $z\in R\cap X_Q\setminus U(0,r/3)$. For any $x\in R$, $|x| \geq |z|-\mathop\mathrm{diam}\nolimits R \geq r/3-\sqrt{n}.$ Hence $|x|>r/4$ if $r>12\sqrt{n}$. Impose this lower bound on $r$. Next we find how large $r$ must be to guarantee that $\mathop\mathrm{gap}\nolimits(R, X_b(V,\alpha/2)^c)\geq \mathop\mathrm{diam}\nolimits Q$ for every $b\in Q$. Continuing to work with $z$, pick $a\in Q$ such that $z\in X_a$, which exists because $z\in X_Q$. Fix $b\in Q$. To show that $\mathop\mathrm{gap}\nolimits(R,X_b(V,\alpha/2)^c)\geq \mathop\mathrm{diam}\nolimits Q$, it suffices to prove $B(z,2\mathop\mathrm{diam}\nolimits R)\subseteq X_b(V,\alpha/2)$. Fix $w\in B(z,2\mathop\mathrm{diam}\nolimits R)$ and recall that $\mathop\mathrm{diam}\nolimits Q=\mathop\mathrm{diam}\nolimits R=\sqrt{n}$. By repeated use of the triangle inequality and definition of $X_a$, \begin{equation*}\begin{split} \mathop\mathrm{dist}\nolimits(w,V_b)\geq \mathop\mathrm{dist}\nolimits(z,V_a)-3\sqrt{n}> \alpha&\mathop\mathrm{dist}\nolimits(z,V_a^\perp)-3\sqrt{n}\geq \alpha\mathop\mathrm{dist}\nolimits(w,V_b^\perp) - 3(1+\alpha)\sqrt{n} \\ &\ \ =\frac{\alpha}{2}\mathop\mathrm{dist}\nolimits(w,V_b^\perp)+\frac{\alpha}{2}\mathop\mathrm{dist}\nolimits(w,V_b^\perp)-3(1+\alpha)\sqrt{n}.\end{split}\end{equation*} We now split into cases. From the displayed inequality, we see that $w\in X_b(V,\alpha/2)$ if $(\alpha/2)\mathop\mathrm{dist}\nolimits(w,V_b^\perp)\geq 3(1+\alpha)\sqrt{n}$. Suppose otherwise that $\mathop\mathrm{dist}\nolimits(w,V_b^\perp)< 6\sqrt{n}(1+\alpha)/\alpha$. By the Pythagorean theorem, we have \begin{align*}\mathop\mathrm{dist}\nolimits(w,V_b)^2=|w-b|^2-\mathop\mathrm{dist}\nolimits(w,V_b^\perp)^2 &\geq (|z|-|w-z|-|b|)^2-\mathop\mathrm{dist}\nolimits(w,V_b^\perp)^2\\ &\geq (r/3-3\sqrt{n})^2-\mathop\mathrm{dist}\nolimits(w,V_b^\perp)^2.\end{align*} We want $\mathop\mathrm{dist}\nolimits(w,V_b)>(\alpha/2)\mathop\mathrm{dist}\nolimits(w,V_b^\perp)$. Thus, we want $r$ to be large enough so that $$\left(\frac{r}{3}-3\sqrt{n}\right)^2\geq \left[\left(\frac{\alpha}{2}\right)^2+1\right]\mathop\mathrm{dist}\nolimits(w,V_b^\perp)^2.$$ Set $\rho=r/3-3\sqrt{n}$. 
Because $\mathop\mathrm{dist}\nolimits(w,V_b^\perp)< 6\sqrt{n}(1+\alpha)/\alpha$ and $(\frac{1}{4}\alpha^2+1)<(1+\alpha)^2$, it suffices to choose $\rho$ large enough so that $\rho^2\geq 36n(1+\alpha)^4/\alpha^2.$ Taking square roots and using the crude estimate $(1+\alpha)^2/\alpha \le 4\max(\alpha,1/\alpha)$, we see that it suffices to assume that $r\geq 81\sqrt{n}\max(\alpha,1/\alpha).$ The computations above were for a dyadic cube $Q$ of side length 1 centered at the origin. By scale and translation invariance, we conclude that if we define $$r_{Q,X}=81\sqrt{n}\max(\alpha,1/\alpha)\mathop\mathrm{side}\nolimits Q\quad\text{for all $Q$},$$ then $R\cap B(x_Q,r_{Q,X}/4)=\emptyset$ and $\mathop\mathrm{gap}\nolimits(R,X_{x}(V,\alpha/2)^c)\geq \mathop\mathrm{diam}\nolimits Q$ for all $R\in\Delta^*_Q$ and $x\in Q$. Now, put $s_{Q,X}=r_{Q,X}-\sqrt{n}\mathop\mathrm{side}\nolimits Q$ for each dyadic cube $Q$. Fix a dyadic cube $Q$ and $a\in Q$. Suppose that $y\in X_a\cap B(a,s_{Q,X})\setminus U(a,\frac12 s_{Q,X})$. On the one hand, $$|y-x_Q| \leq |y-a|+|a-x_Q| \leq s_{Q,X}+\sqrt{n}\mathop\mathrm{side}\nolimits Q= r_{Q,X}.$$ On the other hand, \begin{align*}|y-x_Q| \geq |y-a|-|a-x_Q| &\geq \frac{s_{Q,X}}{2}-\sqrt{n}\mathop\mathrm{side}\nolimits Q \\ &\geq \frac{r_{Q,X}}{2} - \frac{3}{2}\sqrt{n}\mathop\mathrm{side}\nolimits Q \geq 39\sqrt{n}\max(\alpha,1/\alpha)\mathop\mathrm{side}\nolimits Q>\frac{r_{Q,X}}{3}.\end{align*} Thus, $y\in X_a\cap B(x_Q,r_{Q,X})\setminus U(x_Q,r_{Q,X}/3)\subseteq A_{Q,X}$. For every dyadic cube $Q$ and $R\in\Delta^*_{Q,X}$, we have $Q\subseteq B(x_Q,\sqrt{n} \mathop\mathrm{side}\nolimits Q)$ and $R\subseteq B(x_Q,r_{Q,X}+\sqrt{n}\mathop\mathrm{side}\nolimits Q)$, because $R$ intersects $A_{Q,X}$ and $\mathop\mathrm{diam}\nolimits Q=\sqrt{n}\mathop\mathrm{side}\nolimits Q$. Therefore, the Hausdorff distance between $Q$ and $R$ is at most $C_1(n,\alpha)\mathop\mathrm{side}\nolimits Q$. By volume doubling, it follows that $\Delta^*_{Q,X}$ and $\nabla^*_{R,X}$ have cardinality at most $C_2(n,\alpha)$. \end{proof} We say that $\mathcal{T}$ is a \emph{tree of dyadic cubes} if $\mathcal{T}$ is a set of dyadic cubes ordered by inclusion such that $\mathcal{T}$ has a unique maximal element, denoted by $\mathop\mathsf{Top}\nolimits(\mathcal{T})$, and if $Q\in\mathcal{T}$, then $P\in\mathcal{T}$ for all dyadic cubes $Q\subseteq P\subseteq\mathop\mathsf{Top}\nolimits(\mathcal{T})$. We may partition $\mathcal{T}=\bigcup_0^\infty\mathcal{T}_i$, where $$\mathop\mathrm{side}\nolimits Q=2^{-i}\mathop\mathrm{side}\nolimits\mathop\mathsf{Top}\nolimits(\mathcal{T})\quad\text{for all }Q\in\mathcal{T}_i.$$ An \emph{infinite branch} is a decreasing sequence $Q_0\supseteq Q_1\supseteq Q_2\supseteq\cdots$ of cubes in $\mathcal{T}$ with each $Q_i\in\mathcal{T}_i$. We define the \emph{set of leaves} of $\mathcal{T}$, denoted by $\mathop\mathsf{Leaves}\nolimits(\mathcal{T})$, to be $$\mathop\mathsf{Leaves}\nolimits(\mathcal{T})=\bigcup\left\{\bigcap_{i=0}^\infty Q_i: Q_0\supseteq Q_1\supseteq Q_2\supseteq\cdots\text{ is an infinite branch of $\mathcal{T}$}\right\}.$$ Although the set of leaves is facially a union over uncountably many infinite branches, because $\#\mathcal{T}_i<\infty$ for all $i\geq 0$, one may prove that $\mathop\mathsf{Leaves}\nolimits(\mathcal{T})=\bigcap_{i=0}^\infty \bigcup\mathcal{T}_i$; e.g., see the argument at the top of \cite[p.~48]{Rogers}. Hence $\mathop\mathsf{Leaves}\nolimits(\mathcal{T})$ is an $F_{\sigma\delta}$ Borel set. 
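To illustrate these definitions: if $\mathcal{T}$ consists of \emph{all} dyadic subcubes of $Q_0=[0,1)^n$, then every point of $Q_0$ lies on an infinite branch, so $\mathop\mathsf{Leaves}\nolimits(\mathcal{T})=Q_0$. At the other extreme, if we keep only one cube per generation, say $$\mathcal{T}=\left\{\left[0,2^{-i}\right)^n:i\geq 0\right\},$$ then there is a single infinite branch and $\mathop\mathsf{Leaves}\nolimits(\mathcal{T})=\bigcap_{i=0}^\infty\left[0,2^{-i}\right)^n=\{0\}$. Intermediate prunings of the full tree produce Cantor-type sets of leaves.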
\begin{lemma}\label{basic-lemma} Let $\mathcal{T}$ be a tree of dyadic cubes in $\mathbb{R}^n$, let $\mu$ be a Radon measure on $\mathbb{R}^n$, and let $X=X(V,\alpha)$ be a bad cone. If $\mu(R)=0$ for every $Q\in\mathcal{T}$ and $R\in \mathcal{T}\cap \Delta^*_{Q,X}$, then there is a Lipschitz function $f:V\rightarrow V^\perp$ with Lipschitz constant at most $\alpha$ such that $\mu(\mathop\mathsf{Leaves}\nolimits(\mathcal{T})\setminus\mathop\mathsf{Graph}\nolimits(f))=0$.\end{lemma} \begin{proof}We may assume that $\mathop\mathsf{Leaves}\nolimits(\mathcal{T})\neq\emptyset$, since otherwise the conclusion is trivial. Let $A=\mathop\mathsf{Leaves}\nolimits(\mathcal{T})\setminus \bigcup_{Q\in\mathcal{T}}\bigcup_{R\in\mathcal{T}\cap \Delta^*_{Q,X}} R$. Then $$\mu(\mathop\mathsf{Leaves}\nolimits(\mathcal{T})\setminus A) \leq \sum_{Q\in\mathcal{T}}\sum_{R\in\mathcal{T}\cap\Delta^*_{Q,X}}\mu(R)=0.$$ We will use Lemma \ref{geometric-lemma} to show that $A$ is contained in the graph of a Lipschitz function over $V$ with Lipschitz constant at most $\alpha$. Let $x\in A$ and pick an infinite branch $Q_0\supseteq Q_1\supseteq Q_2\supseteq\cdots$ such that $\{x\}=\bigcap_0^\infty Q_i$. We must show that $A\cap X_x=\emptyset$. For each dyadic cube $Q$, write $s_{Q,X}=r_{Q,X}-\sqrt{n}\mathop\mathrm{side}\nolimits Q$ and note that $s_{Q,X}=C(n,\alpha)\mathop\mathrm{side}\nolimits Q\gg \mathop\mathrm{diam}\nolimits Q$. By Lemma \ref{bounded-geometry}, for each cube $Q_i$ in the infinite branch containing $x$, $$A\cap X_x\cap B(x,s_{Q_i,X})\setminus U\left(x,\tfrac{1}{2}s_{Q_i,X}\right)\subseteq A\cap A_{Q_i,X}\subseteq \bigcup \mathcal{T}\cap\Delta^*_{Q_i,X}\subseteq \mathbb{R}^n\setminus A.$$ Because $s_{Q_{i+1},X}=\frac12 s_{Q_i,X}$ for each $i\geq 0$, it follows that $A\cap X_x\cap B(x,s_{Q_0,X})=\emptyset$. Also, since $A\subseteq Q_0$ and $s_{Q_0,X}\gg \mathop\mathrm{diam}\nolimits Q_0$, we have $A\cap X_x\setminus B(x,s_{Q_0,X})=\emptyset$, as well. Thus, $A\cap X_x=\emptyset$ for all $x\in A$. By Lemma \ref{geometric-lemma}, there exists $f:V\rightarrow V^\perp$ with Lipschitz constant at most $\alpha$ such that $A\subseteq\mathop\mathsf{Graph}\nolimits(f)$. Finally, \begin{equation*}\mu(\mathop\mathsf{Leaves}\nolimits(\mathcal{T})\setminus\mathop\mathsf{Graph}\nolimits(f))\leq \mu(\mathop\mathsf{Leaves}\nolimits(\mathcal{T})\setminus A)=0.\qedhere\end{equation*} \end{proof} We have reached the main technical result of the paper. The proof of Proposition \ref{build-graphs} dictates the definition of the conical defect; see especially the computation in \eqref{e:main-computation}. \begin{proposition}[Drawing Lipschitz Graphs through Leaves of a Tree] \label{build-graphs} Let $\mathcal{T}$ be a tree of dyadic cubes in $\mathbb{R}^n$ and let $\mu$ be a Radon measure on $\mathbb{R}^n$. If there exists a bad cone $X=X(V,\alpha)$ such that $\sum_{Q\in\mathcal{T}} \mathop\mathsf{Defect}\nolimits(\mu,Q,X)\,\mu(Q)<\infty$, then $\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\mathop\mathsf{Leaves}\nolimits(\mathcal{T})$ is carried by graphs of Lipschitz functions $f:V\rightarrow V^\perp$ with Lipschitz constant at most $\alpha$.\end{proposition} \begin{proof} Suppose that $\mathcal{T}=\bigcup_0^\infty \mathcal{T}_i$, where $\mathcal{T}_i$ denotes the cubes of side length $2^{-i}\mathop\mathrm{side}\nolimits\mathop\mathsf{Top}\nolimits(\mathcal{T})$. 
Without loss of generality, we may assume that $\mathop\mathrm{side}\nolimits\mathop\mathsf{Top}\nolimits(\mathcal{T})=1$ and $\mu(Q)>0$ for every cube $Q\in \mathcal{T}$, because deleting cubes in the tree with $\mu$ measure zero has no effect on the graph rectifiability of $\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\mathop\mathsf{Leaves}\nolimits(\mathcal{T})$. As every $\mu$ null set is trivially graph rectifiable, we may further assume that $\mu(\mathop\mathsf{Leaves}\nolimits(\mathcal{T}))>0$. The general scheme of the proof is to identify countably many subtrees of $\mathcal{T}$ whose sets of leaves are each contained in a Lipschitz graph and collectively cover $\mu$ almost all of the set of leaves of $\mathcal{T}$. We will say that a dyadic cube $R\in\mathcal{T}$ is \emph{bad} if there exists $Q\in\mathcal{T}$ such that $R\in\Delta^*_{Q,X}$. Bad cubes are the obstacles to invoking Lemma \ref{basic-lemma}. Let us compute the total measure of bad cubes in level $i$ of the tree. There are no bad cubes in $\mathcal{T}_0$, because the first level only contains one cube. Fix $i\geq 1$ and let $\mathcal{B}_i$ denote the set of $Q\in\mathcal{T}_i$ such that there exists $R\in\Delta^*_{Q,X}\cap\mathcal{T}$, i.e.~the discretized conical annulus for $Q$ contains a bad cube. Then \begin{equation}\begin{split}\label{e:main-computation}\sum_{\text{bad }R\in\mathcal{T}_i}\mu(R) &=\sum_{\text{bad }R\in\mathcal{T}_i}\sum_{Q\in\nabla^*_{R,X}\cap\mathcal{T}}\frac{\mu(R)\mu(Q)}{\mu(\bigcup \nabla^*_{R,X}\cap\mathcal{T})} = \sum_{Q\in\mathcal{B}_i} \sum_{R\in\Delta^*_{Q,X}\cap\mathcal{T}}\frac{\mu(R)\mu(Q)}{\mu(\bigcup \nabla^*_{R,X}\cap\mathcal{T})} \\ &\leq \sum_{Q\in\mathcal{B}_i} \sum_{R\in\Delta^*_{Q,X}}\frac{\mu(R)\mu(Q)}{\mu(Q)}\leq \sum_{Q\in\mathcal{T}_i}\mathop\mathsf{Defect}\nolimits(\mu,Q,X)\,\mu(Q),\end{split}\end{equation} where in the penultimate inequality $0<\mu(Q)\leq \mu(\bigcup\nabla^*_{R,X}\cap \mathcal{T})$ because $Q\in\mathcal{B}_i$ and $Q\in\nabla^*_{R,X}$ for all $R\in\Delta^*_{Q,X}$. The first equality in \eqref{e:main-computation} may be interpreted as equitably distributing the mass of a bad cube $R$ to the cubes $Q\in\nabla^*_{R,X}\cap\mathcal{T}$ which ``see'' $R$. Let $0<\delta<1$ be given. Because the weighted sum of the conical defect over $\mathcal{T}$ converges, there exists $i_0=i_0(\delta)\geq 1$ sufficiently large such that the tail \begin{equation}\label{bad-cube-sum} \sum_{i=i_0}^\infty \sum_{\text{bad }R\in\mathcal{T}_i} \mu(R) \leq \sum_{i=i_0}^\infty \sum_{Q\in\mathcal{T}_i}\mathop\mathsf{Defect}\nolimits(\mu,Q,X)\,\mu(Q)< \delta\,\mu(\mathop\mathsf{Leaves}\nolimits(\mathcal{T})).\end{equation} Let $Q^\delta_1,\dots,Q^\delta_k$ be an enumeration of the cubes in $\mathcal{T}_{i_0}$ such that each cube $Q^\delta_j$ is not bad. Then let $\mathcal{U}^\delta_1,\dots,\mathcal{U}^\delta_k$ denote the maximal subtrees of $\mathcal{T}$ with $\mathop\mathsf{Top}\nolimits(\mathcal{U}^\delta_j)=Q^\delta_j$ that contain no bad cubes. By \eqref{bad-cube-sum}, the trees exist, and the set $A^{\delta}=\bigcup_{j=1}^k \mathop\mathsf{Leaves}\nolimits(\mathcal{U}^\delta_j)$ satisfies $$\mu(A^{\delta})\geq (1-\delta) \mu(\mathop\mathsf{Leaves}\nolimits(\mathcal{T})).$$ Moreover, by $k$ applications of Lemma \ref{basic-lemma} (each $\mathcal{U}^{\delta}_j$ contains no bad cubes), $A^{\delta}$ is contained in the union of $k=k(\delta)$ Lipschitz graphs over $V$ of Lipschitz constant at most $\alpha$. 
To complete the proof, repeat the construction in the previous paragraph over any countable choice of parameters $\delta=\delta_j$ with $\lim_{j\rightarrow\infty}\delta_j=0$. \end{proof} Let $\mathcal{T}$ be a tree of dyadic cubes, let $b:\mathcal{T}\rightarrow[0,\infty)$ be any function, and let $\mu$ be a Radon measure on $\mathbb{R}^n$. Following \cite[\S5]{BS3}, we define the \emph{$\mu$-normalized sum function} $$S_{\mathcal{T},b}(\mu,x)=\sum_{Q\in\mathcal{T}} b(Q)\frac{\chi_Q(x)}{\mu(Q)}\quad\text{for all }x\in\mathbb{R}^n,$$ with the convention that $0/0=0$ and $1/0=\infty$. For example, for every dyadic cube $Q_0$ of side length 1, the conical Dini function $G_{\mu,X}(x)=S_{\mathcal{T},b}(\mu,x)$ for all $x\in Q_0$, where $\mathcal{T}$ is the tree of dyadic cubes contained in $Q_0$ and $b(Q)=\mathop\mathsf{Defect}\nolimits(\mu,Q,X)\,\mu(Q)$. \begin{lemma}[Localization Lemma, {\cite[Lemma 5.6]{BS3}}] \label{localization} Let $\mathcal{T}$ be a tree of dyadic cubes, let $b:\mathcal{T}\rightarrow[0,\infty)$, and let $\mu$ be a Radon measure on $\mathbb{R}^n$. For all $N<\infty$ and $\varepsilon>0$, there exists a partition of $\mathcal{T}$ into a set $\mathcal{G}$ of \emph{good cubes} and a set $\mathcal{B}$ of \emph{bad cubes} with the following properties. \begin{enumerate} \item Either $\mathcal{G}=\emptyset$ or $\mathcal{G}$ is a tree of dyadic cubes with $\mathop\mathsf{Top}\nolimits(\mathcal{G})=\mathop\mathsf{Top}\nolimits(\mathcal{T})$. \item Every child of a bad cube is a bad cube: if $P,Q\in\mathcal{T}$, $P\in\mathcal{B}$, and $Q\subseteq P$, then $Q\in\mathcal{B}$. \item The set $A=\{x\in\mathop\mathsf{Top}\nolimits(\mathcal{T}):S_{\mathcal{T},b}(\mu,x)\leq N\}$ is Borel and $$\mu(A\cap\mathop\mathsf{Leaves}\nolimits(\mathcal{G}))\geq (1-\varepsilon\mu(\mathop\mathsf{Top}\nolimits(\mathcal{T})))\,\mu(A).$$ \item The sum of $b$ over $\mathcal{G}$ is finite: $\sum_{Q\in\mathcal{G}} b(Q)<N/\varepsilon$. \end{enumerate} \end{lemma} Countably many applications of Proposition \ref{build-graphs} and Lemma \ref{localization} yield the following sufficient condition, in terms of the conical Dini function, for Lipschitz graph rectifiability of a measure with prescribed direction $V$ and Lipschitz constant $\alpha$. As the argument is similar to the proof of \cite[Theorem 5.1]{BS3}, we omit the details. \begin{theorem}\label{t:suff} Let $\mu$ be a Radon measure on $\mathbb{R}^n$ and let $X=X(V,\alpha)$ be a bad cone for some $V\in G(n,m)$ and $\alpha\in(0,\infty)$. Then $\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\{x\in\mathbb{R}^n:G_{\mu,X}(x)<\infty\}$ is carried by graphs of Lipschitz functions $f:V\rightarrow V^\perp$ of Lipschitz constant at most $\alpha$. \end{theorem} \section{Necessary Conditions}\label{sec:necessary} Recall that $\mathop\mathrm{gap}\nolimits(S,T)=\inf\{|s-t|:s\in S, t\in T\}$ for all nonempty sets $S,T\subseteq\mathbb{R}^n$. We define the quantity $\mathop\mathrm{excess}\nolimits(S,T)=\sup_{s\in S}\inf_{t\in T}|s-t|\in[0,\infty]$ for all nonempty sets $S,T\subseteq\mathbb{R}^n$. By convention, we also set $\mathop\mathrm{excess}\nolimits(\emptyset,S)=0$, but leave $\mathop\mathrm{excess}\nolimits(S,\emptyset)$ undefined. The Hausdorff distance between nonempty sets $S$ and $T$ is defined to be the maximum of $\mathop\mathrm{excess}\nolimits(S,T)$ and $\mathop\mathrm{excess}\nolimits(T,S)$.
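For instance, if $S=\{0\}$ and $T=[1,2]$ in $\mathbb{R}$, then $$\mathop\mathrm{gap}\nolimits(S,T)=1,\qquad \mathop\mathrm{excess}\nolimits(S,T)=1,\qquad \mathop\mathrm{excess}\nolimits(T,S)=2,$$ so the Hausdorff distance between $S$ and $T$ is 2; in particular, $\mathop\mathrm{excess}\nolimits$ is not symmetric.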
To establish necessary conditions for Lipschitz graph rectifiability in terms of the conical Dini functions, we follow the strategy used in \cite[\S3]{BS1} and \cite[\S4]{BS3} to prove necessary conditions for a Radon measure to be carried by rectifiable curves. The argument must be modified to incorporate the geometry of Lipschitz graphs. \begin{proposition}\label{integral-bound} Let $\mu$ be a Radon measure on $\mathbb{R}^n$ and let $X=X(V,\alpha)$ be a bad cone for some $V\in G(n,m)$ and $\alpha\in(0,\infty)$. Suppose that $\Gamma=\mathop\mathsf{Graph}\nolimits(f)$ for some Lipschitz function $f:V\rightarrow V^\perp$ with Lipschitz constant at most $\alpha/2$. There exists a constant $C=C(n,\alpha)>1$ such that for every $x_0\in\Gamma$ and $r_0>0$, \begin{equation}\label{e:integral} \int_{\Gamma\cap B(x_0,r_0)} G_{\mu,X}(x)\,d\mu(x) \lesssim_{n,\alpha} \mu(B(x_0,r_0+C)\setminus \Gamma)<\infty.\end{equation} In particular, $G_{\mu,X}(x)<\infty$ at $\mu$-a.e.~$x\in\Gamma$.\end{proposition} \begin{proof} Let $\mu$, $V$, $\alpha$, $f$, $\Gamma$, $x_0$, and $r_0$ be fixed as in the statement of the proposition. Abbreviate $\Gamma_0=\Gamma\cap B(x_0,r_0)$. By Tonelli's theorem, \begin{align*}\int_{\Gamma_0} G_{\mu,X}(x)\,d\mu(x) &= \sum_{\mathop\mathrm{side}\nolimits Q\leq 1} \mathop\mathsf{Defect}\nolimits(\mu,Q,X)\int_{\Gamma_0}\chi_Q(x)\,d\mu(x) \\ &=\sum_{\mathop\mathrm{side}\nolimits Q\leq 1} \mathop\mathsf{Defect}\nolimits(\mu,Q,X)\,\mu(\Gamma_0\cap Q)\leq \sum_{\stackrel{\mathop\mathrm{side}\nolimits Q\leq 1}{\mu(\Gamma_0\cap Q)>0}} \mathop\mathsf{Defect}\nolimits(\mu,Q,X)\,\mu(Q).\end{align*} For any dyadic cube $Q$ such that $\mathop\mathrm{side}\nolimits Q\leq 1$ and $\mu(\Gamma_0\cap Q)>0$, $$\mathop\mathsf{Defect}\nolimits(\mu,Q,X)\,\mu(Q)=\sum_{R\in\Delta^*_{Q,X}}\frac{\mu(R)}{\mu(Q)}\mu(Q)=\mu\left(\bigcup \Delta^*_{Q,X}\right),$$ where $\bigcup\Delta^*_{Q,X}$ denotes the union of all cubes in $\Delta^*_{Q,X}$. Thus, $$\int_{\Gamma_0} G_{\mu,X}(x)\,d\mu(x) \leq \sum_{\stackrel{\mathop\mathrm{side}\nolimits Q\leq 1}{\mu(\Gamma_0\cap Q)>0}} \mu\left(\bigcup \Delta^*_{Q,X}\right).$$ We now aim to prove that the \emph{non-tangential regions} $T_{Q,X}=\bigcup \Delta^*_{Q,X}$ associated to dyadic cubes with $\mathop\mathrm{side}\nolimits Q\leq 1$ and $\mu(\Gamma_0\cap Q)>0$ are contained in $\mathbb{R}^n\setminus\Gamma$ and have bounded overlap. This requires that we use the geometry of the Lipschitz graph $\Gamma$. Let $Q$ be a dyadic cube of side length at most 1 such that $\mu(\Gamma_0\cap Q)>0$. Pick $a\in \Gamma_0\cap Q$. Because $\Gamma$ is the graph of a Lipschitz function over $V$ with Lipschitz constant at most $\alpha/2$, Lemma \ref{geometric-lemma} tells us that the graph $\Gamma$ is contained in $X_a(V,\alpha/2)^c$. By the definition of $r_{Q,X}$ and the proof of Lemma \ref{bounded-geometry}, $\mathop\mathrm{gap}\nolimits(R,X_a(V,\alpha/2)^c)\geq \mathop\mathrm{diam}\nolimits Q$ for all $R\in\Delta^*_{Q,X}$. Hence \begin{equation}\label{e:gap-below} \mathop\mathrm{gap}\nolimits(T_{Q,X},\Gamma)\geq \mathop\mathrm{gap}\nolimits(T_{Q,X},X_a(V,\alpha/2)^c)\geq \mathop\mathrm{diam}\nolimits Q.\end{equation} If $R\in\Delta^*_{Q,X}$, then there exists $z\in R$ such that $|z-x_Q|\leq r_{Q,X}=81\max(\alpha,1/\alpha)\mathop\mathrm{diam}\nolimits Q$. For an arbitrary point $y\in R$, $|y-a|\leq |y-z|+|z-x_Q|+|x_Q-a|$.
Thus, \begin{equation}\label{e:excess-above} \mathop\mathrm{excess}\nolimits(T_{Q,X},\Gamma) \leq \mathop\mathrm{excess}\nolimits(T_{Q,X},\{a\}) \leq 83\max(\alpha,1/\alpha)\mathop\mathrm{diam}\nolimits Q.\end{equation} It follows that $T_{Q,X}\subseteq B(x_0,r_0+83\sqrt{n}\max(\alpha,1/\alpha))\setminus \Gamma$. Furthermore, suppose that $Q'$ is a dyadic cube of side length $2^{-N}\mathop\mathrm{side}\nolimits Q$ such that $\mu(\Gamma_0\cap Q')>0$. By \eqref{e:gap-below} and \eqref{e:excess-above}, $T_{Q,X}$ and $T_{Q',X}$ are disjoint if $2^{-N}83\max(\alpha,1/\alpha)<1$. Thus, if $T_{Q,X}\cap T_{Q',X}\neq\emptyset$, where $T_{Q,X}$ and $T_{Q',X}$ are non-tangential regions associated to dyadic cubes $Q$ and $Q'$ of side lengths $2^{-\lambda}$ and $2^{-\lambda'}$ at most 1 with $\mu(\Gamma_0\cap Q)>0$ and $\mu(\Gamma_0\cap Q')>0$, then $|\lambda-\lambda'|\leq C(n,\alpha)$. Another consequence of \eqref{e:excess-above} is that $\mathop\mathrm{diam}\nolimits T_{Q,X} \leq 166\max(\alpha,1/\alpha)\mathop\mathrm{diam}\nolimits Q$. It follows that we have bounded overlap of the non-tangential regions: $T_{Q,X}$ intersects $T_{Q',X}$ for at most $C(n,\alpha)$ other cubes $Q'$ with $\mathop\mathrm{side}\nolimits Q'\leq 1$ and $\mu(\Gamma_0\cap Q')>0$. Therefore, $$\int_{\Gamma_0} G_{\mu,X}(x)\,d\mu(x) \leq \sum_{\stackrel{\mathop\mathrm{side}\nolimits Q\leq 1}{\mu(\Gamma_0\cap Q)>0}} \mu\left(T_{Q,X}\right)\lesssim_{n,\alpha} \mu(B(x_0,r_0+83\sqrt{n}\max(\alpha,1/\alpha))\setminus \Gamma).$$ The last displayed quantity is finite, because $\mu$ is a Radon measure. This verifies that \eqref{e:integral} holds for all $x_0\in \Gamma$ and $r_0>0$. Hence the conical Dini function $G_{\mu,X}(x)<\infty$ at $\mu$-a.e.~$x\in \Gamma_0=\Gamma\cap B(x_0,r_0)$. Because $r_0>0$ was arbitrary and $\Gamma \subseteq \bigcup_{k=1}^\infty B(x_0,k)$, we conclude that $G_{\mu,X}(x)<\infty$ at $\mu$-a.e.~$x\in\Gamma$. \end{proof} \begin{theorem}Let $\mu$ be a Radon measure on $\mathbb{R}^n$. Suppose $V\in G(n,m)$ and $\alpha\in(0,\infty)$. If $\mu$ is carried by graphs of Lipschitz functions $f:V\rightarrow V^\perp$ of Lipschitz constant at most $\alpha$, then for every $\beta\geq 2\alpha$, $G_{\mu,X(V,\beta)}(x)<\infty$ at $\mu$-a.e.~$x\in\mathbb{R}^n$.\end{theorem} \begin{proof} The hypothesis asserts that there exist Lipschitz functions $f_1,f_2,\dots:V\rightarrow V^\perp$ with Lipschitz constant at most $\alpha$ such that $\mu(\mathbb{R}^n\setminus \bigcup_1^\infty \Gamma_i)=0$, where each $\Gamma_i=\mathop\mathsf{Graph}\nolimits(f_i)$. Let $\beta\geq 2\alpha$, so that each function $f_i$ has Lipschitz constant at most $\beta/2$. By Proposition \ref{integral-bound}, for each $i\geq 1$, there exists a set $N_i\subseteq\Gamma_i$ such that $\mu(N_i)=0$ and $G_{\mu,X(V,\beta)}(x)<\infty$ at every $x\in\Gamma_i\setminus N_i$. Set $E=\bigcup_1^\infty (\Gamma_i\setminus N_i)$. Then $G_{\mu,X(V,\beta)}(x)<\infty$ for every $x\in E$ and \begin{equation*}\mu(\mathbb{R}^n\setminus E) \leq \mu\left(\mathbb{R}^n\setminus \bigcup_1^\infty\Gamma_i\right)+\sum_1^\infty \mu(N_i)=0. \qedhere\end{equation*} \end{proof} \section{Proof of the Main Theorem}\label{sec:main} Let $1\leq m\leq n-1$ and let $\mu$ be a Radon measure on $\mathbb{R}^n$. Existence and uniqueness of the following decomposition is standard.
\begin{lemma}\label{decomposition-lemma} There exists a unique decomposition $\mu=\mu_G+\mu_G^\perp$, where $\mu_G$ is a Radon measure that is carried by $m$-dimensional Lipschitz graphs and $\mu_G^\perp$ is a Radon measure that is singular to $m$-dimensional Lipschitz graphs.\end{lemma} \begin{proof} The decomposition follows from a simple modification of the usual proof of the Lebesgue decomposition theorem. For each integer $r\geq 2$, use the approximation property of the supremum to choose a set $\Gamma_r$, which is a finite union of $m$-dimensional Lipschitz graphs, such that $$\mu(\Gamma_r)\geq (1-1/r)\sup_{\Gamma}\mu(\Gamma\cap B(0,r)),$$ where the supremum ranges over all sets $\Gamma$ that are finite unions of $m$-dimensional Lipschitz graphs in $\mathbb{R}^n$; the supremum is finite, because $\mu(\Gamma\cap B(0,r))\leq\mu(B(0,r))<\infty$. Then $\mu_G=\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\bigcup_{r=2}^\infty \Gamma_r$ and $\mu_G^\perp=\mu-\mu_G$. Uniqueness of the decomposition can be proved by contradiction. For full details, see e.g.~the appendix of \cite{BV}.\end{proof} The content of Theorem \ref{t:main} beyond Lemma \ref{decomposition-lemma}, and our task in the remainder of the proof, is to identify the component measures $\mu_G$ and $\mu^\perp_G$ using the conical Dini functions. \begin{lemma}\label{measurability} For every bad cone $X$, the conical Dini function $G_{\mu,X}$ is Borel measurable.\end{lemma} \begin{proof} By definition, each conical Dini function $G_{\mu,X}$ is a countable linear combination of characteristic functions of Borel sets.\end{proof} \begin{lemma}\label{countable-decomposition} There exists a countable family $\mathcal{X}(n,m)$ of bad cones (independent of $\mu$) such that for every $x\in\mathbb{R}^n$, $G_{\mu,X}(x)<\infty$ for some bad cone $X$ if and only if $G_{\mu,X'}(x)<\infty$ for some $X'\in\mathcal{X}(n,m)$.\end{lemma} \begin{proof} The value of a conical Dini function $G_{\mu,X}(x)$ is determined by the values of the measure on the dyadic cubes $Q$ and the cubes $R\in\Delta^*_{Q,X}$, where $Q$ ranges over all dyadic cubes of side length at most 1 that contain $x$. The cubes belonging to $\Delta^*_{Q,X}$ for any particular $Q$ are completely determined by $x_Q$, $\mathop\mathrm{side}\nolimits Q$, and the arrangement of cubes in $\Delta^*_{Q_0,X}$, where $Q_0=[0,1)\times\cdots\times[0,1).$ By Lemma \ref{bounded-geometry}, the cubes in $\Delta^*_{Q_0,X}$ are dyadic cubes of side length 1 contained in $B(x_{Q_0},C(n,\alpha))$, where for each integer $N\geq 1$, the constant $C(n,\alpha)$ is uniformly bounded for all $\alpha\in(1/N,N)$. Therefore, there are only countably many possible configurations of $\Delta^*_{Q_0,X}$. Define $\mathcal{X}(n,m)$ by including exactly one bad cone $X$ for each possible configuration of $\Delta^*_{Q_0,X}$.\end{proof} We are ready to complete the proof of Theorem \ref{t:main}.
Let $\mu_1$ and $\mu_2$ be the measures defined by \begin{align*}\mu_1&=\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\{x\in\mathbb{R}^n:G_{\mu,X}(x)<\infty \text{ for some bad cone }X\},\\ \mu_2 &=\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\{x\in\mathbb{R}^n:G_{\mu,X}(x)=\infty \text{ for every bad cone }X\}.\end{align*} By Lemma \ref{countable-decomposition}, we may alternatively express \begin{align*} \mu_1 &=\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\{x\in\mathbb{R}^n:G_{\mu,X}(x)<\infty\text{ for some }X\in\mathcal{X}(n,m)\},\\ \mu_2 &=\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\{x\in\mathbb{R}^n:G_{\mu,X}(x)=\infty\text{ for every }X\in\mathcal{X}(n,m)\}.\end{align*} In view of Lemma \ref{measurability}, we conclude that $\mu_1$ and $\mu_2$ are restrictions of the Radon measure $\mu$ to Borel sets. Hence $\mu_1$ and $\mu_2$ are Radon. On the one hand, for each bad cone $X$, $\mu_{G,X}=\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\{x\in\mathbb{R}^n:G_{\mu,X}(x)<\infty\}$ is carried by $m$-dimensional Lipschitz graphs by Theorem \ref{t:suff}. Because $\mathcal{X}(n,m)$ is countable, $\mu_+=\sum_{X\in\mathcal{X}(n,m)}\mu_{G,X}$ is also carried by Lipschitz graphs. By Lemma \ref{countable-decomposition}, $\mu_1\leq \mu_+$. Thus, $\mu_1$ is carried by $m$-dimensional Lipschitz graphs, because the dominating measure $\mu_+$ is carried by Lipschitz graphs. On the other hand, let $\Gamma$ be an arbitrary $m$-dimensional Lipschitz graph, say $\Gamma=\mathop\mathsf{Graph}\nolimits(f)$ for some Lipschitz function $f:V\rightarrow V^\perp$ with Lipschitz constant at most $\alpha$. By Proposition \ref{integral-bound}, $G_{\mu,X(V,2\alpha)}(x)<\infty$ at $\mu$-a.e.~$x\in\Gamma$. Because $\mu_2\leq \mu$, we have $G_{\mu,X(V,2\alpha)}(x)<\infty$ at $\mu_2$-a.e.~$x\in\Gamma$, as well. By definition, the measure $\mu_2$ vanishes on $\{x\in\mathbb{R}^n:G_{\mu,X(V,2\alpha)}(x)<\infty\}$. Therefore, $\mu_2(\Gamma)=0$. Since $\Gamma$ was arbitrary, we conclude that $\mu_2$ is singular to $m$-dimensional Lipschitz graphs. It is immediate from the definition of $\mu_1$ and $\mu_2$ that $\mu=\mu_1+\mu_2$. Since $\mu_1$ and $\mu_2$ are Radon measures, $\mu_1$ is carried by $m$-dimensional Lipschitz graphs, and $\mu_2$ is singular to $m$-dimensional Lipschitz graphs, we know that $\mu_1=\mu_G$ and $\mu_2=\mu_G^\perp$ by uniqueness of the decomposition in Lemma \ref{decomposition-lemma}. This completes the proof of Theorem \ref{t:main}. \section{Variations}\label{sec:variation} We conclude with some remarks on flexibility in the definition of the conical defect and variations on the main theorem. The restriction to half-open dyadic cubes is hidden in the proof of the localization lemma for the $\mu$-normalized sum function (see Lemma \ref{localization}). The lemma remains valid for any system of sets $\mathscr{A}$ with the following property: if $\mathcal{T}$ is a tree of sets in $\mathscr{A}$ and $\mathcal{B}$ is a subset of $\mathcal{T}$ such that $A,B\in\mathcal{B}$ and $A\subseteq B$ imply $A=B$, then $\mathcal{B}$ has bounded overlap, with constants independent of $\mathcal{B}$ and $\mathcal{T}$. Of course, half-open dyadic cubes, half-open triadic cubes, etc.~enjoy this property with bounded overlap 1. One could probably design a version of the conical defect and main theorem with Euclidean balls by using the Besicovitch covering theorem instead of the localization lemma.
The main theorem remains valid if one replaces the conical defect by the larger quantity $$\sum_{R\in \Delta^*_{Q,X}} \frac{\mu(S_R)}{\mu(Q)},$$ where $S_R$ is any Borel set containing $R$ with diameter at most $C\mathop\mathrm{diam}\nolimits R$ for some constant $1\leq C<\infty$ independent of $R$. This change requires increasing the size of the radius $r_{Q,X}$ depending on the constant $C$ to ensure that $\mathop\mathrm{gap}\nolimits(S_R,X(V,\gamma)^c)\gtrsim \mathop\mathrm{diam}\nolimits Q$ for some $\gamma<\alpha$. Because the sets $S_R$ may overlap, this is a slight strengthening of the necessary condition for Lipschitz graph rectifiability. If one prefers a construction where the radius $r_{Q,X}$ of the conical annulus is independent of the cone opening $\alpha$, this can be achieved at the cost of taking the cubes $R\in\Delta^*_{Q,X}$ to have side length smaller than that of $Q$. In particular, there exists $\tilde r_{Q,X}$ depending only on $n$ and a \emph{jump parameter} $J\in\mathbb{N}$ depending on $n$ and $\alpha$ with the following property. If $R$ is a dyadic cube of side length $2^{-J}\mathop\mathrm{side}\nolimits Q$ that intersects $X_Q\cap B(x_Q,\tilde r_{Q,X})\setminus B(x_Q,\tilde r_{Q,X}/3)$, then $R\cap B(x_Q,\tilde r_{Q,X}/4)=\emptyset$ and $\mathop\mathrm{gap}\nolimits(R,X(V,\alpha/2)^c)\geq \mathop\mathrm{diam}\nolimits R$. The proofs of the sufficient and necessary conditions with this modification are essentially the same as above, although there is more bookkeeping involving $J$. Every $m$-dimensional plane $x_0+V$ is a Lipschitz graph over $V$ with constant at most $\alpha$ for every $\alpha>0$. It follows that $\mu\hbox{ {\vrule height .22cm}{\leaders\hrule\hskip.2cm} }\{x\in \mathbb{R}^n: \forall_{V\in G(n,m)}\exists_{\alpha>0}\, G_{\mu,X(V,\alpha)}(x)=\infty\}$ is singular to affine $m$-dimensional planes. However, this cannot be directly used to characterize Radon measures that are carried by or singular to planes. We leave finding such a characterization as an open problem for future research.
{ "timestamp": "2020-12-17T02:24:06", "yymm": "2007", "arxiv_id": "2007.08503", "language": "en", "url": "https://arxiv.org/abs/2007.08503" }
\usepackage{breqn} \usepackage{stmaryrd} \usepackage{mathtools} \begin{document} \twocolumn[ \aistatstitle{Active Learning under Label Shift} \aistatsauthor{ Eric Zhao \And Anqi Liu \And Anima Anandkumar \And Yisong Yue } \aistatsaddress{ California Institute of Technology } ] \begin{abstract} \input{src/abstract} \end{abstract} \section{Introduction} \input{src/intro} \section{Related Works} \input{src/related} \section{Preliminaries} \input{src/preliminaries} \section{Medial Distribution} \input{src/medial} \section{Streaming MALLS} \input{src/alls} \section{Batched MALLS{}} \input{src/batched_alls} \section{Conclusion} \input{src/conclusion} \subsubsection*{Acknowledgements} Anqi Liu is supported by the PIMCO Postdoctoral Fellowship. Prof. Anandkumar is supported by Bren endowed Chair, faculty awards from Microsoft, Google, and Adobe, Beyond Limits, and LwLL grants. This work is also supported by funding from Raytheon and NASA TRISH. \subsection{NABirds Regional Species Experiment} We conduct an additional experiment on the NABirds dataset using the grandchild level of the class label hierarchy, which results in 228 classes in total. These classes correspond to individual species and present a significantly larger output space than considered in Figure 6. For realism, we retain the original training distribution of the dataset as the source distribution, sampling i.i.d. from the original split. To simulate a scenario where a bird species classifier is adapted to a new region with new bird frequencies, we induce an imbalance in the target distribution to render certain birds more common than others. Table \ref{tab:nabirds} reports the average accuracy of our framework at different label budgets; we observe consistent gains in accuracy across all of them. \begin{table}[h] \centering \begin{center} \begin{tabular}{||c c c c ||} \hline Strategy & Acc (854 labels) & Acc (1708 labels) & Acc (3416 labels) \\ [0.5ex] \hline\hline MALLS (MC-D) & \textbf{0.51} & \textbf{0.53} & \textbf{0.56} \\ \hline Vanilla (MC-D) & 0.46 & 0.48 & 0.50 \\ \hline Random & 0.38 & 0.40 & 0.42 \\ \hline \end{tabular} \end{center} \caption{Average accuracy on the NABirds (species) experiment} \label{tab:nabirds} \end{table} \subsection{Change in distribution} To further analyze the learning behavior of MALLS, we examine the label distribution of datapoints selected by the active learner. In Figure \ref{fig:dists}, MC-Dropout, Max-Margin and Max-Entropy strategies are evaluated on CIFAR100 under \textit{canonical label shift}. Comparing the uniformity bias and the rate of convergence to the target distribution, we observe that MALLS exhibits a unique sampling bias which cannot be explained away as simply a class-balancing bias. This indicates that MALLS may be successful in recovering information from distorted uncertainty estimates.
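The distances plotted in the figure below can be recomputed directly from the acquired labels. The following is a minimal Python sketch, not our experiment code: the label stream and the target distribution are synthetic stand-ins, with the Dirichlet parameter mirroring the $\alpha=0.1$ shift used in the figure.

\begin{verbatim}
import numpy as np

def label_distribution(labels, num_classes):
    # Empirical class distribution of the labeled pool.
    counts = np.bincount(labels, minlength=num_classes)
    return counts / counts.sum()

def l2_to_reference(labels, num_classes, reference):
    # L2 distance between the labeled class distribution and a reference.
    return np.linalg.norm(label_distribution(labels, num_classes) - reference)

# Synthetic stand-ins: 400 acquired labels over 100 classes.
rng = np.random.default_rng(0)
num_classes = 100
target = rng.dirichlet(0.1 * np.ones(num_classes))  # Dirichlet(0.1) shift
labels = rng.choice(num_classes, size=400, p=target)

uniform = np.full(num_classes, 1.0 / num_classes)
print(l2_to_reference(labels, num_classes, uniform))  # distance to uniform
print(l2_to_reference(labels, num_classes, target))   # distance to target
\end{verbatim}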
\begin{figure}[bthp] \centering \setlength{\tabcolsep}{-0.2pt} \begin{tabular}{ccc} \includegraphics[height=4cm]{figs/BasicBALD_cifar100_dirichlet_mix_warm_0,4_alpha_0,1_400Uniform_source_shift.jpg}& \includegraphics[height=4cm]{figs/BasicMaxEnt_cifar100_dirichlet_mix_warm_0,4_alpha_0,1_400Uniform_source_shift.jpg}& \includegraphics[height=4cm]{figs/BasicMargin_cifar100_dirichlet_mix_warm_0,4_alpha_0,1_400Uniform_source_shift.jpg} \\ \includegraphics[height=4cm]{figs/BasicBALD_cifar100_dirichlet_mix_warm_0,4_alpha_0,1_400Source_shift.jpg}& \includegraphics[height=4cm]{figs/BasicMaxEnt_cifar100_dirichlet_mix_warm_0,4_alpha_0,1_400Source_shift.jpg}& \includegraphics[height=4cm]{figs/BasicMargin_cifar100_dirichlet_mix_warm_0,4_alpha_0,1_400Source_shift.jpg} \end{tabular} \caption{ Average L2 distance between labeled class distribution and uniform/target distribution with 95\% confidence intervals on 10 runs of experiments on CIFAR100 in the \textit{canonical label shift} setting. MALLS (denoted by ALLS) converges to the target label distribution slower than vanilla active learning but with a similar uniform sampling bias. This suggests MALLS leverages a sampling bias different from that of vanilla active learning or naive class-balanced sampling. } \label{fig:dists} \end{figure} \subsection{Proof of Theorem 1} We formalize the violation of label shift assumptions resulting from subsampling as label shift drift \cite{azizzadenesheli_regularized_2019}. \begin{lemma} The drift from label shift is bounded by: \begin{align} \abs{ 1 - \expc{X, Y \sim P_{\scriptscriptstyle\text{test}}} \brcksq{ \frac{P_{\scriptscriptstyle\text{med}}(x | y)}{P_{\scriptscriptstyle\text{test}}(x | y)} } } \leq \norm{r_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty} \text{err}(h_0, r_{\scriptscriptstyle{s \shortrightarrow m}}) \end{align} \label{lemma:drift} \end{lemma} \begin{proof} The drift measures the deviation of the expected importance weight from one: \begin{align} \abs{1 - \expc{X, Y \sim P_{\scriptscriptstyle\text{test}}} \brcksq{ \frac{P_{\scriptscriptstyle\text{med}}(x| y)}{P_{\scriptscriptstyle\text{test}}(x |y)} }} & = \abs{1 - \int_{X, Y} P_{\scriptscriptstyle\text{med}}(x | y)P_{\scriptscriptstyle\text{test}}(y)} \nonumber \\ & = \abs{1 - \int_{X, Y} P_{\scriptscriptstyle\text{med}}(x, y)\frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} } \nonumber \\ & = \abs{1 - \expc{X, Y \sim P_{\scriptscriptstyle\text{med}}} \brcksq{\frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} }} \end{align} Drift can therefore be estimated in practice by randomly labeling subsampled points and measuring the average importance weight value.
We can further expand the value of drift as: \begin{align} \abs{1 - \expc{X, Y \sim P_{\scriptscriptstyle\text{med}}} \brcksq{\frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} }} & = \abs{1 - \int_{X, Y} C P_{\scriptscriptstyle\text{src}}(x, y) P_{\scriptscriptstyle\text{ss}}(h_0(x))\frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} } \nonumber \\ & = \abs{1 - C \expc{X, Y \sim P_{\scriptscriptstyle\text{src}}} \brcksq{ P_{\scriptscriptstyle\text{ss}}(h_0(x))\frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} }} \nonumber \\ & \leq \abs{1 - C \expc{X, Y \sim P_{\scriptscriptstyle\text{src}}} \brcksq{ P_{\scriptscriptstyle\text{ss}}(y)\frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} }} + \abs{C \expc{X, Y \sim P_{\scriptscriptstyle\text{src}}} \brcksq{ \brck{P_{\scriptscriptstyle\text{ss}}(h_0(x)) - P_{\scriptscriptstyle\text{ss}}(y)} \frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} }} \nonumber \\ & = \abs{1 - \sum_{Y} \brcksq{ P_{\scriptscriptstyle\text{med}}^*(y)\frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} }} + \abs{C \expc{X, Y \sim P_{\scriptscriptstyle\text{src}}} \brcksq{ \brck{P_{\scriptscriptstyle\text{ss}}(h_0(x)) - P_{\scriptscriptstyle\text{ss}}(y)} \frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} }} \end{align} where $C$ is the constant such that $P_{\scriptscriptstyle\text{ss}} = \frac{1}{C} \frac{P_{\scriptscriptstyle\text{med}}}{P_{\scriptscriptstyle\text{src}}}$ and $P_{\scriptscriptstyle\text{med}}^*$ denotes the target medial distribution. The second term corresponds to a weighted L1 error on $P_{\scriptscriptstyle\text{src}}$. \begin{align} \abs{C \expc{X, Y \sim P_{\scriptscriptstyle\text{src}}} \brcksq{ \brck{P_{\scriptscriptstyle\text{ss}}(h_0(x)) - P_{\scriptscriptstyle\text{ss}}(y)} \frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} }} & \leq \norm{r_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty} \expc{X, Y \sim P_{\scriptscriptstyle\text{src}}} \brcksq{ \abs{ \mathbbm{1}[h_0(x) \neq y]} \frac{P_{\scriptscriptstyle\text{test}}(y)}{P_{\scriptscriptstyle\text{med}}(y)} } \nonumber \\ & = \norm{r_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty} \text{err}(h_0, r_{\scriptscriptstyle{s \shortrightarrow m}}) \end{align} where $\text{err}(h_0, r)$ denotes the importance weighted 0/1-error of a blackbox predictor $h_0$ on $P_{\scriptscriptstyle\text{src}}$. As the first term is thus dominated, the drift is bounded by the importance weighted error of the blackbox hypothesis. \end{proof} Plugging Lemma \ref{lemma:drift} into Theorem 2 in \cite{azizzadenesheli_regularized_2019} yields a generalization of Theorem 1 where the number of unlabeled datapoints from the test distribution is $n'$.
\begin{theorem} With probability $1 - \delta$, for all $n \geq 1$: \begin{align} | \Delta | & \leq \mathcal{O} \left( \frac{2 }{\sigma_{\min}} \left( \norm{\theta_{\scriptscriptstyle{m \shortrightarrow t}}}_2 \sqrt{ \frac{\log \brck{\frac{n k}{\delta}}}{n} } + \sqrt{ \frac{\log \brck{\frac{ n}{\delta}}}{n} } + \sqrt{ \frac{\log \brck{\frac{ n}{\delta}}}{n'} } + \norm{\theta_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty} \text{err}(h_0, r_{\scriptscriptstyle{m \shortrightarrow t}}) \right) \right) \end{align} where $\sigma_{\min}$ denotes the smallest singular value of the confusion matrix and $\text{err}(h_0, r)$ denotes the importance weighted $0/1$-error of a blackbox predictor $h_0$ on $P_{\scriptscriptstyle\text{src}}$. \end{theorem} Theorem 1 follows by setting $n' \rightarrow \infty$. \subsection{Theorem 2 and Theorem 3 Proofs} We will prove Theorem 2 and Theorem 3 for the general case where the number of unlabeled datapoints from the test distribution is $n'$. For the case depicted in the main paper, set $n' \rightarrow \infty$. First, we review the IWAL-CAL active learning algorithm \cite{beygelzimer_agnostic_2010}. Let $\text{err}_{S_i}(h) \in [0, 1]$ denote the error of hypothesis $h \in H$ as estimated on $S_i$, while $\text{err}_{P_{\scriptscriptstyle\text{test}}}(h)$ denotes the expected error of $h$ on $P_{\scriptscriptstyle\text{test}}$. We next define, \begin{align} h^* & := \text{argmin}_{h \in H} \text{err}_{P_{\scriptscriptstyle\text{test}}}(h), \nonumber \\ h_k & := \text{argmin}_{h \in H} \text{err}_{S_{k-1}}(h), \nonumber \\ h'_k & := \text{argmin} \{ \text{err}_{S_{k-1}}(h) \mid h \in H \wedge h(\textbf{D}_{\text{unlab}}^{(k)}) \neq h_k(\textbf{D}_{\text{unlab}}^{(k)})\} \nonumber \\ G_k & := \text{err}_{S_{k-1}}(h'_k) - \text{err}_{S_{k-1}}(h_k) \nonumber \end{align} IWAL-CAL employs a sampling probability $P_t = \min \{1, s\}$ for the $s \in (0, 1)$ which solves the equation, \begin{align} G_t = \left( \frac{c_1}{\sqrt{s}} - c_1 + 1 \right) \sqrt{\frac{C_0 \log t}{t - 1}}\notag + \left(\frac{c_2}{s} - c_2 + 1 \right) \frac{C_0 \log t}{t - 1} \end{align} where $C_0$ is a constant bounded in Theorem 2 and $c_1 := 5 + 2 \sqrt{2}, c_2 := 5$. The most involved step in deriving generalization and sample complexity bounds for MALLS is bounding the deviation of empirical risk estimates. This is done through the following theorem. \begin{theorem} \label{thm:dev} Let $Z_i := (X_i, Y_i, Q_i)$ be our source data set, where $Q_i$ is the indicator function on whether $(X_i, Y_i)$ is sampled as labeled data. The following holds for all $n \geq 1$ and all $h \in \mathcal{H}$ with probability $1 - \delta$: \begin{align} \label{eq:dev} & \left | err(h, Z_{1:n}) - err(h^*, Z_{1:n}) - err(h) + err(h^*) \right| \nonumber \\ & \leq \mathcal{O} \left( (2 + \norm{\theta}_2) \sqrt{\frac{\varepsilon_n}{P_{\min, n}(h)}} + \frac{\varepsilon_n}{P_{\min, n}(h)} + \frac{ 2 d_{\infty} (P_{\scriptscriptstyle\text{test}}, P_{\scriptscriptstyle\text{src}}) \log (\frac{2 n |H|}{\delta}) }{ 3 n } + \sqrt{\frac{ 2 d_2 (P_{\scriptscriptstyle\text{test}}, P_{\scriptscriptstyle\text{src}}) \log (\frac{2 n |H|}{\delta}) }{ n }} \right. \\ & \left.
+ \norm{r_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty} \text{err}(h_0, r_{\scriptscriptstyle{s \shortrightarrow m}}) + \frac{2 }{\sigma_{\min}} \left( \norm{\theta_{\scriptscriptstyle{m \shortrightarrow t}}}_2 \sqrt{ \frac{\log \brck{\frac{n k}{\delta}}}{\lambda n} } + \sqrt{ \frac{\log \brck{\frac{n}{\delta}}}{\lambda n} } + \sqrt{ \frac{\log \brck{\frac{n}{\delta}}}{n'} } + \norm{\theta_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty} \text{err}(h_0, r_{\scriptscriptstyle{m \shortrightarrow t}}) \right) \right)\nonumber \end{align} where $\varepsilon_n := \frac{16 \log(2 ( 2 + n \log_2 n) n (n+1) |H| / \delta)}{n}$. \end{theorem} For reading convenience, we set $P_{\scriptscriptstyle\text{src}} := P_{\scriptscriptstyle\text{ulb}}$. This deviation bound will plug in to IWAL-CAL for generalization and sample complexity bounds. In the remainder of this appendix section, we detail our proof of Theorem \ref{thm:dev}. We proceed by expressing Theorem \ref{thm:dev} in a more general form with a bounded function $f: X \times Y \rightarrow [-1, 1]$ which will eventually represent $\text{err}(h) - \text{err}(h^*)$. We borrow notation for the terms $W, Q$ from \cite{beygelzimer_agnostic_2010}, where $Q_i$ is an indicator random variable indicating whether the $i$th datapoint is labeled and $W_i := \frac{Q_i \tilde{Q}_i}{P_i}\, r_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} f(x_i, y_i)$. We use the shorthand $r^{(i)}$ for the $y_i$th component of importance weight $r$. Similarly, the indicator random variable $\tilde{Q}_i$ indicates whether the $i$th data sample is retained by the subsampler. The expectation $\expc{i}[W]$ is taken over the randomness of $Q$ and $\tilde{Q}$. We also borrow \cite{azizzadenesheli_regularized_2019}'s label shift notation and define $k$ as the size of the output space (finite) and denote estimated importance weights with hats, e.g. $\hat{r}$. We also introduce a variant of $W_i$ using estimated importance weights: $\hat{W}_i := \frac{Q_i \tilde{Q}_i}{P_i}\, \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} f(x_i, y_i)$. Finally, we follow \cite{cortes_learning_2010} and use $d_\alpha(P || P')$ to denote $2^{D_\alpha(P || P')}$ where $D_\alpha(P || P') := \frac{1}{\alpha - 1} \log_2 \sum_{y} \frac{P(y)^\alpha}{P'(y)^{\alpha - 1}}$ is the R\'enyi divergence of distributions $P$ and $P'$. We seek to bound with high probability, \begin{align} \abs{\Delta} := \abs{\frac{1}{n} \left(\sum_{i=1}^{n} \hat{W}_i \right) - \mathbb{E}_{x, y \sim P_{\scriptscriptstyle\text{trg}}} [f(x, y)]} \leq |\Delta_1| + |\Delta_2| + |\Delta_3| + \abs{\Delta_4} \end{align} where, \begin{align} \Delta_1 & := \expc{x, y \sim P_{\scriptscriptstyle\text{trg}}} [f(x, y)] - \expc{x, y \sim P_{\scriptscriptstyle\text{src}}} [W_i], \nonumber \\ \Delta_2 & := \expc{x, y \sim P_{\scriptscriptstyle\text{src}}} [W_i] - \frac{1}{n} \sum_{i=1}^{n} \expc{i} \brcksq{W_i}, \nonumber \\ \Delta_3 & := \frac{1}{n} \sum_{i=1}^{n} \expc{i}\brcksq{W_i} - \expc{i}\brcksq{\hat{W}_i} \nonumber \\ \Delta_4 & := \frac{1}{n} \sum_{i=1}^{n} \expc{i} [\hat{W}_i] - \hat{W}_i \nonumber \end{align} $\Delta_1$ corresponds to the drift from label shift introduced by subsampling, $\Delta_2$ to finite-sample variance, and $\Delta_3$ to label shift estimation errors. The final $\Delta_4$ corresponds to the variance from random sampling. We bound $\Delta_4$ using a Martingale technique from \cite{zhang_data_2005} also adopted by \cite{beygelzimer_agnostic_2010}. We take Lemmas 1, 2 from \cite{zhang_data_2005} as given.
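Before continuing, we note that the IWAL-CAL sampling probability recalled above is defined only implicitly. Since the right-hand side of the threshold equation is strictly decreasing in $s$ on $(0,1]$ and blows up as $s \rightarrow 0^+$, it can be solved by bisection. The following is a minimal Python sketch (the tolerance and the example values of $G_t$, $t$, and $C_0$ are arbitrary illustrative choices, not tuned constants from an actual run):

\begin{verbatim}
import math

C1 = 5 + 2 * math.sqrt(2)  # c_1 in the IWAL-CAL threshold equation
C2 = 5                     # c_2 in the IWAL-CAL threshold equation

def rhs(s, t, c0):
    # Right-hand side of the threshold equation; strictly decreasing in s.
    # Assumes t >= 2 so that eps > 0.
    eps = c0 * math.log(t) / (t - 1)
    return (C1 / math.sqrt(s) - C1 + 1) * math.sqrt(eps) \
        + (C2 / s - C2 + 1) * eps

def sampling_probability(g_t, t, c0, tol=1e-12):
    # P_t = min{1, s}, where s solves rhs(s, t, c0) = g_t.
    if g_t <= rhs(1.0, t, c0):
        return 1.0  # the error gap is small enough to always query
    lo, hi = tol, 1.0  # rhs(lo) is huge; rhs(hi) < g_t
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rhs(mid, t, c0) > g_t:
            lo = mid  # rhs still too large: increase s
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(sampling_probability(g_t=0.5, t=100, c0=1.0))
\end{verbatim}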
We now proceed in a fashion similar to the proof of Theorem 1 from \cite{beygelzimer_agnostic_2010}. We begin with a generalization of Lemma 6 in \cite{beygelzimer_agnostic_2010}. \begin{lemma} \label{lemma:d4l6} If $0 < \lambda < 3 \frac{P_i}{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}}$, then \begin{align} \log \expc{i}{}[\exp( \lambda ( \hat{W}_i - \expc{i}{}[\hat{W}_i]))] \leq \frac{\hat{r}_i \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} \lambda^2}{2 P_i(1 - \frac{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} \lambda}{3 P_i})} \end{align} where $\hat{r}_i := \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} \expc{i}{}[\tilde{Q}_i]$. If $\expc{i}{}[\hat{W}_i] = 0$ then \begin{align} \log \expc{i}{}[\exp(\lambda(\hat{W}_i - \expc{i}{}[\hat{W}_i]))] = 0 \end{align} \end{lemma} \begin{proof} First, we bound the range and variance of $\hat{W}_i$. The range bound is immediate: \begin{align} |\hat{W}_i| \leq \left| \frac{Q_i \tilde{Q}_i \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}}{P_i} \right| \leq \frac{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}}{P_i} \end{align} Since subsampling and importance weighting ideally correct the underlying label shift, we can bound the variance as \begin{align} \expc{i}{}[(\hat{W}_i - \expc{i}{}[\hat{W}_i])^2] & \leq \frac{\hat{r}_i \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}}{P_i} f(x_i, y_i)^2 - 2 \hat{r}_i^2 f(x_i, y_i)^2 + \hat{r}_i^2 f(x_i, y_i)^2 \leq \frac{\hat{r}_i \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}}{P_i} \end{align} Following \cite{beygelzimer_agnostic_2010}, we choose a function $g(x) := (\exp(x) - x - 1)/x^2$ for $x \neq 0$ so that $\exp(x) = 1 + x + x^2 g (x)$ holds. Note that $g(x)$ is non-decreasing. Thus, \begin{align} \expc{i}{}[\exp(\lambda( \hat{W}_i - \expc{i}{}[ \hat{W}_i]))] & = \expc{i}{}[1 + \lambda( \hat{W}_i - \expc{i}{}[ \hat{W}_i ]) + \lambda^2 ( \hat{W}_i - \expc{i}{}[ \hat{W}_i] )^2 g(\lambda( \hat{W}_i - \expc{i}{}[ \hat{W}_i]))] \nonumber \\ & = 1 + \lambda^2 \expc{i}{}[( \hat{W}_i - \expc{i}{}[ \hat{W}_i])^2 g(\lambda( \hat{W}_i - \expc{i}{}[ \hat{W}_i]))] \nonumber \\ & \leq 1 + \lambda^2 \expc{i}{}[( \hat{W}_i - \expc{i}{}[ \hat{W}_i])^2 g(\lambda \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} / P_i)] \nonumber \\ & = 1 + \lambda^2 \expc{i}{}[( \hat{W}_i - \expc{i}{}[ \hat{W}_i])^2] g(\lambda \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} / P_i) \nonumber \\ & \leq 1 + \frac{\lambda^2 \hat{r}_i \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}}{P_i} g(\frac{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} \lambda}{P_i}) \end{align} where the first inequality follows from our range bound and the second follows from our variance bound. The first claim then follows from the definition of $g(x)$ and the facts that $\exp(x) - x - 1 \leq x^2/(2(1-x/3))$ for $0 \leq x < 3$ and $\log(1+x) \leq x$. The second claim follows from the definition of $\hat{W}_i$ and the fact that $\expc{i}{}[\hat{W}_i] = \hat{r}_i f(X_i, Y_i)$. \end{proof} The following lemma is an analogue of Lemma 7 in \cite{beygelzimer_agnostic_2010}.
\begin{lemma} Pick any $t \geq 0, p_{\min} > 0$ and let $E$ be the joint event \begin{align} \frac{1}{n} \sum_{i=1}^n \hat{W}_i - \frac{1}{n} \sum_{i=1}^n \expc{i}{}[\hat{W}_i] \geq (1 + M) \sqrt{\frac{t}{2n p_{\min}}} + \frac{t}{3n p_{\min}} \nonumber \\ \text{ and } \min \{ \frac{P_i}{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}} : 1 \leq i \leq n \wedge \expc{i}{}[\hat{W}_i] \neq 0 \} \geq p_{\min} \end{align} Then $\Pr(E) \leq e^{-t}$ where $M := \frac{1}{n} \sum_{i=1}^n \hat{r}_i$. \end{lemma} \begin{proof} We follow \cite{beygelzimer_agnostic_2010} and let \begin{align} \lambda := 3 p_{\min} \frac{\sqrt{\frac{2t}{9 n p_{\min}}}} {1 + \sqrt{\frac{2t}{9 n p_{\min}}}} \end{align} Note that $0 < \lambda < 3p_{\min}$. By Lemma \ref{lemma:d4l6}, we know that if $\min \{\frac{P_i}{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}} : 1 \leq i \leq n \wedge \expc{i}{}[\hat{W}_i] \neq 0 \} \geq p_{\min}$ then \begin{align} \frac{1}{n \lambda} \sum_{i=1}^n \log \expc{i}{}[ \exp( \lambda ( \hat{W}_i - \expc{i}{}[\hat{W}_i] ) ) ] \leq \frac{1}{n} \sum_{i=1}^n \frac{\hat{r}_i \hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} \lambda}{2 P_i (1 - \frac{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)} \lambda}{3 P_i}) } \leq M \sqrt{\frac{t}{2 n p_{\min}}} \end{align} and \begin{align} \frac{t}{n \lambda} = \sqrt{\frac{t}{2 n p_{\min}}} + \frac{t}{3 n p_{\min}} \end{align} Let $E'$ be the event that \begin{align} \frac{1}{n} \sum_{i=1}^n (\hat{W}_i - \expc{i}{}[\hat{W}_i]) - \frac{1}{n \lambda} \sum_{i=1}^n \log \expc{i}{}[\exp( \lambda(\hat{W} - \expc{i}{}[\hat{W}]))] \geq \frac{t}{n \lambda} \end{align} and let $E''$ be the event $\min \{\frac{P_i}{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}} : 1 \leq i \leq n \wedge \expc{i}{}[\hat{W}_i] \neq 0\} \geq p_{\min}$. Together, the above two equations imply $E \subseteq E' \bigcap E''$. By Lemmas 1 and 2 of \cite{zhang_data_2005}, $\Pr(E) \leq \Pr(E' \bigcap E'') \leq \Pr(E') \leq e^{-t}$. \end{proof} The following is an immediate consequence of the previous lemma. \begin{lemma} \label{lemma:midstocha} Pick any $t \geq 0$ and $n \geq 1$. Assume $1 \leq \frac{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}}{P_i} \leq r_{\max}$ for all $1 \leq i \leq n$, and let $R_n := \max\left( \{ \frac{\hat{r}_{\scriptscriptstyle{m \shortrightarrow t}}^{(i)}}{P_i} : 1 \leq i \leq n \wedge \expc{i}{}[\hat{W}_i] \neq 0 \} \cup \{1\}\right) $. We have \begin{align} \Pr \left( \left | \frac{1}{n} \sum_{i=1}^n \hat{W}_i - \frac{1}{n} \sum_{i=1}^n \expc{i}{}[\hat{W}_i] \right | \geq (1 + M) \sqrt{\frac{R_n t}{2n}} + \frac{R_n t}{3n} \right) \leq 2(2 + \log_2 r_{\max}) e^{-t/2} \end{align} \end{lemma} \begin{proof} The proof is identical to that of Lemma 8 in \cite{beygelzimer_agnostic_2010}. \end{proof} We can finally bound $\Delta_4$ by bounding the remaining free quantity $M$. \begin{lemma} \label{lemma:maind4} With probability at least $1 - \delta$, the following holds over all $n \geq 1$ and $h \in H$: \begin{align} \left| \Delta_4 \right| \leq (2 + \norm{\hat{\theta}}_2) \sqrt{\frac{\varepsilon_n}{P_{\min, n}(h)}} + \frac{\varepsilon_n}{P_{\min, n}(h)} \end{align} where $\varepsilon_n := \frac{16 \log(2 ( 2 + n \log_2 n) n (n+1) |H| / \delta)}{n}$ and $P_{\min, n}(h) = \min\left( \{P_i:1 \leq i \leq n \wedge h(X_i) \neq h^*(X_i) \} \cup \{1\}\right)$. \end{lemma} \begin{proof} We define the $k$-sized vector $\tilde{\ell}(j) = \frac{1}{n} \sum_{i=1}^n \mathds{1}_{y_i = j} \hat{\theta}(j)$.
Here, $v(j)$ is an abuse of notation and denotes the $j$th element of a vector $v$. Note that we can write $M$ by instead summing over labels, $M = \frac{1}{n} \sum_{i=1}^n \hat{\theta}_i = \sum_{j=1}^k \tilde{\ell}(j)$. Applying the Cauchy-Schwarz inequality, we have that $\frac{1}{n} \sum_{i=1}^n \hat{\theta}_i \leq \frac{1}{n} \norm{\hat{\theta}}_2 \norm{\dot{\ell}}_2$ where $\dot{\ell}(j)$ is another $k$-sized vector where $\dot{\ell}(j) := \sum_{i=1}^n \mathds{1}_{y_i = j}$. Since $\norm{\dot{\ell}}_2 \leq n$, we have that $M \leq 1 + \norm{\hat{\theta}}_2$. The rest of the claim follows by Lemma \ref{lemma:midstocha} and a union bound over hypotheses and datapoints. \end{proof} The term $\Delta_3$ is bounded with Theorem 1. We now bound $\Delta_2$. This is a simple generalization bound of an importance weighted estimate of $f$. \begin{lemma} For any $\delta > 0$, with probability at least $1 - \delta$, then for all $n \geq 1$, $h \in H$: \begin{align} \left | \Delta_2 \right | \leq \frac{ 2 d_{\infty} (P_{\scriptscriptstyle\text{test}}, P_{\scriptscriptstyle\text{src}}) \log (\frac{2 n |H|}{\delta}) }{ 3n } + \sqrt{\frac{ 2 d_2 (P_{\scriptscriptstyle\text{test}}, P_{\scriptscriptstyle\text{src}}) \log (\frac{2 n |H|}{\delta}) }{ n }} \end{align} \end{lemma} \begin{proof} This inequality is a direct application of Theorem 2 from \cite{cortes_learning_2010}. \end{proof} The following lemma bounds the remaining term $\Delta_1$. \begin{lemma} For all $n \geq 1, h \in H$: \begin{align} \abs{\Delta_1} \leq \norm{r_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty} \text{err}(h_0, r_{\scriptscriptstyle{s \shortrightarrow m}}) \end{align} \end{lemma} \begin{proof} This inequality follows from our Lemma \ref{lemma:drift} and \cite{azizzadenesheli_regularized_2019}'s Theorem 2. \end{proof} Theorem \ref{thm:dev} follows by applying a triangle inequality over $\Delta_1, \Delta_2, \Delta_3, \Delta_4$. If a warm start of $m$ datapoints sampled from $P_{\scriptscriptstyle\text{warm}}$ is used, the deviation bound is instead: \begin{align} \label{eq:warmdev} & \left | err(h, Z_{1:n}) - err(h^*, Z_{1:n}) - err(h) + err(h^*) \right| \nonumber \\ & \leq \mathcal{O} \left( (2 + \frac{n \norm{\theta_{\scriptscriptstyle{u \shortrightarrow t}}}_2 + m \norm{\theta_{\scriptscriptstyle{w \shortrightarrow t}}}_2}{n+m}) \sqrt{\frac{\varepsilon_n}{P_{\min, n}(h)}} + \frac{\varepsilon_n}{P_{\min, n}(h)} + \frac{ 2 d_{\infty} (P_{\scriptscriptstyle\text{test}}, P_{\scriptscriptstyle\text{src}}) \log (\frac{2 n |H|}{\delta}) }{ 3 ( n + m ) } \right. \nonumber \\ & \left. + \sqrt{\frac{ 2 d_2 (P_{\scriptscriptstyle\text{test}}, P_{\scriptscriptstyle\text{src}}) \log (\frac{2 n |H|}{\delta}) }{ n + m }} + \frac{n}{n+m} \norm{r_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty} \text{err}(h_0, r_{\scriptscriptstyle{s \shortrightarrow m}}) \right. \nonumber \\ & \left. + \frac{n}{\sigma_{\min}} \left( \norm{\theta_{\scriptscriptstyle{m \shortrightarrow t}}}_2 \sqrt{ \frac{\log \brck{\frac{n k}{\delta}}}{\lambda n} } + \sqrt{ \frac{\log \brck{\frac{n}{\delta}}}{\lambda n} } + \sqrt{ \frac{\log \brck{\frac{n}{\delta}}}{n'} } + \norm{\theta_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty} \text{err}(h_0, r_{\scriptscriptstyle{m \shortrightarrow t}}) \right) \right)\nonumber \end{align} The only change is that variance and subsampling terms are scaled by $\frac{n}{n+m}$, both of which disappear in the limit where $n \gg m$. For the remainder of this proof, we continue to set $m = 0$.
Theorem 2 follows by replacing the deviation bound in \cite{beygelzimer_agnostic_2010}'s Theorem 2 with our Theorem \ref{thm:dev}. Theorem 3 similarly follows from \cite{beygelzimer_agnostic_2010}'s Theorem 3 but with two additions. First, $\lambda n$ datapoints are sampled for label shift estimation. Second, the number of datapoints which are either accepted or rejected by the active learning algorithm can be much smaller than the number of datapoints sampled from $P_{\scriptscriptstyle\text{src}}$ due to subsampling. We can determine this proportion with an upper-tail Chernoff bound. \begin{lemma} When $\epsilon < 2^{(-2e-1)/\norm{r_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty}}$, given $n$ datapoints from $P_{\scriptscriptstyle\text{src}}$, subsampling will yield $\textbf{n}$ where, \begin{align} \Pr\brck{\textbf{n} \geq \frac{n}{\norm{r_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty}} + \log_2 \brck{\frac{1}{\epsilon}}} \leq \epsilon \end{align} \end{lemma} \begin{proof} The number of subsampled datapoints is a sum of independent Bernoulli trials, each with mean $\mu$, \begin{align} \mu = \expc{y \sim P_{\scriptscriptstyle\text{src}}} \brcksq{P_{\scriptscriptstyle\text{ss}}(y)} = \expc{y \sim P_{\scriptscriptstyle\text{src}}} \brcksq{C \frac{P_{\scriptscriptstyle\text{med}}(y)}{P_{\scriptscriptstyle\text{src}}(y)}} = \expc{y \sim P_{\scriptscriptstyle\text{med}}} \brcksq{C} = C \end{align} where $C$ is a constant such that $C \frac{P_{\scriptscriptstyle\text{med}}(y)}{P_{\scriptscriptstyle\text{src}}(y)} \leq 1$ for all labels $y$. Thus, $\mu = C \leq 1 / \norm{r_{\scriptscriptstyle{s \shortrightarrow m}}}_{\infty}$. The claimed tail bound then follows by applying an upper-tail Chernoff bound to this sum of $n$ independent Bernoulli trials. \end{proof}
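To make the subsampling mechanics in this lemma concrete, here is a minimal Python sketch (not our experiment code; the distributions are synthetic stand-ins, and in practice the blackbox prediction $h_0(x)$ stands in for the unknown label $y$). It computes the acceptance probabilities $P_{\scriptscriptstyle\text{ss}}(y) = C\, P_{\scriptscriptstyle\text{med}}(y)/P_{\scriptscriptstyle\text{src}}(y)$ with the largest admissible constant $C$, simulates the yield $\textbf{n}$, and estimates the drift as the average importance weight over accepted points, per the remark after Lemma \ref{lemma:drift}.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for source, medial, and test label distributions.
k = 10
p_src = rng.dirichlet(np.ones(k))
p_med = rng.dirichlet(np.ones(k))
p_test = rng.dirichlet(np.ones(k))

ratio = p_med / p_src
C = 1.0 / ratio.max()  # largest C with C * p_med(y) / p_src(y) <= 1 for all y
p_ss = C * ratio       # per-class acceptance probabilities P_ss(y)

# Draw n source points and subsample by class-dependent rejection.
n = 100_000
y = rng.choice(k, size=n, p=p_src)
accepted = rng.random(n) < p_ss[y]
print(accepted.mean(), C)  # empirical yield rate concentrates near its mean C

# Drift estimate: accepted labels are (approximately) distributed as p_med,
# so the average of p_test(y) / p_med(y) over them estimates the expectation
# whose deviation from one is the drift.
w = (p_test / p_med)[y[accepted]]
print(abs(1.0 - w.mean()))
\end{verbatim}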
{ "timestamp": "2021-03-01T02:01:59", "yymm": "2007", "arxiv_id": "2007.08479", "language": "en", "url": "https://arxiv.org/abs/2007.08479" }
\section{Introduction} \subsection{Motivation} Given two nonempty subsets $A, B$ of a group $G$, the set $A$ is said to be a left (resp. right) complement to $B$ if $A \cdot B = G$ (resp. $B\cdot A = G$). If $A$ is a left (resp. right) complement to $B$ and no subset of $A$ other than $A$ is a left (resp. right) complement to $B$, then $A$ is said to be a minimal left (resp. right) complement to $B$. The study of minimal complements began with Nathanson in \cite{NathansonAddNT4}, who introduced the notion in the context of additive number theory as a natural arithmetic analogue of the metric concept of nets. Since then, most of the literature on minimal complements has focused on the direct problem of which sets admit minimal complements; see the works of Chen--Yang \cite{ChenYang12}, Kiss--S\'{a}ndor--Yang \cite{KissSandorYangJCT19}, of the authors \cite{MinComp1}, \cite{MinComp2}, etc. Recently, the study of inverse problems, i.e., which sets occur as minimal complements, has become popular. The works of Kwon \cite{Kwon}, Alon--Kravitz--Larson \cite{AlonKravitzLarson}, Burcroff--Luntzlara \cite{BurcroffLuntzlara} and also of the authors \cite{CoMin1, CoMin2, CoMin3} have investigated this direction of research. However, most of the literature to date has focused on abelian groups. In this work, our motivation is two-fold: \begin{enumerate} \item To show some new results on the inverse problem. \item To concentrate on the inverse problem in not necessarily abelian or finite groups. \end{enumerate} In \cite[Theorem C]{CoMin1}, it has been proved that the ``large'' subsets of a group cannot be minimal complements to any subset. In \cite{AlonKravitzLarson}, Alon--Kravitz--Larson have established several interesting results which include the above statement in the context of finite abelian groups. For any group $G$, \cite[Theorem C]{CoMin1} states that a subset $C$ of $G$, other than $G$, is not a minimal complement in $G$ if $C$ is ``large'' in the sense that \begin{equation} \label{Eqn:BSRel} \frac{|C| }{|G\setminus C|} > 2. \end{equation} In \cite[Theorem C]{CoMin1}, the set $G\setminus C$ was assumed to be finite. A refined version of this result in the context of finite abelian groups is established in \cite[Proposition 17]{AlonKravitzLarson}, which states that a subset $C$ of a finite abelian group $G$, contained in a subgroup $H$, is not a minimal complement in $G$ if $C$ is ``large'' in the sense that $$ \frac{2|G||H| } {|H| + 2|G|} < |C| < |H|. $$ Note that the above inequality can be restated as \begin{equation} \label{Eqn:AKLRel} \frac{|C| }{|H\setminus C|} > 2[G:H] \end{equation} together with $C\subsetneq H$ (as explained in the proof of Proposition \ref{Prop:Fini}). We consider the subsets of $G$ which are contained in the subgroups of $G$ and establish a necessary condition (similar to Equations \eqref{Eqn:BSRel}, \eqref{Eqn:AKLRel}) for them to be non-minimal complements in $G$.
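Before proceeding, we note that statements such as \cite[Theorem C]{CoMin1} can be checked by exhaustive search in small cyclic groups. The following minimal Python sketch (a brute-force illustration, feasible only for very small $n$) tests whether a subset of $\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}$ is a minimal complement to some nonempty subset. Testing single-point removals suffices for minimality, since any proper sub-complement of $C$ would make some $C\setminus\{c\}$ a complement as well.

\begin{verbatim}
from itertools import combinations

def is_complement(C, S, n):
    # C + S = Z/nZ, written additively.
    return {(c + s) % n for c in C for s in S} == set(range(n))

def is_minimal_complement_to(C, S, n):
    # C is a complement to S and no single-point removal of C is.
    return is_complement(C, S, n) and not any(
        is_complement(C - {c}, S, n) for c in C)

def is_minimal_complement(C, n):
    # Does *some* nonempty S make C a minimal complement in Z/nZ?
    return any(is_minimal_complement_to(C, set(S), n)
               for r in range(1, n + 1)
               for S in combinations(range(n), r))

# Sets C with |C| / |(Z/nZ) \ C| > 2, i.e. |C| > 2n/3, should never be
# minimal complements (unless C is the whole group).
n = 9
for size in range(1, n + 1):
    print(size, is_minimal_complement(set(range(size)), n))
\end{verbatim}

For $n=9$, the sizes $7$ and $8$ print \texttt{False}, matching the bound \eqref{Eqn:BSRel}, while $C=G$ itself prints \texttt{True} (it is a minimal complement to any singleton), matching the exclusion of $C=G$ in the statement.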
For a subset $C$ of $G$, strictly contained in a subgroup $H$, define the \textit{relative quotient of $C$ with respect to $H$} to be $$\lambda_H(C) = \frac{|C| }{|H\setminus C|} .$$ Note that \cite[Proposition 17]{AlonKravitzLarson} (in the context of finite abelian groups $G$) and \cite[Theorem C]{CoMin1} (for any group $G$ with $G = H$) can be restated as follows: a subset $C$ of a group $G$, properly contained in a subgroup $H$ of $G$, is not a minimal complement in $G$ if its relative quotient with respect to $H$ is greater than twice the index of $H$ in $G$, i.e., $$\lambda_H(C) > 2[G:H].$$ The aim of this article is to establish that such a statement holds in more general contexts. \subsection{Results obtained} By suitably adapting the proof of \cite[Theorem C]{CoMin1}, we prove that a subset $C$ of a group $G$, properly contained in a subgroup $H$, is not a minimal complement in $G$ if the inequality $$\lambda_H(C) > 2[G:H]$$ holds (when the above inequality is interpreted in an appropriate manner). In fact, our results are more general. Under suitable hypotheses, we prove that not only such sets $C$, but also the sets of the form $(C\setminus E) \cup F$ are non-minimal complements for subsets $C$ of $H$ satisfying the above inequality, finite subsets $E\subseteq C$ and subsets $F\subseteq H\setminus C$. We refer to Theorems \ref{Thm:FAvoidsOneCoset}, \ref{Thm:QLeavesLAppears}, \ref{Thm:FContainedInSingleCoset}, \ref{Thm:CMinusCSymm}, \ref{Thm:Top}, \ref{Thm:Cardi} and Propositions \ref{Prop:Coset}, \ref{Prop:SansK}, \ref{Prop:Fini} for the precise statements. These results are more general than \cite[Proposition 17]{AlonKravitzLarson} and \cite[Theorem C]{CoMin1}. Using them, we obtain subsets of groups which are not minimal complements to any subset. Though the above-mentioned results apply to any group, to motivate the discussion, we provide the examples in the context of the integers. \begin{example} \label{Eg:Intro} \quad \begin{enumerate} \item It follows from Theorem \ref{Thm:FAvoidsOneCoset} that the set $$(\{5, 7, \cdots, 27, 29\} + 32\ensuremath{\mathbb{Z}}) \cup \{p\,|\, p \equiv \pm 1 \,(\mathrm{mod}\, 32), p \text{ is a prime}\}$$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. \item It follows from Theorem \ref{Thm:QLeavesLAppears} that the set $$(\{3, 9, 11, 13, \cdots, 47\} + 48\ensuremath{\mathbb{Z}}) \cup \{p\,|\, p \equiv 1, 5, 7 \,(\mathrm{mod}\, 48), p \text{ is a prime}\}$$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. \item It follows from Theorem \ref{Thm:FContainedInSingleCoset} that the set $$(\{3, 5, 7, 9, 11\} + 12\ensuremath{\mathbb{Z}}) \cup \{p\,|\, p \equiv 1 \,(\mathrm{mod}\, 12), p \text{ is a prime}\}$$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. \item It follows from Theorem \ref{Thm:CMinusCSymm} that the set $$(\{0, 1, 2, 3, 6, 7, 8\} + 9\ensuremath{\mathbb{Z}}) \cup \{p\,|\, p \equiv \pm 5 \,(\mathrm{mod}\, 9), p \text{ is a prime}\} $$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. \item It follows from Proposition \ref{Prop:Coset} that $\{2, 4, 6, 8, 10\} + 12\ensuremath{\mathbb{Z}}$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. Moreover, it also follows that the set of irrational numbers is not a minimal complement in $\ensuremath{\mathbb{R}}$, and the set of transcendental numbers is not a minimal complement in $\ensuremath{\mathbb{C}}$.
\item It follows from Proposition \ref{Prop:SansK} that for any positive integer $k$ and for any nonempty finite subset $F$ of $k\ensuremath{\mathbb{Z}}$, the set $k\ensuremath{\mathbb{Z}}\setminus F$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. \item It follows from Theorem \ref{Thm:Top} that the set of real numbers having absolute value greater than one is not a minimal complement in $\ensuremath{\mathbb{R}}$. \item It follows from Theorem \ref{Thm:Cardi} that the set of irrational numbers, with a countable number of points removed, is not a minimal complement in $\ensuremath{\mathbb{R}}$, and the set of transcendental numbers, with a countable number of points removed, is not a minimal complement in $\ensuremath{\mathbb{C}}$. \end{enumerate} \end{example} There are several immediate questions about the minimal complements in a finite group; for instance, given a group $G$ of order $n$, what are the sizes of the minimal complements, and what are the integers $k$ between $1$ and $n$ such that any subset (or some subset) of $G$ of size $k$ is a minimal complement \cite[Question 1]{CoMin1}? Further, one can study these questions in the context of cyclic groups, or abelian groups, or finite groups. Some of these questions were answered by Alon, Kravitz and Larson in the context of abelian groups \cite[Theorem 1, Proposition 17]{AlonKravitzLarson}. The results obtained in Section \ref{Sec:NonMinComp} apply to groups, which are not assumed to be abelian, and thus they further improve our understanding of \cite[Question 1]{CoMin1}. Following \cite[Definition 5]{BurcroffLuntzlara}, one can consider the notion of robust MAC and robust non-MAC in any abelian group $G$. A subset of an abelian group $G$ is said to be a \textit{robust non-MAC} if it remains a non-minimal complement after the removal or the inclusion of finitely many points (see Definition \ref{Defn:Robust}). We obtain uncountably many examples of robust non-MACs in finitely generated abelian groups of positive rank and in any free abelian group of positive rank (see Theorem \ref{Thm:RobustNonMac} for a more general statement). Further, one can consider the analogous notion in non-abelian groups and obtain several examples by applying the results from Section \ref{Sec:NonMinComp}. In particular, we show that for any number field $K$ of degree $\geq 3$, the group $\ensuremath{\operatorname{GL}}_n(\ensuremath{\mathcal{O}}_K)$ contains uncountably many robust non-minimal complements, where $\ensuremath{\mathcal{O}}_K$ denotes the ring of integers of $K$. We refer to Section \ref{Sec:RobustNonMac} for the details. \section{Non-minimal complements in groups} \label{Sec:NonMinComp} The principal results of this Section are Theorems \ref{Thm:FAvoidsOneCoset}, \ref{Thm:QLeavesLAppears}, \ref{Thm:FContainedInSingleCoset}, \ref{Thm:CMinusCSymm}, \ref{Thm:Top}, \ref{Thm:Cardi}. They are aimed at establishing that a subset $C$ of a group $G$, properly contained in a subgroup $H$, is not a minimal complement in $G$ if the inequality $$\lambda_H(C) > 2[G:H]$$ holds (when the above inequality is interpreted in an appropriate manner). Moreover, these results not only deal with such sets $C$, but also deal with the sets of the form $(C\setminus E) \cup F$ where $C$ is a subset of $H$ satisfying the above inequality, $E$ is a finite subset of $C$, and $F\subseteq H\setminus C$.
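For concreteness, the coset-counting reading of the inequality $\lambda_H(C) > 2[G:H]$ is easy to verify computationally in a finite model. The following minimal Python sketch uses illustrative choices: it works in $G=\ensuremath{\mathbb{Z}}/96\ensuremath{\mathbb{Z}}$ with $H=2\ensuremath{\mathbb{Z}}/96\ensuremath{\mathbb{Z}}$ and $K=32\ensuremath{\mathbb{Z}}/96\ensuremath{\mathbb{Z}}$, mirroring (after a shift by $-1$) the modulus-$32$ set from the first item of Example \ref{Eg:Intro}.

\begin{verbatim}
def cosets_of_K_in(X, k, n):
    # Decompose a union of K-cosets X in Z/nZ, K = kZ/nZ, into cosets;
    # a K-coset is determined by its representative mod k.
    reps = {x % k for x in X}
    assert all((r + j * k) % n in X
               for r in reps for j in range(n // k)), \
        "X must be a union of K-cosets"
    return reps

def satisfies_assumption(C, H, k, n, index_G_H):
    # [C : K] > 2 [G : H] [H \ C : K], the coset-count reading of
    # lambda_H(C) > 2 [G : H].
    return len(cosets_of_K_in(C, k, n)) \
        > 2 * index_G_H * len(cosets_of_K_in(H - C, k, n))

n, d, kk = 96, 2, 32
H = {x for x in range(n) if x % d == 0}  # H = 2Z/96Z, of index 2 in G
C = {(r + j * kk) % n                    # 13 of the 16 K-cosets in H
     for r in range(4, 30, 2) for j in range(n // kk)}
print(satisfies_assumption(C, H, kk, n, index_G_H=2))  # 13 > 2*2*3: True
\end{verbatim}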
We refer to Theorems \ref{Thm:FAvoidsOneCoset}, \ref{Thm:QLeavesLAppears}, \ref{Thm:FContainedInSingleCoset}, \ref{Thm:CMinusCSymm}, \ref{Thm:Top}, \ref{Thm:Cardi} for the precise statements. These results are illustrated by applying them to subsets of certain groups, and thereby obtaining examples of non-minimal complements, see Remarks \ref{Remark:FAvoidsOneCoset}, \ref{Remark:QLeavesLAppears}, \ref{Remark:FContainedInSingleCoset}, \ref{Remark:CMinusCSymm}, \ref{Remark:Top}, \ref{Remark:Cardi}, see also Section \ref{Sec:RobustNonMac}. Some of their important consequences are stated in Propositions \ref{Prop:Coset}, \ref{Prop:SansK}, \ref{Prop:Fini}. We remark that no group is assumed to be abelian or finite unless otherwise stated. In the following, $H$ denotes a finite index subgroup of a group $G$, $K$ denotes a normal subgroup of $H$. If $X$ is a subset of $G$ and $X$ is the union of certain $K$-right cosets, then denote the number of $K$-right cosets contained in $X$ by $[X:K]$. Let $C$ denote a proper subset\footnote{A subset $A$ of a set $B$ is said to be a \textit{proper subset} if $B\setminus A$ is nonempty.} of $H$. Suppose $C$ is a union of certain right cosets of $K$ in $H$ and $H\setminus C$ is the union of finitely many right cosets of $K$ in $H$. Henceforth, we assume that the relative quotient of $C$ with respect to $H$ is greater than twice the index of $H$ in $G$, i.e., the inequality $$\lambda_H(C) > 2[G:H]$$ holds in the following sense. \begin{assumption} \label{Assumption} The number of the $K$-right cosets contained in $C$ is greater than the product of $2[G:H]$ and the number of $K$-right cosets contained in $H\setminus C$. \end{assumption} Let $E$ be a finite subset of $C$ and $F$ be a subset of $H\setminus C$. \begin{theorem} \label{Thm:FAvoidsOneCoset} If \begin{enumerate} \item the set $F$ does not intersect with some $K$-right coset in $H\setminus C$, \item the number of elements of $K$ is greater than $2([G:H] + 1) |E|$, \end{enumerate} and Assumption \ref{Assumption} holds, then $(C\setminus E) \cup F$ is not a minimal complement in $G$. \end{theorem} \begin{proof} On the contrary, let us assume that $(C\setminus E) \cup F$ is a minimal left complement to a subset $S$ of $G$. Let $\ell$ denote the index of $H$ in $G$. Let $s_1, \cdots, s_\ell$ be elements of $S$ such that $$H s_i \cap H s_j = \emptyset \quad \text{ for all } i \neq j.$$ For $1\leq i \leq \ell$, let $S_i$ denote the subset of $S$ defined by $$S_i : = \{s\in S\,|\, Hs = Hs_i\}.$$ By the first condition, it follows that $(C\setminus E) \cup F$ and $K\cdot ((C\setminus E) \cup F)$ are proper subsets of $H$. So, for each $1\leq i \leq \ell$, there exists an element $s_i'$ in $S_i$ such that $$ (K\cdot ((C\setminus E) \cup F))s_i' \neq (K\cdot ((C\setminus E) \cup F))s_i. $$ Indeed, the coset $Hs_i$ can only be covered by $((C\setminus E) \cup F)\cdot S_i$, and if the above translates were equal for every element of $S_i$, then this cover would be contained in the proper subset $(K\cdot ((C\setminus E) \cup F))s_i$ of $Hs_i$. Since $K$ is normal in $H$, it follows that \begin{equation} \label{Eqn:DistinctModK} Ks_i \neq Ks_i' \end{equation} for any $i$. Note that there exists a subset $\ensuremath{\mathcal{C}}$ of $C$ consisting of certain $K$-right cosets such that $\ensuremath{\mathcal{C}}$ contains at most $\ell [(H \setminus C):K]$ many $K$-right cosets and $(\ensuremath{\mathcal{C}} \cup F)\cdot S$ contains $(H\setminus C) \cdot \{s_1, \cdots, s_\ell\}$.
Moreover, there exists a subset $\ensuremath{\mathcal{E}}$ of $C\setminus E$ containing at most $|E|$ elements such that $((\ensuremath{\mathcal{C}}\setminus E) \cup \ensuremath{\mathcal{E}} \cup F)\cdot S$ contains $(H\setminus C) \cdot \{s_1, \cdots, s_\ell\}$. Further, the set $$\ensuremath{\mathcal{C}} \cup ((H\setminus C) \cdot \{s_1's_1^{-1}, \cdots, s_\ell' s_\ell^{-1}\})$$ contains at most $2\ell [(H \setminus C):K]$ many $K$-right cosets. By Assumption \ref{Assumption}, it follows that the set $C$ contains a $K$-right coset $Kh$ which is disjoint from the set $$\ensuremath{\mathcal{C}} \cup ((H\setminus C) \cdot \{s_1's_1^{-1}, \cdots, s_\ell's_\ell^{-1}\}).$$ Note that there exists a subset $\ensuremath{\mathcal{E}}'$ of $C\setminus E$ containing at most $\ell|E|$ elements such that $(\ensuremath{\mathcal{E}}' \cup F) \cdot S$ contains $E\cdot \{s_1', \cdots, s_\ell'\}$. Further, note that there exists a subset $\ensuremath{\mathcal{E}}''$ of $C\setminus E$ containing at most $\ell|E|$ elements such that $(\ensuremath{\mathcal{E}}'' \cup F) \cdot S$ contains $E\cdot \{s_1, \cdots, s_\ell\}$. We claim that $$Hs_i\subseteq (((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup F ) \cdot S$$ for any $1\leq i \leq \ell$. Since $(\ensuremath{\mathcal{C}}\setminus E) \cup \ensuremath{\mathcal{E}} \cup F$ is contained in $(C\setminus E)\cup F$, the set $((\ensuremath{\mathcal{C}}\setminus E)\cup \ensuremath{\mathcal{E}} \cup F) \cdot S$ contains $(H\setminus C)\cdot s_i$, and $Kh$ does not intersect with $\ensuremath{\mathcal{C}}\setminus E$, it follows that $(H\setminus C)\cdot s_i$ is contained in $$(((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup F ) \cdot S.$$ Note that $(C\setminus Kh)\cdot s_i$ is contained in $$(E\cdot s_i ) \cup \left( ((C\setminus E) \setminus Kh) \cdot S \right),$$ which is contained in $$(((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}}'' \cup F ) \cdot S.$$ Note that $Khs_i$ does not intersect with $(H\setminus C)\cdot \{s_i'\}$. Since $Hs_i = Hs_i'$, it follows that $Khs_i$ is contained in $C\cdot s_i'$. Further, note that $Khs_i$ does not intersect with $Khs_i'$; otherwise, $Khs_i = Khs_i'$. Since $K$ is normal in $H$, it follows that $hKs_i = hKs_i'$, which yields $Ks_i = Ks_i'$, contradicting $Ks_i \neq Ks_i'$. So $Khs_i$ is contained in $(C\setminus Kh)\cdot s_i'$. Since $(\ensuremath{\mathcal{E}}' \cup F) \cdot S$ contains $E\cdot \{s_1', \cdots, s_\ell'\}$, it follows that $Khs_i$ is contained in $$(((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}}' \cup F ) \cdot S.$$ This proves the claim that $$Hs_i\subseteq (((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup F ) \cdot S$$ for all $1\leq i\leq \ell$. So $((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup F$ is a left complement to $S$. By the second condition, $((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}''$ is a proper subset of $C\setminus E$. Hence $(C\setminus E)\cup F$ is not a minimal left complement to $S$. If $(C\setminus E) \cup F$ is a minimal right complement to some subset $T$ of $G$, then $(C^{-1} \setminus E^{-1}) \cup F^{-1}$ is a minimal left complement to $T^{-1}$, which is impossible.
\end{proof} \begin{remark} \label{Remark:FAvoidsOneCoset} Taking $G = \ensuremath{\mathbb{Z}}, H = 2\ensuremath{\mathbb{Z}}, K = 32\ensuremath{\mathbb{Z}}$, $C = (\{5, 7, \cdots, 27, 29\} + 32\ensuremath{\mathbb{Z}}) -1$, and $F = \{p\,|\, p \equiv \pm 1 \,(\mathrm{mod}\, 32), p \text{ is a prime}\} -1$, it follows from Theorem \ref{Thm:FAvoidsOneCoset} (together with the fact that a translate of a set which is not a minimal complement is again not a minimal complement) that the set $$(\{5, 7, \cdots, 27, 29\} + 32\ensuremath{\mathbb{Z}}) \cup \{p\,|\, p \equiv \pm 1 \,(\mathrm{mod}\, 32), p \text{ is a prime}\}$$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. \end{remark} In the proof of Theorem \ref{Thm:FAvoidsOneCoset}, Equation \eqref{Eqn:DistinctModK} played a crucial role. This equation was obtained by using the hypothesis that $F$ does not intersect with some $K$-right coset contained in $H\setminus C$. In the following result, we prove that even if $F$ intersects with each $K$-right coset contained in $H\setminus C$, one may obtain a similar result under an alternate hypothesis. \begin{theorem} \label{Thm:QLeavesLAppears} If \begin{enumerate} \item $F$ is a proper subset of $H\setminus C$, and given $2[G:H]$ many elements $x_1, y_1, \cdots, x_{[G:H]} , y_{[G:H]}$ of $G$ with $x_i \neq y_i$ for each $i$, there exists a finite index subgroup $L$ of $K$ such that $Lx_i \neq Ly_i$ for each $i$ and $L$ is normal in $H$, \item for any finite index subgroup $L$ of $K$, the number of elements of $L$ is greater than $2([G:H] + 1) |E|$, \end{enumerate} and Assumption \ref{Assumption} holds, then $(C\setminus E) \cup F$ is not a minimal complement in $G$. \end{theorem} \begin{proof} On the contrary, let us assume that $(C\setminus E) \cup F$ is a minimal left complement to a subset $S$ of $G$. Let $\ell$ denote the index of $H$ in $G$. Let $s_1, \cdots, s_\ell$ be elements of $S$ such that $$H s_i \cap H s_j = \emptyset \quad \text{ for all } i \neq j.$$ For $1\leq i \leq \ell$, let $S_i$ denote the subset of $S$ defined by $$S_i : = \{s\in S\,|\, Hs = Hs_i\}.$$ By the first condition, $(C\setminus E) \cup F$ is a proper subset of $H$. It follows that $S_i$ contains an element other than $s_i$ (otherwise $Hs_i$ would be contained in $((C\setminus E) \cup F)s_i$, which is a proper subset of $Hs_i$). For each $1\leq i\leq \ell$, let $s_i'\neq s_i$ be an element of $S_i$. By the first condition, there exists a finite index subgroup $L$ of $K$ such that $L$ is normal in $H$ and $Ls_i \neq Ls_i'$ for each $i$. Replacing $K$ by $L$ (if necessary), we may (and do) assume that $Ks_i \neq Ks_i'$ for each $i$ (note that $C$ and $H\setminus C$ remain unions of $L$-right cosets, Assumption \ref{Assumption} is preserved since $[C:L]$ and $[(H\setminus C):L]$ are both $[K:L]$ times the corresponding numbers for $K$, and the second condition guarantees that $L$ has more than $2([G:H]+1)|E|$ elements). Note that the same condition was obtained in Equation \eqref{Eqn:DistinctModK} in the course of the proof of Theorem \ref{Thm:FAvoidsOneCoset}. Proceeding in a similar fashion, we obtain the result. \end{proof} \begin{corollary} Suppose $C$ is a subset of $\ensuremath{\mathbb{Z}}$ which is the union of certain translates of a nonzero subgroup $K$ of $\ensuremath{\mathbb{Z}}$. If $\lambda_\ensuremath{\mathbb{Z}}(C) > 2$, then $(C\setminus E) \cup F$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$ for any finite subset $E$ of $C$ and for any proper subset $F$ of $\ensuremath{\mathbb{Z}}\setminus C$.
\end{corollary} \begin{remark} \label{Remark:QLeavesLAppears} Taking $G = \ensuremath{\mathbb{Z}}, H = 2\ensuremath{\mathbb{Z}}, K = 48\ensuremath{\mathbb{Z}}$, $C = \{2, 8, 10, 12, \cdots, 46\} + 48\ensuremath{\mathbb{Z}}$ and $F = \{p\,|\, p \equiv 1, 5, 7 \,(\mathrm{mod}\, 48), p \text{ is a prime}\}-1$, it follows from Theorem \ref{Thm:QLeavesLAppears} (together with translation invariance of non-minimality) that the set $$(\{3, 9, 11, 13, \cdots, 47\} + 48\ensuremath{\mathbb{Z}}) \cup \{p\,|\, p \equiv 1, 5, 7 \,(\mathrm{mod}\, 48), p \text{ is a prime}\}$$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. \end{remark} Note that the proofs of Theorems \ref{Thm:FAvoidsOneCoset}, \ref{Thm:QLeavesLAppears} crucially relied on the observation that $S_i$ contains two elements which lie in two disjoint $K$-right cosets. However, if we consider a set of the form $(C\setminus E)\cup F$ and assume that it is a minimal left complement to some set $S$, it is not clear whether each $S_i$ has this property. We show in the following result that even in such a situation, a similar conclusion may be obtained under alternate hypotheses. \begin{theorem} \label{Thm:FContainedInSingleCoset} If \begin{enumerate} \item the set $F$ is either empty or it is contained in a single $K$-right coset, \item the set $F\cup X$ does not contain any $K$-right coset for any subset $X$ of $H$ of size $\leq 2([G:H] + 1) |E|$, \item the set $E \cdot F^{-1}$ does not contain any $K$-right coset, \end{enumerate} and Assumption \ref{Assumption} holds, then $(C\setminus E) \cup F$ is not a minimal complement in $G$. \end{theorem} \begin{proof} On the contrary, let us assume that $(C\setminus E) \cup F$ is a minimal left complement to a subset $S$ of $G$. Let $\ell$ denote the index of $H$ in $G$. Let $s_1, \cdots, s_\ell$ be elements of $S$ such that $$H s_i \cap H s_j = \emptyset \quad \text{ for all } i \neq j.$$ For $1\leq i \leq \ell$, let $S_i$ denote the subset of $S$ defined by $$S_i : = \{s\in S\,|\, Hs = Hs_i\}.$$ By the first and second conditions, $(C\setminus E) \cup F$ is a proper subset of $H$. It follows that $S_i$ contains an element other than $s_i$. Let $P, Q$ denote the sets defined by \begin{align*} P & = \{i \,|\, 1\leq i \leq \ell, Cs' \neq Cs_i \text{ for some } s'\in S_i\},\\ Q & = \{i \,|\, 1\leq i \leq \ell, i\notin P\}. \end{align*} For $i\in P$, let $s_i'$ denote an element of $S_i$ such that $Cs_i' \neq Cs_i$. For $i\in Q$, let $s_i'$ denote an element of $S_i$ other than $s_i$. Note that there exists a subset $\ensuremath{\mathcal{C}}$ of $C$ consisting of certain $K$-right cosets such that $\ensuremath{\mathcal{C}}$ contains at most $\ell [(H \setminus C):K]$ many $K$-right cosets and $(\ensuremath{\mathcal{C}} \cup F)\cdot S$ contains $(H\setminus C) \cdot \{s_1, \cdots, s_\ell\}$. There exists a subset $\ensuremath{\mathcal{E}}$ of $C\setminus E$ containing at most $|E|$ elements such that $((\ensuremath{\mathcal{C}}\setminus E) \cup \ensuremath{\mathcal{E}} \cup F)\cdot S$ contains $(H\setminus C) \cdot \{s_1, \cdots, s_\ell\}$. Further, the set $$\ensuremath{\mathcal{C}} \cup ((H\setminus C) \cdot \{s_1's_1^{-1}, \cdots, s_\ell' s_\ell^{-1}\})$$ contains at most $2\ell [(H\setminus C):K]$ many $K$-right cosets.
By Assumption \ref{Assumption}, it follows that the set $C$ contains a $K$-right coset $\ensuremath{\mathcal{R}}$ which is disjoint from the set $$\ensuremath{\mathcal{C}} \cup ((H\setminus C) \cdot \{s_1's_1^{-1}, \cdots, s_\ell's_\ell^{-1}\}).$$ Note that there exists a subset $\ensuremath{\mathcal{E}}'$ of $C\setminus E$ containing at most $\ell|E|$ elements such that $(\ensuremath{\mathcal{E}}' \cup F) \cdot S$ contains $E\cdot \{s_1', \cdots, s_\ell'\}$. Further, note that there exists a subset $\ensuremath{\mathcal{E}}''$ of $C\setminus E$ containing at most $\ell|E|$ elements such that $(\ensuremath{\mathcal{E}}'' \cup F) \cdot S$ contains $E\cdot \{s_1, \cdots, s_\ell\}$. Assume that $F$ is contained in $K\alpha$ for some $\alpha\in H$. Let $h$ be an element of $\ensuremath{\mathcal{R}}$ such that $h\notin (E\cdot F^{-1} ) \alpha$ (if there were no such $h$, then $(E\cdot F^{-1} ) \alpha$ would contain $\ensuremath{\mathcal{R}}$, which would imply that $E\cdot F^{-1}$ contains a $K$-right coset, contradicting the third condition). We claim that $$Hs_i\subseteq (((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup (h\alpha^{-1} F)\cup F ) \cdot S$$ for each $1\leq i \leq \ell$. Since the set $((\ensuremath{\mathcal{C}}\setminus E)\cup \ensuremath{\mathcal{E}} \cup F) \cdot S$ contains $(H\setminus C)\cdot s_i$ and $Kh$ does not intersect with $\ensuremath{\mathcal{C}}\setminus E$, it follows that $(H\setminus C)\cdot s_i$ is contained in $$(((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup F ) \cdot S.$$ Note that $(C\setminus Kh)\cdot s_i$ is contained in $$(E\cdot s_i ) \cup \left( (((C\setminus E) \setminus Kh) \cup F ) \cdot S \right),$$ which is contained in $$(((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}}'' \cup F ) \cdot S.$$ Let $i$ be an element of $P$. Note that $Khs_i$ does not intersect with $(H\setminus C)\cdot \{s_i'\}$. Since $Hs_i = Hs_i'$, it follows that $Khs_i$ is contained in $C\cdot s_i'$. Further, note that $Khs_i$ does not intersect with $Khs_i'$: otherwise $Khs_i = Khs_i'$, and since $K$ is normal in $H$, this would yield $hKs_i = hKs_i'$, i.e., $Ks_i = Ks_i'$, and consequently $K\widetilde h s_i' = K\widetilde h s_i$ would hold for any $\widetilde h\in H$, whence $Cs_i' = Cs_i$, contradicting $i\in P$. So $Khs_i$ is contained in $(C\setminus Kh)\cdot s_i'$. Since $(\ensuremath{\mathcal{E}}' \cup F) \cdot S$ contains $E\cdot \{s_1', \cdots, s_\ell'\}$, it follows that $Khs_i$ is contained in $$(((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}}' \cup F ) \cdot S$$ for $i\in P$. For each $i\in Q$, we have $$F\cdot S_i = (H \setminus C) \cdot s_i$$ (indeed, for $i\in Q$ we have $C\cdot S_i = C\cdot s_i$, the set $F\cdot S_i$ is disjoint from $C\cdot s_i$, and $Hs_i = Cs_i \sqcup (H\setminus C)s_i$ is contained in $(C\cdot S_i) \cup (F\cdot S_i)$). Note that for any $\beta\in H$, we obtain \begin{align*} \beta \alpha^{-1} F & \subseteq \beta \alpha^{-1} K \alpha \\ & = K \beta \alpha^{-1} \alpha \\ & = K \beta \end{align*} and \begin{align*} (\beta \alpha^{-1} F )\cdot S_i & = (\beta \alpha^{-1}) \cdot (F \cdot S_i) \\ & \supseteq (\beta \alpha^{-1}) \cdot K \alpha s_i \\ & = K \beta \alpha^{-1} \alpha s_i\\ & = K \beta s_i \end{align*} for $i\in Q$. It follows that $Khs_i$ is contained in $(h\alpha^{-1} F)\cdot S$ for $i\in Q$. This proves the claim that $$Hs_i\subseteq (((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup (h\alpha^{-1} F) \cup F ) \cdot S$$ for all $i$.
So $((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup (h\alpha^{-1} F) \cup F$ is a left complement to $S$. Since $h\notin (E\cdot F^{-1} ) \alpha$ (so that $h\alpha^{-1} F$ avoids $E$), it follows, using the second condition, that $((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup (h\alpha^{-1} F)$ is a proper subset of $C\setminus E$. Hence $(C\setminus E)\cup F$ is not a minimal left complement to $S$. If $(C\setminus E) \cup F$ is a minimal right complement to some subset $T$ of $G$, then $(C^{-1} \setminus E^{-1}) \cup F^{-1}$ is a minimal left complement to $T^{-1}$, which is impossible. \end{proof} \begin{remark} \label{Remark:FContainedInSingleCoset} Taking $G = \ensuremath{\mathbb{Z}}, H = 2\ensuremath{\mathbb{Z}}, K = 12\ensuremath{\mathbb{Z}}$, $C = \{2, 4, 6, 8, 10\} + 12\ensuremath{\mathbb{Z}}$ and $F = \{p\,|\, p \equiv 1 \,(\mathrm{mod}\, 12), p \text{ is a prime}\}-1$, it follows from Theorem \ref{Thm:FContainedInSingleCoset} (together with translation invariance of non-minimality) that the set $$(\{3, 5, 7, 9, 11\} + 12\ensuremath{\mathbb{Z}}) \cup \{p\,|\, p \equiv 1 \,(\mathrm{mod}\, 12), p \text{ is a prime}\}$$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. \end{remark} Note that in the proof of Theorem \ref{Thm:FContainedInSingleCoset}, the hypothesis that $F$ is either empty or is contained in a single $K$-right coset played a crucial role. It would be interesting to consider the subsets of $H$ of the form $(C\setminus E) \cup F$ for ``large'' $C$ and for any proper subset $F$ of $H\setminus C$. In Theorem \ref{Thm:CMinusCSymm}, we prove that even if $F$ intersects with each $K$-right coset contained in $H\setminus C$, one may obtain a similar result under an alternate hypothesis. \begin{proposition} \label{Prop:CMinusCSymmPrimeIndex} Let $X, Y$ be two nonempty disjoint subsets of a group $G$ with $X\cup Y = G$. Let $L$ be a subgroup of $G$ such that $X$ is the union of certain right cosets of $L$. Then the inclusion \begin{equation} \label{Eqn:CondXYL} (X\cdot Y^{-1}) \cup (Y \cdot X^{-1}) \subseteq G\setminus L \end{equation} holds. Moreover, the following conditions are equivalent. \begin{enumerate} \item The inclusion in Equation \eqref{Eqn:CondXYL} is a proper inclusion. \item For each $y\in Y$, there exists an element $y'\in Y\setminus (Ly)$ such that $$y'y^{-1} \cdot Y = Y.$$ \item For some $y\in Y$, there exists an element $y'\in Y\setminus (Ly)$ such that $$y'y^{-1} \cdot Y = Y.$$ \item The set $Y$ is the union of certain right cosets of some subgroup of $G$ which properly contains $L$. \item The set $X$ is the union of certain right cosets of some subgroup of $G$ which properly contains $L$. \end{enumerate} The set $$ G\setminus \left((X\cdot Y^{-1}) \cup (Y \cdot X^{-1}) \right)$$ is a subgroup of $G$ and it is the maximal subgroup of $G$ such that $Y$ is a union of its right cosets. If $Y$ is finite, then the inclusion \begin{equation} \label{Eqn:CondXY} (X\cdot Y^{-1}) \cup (Y \cdot X^{-1}) \subseteq G\setminus \{e\} \end{equation} is an equality under any one of the following conditions. \begin{enumerate}[(a)] \item The order of $y'y^{-1}$ is greater than the size of $Y$ for any $y, y'\in Y$ with $y \neq y'$. \item The size of $Y$ is not divisible by the size of any nontrivial finite subgroup of $G$.
\end{enumerate} \end{proposition} \begin{proof} Since $X, Y$ are disjoint and each of them can be expressed as the union of certain $L$-right cosets, it follows that the inclusion in Equation \eqref{Eqn:CondXYL} holds. Note that $$ ((X\cdot g)\cdot (Y\cdot g)^{-1}) \cup ((Y\cdot g) \cdot (X\cdot g)^{-1}) = (X\cdot Y^{-1}) \cup (Y \cdot X^{-1}) $$ for any $g\in G$. Thus the inclusion $$ (X\cdot Y^{-1}) \cup (Y \cdot X^{-1}) \subseteq G\setminus L $$ is an equality if and only if the inclusion $$ ((X\cdot g)\cdot (Y\cdot g)^{-1}) \cup ((Y\cdot g) \cdot (X\cdot g)^{-1}) \subseteq G\setminus L $$ is an equality. Note that for $y, y'\in Y$, the element $y'y^{-1} $ does not belong to the set $((X\cdot y^{-1})\cdot (Y\cdot y^{-1})^{-1}) \cup ((Y\cdot y^{-1}) \cdot (X\cdot y^{-1})^{-1}) $ if and only if $$y'y^{-1} \cdot (Y\cdot y^{-1}) \subseteq Y\cdot y^{-1}, \quad \text{ and } \quad y'y^{-1} \cdot (X\cdot y^{-1}) \subseteq X \cdot y^{-1}, $$ which holds if and only if $$y'y^{-1} \cdot Y = Y.$$ Assume that the first condition holds, i.e., there exists an element $g\in G\setminus L$ lying outside $(X\cdot Y^{-1}) \cup (Y\cdot X^{-1})$. Choose an element $y\in Y$. Note that the set $((X\cdot y^{-1})\cdot (Y\cdot y^{-1})^{-1}) \cup ((Y\cdot y^{-1}) \cdot (X\cdot y^{-1})^{-1}) $ equals $(X\cdot Y^{-1}) \cup (Y\cdot X^{-1})$ and contains $X\cdot y^{-1}$. So $g$ lies in $Y\cdot y^{-1}$, i.e., $g = y'y^{-1}$ for some $y'\in Y$, and $y'\in Y\setminus (Ly)$ since $g\notin L$. We obtain $$y'y^{-1} \cdot Y = Y.$$ Since $y\in Y$ was arbitrary, the first condition implies the second condition. Note that the second condition implies the third condition. Now, assume that the third condition holds, i.e., $$y'y^{-1} \cdot Y = Y$$ holds with $y, y'\in Y$, $Ly \neq Ly'$. Let $L'$ denote the subgroup of $G$ generated by $L$ and $y'y^{-1}$. Since $x\cdot Y = Y$ for any $x\in L$, and $y'y^{-1} \cdot Y = Y$, it follows that $x\cdot Y = Y$ for any $x\in L'$. So $Y$ is the union of certain right cosets of $L'$. Since $Ly \neq Ly'$, it follows that $L$ is properly contained in $L'$. Thus the fourth condition follows. Assume that the fourth condition holds, i.e., $Y$ is the union of certain translates of some subgroup $\ensuremath{\mathcal{L}}$ of $G$ which properly contains $L$. Then the set $ (X\cdot Y^{-1}) \cup (Y \cdot X^{-1}) $ does not intersect $\ensuremath{\mathcal{L}}$. Since $\ensuremath{\mathcal{L}}$ properly contains $L$, the first condition follows. Since $X, Y$ are disjoint and $X \cup Y = G$, the fourth and the fifth conditions are equivalent. This proves the equivalence of the five conditions. Consider the subgroups $L'$ of $G$ such that $L'$ contains $L$ and $Y$ can be expressed as the union of right cosets of $L'$. Let $\ensuremath{\mathscr{L}}$ denote the subgroup of $G$ generated by such subgroups. Note that $Y$ can be expressed as the union of the right cosets of $\ensuremath{\mathscr{L}}$. It follows that $$(X\cdot Y^{-1}) \cup (Y \cdot X^{-1}) \subseteq G\setminus \ensuremath{\mathscr{L}}.$$ By the construction of $\ensuremath{\mathscr{L}}$, it follows that the above inclusion is an equality (if it were proper, then by the equivalence above, $Y$ would be the union of right cosets of a subgroup properly containing $\ensuremath{\mathscr{L}}$, contradicting the construction of $\ensuremath{\mathscr{L}}$), and it also follows that $\ensuremath{\mathscr{L}}$ is the maximal subgroup of $G$ such that $Y$ is a union of its right cosets. Suppose $Y$ is finite and the order of $y'y^{-1}$ is greater than the size of $Y$ for any $y, y'\in Y$ with $y \neq y'$. Assume that the inclusion in Equation \eqref{Eqn:CondXY} is not an equality. So, by the equivalence above applied with $L$ the trivial subgroup, there exist two distinct elements $y_1, y_2 \in Y$ such that $$y_1y_2^{-1} \cdot Y = Y.$$ Let $y_0$ be an element of $Y$. Denote the order of $y_1 y_2^{-1}$ by $r$.
Then the set $Y$ contains the $r$ distinct elements $yy_0, y^2y_0, y^3y_0, \cdots, y^ry_0$, where $y = y_1 y_2^{-1}$, which is impossible, since $r$ is greater than the size of $Y$. Hence, the inclusion in Equation \eqref{Eqn:CondXY} is an equality. Moreover, if $Y$ is finite and the size of $Y$ is not divisible by the size of any nontrivial finite subgroup of $G$, then $Y$ cannot be expressed as the union of certain right cosets of any nontrivial subgroup of $G$. Hence, the inclusion in Equation \eqref{Eqn:CondXY} is an equality. \end{proof} \begin{theorem} \label{Thm:CMinusCSymm} If \begin{enumerate} \item the equality \begin{equation} \label{Eqn:Cond} (C^{-1}(H\setminus C)) \cup ((H\setminus C)^{-1} C) = H \setminus K \end{equation} holds, \item the set $F\cup X$ does not contain a $K$-right coset for any subset $X$ of $H$ of size $\leq 2([G:H] + 1) |E|$, \item the set $E \cdot F^{-1}$ does not contain any $K$-right coset, \end{enumerate} and Assumption \ref{Assumption} holds, then $(C\setminus E) \cup F$ is not a minimal complement in $G$. In particular, $(C\setminus E) \cup F$ is not a minimal complement in $G$ if \begin{enumerate} \item every subgroup $K'$ of $H$ which contains $K$ and such that $H\setminus C$ can be expressed as a union of right cosets of $K'$ is normal in $H$, \item the set $F\cup X$ does not contain a $K$-right coset for any subset $X$ of $H$ of size $\leq 2([G:H] + 1) |E|$, \item the set $E \cdot F^{-1}$ does not contain any $K$-right coset, \end{enumerate} and Assumption \ref{Assumption} holds. \end{theorem} \begin{proof} On the contrary, let us assume that $(C\setminus E) \cup F$ is a minimal left complement to a subset $S$ of $G$. Let $\ell$ denote the index of $H$ in $G$. Let $s_1, \cdots, s_\ell$ be elements of $S$ such that $$H s_i \cap H s_j = \emptyset \quad \text{ for all } i \neq j.$$ For $1\leq i \leq \ell$, let $S_i$ denote the subset of $S$ defined by $$S_i : = \{s\in S\,|\, Hs = Hs_i\}.$$ By the second condition, $(C\setminus E) \cup F$ is a proper subset of $H$. It follows that $S_i$ contains an element other than $s_i$. Let $P, Q$ denote the sets defined by \begin{align*} P & = \{i \,|\, 1\leq i \leq \ell, Cs' \neq Cs_i \text{ for some } s'\in S_i\},\\ Q & = \{i \,|\, 1\leq i \leq \ell, i\notin P\}. \end{align*} For $i\in P$, let $s_i'$ denote an element of $S_i$ such that $Cs_i' \neq Cs_i$. For $i\in Q$, let $s_i'$ denote an element of $S_i$ other than $s_i$. Note that there exists a subset $\ensuremath{\mathcal{C}}$ of $C$ consisting of certain $K$-right cosets such that $\ensuremath{\mathcal{C}}$ contains at most $\ell [(H \setminus C):K]$ many $K$-right cosets and $(\ensuremath{\mathcal{C}} \cup F)\cdot S$ contains $(H\setminus C) \cdot \{s_1, \cdots, s_\ell\}$. Moreover, there exists a subset $\ensuremath{\mathcal{E}}$ of $C\setminus E$ containing at most $|E|$ elements such that $((\ensuremath{\mathcal{C}}\setminus E) \cup \ensuremath{\mathcal{E}} \cup F)\cdot S$ contains $(H\setminus C) \cdot \{s_1, \cdots, s_\ell\}$. Further, the set $$\ensuremath{\mathcal{C}} \cup ((H\setminus C) \cdot \{s_1's_1^{-1}, \cdots, s_\ell' s_\ell^{-1}\})$$ contains at most $2\ell [(H\setminus C):K]$ many $K$-right cosets.
By Assumption \ref{Assumption}, it follows that the set $C$ contains a $K$-right coset $\ensuremath{\mathcal{R}}$ which is disjoint from the set $$\ensuremath{\mathcal{C}} \cup ((H\setminus C) \cdot \{s_1's_1^{-1}, \cdots, s_\ell's_\ell^{-1}\}).$$ Note that there exists a subset $\ensuremath{\mathcal{E}}'$ of $C\setminus E$ containing at most $\ell|E|$ elements such that $(\ensuremath{\mathcal{E}}' \cup F) \cdot S$ contains $E\cdot \{s_1', \cdots, s_\ell'\}$. Further, note that there exists a subset $\ensuremath{\mathcal{E}}''$ of $C\setminus E$ containing at most $\ell|E|$ elements such that $(\ensuremath{\mathcal{E}}'' \cup F) \cdot S$ contains $E\cdot \{s_1, \cdots, s_\ell\}$. Fix an element $\alpha\in H\setminus C$, so that $K\alpha$ is contained in $H\setminus C$; by the second condition, $F\cap K\alpha$ is properly contained in $K\alpha$. Let $h$ be an element of $\ensuremath{\mathcal{R}}$ such that $h\notin (E\cdot F^{-1} ) \alpha$ (if there were no such $h$, then $(E\cdot F^{-1} ) \alpha$ would contain $\ensuremath{\mathcal{R}}$, which would imply that $E\cdot F^{-1}$ contains a $K$-right coset, contradicting the third condition). We claim that $$Hs_i\subseteq (((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup (h\alpha^{-1} F\cap Kh)\cup F ) \cdot S$$ for each $1\leq i \leq \ell$. Since the set $((\ensuremath{\mathcal{C}}\setminus E)\cup \ensuremath{\mathcal{E}} \cup F) \cdot S$ contains $(H\setminus C)\cdot s_i$ and $Kh$ does not intersect with $\ensuremath{\mathcal{C}}\setminus E$, it follows that $(H\setminus C)\cdot s_i$ is contained in $$(((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup F ) \cdot S.$$ Note that $(C\setminus Kh)\cdot s_i$ is contained in $$(E\cdot s_i ) \cup \left( (((C\setminus E) \setminus Kh) \cup F ) \cdot S \right),$$ which is contained in $$(((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}}'' \cup F ) \cdot S.$$ Let $i$ be an element of $P$. Note that $Khs_i$ does not intersect with $(H\setminus C)\cdot \{s_i'\}$. Since $Hs_i = Hs_i'$, it follows that $Khs_i$ is contained in $C\cdot s_i'$. Further, note that $Khs_i$ does not intersect with $Khs_i'$: otherwise $Khs_i = Khs_i'$, and since $K$ is normal in $H$, this would yield $hKs_i = hKs_i'$, i.e., $Ks_i = Ks_i'$, and consequently $K\widetilde h s_i' = K\widetilde h s_i$ would hold for any $\widetilde h\in H$, whence $Cs_i' = Cs_i$, contradicting $i\in P$. So $Khs_i$ is contained in $(C\setminus Kh)\cdot s_i'$. Since $(\ensuremath{\mathcal{E}}' \cup F) \cdot S$ contains $E\cdot \{s_1', \cdots, s_\ell'\}$, it follows that $Khs_i$ is contained in $$(((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}}' \cup F ) \cdot S$$ for $i\in P$. We choose an element $i\in Q$. Note that the set $S_i$ is contained in $Ks_i$. Otherwise, there exists an element $s''\in S_i$ such that $K s'' \neq Ks_i$. Let $\beta$ denote the element $s''s_i^{-1}$ of $H$. Note that $\beta$ does not lie in $K$. By the first condition (i.e., Equation \eqref{Eqn:Cond}), $\beta^{\pm 1} = \gamma ^{-1} \delta$ with $\gamma \in C, \delta \in H\setminus C$. If $\beta = \gamma ^{-1} \delta$, then $C\beta$ intersects with $H\setminus C$, and hence $C\beta s_i $ intersects with $(H\setminus C)s_i$, i.e., $C s'' $ intersects with $(H\setminus C)s_i$, which implies that $Cs_i \neq Cs''$.
If $\beta^{-1} = \gamma ^{-1} \delta$, then $C\beta^{-1}$ intersects with $H\setminus C$, and hence $C\beta^{-1} s'' $ intersects with $(H\setminus C)s''$, i.e., $C s_i $ intersects with $(H\setminus C)s''$, which implies that $Cs'' \neq Cs_i$. This shows that $i\in P$, which is a contradiction. So $S_i$ is contained in $Ks_i$. Since $K\alpha s_i$ is contained in $(C\cup F)\cdot S_i$ and is disjoint from $C\cdot S_i = C\cdot s_i$, it follows that $K\alpha s_i$ is contained in $(F\cap K\alpha)\cdot S_i$, and hence \begin{align*} (h\alpha^{-1} F \cap Kh)\cdot S_i & = (h\alpha^{-1} (F \cap (\alpha h^{-1} Kh)))\cdot S_i \\ & = (h\alpha^{-1} (F \cap (K \alpha h^{-1} h)))\cdot S_i \\ & = (h\alpha^{-1} (F \cap K \alpha))\cdot S_i \\ & = (h\alpha^{-1}) ((F \cap K \alpha)\cdot S_i) \\ & \supseteq (h\alpha^{-1}) K \alpha s_i \\ & = K (h\alpha^{-1}) \alpha s_i \\ & = K h s_i. \end{align*} It follows that $Khs_i$ is contained in $(h\alpha^{-1} F\cap Kh)\cdot S$ for any $i\in Q$. This proves the claim that $$Hs_i\subseteq (((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup (h\alpha^{-1} F\cap Kh) \cup F ) \cdot S$$ for all $1\leq i\leq \ell$. So $((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup (h\alpha^{-1} F\cap Kh) \cup F$ is a left complement to $S$. By the second condition, $((C\setminus E) \setminus Kh) \cup \ensuremath{\mathcal{E}} \cup \ensuremath{\mathcal{E}}' \cup \ensuremath{\mathcal{E}}'' \cup (h\alpha^{-1} F\cap Kh)$ is a proper subset of $C\setminus E$. Hence $(C\setminus E)\cup F$ is not a minimal left complement to $S$. If $(C\setminus E) \cup F$ is a minimal right complement to some subset $T$ of $G$, then $(C^{-1} \setminus E^{-1}) \cup F^{-1}$ is a minimal left complement to $T^{-1}$, which is impossible. Now we establish the second part. Since $H\setminus C$ is the union of finitely many right cosets of $K$, it follows from Proposition \ref{Prop:CMinusCSymmPrimeIndex} that $$(C^{-1}(H\setminus C)) \cup ((H\setminus C)^{-1} C) = H \setminus K'$$ and $H\setminus C$ is the union of certain left cosets of $K'$ for some subgroup $K'$ of $H$ containing $K$ as a finite index subgroup. Since $K'$ contains $K$, it follows from the hypothesis that the set $F\cup X$ does not contain a $K'$-right coset for any subset $X$ of $H$ of size $\leq 2([G:H] + 1) |E|$, and the set $E \cdot F^{-1}$ does not contain any $K'$-right coset. Hence from the first part, the result follows. \end{proof} \begin{remark} By Proposition \ref{Prop:CMinusCSymmPrimeIndex}, the first condition in Theorem \ref{Thm:CMinusCSymm} is equivalent to requiring that $H\setminus C$ cannot be expressed (or equivalently, $C$ cannot be expressed) as the union of certain left cosets of any subgroup $L$ of $H$ satisfying $L\supsetneq K$.
\end{remark} \begin{remark} \label{Remark:CMinusCSymm} Taking $G = \ensuremath{\mathbb{Z}}, H = 2\ensuremath{\mathbb{Z}}, K = 2n\ensuremath{\mathbb{Z}}$, it follows from Theorem \ref{Thm:CMinusCSymm} that for any integer $n\geq 11$ and for any $1\leq a < b \leq n$ with \begin{equation} \label{Eqn:CongruenceOdd} 2(a-b) \not\equiv 0\pmod n, \end{equation} the set $$((\{2, 4, 6, \cdots, 2n\} \setminus \{2a, 2b\}) + 2n\ensuremath{\mathbb{Z}}) \cup F$$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$ for any proper subset $F$ of $\{2a, 2b\} + 2n\ensuremath{\mathbb{Z}}$, since \begin{align*} & ((\{2, 4, 6, \cdots, 2n\} \setminus \{2a, 2b\}) + 2n\ensuremath{\mathbb{Z}})+ (\{-2a,-2b\} + 2n\ensuremath{\mathbb{Z}})\\ & = ((\{2, 4, 6, \cdots, 2n\} \setminus \{0, 2(b-a)\}) + 2n\ensuremath{\mathbb{Z}}) \cup ((\{2, 4, 6, \cdots, 2n\} \setminus \{2(a-b), 0\}) + 2n\ensuremath{\mathbb{Z}}) \\ & = (\{2, 4, 6, \cdots, 2n\} \setminus \{0\}) + 2n\ensuremath{\mathbb{Z}}, \end{align*} where the elements of $\{2, 4, 6, \cdots, 2n\}$ are read modulo $2n$ (so that $0$ is identified with $2n$). Note that Equation \eqref{Eqn:CongruenceOdd} holds when $n$ is odd. One can obtain a more general example than the above. Taking $G = \ensuremath{\mathbb{Z}}, H = 2\ensuremath{\mathbb{Z}}, K = 2n\ensuremath{\mathbb{Z}}$, it follows from Proposition \ref{Prop:CMinusCSymmPrimeIndex} and Theorem \ref{Thm:CMinusCSymm} that for any integers $k\geq 2$ and $n\geq 5k + 1$ such that $n$ is not divisible by any integer $1< i \leq k$, and for any $1\leq a_1 < a_2 < \cdots < a_k \leq n$, the set $$((\{2, 4, 6, \cdots, 2n\} \setminus \{2a_1,2a_2, \cdots, 2a_k\}) + 2n\ensuremath{\mathbb{Z}}) \cup F$$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$ for any proper subset $F$ of $\{2a_1,2a_2, \cdots, 2a_k\} + 2n\ensuremath{\mathbb{Z}}$. \end{remark} When $F$ is the empty set, Theorems \ref{Thm:FAvoidsOneCoset}, \ref{Thm:FContainedInSingleCoset} are equivalent. One obtains the following consequences. \begin{proposition} \label{Prop:Coset} If the number of elements of $K$ is greater than $2([G:H] + 1) |E|$ and Assumption \ref{Assumption} holds, then $C\setminus E$ is not a minimal complement in $G$. \end{proposition} \begin{remark} \label{Remark:Coset} It follows from Proposition \ref{Prop:Coset} that $\{2, 4, 6, 8, 10\} + 12\ensuremath{\mathbb{Z}}$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. Taking $G = H = \ensuremath{\mathbb{R}}, K = \mathbb Q$, it follows from Proposition \ref{Prop:Coset} that the set of irrational numbers is not a minimal complement in $\ensuremath{\mathbb{R}}$. Taking $G = H = \ensuremath{\mathbb{C}}, K = \overline{\mathbb Q}$, it follows from Proposition \ref{Prop:Coset} that the set of transcendental numbers is not a minimal complement in $\ensuremath{\mathbb{C}}$. \end{remark} When $K$ is the trivial subgroup and $E$ is the empty set in Proposition \ref{Prop:Coset}, one obtains the following results. \begin{proposition} \label{Prop:SansK} If $H\setminus C$ is finite and $C$ contains more than $2[G:H] |H\setminus C|$ elements, i.e., the relative quotient of $C$ with respect to $H$ satisfies $$\lambda_H(C) > 2[G:H],$$ then $C$ is not a minimal complement to any subset of $G$. In particular, if $D$ is a proper subset of $G$ such that $G \setminus D$ is finite and $D$ contains more than $2|G\setminus D|$ elements, then $D$ is not a minimal complement to any subset of $G$. \end{proposition} \begin{proof} The first part follows from Proposition \ref{Prop:Coset}. The second part follows from the first part.
\end{proof} \begin{remark} It follows from Proposition \ref{Prop:SansK} that for any positive integer $k$ and for any nonempty finite subset $F$ of $k\ensuremath{\mathbb{Z}}$, the set $k\ensuremath{\mathbb{Z}}\setminus F$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$. \end{remark} In the context of finite groups, one has the following consequence of Proposition \ref{Prop:SansK}. See also \cite[Proposition 17]{AlonKravitzLarson}. \begin{proposition} \label{Prop:Fini} If $G$ is finite and the relative quotient of $C$ with respect to $H$ satisfies $$\lambda_H(C) > 2[G:H],$$ i.e., $C$ is a subset of $H$ satisfying $$ |H| > |C| > 2[G:H] |H\setminus C|,$$ then $C$ is not a minimal complement to any subset of $G$. Equivalently, no subset $C$ of a subgroup $H$ of a finite group $G$ satisfying $$ |H| \frac{2[G:H]} {1 + 2[G:H]} = \frac{2|G||H| } {|H| + 2|G|} < |C| < |H| $$ is a minimal complement to any subset of $G$. In particular, if $C$ is a proper subset of a finite group $G$ containing more than $2|G\setminus C|$ elements, then $C$ is not a minimal complement in $G$. \end{proposition} \begin{proof} The first statement and the third statement follow from Proposition \ref{Prop:SansK}. To obtain the second statement, note that for a subset $C$ of $H$, the inequality $|C| > 2[G:H] |H\setminus C|$ is equivalent to $$ (2[G:H] + 1)|C| > 2[G:H] |C| + 2[G:H] |H\setminus C| = 2[G:H] |H| = 2|G| ,$$ which is equivalent to $$|C| > \frac{2|G||H| } {|H| + 2|G|}.$$ Then the second statement follows from the first statement. \end{proof} \begin{remark} It follows from Proposition \ref{Prop:Fini} that the set $\{\overline 2, \overline 4, \overline 6, \overline 8, \overline{10}\}$ is not a minimal complement in $\ensuremath{\mathbb{Z}}/12\ensuremath{\mathbb{Z}}$. \end{remark} \begin{theorem} \label{Thm:Top} Let $\ensuremath{\mathcal{H}} $ be a finite index subgroup of a topological group $\ensuremath{\mathcal{G}} $. Let $\ensuremath{\mathcal{C}} $ be a proper subset of $\ensuremath{\mathcal{H}} $. Suppose $\ensuremath{\mathcal{H}} \setminus \ensuremath{\mathcal{C}} $ is compact and closed\footnote{Note that compact subsets need not be closed unless the ambient topological space is assumed to be Hausdorff.} in $\ensuremath{\mathcal{H}} $. If $\ensuremath{\mathcal{H}} $ is not a union of finitely many translates of $\ensuremath{\mathcal{H}} \setminus \ensuremath{\mathcal{C}} $, then $\ensuremath{\mathcal{C}} $ is not a minimal complement in $\ensuremath{\mathcal{G}}$. In particular, if $\ensuremath{\mathcal{C}} $ is a proper subset of a topological group $\ensuremath{\mathcal{G}} $ such that $\ensuremath{\mathcal{G}} \setminus \ensuremath{\mathcal{C}} $ is closed and compact, and $\ensuremath{\mathcal{G}} \setminus \ensuremath{\mathcal{C}} $ is ``small'' in the sense that $\ensuremath{\mathcal{G}} $ is not a union of finitely many translates of $\ensuremath{\mathcal{G}} \setminus \ensuremath{\mathcal{C}} $, then $\ensuremath{\mathcal{C}} $ is not a minimal complement in $\ensuremath{\mathcal{G}} $. \end{theorem} \begin{proof} On the contrary, let us assume that $\ensuremath{\mathcal{C}} $ is a minimal left complement to some subset $T$ of $\ensuremath{\mathcal{G}} $. Let $S$ be a subset of $T$ such that $\ensuremath{\mathcal{H}} \cdot S = \ensuremath{\mathcal{G}} $, and $\ensuremath{\mathcal{H}} s_1 \cap \ensuremath{\mathcal{H}} s_2 = \emptyset$ for any two distinct elements $s_1, s_2\in S$ (such a set exists and is finite, since $\ensuremath{\mathcal{H}} $ has finite index in $\ensuremath{\mathcal{G}} $ and $\ensuremath{\mathcal{C}} \cdot T = \ensuremath{\mathcal{G}} $).
Since $\ensuremath{\mathcal{C}}$ is a proper subset of $\ensuremath{\mathcal{H}} $, for each $s\in S$, there exists an element $t_s\in T$ such that $t_s \neq s$ and $\ensuremath{\mathcal{H}} s = \ensuremath{\mathcal{H}} t_s$. Let $\ensuremath{\mathscr{C}} $ denote the compact set $\ensuremath{\mathcal{H}} \setminus \ensuremath{\mathcal{C}} $. Since $\ensuremath{\mathscr{C}} $ is compact and $S$ is finite, there is a nonempty finite subset $T'$ of $T$ such that $\{\ensuremath{\mathcal{C}} \cdot s\}_{s\in T'}$ is an open cover of $\ensuremath{\mathscr{C}} \cdot S$. From the hypothesis, it follows that the subgroup $\ensuremath{\mathcal{H}} $ strictly contains $$(\ensuremath{\mathcal{H}} \cap (\cup_{x\in S\cdot T'^{-1}} \ensuremath{\mathscr{C}} \cdot x)) \cup (\cup_{s\in S} \ensuremath{\mathscr{C}} \cdot t_ss^{-1} ) \cup \ensuremath{\mathscr{C}} ,$$ and hence there is an element $h\in \ensuremath{\mathcal{H}} $ lying outside this union. We claim that $\ensuremath{\mathcal{C}} \setminus\{h\}$ is a left complement to $T$. Note that $\ensuremath{\mathcal{C}} \setminus \{h\}$ is nonempty. It suffices to show that $\ensuremath{\mathcal{H}} s$ is contained in $(\ensuremath{\mathcal{C}} \setminus \{h\})\cdot T$ for each $s\in S$. Let $k$ be an element of $\ensuremath{\mathscr{C}} $. Then $ks$ is equal to $ct'$ for some $c\in \ensuremath{\mathcal{C}} , t'\in T'$. So $c$ lies in the above union and hence $h \neq c$. Thus $ks$ lies in $(\ensuremath{\mathcal{C}} \setminus \{h\})\cdot T$. So $\ensuremath{\mathscr{C}} \cdot s$ is contained in $(\ensuremath{\mathcal{C}} \setminus \{h\})\cdot T$. Note that $hs$ lies in $\ensuremath{\mathcal{H}} t_s$ and does not lie in $\ensuremath{\mathscr{C}} t_s$. Thus $hs$ lies in $\ensuremath{\mathcal{C}} \cdot t_s$. Since $s\neq t_s$, it follows that $hs$ lies in $(\ensuremath{\mathcal{C}} \setminus \{h\})\cdot t_s$. Clearly, $(\ensuremath{\mathcal{C}} \setminus \{h\}) s$ is contained in $(\ensuremath{\mathcal{C}} \setminus \{h\})\cdot T$. So $\ensuremath{\mathcal{H}} \cdot s$ is contained in $(\ensuremath{\mathcal{C}} \setminus \{h\})\cdot T$. Thus $\ensuremath{\mathcal{C}} \setminus \{h\}$ is a left complement to $T$. Note that $h$ lies in $\ensuremath{\mathcal{H}} $ and does not lie in $\ensuremath{\mathscr{C}} $, so $h$ lies in $\ensuremath{\mathcal{C}} $ and $\ensuremath{\mathcal{C}} \setminus \{h\}$ is a proper subset of $\ensuremath{\mathcal{C}} $. Hence $\ensuremath{\mathcal{C}} $ is not a minimal left complement to $T$. Similarly, assuming $\ensuremath{\mathcal{C}} $ to be a minimal right complement to some subset of $\ensuremath{\mathcal{G}} $ will lead to a contradiction. Hence $\ensuremath{\mathcal{C}} $ is not a minimal complement in $\ensuremath{\mathcal{G}} $. The second statement follows from the first statement. \end{proof} \begin{remark} \label{Remark:Top} From Theorem \ref{Thm:Top}, it follows that the set of real numbers having absolute value greater than one is not a minimal complement in $\ensuremath{\mathbb{R}}$. \end{remark} \begin{corollary} Let $H$ be a finite index subgroup of an infinite group $G$ and let $C$ be a proper subset of $H$ such that $H\setminus C$ is finite. Then $C$ is not a minimal complement in $G$. \end{corollary} \begin{proof} If $G$ is endowed with the discrete topology, then this corollary follows from Theorem \ref{Thm:Top}. It can also be seen as an immediate consequence of Proposition \ref{Prop:SansK} (and also of Proposition \ref{Prop:Coset}). \end{proof} \begin{corollary} For any positive integer $k$ and for any nonempty finite subset $F$ of $k\ensuremath{\mathbb{Z}}$, the set $k\ensuremath{\mathbb{Z}}\setminus F$ is not a minimal complement in $\ensuremath{\mathbb{Z}}$.
\end{corollary} It turns out that the set of irrational numbers is not a minimal complement in $\ensuremath{\mathbb{R}}$ (see Remark \ref{Remark:Coset}). It is natural to ask whether the set of irrational numbers, with a countable number of points removed, is a minimal complement in $\ensuremath{\mathbb{R}}$. Theorem \ref{Thm:Top} does not seem to shed any light on this question, since the complement of such a set, being a countable dense subset of $\ensuremath{\mathbb{R}}$, is neither closed nor compact in the Euclidean topology. \begin{theorem} \label{Thm:Cardi} Let $\ensuremath{\mathcal{H}}$ be a subgroup of a group $\ensuremath{\mathcal{G}}$. Let $\ensuremath{\mathcal{C}}$ be a proper subset of $\ensuremath{\mathcal{H}}$. Suppose Assumption \ref{Assumption} holds in the sense that no map from $\{0, 1\} \times (\ensuremath{\mathcal{H}}\setminus \ensuremath{\mathcal{C}}) \times (\ensuremath{\mathcal{G}}/\ensuremath{\mathcal{H}})$ to $\ensuremath{\mathcal{C}}$ is surjective. Then $\ensuremath{\mathcal{C}}$ is not a minimal complement in $\ensuremath{\mathcal{G}}$. \end{theorem} \begin{proof} On the contrary, let us assume that $\ensuremath{\mathcal{C}}$ is a minimal left complement to a subset $\ensuremath{\mathcal{S}}$ of $\ensuremath{\mathcal{G}}$. Let $\{s_i\}_{i\in \Lambda}$ be elements of $\ensuremath{\mathcal{S}}$ such that $\ensuremath{\mathcal{G}} = \cup_{i\in \Lambda} \ensuremath{\mathcal{H}} s_i$ and $$\ensuremath{\mathcal{H}} s_i \cap \ensuremath{\mathcal{H}} s_j = \emptyset \quad \text{ for all } i \neq j.$$ Since $\ensuremath{\mathcal{C}}$ is a proper subset of $\ensuremath{\mathcal{H}}$, it follows that for each $i\in \Lambda$, there exists an element $s_i'$ of $\ensuremath{\mathcal{S}}$ such that $\ensuremath{\mathcal{H}} s_i' = \ensuremath{\mathcal{H}} s_i$ and $\ensuremath{\mathcal{C}} s_i' \neq \ensuremath{\mathcal{C}} s_i$ (otherwise $\ensuremath{\mathcal{H}} s_i$ would be contained in $\ensuremath{\mathcal{C}} s_i$, which is a proper subset of $\ensuremath{\mathcal{H}} s_i$). For each $(a, i) \in (\ensuremath{\mathcal{H}} \setminus \ensuremath{\mathcal{C}}) \times \Lambda$, choose an element $(c_{(a, i)}, s_{(a, i)})$ in $\ensuremath{\mathcal{C}} \times \ensuremath{\mathcal{S}}$ such that $c_{(a, i)}s_{(a, i)} = as_i$. Consider the map $$(\ensuremath{\mathcal{H}} \setminus \ensuremath{\mathcal{C}}) \times \Lambda \to \ensuremath{\mathcal{C}}$$ defined by $$(a, i) \mapsto c_{(a, i)}.$$ Denote the image of this map by $\ensuremath{\mathscr{C}}$. Note that $\ensuremath{\mathscr{C}}\cdot \ensuremath{\mathcal{S}}$ contains $(\ensuremath{\mathcal{H}}\setminus \ensuremath{\mathcal{C}}) \cdot \{s_i\,|\, i\in \Lambda\}$. Consider the map $$\{0, 1\} \times (\ensuremath{\mathcal{H}}\setminus \ensuremath{\mathcal{C}}) \times \Lambda \to \ensuremath{\mathcal{C}}$$ defined by $$ (*, a, i) \mapsto \begin{cases} c_{(a, i)} & \text{ if } * = 0, \\ as_i's_i^{-1} & \text{ if } * = 1. \end{cases} $$ By the hypothesis, $\ensuremath{\mathcal{C}}$ contains an element $h$ which lies outside the image of this map, i.e., $h$ avoids the set $$\ensuremath{\mathscr{C}} \cup ((\ensuremath{\mathcal{H}}\setminus \ensuremath{\mathcal{C}}) \cdot \{s_i's_i^{-1}\,|\, i\in \Lambda \}).$$ We claim that $$\ensuremath{\mathcal{H}} s_i\subseteq (\ensuremath{\mathcal{C}} \setminus \{h\} ) \cdot \ensuremath{\mathcal{S}}$$ for any $i\in \Lambda$.
Since $\ensuremath{\mathscr{C}}$ is contained in $\ensuremath{\mathcal{C}}$, the set $\ensuremath{\mathscr{C}} \cdot \ensuremath{\mathcal{S}}$ contains $(\ensuremath{\mathcal{H}}\setminus \ensuremath{\mathcal{C}})\cdot s_i$; as $h$ does not lie in $\ensuremath{\mathscr{C}}$, it follows that $(\ensuremath{\mathcal{H}}\setminus \ensuremath{\mathcal{C}})\cdot s_i$ is contained in $(\ensuremath{\mathcal{C}} \setminus \{h\})\cdot \ensuremath{\mathcal{S}}$. Note that $hs_i$ does not lie in $(\ensuremath{\mathcal{H}}\setminus \ensuremath{\mathcal{C}}) s_i'$. Since $\ensuremath{\mathcal{H}} s_i = \ensuremath{\mathcal{H}} s_i'$, it follows that $hs_i$ belongs to $\ensuremath{\mathcal{C}} \cdot s_i'$. Hence $hs_i$ lies in $(\ensuremath{\mathcal{C}} \setminus \{h\}) \cdot s_i'$. Moreover, $(\ensuremath{\mathcal{C}} \setminus \{h\}) \cdot s_i$ is contained in $(\ensuremath{\mathcal{C}} \setminus \{h\}) \cdot \ensuremath{\mathcal{S}}$. This proves the claim that $\ensuremath{\mathcal{H}} s_i$ is contained in $(\ensuremath{\mathcal{C}} \setminus \{h\}) \cdot \ensuremath{\mathcal{S}}$ for any $i\in \Lambda$. So $\ensuremath{\mathcal{C}} \setminus \{h\}$ is a left complement to $\ensuremath{\mathcal{S}}$, and it is a proper subset of $\ensuremath{\mathcal{C}}$ since $h$ lies in $\ensuremath{\mathcal{C}}$. Hence $\ensuremath{\mathcal{C}}$ is not a minimal left complement to $\ensuremath{\mathcal{S}}$. If $\ensuremath{\mathcal{C}}$ is a minimal right complement to some subset $\ensuremath{\mathcal{T}}$ of $\ensuremath{\mathcal{G}}$, then $\ensuremath{\mathcal{C}}^{-1}$ is a minimal left complement to $\ensuremath{\mathcal{T}}^{-1}$, which is impossible. \end{proof} \begin{remark} \label{Remark:Cardi} It follows from Theorem \ref{Thm:Cardi} that given any uncountable group $G$, no proper subset $C$ of $G$ having countable set-theoretic complement in $G$ is a minimal complement in $G$. In particular, \begin{enumerate} \item the set of irrational numbers, with a countable number of points removed, is not a minimal complement in $\ensuremath{\mathbb{R}}$, \item the set of transcendental numbers, with a countable number of points removed, is not a minimal complement in $\ensuremath{\mathbb{C}}$. \end{enumerate} \end{remark} \section{On robust non-minimal complements} \label{Sec:RobustNonMac} In an abelian group, a minimal complement is often called a minimal additive complement, abbreviated as MAC \cite{BurcroffLuntzlara}. Following \cite[Definition 5]{BurcroffLuntzlara}, one can consider the notion of robust MAC and robust non-MAC in any abelian group $G$. \begin{definition} \label{Defn:Robust} Let $G$ be an abelian group. A subset $C$ of $G$ is said to be a \textnormal{robust MAC} if any non-empty subset $D$ of $G$ having finite symmetric difference with $C$ is a MAC in $G$. A subset $C$ of $G$ is said to be a \textnormal{robust non-MAC} if any non-empty subset $D$ of $G$ having finite symmetric difference with $C$ is a non-MAC in $G$. \end{definition} Kwon proved that the finite subsets of the integers are robust MACs \cite[Theorem 9]{Kwon}. In \cite{CoMin1}, the authors showed that the finite subsets of any free abelian group of rank $\geq 1$ are robust MACs. Alon--Kravitz--Larson established that the finite subsets in any infinite abelian group are robust MACs \cite[Theorem 2]{AlonKravitzLarson}. Burcroff--Luntzlara proved results which provide several examples of infinite subsets of $\ensuremath{\mathbb{Z}}$ that are robust MACs and several examples of infinite subsets of $\ensuremath{\mathbb{Z}}$ that are robust non-MACs \cite[Theorems 3, 5]{BurcroffLuntzlara}. As a corollary of Theorem \ref{Thm:QLeavesLAppears}, one also obtains examples of robust non-MACs; see the corollary following the computational aside below.
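The non-minimality statements of Section \ref{Sec:NonMinComp} can also be verified directly by exhaustive search in small finite groups. The following minimal sketch (in Python, using only the standard library; it is included purely as an illustration and is not part of the formal development) checks the instance of Proposition \ref{Prop:Fini} recorded earlier: the set $C = \{\overline 2, \overline 4, \overline 6, \overline 8, \overline{10}\}$ is a complement to various subsets $S$ of $\ensuremath{\mathbb{Z}}/12\ensuremath{\mathbb{Z}}$, but it is a minimal complement to none of them.

\begin{verbatim}
# Exhaustive check in Z/12Z: whenever C + S = Z/12Z, some proper
# subset of C is already a complement to S, so C is never minimal.
from itertools import combinations

n = 12
G = set(range(n))
C = {2, 4, 6, 8, 10}

def is_complement(A, S):
    # A is a complement to S iff the sumset A + S covers Z/nZ.
    return {(a + s) % n for a in A for s in S} == G

found = 0
for r in range(1, n + 1):
    for S in combinations(range(n), r):
        if not is_complement(C, S):
            continue
        found += 1
        # Since supersets of complements are complements, it suffices
        # to test the subsets obtained by deleting a single element.
        assert any(is_complement(C - {c}, S) for c in C)
print(found, "sets S admit C as a complement; C is minimal for none.")
\end{verbatim}

A failure of the assertion would exhibit a subset $S$ to which $C$ is a minimal complement; no such failure occurs, in accordance with Proposition \ref{Prop:Fini}.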
\begin{corollary} Suppose $C$ is a subset of $\ensuremath{\mathbb{Z}}$ which is the union of certain translates of a nonzero subgroup $K$ of $\ensuremath{\mathbb{Z}}$. If $\lambda_\ensuremath{\mathbb{Z}}(C) > 2$, then $(C\setminus E) \cup F$ is a robust non-MAC in $\ensuremath{\mathbb{Z}}$ for any finite subset $E$ of $C$ and for any subset $F$ of $\ensuremath{\mathbb{Z}}\setminus C$ such that $(\ensuremath{\mathbb{Z}}\setminus C)\setminus F$ is infinite. \end{corollary} Moreover, there are infinite sets, neither bounded below nor bounded above, which do not satisfy the hypothesis of \cite[Theorem 5]{BurcroffLuntzlara} and which are nevertheless robust non-MACs. Indeed, consider the set $$(\{1, 2, 3\} + 4\ensuremath{\mathbb{Z}}) \cup \ensuremath{\mathcal{F}} \cup \ensuremath{\mathcal{F}}_1$$ for any subset $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{F}}_0$, where the sets $\ensuremath{\mathcal{F}}_0, \ensuremath{\mathcal{F}}_1$ are defined by $$ \ensuremath{\mathcal{F}}_i = 4\ensuremath{\mathbb{Z}} \setminus \{ (4n)! + 4k \,|\, n\geq 1, n\equiv i \,(\mathrm{mod}\, 2), 0 \leq k \leq n \} $$ for $i = 0, 1$. Note that $(\{1, 2, 3\} + 4\ensuremath{\mathbb{Z}}) \cup \ensuremath{\mathcal{F}} \cup \ensuremath{\mathcal{F}}_1$ does not satisfy the hypothesis of \cite[Theorem 5]{BurcroffLuntzlara}. From Theorem \ref{Thm:FContainedInSingleCoset}, it follows that it is a robust non-MAC. Since $\ensuremath{\mathcal{F}}_0$ is an infinite set, it has uncountably many subsets. Thus Theorem \ref{Thm:FContainedInSingleCoset} yields examples of uncountably many robust non-MACs in $\ensuremath{\mathbb{Z}}$, none of them satisfying the hypothesis of \cite[Theorem 5]{BurcroffLuntzlara}. This provides a partial answer to \cite[Question 2]{BurcroffLuntzlara}. More generally, we establish the following Theorem \ref{Thm:RobustNonMac}, which shows in particular that any finitely generated abelian group of positive rank and any free abelian group of positive rank contain uncountably many robust non-MACs. Moreover, it follows from Theorem \ref{Thm:Cardi} that given any uncountable abelian group $G$, any proper subset $C$ of $G$ having countable set-theoretic complement in $G$ is a robust non-MAC. In the context of groups which are not assumed to be abelian, the minimal complements are more precisely called minimal multiplicative complements to indicate the underlying structure of the ambient group. The minimal multiplicative complements are abbreviated as MMCs. In the spirit of robust MACs and non-MACs, one can also define the notions of robust MMCs and robust non-MMCs. \begin{definition} \label{Defn:RobustMMC} Let $G$ be a group. A subset $C$ of $G$ is said to be a \textnormal{robust MMC} if any non-empty subset $D$ of $G$ having finite symmetric difference with $C$ is a MMC in $G$. A subset $C$ of $G$ is said to be a \textnormal{robust non-MMC} if any non-empty subset $D$ of $G$ having finite symmetric difference with $C$ is a non-MMC in $G$. \end{definition} Note that in the context of abelian groups, the MMCs (resp. robust MMCs, robust non-MMCs) coincide with the MACs (resp. robust MACs, robust non-MACs). \begin{theorem} \label{Thm:RobustNonMac} Any group that admits $\ensuremath{\mathbb{Z}}$ as a quotient contains uncountably many robust non-MMCs. \end{theorem} \begin{proof} Let $G$ be a group admitting $\ensuremath{\mathbb{Z}}$ as a quotient. Then there exists a normal subgroup $G'$ of $G$ such that $G/G'$ is isomorphic to $\ensuremath{\mathbb{Z}}$. Let $p\geq 5$ be a prime number.
Let $a$ be a positive integer satisfying $p \geq 3a + 1$, and let $\ensuremath{\mathscr{C}}$ denote a subset of $\{1, 2, \cdots, p\}$ of size $p-a$. Denote the set $\{1, 2, \cdots, p\}\setminus \ensuremath{\mathscr{C}}$ by $\ensuremath{\mathscr{C}}'$. Let $\psi: G \to \ensuremath{\mathbb{Z}}$ denote the composition of the quotient map $G \to G/G'$ with a group isomorphism $G/G' \to \ensuremath{\mathbb{Z}}$. Let $K$ denote the subgroup $\psi^{-1} (p\ensuremath{\mathbb{Z}})$ of $G$. Note that $K$ is a normal subgroup of $G$. For any subset $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathscr{C}}' + p\ensuremath{\mathbb{N}}$, the subset $\psi^{-1} ((\ensuremath{\mathscr{C}} + p\ensuremath{\mathbb{Z}} )\cup \ensuremath{\mathcal{F}})$ is a robust non-MMC by Proposition \ref{Prop:CMinusCSymmPrimeIndex} and Theorem \ref{Thm:CMinusCSymm}. Since the set $\ensuremath{\mathscr{C}}' + p\ensuremath{\mathbb{N}}$ contains infinitely many elements, it has uncountably many subsets. Thus the group $G$ contains uncountably many robust non-MMCs. \end{proof} \begin{corollary} For any number field $K$ of degree $\geq 3$, the group $\ensuremath{\operatorname{GL}}_n(\ensuremath{\mathcal{O}}_K)$ contains uncountably many robust non-MMCs, where $\ensuremath{\mathcal{O}}_K$ denotes the ring of integers of $K$. \end{corollary} \begin{proof} Note that the group $\ensuremath{\operatorname{GL}}_n(\ensuremath{\mathcal{O}}_K)$ admits $\ensuremath{\mathcal{O}}_K^\times$ as a quotient (via the determinant map). Since $K$ has degree $\geq 3$, by Dirichlet's unit theorem, $\ensuremath{\mathcal{O}}_K^\times$ admits $\ensuremath{\mathbb{Z}}$ as a quotient. Hence $\ensuremath{\operatorname{GL}}_n(\ensuremath{\mathcal{O}}_K)$ admits $\ensuremath{\mathbb{Z}}$ as a quotient. By Theorem \ref{Thm:RobustNonMac}, the result follows. \end{proof} \begin{corollary} Any finitely generated abelian group of positive rank and any free abelian group of positive rank contain uncountably many robust non-MACs. \end{corollary} \begin{proof} This follows from Theorem \ref{Thm:RobustNonMac}. \end{proof} It seems plausible that any infinite abelian group contains uncountably many robust non-MACs. We conclude this section with the following remarks. \begin{remark} If a subset $A$ of $H$ (for example, the sets of the form $C\cup F$ considered in Section \ref{Sec:NonMinComp}) is not a minimal left complement in $G$, then neither is any of its left translates, i.e., the sets of the form $g\cdot A$ for $g\in G$. \end{remark} \begin{remark} Note that the subsets which are shown to be non-minimal complements are not a part of any co-minimal pair\footnote{A pair $(A, B)$ of nonempty subsets of a group $G$ is called a \textit{co-minimal pair} if $A$ is a minimal left complement to $B$ and $B$ is a minimal right complement to $A$ \cite[Definition 1.1]{CoMin1}.}. By Theorem \ref{Thm:RobustNonMac}, any finitely generated abelian group of positive rank contains uncountably many infinite subsets which are robust non-MACs and, in particular, not minimal complements. We contrast this result with \cite[Theorem 2.2]{CoMin3}, which states that any such group also contains uncountably many infinite subsets which admit minimal complements. In \cite{CoMin3}, we considered lacunary sequences\footnote{A sequence $t_0 < t_1 < t_2 < \cdots $ of elements of $\ensuremath{\mathbb{Z}}^d$ is said to be \textit{lacunary} if $t_0 > 0$ and for some positive integer $\lambda \geq 2$, $t_n > \lambda t_{n-1}$ for any $n\geq 1$, where ``$>$'' denotes the lexicographic order on $\ensuremath{\mathbb{Z}}^d$.
} in $\ensuremath{\mathbb{Z}}^d$ for $d\geq 1$, and proved that ``a majority'' of such sequences are a part of a co-minimal pair, and in particular, they are minimal complements \cite[Theorem 2.1]{CoMin3}. It also follows that any such sequence remains a minimal complement even after the removal of finitely many points. It would be interesting to investigate whether ``a majority'' of such sequences are robust MACs. It follows from \cite[Theorem 4]{BurcroffLuntzlara} that this is indeed the case when $d = 1$. \end{remark} It would be interesting to consider the situations when the sets $C, F$ (as in Section \ref{Sec:NonMinComp}) are somewhat modified. For instance, \begin{enumerate} \item What happens if $C$ is taken to be a ``large'' set in $H$ and $F$ is taken to be a ``small'' set lying outside $H$ (or more generally, to be a ``small'' set in $G\setminus C$)? \item What happens if $C$ intersects with several right cosets of $H$ in $G$ and the intersection of $C$ with each such (or one such) right coset is ``large''? \end{enumerate} \section{Acknowledgements} The first author is supported by the ISF Grant no. 662/15. He wishes to thank the Department of Mathematics at the Technion where a part of the work was carried out. The second author would like to acknowledge the Initiation Grant from the Indian Institute of Science Education and Research Bhopal, and the INSPIRE Faculty Award from the Department of Science and Technology, Government of India.
{ "timestamp": "2020-07-17T02:22:37", "yymm": "2007", "arxiv_id": "2007.08507", "language": "en", "url": "https://arxiv.org/abs/2007.08507" }
\section{Introduction} \label{sec:intro} Stellar mass (M$_{*}$) is one of the fundamental physical properties of a galaxy: it traces the star formation and evolution history of the galaxy, and it is crucial for decomposing the contributions of stars and dark matter to the dynamics of the galaxy. The stellar population synthesis (SPS) technique is an efficient way to estimate the M$_{*}$ of a galaxy, by fitting SPS models, which rely on the extant stellar evolution theory, to galaxy data, either in the form of observed multi-band spectral energy distributions (SEDs), spectra, or spectral indices of the galaxy. Such fitting methods require SED or spectroscopic data. However, not all galaxies have multi-band imaging or spectroscopic data, so a simpler color-based method is more practical for estimating the M$_{*}$ of a galaxy. The pioneering works of \citet{Bell2001} (hereafter Bdj01) and \citet{Bell2003} (hereafter B03) defined relations between color and the stellar mass-to-light ratio ($\gamma_{*}$) of galaxies in the form of equation (1). \begin{eqnarray} {\rm log} \ \gamma_{*}^{j} &=& a_{j} + b_{j}\times {\rm color} \end{eqnarray} The $\gamma_{*}$ of a galaxy can be predicted from such a color--stellar mass-to-light ratio relation (CMLR), and subsequently multiplied by the galaxy luminosity to yield the M$_{*}$ of the galaxy. The CMLR method requires minimal data and is hence expedient in all applications related to M$_{*}$ estimates. Since then, a variety of CMLRs have emerged. A number of these CMLRs are calibrated on model galaxies (e.g., \citet{Gallazzi2009}, \citet{Zibetti2009} (hereafter Z09), \citet{Into2013} (hereafter IP13), \citet{Roediger2015} (hereafter RC15)), and some are calibrated on samples of observed galaxies, such as spiral galaxies (e.g., B03, \citet{Portinari2004}, \citet{Taylor2011}), dwarf galaxies (e.g., \citet{Herrmann2016}), and low surface brightness galaxies (e.g., \citet{Du2020}). The CMLR method can recover the $\gamma_{*}$ of a galaxy from a single color to within an accuracy of $\sim$0.1-0.2 dex \citep{Bell2001}, and on average it produces M$_{*}$ equivalent to those derived from the SED fitting method \citep{Roediger2015, Du2020}. However, \citet{McGaugh2014} (hereafter MS14) found that an existing CMLR tends to give different M$_{*}$ values for the same galaxy when it is applied in different photometric bands. Based on a sample of disk galaxies, they re-calibrated several representative CMLRs in the Johnson-Cousins filter system so that each relation ultimately produces internally self-consistent M$_{*}$ for the same galaxy when applied in the $V$, $I$, $K$, and [3.6] bands (with $B-V$ as the color indicator). Inspired by MS14, in this work we extend their approach from the Johnson-Cousins bands to the SDSS optical and near-infrared (NIR) bands, based on a sample of low surface brightness galaxies (LSBGs): we first examine the internal self-consistency of a CMLR in M$_{*}$ estimates from different bands, and then re-calibrate the CMLR so that it gives internally self-consistent M$_{*}$ estimates from different bands for the same galaxy. We describe the data in Section \ref{sec:data} and introduce the five representative CMLR models in Section \ref{sec:SPS}. In Section \ref{sec:mstar}, we estimate M$_{*}$ in different bands for the sample with these CMLRs, compare internally the M$_{*}$ values from different bands for each individual CMLR, and then compare externally the M$_{*}$ predicted by different CMLRs.
In Section \ref{sec:m2l}, each individual CMLR is re-calibrated to be internally self-consistent in M$_{*}$ estimates for the sample when applied in different bands from the optical to the NIR. We present a discussion in Section \ref{sec:discus}, including the possible second color term for the re-calibrated relations in Section \ref{sec:discus_sub1}, the error budget of the $\gamma_{*}$ predicted by the re-calibrated relations in Section \ref{sec:errors}, a comparison between the originally predicted $\gamma_{*}$ and those predicted by the re-calibrated relations in Section \ref{sec:discus_sub2}, and a comparison between the re-calibrated relations in this work and those of MS14 in Section \ref{sec:MS14}. A summary and conclusions are given in $\S$~\ref{sec:conclusion}. Throughout this work, magnitudes are in the AB magnitude system, and the galaxy distances used to calculate absolute magnitudes and luminosities are taken directly from the Arecibo Legacy Fast ALFA Survey (ALFALFA) catalogue \citep{Haynes2018}, which adopts a Hubble constant of $H_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$. \section{Data} \label{sec:data} \subsection {LSBG Sample} \label{sec:sample} Since low surface brightness galaxies (LSBGs) are typically gas-rich, we have defined a sample of LSBGs from a combination of the $\alpha$.40 H{\sc{i}} survey \citep{Haynes2011} and the SDSS DR7 photometric survey \citep{Abazajian2009}; the selection of this sample has been reported in detail in \citet{Du2015} and \citet{Du2019}. This sample includes 1129 LSBGs which have $B$-band central surface brightnesses ($\mu_{0,B}$) fainter than 22.5 mag arcsec$^{-2}$ ($\mu_{0,B} >$ 22.5), and it extends the parameter space covered by previous LSBG samples to fainter luminosity, lower H{\sc{i}} gas mass, and bluer color (Figure \ref{fig:property}). In color, the full range of this sample is -0.8 $< g-r <$ 1.7 (with a peak at 0.28 and a 1$\sigma$ scatter of 0.21), with 95.4$\%$ within -0.14 $< g-r <$ 0.70 and 68.3$\%$ within 0.07 $< g-r <$ 0.49. In absolute magnitude, the full range of the sample spans over 10 mag, with 95.4$\%$ within -21 $<$ M$_{r} <$ -13 mag and 68.3$\%$ within -19 $<$ M$_{r} <$ -15 mag. In terms of luminosity, it is composed of dwarf (M$_{B} \geq$ -17.0~mag; 54$\%$ of the sample), moderate-luminosity (-19.0 $<$ M$_{B} <$ -17.0~mag; 43$\%$), and giant galaxies (M$_{B} \leq$ -19.0~mag; 3$\%$). In terms of morphology, it is dominated by late-type spiral and irregular galaxies (Sd/Sm/Im; 84.1$\%$ of the sample), followed by early- and middle-type spiral galaxies (Sa/Sab/Sb/Sbc/Sc/Scd; 13.4$\%$), and finally early-type galaxies (E/S0; 0.2$\%$) \citep{Du2019}. In this work, we intend to re-calibrate several literature CMLRs (Section~\ref{sec:SPS}) based on this sample of LSBGs. \begin{figure}[ht!] \centering \includegraphics[width=0.7\textwidth]{sample_properties} \caption{Properties of the LSBG sample. In panels (a) - (f), the distributions of the $r$-band absolute magnitude (M(r)), the $r$-band luminosity in logarithm (log L(r)), the $g-r$ color, the H{\sc{i}} mass in logarithm (log M$_{H_{I}}$/M$_{\odot}$), the $B$-band central surface brightness ($\mu_{0,B}$), and the effective radius (R$_{50,r}$) are shown, respectively. Panels (g) and (h) show $g-r$ versus H{\sc{i}} mass, and M(r) versus redshift, respectively.} \label{fig:property} \end{figure} \subsection {Photometry}\label{sec:phot} The optical images ($griz$ bands) of the sample were downloaded from SDSS DR7 \citep{Abazajian2009}, and the NIR images ($JHK$ bands) were obtained from UKIDSS \citep{Lawrence2007}.
For each image, we subtracted the sky background, masked the bright contaminating objects around the target galaxy, and replaced the masked pixels with the mean value of the surrounding background pixels. The magnitudes of the target galaxy in these bands were then measured in \citet{Du2020} with SExtractor \citep{Bertin1996} in dual-image mode, in which the $r$-band image is regarded as the reference and is used to detect the galaxy source and define the photometric aperture (center, size, and shape). Images of the same galaxy in all other bands are photometrically measured within the same aperture defined in the $r$ band. The measured magnitudes in all bands are corrected for Galactic extinction using the prescription of \citet{Schlafly2011}. As LSBGs are poor in dust content, we do not correct the magnitudes for internal extinction. Finally, the magnitudes in all bands were converted to the AB magnitude system. We adopt the distance given in the ALFALFA catalogue \citep{Haynes2018} to compute the absolute magnitude and luminosity in each of the $griz$JHK bands. As the aperture definition for each galaxy does not vary between wavelength bands, this measurement yields internally consistent colors. \begin{table*}\footnotesize \caption{Original CMLRs based on $g$-$r$ color } \label{tab:cmlr1} \begin{center} \begin{tabular}{lcccccccccccccc} \hline \hline model & IMF & TP-AGB& a$_{r}$ & b$_{r}$ & a$_{i}$ & b$_{i}$ & a$_{z}$ & b$_{z}$ & a$_{\rm J}$ & b$_{\rm J}$ & a$_{\rm H}$ & b$_{\rm H}$ &a$_{\rm K}$ & b$_{\rm K}$\\ \hline \hline B03& `diet' Salpeter &Girardi& -0.306& 1.097 & -0.222& 0.864&-0.223& 0.689&-0.172& 0.444&-0.189& 0.266&-0.209& 0.197 \\ IP13 & Kroupa &Marigo& -0.663&1.530&-0.633& 1.370&-0.665& 1.292&-0.732& 1.139&-0.880& 1.128&-0.945& 1.153 \\ Z09 &Chabrier &Marigo& -0.840 & 1.654&-0.845& 1.481&-0.914& 1.382&-1.007& 1.225&-1.147& 1.144&-1.257& 1.119\\ RC15(BC03)&Chabrier&Girardi& -0.792 & 1.629&-0.771& 1.438&-0.796& 1.306& -- & -- &-0.920& 0.980&-- & -- \\ RC15(FSPS)&Chabrier&Marigo&-0.647 &1.497 &-0.602& 1.281&-0.583& 1.102&-- & -- &-0.605& 0.672& -- & --\\ \hline \hline \multicolumn{15}{p{1.0\textwidth}}{Notes. Stellar mass-to-light ratios ($\gamma_{*}$) in the SDSS $r$, $i$, $z$ and NIR J, H, K bands are given by the CMLRs of \citet[B03]{Bell2003}, \citet[IP13]{Into2013}, \citet[Z09]{Zibetti2009}, \citet{Roediger2015} based on the BC03 model (RC15(BC03)), and \citet{Roediger2015} based on the FSPS model (RC15(FSPS)), in the form of log $\gamma_{*}^{j}$ = $a_{j}$+$b_{j}\times(g - r)$. For reference, the initial mass function (IMF) and the TP-AGB prescription adopted by each CMLR model are also given. For the IMF, `Kroupa' denotes the \citet{Kroupa1998} IMF, and `Chabrier' denotes the \citet{Chabrier2003} IMF. For the TP-AGB, `Girardi' denotes the simplified TP-AGB prescriptions \citep[e.g.][]{Girardi1998,Girardi2000,Girardi2002}, while `Marigo' denotes the relatively new TP-AGB prescriptions \citep[e.g.][]{Marigo2007,Marigo2008}, which incorporate a larger number of TP-AGB stars.} \end{tabular} \end{center} \end{table*} \begin{figure}[ht!]
\centering \includegraphics[width=1.0\textwidth]{m2l_color_others_v4} \caption{Relation between $g$ - $r$ color and log $\gamma_{*}^{j}$ ($j$ = $g$, $r$, $i$, $z$, J, H, and K bands) from the CMLRs of B03 assuming a `diet' Salpeter IMF (black circles), Z09 assuming a \citet{Chabrier2003} IMF (green plus signs), IP13 assuming a \citet{Kroupa1998} IMF (red triangles), RC15(BC03) assuming a \citet{Chabrier2003} IMF (purple open stars), and RC15(FSPS) assuming a \citet{Chabrier2003} IMF (blue filled stars).}\label{fig:MLCR} \end{figure} \section{CMLR Models}\label{sec:SPS} In the pioneering work of MS14, the CMLRs of B03, Z09, IP13, and \citet[hereafter P04]{Portinari2004} were re-calibrated in the V, I, K, and [3.6] bands with B - V as the color indicator. In this work, we aim to extend MS14 from the Johnson-Cousins filters to the SDSS optical and two more NIR filters. Besides the three CMLRs of B03, Z09, and IP13 studied in MS14, which also provide relations in the SDSS bands, we consider two more CMLRs from RC15, based on the BC03 stellar population model (RC15(BC03)) and on the FSPS model (RC15(FSPS)). B03 is an empirical relation, while the others (Z09, IP13, RC15) are theoretical. B03 is based on a sample of observed galaxies, which are mostly bright galaxies (13 $\leq r \leq$ 17.5~mag) with high surface brightnesses (HSB; $\mu_{r} <$ 21~mag~arcsec$^{-2}$), and it spans a full range of 0.2 $< g$-$r <$ 1.2, with most galaxies within 0.4 $< g$-$r <$ 1.0 (Figure 5 in the B03 paper). For the theoretical relations, Z09 is based on a library of stellar population models from the 2007 version of BC03 (CB07), which covers ages from 0 to 20 Gyr and 6 values of metallicity (Z=0.0001 to 0.05), and spans -0.3 $< g$-$i <$ 2.6. IP13 is based on stellar population models from the Padova isochrones, which cover ages from 0.1 to 12.6 Gyr and 7 values of metallicity (Z=0.0001, 0.0004, 0.001, 0.004, 0.008, 0.019, 0.03), and span 0.25 $< g$-$r <$ 0.75. RC15 is likewise based on stellar population models from BC03 or FSPS, which span -0.25 $< g$-$r <$ 1.65 for RC15(BC03) and -0.1 $< g$-$r <$ 1.65 for RC15(FSPS) (Figure 7 in RC15). By comparison, our sample of observed LSBGs (Section~\ref{sec:data}) has $\mu_{r} >$ 21~mag~arcsec$^{-2}$ and $r >$ 17.5~mag, and 73$\%$ of the sample is bluer than $g$-$r$ = 0.4. In Table~\ref{tab:cmlr1}, we tabulate these five representative CMLRs of B03, IP13, Z09, RC15(BC03), and RC15(FSPS) in the $r$, $i$, $z$, J, H, and K bands, with $g$ - $r$ as the color indicator. Figure \ref{fig:MLCR} presents the stellar mass-to-light ratios ($\gamma_{*}$) in the $j$ band ($\gamma_{*}^{j}$, $j$ = $g$, $r$, $i$, $z$, J, H, K) predicted by each CMLR (Table~\ref{tab:cmlr1}) for the sample, showing the beads-on-a-string nature of $\gamma_{*}$ from the single color-based CMLR method. This illustrates that the CMLR-based method fails to reproduce the intrinsic scatter of $\gamma_{*}$ expected from variations in star formation histories (SFHs). In each panel, the $\gamma_{*}$ from different CMLRs differ from one another due to the distinct choices of initial mass function (IMF), star formation history (SFH), and stellar evolutionary tracks made by the different CMLR models. Different IMFs primarily differ in their treatment of low mass stars. An IMF that includes a larger number of low mass stars normally produces a higher $\gamma_{*}$ at a given color than IMFs incorporating a smaller number of low mass stars.
This is in principle because low mass stars add substantially to the stellar mass while contributing little to the luminosity. Therefore, different IMFs predominantly lead to differences in the zero-points of CMLRs. For example, stellar mass estimates based on a Chabrier or a Salpeter IMF differ by 0.3 dex, with the latter being higher \citep{Roediger2015}. As listed in Table \ref{tab:cmlr1}, B03 adopts a `diet' Salpeter IMF, which includes more low mass stars than the Chabrier IMF used by the RC15 and Z09 CMLRs and the \citet{Kroupa1998} IMF used by the IP13 CMLR, so B03 gives a higher $\gamma_{*}$ than the other CMLRs at a given color (Figure \ref{fig:MLCR}). Galaxies are expected to have a wide range of SFHs. The best-fit stellar mass can be significantly changed by different SFHs, in particular by whether the SFH is continuous (rising/declining) or bursty. Any burst of star formation will bias the models towards lower $\gamma_{*}$ values than smooth star formation models at a given color. The uncertainties of the optical $\gamma_{*}$ due to different SFHs are $\sim$0.2 dex for quiescent galaxies and $\sim$0.3 dex for star-forming galaxies \citep{Kauffmann2003}, $\sim$0.5 dex at a given $B$-$R$, and can be up to 0.6 dex in extreme cases \citep{Courteau2014}. Among the CMLRs in this work, IP13 adopts a single-component model of exponential SFH, while the other CMLRs are all based on two-component models of SFH. Z09 and RC15(BC03) both consider exponentially declining SFHs with a variety of random bursts superimposed. RC15(FSPS) uses the exponential SFH with only one instantaneous burst added. B03 assumes an exponential SFH (starting 12 Gyr in the past) with bursts superimposed, but limits the strength of the bursts to $\leq$10$\%$ by mass and constrains the burst events to take place only in the last 2 Gyr, so it is relatively smooth. In Figure~\ref{fig:MLCR}, the discrepancies in $\gamma_{*}$ among the CMLRs in the NIR bands (J, H, and K) are obviously larger than those in the optical bands ($griz$). This primarily arises from the different treatments of the TP-AGB stars, which are low to intermediate mass stars (0.6 $\sim$ 10 M$_{\odot}$) in their late life stage that emit a considerable amount of light in the NIR but little in the optical. As listed in Table~\ref{tab:cmlr1}, B03 and RC15(BC03) adopt a simplified prescription \citep{Girardi2000, Girardi2002} for TP-AGB stars, whereas IP13, RC15(FSPS), and Z09 consider a relatively new prescription \citep{Marigo2007,Marigo2008}. The latter prescription incorporates a larger number of TP-AGB stars, which greatly enhances the NIR luminosity while altering the optical luminosity little; this inevitably results in lower NIR $\gamma_{*}$ with little change to the optical $\gamma_{*}$. \section{Stellar Mass}\label{sec:mstar} The average $\gamma_{*}$ in the $u$ band suffers more from the perturbations of young, luminous, blue stars, which formed recently and radiate a significant amount of light in the blue bands but contribute little to the galaxy mass. Additionally, the SDSS $u$-band data are of low quality, so we exclude the $u$-band $\gamma_{*}$ from the following analysis. For the LSBG sample, we predict $\gamma_{*}^{j}$ ($j$ = $g$, $r$, $i$, $z$, J, H, and K bands) with each independent CMLR using $g$ - $r$ as the color indicator (Table \ref{tab:cmlr1}), as $g$-$r$ serves as a good color indicator for $\gamma_{*}$.
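As a concrete illustration of how equation (1) is applied (a minimal sketch, not part of the original calibrations), the Python snippet below evaluates the B03 relation with the coefficients of Table \ref{tab:cmlr1}; the input color is a hypothetical value chosen near the sample's peak color.
\begin{verbatim}
# B03 coefficients (a_j, b_j) from Table 1: log gamma_*^j = a_j + b_j * (g - r)
B03 = {"r": (-0.306, 1.097), "i": (-0.222, 0.864), "z": (-0.223, 0.689),
       "J": (-0.172, 0.444), "H": (-0.189, 0.266), "K": (-0.209, 0.197)}

def gamma_star(color_gr, band, coeffs=B03):
    """Predicted stellar mass-to-light ratio (solar units) from g-r color."""
    a, b = coeffs[band]
    return 10.0 ** (a + b * color_gr)

# Hypothetical galaxy at the sample's peak color g-r = 0.28:
for band in ("r", "K"):
    print(f"gamma_*^{band} = {gamma_star(0.28, band):.2f} Msun/Lsun")
# -> gamma_*^r ~ 1.00, gamma_*^K ~ 0.70
\end{verbatim}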
The predicted $\gamma_{*}^{j}$ are then multiplied by the luminosities in the $j$ band (Section~\ref{sec:phot}) to produce M$_{*}$ estimates from the $j$ band (M$_{*}^{j}$). We list the mean and median M$_{*}^{j}$ originally predicted by each CMLR for the sample in the left part of Table~\ref{tab:mass}. We can check the external consistency of different CMLRs by comparing the M$_{*}$ they predict. It is apparent that the five CMLRs produce distinct M$_{*}^{j}$ estimates from the $j$ band ($j$ = $g$, $r$, $i$, $z$, J, H, and K bands). In the same $j$ band, B03 gives the highest M$_{*}$ while Z09 yields the lowest M$_{*}$ for the sample. The difference between the M$_{*}$ predicted by B03 and Z09 is 0.3$\sim$0.5 dex in the optical bands, and rises dramatically to 0.6$\sim$0.8 dex in the NIR bands due to the different treatments of TP-AGB stars (Section \ref{sec:SPS}). This external inconsistency is caused by the different choices of the IMF, SFH, and SPS models. We can also examine each CMLR for internal consistency between different bands. For any individual CMLR, the M$_{*}$ predicted from the $g$ band (M$_{*}^{g}$) are closely consistent with the M$_{*}$ predicted from the $r$ band (M$_{*}^{r}$). However, M$_{*}^{j}$ ($j$ = $i$, $z$, J, H, and K), especially for $j$ = J, H, and K, deviate from M$_{*}^{r}$ to varying degrees, and the deviation increases progressively as the band goes redder. \textbf{For instance, the deviation of M$_{*}^{\rm NIR}$ from M$_{*}^{r}$ is 0.1 dex for B03, -0.3 dex for Z09, and -0.1 $\sim$ -0.3 dex for the three other CMLRs.} This implies that B03 is nearly internally self-consistent in M$_{*}$ estimates from different bands, though it has a small tendency to overestimate M$_{*}$ from the NIR bands, whereas the four other CMLRs all underestimate M$_{*}$ from the NIR bands compared with M$_{*}^{r}$. In Figure~\ref{fig:mstar_discussion}, we show M$_{*}^{j}$ ($j$ = $g$, $r$, $i$, $z$, J, H, and K) against M$_{*}^{r}$ as predicted by each CMLR for the sample (black open circles or grey filled circles). In each panel, the black dashed lines represent the line of unity. If a CMLR were internally self-consistent in its M$_{*}$ estimates from band to band, the data should exactly follow the line of unity. However, this is not the case: the data (black open circles or grey filled circles) in each panel deviate from the line of unity (black dashed lines) to different degrees, except for the data in the panel of M$_{*}^{g}$ versus M$_{*}^{r}$. This demonstrates that M$_{*}^{g}$ are highly consistent with M$_{*}^{r}$, while M$_{*}^{j}$ ($j$ = $i$, $z$, J, H, and K) deviate from M$_{*}^{r}$, with the deviation increasing progressively as the band goes redder. In order to display the deviations of the data from the line of unity clearly, we plot the residuals from the line of unity in Figure~\ref{fig:residual}. Given the internal inconsistency of each CMLR from band to band, we re-calibrate each CMLR to be internally self-consistent in M$_{*}$ estimates from different bands, based on this LSBG sample, in $\S$~\ref{sec:m2l}.
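As a quick check on the deviations quoted above, the short Python sketch below recomputes them directly from the mean masses tabulated in the left part of Table \ref{tab:mass} (the dictionary values are simply those table entries).
\begin{verbatim}
# Mean log(M_*/Msun) from the left (original) part of Table 2, by band.
mean_logM = {
    "B03": {"r": 8.65, "i": 8.73, "z": 8.70, "J": 8.68, "H": 8.68, "K": 8.75},
    "Z09": {"r": 8.27, "i": 8.29, "z": 8.21, "J": 8.06, "H": 7.97, "K": 7.97},
}

for model, masses in mean_logM.items():
    ref = masses["r"]
    devs = {band: round(m - ref, 2) for band, m in masses.items() if band != "r"}
    print(model, devs)
# B03 NIR deviations: +0.03 to +0.10 dex; Z09: -0.21 to -0.30 dex,
# consistent with the ~+0.1 and ~-0.3 dex quoted in the text.
\end{verbatim}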
\begin{table*} \caption{Mean (upper) and median (lower) stellar masses, given as log(M$_{*}$/M$_{\odot}$), predicted by the original (left part) and re-calibrated (right part) CMLRs for the LSBG sample.} \label{tab:mass} \begin{center} \begin{tabular}{lccccc|ccccc} \hline \hline band&B03&IP13&RC15(BC03)&RC15(FSPS)&Z09 &B03&IP13&RC15(BC03)&RC15(FSPS)&Z09\\ \hline $g$&8.64&8.43&8.30&8.44&8.26 & 8.64 & 8.43 & 8.30 & 8.44 & 8.26\\ $r$&8.65&8.43&8.31&8.42&8.27 & 8.65 & 8.43 & 8.31 & 8.42 & 8.27\\ $i$&8.73&8.48&8.35&8.47&8.29 & 8.66 & 8.42 & 8.32 & 8.43 & 8.28\\ $z$&8.70&8.44&8.30&8.46&8.21 & 8.66 & 8.42 & 8.32 & 8.43 & 8.28\\ J&8.68&8.30&-- & --&8.06 & 8.62 & 8.38 & -- & -- & 8.24\\ H&8.68&8.21&8.15&8.38&7.97 & 8.62 & 8.37 & 8.28 & 8.39 & 8.23\\ K&8.75&8.26&-- &-- &7.97 & 8.64 & 8.38 & -- & -- & 8.24\\ \hline $g$&8.75&8.55&8.42&8.56&8.38 & 8.75 & 8.55 & 8.42 & 8.56 & 8.38\\ $r$&8.76&8.55&8.43&8.54&8.39 & 8.76 & 8.55 & 8.43 & 8.54 & 8.39\\ $i$&8.84&8.60&8.46&8.59&8.40 & 8.77 & 8.54 & 8.43 & 8.55 & 8.39\\ $z$&8.82&8.55&8.42&8.58&8.32 & 8.77 & 8.54 & 8.44 & 8.55 & 8.39\\ J&8.83&8.43& --&-- &8.20 & 8.77 & 8.52 & -- & -- & 8.38\\ H&8.80&8.33&8.28&8.51&8.09 & 8.75 & 8.49 & 8.40 & 8.52 & 8.35\\ K&8.88&8.38&-- &-- &8.08 & 8.77 & 8.49 & -- & -- & 8.35\\ \hline \hline \end{tabular} \end{center} \end{table*} \begin{figure} \centering \includegraphics[width=1.1\textwidth]{stellar_mass_2sigma_biweight_ALL} \caption{Stellar mass (M$_{*}$) estimates by the different CMLRs of B03, IP13, Z09, RC15(BC03), and RC15(FSPS) listed in Table~\ref{tab:cmlr1}. For each CMLR, M$_{*}$ estimates from the $g$ (open black circles) or J (filled grey circles) bands are plotted against M$_{*}$ from the $r$ band (M$_{*}^{r}$) in the left panel. M$_{*}$ estimates from the $i$ (open black circles) or H (filled grey circles) bands are plotted against M$_{*}^{r}$ in the middle panel. M$_{*}$ estimates from the $z$ (open black circles) or K (filled grey circles) bands are plotted against M$_{*}^{r}$ in the right panel. In each panel, the two cases are offset for clarity, the black dashed lines are the line of unity, and the red solid lines are fits to the data. If a CMLR were internally self-consistent, the data would follow the line of unity (black dashed line). However, the fit line which the data follow obviously deviates from the line of unity, except for the data of the $g$ vs. $r$ bands. Note that RC15 does not provide relations in the J and K bands.}\label{fig:mstar_discussion} \end{figure} \begin{figure} \centering \includegraphics[width=1.1\textwidth]{stellar_mass7_residual} \caption{Residuals of the data from the line of unity in each panel of Figure~\ref{fig:mstar_discussion}. For each CMLR, the residuals of the data from the line of unity in the $g$, $i$, and $z$ bands are shown as open black circles in the lower region of the left, middle, and right panels, respectively. For clarity, the residuals in the J, H, and K bands are offset by +2 and shown as grey filled circles in the upper region of the left, middle, and right panels, respectively. The black and grey solid lines in each panel are the zero-residual lines for the corresponding data.}\label{fig:residual} \end{figure} \section{Self-consistent M/L-color relations}\label{sec:m2l} \subsection{Self-Consistent Stellar Masses} For each individual CMLR, the M$_{*}$ estimates from the $g$ band (M$_{*}^{g}$) closely agree with those from the $r$ band (M$_{*}^{r}$) for the sample.
However, the M$_{*}$ estimates from the $i$, $z$, J, H, and K bands differ from M$_{*}^{r}$ for the sample to varying degrees (Section \ref{sec:mstar}). Assuming M$_{*}^{r}$ as the reference M$_{*}$ for a galaxy, we fit the relations between M$_{*}^{j}$ ($j$ = $i$, $z$, J, H, and K) and M$_{*}^{r}$ for the sample, following MS14, in the functional form below \begin{eqnarray} {\rm log} (M_{*}^{j}/M_{0})&=& B_{j} {\rm log} (M_{*}^{r}/M_{0}) \end{eqnarray} where $B_{j}$ is the slope of the linear fit line and $M_{0}$ is the M$_{*}$ at which the $j$-band relation intersects the $r$-band relation. A `robust' bi-square weighted line fit method is adopted to fit the data of the LSBG sample. The fit lines are over-plotted as red solid lines in each panel of Figure \ref{fig:mstar_discussion}; they deviate from the line of unity in the panels for the $i$, $z$, J, H, and K bands, demonstrating the internal self-inconsistency of the M$_{*}$ estimates from different bands for the same sample. The coefficients from the fits are tabulated in Table \ref{tab:self_consistent_mass}. \begin{table*} \caption{Self-Consistent Stellar Masses} \label{tab:self_consistent_mass} \begin{center} \begin{tabular}{lcccccccccc} \hline \hline model & $B_{i}$ & log$M_{0}^{i}$ & $B_{z}$ & log$M_{0}^{z}$ & $B_{\rm J}$ & log$M_{0}^{\rm J}$& $B_{\rm H}$ & log$M_{0}^{\rm H}$ &$B_{\rm K}$ & log$M_{0}^{\rm K}$\\ \hline B03& 0.994&20.609& 0.994&16.132& 0.965&10.288& 0.988&13.395& 0.981&14.738\\ IP13& 0.995&17.362& 1.005& 6.132& 0.969& 6.302& 0.995&-21.17& 0.988& 0.482\\ Z09& 0.994& 9.032& 1.004&28.181& 0.968& 2.897& 0.984&-7.583& 0.983&-8.188\\ RC15(BC03)& 0.992&11.488& 0.999&-8.445& --& --& 0.981& 1.729& --& --\\ RC15(FSPS)& 0.992&13.331& 0.991&11.927& --& --& 0.976& 7.872& --& --\\ \hline \hline \multicolumn{11}{p{0.7\textwidth}}{\footnotesize{Notes. The coefficients are for the red solid lines in Fig. \ref{fig:mstar_discussion}, in the functional form of equation (2).}} \end{tabular} \end{center} \end{table*} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{MLCR_revised_2sigma_biweight_ALL} \caption{Renormalized stellar mass-to-light ratios ($\gamma_{*,re}^{j}$, $j$ = $i$, $z$, J, H, and K) in logarithm as a function of $g$ - $r$ color. Galaxies in the sample are shown as black open circles in each panel, where the red solid line represents the fitted relation between log $\gamma_{*,re}^{j}$ and $g$-$r$, and the blue line represents the original relation for comparison.}\label{fig:self_MLCR_gr} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{MLC_revised_v7_rz} \caption{Renormalized stellar mass-to-light ratios ($\gamma_{*,re}^{j}$, $j$ = $i$, $z$, J, H, and K) in logarithm as a function of $r$ - $z$ color. The illustrations are similar to Figure \ref{fig:self_MLCR_gr}.}\label{fig:MLC_rz} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{MLC_revised_v7_JK} \caption{Renormalized stellar mass-to-light ratios ($\gamma_{*,re}^{j}$, $j$ = $i$, $z$, J, H, and K) in logarithm as a function of J - K color. The illustrations are similar to Figure \ref{fig:self_MLCR_gr}.}\label{fig:MLC_JK} \end{figure} \subsection{Re-calibrated CMLRs}\label{sec:re_cmlr} According to the coefficients in Table~\ref{tab:self_consistent_mass}, we renormalize M$_{*}^{j}$ ($j$ = $i$, $z$, J, H, and K) to the reference mass M$_{*}^{r}$. The renormalized M$_{*}^{j}$ (M$_{*,re}^{j}$) are then divided by the luminosity in the $j$ band to generate the renormalized $\gamma_{*}^{j}$ ($\gamma_{*,re}^{j}$).
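To make the renormalization step explicit, the Python sketch below maps $j$-band masses onto the $r$-band reference scale; it assumes that renormalizing amounts to inverting the fitted relation of equation (2), which is our reading of the MS14 methodology rather than a published algorithm. With the B03 K-band coefficients of Table \ref{tab:self_consistent_mass}, the sample's original mean log M$_{*}^{\rm K}$ = 8.75 maps to $\simeq$8.63, consistent with the re-calibrated mean of 8.64 in Table \ref{tab:mass}.
\begin{verbatim}
def renormalize_to_r(logM_j, B_j, logM0_j):
    # Invert the fit log(M_j/M0) = B_j * log(M_r/M0) to recover the
    # reference-scale mass (assumed reading of the renormalization step).
    return (logM_j - (1.0 - B_j) * logM0_j) / B_j

# B03 K-band fit coefficients from Table 3 (B_K, log M_0^K).
B_K, logM0_K = 0.981, 14.738
print(round(renormalize_to_r(8.75, B_K, logM0_K), 2))  # -> 8.63
\end{verbatim}
Dividing the renormalized mass by the $j$-band luminosity then gives $\gamma_{*,re}^{j}$, as described above.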
Next, the $\gamma_{*,re}^{j}$ are plotted against $g$-$r$ in Figure \ref{fig:self_MLCR_gr}. In each panel, the galaxies of the LSBG sample are shown as black open circles, which exhibit clear correlations between $\gamma_{*,re}^{j}$ and $g$-$r$ color. We then fit the relations between $\gamma_{*,re}^{j}$ and $g$-$r$ in the functional form of equation (1), using the bi-weight line fit method. The fit line is over-plotted as the red solid line in each panel of Figure \ref{fig:self_MLCR_gr}, and the blue solid line represents the original CMLR (Table \ref{tab:cmlr1}) for comparison. The re-calibrated CMLRs are tabulated in Table \ref{tab:self_consistent_CMLR}; they produce internally self-consistent M$_{*}$ estimates from different bands for a galaxy, and this self-consistent M$_{*}$ is highly consistent with the assumed reference mass, which is M$_{*}^{r}$ in this work. Compared with M$_{*}^{r}$, the original B03 slightly overestimated M$_{*}$ from the NIR bands (M$_{*}^{\rm NIR}$), while the four other original CMLRs underestimated M$_{*}^{\rm NIR}$ (Table \ref{tab:mass}). After re-calibration, these overestimates and underestimates are corrected correspondingly. As shown in each panel of Figure~\ref{fig:self_MLCR_gr}, the re-calibrated B03 lies below the original relation (blue solid line), while the four other re-calibrated relations of Z09, IP13, RC15(BC03), and RC15(FSPS) all lie above their original relations, especially in the NIR bands. Furthermore, the original B03 requires the smallest corrections, while the original Z09 relations require the largest corrections in each band, particularly in the NIR bands. This is because Z09 is based on the TP-AGB prescription that incorporates a larger number of TP-AGB stars; this greatly enhances the luminosities in the NIR while altering the luminosities in the optical little, inevitably resulting in a lower $\gamma_{*}$ from the NIR bands than from the optical bands. \begin{table*}\footnotesize \caption{Re-calibrated CMLRs} \label{tab:self_consistent_CMLR} \begin{center} \begin{tabular}{lcccccccccccc} \hline \hline model& a$_{r}$ & b$_{r}$& a$_{i}$ & b$_{i}$ & a$_{z}$ & b$_{z}$ & a$_{\rm J}$ & b$_{\rm J}$ & a$_{\rm H}$ & b$_{\rm H}$ &a$_{\rm K}$ & b$_{\rm K}$\\ \hline B03& -0.306& 1.097&-0.299& 0.874&-0.272& 0.699&-0.245& 0.499&-0.253& 0.283&-0.333& 0.226\\ IP13&-0.663&1.530&-0.679& 1.380&-0.674& 1.280&-0.684& 1.199&-0.742& 1.138&-0.860& 1.175\\ Z09&-0.840 & 1.654&-0.854& 1.495&-0.842& 1.374&-0.852& 1.291&-0.896& 1.178&-0.990& 1.150\\ RC15(BC03)&-0.792 & 1.629&-0.801& 1.456&-0.781& 1.308& --& --&-0.803& 1.017& --& --\\ RC15(FSPS)&-0.647 &1.497 &-0.648& 1.298&-0.619& 1.120& --& --&-0.604& 0.714& --& --\\ \hline \hline \multicolumn{13}{p{0.8\textwidth}}{\footnotesize{Notes. The coefficients are for the red solid lines in Fig. \ref{fig:self_MLCR_gr}, in the functional form of equation (1).}} \end{tabular} \end{center} \end{table*} \section{Discussion}\label{sec:discus} \subsection{Secondary Color Dependence}\label{sec:discus_sub1} $g$-$r$ acts as the primary color indicator of $\gamma_{*}$ (Figure \ref{fig:self_MLCR_gr}). In this section, we examine whether the re-calibrated CMLRs based on $g$ - $r$ can be further improved by including $r$-$z$ or J - K as a secondary color term. First, we plot $\gamma_{*,re}^{j}$ against $r$ - $z$ (Figure~\ref{fig:MLC_rz}) or J - K (Figure~\ref{fig:MLC_JK}) for each CMLR.
Although $\gamma_{*,re}^{j}$ appears to depend little on either $r$ - $z$ or J - K, these two colors cannot be ruled out without a quantitative examination. For convenience, we denote the $\gamma_{*}$ in the $j$ band predicted by the re-calibrated CMLRs (Table \ref{tab:self_consistent_CMLR}) as $\gamma_{*,rec}^{j}$, and those derived from the renormalized M$_{*}^{j}$ (Table \ref{tab:self_consistent_mass}) as $\gamma_{*,re}^{j}$ ($j$ = $i$, $z$, J, H, K). The residuals of $\gamma_{*,rec}^{j}$ from $\gamma_{*,re}^{j}$ are denoted as $\Delta^{j}$ ($\Delta^{j}$ = $\gamma_{*,re}^{j}$ - $\gamma_{*,rec}^{j}$), which are in fact the differences between the data (black open circles) and the re-calibrated line (red solid line) in each panel of Figure~\ref{fig:self_MLCR_gr}. If $\Delta^{j}$ depends on the colors $r$-$z$ or J - K, the re-calibrated CMLR based on only the $g$ - $r$ color could be improved via equation (3), \begin{eqnarray} {\rm log} \ \gamma_{*}^{j} &=& a_{j} + b_{j}\times (g - r)+ \Delta^{j} \end{eqnarray} In order to check whether $\Delta^{j}$ depends on $r$-$z$ or J - K, we additionally plot $\Delta^{j}$ against $r$ - $z$ (Figure \ref{fig:m2l_distr_rz}) or J - K (Figure \ref{fig:m2l_distr_JK}), and fit a linear relation between $\Delta^{j}$ and the color in each panel (red solid line). The fit line is almost flat and completely overlaps the zero-residual line (black line), implying that $\Delta^{j}$ depends little on either $r$ - $z$ or J - K. Therefore, no secondary color term based on $r$ - $z$ or J - K ($\Delta^{j}$ in equation (3)) is needed to improve the re-calibrated CMLRs in this work. This demonstrates that the variation of $\gamma_{*}$ is well traced by optical color but is minimized in NIR color, as already shown in MS14 \citep{McGaugh2014}: changing the age of a solar metallicity stellar population \citep{Schombert2009} from 1 to 12 Gyr induces a change of 0.37 mag in $B$ - $V$ but only 0.03 mag in J - K. \begin{figure} \centering \includegraphics[width=1.0\textwidth]{2nd_term1} \caption{$\Delta^{j}$ ($j$ = $i$, $z$, J, H, K) as a function of $r - z$. The black line is the zero-residual line, and the red line is the fit to the data, which nearly overlaps the zero-residual line.}\label{fig:m2l_distr_rz} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{2nd_term2} \caption{$\Delta^{j}$ ($j$ = $i$, $z$, J, H, K) as a function of J - K. The black line is the zero-residual line, and the red line is the fit to the data, which nearly overlaps the zero-residual line.}\label{fig:m2l_distr_JK} \end{figure} \begin{table*}\footnotesize \caption{Stellar mass-to-light ratios ($\gamma_{*}$) predicted by the original and re-calibrated CMLRs.
} \label{tab:cmlr2} \begin{center} \begin{tabular}{l|ccccc|ccccc|cc} \hline model&$\gamma_{0.3}^{i}$& $\gamma_{0.3}^{z}$& $\gamma_{0.3}^{J}$& $\gamma_{0.3}^{H}$& $\gamma_{0.3}^{K}$&$\gamma_{0.6}^{i}$& $\gamma_{0.6}^{z}$& $\gamma_{0.6}^{J}$& $\gamma_{0.6}^{H}$& $\gamma_{0.6}^{K}$ &$\gamma_{0.4}^{K}$ & $\gamma_{B-V=0.6}^{K}$\\ \hline \multicolumn{13}{c}{\bf{Original CMLR models}}\\ \hline B03&1.09&0.96&0.91&0.78&0.71&1.98&1.55&1.24&0.93&0.81& 0.74 &0.73\\ IP13&0.60&0.53&0.41&0.29&0.25&1.55&1.29&0.89&0.63&0.56& 0.33 &0.41\\ Z09&0.40&0.32&0.23&0.16&0.12&1.11&0.82&0.53&0.35&0.26& 0.16 &0.21\\ RC15(BC03)&0.46&0.39&--&0.24&--&1.24&0.97&--&0.47&-- &--&--\\ RC15(FSPS)&0.61&0.56&--&0.40&--&1.47&1.20&--&0.63&-- &--&--\\ \hline \multicolumn{13}{c}{\bf{Re-calibrated CMLR models}}\\ \hline B03&0.92&0.87&0.79&0.67&0.53&1.68&1.40&1.13&0.84&0.63 &0.56 &0.60\\ IP13&0.54&0.51&0.47&0.40&0.31&1.41&1.24&1.09&0.89&0.71 &0.41 &0.54\\ Z09&0.39&0.37&0.34&0.29&0.23&1.11&0.96&0.85&0.67&0.51 &0.30 &0.50\\ RC15(BC03)&0.43&0.41&--&0.31&--&1.18&1.01&--&0.66&-- &--&--\\ RC15(FSPS)&0.55&0.52&--&0.40&--&1.35&1.13&--&0.68&-- &--&--\\ \hline \multicolumn{13}{p{0.8\textwidth}}{\footnotesize{Notes. The stellar mass-to-light ratios (in units of M$_{\odot}$/L$_{\odot}$) predicted from different bands ($i$, $z$, J, H, K) by each CMLR before (Table~\ref{tab:cmlr1}) and after re-calibration (Table~\ref{tab:self_consistent_CMLR}) are given at $g$ - $r$ = 0.3 (the mean and median color of the LSBG sample) and at $g$ - $r$ = 0.6. Additionally, the $\gamma_{*}$ predicted from the K band at B-V = 0.6 by MS14 ($\gamma_{\rm B-\rm V=0.6}^{K}$) are listed; for comparison, the $\gamma_{*}$ predicted from the K band at $g$-$r$ = 0.4 ($\gamma_{0.4}^{K}$) by our re-calibrated relations (Table~\ref{tab:self_consistent_CMLR}, Section \ref{sec:m2l}) are also given, since $g$-$r$ = 0.4 is equivalent to B - V = 0.6 according to the filter transformation prescription of \citet{Smith2002}.}} \end{tabular} \end{center} \end{table*} \subsection{Error budget}\label{sec:errors} The typical $\gamma_{*}$ uncertainties are $\sim$0.1 ($\sim$0.2) dex in the optical (NIR) for B03, $\sim$0.1 dex for IP13, and 0.1$\sim$0.15 dex for Z09. For RC15, it can be deduced (from Figures 2 and 3 in the RC15 paper) that the scatter in $\gamma_{*}$ from the BC03 model is $\sim$0.1 dex, but the scatter from the FSPS model is not clearly available. These typical uncertainties, which are inherent in the original CMLRs, carry over directly to the re-calibrated CMLRs in this work, because the re-calibration does not change the models on which the CMLRs are based. For the LSBG sample, the uncertainty in $\gamma_{*}$ predicted by a CMLR should be a combination of the inherent uncertainty in the CMLR and the photometric error. The uncertainty in the $g$-$r$ color of the LSBG sample is $<$ 0.08 mag for 95$\%$ of the galaxies, which ultimately propagates to uncertainties of $\sim$0.08 ($\sim$0.03), $\sim$0.11 ($\sim$0.10), $\sim$0.11 ($\sim$0.10), $\sim$0.11 ($\sim$0.08), and $\sim$0.10 ($\sim$0.05) dex in the log $\gamma_{*}$ predicted in the optical (NIR) bands by the re-calibrated relations of B03, IP13, Z09, RC15(BC03), and RC15(FSPS), respectively, and to almost the same uncertainties in the log $\gamma_{*}$ predicted by the original relations. Therefore, for this LSBG sample, the total uncertainties in $\gamma_{*}$ predicted by each CMLR before and after re-calibration are almost the same.
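To illustrate this propagation, note that with equation (1) a color error maps linearly into a log $\gamma_{*}$ error, $\sigma_{{\rm log}\,\gamma_{*}}^{j} \simeq b_{j}\,\sigma_{g-r}$. The Python sketch below evaluates this for the B03 slopes of Table \ref{tab:cmlr1} at the quoted $\sigma_{g-r}$ = 0.08 mag; the per-band numbers are illustrative and bracket the $\sim$0.08 (optical) and $\sim$0.03 (NIR) values given above for B03.
\begin{verbatim}
# B03 slopes b_j from Table 1 and the 95% bound on the g-r error.
b = {"r": 1.097, "i": 0.864, "z": 0.689, "J": 0.444, "H": 0.266, "K": 0.197}
sigma_gr = 0.08  # mag

for band, slope in b.items():
    print(f"{band}: sigma(log gamma) ~ {slope * sigma_gr:.3f} dex")
# r/i/z give ~0.055-0.088 dex; J/H/K give ~0.016-0.036 dex.
\end{verbatim}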
\subsection{$\gamma_{*}$ and M$_{*}$ from re-calibrated CMLRs}\label{sec:discus_sub2} In Table \ref{tab:cmlr2}, the $\gamma_{*}$ in the $j$ band predicted by each independent re-calibrated CMLR are given at $g$ - $r$ = 0.3 ($\gamma_{0.3}^{j}$, $j$ = $i$, $z$, J, H, K), which is the mean color of the sample in this work; the $\gamma_{*}^{j}$ at $g$ - $r$ = 0.6 ($\gamma_{0.6}^{j}$) are also tabulated to give an intuition for the $\gamma_{*}^{j}$ estimates at a redder color from these re-calibrated CMLRs. In addition, the originally predicted $\gamma_{*}^{j}$ are listed for comparison. \textbf{Apparently, B03 always gives the highest $\gamma_{*}^{j}$ and Z09 the lowest values, both before and after re-calibration, which is primarily due to the differences in the IMF.} Quantitatively, the spans in the originally predicted $\gamma_{*}^{j}$ are $\sim$0.44, $\sim$0.48, $\sim$0.60, $\sim$0.69, and $\sim$0.77 dex at the blue color ($g$-$r$ = 0.3), and $\sim$0.25, $\sim$0.28, $\sim$0.37, $\sim$0.42, and $\sim$0.49 dex at the redder color ($g$-$r$ = 0.6) for $j$ = $i$, $z$, J, H, and K bands, respectively. In contrast, the spans in the $\gamma_{*}^{j}$ predicted by the re-calibrated relations are greatly narrowed, to $\sim$0.37, $\sim$0.37, $\sim$0.37, $\sim$0.36, and $\sim$0.36 dex at the blue color, and to $\sim$0.18, $\sim$0.16, $\sim$0.12, $\sim$0.09, and $\sim$0.09 dex at the red color in the corresponding bands. It is thus clear that the range in $\gamma_{*}^{j}$ from the re-calibrated CMLRs is much narrower than that originally predicted, especially in the NIR bands. This demonstrates that the NIR luminosities are more robust than the optical luminosities for predicting the $\gamma_{*}$ of galaxies. It is worth noting that the uncertainties (Section \ref{sec:errors}) in the $\gamma_{*}$ predicted by the original and re-calibrated relations of each CMLR are almost the same, so these errors do not alter the comparison above. \textbf{We can also examine each re-calibrated CMLR for internal consistency in M$_{*}$ from band to band. We list the mean and median M$_{*}$ predicted by each re-calibrated CMLR in the right part of Table \ref{tab:mass}. It is apparent that M$_{*}^{j}$ ($j$ = $g$, $i$, $z$, J, H, and K) are highly consistent with M$_{*}^{r}$, the reference stellar mass. For instance, the difference of M$_{*}^{j}$ from M$_{*}^{r}$ is reduced to 0.03 dex (from the original 0.1 dex) for B03, 0.04 dex (from the original 0.3 dex) for Z09, 0.06 dex (from the original 0.27 dex) for IP13, and 0.03 dex (from the original 0.1 - 0.2 dex) for the RC15 CMLRs after re-calibration. This demonstrates that each CMLR produces internally self-consistent M$_{*}$ after re-calibration when it is applied in different photometric bands.} \subsection{Comparison with MS14} \label{sec:MS14} In the pioneering work of MS14, several CMLRs were re-calibrated in the V, I, K, or [3.6] filters based on a sample of disk galaxies (with B-V as the color indicator). In this work, three CMLRs in common with MS14 were re-calibrated, but in the SDSS and NIR filters of $r$, $i$, $z$, J, H, or K, based on a sample of LSBGs (with $g$-$r$ as the color indicator). We therefore compare our re-calibrated relations with those of MS14 for the three common CMLRs (B03, IP13, and Z09) in the common K band in this section. In MS14, the $\gamma_{*}$ from the K band at B-V = 0.6 ($\gamma_{\rm B-\rm V=0.6}^{\rm K}$) predicted by their re-calibrated relations are 0.60, 0.54, and 0.50 $\rm M_{\odot}/\rm L_{\odot}$ for B03, IP13, and Z09, respectively.
In contrast, the originally predicted $\gamma_{\rm B-\rm V=0.6}^{\rm K}$ are correspondingly 0.73, 0.41, and 0.21 $\rm M_{\odot}/\rm L_{\odot}$ (the last column in Table \ref{tab:cmlr2}). The range in $\gamma_{\rm B-\rm V=0.6}^{\rm K}$ has thus been enormously narrowed by their re-calibrations, to 0.08 dex from the original 0.54 dex. In order to compare with MS14, we additionally tabulate the $\gamma^{\rm K}$ at $g$-$r$ = 0.4 ($\gamma_{0.4}^{\rm K}$) predicted by our re-calibrated relations, which are $\sim$0.57, $\sim$0.41, and $\sim$0.30 $\rm M_{\odot}/\rm L_{\odot}$ for B03, IP13, and Z09 (Table~\ref{tab:cmlr2}), since $g$-$r$ = 0.4 is equivalent to B-V = 0.6 according to the filter transformation prescriptions of \citet{Smith2002}. By comparison, the originally predicted $\gamma_{0.4}^{\rm K}$ are 0.74, 0.33, and 0.16 $\rm M_{\odot}/\rm L_{\odot}$, so the range in $\gamma_{0.4}^{\rm K}$ has been reduced by our re-calibrations to $\sim$0.28 dex from the original $\sim$0.67 dex. However, compared with the $\gamma_{\rm B-\rm V=0.6}^{\rm K}$ predicted by the MS14 re-calibrated relations, the $\gamma_{0.4}^{\rm K}$ predicted by our re-calibrated relations are 0.03, 0.09, and 0.26 dex lower for B03, IP13, and Z09, respectively. In order to identify the sources of these differences, we examined the only three ingredients that differ between this work and MS14: the independent procedures, the different assumptions for the reference M$_{*}$, and the distinct data sets. Regarding the procedures, although our procedure was coded to implement the same methodology as adopted by MS14, it is independent of and not identical to the MS14 procedure, so we investigated the possible offset in the re-calibrated relations due to the minor differences between our and the MS14 procedures by repeating the exact analysis of MS14 on their data using our procedures. We found that, compared with the MS14 procedures, our procedures lower $\gamma_{\rm B-\rm V=0.6}^{\rm K}$ by 0.05, 0.01, and 0.04 dex for B03, IP13, and Z09, respectively. These minor offsets in $\gamma_{\rm B-\rm V=0.6}^{\rm K}$ caused by the minor differences between our and the MS14 procedures are denoted as $\Delta_{\rm pro}^{\rm K}$ for convenience (Table \ref{tab:A_tab2}). For the assumption of the reference M$_{*}$, we assumed the M$_{*}$ estimates from the SDSS $r$ band (M$_{*}^{r}$) as the reference M$_{*}$ to which the M$_{*}$ estimates from the other filter bands were renormalized in this work, while MS14 assumed the M$_{*}$ from the Johnson $V$ band (M$_{*}^{\rm V}$) as their reference M$_{*}$. The different assumptions are choices within different filter systems (SDSS versus Johnson-Cousins), but it is necessary to investigate the possible offset in the re-calibrated relations due to the different choices of reference M$_{*}$ between this work and MS14 (M$_{*}^{r}$-based versus M$_{*}^{\rm V}$-based). We present this investigation in Appendix \ref{sec:ref_mass}, which concludes that the $\gamma_{\rm B-\rm V=0.6}^{\rm K}$ predicted by the M$_{*}^{r}$-based re-calibrated relations are 0.03, 0.11, and 0.23 dex lower than those predicted by the M$_{*}^{\rm V}$-based re-calibrated relations. These major offsets in $\gamma_{\rm B-\rm V=0.6}^{\rm K}$ caused by the different assumptions of the reference M$_{*}$ are denoted as $\Delta_{\rm ref}^{\rm K}$ for convenience (Table \ref{tab:A_tab2}).
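The color equivalence and the quoted offsets can be checked numerically. Assuming the \citet{Smith2002} stellar transformation takes the commonly quoted form $g-r = 1.02\,(B-V) - 0.22$ (our reading of that prescription), the Python sketch below converts B-V = 0.6 and recomputes the offsets from the rounded $\gamma^{\rm K}$ values above; small departures from the quoted 0.03, 0.09, and 0.26 dex presumably reflect rounding of the tabulated ratios.
\begin{verbatim}
import math

BV = 0.6
gr = 1.02 * BV - 0.22   # assumed Smith et al. (2002) form
print(f"B-V = {BV} -> g-r = {gr:.2f}")  # ~0.39, i.e. g-r ~ 0.4

ours = {"B03": 0.57, "IP13": 0.41, "Z09": 0.30}  # gamma_{0.4}^K, this work
ms14 = {"B03": 0.60, "IP13": 0.54, "Z09": 0.50}  # gamma_{B-V=0.6}^K, MS14
for m in ours:
    print(f"{m}: {math.log10(ms14[m] / ours[m]):.2f} dex lower in this work")
\end{verbatim}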
In this case, for the three common CMLRs of B03, IP13, and Z09, the apparent differences (0.03, 0.09, and 0.26 dex) between $\gamma_{0.4}^{\rm K}$ (this work) and $\gamma_{\rm B-\rm V=0.6}^{\rm K}$ (MS14) can be fully explained by the combination of $\Delta_{\rm ref}^{\rm K}$ (0.03, 0.11, and 0.23 dex) and $\Delta_{\rm pro}^{\rm K}$ (0.05, 0.01, and 0.04 dex; Table \ref{tab:A_tab2}). This implies that the apparent differences between our re-calibrated relations and those of MS14 in the common K band are entirely caused by the systematic offsets due to the major differences in the assumptions of the reference mass and the minor differences in the procedures between this work and MS14. Therefore, taking into account the different assumptions of the reference mass and the independent procedures, our re-calibrated CMLRs based on a sample of LSBGs yield $\gamma_{*}$ in the common K band that are very consistent with the re-calibrated CMLRs based on a sample of disk galaxies by MS14. Thus, no room is left for any appreciable difference in the re-calibrated relations introduced by a possible difference between our LSBG sample and the disk galaxy sample of MS14. It is beyond the scope of this work, and also difficult, to evaluate which assumption of the reference mass is better, because the different assumptions are simply choices within different filter systems (SDSS versus Johnson-Cousins). Additionally, this work is motivated to re-calibrate each individual CMLR to give internally self-consistent M$_{*}$ for the same galaxy when it is applied in different SDSS and NIR bands, and the internally self-consistent M$_{*}$ from any band predicted by each re-calibrated CMLR should be highly consistent with the reference M$_{*}$. We therefore examined the offset between the different reference M$_{*}$ in Appendix \ref{sec:ref_mass}, which shows that the M$_{*}^{r}$ are systematically 0.11, 0.25, and 0.33 dex lower than the M$_{*}^{V}$ for B03, IP13, and Z09 (Table \ref{tab:A_tab1}) for the same sample in this work. \section{Summary and Conclusions}\label{sec:conclusion} Based on a sample of LSBGs, we examined five representative CMLRs: B03, IP13, Z09, RC15(BC03), and RC15(FSPS). Each individual CMLR gives different stellar mass (M$_{*}$) estimates for the same sample when it is applied in the different photometric bands of the SDSS optical $g$, $r$, $i$, $z$ and the NIR J, H, and K. M$_{*}^{g}$ closely agree with M$_{*}^{r}$, but M$_{*}^{j}$ ($j$ = $i$, $z$, J, H, K) all deviate from M$_{*}^{r}$, with relatively larger deviations in the NIR bands. Assuming M$_{*}^{r}$ as the reference M$_{*}$, we re-normalized the M$_{*}$ estimates from each of the other bands $j$ (M$_{*}^{j}$) to the reference mass, and subsequently obtained the re-calibrated CMLRs by fitting the relations between $g$-$r$ and the $\gamma_{*}^{j}$ calculated from the re-normalized M$_{*}^{j}$ for each original CMLR ($j$ = $i$, $z$, J, H, K). The $g$-$r$ color is the primary color indicator in the re-calibrated relations, which depend little on $r$ - $z$ or J - K. Each re-calibrated CMLR produces internally self-consistent M$_{*}$ estimates for the same galaxy when it is applied in the different bands $j$ ($j$ = $r$, $i$, $z$, J, H, K), and this self-consistent M$_{*}$ should be ``the same as'' or highly consistent with the reference mass M$_{*}^{r}$. Besides, the differences in the originally predicted $\gamma_{*}^{j}$ among the five different CMLRs are largely reduced after re-calibration, particularly in the NIR bands.
Compared with the pioneering work of MS14, the $\gamma_{*}^{\rm K}$ predicted by the re-calibrated CMLRs in this work are, respectively, 0.03, 0.09, and 0.26 dex lower than the $\gamma_{\rm B-\rm V=0.6}^{K}$ predicted by the MS14 re-calibrations for B03, IP13, and Z09. These offsets can be fully explained by the combination of the major systematic offsets caused by the different choices of the reference mass (0.03, 0.11, and 0.23 dex) and the minor systematic offsets caused by the independent procedures (0.05, 0.01, and 0.04 dex) between this work and MS14. This implies that, considering the major effect of the different choices of the reference M$_{*}$ and the minor effect of the independent procedures, the re-calibrated CMLRs in this work, based on a sample of LSBGs, give $\gamma_{*}^{\rm K}$ at the equivalent color that are very consistent with the re-calibrated CMLRs of MS14. Thus, no room is left for any difference in the re-calibrations caused by a possible bias of the LSB galaxy sample relative to the disk galaxy sample of MS14. It is difficult to judge which choice of the reference mass is better, because the choices have to be made in different photometric filter systems. However, it is necessary to give the offsets between the final self-consistent M$_{*}$ predicted by the re-calibrated relations with the different assumptions of the reference mass (M$_{*}^{r}$ versus M$_{*}^{\rm V}$). The M$_{*}^{r}$-based re-calibrated relations in this work (Table \ref{tab:self_consistent_CMLR}) produce final self-consistent M$_{*}$ which are systematically 0.11, 0.25, and 0.33 dex lower than those produced by the M$_{*}^{\rm V}$-based re-calibrated CMLRs of MS14, for B03, IP13, and Z09. \acknowledgements The authors appreciate the anonymous referee for his/her thorough reading and constructive comments. D.W. gratefully acknowledges the China Scholarship Council (CSC) for the scholarship that enabled her to visit Professor Stacy S. McGaugh in the Astronomy Department at Case Western Reserve University (CWRU) as a visiting scholar. The work presented in this paper was fully developed and completed during her visit to CWRU. D.W. is also supported by the National Natural Science Foundation of China (NSFC) grant Nos. U1931109 and 11733006, the Young Researcher Grant funded by the National Astronomical Observatories, Chinese Academy of Sciences (CAS), and the Youth Innovation Promotion Association, CAS. \vspace{5mm} \facilities{} \software{}
{ "timestamp": "2020-07-20T02:02:57", "yymm": "2007", "arxiv_id": "2007.08610", "language": "en", "url": "https://arxiv.org/abs/2007.08610" }
\section{Introduction} In gauge theories with fundamental representation matter fields, one can often dial parameters in a manner which smoothly interpolates between a Higgs regime and a confining regime without undergoing any change in the realization of global symmetries~\cite{tHooft:1979yoe,Osterwalder:1977pc,Fradkin:1978dv,Banks:1979fi}. In the Higgs regime gauge fields become massive via the usual Higgs phenomenon, while in the confining regime gauge fields also become gapped (or acquire a finite correlation length) due to the non-perturbative physics of confinement, with an approximately linear potential appearing between heavy fundamental test charges over a finite range of length scales which is limited by the lightest meson mass. In this paper we examine situations in which the Higgs and confining regimes of such theories can be sharply distinguished. This is, of course, an old and much-studied issue. In specific examples, when both regimes have identical realizations of global symmetries, it has been shown that confining and Higgs regimes can be smoothly connected with no intervening phase transitions~\cite{Fradkin:1978dv,Banks:1979fi}. These examples, which we will refer to as the ``Fradkin-Shenker-Banks-Rabinovici theorem,'' have inspired a widely held expectation that there can be no useful gauge-invariant order parameter distinguishing Higgs and confining phases in any gauge theory with fundamental representation matter fields.% \footnote {% By a useful order parameter we mean an expectation value of a physical observable whose non-analytic change also indicates non-analytic behavior in thermodynamic observables and correlation functions of local operators. For a rather different take on these issues, see Refs.~\cite{Fredenhagen:1985ft,Greensite:2017ajx,Greensite:2018mhh,Greensite:2020nhg}. } But there are physically interesting situations in which the Fradkin-Shenker-Banks-Rabinovici theorem does not apply. We are interested in systems where no local order parameter can distinguish Higgs and confining regimes and yet the conventional wisdom just described is incorrect. We will analyze model theories, motivated by the physics of dense QCD, where Higgs and confining regimes cannot be distinguished by the realization of global symmetries and yet these are sharply distinct phases necessarily separated by a quantum phase transition in the parameter space of the theory. We will consider a class of gauge theories with two key features. The first is that they have fundamental representation scalar fields which are charged under a $U(1)$ global symmetry. Second, this $U(1)$ global symmetry is spontaneously broken in \emph{both} the Higgs and confining regimes of interest. In this class of gauge theories, we argue that one can define a natural non-local order parameter which does distinguish the Higgs and confinement regimes. This order parameter is essentially the phase of the expectation value of the holonomy (Wilson loop) of the gauge field around $U(1)$ global vortices; its precise definition is discussed below. We will find that this vortex holonomy phase acts like a topological observable; it is constant within each regime but has differing quantized values in the two regimes.% \footnote {% This statement assumes a certain global flavor symmetry. In the absence of such a symmetry, the phase of the vortex holonomy is constant in the $U(1)$-broken confining regime and changes non-analytically at the onset of the Higgs regime.
} We present a general argument --- verifying it by explicit calculation where possible --- that implies that non-analyticity in our vortex holonomy observable signals a genuine phase transition separating the $U(1)$-broken Higgs and $U(1)$-broken confining regimes. The Higgs-confinement transition we discuss in this paper does not map cleanly onto the classification of topological orders which is much discussed in modern condensed matter physics~\cite{Wegner:1984qt,Wen:1989zg,Wen:1989iv,Wen:2012hm}. The basic reason is that the topological order classification is designed for gapped phases of matter, while here we focus on gapless phases. Some generalizations of topological order to gapless systems have been considered in the condensed matter literature, see e.g. Refs.~\cite{SACHDEV200258,Kitaev:2006lla,Sachdev:2018ddg}, but these examples differ in essential ways from the class of models we consider here. Our arguments also do not cleanly map onto the related idea of classifying phases based on realizations of higher-form global symmetries~\cite{Gukov:2013zka,Kapustin:2013uxa,Kapustin:2014gua, Gaiotto:2014kfa,Metlitski:2017fmd,Lake:2018dqm,Wen:2018zux}, because the models we consider do not have any obvious higher-form symmetries. But there is no reason to think that existing classification ideas can detect all possible phase transitions. We argue that our vortex order parameter provides a new and useful way to detect certain phase transitions which are not amenable to standard methods. Let us pause to explain in a bit more detail why the Fradkin-Shenker-Banks-Rabinovici theorem does not apply to theories of the sort we consider. The Fradkin-Shenker-Banks-Rabinovici theorem presupposes that Higgs fields are uncharged under any global symmetry. This assumption may seem innocuous. After all, if Higgs fields are charged under a global symmetry, it is tempting to think that this global symmetry will be spontaneously broken when the Higgs fields develop an expectation value, implying a phase transition associated with a change in symmetry realization and detectable with a local order parameter. In other words, a typical case lying within the Landau paradigm of phase transitions. But such a connection between Higgs-confinement transitions and a change in global symmetry realization is model dependent. In the theories we consider in this paper, as well as in dense QCD, these two phenomena are unrelated. Our scalar fields will carry a global $U(1)$ charge, but crucially, the realization of all global symmetries will be the same in the confining and Higgs regimes of interest. Consequently, the Fradkin-Shenker-Banks-Rabinovici theorem does not apply to these models and yet the confining and Higgs regimes are not distinguishable within the Landau classification of phases. Nevertheless, we will see that they are distinct. The basic ideas motivating this paper were introduced by three of us in an earlier study of cold dense QCD matter~\cite{Cherman:2018jir}. We return to this motivation at the end of this paper in Sec.~\ref{sec:QCD}, where we generalize our analysis to cover non-Abelian gauge theories in four spacetime dimensions and explain why it provides compelling evidence against the Sch\"afer-Wilczek conjecture of quark-hadron continuity in dense QCD~\cite{Schafer:1998ef}. The bulk of our discussion is focused on a simpler set of model theories which will prove useful to refine our understanding of Higgs-confinement phase transitions. 
We begin, in Sec.~\ref{sec:our_model}, by introducing a simple Abelian gauge theory in three spacetime dimensions in which Higgs and confinement physics can be studied very explicitly. In Sec.~\ref{sec:vortices_and_holonomies} we introduce our vortex order parameter and use it to infer the existence of a Higgs-confinement phase transition. Sec.~\ref{sec:QCD} discusses the application of our ideas to four-dimensional gauge theories such as QCD, while Sec.~\ref{sec:conclusion} contains some concluding remarks. Finally, in Appendices \ref{sec:EoMAppendix}--\ref{sec:gaugingU1} we collect some technical results on vortices, discuss embedding our Abelian model within a non-Abelian theory, and consider the consequences of gauging our $U(1)$ global symmetry to produce a $U(1) \times U(1)$ gauge theory. \section{The model} \label{sec:our_model} We consider compact $U(1)$ gauge theory in three Euclidean spacetime dimensions. Let $A_{\mu}$ denote the (real) gauge field. Our analysis assumes $SO(3)$ Euclidean rotation symmetry, together with a parity (or time-reversal) symmetry. Parity symmetry precludes a Chern-Simons term, so the gauge part of the action is just a photon kinetic term, \begin{align} S_{\rm \gamma} = \int d^{3}x \> \frac{1}{4e^2} \, F_{\mu\nu}F^{\mu\nu} \,. \label{eq:S_gamma} \end{align} The statement that the gauge group is compact (in this continuum description) amounts to saying that the Abelian description (\ref{eq:S_gamma}) is valid below some scale $\Lambda_{\rm UV}$, and that the UV completion of the theory above this scale allows finite action monopole-instanton field configurations whose total magnetic flux is quantized \cite{Polyakov:1976fu}. Specifically, we demand that the flux through any 2-sphere is an integer multiple of $2\pi$, \begin{align} \int_{S^2} F = 2\pi k \,, \qquad k \in \mathbb{Z} \,, \label{eq:flux_quant} \end{align} where $F \equiv \tfrac{1}{2} F_{\mu\nu} \> dx^{\mu} \wedge dx^{\nu}$ is the 2-form field strength. Condition (\ref{eq:flux_quant}) implies charge quantization and removes the freedom to perform arbitrary field rescalings of the form $A \to A' \equiv (q'/q) \, A$. As shown by Polyakov, the presence of monopole-instantons, regardless of how dilute, leads to confinement on sufficiently large distance scales \cite{Polyakov:1976fu}. \subsection{Action and symmetries} We choose the matter sector of our model to be composed of two oppositely charged scalar fields, $\phi_+$ and $\phi_-$, plus one neutral scalar $\phi_0$. We assign unit gauge charges $q = \pm 1$ to the charged fields, making them analogous to fundamental representation matter fields in a non-Abelian gauge theory.% \footnote {% The fact that our charged matter fields have minimal charges of $\pm 1$ is an essential difference from a similar model studied by Sachdev and Park~\cite{SACHDEV200258} in a condensed matter context, see also \cite{Sachdev:2018ddg}. The model of Ref.~\cite{SACHDEV200258} has a $U(1)$ global symmetry and fields with charges $-1$ and $+2$ under an emergent $U(1)$ gauge symmetry. The existence of non-minimally charged matter fields allowed Sachdev and Park to use topological order ideas to delineate distinct phases. That approach does not work in our model. } We require the theory to have a single zero-form global $U(1)$ symmetry% \footnote {% A zero-form global symmetry is just an ordinary global symmetry which acts on local operators. } under which the fields $\phi_\pm$ both have charge assignments of $-1$ while $\phi_0$ has a charge assignment of $+2$.
These charge assignments, summarized here: \begin{align} \begin{array}{c|ccc} & \phantom{+}\phi_+ & \phantom{+}\phi_- & \phantom{+}\phi_0 \\ \hline U(1)_{\rm gauge} & +1 & -1 & \phantom{+}0 \\ U(1)_{\rm global} & -1 & -1 & +2 \end{array} \label{eq:chargeTable} \end{align} are chosen in a manner which will allow independent control of the Higgsing of the $U(1)$ gauge symmetry (or lack thereof) and the realization of the $U(1)$ global symmetry by adjusting suitable mass parameters. This is the essential structure needed to examine the issues motivating this paper in the context of a model Abelian theory. The complete action of our model consists of the gauge action (\ref{eq:S_gamma}), standard scalar kinetic terms, plus a scalar potential containing interactions consistent with the above symmetries, \begin{align} \label{eq:the_model} S = \int d^{3}x \, &\left[ \frac{1}{4e^2} \, F_{\mu \nu}^2 + |D_{\mu}\phi_{+}|^2 + |D_{\mu} \phi_{-}|^2 + m_{c}^2 \, \big(|\phi_{+}|^2+|\phi_{-}|^2 \big) + |\partial_{\mu} \phi_{0}|^2 \right. + m_{0}^2 \, |\phi_{0}|^2 \nonumber\\ &\left. \vphantom{\int} - \epsilon \, \big(\phi_{+} \phi_{-} \phi_{0} + \mathrm{h.c.} \big)\right. + \lambda_{c} \big(|\phi_{+}|^4+|\phi_{-}|^4 \big) + \lambda_{0} |\phi_{0}|^4 \nonumber\\ &\left. \vphantom{\int} + g_{c} \big(|\phi_{+}|^6+|\phi_{-}|^6 \big) + g_{0} |\phi_{0}|^6 + \cdots + V_{\rm m}(\sigma) \right] . \end{align} The mass dimensions of the various couplings are $[e^2] = [\lambda_{c}] =[\lambda_{0}]=1$, $[\epsilon] = 3/2$, and $[g_c] = [g_0] = 0$. The ellipsis ($\cdots$) represents possible further scalar self-interactions, consistent with the imposed symmetries, arising via renormalization. The term $V_{\rm m}(\sigma)$ describes the effects of monopole-instantons, and is given explicitly below. The cubic term $\epsilon \, \phi_+ \phi_- \phi_0$ ensures that the model has a single $U(1)$ global symmetry, not multiple independent phase rotation symmetries. From here onward, we will denote the $U(1)$ global symmetry by $U(1)_{\rm G}$. The simplest local order parameter for the $U(1)_{\rm G}$ symmetry is just the neutral field expectation value $\langle \phi_0 \rangle$. This order parameter has a charge assignment (\ref{eq:chargeTable}) of +2 under the $U(1)_{\rm G}$ symmetry; there are no gauge invariant local order parameters with odd $U(1)_{\rm G}$ charge assignments. In addition to the $U(1)$ gauge redundancy and the $U(1)_{\rm G}$ global symmetry, this model has two internal $\ZZ$ discrete symmetries. One is a conventional (particle $\leftrightarrow$ antiparticle) charge conjugation symmetry, \begin{align} (\mathbb{Z}_2)_{\rm C} :\quad \phi_\pm \to \phi_\pm^{*} \,,\quad \phi_0 \to \phi_0^* \,,\quad A_{\mu} \to -A_{\mu} \,. \label{eq:ZC} \end{align} The other is a charged field permutation symmetry, \begin{align} (\mathbb{Z}_2)_{\rm F}:\quad \phi_+ \leftrightarrow \phi_- \,,\quad A_{\mu} \to -A_{\mu} \,. \label{eq:ZP} \end{align} A conserved current $j_{\rm mag}^{\mu} \equiv \epsilon^{\mu\nu\lambda} F_{\nu \lambda}$ associated with a $U(1)$ magnetic global symmetry is also present if monopole-instanton effects are neglected. But for our \emph{compact} Abelian theory this symmetry is not present. The functional integral representation of the theory includes a sum over finite-action magnetic monopole-instanton configurations with all integer values of total magnetic charge. 
These induce corrections to the effective potential (below the scale $\Lambda_{\rm UV}$) of the form \cite{Polyakov:1976fu} \begin{align} V_{\rm m}(\sigma) = - \mu_{\rm UV}^3 \, e^{-S_{\rm I}} \, \cos(\sigma) \,. \label{eq:S_monopole} \end{align} Here $S_{\rm I}$ is the minimal action of a monopole-instanton, and $\sigma$ is the dual photon field, related to the original gauge field by the Abelian duality relation% \footnote {% Expression (\ref{eq:S_monopole}) relies on a dilute gas approximation, valid when the instanton action is large, $S_{\rm I} \gg 1$. The duality relation (\ref{eq:duality_relation}) appears when one imposes the Bianchi identity for $F_{\mu\nu}$ by adding a Lagrange multiplier term $i\int d^{3}x\, \frac{\sigma}{4\pi} \epsilon^{\mu\nu\lambda} \, \partial_{\mu} F_{\nu\lambda}$ to the Euclidean action. Relation \eqref{eq:duality_relation} is the resulting equation of motion for $F_{\mu\nu}$, and integrating out $F_{\mu\nu}$ gives the Abelian dual representation of Maxwell theory.} \begin{align} F_{\mu\nu} = \frac{ie^2 }{2\pi}\, \epsilon_{\mu \nu \lambda} \, \partial^{\lambda} \sigma \,. \label{eq:duality_relation} \end{align} With this normalization the dual photon field is a periodic scalar, $\sigma \equiv \sigma + 2\pi$, with the Maxwell action becoming the kinetic term $\frac{1}{2}\left(\frac{e}{2\pi}\right)^2 (\partial\sigma)^2$. The parameter $\mu_{\rm UV}$ is a short-distance scale associated with the inverse core size of monopole-instantons. The $U(1)$ magnetic transformations act as arbitrary shifts on the dual photon field, $\sigma \to \sigma + c$. Such shifts are clearly not a symmetry, except for integer multiples of $2\pi$. Consequently, the $U(1)_{\rm G}$ phase rotation symmetry is the only continuous global symmetry in our model. In summary, the faithfully-acting internal global symmetry group of our model is \begin{align} G_{\rm internal} = \frac{\left[ U(1)_{\rm G} \rtimes (\mathbb{Z}_2)_{\rm C} \right] \times (\mathbb{Z}_2)_{\rm F} }{\mathbb{Z}_2} \,. \end{align} The quotient by $\mathbb{Z}_2 \subset U(1)_{\rm G}: \phi_{\pm} \to - \phi_{\pm}$ is necessary because it also lies in the gauge group $U(1)$. When the charged scalar mass squared, $m_c^2$, is sufficiently negative this theory has a Higgs regime in which the charged scalar fields are ``condensed.'' In this regime gauge field fluctuations are suppressed since the photon acquires a mass term, \begin{align} \big(|\langle\phi_+\rangle|^2+|\langle \phi_- \rangle|^2 \big) A_{\mu}A^{\mu} \equiv \frac{m_A^2}{2e^2} \, A_\mu A^\mu \,, \label{eq:m_A} \end{align} (to lowest order in unitary gauge).% \footnote {% Our charged scalar fields may be viewed as analogs of the electron pair condensate in a Ginsburg-Landau treatment of superconductivity, in which case $m_A$ is the Meissner mass whose inverse gives the penetration length of magnetic fields. } Monopole-instanton--antimonopole-instanton pairs become bound by flux tubes with a positive action per unit length $T_{\rm mag}$.\footnote {% On sufficiently long length scales when the flux tube length $L \gtrsim 2 S_{\rm I} /T_{\rm mag}$, these magnetic flux tubes can break due to production of monopole-instanton--antimonopole-instanton pairs. This is completely analogous to the situation in the confining regime, discussed next, where electric flux tubes exist over a limited range of scales controlled by the mass of fundamental dynamical charges. } In contrast, for sufficiently positive $m_c^2$ our model should be regarded as a confining gauge theory.
Recall that in the context of QCD, the confining regime is characterized by a static test quark--antiquark potential which rises linearly with separation, $V_{q\bar q} \sim \sigma r$ (here $\sigma$ denotes the string tension, not the dual photon), for separations large compared to the strong scale, $r \gg \Lambda_{\rm QCD}^{-1}$. But such a linear potential is only present for separations where the confining string cannot break, which requires that $\sigma r < 2 m_q$, with $m_q$ the mass of dynamical quarks. So confinement is only a sharply defined criterion in the heavy quark limit, $m_q \gg \sigma/\Lambda_{\rm QCD} =\mathcal O(\Lambda_{\rm QCD})$. Nevertheless, it is conventional to speak of QCD as a confining theory even with light quarks, as this is a qualitatively useful picture of the relevant dynamics. This summary applies verbatim to our compact $U(1)$ 3D gauge theory with massive unit-charge matter, with $\Lambda_{\rm QCD}$ replaced by an appropriate non-perturbative scale which depends (exponentially) on the monopole-instanton action $S_{\rm I}$ \cite{Polyakov:1976fu}. Finally, we note that in the absence of monopole-instanton effects, oppositely charged static test particles in 3D Abelian gauge theory would experience logarithmic Coulomb interactions which grow without bound with increasing separation. Such a phase could be termed ``confined,'' but for our purposes this terminology is not helpful. We find it more appropriate to reserve the term ``confinement'' for situations where the potential between test charges is linear over a significant range of distance scales. With this terminology, 3D compact $U(1)$ gauge theory with finite-action monopole-instantons and very heavy charged matter is confining, while the non-compact version of the theory, which does not have a regime with a linear potential between test charges, is not confining. \subsection{Analogy to dense QCD} Our 3D Abelian model is designed to mimic many features of real 4D QCD at non-zero density. Explicitly, \begin{enumerate} \item Both theories contain fundamental representation matter fields and are confining in the sense described above. Of course, the gauge groups are completely different: $SU(N)$ versus $U(1)$. \item QCD with massive quarks of equal mass has a vector-like $U(N_f)/{\mathbb{Z}_N}$ internal global symmetry. The quotient arises because $\mathbb{Z}_N$ transformations are part of the $SU(N)$ gauge symmetry. In our model, the corresponding global symmetry is $[(\mathbb{Z}_2)_{\rm F} \times U(1)_{\rm G} ]/{\mathbb{Z}_2}$. The $(\mathbb{Z}_2)_{\rm F} \times U(1)_{\rm G}$ symmetry is analogous to $U(N_f)$, while the discrete quotient arises for the same reason as in QCD. \item The scalar fields in the 3D Abelian model may be regarded as playing the role of color anti-fundamental diquark operators which acquire non-zero vacuum expectation values in high density QCD, see Ref.~\cite{Alford:2007xm} for a review. The symmetry group $U(1)_{\rm G}$ is analogous to quark number $U(1) \subset U(N_f)$, while $U(1)_{\rm G}/\mathbb{Z}_2$ is analogous to baryon number $U(1)_B$. Note one distinction between the transformation properties of the scalar fields in our Abelian model and those of the diquark condensates in QCD: the former have charge $1$ under our $U(1)_{\rm G}$ group whereas the latter have charge $2$ under quark number. The $(\mathbb{Z}_2)_{\rm F}$ permutation symmetry of our 3D Abelian model is analogous to the $\mathbb{Z}_{N_f} \subset U(N_f)$ cyclic flavor permutation symmetry of 4D QCD.
\item Since the charged scalars $\phi_{\pm}$ are analogous to anti-fundamental diquarks in three-color QCD, $\phi^{\dag}_{+}\phi^{\dag}_{-}$ is akin to a dibaryon. This means that $\phi_0$ can also be interpreted as a dibaryon interpolating operator, and the condensation of $\phi_0$ in our model is directly analogous to the dibaryon condensation which occurs in dense QCD. \item In QCD, the Vafa-Witten theorem~\cite{Vafa:1983tf} implies that phases with spontaneously broken $U(1)_B$ symmetry can only appear at non-zero baryon density, while in our Abelian model $U(1)_{\rm G}$-broken phases can appear at zero density. This difference reflects the fact that QCD contains only fermionic matter fields, while our Abelian model has fundamental scalar fields. \end{enumerate} \subsection{Symmetry constraints on the phase structure} \label{sec:Landau_constraints} We begin analyzing the phase structure of the model \eqref{eq:the_model} using the Landau paradigm based on realizations of symmetries with local order parameters. We will consider the phase diagram as a function of the charged and neutral scalar masses, $m_c^2$ and $m_0^2$. We focus on the regime where quartic and sextic scalar self-couplings are positive, the cubic, quartic and gauge couplings are comparable ($\epsilon/e^3$, $|\lambda_c|/e^2$ and $|\lambda_0|/e^2$ all $\mathcal O(1)$), and the dimensionless sextic couplings are small, $g_c$, $g_0 \ll 1$. The simplest phase diagram consistent with our analysis is sketched in Fig.~\ref{fig:3D_phase_diagram}. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{abelian_phasediagram.pdf} \caption{A sketch of the simplest consistent phase diagram of our model as a function of the charged and neutral scalar mass parameters $m_c^2$ and $m_0^2$. The four corners correspond to weakly-coupled regimes in parameter space; curves in the interior of the figure represent phase transitions. These phase transition curves are robust: they cannot be evaded by any variation of the parameters of the model consistent with its symmetries. \label{fig:3D_phase_diagram} } \end{figure} Interpreting Fig.~\ref{fig:3D_phase_diagram} as if it were a map, let us refer to the four weakly-coupled corners of parameter space by their compass directions: \begin{subequations} \label{eq:corners} \begin{align} \textrm{NW} &: \{-m_c^2 \gg e^4,\, m_0^2 \gg e^4 \}, &\textrm{NE} &: \{m_c^2 \gg e^4,\, m_0^2 \gg e^4 \}, \\ \textrm{SW} &: \{-m_c^2 \gg e^4,\, -m_0^2 \gg e^4 \}, &\textrm{SE} &: \{m_c^2 \gg e^4,\, -m_0^2 \gg e^4 \}, \end{align} \end{subequations} each of which we discuss in turn. In this section we explain the origin of the phase transition curve (orange) separating the NE region from the W side of Fig.~\ref{fig:3D_phase_diagram}, as well as the (blue) curve separating the NE and SE regions. The bulk of the paper is dedicated to understanding the origin of the phase transition curve (green) separating the SE region from the W side of Fig.~\ref{fig:3D_phase_diagram}. First, consider region NE where $m_{c}^2$, $m_0^2 \gg e^4$. In this regime our model has a unique gapped vacuum state and no broken symmetry. To see this, one may integrate out all the matter fields and observe that the resulting tree-level effective action is \begin{align} S_{\rm eff} = \int d^3x\, \left[ \frac{1}{4e^2} \, F_{\mu\nu}^2 + V_{\rm m}(\sigma)\right]\,.
\end{align} The monopole potential $V_{\rm m}(\sigma)$ has a unique minimum for the dual photon $\sigma$ and induces a non-zero photon mass, \begin{equation} m_{\gamma}^2 = 4\pi^2({\mu_{\rm UV}^3}/{e^2}) \, e^{-S_{\rm I}} \,. \label{eq:mgamma} \end{equation} This follows from expanding $V_{\rm m}(\sigma)$ to quadratic order about its minimum and using the dual kinetic term $\frac{1}{2}\left(\frac{e}{2\pi}\right)^2(\partial\sigma)^2$. Hence, the vacuum is gapped and unique. Both the continuous $U(1)_{\rm G}$ and the discrete $(\mathbb{Z}_2)_{\rm C}$ and $(\mathbb{Z}_2)_{\rm F}$ global symmetries are unbroken, and hence region NE may be termed ``confining and unbroken.'' Now consider the entire E side where $m_c^2 \gg e^4$ while the neutral mass $m_0^2$ is arbitrary. Then one may integrate out the charged fields and the effective action becomes \begin{align} S_{\rm eff} = \int d^3x\, \left[ \frac{1}{4e^2}\, F_{\mu\nu}^2 + V_{\rm m}(\sigma) + |\partial_{\mu} \phi_0|^2 + m_0^2 |\phi_0|^2 + \lambda_0 |\phi_0|^4 + g_0 |\phi_0|^6 +\cdots \right]\,. \end{align} This is a 3D XY model plus a decoupled compact $U(1)$ gauge theory. The photon is still gapped by the Polyakov mechanism. If we take $m_0^2 \gg |\lambda_0|^2$, then we come back to the discussion of the previous paragraph. If we take $-m_0^2 \gg |\lambda_0|^2$, then $\phi_0$ develops a non-vanishing expectation value, the $U(1)_{\rm G}/\mathbb{Z}_2$ symmetry is spontaneously broken, and there is a single massless Nambu-Goldstone boson. So region SE is ``confining and $U(1)_{\rm G}$ symmetry broken.'' The discrete $(\mathbb{Z}_2)_{\rm F}$ symmetry is unbroken in this region, as is a redefined $(\mathbb{Z}_2)_{\rm C}$ symmetry which combines the basic $(\mathbb{Z}_2)_{\rm C}$ transformation (\ref{eq:ZC}) with a $U(1)_{\rm G}$ transformation that compensates for the arbitrary phase of the condensate $\langle \phi_0 \rangle$. This symmetry-broken regime must be separated from the symmetry-unbroken regime by a phase transition depending on the value of $m_0^2/\lambda_0^2$. If we take our quartic and sextic couplings to be positive, this is just the well-known XY model phase transition, which is second order in three spacetime dimensions. Next, consider what happens on the W side where $-m_c^2 \gg e^4$ while $m_0^2$ is arbitrary. In this case the charged scalar fields $\phi_\pm$ will acquire non-zero expectation values (using gauge-variant language), with $v_c \equiv |\langle \phi_\pm \rangle| = \mathcal O(|m_c \, \lambda_c^{-1/2}|)$.% \footnote {% In this and subsequent parametric estimates, we neglect the cubic and sextic couplings. For the sextic couplings this is justified by our assumption that they are small. We have dropped $\epsilon$-dependence purely for simplicity: taking it into account is straightforward but results in much more cumbersome expressions. } This has several effects. First, since these fields transform non-trivially under the $U(1)_{\rm G}$ symmetry, this global symmetry is spontaneously broken, leading to a massless Nambu-Goldstone excitation. Second, the $U(1)$ gauge field becomes Higgsed, as discussed above, with the photon acquiring a mass $m_A$. Writing $\phi_\pm = (v_c + H_\pm/\sqrt 2) \, e^{-i\chi}$, up to an arbitrary $U(1)$ gauge transformation, the resulting effective action has the form \begin{align} S = \int & d^{3}x \, \left[ \frac{1}{4e^2} \, F_{\mu\nu}^2 + \tfrac{1}{2} m_A^2 \, A_{\mu}A^{\mu} + |\partial_\mu \phi_0|^2 + m_0^2 |\phi_0|^2 + \lambda_0 |\phi_0|^4 \right. \nonumber\\ &\left.
\vphantom{\int d^3x} + 2 v_c^2 (\partial_{\mu}\chi)^2 - 2\epsilon v_c^2 \, \mathrm{Re}\,( e^{-2i \chi} \phi_0) + \sum_{i = \pm} \Big[ \tfrac{1}{2} ( \partial_{\mu} H_{i})^2 + \tfrac{1}{2} m_H^2 H_i^2 \Big] + \cdots \right] , \end{align} where $\chi$ is the $U(1)_{\rm G}$ Nambu-Goldstone boson and $H_\pm$ are real Higgs modes with mass $m_H$. This regime, extending inward from the W boundary of the phase diagram, may be termed ``Higgsed and $U(1)_{\rm G}$ symmetry broken.'' The discrete symmetries remain unbroken in the same manner as in region SE. Regardless of the sign of $m_0^2$, the neutral scalar $\phi_0$ acquires a non-zero vacuum expectation value whose phase, $2\chi$, is set by the phase of the Higgs condensate. As $m_0^2$ is varied from large positive to large negative values, the magnitude $|\langle \phi_0 \rangle|$ varies from a small $\mathcal O(\epsilon v_c^2 \, m_0^{-2})$ value to a large $\mathcal O(|m_0| \lambda_0^{-1/2})$ value, while always remaining non-zero. Throughout this Higgs regime monopole-instanton--antimonopole-instanton pairs become linearly confined by magnetic flux tubes, as noted earlier. The fact that the $U(1)_{\rm G}$ symmetry is spontaneously broken in this Higgs regime means that the entire W region of parameter space with $-m_c^2 \gg e^4$ must be separated by a phase transition from the trivially gapped region NE where $m_c^2$ and $m_0^2$ are large and positive. But the pattern of global symmetry breaking throughout the W side Higgs regime of $-m_c^2 \gg e^4$ is identical to that in region SE where $m_c^2 \gg e^4$ and $-m_0^2 \gg \lambda_0^2$. This raises the central question of this paper: \bigskip \emph{Are the Higgs and confining $U(1)_{\rm G}$-breaking regimes smoothly connected, or are they distinct phases?} \bigskip \noindent As summarized in the introduction and sketched in Fig.~\ref{fig:3D_phase_diagram}, we will find that the Higgs and confining $U(1)_{\rm G}$-breaking regimes must be distinct phases, separated by at least one phase transition, even though there are no distinguishing local order parameters. Before leaving this section, we pause to consider two further issues: the realization of the $(\mathbb{Z}_2)_{\rm F}$ symmetry and the nature of the $\epsilon \to 0$ limit. In our discussion below we will assume that the $(\mathbb{Z}_2)_{\rm F}$ symmetry is not spontaneously broken. It is possible to tune the scalar potential to break $(\mathbb{Z}_2)_{\rm F}$ spontaneously, but this results in a Higgs phase which is separated by an obvious phase boundary from both the $U(1)_{\rm G}$-broken confining phase and the $(\mathbb{Z}_2)_{\rm F}$-invariant Higgs phase. This makes the $(\mathbb{Z}_2)_{\rm F}$-broken regime uninteresting for the purposes of this paper. Next, one should observe that $\epsilon \to 0$ is a non-generic limit of the model. An additional global symmetry which purely phase rotates the charged fields, $\phi_{\pm} \to e^{i\alpha} \, \phi_{\pm}$, is present when $\epsilon = 0$; we denote this symmetry as $U(1)_{\rm extra}$. The $\epsilon=0$ theory has four distinct phases distinguished by realizations of the $U(1)_{\rm G}$ and $U(1)_{\rm extra}$ symmetries. There is a phase where only the $U(1)_{\rm extra}$ symmetry is spontaneously broken, with one Nambu-Goldstone boson. This phase is not present at non-zero $\epsilon$.
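A quick check with the charge assignments \eqref{eq:chargeTable} shows how this phase is realized: it requires the charged scalars to condense while $\langle \phi_0 \rangle = 0$, which is possible only at $\epsilon = 0$, since the cubic term otherwise seeds a $\phi_0$ expectation value. At the level of the Lie algebra, invariance of $\langle \phi_\pm \rangle$ under a combined transformation with parameters $(\alpha_{\rm G}, \alpha_{\rm extra}, \alpha_{\rm gauge})$ requires
\begin{align}
-\alpha_{\rm G} + \alpha_{\rm extra} + \alpha_{\rm gauge} = 0 \,, \qquad
-\alpha_{\rm G} + \alpha_{\rm extra} - \alpha_{\rm gauge} = 0 \,,
\end{align}
so $\alpha_{\rm gauge} = 0$ and $\alpha_{\rm G} = \alpha_{\rm extra}$: a diagonal global $U(1)$ survives while the orthogonal combination, which may be identified with $U(1)_{\rm extra}$ in a suitable basis, is broken, yielding a single Nambu-Goldstone boson.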
At $\epsilon = 0$, the $U(1)_{\rm G}$-broken Higgs phase in Fig.~\ref{fig:3D_phase_diagram} becomes a phase with two spontaneously broken continuous global symmetries, $U(1)_{\rm G}$ and $U(1)_{\rm extra}$, and has two Nambu-Goldstone bosons. This is a distinct symmetry realization from the $U(1)_{\rm G}$-broken confining regime with only a single Nambu-Goldstone boson, implying, by the usual Landau paradigm reasoning, at least one intervening phase transition. When $\epsilon$ is non-zero but very small compared to all other scales, there is a parametrically light pseudo-Nambu-Goldstone boson with a mass $m_{\rm pNGB} \propto \sqrt{\epsilon}$ in the Higgs regime. Determining whether the $U(1)_{\rm G}$-broken Higgs and confining regimes remain distinct for non-zero values of $\epsilon$ is the goal of our next section, in which we examine the long-distance behavior of holonomies around vortices. In this analysis, it will be important that the holonomy contour radius be large compared to microscopic length scales --- which include the Compton wavelength of the pseudo-Goldstone boson, $m_{\rm pNGB}^{-1}$. There is non-uniformity between the large distance limit of the holonomy and the $\epsilon\to 0$ limit, and consequently the physics of interest must be studied directly in the theory with $\epsilon \ne 0$. \section{Vortices and holonomies} \label{sec:vortices_and_holonomies} \subsection{The order parameter $O_{\Omega}$} \label{sec:order_param_def} Consider the portion of the phase diagram in which the $U(1)_{\rm G}$ symmetry is spontaneously broken. Then the field $\phi_0$ has a non-vanishing expectation value and the spectrum contains a Nambu-Goldstone boson. The Goldstone manifold has a non-trivial first homotopy group, $\pi_1(U(1)_{\rm G}) = \mathbb{Z}$. This implies that there are stable global vortex excitations, which are particle-like excitations in two spatial dimensions. Vortex excitations may be labeled by an integer winding number $w$ indicating the number of times the phase of $\langle \phi_0 \rangle$ wraps the unit circle as one encircles a vortex. More explicitly, one may write the winding number as a contour integral of the gradient of the phase, \begin{equation} w = \frac 1{2\pi} \oint_C dx^\mu \, u_\mu \,, \end{equation} where $u_\mu \equiv -i \, \widehat{\phi}_0^{\,*} \, \partial_\mu \widehat{\phi}_0$, with $\widehat{\phi}_0 \equiv \langle \phi_0 \rangle / | \langle \phi_0 \rangle|$ the phase of the condensate. Using the language of a superfluid, $u_\mu$ is the superfluid flow velocity, and the winding number $w$ is the quantized circulation around a vortex. As with vortices in superfluid films, vortex excitations have logarithmic long range interactions, with a $1/r$ force between vortices separated by distance $r$. A single vortex in infinite space has a logarithmically divergent long distance contribution to its self-energy. Nevertheless, vortices are important collective excitations and, in any sufficiently large volume, a non-zero spatial density of vortices and antivortices will be present due to quantum and/or thermal fluctuations.
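To make the logarithmic self-energy explicit: far from its core, a winding-$k$ vortex has $\langle \phi_0 \rangle \simeq v_0 \, e^{ik\theta}$ in polar coordinates, with $v_0 \equiv |\langle \phi_0 \rangle|$, so the gradient energy within a disk of radius $R$ is
\begin{align}
E(R) \simeq \int_{r_0}^{R} 2\pi r \, dr \; \frac{v_0^2 \, k^2}{r^2} = 2\pi v_0^2 \, k^2 \, \ln(R/r_0) \,,
\end{align}
with $r_0$ a core-size cutoff (in the Higgs regime the charged condensates contribute analogously, as quantified in the next subsection). The same estimate underlies the $L \log L$ scaling of vortex loop actions noted next.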
From a spacetime perspective, vortex/antivortex world lines, as they appear and annihilate, form a collection of closed loops, with an action scaling as $L \log L$ for loops with characteristic size $L$.% \footnote {This is only a logarithmic enhancement over the linear scaling of a vortex loop action in superconductors (or simple Abelian Higgs models).} \begin{figure} \centering \includegraphics[width=.4\textwidth]{linking.pdf} \caption {% A contour $C$ (red dashed curve) which links a vortex world-line (solid black curve). Of interest is the gauge field holonomy $\Omega \equiv e^{i\oint_C A}$ for contours $C$ far from the vortex core. \label{fig:vortex_holonomy} } \end{figure} Consider the gauge field holonomy, $\Omega \equiv e^{i\oint_C A}$, evaluated on some large circular contour $C$ surrounding a vortex of non-zero winding number $k$, illustrated in Fig.~\ref{fig:vortex_holonomy}; we denote this expectation value by $\langle \Omega(C) \rangle_k$. Let $r$ denote the radius of the contour $C$ encircling the vortex. We are interested in the phase of the holonomy, but as the size of the contour $C$ grows, short distance quantum fluctuations will cause the magnitude of the expectation $\langle \Omega (C) \rangle_k$ to decrease (with at least exponential perimeter-law decrease). To compensate, we consider the large distance limit of a ratio of the holonomy expectation values which do, or do not, encircle a vortex of minimal non-zero winding number, \begin{align} O_{\Omega} \equiv \lim_{r \to \infty} \frac{\langle \Omega(C) \rangle_1}{\langle \Omega (C) \rangle} \,. \label{eq:order_parameter} \end{align} Here, the numerator should be understood as an expectation value defined by a constrained functional integral in which there is a prescribed vortex loop of characteristic size $r$ and winding number 1 linked with the holonomy loop of size $r$, with both sizes, and the minimal separation between the two loops, scaling together as $r$ increases. The denominator is the ordinary unconstrained vacuum expectation value. The quantity $O_{\Omega}$ measures the phase acquired by a particle with unit gauge charge when it encircles a minimal global vortex or, equivalently, the phase acquired by a minimal global vortex when it is dragged around a particle with unit gauge charge. Our analysis below will demonstrate that $O_{\Omega}$ cannot be a real-analytic function of the charged scalar mass parameter $m_c^2/e^4$. We will also argue that non-analyticities in the topological order parameter $O_{\Omega}$ are associated with genuine thermodynamic phase transitions. A quick sketch of the argument is as follows. Since the vacuum is invariant under the $(\mathbb{Z}_2)_{\rm C}$ charge conjugation symmetry, the denominator of $O_{\Omega}$ must be real and at sufficiently weak coupling is easily seen to be positive.% \footnote {% One may equally well appeal to reflection symmetry, as this reverses the orientation of a reflection-symmetric contour such as a circle, and hence maps the holonomy on a circular contour to its complex conjugate. This alternative will be relevant for our later discussion in Sec.~\ref{sec:QCD} of dense QCD and related models with non-zero chemical potential, where charge conjugation symmetry is explicitly broken by the chemical potential but the ground state remains invariant under reflections.
\label{fn:reflect} } In the constrained expectation value in the numerator of $O_{\Omega}$, the $(\mathbb{Z}_2)_{\rm C}$ symmetry is explicitly broken by the unit-circulation condition that enters the definition of $\langle \Omega(C) \rangle_1$. But the unit-circulation condition does not break the $(\mathbb{Z}_2)_{\rm F}$ permutation symmetry \eqref{eq:ZP}, which also flips the sign of the gauge field.% \footnote {% The $(\mathbb{Z}_2)_{\rm F}$ symmetry cannot be spontaneously broken due to the presence of a vortex because the vortex worldvolume is one-dimensional, and discrete symmetries cannot break spontaneously in one spacetime dimension. (The exception to this statement involving mixed 't Hooft anomalies~\cite{Gaiotto:2017yup} is irrelevant in our case.) } Therefore the numerator of $O_{\Omega}$ must be invariant under $(\mathbb{Z}_2)_{\rm F}$, and hence real. We will see below that it is negative deep in the Higgs regime, but is positive deep in the $U(1)_{\rm G}$-broken confining regime. In the large-$r$ limit defining our vortex observable $O_\Omega$, the magnitudes of the holonomy expectations in numerator and denominator will be identical. Hence, our vortex observable $O_{\Omega}$ obeys \begin{align} O_{\Omega} = \begin{cases} -1 \,, & U(1)_{\rm G}\textrm{-broken Higgs regime;} \\ +1 \,, & U(1)_{\rm G}\textrm{-broken confining regime,} \end{cases} \end{align} and therefore cannot be analytic as a function of $m_c^2/e^4$. In the remainder of this section we support the above claims. We study the properties of vortices in the Higgs and confining $U(1)_{\rm G}$-broken regimes in Secs.~\ref{sec:Higgs_holonomy_Abelian} and \ref{sec:holonomy_confining}, respectively. Then in Sec.~\ref{sec:ColemanWeinberg} we argue that non-analyticities in our topological order parameter are associated with genuine thermodynamic phase transitions. Finally, in Sec.~\ref{sec:broken_permutations} we extend the treatment and consider the effects of perturbations which explicitly break the $(\mathbb{Z}_2)_{\rm F}$ symmetry. We find that $O_{\Omega}$ remains a non-analytic function of the charged scalar mass parameter(s) even in the presence of such perturbations. This shows that the phase transition line separating the Higgs and confining $U(1)_{\rm G}$-broken regimes is robust against sufficiently small $(\mathbb{Z}_2)_{\rm F}$-breaking perturbations. \subsection{$O_{\Omega}$ in the Higgs regime} \label{sec:Higgs_holonomy_Abelian} We first consider $O_{\Omega}$ deep in the Higgs regime, $-m_c^2 \gg e^4$, and, to begin, neglect quantum fluctuations altogether. So the holonomy expectation values in the definition (\ref{eq:order_parameter}) of $O_\Omega$ just require evaluation of the holonomy in the appropriate energy-minimizing classical field configurations. As always, the holonomy $\Omega(C)$ is the exponential of the line integral $\oint_C A$ (times $i$) which, in our Abelian theory, is just the magnetic flux passing through a surface spanning the curve $C$. For the ordinary vacuum expectation value in the denominator of $O_\Omega$, vacuum field configurations have everywhere vanishing magnetic field and hence $\langle \Omega(C) \rangle = 1$. For the constrained expectation value in the numerator, one needs to understand the form of the minimal vortex solution(s). Choose coordinates such that the vortex lies at the origin of space and let $\{r,\theta\}$ denote 2D polar coordinates.
For a vortex configuration with winding number $k$, the phase of the neutral scalar $\phi_0$ must wrap $k$ times around the unit circle as one encircles the origin. There exist classical solutions which preserve rotation invariance, and we presume that these rotationally invariant solutions capture the relevant global energy minima. Such field configurations may be written in the explicit form \begin{subequations}% \label{eq:ansatz}% \begin{align} \phi_{+}(r,\theta) &= v_c \, f_+(r) \, e^{i \nu_+ \theta } \,, & \phi_{0}(r,\theta) &= v_0 \, f_0(r) \, e^{i k \theta } \,, \\ \phi_{-}(r,\theta) &= v_c \, f_-(r) \, e^{i \nu_- \theta} \,, & A_{\theta}(r) &= \frac{\Phi\, h(r)}{2\pi r} \,. \end{align} \end{subequations} Here $v_0$ and $v_c$ are the magnitudes of the vacuum expectation values of $\phi_0$ and $\phi_\pm$, determined by minimizing the potential terms in the action. The angular wavenumbers $\nu_+$, $\nu_-$, and $k$ must be integers to have single-valued configurations, and $k$, by definition, is the winding number of the vortex configuration. For non-zero values of $k$ and $\nu_\pm$ the radial functions $f_0(r)$ and $f_\pm(r)$ interpolate between 0 at the origin and 1 at infinity. Similarly, to minimize energy the gauge field must approach a pure gauge form at large distance, implying that $h(r)$ may also be taken to interpolate between 0 and 1 as $r$ goes from the origin to infinity. The associated magnetic field is \begin{equation} B(r) = \frac{(r A_\theta(r))'}{r} = \frac{ \Phi \, h'(r)}{2\pi r} \,. \end{equation} The gauge field in ansatz (\ref{eq:ansatz}) is written in a form which makes the coefficient $\Phi$ equal to the total magnetic flux, \begin{equation} \Phi_B \equiv \int d^2x \> B = 2\pi \int_0^\infty r \, dr \> B(r) = \Phi \int_0^\infty dr \> h'(r) = \Phi \,. \label{eq:flux} \end{equation} To avoid having an energy which diverges linearly with volume (relative to the vacuum), the phases of $\phi_0$, $\phi_+$ and $\phi_-$ must be correlated in a fashion which minimizes the cubic term in the action. Below we will suppose that the coefficient of the cubic term $\epsilon>0$, but essentially the same formulas would result if $\epsilon < 0$. (The singular point $\epsilon=0$ must be handled separately; see the discussion at the end of Sec.~\ref{sec:Landau_constraints}.) Minimizing the cubic term in the action forces the product $\phi_0 \, \phi_+ \, \phi_-$ to be real and positive, implying that \begin{equation} \nu_+ = n - k \,, \qquad \nu_- = -n \,, \label{eq:nupm} \end{equation} for some integer $n$. After imposing condition (\ref{eq:nupm}), there remains a logarithmic dependence on the spatial volume caused by the scalar kinetic terms which, due to the angular phase variation of the scalar fields, generate energy densities falling as $1/r^2$. Explicitly, this long-distance energy density is \begin{equation} \mathcal E(r) = \frac {v_c^2}{r^2} \left[ \left(n-k - \frac \Phi{2\pi}\right)^2 + \left(-n + \frac \Phi{2\pi}\right)^2 \right] + \frac {v_0^2 \, k^2}{r^2} + \mathcal O(r^{-4}) \,. \label{eq:energy_long_distance} \end{equation} Minimizing this IR energy density with respect to $\Phi$, for given values of $k$ and $n$, determines the magnetic flux: setting $\partial \mathcal E/\partial \Phi = 0$ gives $\frac{\Phi}{2\pi} = \frac{1}{2}\left[(n-k) + n\right]$, i.e., \begin{equation} \Phi_B = \Phi = (2n - k) \, \pi \,, \label{eq:fluxval} \end{equation} leaving an IR energy density $ \mathcal E(r) = (\tfrac{1}{2} v_c^2 + v_0^2) \, k^2 / r^2 + \mathcal O(r^{-4}) $. The explicit form of the radial functions is determined by minimizing the remaining IR finite contributions to the energy.
These consist of the magnetic field energy and short distance corrections to the scalar field kinetic and potential terms, all of which are concentrated in the vortex core region. Semi-explicitly, \begin{align} E = 2\pi \int r \, dr \> \Biggl[& \frac {h'(r)^2}{8e^2 r^2} \, (2n{-}k)^2 + \frac {v_c^2 \, f_+(r)^2}{4r^2} \left[ (2n{-}k) (1{-}h(r)) - k \right]^2 \nonumber\\ & + \frac {v_0^2 \, k^2 f_0(r)^2}{r^2} + \frac {v_c^2 \, f_-(r)^2}{4r^2} \left[ (2n{-}k) (1{-}h(r)) + k \right]^2 \nonumber\\ & + v_c^2\left[f_+'(r)^2+f_-'(r)^2\right] + v_0^2 f_0'(r)^2 + \mbox{(potential terms)} \Biggr] \,. \label{eq:tree_level_Veff} \end{align} Minimizing this energy leads to straightforward but unsightly ordinary differential equations which determine the precise form of the radial profile functions, see Appendix~\ref{sec:EoMAppendix}. Qualitatively, the gauge field radial function $h(r)$ approaches its asymptotic value of one exponentially fast on the length scale $\textrm{min}(m_A^{-1},\widetilde{m}^{-1})$, where $m_A = 2e v_c $ and $\widetilde{m}^2 \equiv 4\lambda_c v_c^2 + 2\epsilon v_0$. The scalar field profile functions $f_0(r)$ and $f_\pm(r)$ approach their asymptotic large $r$ values with $1/r^2$ corrections on the length scales set by the corresponding masses $m_0$ and $m_c$. For a given non-zero winding number $k$, the above procedure generates an infinite sequence of vortex solutions distinguished by the value of $n$, or more physically by the quantized value of the magnetic flux (\ref{eq:fluxval}) carried in the vortex core. The minimal energy vortex, for a given winding number, is the one which minimizes the magnitude of this flux. For even winding numbers, this is $n = k/2$ and vanishing magnetic flux. In such solutions, the phases of the two charged scalar fields are identical with $\nu_\pm = - k/2$. For odd winding number $k$ there are two degenerate solutions with $n = (k\pm 1)/2$ and magnetic flux $\Phi = \pm \pi$. In these solutions, the charged scalar fields have differing phase windings with $ \nu_+ = -(k \mp 1)/2 $ and $ \nu_- = -(k \pm 1)/2 $. For minimal $|k| = 1$ vortices, one of the charged scalars has a constant phase with no winding, while the other charged scalar has a phase opposite that of $\phi_0$. The gauge field holonomy surrounding a vortex, far from its core, is simply $\pm 1$ depending on whether the magnetic flux is an even or odd multiple of $\pi$ and this, in turn, merely depends on whether the vortex winding number $k$ is even or odd, \begin{equation} \langle \Omega(C) \rangle_k = e^{i \Phi} = (-1)^k \,. \end{equation} \begin{figure}[t] \centering \includegraphics[width=.6\textwidth]{vortex_symmetries.pdf} \caption {% There are four distinct minimal energy vortex solutions, with winding number $k = \pm 1$ and magnetic flux $\Phi = \pm \pi$. The $(\mathbb{Z}_2)_{\rm F}$ and $(\mathbb{Z}_2)_{\rm C}$ discrete symmetries relate these vortices as shown. \label{fig:min_vortex} } \end{figure} The net result is that there are four different minimal energy vortex solutions, illustrated in Fig.~\ref{fig:min_vortex}, having $(k,\Phi) = (1,\pi)$, $(1,-\pi)$, $(-1,\pi)$, and $(-1,-\pi)$. As indicated in the figure, the $(\mathbb{Z}_2)_{\rm F}$ symmetry interchanges vortices with identical winding number and opposite values of magnetic flux, while the $(\mathbb{Z}_2)_{\rm C}$ symmetry interchanges vortices with opposite values of both winding number and magnetic flux. Therefore, all these vortices have identical energies.
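For reference, the quantum numbers of these four minimal solutions, read off from $\nu_+ = n - k$, $\nu_- = -n$, and $\Phi = (2n-k)\,\pi$, are
\begin{align}
\begin{array}{c|cccc}
 & \phantom{+}n & \phantom{+}\nu_+ & \phantom{+}\nu_- & \phantom{+}\Phi \\
\hline
k = +1 & \phantom{+}1 & \phantom{+}0 & -1 & +\pi \\
k = +1 & \phantom{+}0 & -1 & \phantom{+}0 & -\pi \\
k = -1 & \phantom{+}0 & +1 & \phantom{+}0 & +\pi \\
k = -1 & -1 & \phantom{+}0 & +1 & -\pi
\end{array}
\end{align}
with $(\mathbb{Z}_2)_{\rm F}$ exchanging the two rows with the same $k$, and $(\mathbb{Z}_2)_{\rm C}$ exchanging rows with opposite $k$ and $\Phi$.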
For our purposes, the key result is that the long distance holonomy is the same for all minimal vortices, namely $\langle \Omega(C) \rangle_{k = \pm1} = -1$. Consequently, we find \begin{align} O_{\Omega} = -1 \quad \textrm{at tree level}. \label{eq:O_tree} \end{align} We now consider the effects of quantum fluctuations on this result. Using standard effective field theory (EFT) reasoning, as one integrates out fluctuations below the UV scale $\Lambda_{\rm UV}$, the action \eqref{eq:the_model} will receive scale-dependent corrections which (a) renormalize the coefficients of operators appearing in the action \eqref{eq:the_model}, and (b) induce additional operators of increasing dimension consistent with the symmetries of the theory. But the result \eqref{eq:O_tree} follows directly from the leading long-distance form \eqref{eq:energy_long_distance} of the energy density, whose minimum fixes the vortex magnetic flux equal to $\pm \pi$ for minimal-winding vortices. Because this $1/r^2$ energy density leads to a total energy which is logarithmically sensitive to the spatial volume, short distance IR-finite contributions to the energy cannot affect the flux quantization condition \eqref{eq:fluxval} in the limit of large spatial volume. Only those corrections which modify this $1/r^2$ long distance energy density have the potential to change the quantization condition. One may construct the long distance EFT as an expansion in derivatives, with the effective expansion parameter being the small ratio of fundamental length scales (such as the vortex core size or Compton wavelengths of massive excitations) to the arbitrarily large length scale of interest. Any term in the EFT action with more than two derivatives will produce a contribution to the energy density which falls faster than $1/r^2$ when evaluated on a vortex configuration, and hence cannot contribute to the $\mathcal{O}(1/r^2)$ long distance energy density \eqref{eq:energy_long_distance}. Similarly, terms with fewer than two derivatives cannot contribute either: the potential is stationary at its minimum, and the fields approach their vacuum values with $\mathcal O(1/r^2)$ corrections, so potential-type terms contribute to the energy density only at $\mathcal O(1/r^4)$. Hence the only fluctuation-induced terms that might affect the long distance vortex holonomy are those with precisely two derivatives acting on the charged scalar fields. Consequently, the portion of the effective action that controls holonomy expectation values around vortices can be written in the form \begin{align} \label{eq:quantum_eff_action} S_{\textrm{eff}, \, U(1)\textrm{ holonomy}} = \int d^{3}x \, \Bigl\{ \, & f_1(\phi_0, \phi_{+},\phi_{-})(D_{\mu}\phi_{+})(D^\mu\phi_+)^\dagger+f_1(\phi_0, \phi_{-},\phi_{+})(D_{\mu}\phi_{-})(D^\mu\phi_-)^\dagger \nonumber \\ {}+{} & f_2(\phi_0,\phi_+,\phi_-) (D_\mu\phi_+)(D^\mu \phi_+)+f_2(\phi_0,\phi_-,\phi_+) (D_\mu\phi_-)(D^\mu \phi_-) \nonumber \\[6pt] {}+{} & f_3(\phi_0,\phi_+,\phi_-) (D_\mu\phi_+)(D^\mu \phi_-)^\dagger+ f_3(\phi_0,\phi_-,\phi_+) (D_\mu\phi_-)(D^\mu \phi_+)^\dagger \nonumber \\ {}+{} & f_4(\phi_0, \phi_{+},\phi_{-})(D_{\mu} \phi_{+})(D^{\mu} \phi_{-}) \Bigr\}+\textrm{h.c.}\,, \end{align} with coefficient functions $\{f_i\}$ depending on the fields $\phi_0$, $\phi_{\pm}$ (but not their derivatives) such that each term is $U(1)_{\rm G}$ and gauge invariant. We emphasize that the long-distance EFT (\ref{eq:quantum_eff_action}) does not rely on a weak-coupling expansion.
It is valid at long distances whenever the theory is in the Higgs phase.% \footnote {% More precisely, the long-distance EFT (\ref{eq:quantum_eff_action}) neglects the instanton-monopole induced potential for the dual photon and, as such, is valid provided the mass $m_A$ (\ref{eq:m_A}) generated by the Higgs mechanism is large compared to the monopole induced photon mass $m_\gamma$ (\ref{eq:mgamma}). } The $f_1$ terms represent wavefunction renormalizations which simply modify the overall normalizations in the energy density \eqref{eq:energy_long_distance}, and have no effect on the flux quantization condition \eqref{eq:fluxval}. When evaluated on a vortex configuration, the $f_2$ terms also have the same form as the long distance energy density \eqref{eq:energy_long_distance}. The $f_3$ and $f_4$ terms produce a $1/r^2$ contribution to the vortex energy density proportional to \begin{align} \frac{v_c^2 }{r^2} \left( n - k - \frac{\Phi}{2\pi} \right) \left(n -\frac{\Phi}{2\pi} \right) = \frac{v_c^2 }{2r^2}\left[ \left(n -\frac{\Phi}{2\pi} \right)^2 + \left( n - k - \frac{\Phi}{2\pi} \right)^2-k^2\right] \,. \end{align} Hence, up to holonomy-independent terms, the $f_3$ and $f_4$ terms also merely change the normalization of the tree-level energy density \eqref{eq:energy_long_distance}. Therefore, provided fluctuations are not strong enough to flip its overall sign, the holonomy-dependent $1/r^2$ energy density has minima (with respect to $\Phi$) at $\Phi = (2n-k)\pi$. In particular, \emph{all} minimal-circulation ($k=\pm1$) vortices which minimize the quantum-corrected long-distance energy density carry flux $\Phi = \pi$ modulo $2\pi$. If fluctuations do flip the sign in front of Eq.~\eqref{eq:energy_long_distance}, then the energy density becomes unbounded below as a function of $\Phi$, with no additional local minima appearing. The EFT description \eqref{eq:quantum_eff_action} therefore breaks down, signaling the departure from the Higgs phase. Therefore, within the Higgs phase, the fluctuation-induced corrections to the effective action have no effect on the flux quantization condition \eqref{eq:fluxval}. This shows that the minimal vortex expectation value $\langle \Omega(C) \rangle_{1}$ at large distance remains real and negative to all orders in perturbation theory, provided that the fluctuations are not so large that they completely destroy the Higgs phase. The size of quantum fluctuations in this model is controlled by the dimensionless parameter $e^2/m_A = \mathcal O(e\lambda_c^{1/2}/|m_c|) =\mathcal O(e^2/|m_c|)$, where we have assumed $\lambda_c^{1/2} \sim \epsilon^{1/3} \sim e$ and $g_0, g_c \ll 1$ for simplicity, and hence this conclusion about a negative value of $\langle \Omega(C) \rangle_{1}$ holds exactly whenever $m^2_c/e^4$ is sufficiently negative to put the theory into the Higgs phase. As discussed earlier, quantum fluctuations do suppress the magnitude of holonomy expectation values, leading to perimeter-law exponential decay. By construction, this size dependence cancels in our ratio $O_{\Omega} = \langle \Omega(C)\rangle_{1}/ \langle \Omega(C)\rangle $. Unbroken $(\mathbb{Z}_2)_{\rm F}$ symmetry (or $(\mathbb{Z}_2)_{\rm C}$, or reflection symmetry) in the vacuum state guarantees that the ordinary expectation value $\langle \Omega(C)\rangle $ in the denominator is real. It is easy to check that it is positive at tree level, and sufficiently small quantum fluctuations cannot make it negative.
So $O_{\Omega}$ is determined by the phase of the vortex state holonomy expectation value in the numerator. The net result from this argument is that within the Higgs phase, \begin{align} \boxed{\textrm{Higgs phase:}\;\; O_{\Omega} = -1} \,, \label{eq:holonomy_higgs} \end{align} holds precisely. The next subsection gives useful alternative perspectives on the same conclusion. \subsubsection{Vortex junctions, monopoles, and vortex flux quantization} \label{sec:vortex_junctions} In the preceding subsection we analyzed the physics of vortices using effective field theory in the bulk $3$-dimensional spacetime. This analysis showed that the minimal energy vortices carry quantized magnetic flux $\pm \pi$, and the phase of the holonomy around vortices is quantized, leading to result \eqref{eq:holonomy_higgs}. We now reconsider the same physical questions from the perspective of an effective field theory defined on the vortex worldline. This will lead to a discussion of vortex junctions, their interpretation as magnetic monopoles, a connection between vortex flux quantization and Dirac charge quantization, and finally to logically independent arguments for the result \eqref{eq:holonomy_higgs}. The $(0{+}1)$ dimensional effective field theory describing fluctuations of a vortex worldline includes two gapless modes arising from the translational moduli representing the spatial position of the vortex. The vortex effective field theory must include an additional real scalar field which may be chosen to equal the magnetic flux $\Phi$ carried by a vortex configuration. This field will serve as a coordinate along field configuration paths which interpolate between distinct vortex solutions. The field $\Phi$ appears in the 1D worldline EFT in the form \begin{align} S_{\textrm{vortex EFT}} = \int dt \left[c_K \,(\partial_t \Phi)^2 + c_V \,V(\Phi) \right] + \cdots \,. \label{eq:vortex_EFT} \end{align} Here $t$ is a coordinate running along the vortex worldline, $\Phi$ is dimensionless, $c_K$ and $c_V$ are low-energy constants with dimensions of inverse energy and energy, respectively, and the ellipsis represents terms with additional derivatives or couplings to other fields on the worldline. The worldline potential $V(\Phi)$ in expression \eqref{eq:vortex_EFT} obeys two important constraints. First, since $(\mathbb{Z}_2)_{\rm F}$ symmetry acts on $\Phi$ by $\Phi \to -\Phi$, $V(\Phi)$ is an even function. Second, Dirac charge quantization in the underlying bulk quantum field theory further constrains the possible minima of $V(\Phi)$. To see this, suppose that $V(\Phi)$ has a minimum at $\Phi = \Phi_{\rm min} \neq 0$. Since $V(\Phi)$ is an even function, it must also have a distinct minimum at $\Phi = -\Phi_{\rm min}$. For generic values of the microscopic parameters, the potential $V$ is finite for all finite values of $\Phi$. This means that there exists a solution to the equation of motion for $\Phi$ in which $\Phi$ interpolates between $-\Phi_{\rm min}$ and $\Phi_{\rm min}$ as the worldline coordinate $t$ runs from $-\infty$ to $+\infty$. Suppose that this tunneling event has an action which is both UV- and IR-finite, so that it is meaningful to describe it within the worldline effective field theory. What is its interpretation in bulk spacetime? It has unit $U(1)_{\rm G}$ circulation at all times, but also possesses a ``junction'' at some finite time where the magnetic flux changes sign.
For the tunneling event to have finite action, the azimuthal component of the electric field far from the vortex core must decay faster than $1/r$. Then the flux of the field strength through a $2$-sphere surrounding the junction is simply $\Phi_{\rm min} - (-\Phi_{\rm min}) = 2\Phi_{\rm min}$. Comparing this to the Dirac charge quantization condition in \eqref{eq:flux_quant} implies that $\Phi_{\rm min} \in \pi \mathbb{Z}$ when $(\mathbb{Z}_2)_{\rm F}$ is unbroken.\footnote{If $(\mathbb{Z}_2)_{\rm F}$ symmetry is explicitly broken, Dirac charge quantization together with the assumption that tunneling events have finite action leads to the conclusion that any two distinct minima $\Phi_1, \Phi_2$ of $V(\Phi)$ must satisfy $\Phi_1 - \Phi_2 \in 2\pi \mathbb{Z}$.} These remarks imply that the worldline tunneling events can be interpreted as monopole-instantons in the 3D bulk, and their action must depend on the UV completion of our compact Abelian gauge theory.\footnote{Appendix~\ref{sec:nonAbelian} describes an explicit $SU(2)$ gauge theory which reduces to our $U(1)$ gauge theory at long distances, and where $S_{\rm I} \sim m_W/e^2$ with $m_W$ the $W$-boson mass.} \begin{figure} \centering \includegraphics[width=.2\textwidth]{single_monopole_vortex.pdf} \caption {% A junction between the two minimal energy unit-winding vortex worldlines is a magnetic monopole with flux $2\pi$. \label{fig:single_monopole} } \end{figure} Earlier in this section, we saw that in the Higgs phase minimal-energy unit-circulation vortices carry magnetic flux $\pm\pi$ at tree level. The vortex flux quantization argument in the paragraph above implies that quantum corrections cannot change this result, again leading to result \eqref{eq:holonomy_higgs}. We also learn that a junction between two minimal-energy unit-circulation vortices with flux $\pi$ and $-\pi$ can be interpreted as a magnetic monopole carrying the minimal $2\pi$ flux consistent with Dirac charge quantization, as illustrated in Fig.~\ref{fig:single_monopole}. This is the Higgs-phase counterpart of the single monopole-instantons discussed earlier, now in the presence of a unit-winding $\phi_0$ condensate. As noted earlier near the end of Sec.~\ref{sec:our_model}, Higgs phase monopole--antimonopole pairs are connected by magnetic flux tubes (which can break at sufficiently large separation due to monopole--antimonopole pair creation). This is true in the absence of any vortices carrying unit $U(1)_{\rm G}$ winding. But in the presence of a unit circulation vortex, a monopole--antimonopole pair can bind to the vortex, with the monopole and antimonopole then free to separate arbitrarily along the vortex worldline.% \footnote {% Deconfinement of magnetic monopoles on both local and semilocal vortices with and without supersymmetry has been extensively studied previously. In our model the vortices are global but the monopole deconfinement mechanism described here is essentially identical to previous discussions in, for example, Refs.~\cite{Hindmarsh:1985xc,Tong:2003pz,Shifman:2004dr,Hanany:2004ea, Eto:2009tr,Eto:2009kg,Cipriani:2011xp,Gorsky:2011hd,Chatterjee:2019zwx}. } This is illustrated in Fig.~\ref{fig:monopole_anti_monopole}.
To see this, note that for fixed separation $L$ between monopole and antimonopole, the action will be lowered if the monopole and antimonopole move onto the vortex line, provided they are oriented such that adding the monopole--antimonopole flux tube to the vortex magnetic flux has the effect of merely flipping the sign of vortex magnetic flux on a portion of its worldline. This eliminates the cost in action of the length $L$ flux tube initially connecting the monopole and antimonopole. As noted above, the $(\mathbb{Z}_2)_{\rm F}$ symmetry guarantees that the vortex action per unit length is independent of the sign of the magnetic flux. Once the monopole and antimonopole are bound to the vortex worldline, there is no longer any cost in action (neglecting exponentially falling short distance effects) to separate the monopole and antimonopole arbitrarily. In summary, the monopole--antimonopole string tension vanishes on the vortex, and magnetic monopoles are deconfined on minimal Higgs phase vortices.% \footnote {% Provided monopoles and antimonopoles alternate along the vortex worldline. There is a direct parallel between this phenomenon and charge deconfinement in 2D Abelian gauge theories at $\theta = \pi$, see for example Refs.~\cite{Coleman:1975pw,Coleman:1976uz,Witten:1978ka,Anber:2018jdf,Anber:2018xek,Armoni:2018bga,Misumi:2019dwq}. } \begin{figure} \centering \includegraphics[width=.7\textwidth]{monopole_anti_monopole.pdf} \caption{ Monopole--antimonopole pairs with minimal magnetic flux $2\pi$ are confined in bulk spacetime, but such pairs are attracted to the worldline of a minimal global vortex where they become deconfined. \label{fig:monopole_anti_monopole} } \end{figure} One can also regard the monopole--antimonopole pair as an instanton--anti-instanton pair in the worldline EFT \eqref{eq:vortex_EFT}.\footnote{The deconfinement of magnetic monopoles on unit-circulation vortices corresponds to the fact that the separation of an instanton--anti-instanton pair is a quasi-zero mode.} We now argue that this perspective leads to yet another derivation of the result \eqref{eq:holonomy_higgs}. The existence of degenerate global minima with flux $\pm\Phi_{\rm min}$ means that the $(\mathbb{Z}_2)_{\rm F}$ symmetry is spontaneously broken on the worldline to all orders in perturbation theory. But non-perturbatively, the finite-action worldline instantons connecting these minima will proliferate and restore the $(\mathbb{Z}_2)_{\rm F}$ symmetry. As is familiar from double-well quantum mechanics, the unique minimal energy vortex state will be a symmetric linear combination of $\Phi_{\rm min}$ and $-\Phi_{\rm min}$ configurations. From our previous arguments we know that $\Phi_{\rm min} = \pi$, so that both of these vortex configurations have the same $-1$ long distance holonomy, and none of this non-perturbative physics has any effect on the validity of the result \eqref{eq:holonomy_higgs} regarding Higgs phase vortices. But suppose that we did not already know that $\Phi_{\rm min} = \pi$. The existence of finite-action tunneling events connecting the two $\Phi$ minima would imply that the minimal energy vortex state with a given winding number is unique and invariant under $(\mathbb{Z}_2)_{\rm F}$. Unbroken $(\mathbb{Z}_2)_{\rm F}$ symmetry in turn implies that the holonomy expectation value in the minimal vortex state is purely real. Therefore, on symmetry grounds alone, our observable $O_\Omega$ is quantized to be either $+1$ or $-1$.
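Schematically, writing the minimal vortex state as the symmetric combination of the two flux sectors and neglecting the exponentially small overlap between them,
\begin{align}
|{\rm vortex}\rangle \simeq \tfrac{1}{\sqrt 2} \big( |\Phi_{\rm min}\rangle + |{-}\Phi_{\rm min}\rangle \big) \,,
\qquad
\langle \Omega(C) \rangle_1 \simeq \tfrac{1}{2} \big( e^{i\Phi_{\rm min}} + e^{-i\Phi_{\rm min}} \big) = \cos \Phi_{\rm min} \,,
\end{align}
which is manifestly real, and equals $-1$ when $\Phi_{\rm min} = \pi$.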
Our analysis in the weakly coupled regime serves to establish that in the Higgs phase the value is $-1$, and we again arrive at result \eqref{eq:holonomy_higgs}. \subsection{$O_\Omega$ in the $U(1)_{\rm G}$-broken confining regime} \label{sec:holonomy_confining} We now turn to a consideration of holonomies around vortices in the $U(1)_{\rm G}$-broken confining phase. Once again, it is useful to consider the appropriate effective field theory deep in this regime, near the SE corner of the phase diagram of Fig.~\ref{fig:3D_phase_diagram}. Suppose that $m_c^2 \gg e^4$. Given the scale separation, it is useful to integrate out the charged fields. The resulting effective action retains the gauge field and neutral scalar $\phi_0$ and has the form \begin{align} S_{\rm eff} = \int d^{3}x\, &\left[ \frac{1}{4 e^2}\, F_{\mu\nu}^2 + V_{\rm m}(\sigma) + |\partial_\mu \phi_0|^2 + V(|\phi_0|) + \frac{a}{m_c^2} \, |\phi_0|^2 F_{\mu\nu}^2 + \cdots \right]\,, \label{eq:effective_action} \end{align} where the ellipsis denotes higher dimension terms involving additional powers of fields and derivatives. The dimension five term shown explicitly, with coefficient $a$, is the lowest dimension operator coupling the gauge and neutral scalar fields. This term describes ``Rayleigh scattering'' processes in which photons scatter off fluctuations in the magnitude of $\phi_0$. Within this EFT, the $(\mathbb{Z}_2)_{\rm F}$ symmetry simply flips the sign of the gauge field and hence forbids all terms involving odd powers of the gauge field strength. When $m_0^2$ is sufficiently negative so that the $U(1)_{\rm G}$ symmetry is spontaneously broken and $\phi_0$ condenses, the leading effect of the $|\phi_0|^2 F^2$ coupling is merely to shift the value of the gauge coupling by an amount depending on the condensate $v_0 \equiv \langle \phi_0 \rangle$, \begin{equation} \frac 1{e^2} \to \frac 1{e'^{\,2}} \equiv \frac 1{e^2} + \frac {4a \, |v_0|^2}{m_c^2} \,. \label{eq:shift} \end{equation} This is a small shift of relative size $\mathcal O(e^4/m_c^2)$ within the domain of validity of this effective description. The $(\mathbb{Z}_2)_{\rm F}$ symmetry (or parity) guarantees that the neutral scalar condensate cannot source the gauge field strength, so the magnetic field $B \equiv \tfrac{1}{2} \epsilon_{ij} F^{ij}$ ($i,j = 1,2$) must have vanishing expectation value. Within this $U(1)_{\rm G}$ broken phase, there are vortex configurations in which the condensate $\langle \phi_0 \rangle$ has a phase which winds around the vortex, while its magnitude decreases in the vortex core, vanishing at the vortex center. As far as the gauge field is concerned, one sees from the effective action (\ref{eq:effective_action}) that the only effect this has is to modulate the gauge coupling, effectively undoing the shift (\ref{eq:shift}) in the vortex core. But such coupling renormalizations, or dielectric effects, do not change the fact that the effective action is an even function of the magnetic field which is minimized at $B = 0$. In other words, even in the presence of vortices, the neutral scalar field does not source a magnetic field. And consequently, both the vacuum state \emph{and} minimal energy vortex states are invariant under the $(\mathbb{Z}_2)_{\rm F}$ symmetry. Once again, invariance of both the vacuum and vortex states under the $(\mathbb{Z}_2)_{\rm F}$ symmetry implies that holonomy expectation values in both states are real, and hence our observable $O_\Omega$ must be either $+1$ or $-1$.
The Abelian gauge field holonomy is, of course, nothing but the exponential of the magnetic flux, $ \Omega(C) = e^{i \oint_C A} = e^{i \int_S B} = e^{i \Phi_B} $ (with contour $C$ the boundary of disk $S$). The above EFT discussion shows that deep in the confining $U(1)_{\rm G}$-broken phase the influence of a vortex on the magnetic field is tiny and hence $\langle \Omega(C) \rangle_1$ is positive, implying that $O_\Omega = +1$. And once again, since the quantized value of $O_\Omega$ can only change at a non-analyticity, this result must hold throughout the confining $U(1)_{\rm G}$-broken phase. In summary, \begin{align} \boxed{\textrm{$U(1)_{\rm G}$-broken confining phase:}\;\; O_\Omega = +1} \,, \end{align} is an exact result within this phase. \subsection{Higgs-confinement phase transition} \label{sec:ColemanWeinberg} We have seen that $O_\Omega$ has constant magnitude but changes sign between the Higgs and confining, $U(1)_{\rm G}$-broken regimes; it cannot be a real-analytic function of $m_c^2$. Hence, there must be at least one phase transition as a function of $m_c^2$. A single phase transition would be associated with an abrupt jump of $O_{\Omega}$ from $-1$ to $+1$ at some critical value of $m_c^2$. If instead $O_{\Omega}$ equals $-1$ for $m_c^2$ below some value $(m_c^2)_A$, equals $+1$ for $m_c^2$ above a different value $(m_c^2)_B$, and continuously interpolates from $-1$ to $+1$ in the intervening interval $(m_c^2)_A < m_c^2 < (m_c^2)_B$, this would indicate the presence of two phase transitions bounding an intermediate phase in which the $(\mathbb{Z}_2)_{\rm F}$ symmetry is spontaneously broken. (This follows since, as discussed above, unbroken $(\mathbb{Z}_2)_{\rm F}$ symmetry implies that $O_\Omega$ must equal $\pm 1$.) In much of parameter space, phase transitions in our model occur at strong coupling and are not amenable to analytic treatment. But the theory becomes weakly coupled when the mass parameters $|m_c^2|$ and $|m_0^2|$ are sufficiently large. Specifically, we will assume that the dimensionful couplings $|\lambda_c|$, $|\lambda_0|$ and $e^2$ are all small relative to the masses $|m_c|$ and $|m_0|$, the cubic coupling obeys $\epsilon \ll \textrm{min}(|m_c|^{3/2},|m_0|^{3/2})$, and the sextic couplings are small, $g_c, g_0 \ll 1$. If a first order transition lies within this region, then simple analytic arguments suffice to identify and locate the transition. A first-order transition involving a complex scalar $\phi$ with $U(1)$ symmetry requires multiple local minima in the effective potential viewed as a function of $|\phi|$. In four dimensions, a renormalizable scalar potential is quartic and, as a function of $|\phi|$, has at most a single local minimum. So to find a first-order phase transition in a weakly coupled four-dimensional $U(1)$ invariant scalar theory one must either be abnormally sensitive to higher order non-renormalizable terms (and thus probing cutoff-scale physics), or else rely on a one-loop or higher-order calculation producing non-analytic terms like $|\phi|^2 \log |\phi|$. This is illustrated by the classic Coleman and Weinberg analysis \cite{Coleman:1973jx}. But in three spacetime dimensions, renormalizable scalar potentials are sextic, and $U(1)$ invariant sextic potentials can easily have multiple local minima. Consequently, a tree-level analysis can suffice to demonstrate the existence of a first-order phase transition, in a renormalizable theory, without any need to consider higher-order corrections.
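A minimal illustration (a single-field toy model, not the full scalar potential of our theory): take $V = m^2 |\phi|^2 + \lambda |\phi|^4 + g |\phi|^6$ with $\lambda < 0$ and $g > 0$. In terms of $\rho \equiv |\phi|^2$,
\begin{align}
V(\rho) = m^2 \rho + \lambda \rho^2 + g \rho^3 \,,
\end{align}
and imposing $V(\rho_*) = V'(\rho_*) = 0$ shows that the two local minima, at $\rho = 0$ and $\rho_* = -\lambda/(2g) > 0$, are exactly degenerate when $m^2 = \lambda^2/(4g)$. As $m^2$ decreases through this value the global minimum jumps discontinuously from $\rho = 0$ to $\rho = \rho_*$: a first-order transition, visible already at tree level.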
\begin{figure} \centering \includegraphics[width=\textwidth]{contours.pdf} \caption {Contour plots of the tree-level scalar effective potential at three different values of $m_c^2$ in the vicinity of the first-order Higgs-confinement phase transition. We have used gauge and global symmetries to choose the phases of the scalar fields such that the potential can be interpreted as a function of $\phi_c \equiv \phi_{+}$ and $\phi_0$, with $\phi_- = |\phi_+|$. We have set $m_0^2 = -200 \, e^4$, $\epsilon = 40 \, e^3$, $\lambda_c = \lambda_0 = -5 \, e^2$, and $g_c = g_0 = 0.04$. Decreasing values of the scalar potential are shown in darker colors, and global minima are marked with red dots. Note that the global minimum is degenerate when $m_c^2 \approx 360 \, e^4$, and the location of the global minimum jumps as $m_c^2$ crosses this value, from a point where the charged fields are condensed to one where they are not condensed. This shows the presence of a strong first-order Higgs-confinement phase transition, with the $U(1)_{\rm G}$ global symmetry spontaneously broken on both sides of the transition. \label{fig:contours} } \end{figure} Let us see how this works in our model. Consider the region where $m_0^2$, $\lambda_c$, and $\lambda_0$ are all negative. For simplicity, let us also suppose that $e^2 \ll |\lambda_c|,$ $|\lambda_0|$, and $\epsilon \ll e^3 \ll \textrm{min}(|m_c|^{3/2},|m_0|^{3/2}) $. In Fig.~\ref{fig:contours} we show contour plots of the scalar potential as a function of $\phi_c \equiv \phi_{+}$ and $\phi_0$, with $\phi_- = |\phi_+|$, as $m_c^2/e^4$ is varied. The figure shows that the potential has multiple local minima with relative ordering that changes as $m_c^2/e^4$ is varied with all other parameters held fixed. With the parameter choices given in the caption of Fig.~\ref{fig:contours}, the figure shows the existence of a strong first-order phase transition between $U(1)_{\rm G}$-broken confining and $U(1)_{\rm G}$-broken Higgs states in the regime where $m_0^2/e^4$ is large and negative and $m_c^2/e^4$ is large and positive. Correspondingly, the change in the derivative of the energy density with respect to the charged scalar mass squared in units of $e^2$, $e^{-2} \Delta(\partial \mathcal E/\partial m_c^2)$, is large across the transition. Since at tree level $\partial \mathcal E/\partial m_c^2 = |\phi_+|^2 + |\phi_-|^2 = 2\phi_c^2$, for the parameter values used in Fig.~\ref{fig:contours} one finds $e^{-2}\Delta(\partial \mathcal E/\partial m_c^2) = 2\Delta \phi_c^2/e^2 \approx 127 \gg 1$. This behavior is generic. The effective masses (i.e., curvatures of the potential) at the minima are comparable to the input mass parameters, so there are no near-critical fluctuations and the phase transition is reliably established at weak coupling. Finally, the analysis of the previous subsections shows that our vortex holonomy order parameter $O_\Omega$ changes sign across this phase transition, confirming that the abrupt change in this ``topological'' order parameter is associated with a genuine thermodynamic phase transition. As one moves into the interior of the $(m^2_0,m^2_c)$ phase diagram, out of the weakly-coupled periphery, we certainly expect this direct correlation between a jump in our vortex order parameter and a thermodynamic phase transition to persist. But one may contemplate whether this association could cease to apply at some point in the interior of the $U(1)_{\rm G}$ spontaneously broken domain.
In general, a line of first order phase transitions which is not associated with any change in symmetry realization can have a critical endpoint (as seen in the phase diagram of water). Could our model have such a critical endpoint, beyond which the first order transition becomes a smooth cross-over as probed by any local observable? If so, there would necessarily remain some continuation of the phase transition line across which our topological observable $O_\Omega$ continues to flip sign, but all local observables remain smooth. What would be necessary for such a scenario to take place? First, note that the magnetic flux carried by vortices can change in steps of $2\pi$ due to alternating monopole-instanton fluctuations appearing along the vortex worldline, but such processes do not affect the sign of the holonomy around a vortex. At the transition between the Higgs and confining phases the magnetic flux carried by minimal-winding vortices changes by $\pi$ (modulo $2\pi$). It is very tempting to expect such a sudden change in the vortex magnetic flux to imply non-analyticity in the IR-finite core energy of a vortex, or equivalently the vortex fugacity. Whenever the $U(1)_{\rm G}$ symmetry is spontaneously broken, the equilibrium state of the system will contain a non-zero density of vortices and antivortices due to quantum fluctuations. If the minimal vortex energy is non-analytic, this will in turn induce non-analyticity in the true ground state energy density. (This argument ceases to apply only when the vortex density reaches the point where vortices condense, thereby restoring the $U(1)_{\rm G}$ symmetry.) In other words, if non-analyticities in vortex magnetic flux imply non-analyticity in the vortex energy, our vortex holonomy observable functions as a useful order parameter, identifying thermodynamically distinct gapless phases. There is a possible loophole in the above argument: what if the change in vortex magnetic flux is caused by a level crossing between vortices of flux $\pi$ and $0$ (mod $2\pi$)?\footnote{We are grateful to N.~Seiberg for useful discussions on this issue.} Such a level crossing could produce non-analyticity in our vortex holonomy observable without being associated with non-analyticity in the ground state energy or other thermodynamic observables. However, for such a level crossing to be possible, a (metastable) unit-winding vortex with flux $0$ (mod $2\pi$) would need to exist in the Higgs phase and become degenerate with the flux $\pi$ (mod $2\pi$) unit-winding vortex as one varies parameters. Our analysis of the quantum effective action for the vortex holonomy shows that, within the domain of validity of the effective action \eqref{eq:quantum_eff_action}, there simply are no static solutions describing unit-winding vortices with flux equal to $0$ mod $2\pi$ in the Higgs phase. The quantum effective action \eqref{eq:quantum_eff_action} is a valid long distance description throughout the Higgs regime, relying, essentially, only on a large ratio of the distance scale of interest to microscopic scales. However, Eq.~\eqref{eq:quantum_eff_action} does not take into account monopole-instanton effects, so it necessarily ceases to be applicable in a transition region between the confining and Higgs regimes where the Higgs mass scale $m_A^2 \sim e^2 v_c^2$ becomes comparable to the monopole-induced photon mass scale $m_{\gamma}^2 \sim ({\mu_{\rm UV}^3}/{e^2}) \, e^{-S_{\rm I}}$. This region in parameter space can be made arbitrarily small by increasing $S_{\rm I}$.
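To make the narrowness of this window explicit (a one-line estimate using only the scales already quoted), the two mass scales are comparable when
\begin{align}
e^2 v_c^2 \sim m_\gamma^2 \sim \frac{\mu_{\rm UV}^3}{e^2} \, e^{-S_{\rm I}}
\qquad\Longleftrightarrow\qquad
v_c^2 \sim \frac{\mu_{\rm UV}^3}{e^4} \, e^{-S_{\rm I}} \,,
\end{align}
so the range of charged scalar condensate (and hence the range of $m_c^2$) for which neither the Higgs nor the confining description applies is exponentially small in the monopole-instanton action $S_{\rm I}$.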
For the level-crossing scenario to take place, one would need to envision that as we go from the Higgs regime toward the confining regime, a flux $2\pi$ minimal-winding vortex has to appear with a higher energy than a $\pi$-flux minimal-winding vortex, and then cross it in energy, all within this arbitrarily small region. Moreover, this phenomenon would have to take place \emph{only} in the strongly-coupled region of parameter space, because it certainly does not happen in the weakly-coupled domain, illustrated in Fig.~\ref{fig:contours}, where we have shown the existence of a first order phase transition. So while we cannot absolutely rule out this level-crossing scenario, in our view it requires enough conspiracies to seem very far-fetched. This concludes our arguments for the presence of at least one phase transition curve separating the SE and W regions of Fig.~\ref{fig:3D_phase_diagram}. \subsection{Explicit breaking of flavor permutation symmetry} \label{sec:broken_permutations} We now generalize our model to include operators which break the $(\mathbb{Z}_2)_{\rm F}$ symmetry explicitly. The simplest such term is just a mass perturbation giving the two charged fields $\phi_+$ and $\phi_-$ distinct masses $m_+$ and $m_-$. Let \begin{align} m^2_{\rm avg} &\equiv \frac{1}{2} (m^2_{+}+m^2_{-})\,, \quad \Delta \equiv \frac{1}{e^4}(m^2_{+} - m^2_{-}) \,, \end{align} denote the average mass squared and a measure of their difference, respectively. We will examine the dependence of physics on $m^2_{\rm avg}/e^{4}$ with $\Delta >0$ held fixed. If $\Delta$ is sufficiently large then there are two seemingly different regimes where no global symmetries are spontaneously broken: one where no scalar fields are condensed, and another where only $\phi_{-}$ is condensed. The latter regime is not a distinct phase, as condensation of the charged field $\phi_-$, by itself, does not imply a non-vanishing expectation value of any physical order parameter. In fact, these two regimes are smoothly connected to each other and are trivial in the sense that they have a mass gap and a vacuum state which is invariant under all global symmetries. The more interesting regimes of the model are those with spontaneously broken $U(1)_{\rm G}$ symmetry. The cubic term in the action $\epsilon \phi_{0} \phi_{+} \phi_{-} + \textrm{h.c.}$ ensures that there is no regime where $\phi_0$ and only one of the two charged fields are condensed. Hence we only need to consider two regimes with spontaneously broken $U(1)_{\rm G}$ symmetry: one where all scalar fields are condensed, another where only the neutral scalar $\phi_0$ is condensed. \subsubsection{Higgs regime} Consider the Higgs regime where $-m_{\rm avg}^2 \gg e^4$ and all scalars are condensed. The tree-level long-distance energy density that determines the holonomy around a $U(1)_{\rm G}$ vortex of winding number $k$ is given by an obvious generalization of Eq.~\eqref{eq:energy_long_distance}, \begin{equation} \mathcal E(r) = \frac{v_+^2}{r^2}\left(n-k - \frac \Phi{2\pi}\right)^2 + \frac{v_-^2}{r^2}\left(-n + \frac \Phi{2\pi}\right)^2 + \frac {v_0^2 \, k^2}{r^2} + \mathcal O(r^{-4}) \,. \end{equation} Due to the explicit breaking of $(\mathbb{Z}_2)_{\rm F}$, the magnitudes of the charged scalar expectation values $v_+$ and $v_-$ are no longer equal; it is convenient to define $v_{\rm avg}^2 \equiv \frac{1}{2}\,(v_+^2 + v_-^2)$, the mean of the squared expectation values.
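Since the energy density above is quadratic in $\Phi$, the preferred flux follows from the stationarity condition $\partial \mathcal E / \partial \Phi = 0$ (a quick check of the formula quoted next):
\begin{align}
v_+^2 \left( n - k - \frac{\Phi}{2\pi} \right) = v_-^2 \left( \frac{\Phi}{2\pi} - n \right)
\qquad\Longrightarrow\qquad
\frac{\Phi}{2\pi} = n - \frac{k}{2} \, \frac{v_+^2}{v_{\rm avg}^2} \,.
\end{align}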
For given values of $k$ and $n$, minimizing the above energy density yields \begin{equation} \Phi = \left(2n-k\frac{v_+^2}{v_{\rm avg}^2}\right)\pi, \end{equation} and $\mathcal E = \left(\frac{1}{2}\frac{v_+^2v_-^2}{v_{\rm avg}^2}+v_0^2\right)k^2/r^2 + \mathcal O(r^{-4})$. Due to the explicit breaking of $(\mathbb{Z}_2)_{\rm F}$, there are no longer two degenerate minimal-winding vortices at tree-level. Suppose $v_-^2 < v_+^2$ without loss of generality. Then the unique minimal energy unit-winding vortex (corresponding to $k=1,n=1$) carries magnetic flux \begin{equation} \Phi = \frac{v_-^2}{v_{\rm avg}^2} \, \pi \,, \label{eq:brokensymflux} \end{equation} which is no longer quantized in units of $\pi$. This means that the holonomy encircling a vortex, $\langle \Omega(C)\rangle_1 = e^{i\Phi}$, is no longer real. The ordinary holonomy expectation value in the denominator of $O_\Omega$ necessarily remains real and positive due to the continuing presence of unbroken $(\mathbb{Z}_2)_{\rm C}$ symmetry. Consequently, in this tree-level analysis, our nonlocal order parameter $O_\Omega$ is a non-trivial phase which differs from both $-1$ and $+1$. Small quantum corrections cannot bring the vortex magnetic flux (\ref{eq:brokensymflux}) to $0$ (mod $2\pi$), so this conclusion must hold generically throughout the phase which extends inward from the weakly coupled regime. In particular, \begin{align} \boxed{\textrm{Higgs phase without $(\mathbb{Z}_2)_{\rm F}$ symmetry:} \;\;O_{\Omega} \neq 1} \,. \label{eq:O_Omega_HiggsWithoutZP} \end{align} After the analysis in the next subsection, we will see that one can interpret the condition \eqref{eq:O_Omega_HiggsWithoutZP} as a gauge-invariant criterion defining the Higgs phase. \subsubsection{$U(1)_{\rm G}$-broken confining regime} Now consider the regime where neither charged scalar field is condensed. When $m_{\rm avg}^2$ is large (compared to other scales) one may integrate out both charged fields and the effective description of the theory is given by Eq.~\eqref{eq:effective_action}, with $m_c$ now defined as the mass of the lightest charged field, $m_c = \min(m_+,m_-)$, plus additional higher-dimension operators which are no longer forbidden by the $(\mathbb{Z}_2)_{\rm F}$ symmetry. Writing out the lowest dimension such term explicitly, we have \begin{align} \label{eq:broken_effective_action} S_{\rm eff} = \int d^{3}x & \left[ \frac{1}{4e^2} \, F_{\mu\nu}^2 + V_{\rm m}(\sigma) +|\partial_\mu \phi_0|^2 + V(|\phi_0|) + \frac{a}{m_c^2} \, |\phi_0|^2 F_{\mu\nu}^2 + \frac{b}{m_c^2} \, S_{\mu\nu} F^{\mu\nu} + \cdots \right], \end{align} where the ``polarization'' $ S_{\mu\nu} \equiv \frac{i}{2}\big[ (\partial_\mu\phi_0^\dagger)(\partial_\nu\phi_0) -(\partial_\nu\phi_0^\dagger)(\partial_\mu\phi_0) \big] $. To examine the effect of this new dimension-5 $(\mathbb{Z}_2)_{\rm F}$-odd term $S \cdot F$ in the presence of vortices, it will prove helpful to integrate by parts and rewrite it as a direct coupling between the gauge field and a current, $\int d^3x \> A_\mu J^\mu_{\rm eff}$, with the current built out of gradients of the neutral scalar $\phi_0$, \begin{equation} \label{eq:bound_current} J^{\mu}_{\rm eff} = \frac{2b}{m_c^2} \, \partial_{\nu} S^{\mu\nu} \,.
\end{equation} This current is automatically conserved, $\partial_{\mu} J^{\mu}_{\rm eff} = 0$, as required by gauge invariance.% \footnote {% Alternatively, one might be tempted to eliminate this term, which induces mixing between $S_{\mu\nu}$ and $F_{\mu\nu}$, by making a suitable redefinition of the gauge field. But for our purposes such a field redefinition is unhelpful as it complicates the evaluation of holonomies, effectively introducing a current-current interaction between the $U(1)_{\rm G}$ current and the current associated with a heavy electrically-charged probe particle used to measure the holonomy. } Now consider the minimal vortex configuration where the neutral scalar has a spatially varying magnitude and phase, $\phi_0 = v_0 \, f_0(r)\, e^{i\theta}$. This induces a non-zero antisymmetric $S_{\mu \nu}$ with \begin{align} S_{r \theta} = v_0^2\, \frac{ f_0(r) f_0'(r)}{r} \,. \label{eq:Svortex} \end{align} This polarization is localized on the vortex core (with an $\mathcal O(r^{-4})$ power-law tail). The associated current $J^\mu_{\rm eff}$ has an azimuthal component, $ J^{\theta}_{\rm eff}(r) = \frac{2b\,v_0^2}{m_c^2} \, \partial_{r} \big[ {f_0(r)f_0'(r)}/r \big] $. As in any solenoid, this current sources a magnetic field which is also localized within the vortex core, i.e., $r \lesssim |m_0|^{-1}$, up to an $\mathcal O(r^{-4})$ tail. What does all of this mean for holonomies around vortices? There are several distinct physical length scales in the $U(1)_{\rm G}$-broken confining phase. Recall that the non-perturbative monopole-instanton induced contribution to the action depends on the classical action $S_{\rm I}$ of a monopole-instanton and the scale $\mu_{\rm UV}$ which is set by the inverse length scale of the monopole core, $S_{\rm monopole} =\int d^3x\, V_{\rm m}(\sigma)= -\int d^3x\, \mu_{\rm UV}^3 \, e^{-S_{\rm I}} \cos(\sigma)$. This term is responsible for linear confinement with a string tension $T \sim e^2 m_\gamma \sim \sqrt{e^2\mu_{\rm UV}^3}\, e^{-S_{\rm I}/2}$. Suppose that $T^{1/2} \ll m_c$, as is the case in the weakly coupled portion of the phase. The possibility of charged scalar pair production implies that sufficiently long strings can break. The string-breaking length, \begin{equation} L_{\rm br} \equiv 2 m_c/T \,, \end{equation} characterizes the length scale beyond which string-breaking effects cannot be neglected. Hence, linear confinement and area-law behavior for Wilson loops only hold for intermediate distance scales between $T^{-1/2}$ and $L_{\rm br}$. For our purposes, the quantity of primary interest is the holonomy for a circular contour around a vortex when the contour radius $r$ exceeds the largest intrinsic scale of the theory, $r \gg L_{\rm br}$. However, let us work up to this case by considering holonomies calculated on circles of progressively increasing size. Consider a circular contour $C$ with a unit-winding $\phi_0$ vortex at its center. To begin, suppose that the radius $r$ of the contour $C$ is large compared to the coherence length $\xi \sim 1/|m_0|$ but small compared to $T^{-1/2}$, the inverse dual photon mass.
Then confinement and monopole effects can be ignored, and a calculation of the magnetic flux using Eqs.~\eqref{eq:broken_effective_action}--\eqref{eq:Svortex} gives \begin{align} \langle \Omega(C) \rangle_1 = e^{-2\pi r \mu'} \, e^{i \Phi} \,, \qquad \xi \ll r \ll T^{-1/2} \,, \label{eq:small_circle} \end{align} where $\mu'$ is a scheme-dependent renormalization scale and the flux is given by \begin{align} \label{eq:vme} \Phi = 2\pi b\left(\frac{e\, v_0}{m_c}\right)^2 \,. \end{align} So, for contours encircling vortices in this ``inner'' distance regime (but still far outside the vortex core), we find that \begin{align} \frac{\langle \Omega(C) \rangle_1}{\langle \Omega(C) \rangle} = e^{i \Phi} \,, \qquad \xi \ll r \ll T^{-1/2} \,. \end{align} Next, suppose that the contour radius satisfies $T^{-1/2} \ll r \ll L_{\rm br}$. The dual photon mass term is important in this regime. To compute the behavior of the Wilson loop, we recall the usual prescription of Abelian duality, see, e.g.,~% Refs.~\cite{Polyakov:1976fu,Unsal:2008ch}: an electric Wilson loop along a contour $C$ maps to a configuration of the dual photon with a $2\pi$ monodromy on curves that link $C$. A very large Wilson loop in the $x$-$y$ plane can be described by a configuration of $\sigma$ which, well inside the loop, is purely $t$-dependent, with $\sigma$ vanishing as $t \to \pm\infty$ while having a $2\pi$-discontinuity at $t=0$. In the Abelian dual description, the effective action \eqref{eq:broken_effective_action} then takes the form \begin{align} S_{\rm eff} = \int d^{3}x\, &\left[ |\partial \phi_0|^2 + V(|\phi_0|) + \frac{e^2}{8\pi^2} \, (\partial\sigma)^2 \Bigl( 1 - \frac {4a \, e^2}{m_c^2} \, |\phi_0|^2 \Bigr) \right. - \mu_{\rm UV}^3 \, e^{-S_{\rm I}} \cos(\sigma) \nonumber\\ &\left.\vphantom{[|\partial \phi_0|^2} + \frac{ib\, e^2}{2\pi m_c^2} \, \epsilon^{\mu\nu\rho} \, S_{\mu\nu} \, \partial_{\rho} \sigma + \cdots \right] \,. \label{eq:dual_effective_action} \end{align} On the vortex configuration, the final $b$ term becomes \begin{equation} \frac{ib\, e^2}{2\pi m_c^2} \, \epsilon^{\mu\nu\rho} \, S_{\mu\nu} \, \partial_\rho\sigma = \frac{ib}{\pi} \, \left(\frac{ev_0}{m_c}\right)^2 \frac{ f_0(r) f_0'(r)}{r} \,{\partial_t\sigma} \,. \end{equation} Since the vortex configuration is time-independent, the integral of this term only receives a contribution from the $2\pi$ discontinuity in $\sigma$ at $t = 0$. Evaluating the effective action \eqref{eq:dual_effective_action} on this solution gives a result for the holonomy expectation value of \begin{align} \langle \Omega(C) \rangle_1 = e^{-2\pi r \mu'} e^{-T \pi r^2} e^{i \Phi} \,, \qquad T^{-1/2} \ll r \ll L_{\rm br} \,, \end{align} showing area-law decrease in magnitude together with the same phase \eqref{eq:vme} that appears for smaller holonomy loops. Of course, without a vortex the $b$ term vanishes and the holonomy expectation shows pure area-law decrease with no phase, \begin{align} \langle \Omega(C) \rangle = e^{-2\pi r \mu'} e^{-T \pi r^2} \,, \qquad T^{-1/2} \ll r \ll L_{\rm br} \,. \end{align} Consequently, for this ``intermediate'' range of circle sizes we again find \begin{align} \frac{\langle \Omega(C) \rangle_1}{\langle \Omega(C) \rangle} = e^{i \Phi} \,, \qquad T^{-1/2} \ll r \ll L_{\rm br} \,. \end{align} Now we are finally ready to consider the most interesting regime of holonomy contours, those with $r \gg L_{\rm br}$. First, consider the unconstrained vacuum expectation value.
Due to the presence of heavy dynamical charged excitations, Wilson loop expectation values contain a sum of area-law and perimeter-law contributions, but the perimeter-law contribution dominates in the long-distance regime, \begin{align} \langle \Omega(C) \rangle = e^{-2\pi r \mu'} \big( e^{-T \pi r^2} + e^{-2\pi r m_c } \big) \sim e^{-2\pi r(m_c + \mu') } \,, \qquad L_{\rm br} \ll r \,, \label{eq:vachol} \end{align} (Here, irrelevant prefactors are neglected.) Physically, this Wilson loop expectation describes a process where a unit test charge and anticharge are inserted at some point, separated, and then recombined after following semicircular worldlines (in Euclidean space) forming two halves of the contour $C$. The second perimeter-law term arises from contributions in which dynamical charges of mass $m_c$ are pair-created and dress the test charge and anticharge to create two bound gauge-neutral ``mesons''. These mesons have physical size of order $\ell_{\rm meson} \sim \textrm{min}(T^{-1/2},(e^2 m_c)^{-1/2})$, and experience no long range interactions.\footnote{When $m_c \gg T^{1/2}/e^2$ the dressed test charges are analogous to $B_c$ mesons in QCD, and can be described as $2{+}1$D Coulomb bound states, see, e.g., Ref.~\cite{Aitken:2017ayq}. } Once the loop size exceeds $L_{\rm br}$, pair creation of dynamical charges of mass $m_c$ and the associated meson formation becomes the dominant process. Finally, suppose that this very large contour $C$ encircles a minimal vortex. Then the area-law contribution to the holonomy expectation acquires the phase $\Phi$, in exactly the same manner described above. In contrast, the perimeter-law contribution arises from fluctuations of the charged fields within distances of order of $\ell_{\rm meson}$ from any point on the contour $C$. The amplitude for such screening fluctuations, and consequent meson formation, must be completely insensitive to the presence of a vortex very far away at the center of the loop. Consequently, in the presence of a vortex the two different contributions to the holonomy expectation value have different phases, \begin{align} \langle \Omega(C) \rangle_1 = e^{-2\pi r \mu'} \big( e^{-T \pi r^2} e^{i\Phi} + e^{-2\pi r m_c } \big) \,. \label{eq:vorhol} \end{align} Once again, in the long distance regime $r \gg L_{\rm br}$, the string-breaking or perimeter-law term dominates. Combining the vortex holonomy expectation (\ref{eq:vorhol}) with the vacuum expectation (\ref{eq:vachol}), we find that their ratio, in the long distance regime, equals 1 up to exponentially small corrections, \begin{align} \frac{\langle \Omega(C) \rangle_1}{\langle \Omega(C) \rangle} &= 1 + \mathcal O\Big(e^{-T \pi r^2 (1-L_{\rm br}/r)} \left(e^{i \Phi}-1\right)\Big) \,. \end{align} Hence, the large $r$ limit defining our vortex observable $O_\Omega$ exists and yields the simple result: \begin{align} \boxed{\textrm{$U(1)_{\rm G}$-broken confining phase without $(\mathbb{Z}_2)_{\rm F}$ symmetry:} \;\;O_{\rm \Omega} = +1} \,. \end{align} Being strictly constant (i.e., with no dependence whatsoever on microscopic parameters), this result must hold exactly throughout the phase connected to the weakly-coupled confining $U(1)_{\rm G}$-broken regime. \emph{Any} deviation from $O_\Omega = +1$ must signal a phase transition. \subsubsection{Summary} Let us take stock of what we have learned about the relation between the Higgs and confining $U(1)_{\rm G}$-broken regimes in the absence of the $(\mathbb{Z}_2)_{\rm F}$ symmetry.
So long as the $U(1)_{\rm G}$ global symmetry is spontaneously broken, there is no way to distinguish the Higgs and confining regimes within the Landau paradigm using local order parameters. But our vortex holonomy order parameter \emph{does} distinguish them! Consider the theory with large positive $m_{\rm avg}^2$, in its regime where $U(1)_{\rm G}$ is spontaneously broken due to the dynamics of the neutral scalar sector, and imagine progressively decreasing $m_{\rm avg}^2/e^4$. Initially, for large positive $m_{\rm avg}^2/e^4$, the gauge field holonomy calculated on arbitrarily large circles around $U(1)_{\rm G}$ vortices is trivial, dominated by perimeter-law contributions, and our order parameter $O_{\Omega} = +1$. But once $m_{\rm avg}^2/e^4$ decreases sufficiently, the charged scalars condense. Then the holonomy around vortices acquires a non-trivial phase, with $O_{\Omega}$ first deviating from $1$ at some critical value of $m_{\rm avg}^2/e^4$. The same reasoning as in Sec.~\ref{sec:ColemanWeinberg} implies that this non-analytic behavior in $O_\Omega$ should also signal a genuine phase transition. \section{QCD and the hypothesis of quark-hadron continuity} \label{sec:QCD} A central topic in strong interaction physics is understanding the phase structure of QCD as a function of baryon number density, or equivalently as a function of the chemical potential $\mu_B$ associated with the $U(1)_B$ baryon number symmetry. (For reviews see, for example, Refs.~\cite{Alford:2007xm,Baym:2017whm}.) At low (nuclear) densities, or small $\mu_B$, it is natural to describe the physics in terms of nucleons, while at large $\mu_B$ a description in terms of quark matter is appropriate thanks to asymptotic freedom. Are ``confined'' nuclear matter and ``deconfined'' quark matter sharply distinct phases of matter, necessarily separated by at least one phase transition, or might they be smoothly connected, similar to the gas and liquid phases of water? Following Sch\"afer and Wilczek \cite{Schafer:1998ef}, we focus on the behavior of QCD with three flavors of quarks having a common mass $m_q$, so that there is a vector-like $SU(3)$ flavor symmetry. We ignore the weak, electromagnetic, and gravitational interactions. Some readers may wonder why it is especially interesting to consider the limit of QCD with $SU(3)$ flavor symmetry. Physically there are, of course, six quark flavors in the Standard Model. But the three heaviest quark flavors (charm, bottom and top) are so heavy that it is an excellent approximation to ignore them entirely when considering the possible continuity between nuclear matter and quark matter. The three lightest quark flavors (up, down and strange) have distinct masses in nature, so there is no exact global $SU(3)$ symmetry acting on the light quark fields. However, in practice the strength of $SU(3)$ flavor symmetry breaking is not terribly large, since none of the three lightest quarks are heavy compared to the strong scale $\Lambda_{\rm QCD}$. So one motivation to study the $SU(3)$ flavor symmetric limit of QCD is that the physics is simplest in this limit, and at the same time it is a useful starting point for much phenomenology. There is also a more theoretical justification for focusing on the $SU(3)$ flavor symmetric limit. Suppose that the up and down quarks are approximately degenerate in mass, but $SU(3)$ flavor symmetry is broken because the strange quark is heavier, as is the case in nature. 
In dense QCD, the effective strength of $SU(3)$-flavor breaking effects due to unequal quark masses depends on the mass differences relative to $\mu_B$. At sufficiently large $\mu_B$, or high density, $SU(3)$ flavor breaking effects are negligible and one is always in the so-called CFL regime, described below. However, when the strange quark mass is made large enough compared to the light quark mass scale, one can show reliably that at intermediate values of $\mu_B$ the theory lies in a different regime called 2SC. The 2SC regime is known to be separated by phase transitions from \emph{both} nuclear matter and high density CFL regimes, because the realizations of global symmetries in the 2SC phase differ from those in both confined nuclear matter and the CFL phase \cite{Rischke:2000cn,Alford:2007xm}. The open issue is to understand what happens to the phase structure of QCD near the $SU(3)$ flavor limit. Let us briefly review what is known about the behavior of $SU(3)$ flavor symmetric QCD as a function of $\mu_B$. There is a critical value of $\mu_B$, which we denote by $\mu_B^{\rm sat}$, at which the baryon number density $n_B$ jumps from zero to a finite value known as the nuclear saturation density, $n_B^{\rm sat}$.\footnote{For physical values of quark masses, $n_B^{\rm sat} \sim 0.17 \, \textrm{fm}^{-3}$ and $\mu_B^{\rm sat} \sim 920 \, \textrm{MeV}$.} For $\mu_B$ above but close to $\mu_B^{\rm sat}$, the ground state of QCD may be thought of as modestly compressed nuclear matter, by which we mean that a description in terms of interacting nucleon quasiparticles is useful. It is believed that $U(1)_B$ is spontaneously broken for any $\mu_B > \mu_B^{\rm sat}$ due to condensation of dibaryons, so $SU(3)$-symmetric nuclear matter is a superfluid (see, e.g., Refs.~\cite{Dean:2002zx,Gandolfi:2015jma}). In real nuclear matter, neutron pairs condense, while in $SU(3)$ symmetric QCD it is flavor singlet $H$-dibaryons that condense. Nuclear matter should be regarded as a ``confined phase'' of QCD, with quark confinement defined in the same heuristic fashion as at zero density. (The infamous difficulties of making the notion of confinement precise in theories like QCD are reviewed in, e.g., Ref.~\cite{Greensite:2016pfc}.) In contrast, when $\mu_B \gg \mu_B^{\rm sat}$ it becomes natural to describe the system in terms of interacting quarks rather than interacting nucleons. Cold high density quark matter is known to feature ``color superconductivity.'' Attractive gluon mediated interactions between quarks near the Fermi surface lead to quark pairing and condensation, analogous to phonon-induced Cooper pairing of electrons in conventional superconductors. The condensing diquarks in $SU(3)$ flavor-symmetric three-color QCD have the quantum numbers of color-antifundamental scalar fields with charge $2/3$ under $U(1)_B$. The condensation of these diquark fields spontaneously breaks $U(1)_B$ to $\mathbb{Z}_2$. At the same time, the color $SU(3)$ gauge group is completely Higgsed, while the flavor $SU(3)$ symmetry is unbroken. The unbroken symmetry transformations consist of common global $SU(3)$ rotations in color and flavor space, and as a result the high density regime of three-flavor QCD is called the ``color-flavor-locked'' (CFL) phase. The term ``color superconductivity'' for this phase is something of a misnomer as there are no physically observable macroscopic persistent currents or related phenomena analogous to those present in real superconductors.
It is far better to think of this phase as a baryon superfluid in which the $SU(3)$ gauge field is fully Higgsed. Consequently, as $\mu_B$ is increased from $\mu_B^{\rm sat}$ to values that are very large compared to $\textrm{max}(\Lambda_{\rm QCD},m_q)$, the ground state of flavor symmetric QCD evolves from a confining regime with spontaneously broken baryon number symmetry to a Higgs regime which also has spontaneously broken $U(1)_B$. The realization of all conventional global symmetries is identical between the low and high density regimes. One may also confirm that 't Hooft anomalies match and that the pattern of low energy excitations in the different regimes may be smoothly connected \cite{Schafer:1998ef,Rajagopal:2000wf,Wan:2019oax}. So a natural question is whether there is a phase transition between the nuclear matter and quark matter regimes of flavor-symmetric QCD \cite{Schafer:1998ef}. If one can argue that such a phase transition is required, then ``confined'' nuclear matter and ``Higgsed'' or ``deconfined'' quark matter become sharply distinct phases of QCD, and one would obtain some insight into the meaning of the loosely defined term ``confinement'' in QCD. This question was the subject of the well-known conjecture by Sch\"afer and Wilczek \cite{Schafer:1998ef}. Based on the matching symmetry realizations and other points noted above, they argued that no phase transition is required between the Higgsed (quark matter) and confined (nuclear matter) regimes of $SU(3)$ flavor symmetric QCD, a conjecture known as ``quark-hadron continuity.'' It should be noted that this conjecture is more general than its name suggests. The arguments in favor of this conjecture do not rely on the existence of fermionic fundamental representation matter fields, and apply just as well to gauge theories with fundamental scalar fields and analogous symmetry structures. The Sch\"afer-Wilczek conjecture can be summarized as the statement that if one considers a gauge theory with gauge group $G$, fundamental representation matter, a $U(1)$ global symmetry, and parameters that allow one to interpolate between a ``confining'' regime where the $U(1)$ global symmetry is spontaneously broken, and a regime where the gauge group $G$ is completely Higgsed and the $U(1)$ global symmetry is also spontaneously broken, then these regimes are smoothly connected (i.e., portions of a single phase) at zero temperature.% \footnote {% This may sound similar to the Fradkin-Shenker-Banks-Rabinovici theorem~\cite{Fradkin:1978dv,Banks:1979fi} but, as discussed in the introduction, the Fradkin-Shenker-Banks-Rabinovici theorem does not apply in situations where the Higgs field is charged under global symmetries, while the Sch\"afer-Wilczek conjecture concerns precisely such situations. } Apart from its intrinsic theoretical interest, the status of quark-hadron continuity is also of experimental interest, at least to the extent that the flavor symmetric limit of QCD is a decent approximation to QCD with physical quark masses.
If phase transitions between nuclear matter and quark matter do occur, then the interiors of neutron stars may reach densities where the equation of state and transport properties are strongly affected by such transitions, leading to signatures that might be detectable via multi-messenger observations of neutron stars \cite{Lin:2005zda,Sagert:2008ka,Lattimer:2015nhk,Alford:2015gna,Han:2018mtj,Most:2018eaw,McLerran:2018hbz, Bauswein:2018bma,Christian:2018jyd,Xia:2019pnq,Gandolfi:2019zpj, Chen:2019rja,Alford:2019oge,Han:2019bub,Christian:2019qer, Chatziioannou:2019yko,Annala:2019puf,Chesler:2019osn,Fischer:2020xjl,Zha:2020gjw}. \subsection{Status of the Sch\"afer-Wilczek conjecture} In the two decades since Sch\"afer and Wilczek hypothesized quark-hadron continuity in flavor symmetric QCD, based on compatible symmetry realizations and other necessary but not sufficient correspondences, their conjecture has reached the status of a highly plausible folk theorem. The expectation of quark-hadron continuity has been used as the starting point for a large number of further conjectures and developments, see, e.g., Refs.~\cite{Hatsuda:2006ps,McLerran:2007qj, Yamamoto:2007ah,Alford:2019oge,Baym:2019iky,Nishimura:2020odq, Hirono:2018fjr,BitaghsirFadafan:2018uzs,Schmitt:2010pf,Buballa:2003qv, Fukushima:2013rx,Fukushima:2015bda,Schafer:1999pb,Schafer:2000tw,Masuda:2012ed,Kovensky:2020xif}. Recently, however, three of the present authors argued that a change in particle-vortex statistics between the Higgs regime (quark matter) and the confined regime (nuclear matter) should be interpreted as compelling evidence for the invalidity of the Sch\"afer-Wilczek conjecture \cite{Cherman:2018jir}.% \footnote {% For other examinations of vortices in dense quark matter, see also Refs.~\cite{Eto:2009kg,Eto:2009tr,Cipriani:2012hr, Chatterjee:2015lbf,Chatterjee:2018nxe,Hirono:2019oup}. } We showed that color holonomies around minimal circulation $U(1)_B$ vortices have non-trivial phases of $\pm 2\pi/3$ in high density quark matter, noted that these holonomies should have vanishing phases in the nuclear matter regime, and used this sharp change in the physics of topological excitations to argue that the nuclear matter and quark matter regimes of dense QCD will be separated by a phase transition. Subsequent work by other authors \cite{Hirono:2018fjr,Alford:2018mqj} offered some objections to the arguments in our Ref.~\cite{Cherman:2018jir}. Let us address these objections, starting with Ref.~\cite{Hirono:2018fjr} by Hirono and Tanizaki. Changes in particle-vortex statistics are a commonly used diagnostic for phase transitions in \emph{gapped} phases of matter, see, e.g., Refs.~\cite{doi:10.1080/00018739500101566,Hansson:2004wca}. In gapped phases, changes in particle-vortex statistics are connected to changes in intrinsic topological order, which in turn can be related to changes in the realization of higher-form global symmetries~\cite{Gaiotto:2014kfa}. Reference~\cite{Hirono:2018fjr} tacitly assumed that these statements also hold in gapless systems, and misinterpreted our work \cite{Cherman:2018jir} as proposing that the zero temperature high density phase of QCD is topologically ordered. Reference~\cite{Hirono:2018fjr} then argued that this is not the case by discussing the realization of a putative low-energy ``emergent'' higher-form symmetry in a gauge-fixed version of $N_c=3$ Yang-Mills theory coupled to fundamental Higgs scalar fields.
Besides relying on an approximate description that is not manifestly gauge-invariant to suggest some higher-form symmetry, this discussion missed the central points of Ref.~\cite{Cherman:2018jir} for two reasons. First, Ref.~\cite{Cherman:2018jir} already explicitly emphasized that the CFL phase of QCD is not topologically ordered according to the standard definition of that term, so arguing that the CFL phase does not have topological order in no way contradicts the analysis of Ref.~\cite{Cherman:2018jir}. Second, while Ref.~\cite{Hirono:2018fjr} agreed with us that in the flavor-symmetric limit, CFL quark matter features non-trivial color holonomies around $U(1)_B$ vortices, it did not address the key question of how this could be consistent with the expected behavior of color holonomies in the nuclear matter regime. Without addressing this crucial question, one cannot conclude that quark-hadron continuity remains a viable scenario in QCD. Reference~\cite{Alford:2018mqj} by Alford, Baym, Fukushima, Hatsuda and Tachibana accepted the main result of Ref.~\cite{Cherman:2018jir}, namely that in the flavor-symmetric limit color holonomies around vortices take sharply different values in the nuclear matter and quark matter phases. But Ref.~\cite{Alford:2018mqj} argued that the hadronic and color-superconducting regimes may nevertheless be smoothly connected. Alford et al. considered (straight) minimal-circulation vortices in a setting where the density varies along the direction of a vortex, and argued that a single superfluid vortex in the color-superconducting regime can connect to a single superfluid vortex in the hadronic regime.% \footnote {% The argument for this statement in Ref.~\cite{Alford:2018mqj} is very simple: one can consider a gedanken situation involving a rotating bucket of density-stratified quark/nuclear matter when the quantized superfluid circulation equals unity on every cross-section of the bucket. There must then be a single minimal circulation vortex threading both phases and crossing the interface between them. } This was interpreted as evidence against a ``boojum''\footnote{A boojum is a junction or special defect at points where vortices pass through the interface between distinct superfluid phases.} of the sort discussed in Refs.~\cite{Cipriani:2012hr,Chatterjee:2018nxe} in which three vortices in the quark matter phase must join in order to pass into the hadronic phase. We agree that there is no reason for a boojum at the interface between quark matter and nuclear matter to necessarily involve multiple vortices joining together. Instead, given the behavior of the color holonomy, it is entirely consistent for the interface to be a genuine boundary between distinct thermodynamic phases, with minimal-energy boojums involving just one minimal circulation vortex on either side and the behavior of the color gauge fields changing sharply at the interface. In our view, the key limitations of our work in Ref.~\cite{Cherman:2018jir} were that we could not explicitly compute expectation values of color holonomies in the superfluid nuclear matter regime and demonstrate that they have trivial phases, nor could we give a proof that a change in the behavior of gauge field holonomies around vortices must be associated with a bulk thermodynamic phase transition. (We did, however, give physical arguments for this which we believe are convincing.)
In the preceding sections of the present paper, we have analyzed a 3D model which was deliberately constructed to be analogous to dense QCD, and to which Sch\"afer and Wilczek's continuity conjecture applies and predicts that no phase transition separates the $U(1)_{\rm G}$-broken Higgs and confining regimes. This allowed us to examine both of these earlier limitations in the context of this instructive model, and find that continuity does \emph{not} hold. The Higgs and confining $U(1)_{\rm G}$-broken regimes of the 3D theory are distinct phases of matter characterized by a novel order parameter. \subsection{Higgs versus confinement in 4D gauge theory} \label{sec:QCD_discontinuity} In earlier sections we focused on our 3D Abelian model because this provided the simplest setting in which to examine the issue of Higgs-confinement continuity within superfluid (or spontaneously broken $U(1)$) phases, with good theoretical control in both regimes. It is, of course, of interest to understand how the relevant physics might change when one turns to 4D gauge theories which are more QCD-like. To that end, we now consider an $SU(3)$ gauge theory coupled to three antifundamental representation scalar fields, as well as an additional gauge-neutral complex scalar field $\phi_0$. We will build a model with $SU(3)$ flavor symmetry, and write the charged scalar fields as a $3\times 3$ matrix $\Phi$ which transforms in the bifundamental representation of $SU(3)_{\rm flavor} \times SU(3)_{\rm gauge}$, \begin{align} \Phi \to F \, \Phi \, C^\dagger \,, \quad F \in SU(3)_{\rm flavor}, \,\, C \in SU(3)_{\rm gauge}\,. \end{align} We also assume the theory has a $U(1)$ global symmetry, which acts as \begin{equation} U(1)_{\rm G}: \Phi \to e^{2i\alpha/3}\,\Phi, \quad \phi_0 \to e^{2i\alpha} \, \phi_0\, , \end{equation} and assume that there exist (or could exist) heavy `baryon' test particles with unit charge under the $U(1)_{\rm G}$ global symmetry. Since $U(1)_{\rm G}$ phase rotations which lie within $\mathbb Z_3$ coincide with the action of $SU(3)$ gauge transformations, the faithfully acting $U(1)$ global symmetry is $U(1)_{\rm G}/\mathbb{Z}_3$. The action defining this model is given by \begin{align} \label{eq:4dScalarQCD} S = \int d^{4}x \, &\left[ \frac{1}{2g^2} \, \text{tr}\, F_{\mu\nu}^2 + \text{tr}\, (D_{\mu} \Phi)^{\dag} D^{\mu} \Phi + |\partial_{\mu}\phi_0|^2 + m_{\Phi}^2 \, \text{tr}\, \Phi^{\dag}\Phi + m_{0}^2 \, |\phi_0|^2 \right. \nonumber\\ &\left.\vphantom{[\frac{1}{2g^2}\text{tr}\, F_{\mu\nu}^2} + \lambda_0|\phi_0|^4 + \lambda_\Phi \, \text{tr}\, (\Phi^\dagger\Phi)^2 + \epsilon\, (\phi_0^\dagger \det \Phi + \textrm{h.c.}) + \cdots \right] . \end{align} As usual, $D_{\mu} \Phi = \partial_{\mu} \Phi + i \Phi A_{\mu}$ is the covariant derivative in the antifundamental representation, and the ellipsis denotes possible further scalar self-interactions which are invariant under the chosen symmetries. The field strength $F_{\mu \nu} \equiv F^a_{\mu \nu} t^a$, with Hermitian $SU(3)$ generators satisfying $\text{tr}\, t^a t^b = \tfrac{1}{2} \delta^{ab}$. This 4D model is very similar to the scalar part of the effective field theory that describes high-density three-color QCD in the CFL quark matter regime~\cite{Alford:2007xm}, with $U(1)_{\rm G}/\mathbb{Z}_3$ playing the role of $U(1)_B$ in QCD.
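As a consistency check on these charge assignments, note that under the combined symmetry transformations
\begin{align}
\det \Phi \;\to\; \det\!\big( e^{2i\alpha/3} \, F \, \Phi \, C^\dagger \big) = e^{2i\alpha} \det \Phi \,,
\end{align}
since $\det F = \det C = 1$. Thus $\det\Phi$ carries the same $U(1)_{\rm G}$ charge as $\phi_0$, and the cubic coupling $\phi_0^\dagger \det \Phi$ appearing in Eq.~\eqref{eq:4dScalarQCD} is invariant under $U(1)_{\rm G}$ as well as under $SU(3)_{\rm flavor} \times SU(3)_{\rm gauge}$.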
The matrix-valued scalar $\Phi$ represents three color-antifundamental diquark fields, so that $\det\Phi$ has the quantum numbers of flavor-singlet dibaryons, which are condensed in both the CFL and $SU(3)$-symmetric nuclear matter phases. Due to the $\epsilon$ coupling between the gauge-neutral scalar $\phi_0$ and $\det \Phi$, one can think of $\phi_0^\dagger$ as a (dynamical) source for flavor-singlet dibaryons. Explicitly introducing the neutral scalar $\phi_0$ allows the model \eqref{eq:4dScalarQCD} to describe both the Higgs regime and a regime where dibaryons are light, but the gauge and charged scalar fields can be integrated out. Of course, the effective action for dense QCD in the CFL regime is rotation-invariant but not Lorentz invariant, and also includes heavy fermionic excitations, in contrast to the purely bosonic Lorentz-invariant theory defined by Eq.~\eqref{eq:4dScalarQCD}. These differences are not relevant to our discussion, and we expect the phase structure of the model \eqref{eq:4dScalarQCD} to mimic the phase structure of QCD with approximate $SU(3)$ flavor symmetry. Consider the Higgs regime of the model \eqref{eq:4dScalarQCD} where (in gauge-fixed language) $\Phi$ has an expectation value of color-flavor locked form, $\langle \Phi \rangle = v_{\Phi} \, \mathbf{1}_3$, and there is a residual unbroken $SU(3)_{\rm global}$ symmetry acting as $\Phi \to U \Phi U^{\dag}$, with $U \in SU(3)$. The $U(1)_{\rm G}$ global symmetry is spontaneously broken implying, as always, the existence of vortex topological excitations. To describe a straight ``superfluid'' vortex, using cylindrical coordinates with $r=0$ at the center of the vortex, one may fix a gauge in which the vortex configuration has $\Phi$ diagonal and $A_\mu$ taking values in the Cartan subalgebra, \begin{subequations} \label{eq:QCDvortex} \begin{align} \phi_0(r,\theta) &= v_0 \, f_0(r)\, e^{ik\theta} \,, \\[5pt] \Phi(r,\theta) &= v_{\Phi} \> \mathrm{diag} \left( f_1(r)\, e^{i(n+k)\theta} ,\> f_2(r)\, e^{i(m-n)\theta} ,\> f_3(r)\, e^{-im\theta} \right) , \\ A_{\theta}(r) &= \frac{a\, h_8(r)}{2\pi r} \, t_8 + \frac{b\, h_3(r)}{2\pi r} \, t_3 \,. \end{align} \end{subequations} Here $k,m,n \in \mathbb Z$, with $k$ the vortex winding number, $t_8 \equiv \frac{1}{2\sqrt{3}}\,\textrm{diag}(1,1,-2)$ and $t_3 \equiv \frac{1}{2}\,\textrm{diag}(1,-1,0)$ are the usual diagonal $SU(3)$ generator matrices, and the radial profile functions $\{ f_i \}$ and $\{ h_i \}$ approach $1$ as $r\to \infty$. Minimizing the long-distance energy density of the vortex configuration determines the gauge field asymptotics. One finds, \begin{equation} a = -\tfrac{2\pi}{\sqrt{3}} \, (k+3m) \,,\quad b = -2\pi(k+2n-m) \,. \end{equation} The minimal energy vortex with unit circulation ($k=1$) corresponds to $n=m=0$ (with physically equivalent forms related by Weyl reflections), in which case \begin{equation} a = -\tfrac{2\pi}{\sqrt{3}} \,,\quad b = -2\pi \,, \label{eq:minvals} \end{equation} and \begin{equation} \Phi(r,\theta) = v_\Phi \,\textrm{diag} \left(f(r)\, e^{i\theta},\> g(r),\> g(r) \right),\quad A_\theta(r) = \frac{h(r)}{3r} \,\textrm{diag}(-2,1,1) \,. \label{eq:minimalQCDvortex} \end{equation} Here, we have set $f_1(r) = f(r)$, $f_2(r) = f_3(r) = g(r)$, and $h_3(r) = h_8(r) = h(r)$. The minimal-energy vortex configuration \eqref{eq:minimalQCDvortex} preserves an $SU(2) \times U(1)$ symmetry (cf.~Ref.~\cite{Auzzi:2003fs}).
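As a quick consistency check, the values \eqref{eq:minvals} do reproduce the quoted gauge field profile: with $h_3 = h_8 = h$,
\begin{align}
A_\theta = \frac{h(r)}{2\pi r} \left( -\frac{2\pi}{\sqrt{3}} \, t_8 - 2\pi \, t_3 \right)
= -\frac{h(r)}{r} \left[ \tfrac{1}{6} \, \mathrm{diag}(1,1,-2) + \tfrac{1}{2} \, \mathrm{diag}(1,-1,0) \right]
= \frac{h(r)}{3r} \, \mathrm{diag}(-2,1,1) \,,
\end{align}
in agreement with Eq.~\eqref{eq:minimalQCDvortex}. The corresponding large-$r$ holonomy, using $h(r) \to 1$, is $\exp\big[ i \textstyle\oint A_\theta \, r \, d\theta \big] = \exp\big[ \tfrac{2\pi i}{3} \, \mathrm{diag}(-2,1,1) \big] = e^{2\pi i/3} \, \mathbf{1}_3$, anticipating the tree-level result quoted below.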
Because the configuration \eqref{eq:minimalQCDvortex} preserves only this $SU(2) \times U(1)$ subgroup of $SU(3)_{\rm global}$, minimal energy unit-winding vortices have zero modes associated with the moduli space \begin{align} \frac{SU(3)}{SU(2) \times U(1)} = \mathbb{CP}^2 \,. \end{align} Consequently, the worldsheet effective field theory for a vortex contains a $\mathbb{CP}^2$ non-linear sigma model~\cite{Eto:2009kg,Gorsky:2011hd,Eto:2011mk}. But the $\mathbb{CP}^2$ model in two spacetime dimensions (with vanishing topological angle $\theta$) has a mass gap and a unique ground state, see e.g., Refs.~\cite{Campostrini:1992ar,Bonati:2019owp}. So, despite the appearance of the classical configuration \eqref{eq:QCDvortex}, the $SU(3)_{\rm global}$ symmetry is unbroken both in the vacuum and in the presence of vortices. Now consider the behavior of our vortex holonomy order parameter in this theory. The gauge field holonomy is now a path-ordered exponential around some contour $C$, $\Omega(C) \equiv \mathcal P (e^{i\int_C A})$, and defines an $SU(3)$ group element. The natural non-Abelian version of our vortex order parameter involves gauge invariant traces of holonomies, \begin{align} O_{\Omega} \equiv \lim_{r \to \infty} \frac{\langle \text{tr}\, \Omega(C) \rangle_1}{\langle \text{tr}\, \Omega (C) \rangle} \,, \label{eq:QCD_order_parameter} \end{align} where in the numerator the circular contour $C$ encircles a minimal vortex in the same direction as the circulation of the $U(1)_{\rm G}$ current.% \footnote {% Once again, the numerator is defined by a constrained functional integral with a prescribed vortex world-sheet, with the size of that world-sheet and the minimal separation between the vortex world-sheet and the holonomy contour $C$ scaling together as the contour radius $r$ increases. } Both expectations in the ratio \eqref{eq:QCD_order_parameter} have perimeter-law dependence on the size of the contour $C$ arising from quantum fluctuations on scales small compared to $r$, but this geometric factor cancels by construction in the ratio. Unbroken charge conjugation symmetry implies that the denominator is real, and it must be positive throughout any phase connected to a weakly coupled regime. So as in our earlier Abelian model, the behavior of $O_{\Omega}$ is determined by the phase of the vortex expectation value in the numerator. A trivial calculation (identical to that in Ref.~\cite{Cherman:2018jir}) shows that at tree-level, far from the vortex, \begin{align} \frac{1}{3} \left\langle \text{tr}\, \Omega(C) \right\rangle_1^{\rm tree} = e^{2\pi i /3} \,, \end{align} demonstrating that $O_\Omega = e^{2\pi i/3}$ at tree-level. An effective field theory argument, analogous to that given in section \ref{sec:Higgs_holonomy_Abelian} (see also Appendix~\ref{sec:nonAbelian}), shows that this result is unchanged when quantum fluctuations are taken into account, as long as they are not so large as to restore the spontaneously broken $U(1)_{\rm G}$ symmetry. To see this, consider the form of the effective action generated by integrating out fluctuations on scales small compared to $r$. Only terms in the effective action with two derivatives acting on the charged scalar field $\Phi$ can contribute to the $\mathcal O(1/r^2)$ holonomy-dependent part of the energy density, and hence affect the gauge field asymptotics (\ref{eq:minvals}) which determines the expectation value of holonomies far from the vortex core. Traces of operators containing a single covariant derivative, such as $\text{Tr}\, \Phi^\dagger D_\mu \Phi$, are independent of the gauge field far from the vortex core.
Consequently, the portion of the effective action which controls the holonomy expectation value far from a vortex may be written in the form \begin{align} S_{\textrm{eff}, \, SU(3) \textrm{ holonomy}} = \int d^{4}x \, \Big\{ & \text{Tr}\, \!\big[ f_1(\phi_0,\Phi) (D_{\mu} \Phi)^\dagger f_2(\phi_0,\Phi) (D^\mu\Phi) \big] \nonumber\\ & + \epsilon^{ABC}\epsilon_{IJK} \, f_3(\phi_0,\Phi)^I_{\ A} (D_\mu\Phi)^J_{\ B} (D^\mu\Phi)^K_{\ C} \Big\}+ \textrm{h.c.}\,, \label{eq:pert} \end{align} where $A,B,C$ are color indices and $I,J,K$ are flavor indices. The three coefficient functions $\{ f_i\} $ depend on the fields $\phi_0$ and $\Phi$ (but not on their derivatives), and only through combinations which are invariant under $U(1)_{\rm G}$. The function $f_1$ is a color adjoint and flavor singlet (like $\Phi^\dagger\Phi$), $f_2$ is color singlet and flavor adjoint (like $\Phi\Phi^\dagger$), and $f_3$ is antifundamental in color and fundamental in flavor (like $\phi_0^\dagger \Phi$). Plugging in the configuration \eqref{eq:QCDvortex}, one can easily verify that both terms in \eqref{eq:pert} have extrema, with respect to the asymptotic gauge field coefficients $a$ and $b$, at the same location \eqref{eq:minvals} regardless of the form of the functions $\{f_i\}$. Therefore small quantum corrections do not perturb the gauge field asymptotics far from a vortex, and hence cannot shift the phase of the vortex holonomy expectation $\left\langle \text{tr}\, \Omega(C) \right\rangle_1$ away from $2\pi/3$. Hence, we learn that \begin{align} \boxed{ \textrm{$U(1)_{\rm G}$-broken Higgs phase:}\;\; O_{\Omega} = e^{2\pi i/3} } \,, \label{eq:QCD_holonomy_higgs} \end{align} holds exactly throughout the phase connected to the weakly coupled Higgs regime. Alternatively, when $m_{\Phi}^2 \gtrsim \Lambda^2$, with $\Lambda$ the strong dynamics scale of the theory, we can recycle the arguments of Sec.~\ref{sec:broken_permutations} to understand the behavior of $O_{\Omega}$. In this regime, due to the presence of heavy dynamical charged excitations, the expectation values of large fundamental representation Wilson loops are (exponentially) dominated by a perimeter-law contribution. Physically, a Wilson loop describes a process where a fundamental representation test particle and antiparticle are inserted at some point, separated and then recombined as they traverse the contour $C$. The perimeter law behavior arises from configurations in which dynamical fundamental representation excitations of mass $m_{\Phi}$ are pair-created and dress the test charge and anticharge to create two bound gauge-neutral ``mesons.'' These mesons have physical size of order $\ell_{\rm meson} \sim \textrm{min}\left(\Lambda^{-1}, (\alpha_s m_{\Phi})^{-1} \right)$, and experience no long range interactions. Once the Wilson loop size exceeds the string breaking scale $\sim 2 m_{\Phi}/\Lambda^2$, pair creation of dynamical charges of mass $m_{\Phi}$ and the associated meson formation becomes the dominant process contributing to fundamental Wilson loop expectation values. The perimeter law contribution to large fundamental representation Wilson loop expectation values arises from fluctuations of the gauge-charged fields within distances of order of $\ell_{\rm meson}$ from any point on the contour $C$. The amplitude for such screening fluctuations, and consequent meson formation, must be completely insensitive to the presence of a vortex very far away at the center of the loop.
This means that the holonomy expectations in the numerator and denominator of the vortex observable \eqref{eq:QCD_order_parameter} will be identical (up to exponentially small corrections vanishing as $r \to \infty$), leading to the conclusion that \begin{align} \boxed{\textrm{$U(1)_{\rm G}$-broken confining phase:}\;\; O_{\Omega} = 1 } \,. \label{eq:QCD_holonomy_conf} \end{align} Once again, the differing results \eqref{eq:QCD_holonomy_higgs} and \eqref{eq:QCD_holonomy_conf}, each strictly constant within their respective domains, imply that $O_{\Omega}$ cannot be a real-analytic function of $m_{\Phi}^2$. Adapting the arguments in Sec.~\ref{sec:ColemanWeinberg} regarding the impact of abrupt changes in the properties of vortex loops on the ground state energy, we see that $O_{\Omega}$ functions as an order parameter that distinguishes the $U(1)_{\rm G}$-broken Higgs and $U(1)_{\rm G}$-broken confining phases of this four-dimensional $SU(3)$ gauge theory with $SU(3)$ flavor symmetry.% \interfootnotelinepenalty=10 \footnote {% Further evidence that changes in our non-local order parameter signal genuine phase transitions in non-Abelian gauge theories may be gained by considering other calculable examples. One such case is described in Appendix~\ref{sec:nonAbelian}. A different example which is closer to the model discussed in this section consists of a version of the theory \eqref{eq:4dScalarQCD} in three spacetime dimensions, with gauge group $SU(2)$ and two flavors of $SU(2)$ antifundamental scalar fields, with a global flavor symmetry containing an $SU(2)$ factor. Generalizing the analysis in Sec.~\ref{sec:ColemanWeinberg} to this non-Abelian model, we have checked that there is a set of parameters (essentially identical to the ones in Sec.~\ref{sec:ColemanWeinberg}) for which the phase transition between the $U(1)_{\rm G}$-broken confining and Higgs regimes is strongly first-order as a function of the mass of the antifundamental scalars. The fact that the transition is strongly first-order allows the existence of the phase transition to be reliably established despite the fact that the gauge sector is strongly coupled within the $U(1)_{\rm G}$-broken confining phase. It is easy to check in this example that our vortex observable $O_{\Omega}$ jumps from $+1$ to $-1$ across the transition, and serves as an order parameter distinguishing distinct phases, even when the transition is no longer strongly first-order. Finally, it is easy to check that these statements generalize to $SU(N)$ gauge theories with $N_f = N > 2$. } Finally, if the $SU(3)$ flavor symmetry of this theory is explicitly broken by a small perturbation, a simple generalization of the analysis leading to the gauge field asymptotics \eqref{eq:minimalQCDvortex} implies that the phase of $O_{\Omega}$ will now deviate slightly from $2\pi /3$. But in the $U(1)_{\rm G}$-broken confined phase, $O_{\Omega}$ remains exactly $1$ due to the confinement and string breaking effects discussed above. This implies that the $U(1)_{\rm G}$-broken Higgs and $U(1)_{\rm G}$-broken confining regimes of our 4D $SU(3)$ scalar theory \eqref{eq:4dScalarQCD} must remain separated by a quantum phase transition even when the $SU(3)$ flavor symmetry is explicitly broken. Most importantly, essentially the same argument applies to dense QCD.
Before leaving this section, we note that one may consider our original 3D model (\ref{eq:the_model}), or the 4D non-Abelian generalization (\ref{eq:4dScalarQCD}), with the addition of a non-zero chemical potential for the $U(1)_{\rm G}$ symmetry. Such a chemical potential explicitly breaks charge conjugation symmetry, just like the baryon chemical potential in dense QCD. In our earlier discussion we used unbroken charge conjugation symmetry to conclude that the ground state expectation value of the holonomy must be real. But, as noted in footnote \ref{fn:reflect}, for a reflection-symmetric holonomy contour (such as a circle), reflection symmetry is an equally good substitute. Consequently, all of our arguments demonstrating that the phase of the holonomy encircling a vortex at large distance serves as an order parameter distinguishing ``confining'' and ``Higgs'' superfluid phases go through without modification in the presence of a non-zero chemical potential. In summary, we have shown that consideration of our new order parameter implies that there is a phase transition between nuclear matter and quark matter in dense QCD near the $SU(3)$ flavor limit. This means that the confining nuclear matter regime of QCD (at least with approximate $SU(3)$ flavor symmetry) has a sharp definition as a phase of QCD where the expectation values of color holonomies around superfluid vortices are positive, while quark matter --- a Higgs regime --- can be defined as the phase of QCD where these holonomy expectation values become complex. Given the notorious difficulties in giving a sharp definition for confining and Higgs regimes in gauge theories with fundamental representation matter (see Ref.~\cite{Greensite:2016pfc} for a review), this is a satisfying result in the theory of strong interactions. Our results are also encouraging for observational searches for evidence of quark matter cores in neutron stars, see, e.g.,~Refs.~\cite{Lattimer:2015nhk,Alford:2015gna,Han:2018mtj,McLerran:2018hbz, Bauswein:2018bma,Christian:2018jyd,Xia:2019pnq,Gandolfi:2019zpj, Chen:2019rja,Alford:2019oge,Han:2019bub,Christian:2019qer, Chatziioannou:2019yko,Annala:2019puf,Chesler:2019osn}, because they imply that hadronic matter and quark matter must be separated by a phase transition as a function of density. \section{Conclusion} \label{sec:conclusion} We have explored the phase structure of gauge theories with fundamental representation matter fields and a $U(1)$ global symmetry. Motivated by the physics of dense QCD, we considered both Higgs and confining portions of the phase diagram in which the $U(1)$ global symmetry is spontaneously broken, and hence the theory is gapless due to the presence of a Nambu-Goldstone boson. These two regimes cannot be distinguished by conventional local order parameters probing global symmetry realizations, nor do they naturally fit into more modern classification schemes based on topological order and related concepts. Nevertheless, using a novel vortex order parameter introduced in Sec.~\ref{sec:vortices_and_holonomies}, we found that $U(1)$-broken confining and Higgs regimes are sharply distinct phases of matter separated by at least one phase transition in parameter space, as illustrated in Fig.~\ref{fig:3D_phase_diagram}. In Secs.~\ref{sec:our_model} and \ref{sec:vortices_and_holonomies} (and Appendix~\ref{sec:nonAbelian}) we examined instructive parity-invariant Abelian (and non-Abelian) gauge theories in three spacetime dimensions illustrating this physics.
Then in Sec.~\ref{sec:QCD} we considered related theories with a $U(1)$ global symmetry in four spacetime dimensions, and explained how our considerations serve to rule out the Sch\"afer-Wilczek conjecture of quark-hadron continuity in cold dense QCD. Why are these results interesting? First, we have added a new tool to the toolkit of techniques for diagnosing phase transitions in gauge theories, and shown that it predicts previously unexpected phase transitions in theories with fundamental representation matter fields. Second, our analysis implies a phase transition between quark matter and nuclear matter in dense QCD near the $SU(3)$ flavor limit, with possible implications for observable properties of neutron stars. Third, our analysis provides a sharp distinction between a confined nuclear matter regime of QCD and dense quark matter. In other words, it provides sharp answers to some basic questions about strong dynamics: \begin{itemize} \item ``What is the confined phase of QCD?'' Our work shows that this question has a sharp answer when the $U(1)_B$ baryon number symmetry is spontaneously broken. The confined phase of QCD with spontaneously broken $U(1)_B$ symmetry can be defined as the phase of QCD where the expectation values of color holonomies around minimal-circulation superfluid vortices are positive. \item ``What is cold quark matter?'' Our analysis shows that cold quark matter can be defined as the phase of QCD where the expectation values of color holonomies around minimal-circulation superfluid vortices have non-vanishing phases. \end{itemize} Our results raise a number of other interesting questions that we hope can be addressed in future work. These include: \begin{itemize} \item What is the nature of the point in Fig.~\ref{fig:3D_phase_diagram} where the three different phase transition curves intersect? \item What can be said in general about the order of the phase transition(s) separating $U(1)$-broken confining and Higgs phases in the theories we have considered? As discussed in Sec.~\ref{sec:ColemanWeinberg}, for some ranges of parameters there is a single first-order phase transition. Is this always the case, or is there a range of parameters where the transition becomes second-order? How does the answer depend on the spacetime dimension? These issues are of more than just theoretical interest, because the properties of the nuclear to quark matter phase transition(s) in dense QCD can have observational impacts on the physics of neutron stars. \item Relatedly, when the transition is first-order, what is the physics at an interface separating coexisting phases? This is also directly connected to potential neutron star phenomenology. \item What happens to the phase structure of the class of theories we have considered, in both three and four spacetime dimensions, at non-zero temperature? \item How should the modern classification of the phases of matter be generalized when considering transitions between gapless regimes? Is there a natural embedding of the constructions in this paper into some more general framework? In Appendix~\ref{sec:gaugingU1} we gauge the $U(1)_{\rm G}$ symmetry of our 3D Abelian model and show that the resulting gapped theory (which flows to TQFTs at long distances) has a phase transition analogous to the Higgs-confinement phase transition studied in the body of the paper. But we also argue that, by itself, this cannot be used to infer the existence of a phase transition in the original model with a global $U(1)_{\rm G}$ symmetry.
\item Can our construction be generalized to gauge theories where the $U(1)_{\rm G}$ global symmetry is explicitly broken to a discrete subgroup $\mathbb{Z}_k$? Such theories would contain domain walls, and the behavior of gauge field holonomies around domain wall junctions could be used to identify phase transitions. \item Are there condensed matter systems which realize the physics of $U(1)$-broken Higgs-confinement phase transitions? \end{itemize} \section*{Acknowledgments} We are especially grateful to Fiona Burnell for extensive discussions and collaboration at the initial stages of this project. We are also grateful to M.~Alford, F.~Benini, S.~Benvenuti, K.S.~Damle, L.~Fidkowski, D.~Harlow, Z.~Komargodski, S.~Minwalla, E.~Poppitz, N.~Seiberg, Y.~Tanizaki and M.~\"Unsal for helpful discussions and suggestions during the long gestation of this paper. AC acknowledges support from the University of Minnesota. TJ is supported by a UMN CSE Fellowship. SS acknowledges the support of Iowa State University startup funds. LY acknowledges support from the U.S. Department of Energy grant DE-SC\-0011637.
{ "timestamp": "2020-12-01T02:50:22", "yymm": "2007", "arxiv_id": "2007.08539", "language": "en", "url": "https://arxiv.org/abs/2007.08539" }
\section*{Introduction} For a fixed metric space $X$, consider the family $\mathcal{S}$ of every metric quotient $S$ obtained from $X$. The first result in this paper is a tool for proving that a given $S \in \mathcal{S}$ is the limit of a given sequence $S_n \in \mathcal{S}$. It is based on comparing the data $\mathcal{G}$ and $\mathcal{G}_n$ that produce the respective quotients, giving precision to the idea that if these data ``look alike'', then the associated metric quotients are close with respect to the Gromov-Hausdorff distance. Intuitively, the conditions of Theorem \ref{thm:main} require that for a ``collapsing'' sequence $D_n$ of subsets of $X$, points outside $D_n$ are equally related by $\mathcal{G}$ and $\mathcal{G}_n$, and points outside $D_n$ are related to points inside it by $\mathcal{G}$ and $\mathcal{G}_n$ in a controlled manner. More precisely, for a collection $\mathcal{G}$ of subsets of $X$ and $x,y \in X$, denote $x \, \mathcal{G} \, y$ if $x = y$ or there exists $g \in \mathcal{G}$ with $x,y \in g$. The boundary, the complement and the diameter of each subset $D$ of $X$ are denoted, respectively, $\partial D$, $X \setminus D$ and $\operatorname{diam} D$; and the Gromov-Hausdorff distance between metric spaces $S$ and $S'$ is denoted $\dgh (S,S')$. \begin{thmm} \label{thm:main} Let $S$ and $S_n$, $n \in \mathbb{N}$, be the metric quotients associated to collections $\mathcal{G}$ and $\mathcal{G}_n$ of subsets of a metric space $X$. Suppose that, for each $n \in \mathbb{N}$, there exist $g^n \in \mathcal{G}$, $g_n \in \mathcal{G}_n$ and $D_n \subset X$ such that: \begin{enumerate}[label=\roman*)] \item \label{thm:gh2-hyp1} For any $x, y \in X \setminus D_n$, $x \, \mathcal{G} \, y$ if, and only if, $x \, \mathcal{G}_n \, y$. \item \label{thm:gh2-hyp2} For any $x \in X \setminus D_n$ and $y \in D_n$, $x \, \mathcal{G} \, y$ implies that $x, y \in g^n$, and $x \, \mathcal{G}_n \, y$ implies that $x,y \in g_n$. \item \label{thm:gh2-hyp3} $g^n \cap (\partial D_n \setminus D_n) \neq \emptyset$ and $g_n \cap (\partial D_n \setminus D_n) \neq \emptyset$. \end{enumerate} If $\lim_{n \to + \infty} \operatorname{diam} D_n = 0$, then $\lim_{n \to + \infty} \dgh (S,S_n) = 0$. \end{thmm} In case $X$ is compact, standard results on the Gromov-Hausdorff topology (see Theorem 7.4.15 in \cite{BBI}) guarantee that any sequence $S_n \in \mathcal{S}$ contains a subsequence that converges to a compact metric space. In this context, Theorem \ref{thm:main} may be useful for recognizing such a limit, and for constructing sequences approximating a given limiting space. Taking as $X$ a fixed polygon $P$ in some model plane, collections $\mathcal{G}$ associated to side-pairings $\mathcal{P}$ of $P$, known as paper-folding schemes \cite{dCH}, are considered. These pairings glue together, isometrically, interior-disjoint plane segments contained in $\partial P$, as in classical surface theory. However, infinitely many pairings are allowed, provided that the paired segments cover $\partial P$ up to a measure-zero set. While the formal definitions are a bit lengthy (section \ref{sec:back-paperspaces}), it is easy to come up with and represent the simplest identification patterns in paper-folding schemes as in Figures \ref{fig:general}, \ref{fig:canon1} and \ref{fig:repeat}, where dotted lines connect paired points. Specifically, plain paper-folding schemes are considered, this being a restriction on how the paired points are linked along $\partial P$.
These are known to produce quotients homeomorphic to the $2$-sphere. A particular application of the following result is shown in Figure \ref{fig:canon1}. \begin{thmm}[Theorem \ref{thm:paperlimit}] \label{thm:main2} Given a plain paper-folding scheme, sequences of plain paper-folding schemes approximating it in the Gromov-Hausdorff sense are constructed. \end{thmm} If $\mathcal{P}$ prescribes only finitely many pairings, the associated quotient $S$ is a conic-flat surface, also known as polyhedral, in the sense that every point has either a flat or conical neighborhood. On the other hand, gluing by infinitely many pairings may produce points in $S$ around which the metric is not described by these models. These are called \textit{singular}, and simple examples of such are accumulations of conical points, and points with infinite total angle. Theorem \ref{thm:main2} is applied to approximate examples of conic-flat spheres with singularities, in the Gromov-Hausdorff sense, by conic-flat spheres without singularities (Examples \ref{exem:general}, \ref{exem:canon1} and \ref{exem:repeat}). This is known to imply that the convergence also occurs uniformly -- see Exercise 7.15.14 in \cite{BBI}. Some singularities in the given examples imply that $S$ is not a space of curvature bounded either above or below in the sense of comparison geometry \cite{BBI}. This comes together with the total curvature exploding along the approximating sequence, in accordance with the theory of surfaces of bounded curvature \cite{AZ}, which guarantees that if certain bounds on the curvature of conical points hold for a sequence, then the property of being a surface of bounded curvature passes to uniform limits. Theorems \ref{thm:main} and \ref{thm:main2} grew out of particular cases first established in \cite{master}. Originally, paper-folding schemes were considered due to their associated quotients being the domains of certain surface homeomorphisms that are relevant in the theory of dynamical systems (see \cite{unimodal, dCH} and references therein), later gaining attention also in three-manifold geometry/topology (recent developments are found in \cite{palimits}). In these contexts, both the quotients and the transformations show up in families. Besides the geometric content mentioned above, the results in this paper were also motivated by the problem of taking limits along these families, which is tackled in \cite{dCH,dCH2} by means of uniformization techniques of complex analysis. While the results presented here are more restrictive, due to requiring a fixed polygon and leaving aside any transformations defined on the quotients, the author hopes that they constitute the first step in an alternative approach to this matter. The paper is structured as follows: section \ref{sec:back} summarizes definitions and results on metric quotients, the Gromov-Hausdorff distance between compact metric spaces, and paper-folding schemes. The proofs of Theorems \ref{thm:main} and \ref{thm:main2} are given in sections \ref{sec:gromovhausdorff} and \ref{sec:paperlimit}, the latter also containing the aforementioned examples. The author thanks Andr\'e de Carvalho for introducing him to paper-folding schemes, and acknowledges the partial financial support of CNPq, FAPESP and UFPA for his research. \section{Background} \label{sec:back} This section summarizes the basic concepts used in the main results of the paper.
Metric spaces, quotients and the Gromov-Hausdorff distance are presented as in \cite{BBI}, with minor non-essential modifications (the terminology adopted in the definition of metric quotients follows \cite{bonahon}). For paper-folding schemes, the main reference is \cite{dCH}. \subsection{Gromov-Hausdorff distance} \label{sec:back-gh} The way of defining the Gromov-Hausdorff distance that best suits the purposes of the present paper is based on the concept of correspondences between metric spaces. \begin{defi} Let $(X_1, d_1)$ and $(X_2,d_2)$ be metric spaces. A \textit{correspondence between $X_1$ and $X_2$} is a subset $R \subset X_1 \times X_2$ with the following properties: for each $x_1 \in X_1$, there exists $x_2 \in X_2$ such that $(x_1,x_2) \in R$; and for each $x_2 \in X_2$, there exists $x_1 \in X_1$ such that $(x_1,x_2) \in R$. In both cases, uniqueness is not required. The \textit{distortion of a correspondence $R$} is: \begin{equation} \operatorname{dis} R = \sup \{| d_1 (x_1, x_1') - d_2 (x_2, x_2') | \, : \, (x_1, x_2), (x_1 ', x_2 ') \in R \}. \end{equation} \end{defi} For each correspondence $R \subset X_1 \times X_2$, the projections $R \to X_1$ and $R \to X_2$ are surjective. Conversely, given a pair of surjective functions $f_i : X \to X_i$, $R = \{ (f_1 (x), f_2 (x)) \in X_1 \times X_2 \, : \, x \in X \}$ is a correspondence between $X_1$ and $X_2$. Its distortion is given by: \begin{equation} \operatorname{dis} R = \sup_{x,x' \in X} |d_1 (f_1 (x), f_1(x')) - d_2 (f_2 (x), f_2 (x')) | . \end{equation} \begin{defi} Let $X_1$ and $X_2$ be metric spaces. The \textit{Gromov-Hausdorff distance between $X_1$ and $X_2$} is defined by: \begin{equation} \label{eqn:dghdis} \dgh (X_1,X_2) = \frac{1}{2} \inf_R \operatorname{dis} R \, , \end{equation} where the infimum is over every correspondence $R \subset X_1 \times X_2$. \end{defi} \begin{defi} Let $(X,d)$ be a metric space. For each $r > 0$, an \emph{$r$-net on $X$} is a subset $A \subset X$ with the property that, given $x \in X$, there exists $a \in A$ such that $d(x,a) < r$. \end{defi} \begin{remark} \begin{enumerate} \item The Gromov-Hausdorff distance satisfies the triangle inequality. \item If $A$ is an $r$-net on a metric space $(X,d)$ and is considered as a metric space with the restriction of $d$, then $\dgh (X,A) \leq r$. \end{enumerate} \end{remark}
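For finite metric spaces, formula (\ref{eqn:dghdis}) can be exploited numerically: any explicit correspondence yields the upper bound $\dgh (X_1, X_2) \leq \frac{1}{2} \operatorname{dis} R$. The following minimal sketch (in Python; the two three-point spaces and the correspondence are hypothetical illustrations, not taken from this paper) computes such a bound.
\begin{verbatim}
# A minimal illustrative sketch (hypothetical data): the distortion of an
# explicit correspondence R between two finite metric spaces gives, by the
# formula above, the upper bound d_GH <= dis(R) / 2.
import itertools

def distortion(R, d1, d2):
    """dis R = sup |d1(x1,x1') - d2(x2,x2')| over (x1,x2), (x1',x2') in R."""
    return max(abs(d1[x1][y1] - d2[x2][y2])
               for (x1, x2), (y1, y2) in itertools.product(R, R))

# Two three-point spaces: a segment of length 1 with its midpoint, and a
# slightly perturbed copy of it.
d1 = {0: {0: 0.0, 1: 0.5, 2: 1.0},
      1: {0: 0.5, 1: 0.0, 2: 0.5},
      2: {0: 1.0, 1: 0.5, 2: 0.0}}
d2 = {0: {0: 0.0, 1: 0.6, 2: 1.0},
      1: {0: 0.6, 1: 0.0, 2: 0.4},
      2: {0: 1.0, 1: 0.4, 2: 0.0}}
R = [(0, 0), (1, 1), (2, 2)]               # match the points in order
print(round(distortion(R, d1, d2) / 2, 6))  # 0.05, an upper bound for d_GH
\end{verbatim}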
\subsection{Metric quotients and intrinsic metrics} \label{sec:metquo} For a non-empty set $X$, a function $d : X \times X \to \mathbb{R} \cup \{ \infty \}$ is a \textit{semi-metric} if, for any $x, y, z \in X$, $d(x,x) = 0$; $d(x,y) = d(y,x)$; and the triangle inequality is valid: $d(x,z) \leq d(x,y) + d (y,z)$. A \textit{metric} on $X$ is a semi-metric on $X$ such that $d(x,y) = 0$ implies that $x = y$, and, in this case, the pair $(X,d)$ is a \textit{metric space}. For a semi-metric $d$ on $X$, $d(x,y) = 0$ defines an equivalence relation $\sim$ on $X$, and $d$ induces a metric, also denoted $d$, on the quotient $X / {\sim}$. Each collection $\mathcal{G}$ of subsets of $X$ induces a reflexive and symmetric relation on $X$, defined by $x \, \mathcal{G} \, y$ if, and only if, $x = y$ or there exists $g \in \mathcal{G}$ with $x, y \in g$. \begin{defi} \label{defi:metquo} Let $(X,d)$ be a metric space and $\mathcal{G}$ be any collection of subsets of $X$. For $x, y \in X$, a \emph{$\mathcal{G}$-walk} $\mathcal{W}$ from $x$ to $y$ is a finite sequence of pairs $\{x_j, y_j \}_{j=1}^N$ of points in $X$ such that $x_1 = x$, $y_N = y$ and, for each $1 \leq j \leq N -1$, $y_j \, \mathcal{G} \, x_{j+1}$. Each $\{ x_j, y_j \}$ is a \textit{step} of $\mathcal{W}$ and each $\{ y_j \, ; \, x_{j+1}\}$ is a \textit{jump} of $\mathcal{W}$. The $\mathcal{G}$-walk $\{ x, y\}$ is called \emph{trivial}. The \textit{length} of a step $\{ x_j, y_j \}$ is equal to $d(x_j,y_j)$, and the \emph{length} of $\mathcal{W}$ is equal to $\sum_{j = 1}^N d (x_j, y_j)$. For any $x,y \in X$, \begin{equation} d^\mathcal{G} (x,y) = \inf \sum_{j = 1}^N d (x_j, y_j) \, , \end{equation} the infimum being over every $\mathcal{G}$-walk from $x$ to $y$, is the \textit{quotient semi-metric of $X$ associated to $\mathcal{G}$}. The equivalence relation on $X$ defined by $d^\mathcal{G} (x,y) = 0$ is denoted $\sim_\mathcal{G}$, and the induced metric on the quotient $X/{\sim}_\mathcal{G}$ is also denoted $d^\mathcal{G}$. The \emph{metric quotient associated to $\mathcal{G}$} is the metric space $(X/{\sim}_\mathcal{G}, d^\mathcal{G})$. The \textit{projection map $\pi: X \to X/{\sim}_\mathcal{G}$} associates to each $x$ its ${\sim}_\mathcal{G}$-equivalence class. \end{defi} Due to the trivial walk, $d^\mathcal{G} (x,y) \leq d (x,y) $ for every $x,y \in X $. So, the projection map does not increase distances: \begin{equation} d^\mathcal{G} (\pi (x), \pi(y)) \leq d (x,y) \, . \end{equation} In particular, it is a continuous map from $(X,d)$ to $(X/{\sim}_\mathcal{G}, d^\mathcal{G})$, of course also surjective. The collection $\mathcal{G}$ can be arbitrary, and is not assumed to be a decomposition of $X$, as in the definition of the quotient topology. This makes it easier to describe $\mathcal{G}$, and is in any case not essential, since decompositions of $X$ generating the same quotient semi-metric can be obtained from any collection of subsets of $X$. Of course, if $x,y \in g$ for some $g \in \mathcal{G}$, then $d^\mathcal{G} (x,y) = 0$. But, in general, the quotient topology and the metric quotient do not coincide, as there are other situations in which $d^\mathcal{G} (x,y) = 0$. For instance, if $y$ is an accumulation point of $g \in \mathcal{G}$, then $d^\mathcal{G} (x,y) = 0$ for every $x \in g$. In this case, if $y \notin g$, the quotient topology fails to be Hausdorff, and so is not even metrizable. This happens frequently among collections of subsets associated to paper-folding schemes. Furthermore, in general, the topology of the metric quotient $(X/{\sim_\mathcal{G}},d^\mathcal{G})$ may not be equivalent to the quotient topology on $X/{\sim}_\mathcal{G}$. However, if $X$ is compact, the identity map of $X/{\sim}_\mathcal{G}$ is a homeomorphism between these topologies, since it is a continuous bijection from a compact space to a Hausdorff one.
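When $X$ is finite, one can check that, since the relation induced by $\mathcal{G}$ is reflexive, $d^\mathcal{G}$ coincides with the shortest-path distance in the complete graph on $X$ whose edge $\{a,b\}$ has weight $0$ when $a \, \mathcal{G} \, b$ (a free jump) and weight $d(a,b)$ otherwise (a step). The following minimal sketch (in Python; the five-point example is hypothetical, not from this paper) computes $d^\mathcal{G}$ by the Floyd--Warshall algorithm.
\begin{verbatim}
# A minimal illustrative sketch (hypothetical example): on a finite metric
# space, d^G is the shortest-path distance for the edge weights
#   w(a,b) = 0        if a G b   (a free "jump"),
#   w(a,b) = d(a,b)   otherwise  (a "step").

def quotient_semimetric(points, d, related):
    """related(a, b) encodes the reflexive relation x G y induced by G."""
    w = {a: {b: (0.0 if related(a, b) else d(a, b)) for b in points}
         for a in points}
    for k in points:                      # Floyd-Warshall relaxation
        for a in points:
            for b in points:
                w[a][b] = min(w[a][b], w[a][k] + w[k][b])
    return w

# Five points on a line; G consists of the single set g = {0, 4}, so the
# segment closes up into a loop.
pts = [0, 1, 2, 3, 4]
d = lambda a, b: float(abs(a - b))
related = lambda a, b: a == b or {a, b} == {0, 4}
dG = quotient_semimetric(pts, d, related)
print(dG[1][4])   # 1.0: step 1 -> 0 of length 1, then jump 0 G 4
\end{verbatim}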
\begin{defi} The \textit{diameter} of a subset $D$ of a metric space $(X,d)$ is defined by $\operatorname{diam} D = \sup_{x,y \in D} d (x,y)$. \end{defi} \begin{defi} Let $\gamma : [a,b] \to X$ be a curve in a metric space $(X,d)$. The \textit{length of $\gamma$ in $X$} is: \begin{equation} |\gamma| = \sup \sum_{i=0}^{N-1} d( \gamma(t_i) , \gamma(t_{i+1}) ) \in \mathbb{R}_{\geq 0} \cup \{ \infty \}, \end{equation} where the supremum is taken over every partition $a = t_0 < \cdots < t_N = b$. For each subset $A$ of a metric space $(X,d)$, the induced \emph{intrinsic metric of $A$} is defined, for any $x, y \in A$, by: \begin{equation} d_A (x,y) = \inf |\gamma| \, , \end{equation} where the infimum is over every curve $\gamma$ contained in $A$ connecting $x$ and $y$. In particular, $d_A (x,y) = \infty$ if there is no such curve. The metric $d$ is \emph{intrinsic} if $d_X = d$ and, in this case, $(X,d)$ is a \emph{length space}. Also, $d$ is \emph{strictly intrinsic} if, for any points at finite distance from each other, the infimum of the definition is realized by some path connecting them. \end{defi} \begin{remark} \begin{enumerate} \item If a length space is compact, then its metric is strictly intrinsic. \item If $X$ is a length space, then any metric quotient of $X$ is a length space. In particular, the metric of any metric quotient of a compact length space is strictly intrinsic. \end{enumerate} \end{remark} \subsection{Paper-folding schemes} \label{sec:back-paperspaces} In what follows, ``the plane'' is a fixed model plane: namely, either a hyperbolic plane, the Euclidean plane, or a round $2$-sphere. Originally, paper-folding schemes and paper spaces were defined in \cite{dCH} only in the Euclidean setting. This is immaterial for the present purposes, and the definitions therein work in the more general context with only small adaptations. \begin{defi} An \textit{arc} in a metric space $X$ is a homeomorphic image $\gamma \subset X$ of the interval $[0,1]$. Its \textit{endpoints}, or \textit{extremities}, are the images of $0$ and $1$, and its \textit{interior} is the image $\overset{\circ}{\gamma}$ of the open interval $(0,1)$. A \textit{segment} is an arc in the plane that is a subset of a geodesic. The length of a segment $\alpha$ is denoted $|\alpha|$. A \textit{simple closed curve} in $X$ is a homeomorphic image of the unit circle. An arc or simple closed curve in the plane is \textit{polygonal} if it is the concatenation of finitely many segments. Its \textit{vertices} are the intersections of consecutive maximal segments, and the maximal segments themselves are its \textit{edges}. A \textit{(polygonal) multicurve} is a disjoint union of finitely many (polygonal) simple closed curves. A \textit{polygon} is a closed topological disk in a plane whose boundary is a polygonal simple closed curve. Its \textit{vertices} are the vertices of its boundary, and its \textit{sides} are the edges of its boundary. In the spherical case, a polygon is assumed to be properly contained in a hemisphere. A \textit{plane multipolygon} is a disjoint union of finitely many plane polygons, which may belong to distinct copies of the same plane. The intrinsic metric of a multipolygon $P$ induced by the ambient plane(s) metric is denoted $d_P$. The boundary of a plane multipolygon will be considered with its positive orientation induced by the orientation of the plane. \end{defi}
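For a polygonal arc, the supremum defining the length of a curve is attained by any partition that contains the vertices, so its length is simply the sum of the lengths of its edges. A minimal sketch (in Python, for the Euclidean plane; the coordinates form a hypothetical example, not from this paper):
\begin{verbatim}
# A minimal illustrative sketch (hypothetical coordinates): for a polygonal
# arc, the supremum defining the length is attained by any partition that
# contains the vertices, so the length is the sum of the edge lengths.
import math

def polyline_length(vertices):
    """Length of the polygonal arc through the given plane points."""
    return sum(math.dist(p, q) for p, q in zip(vertices, vertices[1:]))

gamma = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]   # an L-shaped arc
print(polyline_length(gamma))                  # 2.0
\end{verbatim}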
\begin{defi} Let $C$ be an oriented polygonal multicurve and $\tilde{\alpha}, \tilde{\alpha}' \subset C$ be segments with the same length and disjoint interiors. The associated \textit{segment pairing} $\langle \tilde{\alpha}, \tilde{\alpha} ' \rangle$ is the decomposition of $\tilde{\alpha} \cup \tilde{\alpha} '$ obtained by gluing $\tilde{\alpha}$ and $\tilde{\alpha} '$ isometrically, reversing their orientations. More precisely, for parametrizations of $\tilde{\alpha}$ and $\tilde{\alpha} '$ by arc length compatible with their orientations, each element of $\langle \tilde{\alpha}, \tilde{\alpha} ' \rangle$ is of the form $\{ \tilde{\alpha} (t) , \tilde{\alpha} ' (|\tilde{\alpha} '| - t) \}$, and these points are said to be \textit{paired}. Paired points that belong to the interior of the paired segments constitute \textit{interior pairs}. A \textit{fold} is a pairing whose segments have a common endpoint, this point being called a \textit{folding point}. The \textit{length} of a pairing is defined by $|\langle \tilde{\alpha}, \tilde{\alpha} ' \rangle| = |\tilde{\alpha} | = |\tilde{\alpha} '|$. A collection $\mathcal{P} = \{ \langle \tilde{\alpha}_i, \tilde{\alpha}_i ' \rangle\}_i$ is \textit{interior disjoint} if the interiors of all the segments $\tilde{\alpha}_i$ and $\tilde{\alpha}_j '$ are disjoint. Notice that, in this case, $\mathcal{P}$ is at most countable. It is \textit{full} if $\sum_i |\langle \alpha_i , \alpha_i ' \rangle|$ is equal to half the length of $C$. For an interior disjoint collection $\mathcal{P}$ of segment pairings, $\mathcal{P}$ will also denote the collection of subsets of $C$ whose elements are points paired by some element of $\mathcal{P}$, and the associated reflexive and symmetric relation on $C$. \end{defi} \begin{defi} A \textit{paper-folding scheme} is a pair $(P, \mathcal{P})$, where $P$ is a multipolygon with its intrinsic metric $d_P$, and $\mathcal{P} = \{ \langle \alpha_i , \alpha_i ' \rangle \}_i$ is a full interior disjoint collection of segment pairings of $\partial P$. The metric quotient $(S,d_S) = (P / {\sim_\mathcal{P}}, d_P^\mathcal{P})$ is the associated \textit{paper space}. Recall that the projection map is denoted $\pi : P \to S$. \end{defi} \begin{remark} For a paper-folding scheme $(P,\mathcal{P})$, each interior point of $P$ is $\sim_\mathcal{P}$-equivalent only to itself. In fact, the restriction of the projection map to the interior of $P$ is a homeomorphism onto its image. Also, each interior pair $\{ z, z' \}$ is precisely the $\sim_\mathcal{P}$-equivalence class of its points, while each folding point coincides with its $\sim_\mathcal{P}$-equivalence class. Every paper space is a compact length space, homeomorphic to the topological quotient $P/{\sim_\mathcal{P}}$. Its metric is strictly intrinsic and, away from a singular set, locally isometric to metric cones on circles. In particular, this singular set is empty if $\mathcal{P}$ consists of finitely many pairings and, in this case, $S$ is a conic-flat surface without singularities, also known as a polyhedral surface. For more details on this, as well as topological, measure-theoretic, and conformal developments of the subject, see \cite{dCH,dCH2}. More information on the geometry of a paper space around certain kinds of singularities will be given in further works. \end{remark} This paper gives particular attention to paper-folding schemes called plain, which will now be defined. Theorem \ref{thm:plain} below is contained in Lemmas 38 and 41, and Theorem 42, in \cite{dCH}. \begin{defi} \label{defi:plainscheme} Let $\gamma$ be a polygonal arc or polygonal simple closed curve. Two pairs of (not necessarily distinct) points $\{ x, x ' \}$ and $\{ y, y ' \}$ of $\gamma$ are \textit{unlinked} if one pair is contained in the closure of a connected component of the complement of the other. Otherwise, they are \textit{linked}. A reflexive and symmetric relation $R$ on $\gamma$ is \textit{unlinked} if any two unrelated pairs of related points are unlinked: that is, if $x \, R \, x'$, $y \, R \, y'$, and neither $x$ nor $x'$ is related to either $y$ or $y '$, then $\{ x, x ' \}$ and $\{ y, y ' \}$ are unlinked. An interior disjoint collection $\mathcal{P}$ of segment pairings on $\gamma$ is unlinked if the corresponding relation $\mathcal{P}$ is unlinked. A paper-folding scheme $(P, \mathcal{P})$ is \textit{plain} if $P$ is a single polygon and $\mathcal{P}$ is unlinked. \end{defi}
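For distinct points on a simple closed curve parametrized by $[0,1)$, the linking condition above reduces to a crossing test: $\{x, x'\}$ and $\{y, y'\}$ are linked exactly when one of $y, y'$ lies strictly inside the positively oriented arc from $x$ to $x'$ and the other lies outside. The following minimal sketch (in Python; the parametrization and the sample pairs are illustrative assumptions, not taken from this paper) implements this test, reporting pairs that share a point as unlinked, in line with the closure convention of Definition \ref{defi:plainscheme}.
\begin{verbatim}
# A minimal illustrative sketch (assumed parametrization of the curve by
# [0, 1), hypothetical sample pairs): a crossing test for the linking of
# two pairs of distinct points.  Pairs sharing a point are reported as
# unlinked, matching the closure convention in the definition above.

def inside(t, a, b):
    """Is t strictly inside the positively oriented open arc from a to b?"""
    return t != a and t != b and (t - a) % 1.0 < (b - a) % 1.0

def linked(p, q):
    (a, b), (c, d) = p, q
    if {a, b} & {c, d}:               # shared point: unlinked by convention
        return False
    return inside(c, a, b) != inside(d, a, b)

print(linked((0.0, 0.5), (0.25, 0.75)))   # True: the two pairs cross
print(linked((0.0, 0.5), (0.10, 0.40)))   # False: nested, hence unlinked
\end{verbatim}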
\begin{defi} Let $R$ be a reflexive and symmetric relation on a set $X$. A subset $U$ of $X$ is \textit{$R$-saturated} if it contains $\{ y \in X \, | \, y \, R \, x \}$ for every $x \in U$. \end{defi} \begin{defi} \label{defi:plainarc} Let $(P,\mathcal{P})$ be a paper-folding scheme. An arc $\gamma \subset \partial P$ is $\mathcal{P}$-\textit{plain} if: \begin{enumerate} \item Every pairing in $\mathcal{P}$ which intersects the interior of $\gamma$ is contained in $\gamma$ (that is, if $\langle \alpha, \alpha ' \rangle$ is a segment pairing and either $\alpha$ or $\alpha '$ intersects the interior of $\gamma$, then both $\alpha$ and $\alpha '$ are contained in $\gamma$); and \item The restriction of $\mathcal{P}$ to $\gamma$ is unlinked. \end{enumerate} A component $\gamma$ of $\partial P$ is \textit{plain} if it is $\mathcal{P}$-saturated and the restriction of $\mathcal{P}$ to $\gamma$ is unlinked. In particular, a paper-folding scheme $(P, \mathcal{P})$ is plain if $P$ consists of a single polygon whose boundary $\partial P$ is plain. \end{defi} \begin{thm}[\cite{dCH}] \label{thm:plain} Let $(P, \mathcal{P})$ be a paper-folding scheme, $\sim_\mathcal{P}$ be the equivalence relation induced by the quotient semi-metric $d^\mathcal{P}$, and $\gamma \subset \partial P$ be an arc with endpoints $a$ and $b$. \begin{enumerate} \item If $\gamma$ is $\mathcal{P}$-plain, then $a \sim_\mathcal{P} b$. \item If $\gamma$ is $\mathcal{P}$-plain, then $\gamma \setminus [a] = \gamma \setminus [b]$ is $\sim_\mathcal{P}$-saturated. \item Suppose that $(P, \mathcal{P})$ is plain and let $x,y \in \partial P$ be distinct points which are not in interior $\mathcal{P}$-pairs. Then $x \sim_\mathcal{P} y$ if, and only if, an arc in $\partial P$ with endpoints $x$ and $y$ is $\mathcal{P}$-plain. \item If $(P, \mathcal{P})$ is plain, then the associated paper space is homeomorphic to the two-dimensional sphere. \end{enumerate} \end{thm} \section{Proof of Theorem \ref{thm:main}} \label{sec:gromovhausdorff} The proof is divided into Lemmas \ref{thm:gh1} and \ref{lemma:gh2-2} on pairs of metric quotients of $X$. The first relates the Gromov-Hausdorff distance between them to the difference between their associated semi-metrics over nets on $X$, while the second estimates this difference in the complement of a set $D$ as in the statement of Theorem \ref{thm:main}. It will be convenient to reformulate the hypothesis of the theorem as Conditions \ref{thm:gh2-hyp}, relating pairs of collections of subsets of $X$. The proof of Lemma \ref{lemma:gh2-2} depends on a simple general result on metric spaces, stated and proved as Proposition \ref{prop:gh2-1}. \begin{lemma} \label{thm:gh1} Let $(X,d)$ be a metric space and $\mathcal{G}_i$ be collections of subsets of $X$, $i = 1, 2$. Consider the associated semi-metrics $d^{\mathcal{G}_i}$ on $X$ and the corresponding metric quotients $S_i$.
Given $r > 0$, if there exists an $r$-net $A$ on $X$ such that \begin{equation} \label{eqn:thmgh1-hyp} \sup_{a,a' \in A} |d^{\mathcal{G}_1} (a,a') - d^{\mathcal{G}_2} (a,a')| \leq r \, , \end{equation} then $\dgh (S_1, S_2) \leq 5 r / 2$. \end{lemma} \begin{proof} Denote by $d_i$ the metrics of the metric quotients $S_i$ and by $\pi_i : X \to S_i$ the projection maps. Since $A$ is an $r$-net on $X$, each $A_i = \pi_i (A)$ is an $r$-net on $S_i$. Therefore, $\dgh(S_i, A_i) \leq r$, where $A_i$ is considered as a metric space with the restriction of $d_i$. This, with the triangle inequality, gives: \begin{eqnarray} \dgh (S_1, S_2) & \leq & \dgh (S_1, A_1) + \dgh (A_1, A_2) + \dgh (S_2, A_2) \\ & \leq & 2 r + \dgh (A_1, A_2) \, . \label{eqn:thmgh1-1} \end{eqnarray} To bound the last term, it suffices to bound the distortion of some correspondence $R$ between $A_1$ and $A_2$, due to formula (\ref{eqn:dghdis}) for the Gromov-Hausdorff distance. So let $R$ be induced by the restrictions $\pi_1 : A \to A_1$ and $\pi_2 : A \to A_2$. Recall that, by the definition of the quotient metric, $d_i (\pi_i (x), \pi_i (y)) = d^{\mathcal{G}_i} (x,y)$ for any $x,y \in X$. Then, the hypothesis (\ref{eqn:thmgh1-hyp}) bounds the distortion of $R$: \begin{eqnarray} \dgh (A_1, A_2) & \leq & \frac{1}{2} \operatorname{dis} R \\ & = & \frac{1}{2} \sup_{a,a' \in A} | d_1 (\pi_1 (a), \pi_1 (a')) - d_2 (\pi_2 (a), \pi_2 (a')) | \\ & = & \frac{1}{2} \sup_{a,a' \in A} |d^{\mathcal{G}_1} (a,a') - d^{\mathcal{G}_2} (a,a')| \\ & \leq & \frac{r}{2} . \label{eqn:thmgh1-2} \end{eqnarray} The result follows from estimates (\ref{eqn:thmgh1-1}) and (\ref{eqn:thmgh1-2}). \end{proof} \begin{condition} \label{thm:gh2-hyp} Given collections of subsets $\mathcal{G}_i$, $i = 1, 2$, of a metric space $(X,d)$, suppose that there exist $g_i \in \mathcal{G}_i$ and $D \subset X$ satisfying: \begin{enumerate} \item \label{thm:gh2-hyp1a} For any $x, y \in X \setminus D$, $x \, \mathcal{G}_1 \, y$ if, and only if, $x \, \mathcal{G}_2 \, y$. \item \label{thm:gh2-hyp2a} If $x \in X \setminus D$ and $y \in D$ are such that $x \, \mathcal{G}_i \, y$, then $x,y \in g_i$ ($i = 1, 2$). \item \label{thm:gh2-hyp3a} For $i = 1, 2$, $g_i \cap (\partial D \setminus D) \neq \emptyset$. \end{enumerate} \end{condition} \begin{prop} \label{prop:gh2-1} Let $X$ be a metric space and $D \subset X$. For any $x \in X \setminus D$, $y \in D$ and $z \in \partial D$: \begin{equation} d(x,z) \leq d(x,y) + \operatorname{diam} D \, . \end{equation} Here, $\partial D$ and $\operatorname{diam} D$ denote, respectively, the boundary and the diameter of $D$. \end{prop} \begin{proof} Let $z_n$ be a sequence in $D$ converging to $z$. For each $n$, the triangle inequality and the definition of diameter give: \[ d(x, z_n) \leq d(x,y) + d(y, z_n) \leq d(x,y) + \operatorname{diam} D. \] Then, since $p \mapsto d (x,p)$ is a continuous map $X \to \mathbb{R}$, it follows that: \begin{eqnarray*} d(x, z) & = & d(x, \lim z_n) = \lim d(x, z_n) \\ & \leq & \lim (d(x,y) + d(y, z_n)) = d(x,y) + \lim d(y, z_n) \\ & \leq & d(x,y) + \operatorname{diam} D. \end{eqnarray*} \end{proof} \begin{lemma} \label{lemma:gh2-2} For collections of subsets $\mathcal{G}_i$, $i = 1, 2$, of a metric space $(X,d)$, if Conditions \ref{thm:gh2-hyp} are fulfilled, then the associated quotient semi-metrics $d^{\mathcal{G}_1}$ and $d^{\mathcal{G}_2}$ satisfy: \begin{equation} |d^{\mathcal{G}_1} (x,y) - d^{\mathcal{G}_2} (x,y)| \leq 2 \operatorname{diam} D \quad \forall x,y \in X \setminus D \, .
\end{equation} \end{lemma} \begin{proof} Denote $E = X \setminus D$. It will be proven that: \begin{equation} d^{\mathcal{G}_2} (x,y) \leq d^{\mathcal{G}_1} (x,y) + 2 \operatorname{diam} D \quad \forall x,y \in E \, . \end{equation} The proof that $d^{\mathcal{G}_1} (x,y) \leq d^{\mathcal{G}_2} (x,y) + 2 \operatorname{diam} D$ for any $x,y \in E$ is analogous, implying the result. Let $x, y \in E$. Due to Condition \ref{thm:gh2-hyp}-\ref{thm:gh2-hyp1a}, every $\mathcal{G}_1$-walk contained in $E$ is a $\mathcal{G}_2$-walk (of course with the same length). Then, it suffices to show that to each $\mathcal{G}_1$-walk $\mathcal{W} = \{ x_j, y_j \}_{j=1}^N$ from $x$ to $y$ corresponds a $\mathcal{G}_1$-walk contained in $E$ whose length exceeds that of $\mathcal{W}$ by at most $2 \operatorname{diam} D$. Suppose that $\mathcal{W}$ intersects $D$, and let $1 \leq j_0 \leq j_1 \leq N$ be the first and last indices at which this happens: $\{ x_j, y_j \} \cap D \neq \emptyset$ for $j = j_0, j_1$; and $\{ x_j, y_j \} \cap D = \emptyset$ for every $j < j_0$ and $j > j_1$. The subwalk $\{x_{j_0}, y_{j_0} \, ; \ldots ; \, x_{j_1}, y_{j_1} \}$ will be modified to obtain the result. The cases in which the walk enters and leaves $D$ through steps or jumps are treated separately. \textit{Case 1. $\mathcal{W}$ enters and leaves $D$ through jumps:} $x_{j_0}, y_{j_1} \in D$. In this case, since $y_{j_0 - 1}, x_{j_1 + 1} \in E$, Condition \ref{thm:gh2-hyp}-\ref{thm:gh2-hyp2a} implies that $y_{j_0 - 1}, x_{j_1 + 1} \in g_1$. Then, $\{x_{j_0}, y_{j_0} \, ; \ldots ; \, x_{j_1}, y_{j_1} \}$ can be simply removed from the walk, and the result is a $\mathcal{G}_1$-walk contained in $E$ with smaller length. \textit{Case 2. $\mathcal{W}$ enters and leaves $D$ through steps:} $x_{j_0}, y_{j_1} \in E$ and, as a consequence, $y_{j_0}, x_{j_1} \in D$. In particular, $j_0 \neq j_1$. Condition \ref{thm:gh2-hyp}-\ref{thm:gh2-hyp3a} says that there exist $y_{j_0} ', x_{j_1} ' \in g_1 \cap (\partial D \setminus D)$, and Proposition \ref{prop:gh2-1} guarantees that: \begin{equation} d(x_{j_0}, y_{j_0} ') + d(x_{j_1} ', y_{j_1}) \leq d(x_{j_0}, y_{j_0}) + d(x_{j_1}, y_{j_1}) + 2 \operatorname{diam} D. \end{equation} It follows that replacing $\{x_{j_0}, y_{j_0} \, ; \ldots ; \, x_{j_1}, y_{j_1} \}$ by $\{ x_{j_0}, y_{j_0} ' \, ; \, x_{j_1} ', y_{j_1} \}$ produces the wanted $\mathcal{G}_1$-walk. \textit{Case 3. $\mathcal{W}$ enters $D$ through a step and leaves it through a jump:} $x_{j_0} \in E$, $y_{j_0} \in D$ and $y_{j_1} \in D$. In particular, $j_1 < N$, so $x_{j_1 + 1}$ is defined, $x_{j_1 + 1} \in E$, and Condition \ref{thm:gh2-hyp}-\ref{thm:gh2-hyp2a} implies that $x_{j_1 + 1} \in g_1$. As in the previous case, Condition \ref{thm:gh2-hyp}-\ref{thm:gh2-hyp3a} gives $y_{j_0} ' \in g_1 \cap (\partial D \setminus D)$, and again Proposition \ref{prop:gh2-1} shows that replacing $\{x_{j_0}, y_{j_0} \, ; \ldots ; \, x_{j_1}, y_{j_1} \}$ by $\{ x_{j_0}, y_{j_0} ' \}$ gives the desired $\mathcal{G}_1$-walk, as: \begin{equation} d(x_{j_0}, y_{j_0} ') \leq d(x_{j_0}, y_{j_0}) + \operatorname{diam} D. \end{equation} By symmetry, the case in which the walk enters through a jump and leaves through a step is analogous to Case 3. \end{proof}
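Before turning to the proof of Theorem \ref{thm:main}, the bound of Lemma \ref{lemma:gh2-2} can be sanity-checked numerically on a toy model. In the sketch below (in Python; the example is hypothetical and the discretization stands in for the continuum sets of the lemma, not a construction from this paper), $X$ is a discretization of $[0,1]$, $\mathcal{G}_1 = \{ g^1 \}$ collapses $g^1 = [0.4, 0.6]$ to a point, $\mathcal{G}_2 = \{ \{0.6\} \}$ glues nothing (so that $d^{\mathcal{G}_2} = d$), and $D = [0.4, 0.6)$, so that $\partial D \setminus D = \{ 0.6 \}$ meets both $g^1$ and $g_2 = \{0.6\}$ and Conditions \ref{thm:gh2-hyp} hold in the continuum; the observed gap $0.2$ respects the bound $2 \operatorname{diam} D = 0.4$.
\begin{verbatim}
# A minimal numerical sanity check (hypothetical example) of the bound in
# the preceding lemma: G_1 collapses g^1 = [0.4, 0.6] to a point, G_2 glues
# nothing, D = [0.4, 0.6), so diam D = 0.2 and the boundary point 0.6 lies
# in both g^1 and g_2 = {0.6}.  The gap |d^{G_1} - d| outside D must then
# be at most 2 diam D = 0.4.

def floyd_warshall(points, weight):
    w = {a: {b: weight(a, b) for b in points} for a in points}
    for k in points:
        for a in points:
            for b in points:
                w[a][b] = min(w[a][b], w[a][k] + w[k][b])
    return w

pts = [round(0.05 * i, 2) for i in range(21)]
d = lambda a, b: abs(a - b)
in_g1 = lambda t: 0.4 <= t <= 0.6
w1 = lambda a, b: 0.0 if a == b or (in_g1(a) and in_g1(b)) else d(a, b)
dG1 = floyd_warshall(pts, w1)

outside = [t for t in pts if not 0.4 <= t < 0.6]
gap = max(abs(dG1[a][b] - d(a, b)) for a in outside for b in outside)
print(round(gap, 6), "<=", 0.4)   # 0.2 <= 0.4, consistent with the lemma
\end{verbatim}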
\begin{proof}[Proof of Theorem \ref{thm:main}] For each $n \in \mathbb{N}$, the hypothesis says precisely that $g_n \in \mathcal{G}_n$, $g^n \in \mathcal{G}$ and $D_n \subset X$ fulfill Conditions \ref{thm:gh2-hyp}. Denote $E_n = X \setminus D_n$, and fix any $\delta_0 > 0$. Then, Lemma \ref{lemma:gh2-2} gives: \begin{equation} \label{eqn:mainthm1} |d^{\mathcal{G}_n} (x,y) - d^\mathcal{G} (x,y)| \leq 2 \operatorname{diam} D_n \leq 2 (1+\delta_0) \operatorname{diam} D_n \quad \forall x, y \in E_n \, . \end{equation} It is clear that, for each $n \in \mathbb{N}$, $x \in X$ and $r > \operatorname{diam} D_n$, $B(x,r) \cap E_n \neq \emptyset$. In particular, each $E_n$ is a $[2(1+\delta_0)\operatorname{diam} D_n]$-net on $X$. Thus, by (\ref{eqn:mainthm1}), the hypothesis of Lemma \ref{thm:gh1} holds for $r = 2(1+\delta_0) \operatorname{diam} D_n$, and it follows that: \begin{equation} \dgh (S_n,S) \leq 5(1 + \delta_0)\operatorname{diam} D_n . \end{equation} Finally, given $\varepsilon > 0$, let $n_0 \in \mathbb{N}$ be such that $5 (1+\delta_0) \operatorname{diam} D_n < \varepsilon$ for every $n \geq n_0$, so $\dgh (S_n,S) < \varepsilon$. \end{proof} \section{Application to paper-folding schemes} \label{sec:paperlimit} The conditions of Theorem \ref{thm:main} can be verified in the context of plain paper-folding schemes, yielding: \begin{thm} \label{thm:paperlimit} Given a plain paper-folding scheme $(P, \mathcal{P})$ and a sequence $\gamma_n \subset \partial P$, $n \in \mathbb{N}$, of $\mathcal{P}$-plain arcs, let $(P,\mathcal{P}_n)$ be a sequence of paper-folding schemes such that $\mathcal{P}_n$ coincides with $\mathcal{P}$ on $\partial P \setminus \overset{\circ}{\gamma_n}$ and $\gamma_n$ is $\mathcal{P}_n$-plain for every $n$. Let $S$ and $S_n$ be the paper spaces associated to $(P, \mathcal{P})$ and $(P,\mathcal{P}_n)$, respectively. If $|\gamma_n| \to 0$ as $n \to \infty$, then $\dgh (S, S_n) \to 0$. \end{thm} \begin{proof} Let $\sim_{\mathcal{P}}$ and $\sim_{\mathcal{P}_n}$ be the equivalence relations associated to the quotient semi-metrics $d^\mathcal{P}$ and $d^{\mathcal{P}_n}$, respectively, and take as $\mathcal{G}$ and $\mathcal{G}_n$ the collections of $\sim_\mathcal{P}$- and $\sim_{\mathcal{P}_n}$-equivalence classes, respectively. Notice that $d^\mathcal{P} = d^{\mathcal{G}}$ and $d^{\mathcal{P}_n} = d^{\mathcal{G}_n}$, so the paper spaces $S$ and $S_n$ are precisely the metric quotients associated to $\mathcal{G}$ and $\mathcal{G}_n$. Let $D_n = \overset{\circ}{\gamma_n}$, and denote by $z_n$ and $w_n$, $z_n \neq w_n$, the endpoints of $\gamma_n$, whose equivalence classes are denoted $[z_n]_{\sim_\mathcal{P}} = [w_n]_{\sim_\mathcal{P}}$ and $[z_n]_{\sim_{\mathcal{P}_n}} = [w_n]_{\sim_{\mathcal{P}_n}}$. These equalities are guaranteed by the hypothesis that each $\gamma_n$ is both $\mathcal{P}$- and $\mathcal{P}_n$-plain, due to Theorem \ref{thm:plain}. Finally, let $g^n = [z_n]_{\sim_\mathcal{P}} \in \mathcal{G}$ and $g_n = [z_n]_{\sim_{\mathcal{P}_n}} \in \mathcal{G}_n$. The conditions of Theorem \ref{thm:main} will now be verified. \textit{Condition \ref{thm:gh2-hyp1}.} Let $x,y \in P \setminus D_n$ be such that $x \, \mathcal{G} \, y$. If $x = y$, then it is obvious that $x \, \mathcal{G}_n \, y$. For instance, this is the case when one of these points is interior to $P$. So assume that $x \neq y$ and $x, y \in \partial P$. In case $\{ x, y \}$ is an interior $\mathcal{P}$-pair, it is also an interior $\mathcal{P}_n$-pair, and $x \, \mathcal{G}_n \, y$, since $\mathcal{P}_n$ coincides with $\mathcal{P}$ on $\partial P \setminus D_n$. In case $\{ x, y \}$ is not an interior $\mathcal{P}$-pair, consider the arc $[x,y]$ contained in $\partial P \setminus D_n$ having these points as endpoints.
Since $(P,\mathcal{P})$ is plain, Theorem \ref{thm:plain} applies, and $[x,y]$ is a $\mathcal{P}$-plain arc. It is also a $\mathcal{P}_n$-plain arc, since $\mathcal{P}_n$ coincides with $\mathcal{P}$ on $[x,y]$. Then, $x \, \mathcal{G}_n \, y$ follows from Theorem \ref{thm:plain}. The proof that $x \, \mathcal{G}_n \, y$ implies $x \, \mathcal{G} \, y$ for $x,y \in P \setminus D_n$ is completely analogous, the last step being valid since $(P,\mathcal{P}_n)$ is a plain paper-folding scheme, as is easily verified. \textit{Condition \ref{thm:gh2-hyp2}.} Theorem \ref{thm:plain} says that, since each $\gamma_n$ is both $\mathcal{P}$- and $\mathcal{P}_n$-plain, the sets $\gamma_n \setminus g^n$ and $\gamma_n \setminus g_n$ are, respectively, $\sim_{\mathcal{P}}$- and $\sim_{\mathcal{P}_n}$-saturated. By definition, this means that $[y]_{\sim_{\mathcal{P}}} \subset \gamma_n \setminus g^n$ (resp. $[y]_{\sim_{\mathcal{P}_n} } \subset \gamma_n \setminus g_n$) for every $y \in \gamma_n \setminus g^n$ (resp. $y \in \gamma_n \setminus g_n$). Of course, for each $y \in D_n$, either $y \in g^n$ (resp. $y \in g_n$), or $y \in \gamma_n \setminus g^n$ (resp. $y \in \gamma_n \setminus g_n$). Therefore, if $x \, \mathcal{G} \, y$ (resp. $x \, \mathcal{G}_n \, y$) with $x \in P \setminus D_n$ and $y \in D_n$, then $x,y \in g^n$ (resp. $x, y \in g_n$). \textit{Condition \ref{thm:gh2-hyp3}.} As $\partial D_n = \gamma_n$, $\partial D_n \setminus D_n = \{ z_n, w_n \}$, so it is clear that $g^n \cap (\partial D_n \setminus D_n) \neq \emptyset$ and $g_n \cap (\partial D_n \setminus D_n) \neq \emptyset$. The proof is concluded by noticing that $\lim_{n \to \infty}\operatorname{diam} D_n = 0$ is a consequence of $\lim_{n \to \infty} |\gamma_n| = 0$. This is immediate if $\gamma_n$ does not contain any point whose angle internal to $P$ is smaller than $\pi$, as in this case $\operatorname{diam} D_n = |\gamma_n|$; and follows from the Law of Cosines otherwise. \end{proof} The following examples illustrate how to use Theorem \ref{thm:paperlimit}. They include every identification pattern that appears in \cite{unimodal}. Recall that, in the associated figures, dotted lines connect points paired by the paper-folding scheme. \begin{figure} \centering \includegraphics{PlainPatterns} \caption{Identification patterns (\ref{eqn:canon2}), (\ref{eqn:canon1}) and (\ref{eqn:singular1}) on the arcs $\gamma_n$ in Example \ref{exem:general}, and a couple of folds used for approximating them. The point $p$ corresponds to $\blacksquare$.} \label{fig:general} \end{figure} \begin{exem}[Figure \ref{fig:general}] \label{exem:general} Suppose that a plain paper-folding scheme $(P,\mathcal{P})$ is such that a point $p \in \partial P$ is the intersection of a nested sequence $\gamma_n$, $n \in \mathbb{N}^*$, of arcs contained in $\partial P$, on which $\mathcal{P}$ has one of the following forms: \begin{equation} \label{eqn:canon2} p \cdots \alpha_{n+1} ' \, \alpha_{n+1} \, \alpha_n ' \, \alpha_n \, . \end{equation} \begin{equation} \label{eqn:canon1} \alpha_{-n} \, \alpha_{-n} ' \, \alpha_{-n-1} \, \alpha_{-n-1} ' \cdots p \cdots \alpha_{n+1} ' \, \alpha_{n+1} \, \alpha_n ' \, \alpha_n \, . \end{equation} \begin{equation} \label{eqn:singular1} \beta_n \, \alpha_{n+1} \, \alpha_{n+1} ' \, \beta_{n+1} \, \alpha_{n+2} \, \alpha_{n+2} ' \cdots p \cdots \, \beta_{n+1} ' \, \beta_n ' .
\end{equation} In case $p$ is a vertex of $P$, assume that every $\gamma_n$ is contained in the pair of sides of $P$ meeting at $p$ (for pattern (\ref{eqn:canon2}) it ends up contained in just one of them). To define $\mathcal{P}_n$ as in Theorem \ref{thm:paperlimit}, the simplest choices of plain patterns to place on $\gamma_n$ are a couple of folds. So, for each $n \in \mathbb{N}^*$, let the restriction of $\mathcal{P}_n$ to $\gamma_n$ be of the form $\phi_n \, \phi_n ' \, \psi_n ' \, \psi_n$. When $\gamma_n$ is contained in one side of $P$, a single fold can be used as well. Theorem \ref{thm:paperlimit} readily applies, and the paper space $S$ associated to $(P,\mathcal{P})$ is the Gromov-Hausdorff limit of the sequence of paper spaces $S_n$ associated to $(P,\mathcal{P}_n)$. Notice that the lengths of the pairings are not important here. Recall that, due to Theorem \ref{thm:plain}, all these quotients are homeomorphic to $2$-spheres. \end{exem} \begin{figure} \centering \includegraphics[width=\linewidth]{canon1convgh} \caption{Sequences of conic-flat spheres without singularities converging to conic-flat spheres with precisely one singularity (Example \ref{exem:canon1}).} \label{fig:canon1} \end{figure} \begin{exem}[Figure \ref{fig:canon1}] \label{exem:canon1} As a particular case of Example \ref{exem:general}, Figure \ref{fig:canon1} shows a plain paper-folding scheme on a square that is of the form (\ref{eqn:canon1}) around the lower left-hand vertex $p$, and three elements of the sequence constructed above approximating it. When, as indicated, $P$ is a Euclidean square and the lengths of the folds producing the limiting paper space are chosen by halving, $S$ is the domain of the self-homeomorphism of the sphere known as the tight horseshoe \cite{dCH}. Independently of these choices, Figure \ref{fig:canon1} gives concrete examples of sequences of conic-flat spheres without singularities converging to a conic-flat sphere with precisely one singular point, the projection $\hat{p}$ of $p$. It possesses the following two singular properties. First, $\hat{p}$ is the accumulation of the projections of the folding points marked as $\bullet$ in Figure \ref{fig:canon1}. These are conical points with total angle in $S$ equal to $\pi$ and, thus, with positive curvature. Second, $\hat{p}$ is \textit{the vertex of an $\infty$-od in $S$}: there exists a convex subspace $K$ of $S$, whose points are flat except for $\hat{p}$, isometric to a countably infinite collection of pairwise disjoint half-closed intervals glued together at their endpoints and equipped with the resulting intrinsic metric, the common glued point corresponding to $\hat{p}$ (here, convex means that every shortest path connecting points in $K$ is contained in $K$). This is related to $\hat{p}$ having infinite total angle in $S$. Due to these two singular properties, arbitrarily small neighborhoods of $\hat{p}$ contain geodesic triangles witnessing that $S$ violates the definitions of curvature bounded above and below, in the sense of comparison geometry \cite{BBI}. Analogous properties hold for the projection of the point $p$ in pattern (\ref{eqn:canon2}). \end{exem} \begin{remark} The projection of the point $p$ in pattern (\ref{eqn:singular1}) is, in $S$, an accumulation of conical points with total angles equal to $\pi$ and $3 \pi$, but it is not the vertex of an $\infty$-od in $S$.
Singular properties such as this one, and the ones mentioned above, will be rigorously pursued in further works, as mentioned in section \ref{sec:back-paperspaces}. \end{remark} \begin{figure} \centering \includegraphics{PlainPatternsRepeat} \caption{Applying the construction in Example \ref{exem:general} repeatedly (Example \ref{exem:repeat}). Loose lines represent generic portions of $\partial P$.} \label{fig:repeat} \end{figure} \begin{exem}[Figure \ref{fig:repeat}] \label{exem:repeat} As an example of how a given plain paper-folding scheme $(P, \mathcal{P})$ can be successively ``simplified'' in order to approximate its quotient $S$ by a sequence of spheres with fewer or less complicated singularities, suppose that there are pairwise distinct points $p_1, \ldots, p_m \in \partial P$ around which $\mathcal{P}$ has the forms of Example \ref{exem:general}. Given $\varepsilon > 0$, let $n_1 \in \mathbb{N}$ be such that $\dgh (S, S_{1,n}) < \varepsilon/m$ for every $n \geq n_1$, where $S_{1,n}$ is the paper space associated to a scheme $\mathcal{P}_{1,n}$ constructed as in Example \ref{exem:general}. Now, approximate $S_{1,n_1}$ by modifying $\mathcal{P}_{1,n_1}$ around $p_2$, obtaining a sequence $\mathcal{P}_{2,n}$ and $n_2 \in \mathbb{N}$ whose quotients $S_{2,n}$ satisfy $\dgh (S_{1,n_1},S_{2,n}) < \varepsilon/m$ for every $n \geq n_2$. Repeating this for $S_{2,n_2}$, and so on, a sequence $S_{m,n}$ converging to $S_{m-1,n_{m-1}}$ as $n \to \infty$ is obtained, associated to patterns $\mathcal{P}_{m,n}$ that coincide with the original $\mathcal{P}$ except at small arcs containing $p_1, \ldots, p_m$, where they contain only folds. Due to the triangle inequality for $\dgh$, $\dgh(S,S_{m,n}) < \varepsilon$ for every $n \geq n_m$. \end{exem}
{ "timestamp": "2020-07-17T02:20:32", "yymm": "2007", "arxiv_id": "2007.08420", "language": "en", "url": "https://arxiv.org/abs/2007.08420" }